Optimised video scaling - VFE



Is it possible to use the VFE as a hardware-assisted image downscaler? i.e. send raw image frames to it from RAM, not frames from the sensor frontend.

This is to downscale video received on the Ethernet rather than from a local image sensor.

Many thanks.


VFE only takes input from the CSID (CSI decoder) via the ISPIF. In theory you could feed the CPP (camera post processor) with input/output buffers for scaling, but that is not something which is exposed/supported AFAIK.

What is the format of your video stream ?
Did you think about using GPU (opengl) for scaling ?


We’d like to scale raw image frames straight out of the Qualcomm OMX H264 decoder (we’re basically downscaling an H264 stream: OMX decoder -> Downscale -> OMX encoder).

We have thought about OpenGL, and it seems that OpenGL support is available in Linux. I’m not yet sure whether the GPU memory loads/stores would be a bottleneck.

The Hexagon DSP or OpenCL would give us a lot of control over the scaling algorithm. Do you know if Hexagon DSP is supported on the 820 in Linux? And OpenCL?
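To illustrate the kind of per-pixel kernel we’d want control over (this is just a plain-Python sketch of an area-averaging downscale, not any Qualcomm API), a 2x downscale reduces to averaging each 2x2 block:

```python
# Hypothetical illustration only: a 2x2 box-filter downscale on a
# grayscale frame, the sort of per-pixel kernel one would offload
# to a GPU or DSP. Frame is a list of rows with even dimensions.

def downscale_2x(frame):
    """Average each 2x2 block of an even-sized grayscale frame."""
    h, w = len(frame), len(frame[0])
    out = []
    for y in range(0, h, 2):
        row = []
        for x in range(0, w, 2):
            # Sum the 2x2 neighbourhood and take the integer mean.
            s = (frame[y][x] + frame[y][x + 1] +
                 frame[y + 1][x] + frame[y + 1][x + 1])
            row.append(s // 4)
        out.append(row)
    return out

frame = [[0, 4, 8, 12],
         [0, 4, 8, 12],
         [16, 20, 24, 28],
         [16, 20, 24, 28]]
print(downscale_2x(frame))  # → [[2, 10], [18, 26]]
```

In a real pipeline this would of course run per-plane on YUV frames rather than on grayscale lists, but it shows how simple the inner loop is.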

Many thanks for the helpful reply.


Using the GPU shouldn’t create a bottleneck on embedded platforms, since textures live in the same DDR as everything else. The freedreno driver used to drive the GPU does not support OpenCL, but it does support OpenGL compute shaders, which might be sufficient if you want to implement sophisticated scaling algorithms on the GPU.
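For arbitrary (non-integer) scale factors, each compute-shader invocation would typically produce one output pixel by bilinearly sampling the source. Here is a hedged Python sketch of that per-pixel computation (the function names are mine, and a real shader would work on GPU textures, not Python lists):

```python
# Hypothetical sketch: what each compute-shader invocation computes
# for one output pixel when downscaling by an arbitrary factor.

def bilinear_sample(frame, fx, fy):
    """Bilinearly interpolate a grayscale frame at float coords (fx, fy)."""
    h, w = len(frame), len(frame[0])
    x0, y0 = int(fx), int(fy)
    x1, y1 = min(x0 + 1, w - 1), min(y0 + 1, h - 1)
    dx, dy = fx - x0, fy - y0
    # Blend horizontally on the two rows, then vertically between them.
    top = frame[y0][x0] * (1 - dx) + frame[y0][x1] * dx
    bot = frame[y1][x0] * (1 - dx) + frame[y1][x1] * dx
    return top * (1 - dy) + bot * dy

def downscale(frame, out_w, out_h):
    """Map every output pixel back to a source coordinate and sample it."""
    in_h, in_w = len(frame), len(frame[0])
    sx, sy = in_w / out_w, in_h / out_h
    return [[bilinear_sample(frame, x * sx, y * sy)
             for x in range(out_w)] for y in range(out_h)]
```

Note that plain bilinear sampling starts to alias for downscale factors beyond about 2x; a multi-tap or mipmapped filter is the usual remedy, which is exactly where compute shaders give more freedom than the fixed texture filtering path.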

I haven’t got confirmation about the Hexagon DSP (I’ll let you know if I do), but I would be surprised if user programming were supported at this point.


Thanks a lot, very helpful.

Please do keep us posted about whether Hexagon DSP is already supported on Linux - this is something we would be very keen to see.