I have a Dragonboard 410c and I created a pipeline to save H.264-encoded video from a USB camera (uvcvideo) to the SD card. So far, I have managed to use v4l2h264enc for hardware encoding and everything works like a charm.
The main issue with this pipeline is that my camera (like many USB cameras) can only output images in two formats:
- YUY2 [raw]
- MJPEG [compressed]
The encoder only accepts NV12 as input. So, to build the pipeline, I need a video format conversion, which sadly consumes CPU (and increases the overall power consumption).
Right now I need to find a way to do this format conversion by taking advantage of the DB410c's hardware blocks. I think I have four options (each of which may lead to lower power consumption):
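For reference, this is roughly what my current pipeline looks like, with videoconvert doing the YUY2 to NV12 conversion on the CPU. The device path, resolution, framerate and output location are just examples from my setup, not anything special:

```shell
# Current (working) pipeline sketch: CPU-based videoconvert handles
# YUY2 -> NV12 before the hardware encoder. Device path and caps are
# examples; adjust for your camera.
gst-launch-1.0 -e \
  v4l2src device=/dev/video0 ! \
  "video/x-raw,format=YUY2,width=1280,height=720,framerate=30/1" ! \
  videoconvert ! \
  "video/x-raw,format=NV12" ! \
  v4l2h264enc ! \
  h264parse ! \
  mp4mux ! \
  filesink location=/media/sdcard/capture.mp4
```

It records fine; the videoconvert element is just what I want to get off the CPU.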
- glcolorconvert: GStreamer plugin that uses the Adreno GPU via OpenGL to offload the conversion
- libyuv: can do this conversion in two steps, and uses NEON SIMD to optimize it
- mediactl: configure a pipeline that takes advantage of the Camera Subsystem hardware (CAMSS), which has a built-in converter
- Hexagon SDK: create a GStreamer plugin that uses the DSP to offload the conversion
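For the first option, I imagine the pipeline would look roughly like this. This is an untested sketch: glupload/gldownload move buffers in and out of GL memory, and whether that round trip actually saves power versus plain videoconvert is exactly what I would want to measure:

```shell
# Sketch of option 1: offload YUY2 -> NV12 to the Adreno GPU via the
# GL elements. The upload/download steps have their own cost, so this
# needs to be benchmarked, not assumed to be a win.
gst-launch-1.0 -e \
  v4l2src device=/dev/video0 ! \
  "video/x-raw,format=YUY2,width=1280,height=720,framerate=30/1" ! \
  glupload ! \
  glcolorconvert ! \
  gldownload ! \
  "video/x-raw,format=NV12" ! \
  v4l2h264enc ! \
  h264parse ! \
  mp4mux ! \
  filesink location=/media/sdcard/capture.mp4
```

If zero-copy (dmabuf) between the GL elements and the encoder is possible on this platform, that would presumably matter a lot for the power numbers.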
In my development workflow I use YP warrior with linux-linaro-qcomlt 4.14 (I also tried YP zeus + Linux 5.4, but had some issues with GStreamer and gave up quickly; I will return to the latest version as soon as I solve my pipeline issues).
Any advice would be greatly appreciated.