I am assuming you are referring to the “fully optimized video capture” use case, using a CSI camera and the hw encoder available on the SoC.
There is no obstacle to video capture.
The video capture use case requires several pieces:
1. support for the CSI camera
2. support for color conversion in the camera subsystem (most cameras provide YUV data in a format different from what the hw encoder expects, e.g. NV12)
3. potentially, scaling/cropping of the camera picture
4. hw video encoder, using the dedicated IP
5. the ability to share video buffers between the various subsystems without any CPU copy
As of today, #1, #2 and #4 are available: the video encoder driver is available along with the video decoder, both using the in-kernel V4L2 M2M API. #5 should also be supported (not 100% sure).
What the previous sentence meant is that we do not yet have full integration with GStreamer. But we do have a standalone test app that does video encode, and you should be able to use the camera driver and the video encoder simultaneously; we just haven’t finished the full integration.
For #3, the work is starting, but it is likely not essential right now.
Regarding dual camera: the limitation has been fixed, and you can now use both sensors simultaneously. It was a limitation of the 16.06 release; we forgot to update this item in the 16.09 release notes.