Why is 'videoconvert' required when scaling down the image from the camera?

On my Dragonboard 410c I can successfully capture a 1296x972 image (scaled down from 2592x1944) from an OV5645 camera module with an AISTAR MIPI adaptor. Here are the commands I use:

sudo media-ctl -d /dev/media0 -l '"msm_csiphy0":1->"msm_csid0":0[1],"msm_csid0":1->"msm_ispif0":0[1],"msm_ispif0":1->"msm_vfe0_pix":0[1]'

sudo media-ctl -d /dev/media0 -V '"ov5645 4-003b":0[fmt:UYVY8_2X8/2592x1944 field:none],"msm_csiphy0":0[fmt:UYVY8_2X8/2592x1944 field:none],"msm_csid0":0[fmt:UYVY8_2X8/2592x1944 field:none],"msm_ispif0":0[fmt:UYVY8_2X8/2592x1944 field:none],"msm_vfe0_pix":0[fmt:UYVY8_2X8/2592x1944 field:none compose:(0,0)/1296x972],"msm_vfe0_pix":1[fmt:UYVY8_2X8/1296x972 field:none]'

gst-launch-1.0 v4l2src device=/dev/video3 num-buffers=1 ! videoconvert ! 'video/x-raw,format=UYVY,width=1296,height=972' ! jpegenc ! filesink location=image_1296x972.jpg

I'm wondering why videoconvert is required here. Without it, gst-launch-1.0 fails with the following message:

ERROR: from element /GstPipeline:pipeline0/GstV4l2Src:v4l2src0: Internal data stream error.
Additional debug info:
gstbasesrc.c(3055): gst_base_src_loop (): /GstPipeline:pipeline0/GstV4l2Src:v4l2src
streaming stopped, reason not-negotiated (-4)
ERROR: pipeline doesn't want to preroll.

Can anyone advise?

On the 410c, the only output formats supported by the pix interface are NV12/NV21/NV16/NV61, so you should try something like:

media-ctl -d /dev/media0 -V '"msm_vfe0_pix":0[fmt:UYVY8_2X8/2592x1944 field:none compose:(0,0)/1296x972]'
media-ctl -d /dev/media0 -V '"msm_vfe0_pix":1[fmt:UYVY8_1_5X8/1296x972 field:none]'


gst-launch-1.0 v4l2src device=/dev/video3 num-buffers=1 ! video/x-raw,format=NV12,width=1296,height=972 !  jpegenc ! filesink location=image_1296x972.jpg

Thanks for your prompt reply and explanation. Your suggestion does work!

I also have a few questions, listed below:
If I simply execute the line you suggested:

gst-launch-1.0 v4l2src device=/dev/video3 ! video/x-raw,format=NV12,width=1296,height=972 ! jpegenc ! filesink location=image_1296x972.jpg

Then it hangs here:

New clock: GstSystemClock

until I press Ctrl-C.
So I added num-buffers=1 to that line:

gst-launch-1.0 v4l2src device=/dev/video3 num-buffers=1 ! video/x-raw,format=NV12,width=1296,height=972 ! jpegenc ! filesink location=image_1296x972.jpg

Then it works normally.
Any idea why adding num-buffers=1 is important?

To try NV16 instead of NV12, I modified your lines as follows:

sudo media-ctl -d /dev/media0 -V '"msm_vfe0_pix":0[fmt:UYVY8_2X8/2592x1944 field:none compose:(0,0)/1296x972]'
sudo media-ctl -d /dev/media0 -V '"msm_vfe0_pix":1[fmt:UYVY8_2X8/1296x972 field:none]'

gst-launch-1.0 v4l2src device=/dev/video3 ! 'video/x-raw,format=NV16,width=1296,height=972' ! jpegenc ! filesink location=image_1296x972_NV16.jpg

However, gst-launch-1.0 failed to run and printed the following message:

WARNING: erroneous pipeline: could not link v4l2src0 to jpegenc0, jpegenc0 can't handle caps video/x-raw, format=(string)NV16, width=(int)1296, height=(int)972

How should I correct my lines above in order to use NV16?

Yes, my bad. The num-buffers property specifies the number of buffers to grab from the video device (the camera) and inject into the GStreamer pipeline. If you don't specify it, the stream never stops and buffers are continuously converted to JPEG. For a snapshot like this you only want to convert one buffer, but for video preview or encoding you want a continuous stream.
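To illustrate the difference, here is a sketch based on the NV12 pipeline from this thread (same device path and caps; the output filenames are just examples, and these commands obviously need the actual camera hardware to run):

```shell
# Snapshot: grab exactly one buffer, then the pipeline sends EOS and exits
gst-launch-1.0 v4l2src device=/dev/video3 num-buffers=1 ! \
  video/x-raw,format=NV12,width=1296,height=972 ! \
  jpegenc ! filesink location=snapshot.jpg

# Continuous stream: runs until interrupted; the -e flag makes Ctrl-C
# send EOS through the pipeline so the sink finishes writing cleanly
gst-launch-1.0 -e v4l2src device=/dev/video3 ! \
  video/x-raw,format=NV12,width=1296,height=972 ! \
  jpegenc ! multifilesink location=frame-%05d.jpg
```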

The jpegenc element does not support NV16 as input (you can check that with 'gst-inspect-1.0 jpegenc').
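For example, you can trim the gst-inspect output down to the sink pad template, which lists the raw formats jpegenc accepts (the -A line count here is arbitrary, just enough to show the caps):

```shell
# Show the SINK pad template of jpegenc; NV16 is absent from the
# list of supported video/x-raw formats
gst-inspect-1.0 jpegenc | grep -A 20 "SINK template"
```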

So you can insert a 'videoconvert' element between the capture and jpegenc, which will convert NV16 into a format compatible with jpegenc's input, but it is better to get NV12 directly from the hardware.
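Concretely, the failing NV16 pipeline above only needs videoconvert inserted before jpegenc, mirroring the UYVY pipeline at the start of the thread (again, this requires the actual camera hardware):

```shell
# Keep the NV16 capture; videoconvert does a CPU-side conversion to a
# format jpegenc can accept, which is why the hardware NV12 path is cheaper
gst-launch-1.0 v4l2src device=/dev/video3 num-buffers=1 ! \
  video/x-raw,format=NV16,width=1296,height=972 ! \
  videoconvert ! jpegenc ! filesink location=image_1296x972_NV16.jpg
```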

OK, I understand.
Thanks again for your replies and great help.
I wish you a good day!