Understanding the video conversion options

I’m using a design comparable to the DragonBoard410c that features two cameras, an OV13855 on CSI0 and an OV5640 on CSI1. I’m basing my work on the 4.9 release, with a Yocto-built userspace image.

I’ve backported the OV5640 driver that landed in mainline and modified another driver to support the OV13855. After setting up the media links, the OV5640 works with a GStreamer pipeline, but the OV13855 does not work yet, not least because I lack a working pipeline for testing. The driver sets MEDIA_BUS_FMT_SGRBG10_1X10 as its format code, which I think should match the output format described in the datasheet (“10-bit RGB RAW”).

I’d like to know how to convert the hardware formats (MEDIA_BUS_FMT_SGRBG10_1X10 for the OV13855, MEDIA_BUS_FMT_UYVY8_2X8 for the OV5640) into something the application stack is able to use. Eventually, I need to stream the video content into a web application running in a QtWebEngine/Chromium context, and if I read the code correctly, this is where the supported formats are set up:

https://github.com/qt/qtwebengine-chromium/blob/b45f07bfbe74c333f1017810c2409e1aa6077a1b/chromium/third_party/webrtc/modules/video_capture/linux/video_capture_linux.cc#L153

So my question is: how do I set up the media pad links to bring the native hardware formats into something that the Chromium code is willing to accept?

Thanks for any pointer!

Daniel

Generally, GStreamer can automatically negotiate pixel formats if you add a videoconvert element somewhere between the source, the capsfilter (if there is one), and the sink.

Note that the effect is subtly different if the videoconvert is placed before the capsfilter. With a v4l2src I’d recommend adding it after the capsfilter.
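For example, a minimal sketch (the /dev/video0 node and the UYVY capsfilter are assumptions here, adjust both to your setup):

gst-launch-1.0 v4l2src device=/dev/video0 ! video/x-raw,format=UYVY,width=1280,height=720 ! videoconvert ! autovideosink

With this ordering the capsfilter constrains what v4l2src has to produce, and videoconvert then translates that into whatever the downstream sink accepts.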

Okay, but even with such a GStreamer element in place, applications such as Chromium won’t benefit from it, because they open the naked /dev/videoX device and handle the content directly.
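One workaround that comes to mind (untested, and it needs the out-of-tree v4l2loopback module, here assumed to register /dev/video2) would be to let GStreamer do the conversion and feed the result back into a kernel video device that Chromium can open:

gst-launch-1.0 v4l2src device=/dev/video0 ! videoconvert ! video/x-raw,format=YUY2 ! v4l2sink device=/dev/video2

That costs an extra copy, though, so I’d prefer something native.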

If I’m reading the documentation correctly, there is no way to do a conversion from 10-bit RGB into some YUV format in hardware. Is that correct?

Thanks,
Daniel

Furthermore, it seems that the camera implementation for this hardware platform can’t handle 10-bit raw Bayer formats, even though the documentation states otherwise. Can anyone tell me whether there’s something wrong with the following setup?

media-ctl -d /dev/media1 -l '"msm_csiphy0":1->"msm_csid0":0[1],"msm_csid0":1->"msm_ispif0":0[1],"msm_ispif0":1->"msm_vfe0_rdi0":0[1]'
media-ctl -d /dev/media1 -V '"ov13858 1-0010":0[fmt:SRGGB10_1X10/4224x3136 field:none],"msm_csiphy0":0[fmt:SRGGB10_1X10/4224x3136 field:none],"msm_csid0":0[fmt:SRGGB10_1X10/4224x3136 field:none],"msm_ispif0":0[fmt:SRGGB10_1X10/4224x3136 field:none],"msm_vfe0_rdi0":0[fmt:SRGGB10_1X10/4224x3136 field:none]'
gst-launch-1.0 v4l2src device=/dev/video0 ! fakesink sync=false
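For reference, the resulting links and per-pad formats can be inspected with:

media-ctl -d /dev/media1 -p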

The GStreamer pipeline bails out with

0:00:01.461826354 2582 0x319ec0f0 WARN basesrc gstbasesrc.c:2950:gst_base_src_loop: error: Internal data stream error.
0:00:01.461882604 2582 0x319ec0f0 WARN basesrc gstbasesrc.c:2950:gst_base_src_loop: error: streaming stopped, reason not-negotiated (-4)

Note that the 13855 driver was backported from mainline and modified to make it compatible with the embedded hardware. It may well be that there’s a problem in that code, but none of its callbacks report any error, and the message above suggests it’s rather a format negotiation error that happens well before the streaming actually starts.
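A quick way to cross-check what the video node actually advertises would be v4l2-ctl from v4l2-utils:

v4l2-ctl -d /dev/video0 --list-formats-ext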

This is what yavta has to say:

# yavta /dev/video0
Device /dev/video0 opened.
Device `Qualcomm Camera Subsystem' on `platform:1b0ac00.camss' is a video output (without mplanes) device.
Video format: pRAA (41415270) 4224x3136 field none, 1 planes: 
 * Stride 5280, buffer size 16558080

Any hint?

Hi Daniel,

Yes, there is no support for raw-to-YUV conversion in hardware.

I’m not sure that v4l2src supports 10-bit raw formats. You can take a look at “gst-inspect-1.0 v4l2src” and check.
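For example, to see whether any Bayer formats show up in the v4l2src caps:

gst-inspect-1.0 v4l2src | grep -i bayer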

Well, yavta can say more if you ask for it :) You can check yavta -h and then try something like:
yavta -B capture -c10 -I -n 5 -f SRGGB10P -s 4224x3136 -F /dev/video0

Btw it seems that you are using quite an old version of the CAMSS driver. I’d recommend upgrading to the latest, in which case you will have to change the above to:
yavta -B capture-mplane -c10 -I -n 5 -f SRGGB10P -s 4224x3136 -F /dev/video0
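(The -F option additionally writes the captured frames to disk, so you can inspect the raw Bayer data offline.)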

Best regards,
Todor

Nah, it doesn’t. Seems I need to patch that in. I wonder what other people use to access such a stream then?

Hmm, okay. That’s the driver that was in the release/qcomlt-4.9 branch up until a couple of weeks ago. I’ve rebased now, and the new version bails out like this with the 13MP, 4-lane camera:

0:00:02.362353333  2300      0x882e0f0 WARN          v4l2bufferpool gstv4l2bufferpool.c:748:gst_v4l2_buffer_pool_start:<v4l2src0:pool:src> Uncertain or not enough buffers, enabling copy threshold
0:00:02.387700520  2300      0x882e0f0 ERROR         v4l2bufferpool gstv4l2bufferpool.c:635:gst_v4l2_buffer_pool_streamon:<v4l2src0:pool:src> error with STREAMON 32 (Broken pipe)
0:00:02.388075833  2300      0x882e0f0 WARN          v4l2bufferpool gstv4l2bufferpool.c:1058:gst_v4l2_buffer_pool_poll:<v4l2src0> error: poll error 1: Broken pipe (32)
0:00:02.388648228  2300      0x882e0f0 WARN                 v4l2src gstv4l2src.c:871:gst_v4l2src_create:<v4l2src0> error: Failed to allocate a buffer
ERROR: from element /GstPipeline:pipeline0/GstV4l2Src:v4l2src0: Could not read from resource.
0:00:02.389072447  2300      0x882e0f0 WARN                 basesrc gstbasesrc.c:2950:gst_base_src_loop:<v4l2src0> error: Internal data stream error.

I’m curious - are you testing the driver with 4-lane CSI cameras as well? If so, which one are you using?

Also, the new version of the driver made V4L2_CID_PIXEL_RATE and V4L2_CID_LINK_FREQ mandatory, which the driver for the OV5640 (the secondary camera in the design) lacks. I need to implement that first. Any hint on how to come up with the correct numbers? Otherwise I’ll go for some trial and error.
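My current working assumption, which I still need to verify, is that for a CSI-2 sensor the pixel rate follows from the link frequency, the lane count and the bits per sample (the clock being double data rate), roughly like this:

# unverified back-of-the-envelope calculation, all values are made-up placeholders
link_freq=480000000   # Hz, from the sensor's PLL configuration
nr_lanes=2            # CSI-2 data lanes
bpp=10                # bits per sample for a 10-bit raw format
echo $(( link_freq * 2 * nr_lanes / bpp ))   # candidate pixel rate in Hz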

Thanks again,
Daniel

A small update on this: the new camss driver is working fine for the 2-lane camera (OV5640 on CSI1), but I still fail to use the 4-lane interface (CSI0). To recap, the driver I’m using is a modified version of the mainline OV13858 driver which was altered to reflect the parameters described in the sensor datasheet.

The module seems to operate fine, the driver can communicate with it via I2C, and I can see it draw a significant amount of power depending on the resolution, which makes sense. The video pipeline, however, does not see any frames yet, neither when testing with yavta nor with a GStreamer setup.

I wonder which values the clock-lanes and data-lanes parameters in the DTS should have in this case. I’d be grateful for any pointer on how to debug this further and what to look out for.
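For reference, this is roughly how I understand the 2-lane OV5640 endpoint is described on the DB410c (sketched from memory, label names made up, so take it with a grain of salt):

ov5640_ep: endpoint {
        clock-lanes = <1>;    /* the clock sits on physical lane 1 on this CSIPHY */
        data-lanes = <0 2>;   /* two data lanes */
        remote-endpoint = <&csiphy_ep>;
};

I’d naively extend data-lanes to <0 2 3 4> for the 4-lane sensor, but that’s exactly the part I’m unsure about.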

Thanks :)