Testing the H264 video encoder with a Logitech C270 cam

Hello,

I need to record H264 video from a Logitech C270 camera using the hardware video encoder.
For this task I am following the instructions from here, in the section “Using CSI camera”.

Everything goes fine with the gstreamer-plugins-good modification, but when I run the first command:

# media-ctl -d /dev/media1 -l '"msm_csiphy0":1->"msm_csid0":0[1],"msm_csid0":1->"msm_ispif0":0[1],"msm_ispif0":1->"msm_vfe0_pix":0[1]'

I get the error

Unable to parse link: Invalid argument (22)

Both media-ctl commands return the same error.
I tried to execute the gst-launch-1.0 command without the previous media-ctl commands; the video file is created, but it’s empty.

Are the media-ctl commands needed only for a CSI camera?
How can I successfully run gst-launch for a Logitech C270 camera?

I would like to pass the USB camera’s raw data to the H264 encoder.

Thank you in advance.
Regards,
Simon

Hello,

I have been trying to understand media-ctl, and I see now that the commands reported in the tutorial are specific to the OV5645 CSI camera interface…

In my case the C270 USB camera is /dev/media1, so:

# media-ctl -d /dev/media1 -p
Media controller API version 0.1.0

Media device information
------------------------
driver          uvcvideo
model           UVC Camera (046d:0825)
serial          2FB8A120
bus info        1.2
hw revision     0x12
driver version  4.9.39

Device topology
- entity 1: UVC Camera (046d:0825) (1 pad, 1 link)
            type Node subtype V4L flags 1
            device node name /dev/video0
    pad0: Sink
        <- "Extension 4":1 [ENABLED,IMMUTABLE]

- entity 5: Extension 4 (2 pads, 4 links)
            type V4L2 subdev subtype Unknown flags 0
    pad0: Sink
        <- "Processing 2":1 [ENABLED,IMMUTABLE]
    pad1: Source
        -> "UVC Camera (046d:0825)":0 [ENABLED,IMMUTABLE]
        -> "Extension 6":0 [ENABLED,IMMUTABLE]
        -> "Extension 7":0 [ENABLED,IMMUTABLE]

- entity 8: Extension 6 (2 pads, 1 link)
            type V4L2 subdev subtype Unknown flags 0
    pad0: Sink
        <- "Extension 4":1 [ENABLED,IMMUTABLE]
    pad1: Source

- entity 11: Extension 7 (2 pads, 1 link)
             type V4L2 subdev subtype Unknown flags 0
    pad0: Sink
        <- "Extension 4":1 [ENABLED,IMMUTABLE]
    pad1: Source

- entity 14: Processing 2 (2 pads, 3 links)
             type V4L2 subdev subtype Unknown flags 0
    pad0: Sink
        <- "Camera 1":0 [ENABLED,IMMUTABLE]
    pad1: Source
        -> "Extension 4":0 [ENABLED,IMMUTABLE]
        -> "Extension 3":0 [ENABLED,IMMUTABLE]

- entity 17: Extension 3 (2 pads, 1 link)
             type V4L2 subdev subtype Unknown flags 0
    pad0: Sink
        <- "Processing 2":1 [ENABLED,IMMUTABLE]
    pad1: Source

- entity 20: Camera 1 (1 pad, 1 link)
             type V4L2 subdev subtype Unknown flags 0
    pad0: Source
        -> "Processing 2":0 [ENABLED,IMMUTABLE]

From what I read on the web, I now have to build my own media-ctl command based on the --print-topology output. Could you suggest where to find the official media-ctl documentation?
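
From the examples I have found, the general shape of a link command appears to be the following (the entity and pad names here are placeholders, not taken from my topology; the trailing [1] enables the link):

# media-ctl -d /dev/mediaN -l '"source-entity":1->"sink-entity":0[1]'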

However, at the moment I have a further doubt: how is the APQ8016 H264 hardware encoder used/invoked by the media-ctl and gstreamer commands?

Regards,
Simon

media-ctl is just a wrapper around the media controller API:
https://linuxtv.org/downloads/v4l-dvb-apis/uapi/mediactl/media-controller.html

This API is (mostly) used to control non-software data flow between components of a chip (TV chips in particular have very complex audio/video capture and render pipelines). IIUC the original media-ctl commands on DB410C are used to configure the camera capture pipeline to connect the camera interface to a DMA device. We can then manage the DMA process using the V4L2 API.
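
Two media-ctl flags worth knowing here: -p/--print-topology (used earlier in this thread) dumps the graph, and -r/--reset sets every link inactive so you can configure a pipeline from a clean state:

# media-ctl -d /dev/media0 -p
# media-ctl -d /dev/media0 -r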

In the case of the webcam there is no capture pipeline to control… so everything is configured by default to grab data via V4L2.
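
As a quick sanity check you can list the formats the webcam exposes over V4L2 using v4l2-ctl from v4l-utils (adjust the device node to match your system):

# v4l2-ctl --device=/dev/video0 --list-formats-ext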

I’m able to capture and record video from a webcam using the following pipeline (as it happens, I inserted a clockoverlay to demonstrate filtering of video streams):

gst-launch-1.0 \
  v4l2src device=/dev/video2 ! \
  video/x-raw,width=1280,height=960 ! \
  clockoverlay ! \
  videoconvert ! \
  v4l2h264enc extra-controls="controls,h264_profile=4,video_bitrate=2000000;" ! \
  h264parse ! \
  mp4mux ! \
  filesink location=video.mp4 

In the above pipeline all the hardware encode steps are managed by the v4l2h264enc element (which also interacts with the hardware using the V4L2 API). The final stages involve parsing the elementary stream (in software) to extract the metadata necessary to encapsulate it into the mp4 container before storing it in a file.

PS Apologies for the jargon density in the last paragraph. I’m not sure it can be easily explained without jargon (because easily understood terms such as “compressed” apply equally to both elementary stream and the container) but I’ve done my best to provide links!
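
To check the result, gst-discoverer-1.0 (shipped with the gst-plugins-base tools) prints the container type and the codec of each stream in the recorded file:

gst-discoverer-1.0 video.mp4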


OK, thanks!

For reference, my working pipeline is:

gst-launch-1.0 -e v4l2src device=/dev/webcam ! videoconvert ! video/x-raw,width=544,height=288,framerate=10/1 ! v4l2h264enc ! h264parse ! mp4mux ! filesink location=video.mp4

Simon

Hi Daniel,
I was wondering if you could help me with a problem of mine. I want to use two Logitech C170 USB webcams to capture video simultaneously for stereo imaging, driven by a Raspberry Pi or an Asus Tinker Board.
But I keep running out of USB bandwidth. The errors I get are “VIDIOC_DQBUF” and “no such device found”. Someone suggested that I use webcams with onboard H264 compression, but those are about 5x more expensive than my current setup.
Can you please tell me whether I can apply H264 compression to the streams so that I can get 1080p video from both cameras simultaneously?
NOTE: The Pi/Tinker Board will be running ROS (Robot Operating System), so I don’t have a lot of CPU to give to compression.

Can’t really comment I’m afraid. I have never tried using a web cam with onboard compression.

Is there any other way? I just need a simultaneous feed from both cameras on the Raspberry Pi.
Will dropping the resolution help?
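
For example, something like this is what I have in mind (just a sketch; it assumes both cameras can output MJPEG, which needs far less USB bandwidth than raw video, and that they enumerate as /dev/video0 and /dev/video1):

gst-launch-1.0 -e \
  v4l2src device=/dev/video0 ! image/jpeg,width=640,height=480,framerate=15/1 ! \
  jpegdec ! videoconvert ! autovideosink \
  v4l2src device=/dev/video1 ! image/jpeg,width=640,height=480,framerate=15/1 ! \
  jpegdec ! videoconvert ! autovideosink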