Is it possible to use the H264 encoder without the gstreamer libraries?

Hello,

I am interested in using the H264 encoder without the gstreamer libraries: only using libv4l to access the video camera and, if possible, passing the raw stream to the hardware encoder and getting the encoded stream back through a kernel driver in order to write the video.

gstreamer is a big library and also a little bit heavyweight.

Could you please provide information about how to use the encoder without the gstreamer libs, if possible?
Thank you in advance.
Simon

Update:

The only references to ‘h264’ in the kernel drivers can be found in

drivers/media/platform/qcom/venus

This link confirms that this is the driver of the hardware encoder.

Could you confirm that it should be possible to inject the raw frames into the encoder and get the encoded data back using the kernel driver?
The pseudocode I mean is:

1 - Retrieve the frames from the camera using the v4l2 user space libs.
2 - Pass the camera frame to the hardware encoder through a kernel buffer, perhaps a character driver interfaced with the hardware encoder. This could imply modifying the venus encoder driver (venc).
3 - Get the encoded data back from another kernel buffer, again through a kernel character driver on the venus driver.

Regards.
Simon

Yes, this is certainly possible; it is all about using the v4l2 API to manage the v4l2 buffers and controls. You probably need to mimic what is done in gstreamer (gst-plugins-good).
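Something like this, as a minimal untested sketch against the raw v4l2 UAPI (the /dev/video1 node, the NV12 input format and the multiplanar queues are assumptions here; check what your venus encoder node actually exposes):

    #include <fcntl.h>
    #include <string.h>
    #include <sys/ioctl.h>
    #include <linux/videodev2.h>

    int main(void)
    {
        /* assumption: the venus encoder node; it may be another /dev/videoN */
        int fd = open("/dev/video1", O_RDWR);
        struct v4l2_format fmt;

        /* OUTPUT queue: the raw frames you feed in */
        memset(&fmt, 0, sizeof(fmt));
        fmt.type = V4L2_BUF_TYPE_VIDEO_OUTPUT_MPLANE;
        fmt.fmt.pix_mp.width = 1280;
        fmt.fmt.pix_mp.height = 720;
        fmt.fmt.pix_mp.pixelformat = V4L2_PIX_FMT_NV12;
        ioctl(fd, VIDIOC_S_FMT, &fmt);

        /* CAPTURE queue: the encoded H264 you read back */
        memset(&fmt, 0, sizeof(fmt));
        fmt.type = V4L2_BUF_TYPE_VIDEO_CAPTURE_MPLANE;
        fmt.fmt.pix_mp.pixelformat = V4L2_PIX_FMT_H264;
        ioctl(fd, VIDIOC_S_FMT, &fmt);

        /* request buffers on the output queue (same again for the
           capture queue), then mmap() each one */
        struct v4l2_requestbuffers req;
        memset(&req, 0, sizeof(req));
        req.count = 4;
        req.type = V4L2_BUF_TYPE_VIDEO_OUTPUT_MPLANE;
        req.memory = V4L2_MEMORY_MMAP;
        ioctl(fd, VIDIOC_REQBUFS, &req);

        int type = V4L2_BUF_TYPE_VIDEO_OUTPUT_MPLANE;
        ioctl(fd, VIDIOC_STREAMON, &type);
        type = V4L2_BUF_TYPE_VIDEO_CAPTURE_MPLANE;
        ioctl(fd, VIDIOC_STREAMON, &type);

        /* main loop: VIDIOC_QBUF raw frames on the output queue,
           VIDIOC_DQBUF encoded buffers from the capture queue */
        return 0;
    }

No kernel changes are needed: the m2m device already gives you both directions through the two queues of the same node.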


for example: ffmpeg -loglevel debug -f rawvideo -pix_fmt nv12 -s:v 1280x720 -r 25 -i ~/Videos/raw/freeway.yuv -c:v h264_v4l2m2m out/out.h264.mp4 (h264_v4l2m2m is ffmpeg's wrapper around the kernel's v4l2 memory-to-memory encoder)

You can also link libavcodec into your program (libavcodec is part of ffmpeg), retrieve the encoder by name and perform the encoding that way. That is what Kodi does.
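Roughly like this (an untested sketch with error handling mostly omitted; the NV12 format and 25 fps simply mirror the command line above):

    #include <libavcodec/avcodec.h>

    int main(void)
    {
        /* h264_v4l2m2m is ffmpeg's wrapper around the kernel m2m encoder */
        const AVCodec *codec = avcodec_find_encoder_by_name("h264_v4l2m2m");
        AVCodecContext *ctx = avcodec_alloc_context3(codec);

        ctx->width = 1280;
        ctx->height = 720;
        ctx->pix_fmt = AV_PIX_FMT_NV12;
        ctx->time_base = (AVRational){1, 25};
        if (avcodec_open2(ctx, codec, NULL) < 0)
            return 1;

        AVFrame *frame = av_frame_alloc();
        frame->format = ctx->pix_fmt;
        frame->width = ctx->width;
        frame->height = ctx->height;
        av_frame_get_buffer(frame, 0);

        /* fill frame->data with one raw NV12 picture, then: */
        avcodec_send_frame(ctx, frame);

        AVPacket *pkt = av_packet_alloc();
        while (avcodec_receive_packet(ctx, pkt) == 0) {
            /* pkt->data / pkt->size hold encoded H264: write them out */
            av_packet_unref(pkt);
        }
        return 0;
    }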


BTW, the db410c/db820c codec developer (Stanimir Varbanov) also keeps a couple of public trees that might help you get up to speed if you wish to build your whole userspace v4l2 solution.

This one encodes a test pattern:
stanimir.varbanov/v4l2-encode.git


Thank you both for the quick replies.

It’s quite interesting that libavcodec can be interfaced with the encoder.

Actually I am using gstreamer because I need to get both the encoded video and the raw frame data from the USB camera.
gstreamer, with its support for parallel pipelines, allows me to do this.

I have not started studying the libavcodec code yet, but I think this should also be possible with that library.
@ldts, could you confirm that libavcodec would let me access the camera's raw frames?

@Loic, you are right, of course.

Simon

@simozz, you'll need to use libavdevice [1] for that (to interface with the camera and get the frames), then use libavcodec for encoding. I haven't tried it myself but it seems pretty straightforward.

[1] Libavdevice Documentation
[2] FFmpeg: libavdevice
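If it helps, here is a rough untested sketch of the capture side ("video4linux2" is the libavdevice input name; "/dev/video0" is an assumption, use whatever node your USB camera exposes):

    #include <libavdevice/avdevice.h>
    #include <libavformat/avformat.h>

    int main(void)
    {
        avdevice_register_all(); /* makes the v4l2 input device available */

        const AVInputFormat *v4l2 = av_find_input_format("video4linux2");
        AVFormatContext *fmt_ctx = NULL;
        if (avformat_open_input(&fmt_ctx, "/dev/video0", v4l2, NULL) < 0)
            return 1;

        AVPacket pkt;
        while (av_read_frame(fmt_ctx, &pkt) >= 0) {
            /* each packet is one raw camera frame: keep a copy for the
               raw-data path and hand the same frame to the encoder */
            av_packet_unref(&pkt);
        }

        avformat_close_input(&fmt_ctx);
        return 0;
    }

Since each packet from av_read_frame() is a raw frame here, you can tee it to your raw-data path before encoding, which should cover the parallel-pipeline use you described.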


Great, I will try it. Otherwise I'll take a look at the gstreamer good plugins.
Both solutions require analysing some code.
Thanks.
Simon

Have you looked at the ffmpeg support for the decoders? It's still abstracting things but I think it can end up with a more imperative programming model.


Sorry both for the typo and for joining the thread two days late (blame me for processing my mail in order and not reading far enough ahead…)

Hello @danielt,

Thank you too for your contribution.
I am looking at the ffmpeg library (I am currently trying to compile it for aarch64 outside of the DB410c), and I saw that it can enable the v4l2 m2m encoder.

I have just a last question.
Since it all revolves around the v4l2 stack, shouldn't the same task (accessing raw data and video encoding) be accomplished with just the v4l2 library and some other h264 library (e.g. openh264)?
Browsing the code of libv4l I can see references to m2m.

It seems to me that gstreamer and ffmpeg both act as ‘high level’ libraries around v4l2 and many other components.

I ask because I would like to understand the whole picture (who does what). As I understand it, it is v4l2 that passes the frames directly to the encoder, and the h264 libs get the data back from it and build the video. Am I right?

Simon