Can the Adreno 360 output H264 to a file?

Hello,

I am interested in using the Adreno 360, but I can’t find its datasheet, so here is my question.
After processing frames from an input camera I would like to generate an H.264 output video, but I don’t want to send it to a display; instead, I would like to store it in a file.

Is it possible to achieve this with the Adreno 360, i.e. have it generate the H.264 video and store it in a file?
If so, what can I use for this task?

Regards,
Simon

I don’t think this is supported by the Adreno; it is a GPU (without any OpenCL features), not a video encoder. However the DB410C does have a hardware H.264 encoder!

In Debian it is hooked up via GStreamer and can be used to encode video; there is an example pipeline in the release notes: http://builds.96boards.org/releases/dragonboard410c/linaro/debian/17.06.1/ (scroll down).

Hello @danielt

Thank you for your info.

However the DB410C does have a hardware H.264 encoder!

Yes, but I have to generate H.264 from the processed frames.

Regards,
Simon

If you want to process the frames then obviously you would have to extend the example pipeline in the release notes to add the appropriate processing stage. Alternatively you could run separate frame-grab and encode pipelines (either in GStreamer or direct to V4L2) and open-code the image processing.

Whichever approach you adopt, all you are really doing under the covers is getting the GPU to render to an off-screen buffer and then passing it to the encoder.

Hello @danielt

In my application I already have the processed frames, since I do the processing myself. The whole picture is:

USB Camera → H.264 encoded frames → Image Processing → H.264 encoded video output file (not implemented yet).

The last part is missing.
So I understand it’s impossible to do this with the Adreno.

Regards.
Simon

You mean you have an application already generating a stream of images
and you want to encode them?

There are multiple ways you could do this, but I’d suggest reading the
GStreamer tutorials, such as:
https://gstreamer.freedesktop.org/documentation/tutorials/basic/short-cutting-the-pipeline.html

The general aim would be to inject the frames into a GStreamer encoder
pipeline (similar to the one shown in the release notes) using an appsrc.

The other hint is that if you want to keep using the element ! element
style notation to describe the pipeline (the tutorial above describes
its pipelines in C rather than as parseable strings) you’ll have to look
for examples based on GstParse too; there is a sketch along those lines
after the list below.

This might seem slightly more complex than trying to code directly to
V4L2 but there are some benefits:

  • There’s more example code :wink:
  • It’s much easier to switch between hardware and software codecs just
    by changing the elements in the encoder pipeline (meaning you can do
    a lot of the development work on your PC using a software H.264
    encoder).
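
Putting those hints together, here is a rough sketch of an appsrc-fed encoder pipeline built with GstParse (untested; x264enc is the software encoder mentioned above, and the frame size, rate, format and element choices are all assumptions to adapt):

#include <gst/gst.h>
#include <gst/app/gstappsrc.h>

int main(int argc, char *argv[])
{
    gst_init(&argc, &argv);

    /* Encoder pipeline described as a GstParse string. x264enc is a
     * software encoder (handy for development on a PC); on the DB410C it
     * could be swapped for the hardware element. */
    GError *err = NULL;
    GstElement *pipeline = gst_parse_launch(
        "appsrc name=src is-live=true format=time "
        "caps=video/x-raw,format=I420,width=640,height=480,framerate=15/1 "
        "! videoconvert ! x264enc ! h264parse ! mp4mux "
        "! filesink location=out.mp4", &err);
    if (!pipeline) {
        g_printerr("parse error: %s\n", err->message);
        return 1;
    }

    GstElement *src = gst_bin_get_by_name(GST_BIN(pipeline), "src");
    gst_element_set_state(pipeline, GST_STATE_PLAYING);

    /* Inject the frames your application produces (dummy grey I420
     * frames here, with timestamps for a 15 fps stream). */
    const gsize frame_size = 640 * 480 * 3 / 2;
    for (int i = 0; i < 150; i++) {
        GstBuffer *buf = gst_buffer_new_allocate(NULL, frame_size, NULL);
        gst_buffer_memset(buf, 0, 0x80, frame_size);
        GST_BUFFER_PTS(buf) = gst_util_uint64_scale(i, GST_SECOND, 15);
        GST_BUFFER_DURATION(buf) = gst_util_uint64_scale(1, GST_SECOND, 15);
        if (gst_app_src_push_buffer(GST_APP_SRC(src), buf) != GST_FLOW_OK)
            break;
    }
    gst_app_src_end_of_stream(GST_APP_SRC(src));  /* lets mp4mux finalise */

    /* Wait for the pipeline to drain, then clean up. */
    GstBus *bus = gst_element_get_bus(pipeline);
    GstMessage *msg = gst_bus_timed_pop_filtered(bus, GST_CLOCK_TIME_NONE,
        (GstMessageType)(GST_MESSAGE_ERROR | GST_MESSAGE_EOS));
    if (msg)
        gst_message_unref(msg);
    gst_object_unref(bus);
    gst_object_unref(src);
    gst_element_set_state(pipeline, GST_STATE_NULL);
    gst_object_unref(pipeline);
    return 0;
}

(Build with the flags from pkg-config --cflags --libs gstreamer-app-1.0.)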

Hello @danielt

This is exactly what I need to do with OpenCV.

Regards,
Simon

In that case I would take a look at:
https://gstreamer.freedesktop.org/data/doc/gstreamer/head/gst-plugins-bad/html/gst-plugins-bad-plugins-plugin-opencv.html

Firstly, you should be able to use these plugins from gst-launch very easily to do some basic testing (and perhaps incorporate suitable video converters into the pipeline). Following on from that, you should also be able to use the underlying code as inspiration for how best to interface your own code with GStreamer.

PS I particularly like the way facedetect works… allowing GStreamer to run the entire pipeline (capture through to display/encode) and simply producing a stream of events for the application to react to. Hopefully that gives you some idea of why I’m suggesting exploiting GStreamer as much as you possibly can… making your image processing code reusable in this way is very powerful.
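
As a rough sketch of that event-driven pattern (untested; the cascade file path is an assumption, and the element message name is from memory, so do check the plugin docs):

#include <gst/gst.h>

int main(int argc, char *argv[])
{
    gst_init(&argc, &argv);

    /* Let GStreamer run capture -> detect -> display, and just watch the
     * bus for the detection events the element emits. */
    GError *err = NULL;
    GstElement *pipeline = gst_parse_launch(
        "v4l2src ! videoconvert ! facedetect "
        "profile=/usr/share/opencv/haarcascades/haarcascade_frontalface_default.xml "
        "! videoconvert ! autovideosink", &err);
    if (!pipeline) {
        g_printerr("parse error: %s\n", err->message);
        return 1;
    }
    gst_element_set_state(pipeline, GST_STATE_PLAYING);

    GstBus *bus = gst_element_get_bus(pipeline);
    for (;;) {
        GstMessage *msg = gst_bus_timed_pop(bus, GST_CLOCK_TIME_NONE);
        if (GST_MESSAGE_TYPE(msg) == GST_MESSAGE_ELEMENT &&
            gst_message_has_name(msg, "facedetect")) {
            /* The application reacts here instead of touching the video
             * path itself. */
            gchar *s = gst_structure_to_string(gst_message_get_structure(msg));
            g_print("face(s) reported: %s\n", s);
            g_free(s);
        }
        gboolean stop = (GST_MESSAGE_TYPE(msg) == GST_MESSAGE_ERROR ||
                         GST_MESSAGE_TYPE(msg) == GST_MESSAGE_EOS);
        gst_message_unref(msg);
        if (stop)
            break;
    }
    gst_object_unref(bus);
    gst_element_set_state(pipeline, GST_STATE_NULL);
    gst_object_unref(pipeline);
    return 0;
}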

In that case I would take a look at …

Mmm… My entire codebase uses OpenCV, and as far as I know I can execute a GStreamer pipeline from OpenCV by setting it in the VideoWriter initialisation, i.e. by passing in this pipeline:

VideoWriter(string("appsrc ! autovideoconvert "
                   "! v4l2video1h264enc extra-controls=\"encode,h264_level=10,h264_profile=4,frame_level_rate_control_enable=1\" "
                   "! h264parse ! rtph264pay config-interval=1 pt=96 "
                   "! filesink location=pfile.mp4"),
            CV_FOURCC('H', '2', '6', '4'), 15,
            Size(DEFAULT_FRAME_Y, DEFAULT_FRAME_X));

I don’t remember where I found that example pipeline.
The file pfile.mp4 is created, but it is always empty…
So I am going to read some more about GStreamer (see the update below).

Thank you for the infos !
Regards,
Simon
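
Update: from what I have read so far, one likely problem is that rtph264pay outputs raw RTP packets rather than an MP4 container, so nothing playable ever reaches filesink, and cv::Size expects (width, height) in that order. Something along these lines might work instead (untested; it needs OpenCV ≥ 3.4 built with GStreamer support, and x264enc is a software encoder so the same code also runs on a PC; on the DB410C it could be swapped for the hardware element):

#include <opencv2/opencv.hpp>

int main()
{
    const int width = 640, height = 480;   // assumed frame geometry
    const double fps = 15.0;

    // appsrc is fed by VideoWriter::write(); mp4mux produces a playable
    // .mp4. Note cv::Size takes (width, height).
    cv::VideoWriter writer(
        "appsrc ! videoconvert ! x264enc tune=zerolatency ! h264parse "
        "! mp4mux ! filesink location=pfile.mp4",
        cv::CAP_GSTREAMER, 0, fps, cv::Size(width, height), true);
    if (!writer.isOpened())
        return 1;

    cv::Mat frame(height, width, CV_8UC3, cv::Scalar(0, 128, 0));
    for (int i = 0; i < 150; i++)          // ~10 s of dummy frames
        writer.write(frame);

    writer.release();   // sends EOS so mp4mux can finalise the file
    return 0;
}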

Hello @danielt,

I have one last question. In your first reply you wrote:

However the DB410C does have a hardware H.264 encoder!

Does GStreamer use the hardware H.264 encoder, or am I completely misunderstanding it?
If so, how do I use the hardware H.264 encoder with the DB410c?

I can’t find any official documentation on the topic.
Thank you in advance.
Regards,
Simon

As far as I know there are only the examples in the release notes. In general we rely on the GStreamer documentation to help people build more advanced pipelines (other than introducing v4l2src and v4l2h264enc there’s nothing “special” happening that is specific to the DB410C).
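
For illustration, a minimal sketch built around those two elements (untested; the device node, the capture caps and the MP4 muxing are assumptions for the example rather than anything taken from the release notes):

#include <gst/gst.h>

int main(int argc, char *argv[])
{
    gst_init(&argc, &argv);

    /* Grab from the camera, encode with the hardware element, mux to
     * MP4. Adjust device and caps to match your camera. */
    GError *err = NULL;
    GstElement *pipeline = gst_parse_launch(
        "v4l2src device=/dev/video0 "
        "! video/x-raw,width=640,height=480,framerate=15/1 "
        "! videoconvert ! v4l2h264enc ! h264parse "
        "! mp4mux ! filesink location=cam.mp4", &err);
    if (!pipeline) {
        g_printerr("parse error: %s\n", err->message);
        return 1;
    }

    gst_element_set_state(pipeline, GST_STATE_PLAYING);

    /* Record for ten seconds, then shut the pipeline down cleanly so
     * mp4mux can write its headers. */
    g_usleep(10 * G_USEC_PER_SEC);
    gst_element_send_event(pipeline, gst_event_new_eos());
    GstBus *bus = gst_element_get_bus(pipeline);
    GstMessage *msg = gst_bus_timed_pop_filtered(bus, GST_CLOCK_TIME_NONE,
        (GstMessageType)(GST_MESSAGE_ERROR | GST_MESSAGE_EOS));
    if (msg)
        gst_message_unref(msg);
    gst_object_unref(bus);
    gst_element_set_state(pipeline, GST_STATE_NULL);
    gst_object_unref(pipeline);
    return 0;
}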

Anyhow I presume my reply in the other thread is sufficient?


… at least not until we have to discuss efficient (zero-copy) passing of video data between decoders and sinks, but that is also tackled in the release notes.
