Multistream Inference on Ultra96-V2 using RTSP cameras


How can I do multistream inference on Ultra96 boards?
First of all, how do I interface RTSP cameras with the Ultra96-V2 board?


I see your questions on the subject of implementing multiple RTSP cameras on the Ultra96-V2 appearing in several places (hackster.io, 96boards.org).

I hope you understand that I will not be answering the same question on multiple locations.
Let me know where you want an answer.


Hi Mario,

Thanks for your reply.
It would be a great help if you could assist me here.



Do I understand correctly that you are trying to capture video data from multiple network cameras (i.e. RTSP streams) with the Ultra96-V2?

If yes, I would start with GStreamer. It has a source plug-in for RTSP called “rtspsrc”.

Start small, one stream at a time.

To verify that you have gstreamer on the Ultra96-V2 image you are using,
use the following command:
$ gst-launch-1.0 --version

To verify which GStreamer plug-ins you have available, use the following command(s):
$ gst-inspect-1.0
$ gst-inspect-1.0 | grep rtspsrc

Do you know what encoding your RTSP cameras are using?
H.264? H.265? …


Hi Mario,

Thank you very much for your kind response.

Yes, you are right. I am trying to capture video data from multiple RTSP cameras with the Ultra96-V2 and do deep learning inference on those streams (with the Vitis AI and DPU SD card image you provided in this link).

I executed the commands you gave, and their output is pasted below:

root@ultra96v2-2020-1:~#  gst-launch-1.0 --version
gst-launch-1.0 version 1.16.1
GStreamer 1.16.1
Unknown package origin
root@ultra96v2-2020-1:~# gst-inspect-1.0 | grep rtspsrc
rtsp:  rtspsrc: RTSP packet receiver

The encoding used by my cameras is H.264.

What are the next steps I need to follow?

One more thing: I am using exactly the same SD card image you provided in this link:


Hi Mario,

Looking forward to your reply.

Actually, I want to run the deep learning inference with an RTSP stream as the input. I am referring to your Vitis AI tutorials for porting my custom-trained model to the Ultra96-V2. I have created the .elf file and have also run the inference with a webcam. Now I want to do the inference with an RTSP camera, so what do I need to change in the inference code to make it work with an RTSP camera?
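For context, the change being asked about here usually comes down to the frame source alone. If the webcam demo uses OpenCV's `VideoCapture`, one way is to pass a GStreamer pipeline string instead of the webcam index. This is only a sketch: it assumes OpenCV was built with GStreamer support, and the URL, decoder element, and resolution below are placeholders that depend on the camera and the SD card image.

```python
# Sketch: swap a webcam source for an RTSP source in an OpenCV-based demo.
# Assumptions (not confirmed by this thread): OpenCV built with GStreamer
# support; which H.264 decoder element exists depends on the image.

def rtsp_pipeline(url, decoder="omxh264dec", latency_ms=100,
                  width=640, height=480):
    """Build a GStreamer pipeline string that OpenCV can open.

    Mirrors a typical RTSP decode chain: depayload, parse, decode,
    then convert/scale to BGR frames for OpenCV.
    """
    return (
        f"rtspsrc location={url} latency={latency_ms} ! queue ! "
        f"rtph264depay ! h264parse ! {decoder} ! videoconvert ! videoscale ! "
        f"video/x-raw,width={width},height={height},format=BGR ! "
        "appsink drop=true"
    )

# Usage in the inference loop (only the source line changes):
#   import cv2
#   cap = cv2.VideoCapture(rtsp_pipeline("rtsp://<camera-ip>:554/stream"),
#                          cv2.CAP_GSTREAMER)
#   while cap.isOpened():
#       ok, frame = cap.read()
#       if not ok:
#           break
#       # ... run DPU inference on `frame` exactly as in the webcam demo ...
```

The rest of the inference code (preprocessing, DPU calls, postprocessing) should not need to change, since it still receives BGR frames.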



I do not have any experience with RTSP, so you are probably more experienced than me already.

After some searching on Google, I made two tests with an H.264 RTSP camera (implemented with a KV260).

stream ready at:

Here is what I tried for my first test.

root@u96v2-sbc-base-2020-2:~# gst-launch-1.0 rtspsrc location=rtsp:// latency=100 ! queue ! rtph264depay ! h264parse ! avdec_h264 ! videoconvert ! videoscale ! video/x-raw,width=640,height=480 ! autovideosink
WARNING: erroneous pipeline: no element "avdec_h264"

This first test did not work, since we do not have the “avdec_h264” plugin available on our system.
We do have the “omxh264dec” plugin available, so I also tried the following:

root@u96v2-sbc-base-2020-2:~#  gst-launch-1.0 rtspsrc location=rtsp:// latency=100 ! queue ! rtph264depay ! h264parse ! omxh264dec ! videoconvert ! videoscale ! video/x-raw,width=640,height=480 ! autovideosink
Setting pipeline to PAUSED ...
[2021-04-27 17:29:57.136737189] [omx_core.cpp:181]      [OMX_GetHandle] Couldnt allocate dma allocator (tried using /dev/allegroDecodeIP)
ERROR: Pipeline doesn't want to pause.
ERROR: from element /GstPipeline:pipeline0/GstOMXH264Dec-omxh264dec:omxh264dec-omxh264dec0: Could not initialize supporting library.
Additional debug info:
../../../../git/gst-libs/gst/video/gstvideodecoder.c(2627): gst_video_decoder_change_state (): /GstPipeline:pipeline0/GstOMXH264
Failed to open decoder
Setting pipeline to NULL ...
Freeing pipeline ...

This unsuccessful attempt seems to report some kind of memory allocation error, which I do not understand.

I will try to investigate further, but expect delays in my responses …
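Looking ahead to the original multistream question: once a single-stream decode pipeline works, one common pattern is a reader thread per camera feeding a shared frame queue that the inference loop drains. The following is a rough sketch only; the capture sources stand in for whatever handle ends up working (e.g. `cv2.VideoCapture` objects opened on RTSP pipelines), and `handle_frame` is a placeholder for the DPU inference call.

```python
# Sketch: fan several camera streams into a single inference loop.
# Hypothetical structure; sources and handle_frame are placeholders.
import queue
import threading


def reader(source_id, source, frames, stop):
    """Read frames from one source and tag each with its camera id."""
    while not stop.is_set():
        ok, frame = source.read()
        if not ok:
            break
        try:
            # Drop the frame if the consumer falls behind (like appsink drop=true).
            frames.put((source_id, frame), timeout=0.5)
        except queue.Full:
            continue


def run_multistream(sources, handle_frame, max_frames=None):
    """Drain frames from all sources, calling handle_frame(cam_id, frame)."""
    frames = queue.Queue(maxsize=64)
    stop = threading.Event()
    threads = [
        threading.Thread(target=reader, args=(i, s, frames, stop), daemon=True)
        for i, s in enumerate(sources)
    ]
    for t in threads:
        t.start()
    seen = 0
    while max_frames is None or seen < max_frames:
        try:
            cam_id, frame = frames.get(timeout=1.0)
        except queue.Empty:
            break  # all readers finished and the queue drained
        handle_frame(cam_id, frame)  # e.g. run DPU inference on this frame
        seen += 1
    stop.set()
```

Whether one DPU inference loop can keep up with all streams, or whether frames need to be dropped more aggressively, would have to be measured on the board itself.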