My goal is to build a GStreamer pipeline that uses glfilterapp to compose graphics on top of a video, then tee the composited stream into two branches: one for display and one for encoding.
With this kind of pipeline (example pipelines below) I run into many problems: either dropped frames or thread race conditions.
Since that does not work as expected, I am now trying to display the result myself directly from the glfilterapp callback, and there I hit an OpenGL multithreading problem: the callback runs in a GStreamer thread while the Qt rendering loop runs in another.
I managed to share the OpenGL context between the two threads, but when I fill a texture in one thread, drawing it in the other shows nothing. If I read the texture data back it looks correct, and if I download and then re-upload the data it works, but the CPU stalls at 100%. Using a pixel buffer object seems to work better, but it consumes more CPU. Is there a proper way to share a texture between two contexts on this board?
GStreamer pipelines:
gst-launch-1.0 v4l2src norm=PAL ! glfilterapp ! video/x-raw,format=UYVY ! tee name=t ! queue ! glimagesink t. ! queue ! videoconvert ! video/x-raw,format=NV12 ! v4l2enc ! fakesink
gst-launch-1.0 v4l2src norm=PAL ! glfilterapp ! video/x-raw,format=UYVY,framerate=25/1,width=800,height=600 ! glimagesink
WARNING: from element /GstPipeline:pipeline0/GstAutoVideoSink:autovideosink0/GstGLImageSink:autovideosink0-actual-sink-glimage: A lot of buffers are being dropped.
Additional debug info:
gstbasesink.c(2794): gst_base_sink_is_too_late (): /GstPipeline:pipeline0/GstAutoVideoSink:autovideosink0/GstGLImageSink:autovideosink0-actual-sink-glimage:
There may be a timestamping problem, or this computer is too slow.
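One variant I intend to try against the "buffers are being dropped" warning is making the queues leaky and the display sink non-blocking, so a slow branch drops frames instead of stalling the whole pipeline (the specific property values below are guesses on my part):

```shell
gst-launch-1.0 v4l2src norm=PAL ! glfilterapp ! video/x-raw,format=UYVY ! \
  tee name=t \
  t. ! queue leaky=downstream max-size-buffers=2 ! glimagesink sync=false \
  t. ! queue leaky=downstream max-size-buffers=2 ! videoconvert ! \
    video/x-raw,format=NV12 ! v4l2enc ! fakesink
```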