Using GStreamer to display video (without audio) while recording both audio and video

Date: 2022-02-28 16:08:05

My Logitech C920 webcam provides a video stream encoded in h264. I'm using this "capture" tool to access the data:

So I can view live video:

/usr/local/bin/capture -d /dev/video0 -c 100000 -o | \
  gst-launch-1.0 -e filesrc location=/dev/fd/0 \
                    ! h264parse \
                    ! decodebin \
                    ! xvimagesink sync=false

...or record the stream to an MP4 file:

/usr/local/bin/capture -d /dev/video0 -c 100000 -o | \
  gst-launch-1.0 -e filesrc location=/dev/fd/0 \
                    ! h264parse \
                    ! mp4mux \
                    ! filesink location=/tmp/video.mp4

...but I can't for the life of me figure out how to do both at the same time. Having a live feed on screen while recording can be useful sometimes, so I'd like to make this work. I've spent hours looking for a way to record and display simultaneously, but no luck; no amount of messing around with tees and queues is helping. The general pattern I've been trying to follow is sketched below.
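
For reference, here's that pattern with test sources standing in for my hardware; as I understand it, every branch after a tee needs its own queue so the branches run in separate threads and can't stall each other:

# sketch with test sources, not my real pipeline: one tee branch
# goes to the screen, the other gets encoded and muxed to a file
gst-launch-1.0 -e videotestsrc num-buffers=300 ! tee name=t \
    t. ! queue ! videoconvert ! xvimagesink sync=false \
    t. ! queue ! x264enc ! h264parse ! mp4mux ! filesink location=/tmp/tee-demo.mp4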

Guess it would be a bonus to get ALSA audio (hw:2,0) into this as well, but I can get around that in an ugly, hacky way if I have to. For now I get the errors below, even though hw:2,0 is a valid input in Audacity or arecord, for example:

Recording open error on device 'hw:2,0': No such file or directory
Recording open error on device 'plughw:2,0': No such file or directory
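
For the record, here's how I'm double-checking which ALSA capture devices actually exist (the card/device numbers can shift between reboots, which might be why hw:2,0 comes and goes):

# list all ALSA capture devices; "card N: ... device M:" maps to hw:N,M
arecord -l
# quick two-second test capture from the device in question
arecord -D plughw:2,0 -f S16_LE -r 44100 -c 1 -d 2 /tmp/alsa-test.wav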

So to recap: would love to put those two video bits together, bonus if audio would work too. I feel like such a newbie.

Thanks in advance for any help you can provide.

Edit: non-working code:

/usr/local/bin/capture -d /dev/video1 -c 100000 -o | \
     gst-launch-1.0 -e filesrc location=/dev/fd/0 ! tee name=myvid ! h264parse ! decodebin \
     ! xvimagesink sync=false myvid. ! queue ! mux. alsasrc device=plughw:2,0 ! \
     audio/x-raw,rate=44100,channels=1,depth=24 ! audioconvert ! queue ! mux. mp4mux \
     name=mux ! filesink location=/tmp/out.mp4 

...leads to this:

WARNING: erroneous pipeline: could not link queue1 to mux 
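
For what it's worth, raising GStreamer's debug level gives far more detail than that one-line warning, e.g. which caps the two pads failed to agree on:

# GST_DEBUG=3 also prints warnings, including caps-negotiation failures;
# GST_DEBUG=GST_PADS:5 is much noisier but shows the pad linking in detail
GST_DEBUG=3 gst-launch-1.0 -e filesrc location=/dev/fd/0 ! tee name=myvid <rest of pipeline as above>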

Edit: Tried umlaeute's suggestion, got a nearly empty video file and one frozen frame of live video. With or without audio made no difference, even after fixing two small errors in the audio-enabled code: a doubled quotation mark, and the audio not being encoded to anything MP4-compatible (adding avenc_aac after audioconvert fixed that). Error output:

Setting pipeline to PAUSED ...
Pipeline is live and does not need PREROLL ...
Setting pipeline to PLAYING ...
New clock: GstAudioSrcClock
Redistribute latency...
ERROR: from element /GstPipeline:pipeline0/GstMP4Mux:mux: Could not multiplex stream.
Additional debug info:
gstqtmux.c(2530): gst_qt_mux_add_buffer (): /GstPipeline:pipeline0/GstMP4Mux:mux:
DTS method failed to re-order timestamps.
EOS on shutdown enabled -- waiting for EOS after Error
Waiting for EOS...
ERROR: from element /GstPipeline:pipeline0/GstFileSrc:filesrc0: Internal data flow error.
Additional debug info:
gstbasesrc.c(2809): gst_base_src_loop (): /GstPipeline:pipeline0/GstFileSrc:filesrc0:
streaming task paused, reason error (-5)

EDIT: Okay, umlaeute's corrected code works perfectly, but only if I'm using v4l2src instead of the capture tool. And for now, that means grabbing the MJPEG stream rather than the H264 one. No skin off my nose, though I guess I'd prefer a more modern workflow. So anyway, here's what actually works, outputting an MJPEG video file and a real-time "viewfinder". Not perfectly elegant, but very workable. Thanks for all your help!

gst-launch-1.0 -e v4l2src device=/dev/video1 ! videorate ! 'image/jpeg, width=1280, height=720, framerate=24/1' ! tee name=myvid \
      ! queue ! decodebin ! xvimagesink sync=false \
      myvid. ! queue ! mux.video_0 \
      alsasrc device="plughw:2,0" ! "audio/x-raw,rate=44100,channels=1,depth=24" ! audioconvert ! lamemp3enc ! queue ! mux.audio_0 \
      avimux name=mux ! filesink location=/tmp/out.avi
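
If the H.264 path ever starts cooperating through v4l2src, I'd expect the pipeline to look roughly like this untested sketch (assuming the camera exposes video/x-h264 caps via V4L2; avdec_h264 decodes the viewfinder branch, and avenc_aac, which already worked above, handles the audio):

# untested H.264 variant: take the camera's H.264 stream directly,
# decode one tee branch for display, mux the other branch as-is
gst-launch-1.0 -e v4l2src device=/dev/video1 \
      ! 'video/x-h264, width=1280, height=720, framerate=24/1' ! h264parse ! tee name=myvid \
      ! queue ! avdec_h264 ! xvimagesink sync=false \
      myvid. ! queue ! mux.video_0 \
      alsasrc device="plughw:2,0" ! audioconvert ! avenc_aac ! queue ! mux.audio_0 \
      mp4mux name=mux ! filesink location=/tmp/out.mp4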

1 Answer

#1

GStreamer is often a bit dumb when it comes to automatically combining multiple different streams (e.g. using mp4mux). In this case you should usually send a stream not just to a named element but to a specific pad, using the elementname.padname notation; the element. notation is really just shorthand for "any" pad on the named element.
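
As a minimal, self-contained illustration of the padname notation (test sources standing in for the real devices; mp4mux's request pads are named video_0, audio_0, and so on):

# mux.video_0 / mux.audio_0 address mp4mux's request pads explicitly,
# instead of letting gst-launch guess which pad each stream should use
gst-launch-1.0 -e \
    videotestsrc num-buffers=300 ! x264enc ! h264parse ! queue ! mux.video_0 \
    audiotestsrc num-buffers=300 ! audioconvert ! avenc_aac ! queue ! mux.audio_0 \
    mp4mux name=mux ! filesink location=/tmp/test.mp4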

Also, it seems that you forgot the h264parse before the mp4mux (if you follow the path the video takes, it really boils down to filesrc ! queue ! mp4mux, which is probably a bit rough).

While I cannot test the pipeline, I guess something like the following should do the trick:

 /usr/local/bin/capture -d /dev/video1 -c 100000 -o | \
   gst-launch-1.0 -e filesrc location=/dev/fd/0 ! h264parse ! tee name=myvid \
     ! queue ! decodebin ! xvimagesink sync=false  \
     myvid. ! queue  ! mp4mux ! filesink location=/tmp/out.mp4

With audio it's probably more complicated; try something like this (obviously assuming that you can read audio using the alsasrc device="plughw:2,0" element):

 /usr/local/bin/capture -d /dev/video1 -c 100000 -o | \
   gst-launch-1.0 -e filesrc location=/dev/fd/0 ! h264parse ! tee name=myvid \
     ! queue ! decodebin ! xvimagesink sync=false  \
     myvid. ! queue ! mux.video_0 \
     alsasrc device="plughw:2,0" ! "audio/x-raw,rate=44100,channels=1,depth=24"" ! audioconvert ! queue ! mux.audio_0 \
     mp4mux name=mux ! filesink location=/tmp/out.mp4
