I'm trying to mux H264 video created on Android with MediaCodec using FFmpeg, and send it to YouTube with FFmpeg's RTMP support. I've created two pipes, and I write to them from Java (Android) through WritableByteChannels. I can send to one pipe just fine (accepting null audio), like this:

./ffmpeg -f lavfi -i aevalsrc=0 -i "files/camera-test.h264" -acodec aac -vcodec copy -bufsize 512k -f flv "rtmp://a.rtmp.youtube.com/live2/XXXX"

and the YouTube stream runs perfectly (but I have no audio). With two pipes, this is my command:
./ffmpeg \
-i "files/camera-test.h264" \
-i "files/audio-test.aac" \
-vcodec copy \
-acodec copy \
-map 0:v:0 -map 1:a:0 \
-f flv "rtmp://a.rtmp.youtube.com/live2/XXXX"

The pipes were created with mkfifo and are opened from Java like this:

pipeWriterVideo = Channels.newChannel(new FileOutputStream(outputFileVideo.toString()));

The order of execution (currently, in my testing phase) is: create the files, start ffmpeg (through an adb shell), then start recording, which opens the channels. ffmpeg immediately opens the h264 stream and then waits, since it is reading from a pipe, and the first channel opened (the video one) works successfully. When I try to open the audio channel the same way, it fails, because ffmpeg has not actually started reading from that pipe yet. I can open a second terminal window and cat the audio file, and my app spits out what I hope is encoded AAC, but ffmpeg fails, usually just sitting there waiting. Here is the verbose output:
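The blocking behavior described above can be reproduced with a plain FIFO and no ffmpeg at all; this is a minimal sketch (paths are hypothetical), showing that opening a FIFO for writing blocks until something opens the read end — which is why the FileOutputStream constructor never returns until ffmpeg reads:

```shell
# Minimal sketch: open-for-write on a FIFO blocks until a reader appears.
dir=$(mktemp -d)
mkfifo "$dir/audio.fifo"

# Background writer: parks inside open(2) until a reader opens the FIFO.
printf 'aac-bytes' > "$dir/audio.fifo" &
writer=$!
sleep 1
# The writer is still alive, blocked in open(), because nobody is reading yet.
kill -0 "$writer" && echo "writer is blocked waiting for a reader"

# Attaching a reader unblocks the writer and drains the data.
cat "$dir/audio.fifo"
wait "$writer"
rm -r "$dir"
```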
ffmpeg version N-78385-g855d9d2 Copyright (c) 2000-2016 the FFmpeg
developers
built with gcc 4.8 (GCC)
configuration: --prefix=/home/dev/svn/android-ffmpeg-with-rtmp/src/ffmpeg/android/arm
--enable-shared --disable-static --disable-doc --disable-ffplay
--disable-ffprobe --disable-ffserver --disable-symver
--cross-prefix=/home/dev/dev/android-ndk-r10e/toolchains/arm-linux-androideabi-4.8/prebuilt/linux-x86_64/bin/arm-linux-androideabi-
--target-os=linux --arch=arm --enable-cross-compile
--enable-librtmp --enable-pic --enable-decoder=h264
--sysroot=/home/dev/dev/android-ndk-r10e/platforms/android-19/arch-arm
--extra-cflags='-Os -fpic -marm'
--extra-ldflags='-L/home/dev/svn/android-ffmpeg-with-rtmp/src/openssl-android/libs/armeabi '
--extra-ldexeflags=-pie --pkg-config=/usr/bin/pkg-config
libavutil 55. 17.100 / 55. 17.100
libavcodec 57. 24.102 / 57. 24.102
libavformat 57. 25.100 / 57. 25.100
libavdevice 57. 0.101 / 57. 0.101
libavfilter 6. 31.100 / 6. 31.100
libswscale 4. 0.100 / 4. 0.100
libswresample 2. 0.101 / 2. 0.101
matched as AVOption 'debug' with argument 'verbose'.
Trailing options were found on the commandline.
Finished splitting the commandline.
Parsing a group of options: global .
Applying option async (audio sync method) with argument 1.
Successfully parsed a group of options.
Parsing a group of options: input file files/camera-test.h264.
Successfully parsed a group of options.
Opening an input file: files/camera-test.h264.
[file @ 0xb503b100] Setting default whitelist 'file'

I figure that if I can just get ffmpeg to start listening on both pipes, the rest will work itself out!
Sorry for taking up so much of your time.
EDIT: I've made progress by separating the audio pipe connection from the encoding, but now, once the video stream has been passed through, it errors out on the audio. I started a separate thread to create the WritableByteChannel for audio, and it never gets past creating the FileOutputStream.
matched as AVOption 'debug' with argument 'verbose'.
Trailing options were found on the commandline.
Finished splitting the commandline.
Parsing a group of options: global .
Successfully parsed a group of options.
Parsing a group of options: input file files/camera-test.h264.
Successfully parsed a group of options.
Opening an input file: files/camera-test.h264.
[file @ 0xb503b100] Setting default whitelist 'file'
[h264 @ 0xb503c400] Format h264 probed with size=2048 and score=51
[h264 @ 0xb503c400] Before avformat_find_stream_info() pos: 0 bytes read:15719 seeks:0
[h264 @ 0xb5027400] Current profile doesn't provide more RBSP data in PPS, skipping
[h264 @ 0xb503c400] max_analyze_duration 5000000 reached at 5000000 microseconds st:0
[h264 @ 0xb503c400] After avformat_find_stream_info() pos: 545242 bytes read:546928 seeks:0 frames:127
Input #0, h264, from 'files/camera-test.h264':
Duration: N/A, bitrate: N/A
Stream #0:0, 127, 1/1200000: Video: h264 (Baseline), 1 reference frame, yuv420p(left), 854x480 (864x480), 1/50, 25 fps, 25 tbr, 1200k tbn, 50 tbc
Successfully opened the file.
Parsing a group of options: input file files/audio-test.aac.
Applying option vcodec (force video codec ('copy' to copy stream)) with argument copy.
Successfully parsed a group of options.
Opening an input file: files/audio-test.aac.
Unknown decoder 'copy'
[AVIOContext @ 0xb5054020] Statistics: 546928 bytes read, 0 seeks

Here is where I try to open the audio pipe:
new Thread() {
    public void run() {
        Log.d("Audio", "pre thread");
        FileOutputStream fs = null;
        try {
            // On a FIFO this constructor blocks in open() until a reader
            // (ffmpeg) opens the other end, so this thread never advances.
            fs = new FileOutputStream("/data/data/android.com.android.grafika/files/audio-test.aac");
        } catch (FileNotFoundException e) {
            e.printStackTrace();
        }
        Log.d("Audio", "made fileoutputstream"); // never hits here
        mVideoEncoder.pipeWriterAudio = Channels.newChannel(fs);
        Log.d("Audio", "made it past opening audio pipe");
    }
}.start();

Thanks.
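One more thing visible in the verbose output above, separate from the pipe problem: the line "Unknown decoder 'copy'" means -vcodec copy was parsed as an option for the second input file rather than for the output. In ffmpeg, per-file options apply to the next file that follows them on the command line, and "copy" is not a valid decoder. A sketch of an ordering that keeps the codec options attached to the output (same paths and URL as above, untested against a live stream):

```
./ffmpeg \
    -i "files/camera-test.h264" \
    -i "files/audio-test.aac" \
    -map 0:v:0 -map 1:a:0 \
    -c:v copy -c:a copy \
    -f flv "rtmp://a.rtmp.youtube.com/live2/XXXX"
```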
Posted on 2016-02-18 16:11:35
Your explanation isn't very clear. I can tell you're trying to describe exactly what you're doing, but it doesn't quite come across.
First: can you describe the actual problem? I had to read halfway into your post, into an eight-line paragraph, before it started to look like you were describing a hang. Is that the issue? You really want to state that up front.
Second: how do you get the data into the FIFOs? That matters. Your post is completely unclear on this; you seem to suggest that ffmpeg reads the entire video file and then moves on to the audio. Is that right? Or are both streams fed to ffmpeg simultaneously?
Finally: if ffmpeg hangs, it is probably because one of your input pipes is blocking (you push data into FIFO-1 and its buffer is full, while ffmpeg wants data from FIFO-2, whose buffer is empty). Both FIFOs need to be supplied with data independently, at all times.
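That constraint can be demonstrated with two plain FIFOs and a single consumer standing in for ffmpeg — a minimal sketch with hypothetical paths:

```shell
# Two FIFOs, each fed by its own independent writer, one consumer reading both.
dir=$(mktemp -d)
mkfifo "$dir/video.fifo" "$dir/audio.fifo"

# Each FIFO gets its own feeder process. A single feeder that pushed the whole
# video first would stall as soon as the consumer asked for audio instead.
printf 'VIDEO' > "$dir/video.fifo" &
printf 'AUDIO' > "$dir/audio.fifo" &

# The consumer (here just cat) can now read from either pipe in any order.
cat "$dir/audio.fifo"
echo
cat "$dir/video.fifo"
echo
wait
rm -r "$dir"
```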
https://stackoverflow.com/questions/35469445