Combining videos with ffmpeg using crossfades and plain cuts
I am writing a script to combine/slice arbitrary video files from S3 into one output video. So far, I am doing this by first trimming the videos to their proper length with

ffmpeg -i input-X.mp4 -ss start -t duration slice-X.mp4

and then recombining the resultant slices with the ffmpeg concat demuxer. I want to be able to crossfade between some videos and cut between others, but concat does not support transitions. What is the best way to combine videos with crossfades and cuts on the Linux command line? Is ffmpeg the best tool for the job?

My question is similar to "How do you create a crossfade transition between multiple videos in FFMPEG?", but I do not necessarily need to use ffmpeg, and I want to fade between some slices while cutting between others.

Tags: linux, video, scripting, ffmpeg, video-editing
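For context, the transition-less recombination step can be sketched with the concat demuxer. Since this is driven from a script, the sketch below generates the concat list and prints the ffmpeg invocation; the slice file names are assumptions for illustration.

```shell
# Build a concat list of the trimmed slices (file names assumed).
cat > list.txt <<'EOF'
file 'slice-1.mp4'
file 'slice-2.mp4'
file 'slice-3.mp4'
EOF

# The concat demuxer joins the slices without re-encoding, but it can only
# produce hard cuts -- it has no notion of transitions.
echo "ffmpeg -f concat -safe 0 -i list.txt -c copy combined.mp4"
```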
asked Mar 7 '16 at 19:45 by Zach
Below is the single-step command template, assuming five slices.

ffmpeg -i input.mp4 -i input.mp4 -i input.mp4 -i input.mp4 -i input.mp4 \
  -filter_complex \
  "[0:v]trim=0.5:4.5,setpts=PTS-STARTPTS[1v]; \
   [1:v]trim=12:17,setpts=PTS-STARTPTS+(3/TB),format=yuva420p,fade=in:st=3:d=1:alpha=1[2v]; \
   [2:v]trim=34.1:36,setpts=PTS-STARTPTS+(7/TB),format=yuva420p,fade=in:st=7:d=1:alpha=1[3v]; \
   [3:v]trim=21:25,setpts=PTS-STARTPTS[4v]; \
   [4:v]trim=27:31,setpts=PTS-STARTPTS+(3/TB),format=yuva420p,fade=in:st=3:d=1:alpha=1[5v]; \
   [0:a]atrim=0.5:4.5,asetpts=PTS-STARTPTS[1a]; \
   [1:a]atrim=12:17,asetpts=PTS-STARTPTS[2a]; \
   [2:a]atrim=34.1:36,asetpts=PTS-STARTPTS[3a]; \
   [3:a]atrim=21:25,asetpts=PTS-STARTPTS[4a]; \
   [4:a]atrim=27:31,asetpts=PTS-STARTPTS[5a]; \
   [1v][2v]overlay,format=yuv420p[12v]; \
   [12v][3v]overlay,format=yuv420p[123v]; \
   [4v][5v]overlay,format=yuv420p[45v]; \
   [1a][2a]acrossfade=d=1[12a]; \
   [12a][3a]acrossfade=d=1[123a]; \
   [4a][5a]acrossfade=d=1[45a]; \
   [123v][123a][45v][45a]concat=n=2:v=1:a=1[v][a]" \
  -map "[v]" -map "[a]" SingleStepOutput.mp4
I have inputted the video multiple times, once for each slice, because using a single input pad (even with asplit) leads to buffer overflows. The setpts/asetpts filters are used because trim and atrim carry over the original timestamps. For the slices which have to fade in, setpts also adds an offset: the point on the output timeline where the preceding slice ends, minus the crossfade duration. The format=yuva420p conversion is needed to create an alpha channel, whose value is then actually modulated by the fade filter.
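As a sanity check on those offsets, the values in the command above can be derived mechanically. The slice durations come from the trim ranges (4 s, 5 s, and 1.9 s for the first segment) and the crossfade is 1 s; this is a sketch under those assumptions.

```shell
# offset(n) = point where slice n-1 ends on the output timeline,
# minus the crossfade duration
xfade=1
d1=4                   # slice 1: trim=0.5:4.5 -> 4 s long
off2=$((d1 - xfade))   # slice 2 starts at 3 -> setpts ...+(3/TB), fade st=3
end2=$((off2 + 5))     # slice 2: trim=12:17 -> ends at 8
off3=$((end2 - xfade)) # slice 3 starts at 7 -> setpts ...+(7/TB), fade st=7
echo "$off2 $off3"     # prints "3 7"
```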
answered Mar 9 '16 at 12:44 by Mulvya
I ended up doing this by iterating over the slices and appending each one to a temporary output file:

  [output] <-- copy slice 1
  [output] <-- cut slice 2 onto output
  [output][slice 2] <-- fade slice 3 onto output
  [output][slice 2][crossfade][slice 3] <-- fade slice 4 onto output, etc.
So it's n ffmpeg invocations to trim the input videos to the correct length, and then n-1 ffmpeg invocations to concatenate them all with the appropriate transitions.
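The n + (n-1) command structure can be sketched as a shell loop. The file names and trim values below are assumptions, and each echoed line stands in for one real ffmpeg invocation (the combine step would use concat for a cut, or the overlay/fade/acrossfade pattern from the other answer for a crossfade).

```shell
# Stage 1: n trim commands, one per slice (start/duration values assumed).
i=1
for spec in "0.5 4" "12 5" "34.1 1.9"; do
  set -- $spec   # $1 = start, $2 = duration
  echo "ffmpeg -i input-$i.mp4 -ss $1 -t $2 slice-$i.mp4"
  i=$((i + 1))
done

# Stage 2: n-1 combine steps, folding each next slice onto the running output.
echo "cp slice-1.mp4 output.mp4"
for i in 2 3; do
  echo "ffmpeg <combine output.mp4 with slice-$i.mp4> tmp.mp4 && mv tmp.mp4 output.mp4"
done
```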
answered Mar 8 '16 at 17:03 by Zach

Comments:
- You are re-encoding the video a lot. You could do this in one, albeit long, command. If you're interested, I'll write it up in the next couple of days. – Mulvya, Mar 8 '16 at 19:40
- @Mulvya I'd be interested to see that abomination, yeah. – Zach, Mar 8 '16 at 20:19
- Abomination posted. – Mulvya, Mar 9 '16 at 12:45