In my previous post, I demonstrated how FFmpeg can be used to pipe raw audio samples in and out of a simple C program to or from media files such as WAV files (based on something similar for Python I found on Zulko's blog). The same idea can be used to perform video processing, as shown in the program below. In this example I use two pipes, each connected to its own instance of FFmpeg. Basically, I read frames one at a time from the input pipe, invert the colour of every pixel, and then write the modified frames to the output pipe.

The input video I'm using is teapot.mp4, which I recorded on my phone. The modified video is saved to a second file, output.mp4. The video resolution is 1280×720, which I checked in advance using the ffprobe utility that comes with FFmpeg. The full code is below, but first let's see the original and modified videos. The original and modified MP4 video files can be downloaded here.

The program I wrote to convert the original video into the modified version is shown below.

```c
// Video processing example
// Written by Ted Burke - last updated 12-2-2017

#include <stdio.h>

int main()
{
    int x, y, count;

    // Video resolution (checked in advance using ffprobe)
    const int W = 1280;
    const int H = 720;

    // Buffer to store one frame of rgb24 video
    static unsigned char frame[720][1280][3];

    // Open an input pipe from ffmpeg and an output pipe to a second instance of ffmpeg
    FILE *pipein = popen("ffmpeg -i teapot.mp4 -f image2pipe -vcodec rawvideo -pix_fmt rgb24 -", "r");
    FILE *pipeout = popen("ffmpeg -y -f rawvideo -vcodec rawvideo -pix_fmt rgb24 -s 1280x720 -r 25 -i - -f mp4 -q:v 5 -an -vcodec mpeg4 output.mp4", "w");

    // Process video frames
    while (1)
    {
        // Read a frame from the input pipe into the buffer
        count = fread(frame, 1, H * W * 3, pipein);

        // If we didn't get a frame of video, we're probably at the end
        if (count != H * W * 3) break;

        // Invert the colour of every pixel
        for (y = 0; y < H; ++y)
            for (x = 0; x < W; ++x)
            {
                frame[y][x][0] = 255 - frame[y][x][0];
                frame[y][x][1] = 255 - frame[y][x][1];
                frame[y][x][2] = 255 - frame[y][x][2];
            }

        // Write this frame to the output pipe
        fwrite(frame, 1, H * W * 3, pipeout);
    }

    // Flush and close input and output pipes
    fflush(pipeout);
    pclose(pipein);
    pclose(pipeout);

    return 0;
}
```

To produce the combined video shown above, I used a second program. It writes 50 frames of the original title image, copies every frame of teapot.mp4, writes 50 frames of the modified title image, and then copies every frame of output.mp4, all into a single output pipe that encodes combined.mp4. The key steps are:

```c
// Open an output pipe that encodes everything it receives into combined.mp4
FILE *pipeout = popen("ffmpeg -y -f rawvideo -vcodec rawvideo -pix_fmt rgb24 -s 1280x720 -r 25 -i - -f mp4 -q:v 5 -an -vcodec mpeg4 combined.mp4", "w");

// Write first 50 frames using original video title image from title_original.png
FILE *pipein = popen("ffmpeg -i title_original.png -f image2pipe -vcodec rawvideo -pix_fmt rgb24 -", "r");

// Copy all frames from teapot.mp4 to output pipe
pipein = popen("ffmpeg -i teapot.mp4 -f image2pipe -vcodec rawvideo -pix_fmt rgb24 -", "r");

// Write next 50 frames using modified video title image from title_modified.png
pipein = popen("ffmpeg -i title_modified.png -f image2pipe -vcodec rawvideo -pix_fmt rgb24 -", "r");

// Copy all frames from output.mp4 to output pipe
pipein = popen("ffmpeg -i output.mp4 -f image2pipe -vcodec rawvideo -pix_fmt rgb24 -", "r");
```

The reading, duplicating and writing of frames between these popen calls follows the same fread/fwrite pattern as in the program above. Note: I used Inkscape to create the video title images. Click here to download the editable Inkscape SVG file.

Finally, a note on pixel formats: the sample C code outputs the pixels in rgb24 as it's currently written. If you write the data to the pipe in rgb24 but tell ffmpeg that it's in yuv420p format, the video will presumably get completely mangled, although you'll probably be just about able to make out something recognisable. I'm not sure why the yuv420p pixel format would work when rgb24 doesn't, but there must be something going on that I'm not aware of.