h264 lossless coding
Is it possible to do completely lossless encoding in h264? By lossless, I mean that if I feed it a series of frames and encode them, and then if I extract all the frames from the encoded video, I will get the exact same frames as in the input, pixel by pixel, frame by frame. Is that actually possible? Take this example:
I generate a bunch of frames, then I encode the image sequence to an uncompressed AVI (with something like VirtualDub), and I then apply lossless h264 (the help files claim that setting --qp 0 gives lossless compression, but I am not sure if that means there is no loss at any point of the process or just that the quantization is lossless). I can then extract the frames from the resulting h264 video with something like mplayer.
I tried with Handbrake first, but it turns out it doesn't support lossless encoding. I tried x264 but it crashes. It may be because my source AVI file is in RGB colorspace instead of YV12. I don't know how to feed a series of YV12 bitmaps and in what format to x264 anyway, so I cannot even try.
In summary, what I want to know is whether there is a way to go from
Series of lossless bitmaps (in any colorspace) -> some transformation -> h264 encode -> h264 decode -> some transformation -> the original series of lossless bitmaps
Is there a way to achieve this?
EDIT: There is a VERY valid point about lossless H264 not making too much sense. I am well aware that there is no way I could tell (with just my eyes) the difference between an uncompressed clip and one compressed at a high rate in H264, but I don't think that makes it useless. For example, it may be useful for storing video for editing without taking huge amounts of space, without losing quality, and without spending too much encoding time every time the file is saved.
UPDATE 2: Now x264 doesn't crash. I can use as sources either avisynth or lossless YV12 Lagarith (to avoid the colorspace compression warning). However, even with --qp 0 and an RGB or YV12 source I still get some differences, minimal but present. This is troubling, because all the information I have found on lossless predictive coding (--qp 0) claims that the whole encoding should be lossless, but I am unable to verify this.
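One way to verify this would be to hash the decoded pixels of the source and of the encode and compare them; a minimal sketch, assuming a 4:2:0 source in y4m form named input.y4m (the file names are placeholders):

# encode losslessly with x264's predictive lossless mode
x264 --qp 0 --preset medium -o lossless.264 input.y4m
# hash the decoded frames of both; identical hashes mean a bit-exact round trip
ffmpeg -loglevel error -i input.y4m -f md5 -
ffmpeg -loglevel error -i lossless.264 -f md5 -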
I am going to add a late answer to this one after spending all day trying to figure out how to get YUV 4:4:4 pixels into x264. While x264 does accept raw 4:2:0 pixels in a file, it is really quite difficult getting 4:4:4 pixels passed in. With recent versions of ffmpeg, the following works for completely lossless encoding and extraction to verify the encoding.
First, write your raw YUV 4:4:4 pixels to a file in a planar format. The planes are a set of Y bytes, then the U and V bytes, where U and V use 128 as the zero value. Now, invoke ffmpeg, pass in the size of the raw YUV frames, and use the "yuv444p" pixel format twice, like so:
ffmpeg -y -s 480x480 -pix_fmt yuv444p -i Tree480.yuv \
-c:v libx264 -pix_fmt yuv444p -profile:v high444 -crf 0 \
-preset:v slow \
Tree480_lossless.m4v
Once the encoding to h264 and wrapping as a Quicktime file is done, one can extract the exact same bytes like so:
ffmpeg -y -i Tree480_lossless.m4v -vcodec rawvideo -pix_fmt yuv444p \
Tree480_m4v_decoded.yuv
Finally, verify the two binary files with diff:
$ diff -s Tree480.yuv Tree480_m4v_decoded.yuv
Files Tree480.yuv and Tree480_m4v_decoded.yuv are identical
Just keep in mind that you need to write the YUV bytes to a file yourself; do not let ffmpeg do any conversion of the YUV values!
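If you just need a raw 4:4:4 test input to exercise this round trip, one option is to synthesize one with ffmpeg's testsrc2 generator; a sketch, with placeholder names matching the commands above:

# generate 2 seconds of 480x480 synthetic video as raw planar yuv444p frames
ffmpeg -f lavfi -i testsrc2=size=480x480:rate=30:duration=2 \
       -pix_fmt yuv444p -f rawvideo Tree480.yuv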
If x264 does lossless encoding but doesn't like your input format, then your best bet is to use ffmpeg
to deal with the input file. Try starting with something like
ffmpeg -i input.avi -f yuv4mpegpipe -pix_fmt yuv420p -y /dev/stdout \
| x264 $OPTIONS -o output.264 /dev/stdin
and adding options from there. YUV4MPEG is a lossless uncompressed format suitable for piping between different video tools; ffmpeg knows how to write it and x264 knows how to read it.
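As a concrete example, with $OPTIONS expanded to one possible lossless setting, the pipeline might look like this (note that forcing yuv420p already discards chroma resolution if the source is RGB or 4:2:2/4:4:4):

ffmpeg -i input.avi -f yuv4mpegpipe -pix_fmt yuv420p -y /dev/stdout \
| x264 --demuxer y4m --qp 0 --preset medium -o output.264 /dev/stdin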
FFmpeg has a "lossless" mode for x264; see the FFmpeg and x264 Encoding Guide, § Lossless H.264. In essence it's -qp 0.
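Following that guide, a minimal lossless invocation looks roughly like this (container and preset are your choice):

# -qp 0 (or -crf 0) switches libx264 into its lossless mode
ffmpeg -i input.avi -c:v libx264 -preset veryslow -qp 0 output.mkv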
I don't know your requirements for compression and decompression, but a general-purpose archiver (like 7-zip with LZMA2) should be able to compress about as well as, or in some cases significantly better than, a lossless video codec. And it is much simpler and safer than a whole video processing chain. The downside is the much slower speed, and that you have to extract the archive before viewing it. But for images, I think you should try it.
There are also lossless image formats, like .png.
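A rough sketch of the archiver route, assuming the frames are stored as individual images in a frames/ directory:

# pack the image sequence with LZMA2 at maximum compression
7z a -t7z -m0=lzma2 -mx=9 frames.7z frames/
# ...and unpack it again before viewing or editing
7z x frames.7z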
For encoding lossless RGB with x264, you should use the command-line version of x264 (you can't trust GUIs in this edge case, they will probably mess up), r2020 or newer, with something like this:
x264 --qp 0 --preset fast --input-csp rgb --output-csp rgb --colormatrix GBR --output "the_lossless_output.mkv" "someinput.avs"
Any losses/differences between the input and output should come from colour space conversion (either before encoding or at playback), wrong settings, or some header/metadata that was lost. x264 doesn't support RGBA, but RGB is OK. YUV 4:4:4 compression is more efficient, but you will lose some data in the colour space conversion since your input is RGB. YV12/i420 is much smaller, and by far the most common colour space in video, but you get less chroma resolution.
More information on x264 settings: http://mewiki.project357.com/wiki/X264_Settings
Also, avoid lagarith. It uses x87 floating point... and there are better alternatives. http://codecs.multimedia.cx/?p=303 http://mod16.org/hurfdurf/?p=142
EDIT: I don't know why I was downvoted. Please leave a comment when you do that.
I agree that sometimes the loss in data is acceptable, but it's not simply a matter of how it looks immediately after compression.
Even a visually imperceptible loss of color data can degrade footage such that color correction, greenscreen keying, tracking, and other post tasks become more difficult or impossible, which add expense to a production.
It really depends when and how you compress in the pipeline, but ultimately it makes sense to archive the original quality, as storage is usually far less expensive than reshooting.
To generate lossless H.264 with HandBrake GUI, set Video Codec: H.264, Constant Quality, RF: 0, H.264 Profile: auto. Though this file is not supported natively by Apple, it can be re-encoded as near-lossless for playback.
HandBrake GUI's Activity Window:
H.264 Profile: auto; Encoding at constant RF 0.000000...profile High 4:4:4 Predictive, level 3.0, 4:2:0 8-bit
H.264 Profile: high; Encoding at constant RF 0.000000...lossless requires high444 profile, disabling...profile High, level 3.0
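For scripted use, a rough HandBrakeCLI equivalent of those GUI settings is something like the following; the option names are a sketch based on recent HandBrake versions and may differ in yours:

# constant quality RF 0 with the x264 encoder, encoder profile left on auto
HandBrakeCLI -i input.mov -o lossless.mp4 -e x264 -q 0 --encoder-profile auto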
If you can't get lossless compression using an H.264 encoder and decoder, perhaps you could look into two alternatives:
(1) Rather than passing all the data in h.264 format, some people are experimenting with transmitting some of the data with a residual "side channel":
- (h.264 file) -> h264 decode -> some transformation -> a lossy approximation of the original series of bitmaps
- (compressed residual file) --> decoder -> a series of lossless residual bitmaps
- For each pixel in each bitmap, approximate_pixel + residual_pixel = a pixel bit-for-bit equal to the original pixel.
(2) Use Dirac video compression format in "lossless" mode.
Use FFmpeg with PowerShell. Type ffmpeg -h encoder=libx264rgb
You can see Supported pixel formats: bgr0 bgr24 rgb24
When you encode RGB to YUV or vice versa, you always lose quality.
But if you use -pix_fmt yuv444p -profile:v high444, your losses are minimal.
And if you use the ffmpeg encoder libx264rgb with the rgb24 pixel format, you don't have any loss of quality.
A lot of applications (for example DaVinci Resolve) cannot read the rgb24 pixel format.
I recommend using:
ffmpeg -framerate [your fps] -i ["your sequence of rgb image.png"] -c:v libx264rgb -qp 0 -pix_fmt rgb24 -profile:v high444 -preset veryslow -level 6.2 "your_video.mov"
Unfortunately, I don't know how to create the image sequence input, but it is possible in FFmpeg.
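For the sequence input, ffmpeg's image2 demuxer reads numbered files via a printf-style pattern; a sketch of both directions, with example file names:

# PNG sequence -> lossless RGB H.264
ffmpeg -framerate 25 -i frame_%04d.png -c:v libx264rgb -qp 0 -pix_fmt rgb24 out.mov
# ...and back to a PNG sequence to check the round trip
ffmpeg -i out.mov decoded_%04d.png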