I'm reminded here that JPEG includes arithmetic encoding as part of the standard, but almost everyone uses Huffman because up until a couple years ago arithmetic encoding was patent-encumbered (the patents are expired now). Is anyone aware of a study like Mozilla's that considers JPEG-with-arithmetic-encoding? Or perhaps it does, and I failed to notice?
Most competing file formats seem to beat JPEG by only a slim margin, and what I've read on arithmetic encoding suggests it gives a ~5-10% gain, which would make that difference slimmer still, perhaps vanishing into the uncertainty of the usefulness of these quality benchmarks. Of course, there would be inertia to overcome to support it, as with a new format, but recompiling everyone's libjpeg is surely less work than adding support for whole new file formats. At the very least, it seems there might be a better effort/payoff ratio.
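The ~5-10% figure is plausible from first principles: Huffman assigns a whole number of bits to each symbol, while arithmetic coding can approach the entropy of the symbol distribution. A small sketch of that gap (the distribution here is made up, just skewed the way quantized DCT coefficients tend to be):

```python
import heapq
import math
from itertools import count

def huffman_lengths(freqs):
    """Code length in bits per symbol for a Huffman code over `freqs`."""
    tick = count()  # tie-breaker so dicts are never compared by heapq
    heap = [(f, next(tick), {sym: 0}) for sym, f in freqs.items()]
    heapq.heapify(heap)
    while len(heap) > 1:
        fa, _, a = heapq.heappop(heap)
        fb, _, b = heapq.heappop(heap)
        merged = {s: depth + 1 for s, depth in {**a, **b}.items()}
        heapq.heappush(heap, (fa + fb, next(tick), merged))
    return heap[0][2]

def avg_bits(freqs, lengths):
    total = sum(freqs.values())
    return sum(freqs[s] * lengths[s] for s in freqs) / total

def entropy(freqs):
    """Shannon entropy: the bound arithmetic coding can approach."""
    total = sum(freqs.values())
    return -sum(f / total * math.log2(f / total) for f in freqs.values())

# A skewed, made-up distribution (roughly zero-heavy like DCT coefficients):
freqs = {"zero": 70, "small": 20, "mid": 7, "large": 3}
h = entropy(freqs)                              # ~1.24 bits/symbol
avg = avg_bits(freqs, huffman_lengths(freqs))   # 1.40 bits/symbol
```

Here Huffman's integer-bit-length constraint costs about 11% over the entropy bound, which is in the same ballpark as the reported arithmetic-coding gains.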
If you're going to break backwards compatibility, it would be better to spend the pain on a modern format instead of reviving an arcane and previously unimplemented part of the old JPEG. As the article admits, we know how to do significantly better now, so a patent-free HEVC alternative would be the logical target.
JPEG is absolutely awesome and this is a valuable addition.
I was using the very first release of the source back in the stone age or so. We took passport photo images with a video camera at reasonably high resolution and then scaled them down and compressed with PCX to save on storage.
Quality after compression was absolutely terrible.
Then Tom Lane came along with libjpeg and suddenly the quality was better than what we could print!
Too late to edit: PCX is lossless. What we did was reduce the higher-frequency bits in the images before using PCX in order to achieve a reasonable compression ratio, in effect making a lossy wrapper around PCX. It was a pretty crude way of making this work, and JPEG was so much better that it is hard to believe we managed to sell our customers on the original version. I should see if I can dig up some of those old images; they're interesting historically.
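For the curious, the trick described above can be sketched in a few lines (this is a reconstruction of the general idea, not the original code): masking off the low-order bits of each pixel leaves long runs of identical values, which a PCX-style RLE pass can then collapse.

```python
def quantize_low_bits(pixels, keep_bits=4):
    """Drop the low-order bits of each 8-bit pixel. Nearby pixels that
    differed only in the low bits become identical, creating long runs."""
    mask = 0xFF & ~((1 << (8 - keep_bits)) - 1)
    return [p & mask for p in pixels]

def rle_encode(pixels):
    """Minimal [count, value] run-length encoding of one scanline,
    with runs capped at 255 as in PCX-style codecs."""
    runs = []
    for p in pixels:
        if runs and runs[-1][1] == p and runs[-1][0] < 255:
            runs[-1][0] += 1
        else:
            runs.append([1, p])
    return runs
```

On a smooth 0-63 gradient, raw RLE finds 64 one-pixel runs, while the quantized version collapses to just 4 runs of 16 pixels each: lossy in exactly the crude way described above, but very compressible.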
Thank you for your work. As a heads-up, I don't see any reference to the -quant-table option mentioned in the linked blog post in cjpeg.1 on GitHub.
Also, I have a little script that uses ImageMagick to watermark an image and then create a smaller, resized version for the web. Does mozjpeg's cjpeg support 48bpp PNG, or is there some better way to avoid losing detail to rounding when going from ImageMagick's 24-bit RGB output to YCbCr?
We have no plans to use mozjpeg in Firefox; it will continue to use libjpeg-turbo. The decoder that comes with mozjpeg is unmodified from libjpeg-turbo; mozjpeg is focused on compression for those serving up images.
Stared at both side by side and really struggled to tell the difference. Great job!
Sorry, WebP is great, but I just don't see it getting adopted unless all browsers get on board, along with the big software packages. JPEG is practically a household name; photographers, artists, Instagrammers all know what it is, and short of a mild revolution I just don't see it happening.
For nicer looking images, scan as grayscale and then use Photoshop's Image -> Adjustments -> Curves to turn everything almost-white to white and everything almost-black to black:
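Outside Photoshop, that Curves adjustment amounts to a simple per-pixel levels mapping. A minimal sketch, where the 30/225 thresholds are arbitrary starting points you'd tune per scan:

```python
def levels(value, black_point=30, white_point=225):
    """Map an 8-bit grayscale value so that almost-black clips to 0,
    almost-white clips to 255, and the midtones stretch linearly in
    between (a crude version of the Curves adjustment described above)."""
    if value <= black_point:
        return 0
    if value >= white_point:
        return 255
    return round((value - black_point) * 255 / (white_point - black_point))
```

Applied to every pixel of a grayscale scan, this cleans up paper texture and faint ink bleed the same way the Curves tweak does.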
1-bit images should generally be saved using JBIG2 compression (as far as I know the best lossless compression scheme for 1-bit images), rather than PNG. In some cases CCITT Group 4 compression might be used, for compatibility reasons, e.g. in a TIFF wrapper.
JBIG2 might be the best compression for bitmap images, but you probably never want to use its lossy mode for text images [0].
The fact that a single difference in a configuration bit (lossy/non-lossy) can introduce subtle and easy-to-miss errors (to the point that this went unnoticed by Xerox QA and thousands of their customers!) is an indication to me that while JBIG2 might be superior in terms of the compression it offers, it's not a solution I would use without very serious consideration and deliberation.
Just to make this clearer: if you bork the compression setting on JPEG, you get smeared images. If you bork the lossy/non-lossy flag on JBIG2, you get images that look great but may have a random letter or digit swapped for another.
Thank you, and great tips! The original image was something I grabbed from the web, part of some research. Either way, I really appreciate the tips and the info.
Would adding dithering support to the encoder help with gradient smoothness? I know it helps a lot with uncompressed formats, in addition to shrinking file size (though that may not be the case with JPEG compression). You can toy with the params [1] and see that even dropping the target palette color count by >50% still gets good results with a dithering kernel selected. Repo here [2], btw.
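For reference, the classic dithering kernel (Floyd-Steinberg error diffusion) is only a few lines. Here's a sketch that dithers 8-bit grayscale down to pure black and white; real tools like the one linked above offer multiple kernels and larger palettes, but the mechanism is the same:

```python
def floyd_steinberg(gray, width, height):
    """Dither an 8-bit grayscale image (flat list, row-major) to pure
    black/white, diffusing each pixel's quantization error onto its
    not-yet-processed neighbours with the classic 7/3/5/1 weights."""
    px = [float(v) for v in gray]
    for y in range(height):
        for x in range(width):
            i = y * width + x
            old = px[i]
            new = 255.0 if old >= 128 else 0.0
            px[i] = new
            err = old - new
            if x + 1 < width:
                px[i + 1] += err * 7 / 16
            if y + 1 < height:
                if x > 0:
                    px[i + width - 1] += err * 3 / 16
                px[i + width] += err * 5 / 16
                if x + 1 < width:
                    px[i + width + 1] += err * 1 / 16
    return [int(v) for v in px]
```

Even on a flat mid-gray input, the diffused error produces a mix of black and white pixels whose average approximates the original tone, which is exactly why gradients survive palette reduction so well.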
You could also just modulate the lambda used in the trellis quantization so that it is less aggressive in smooth blocks, and more aggressive in textured blocks. It's not as good as being able to change the quantizer, but you can get somewhere around half the benefits of real activity masking by changing lambda alone.
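A toy version of that idea, where the variance-based activity measure, the square-root scaling, and the constants are my own illustrative guesses rather than any real encoder's code:

```python
def block_variance(block):
    """Variance of one 8x8 block of pixel values: a cheap activity measure."""
    n = len(block)
    mean = sum(block) / n
    return sum((v - mean) ** 2 for v in block) / n

def masked_lambda(base_lambda, block, strength=0.5, floor=1.0):
    """Scale the trellis rate-distortion lambda by block activity:
    textured blocks (high variance) get a larger lambda, so the trellis
    drops more coefficients there; smooth blocks keep a gentler lambda.
    The scaling law and constants are illustrative, not mozjpeg's."""
    activity = max(block_variance(block), floor)
    return base_lambda * activity ** strength
```

A flat block leaves lambda at its base value, while a checkerboard block inflates it heavily, which is the "more aggressive in textured blocks" behaviour described above.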
Thanks, but unfortunately it's not that good of a comparison since the original is already heavily compressed. Could you do another with a less compressed original image (and, preferably, more color variation)?
So does that mean Daala compression can be used to produce a new image format once it's ready (similar to how WebP was derived from VP8)?
Based on my experience with BPG: compressing a 39-megapixel image takes 2s with libjpeg-turbo (the original is a raw TIFF, but already cached); the same image takes 8m30s with BPG. This is on an Ivy Bridge Xeon. I wanted to smash a few hundred thousand of these 39MP images for transport and backup storage, but that's unacceptable time-wise. How much faster would Daala be than HEVC?
Daala's approach to video compression differs from HEVC's in that it optimizes for the perceived quality of the image. So in theory it can be computationally lighter, because it can save effort on areas that affect perception less. But it's not there yet.
I'm not sure, though, how exactly that translates into still-image compression efficiency. For video they do plan to eventually beat HEVC on both quality and algorithmic delay.
Since it's mentioned in the article: Does anyone have experience with lossy png tools?
I'm currently working on a project that needs alpha channels. I've been optimizing the images with pngcrush, which helped (interestingly, images put out by Adobe products were already pretty optimized, but I'm generating thumbnails locally with sips, where pngcrush often saves 60+%).
Still, for photographic images, file size often remains multiple times larger than what I'd expect from a high-quality JPEG.
>Does anyone have experience with lossy png tools?
I do. Lossy PNGs work great for images without a lot of colors. I use pngquant and optipng heavily in my work to compress lots of PNG images with practically no visible quality loss.
For very colorful images, lossy (quantized & dithered) PNGs just don't work, though. They end up looking nasty, with larger file sizes than what high-quality JPEG gives you.
I use optipng and pngquant together myself... it's been a while since I've looked though... here's the contents of my "!Drag PNG here to Optimize (pngquant and optipng as png8).bat" file ... should be able to do similar in bash/zsh script.
@echo off
set batchdir=%~d0%~p0
:start
"%batchdir%pngquant.exe" --ext .q.png --force --verbose 256 %1
"%batchdir%optipng.exe" -force -o4 -out "%~dpn1.opt.png" "%~dpn1.q.png"
del "%~dpn1.q.png"
shift
if NOT "%~1"=="" goto start
rem pause
Ditto what Diaz said. The author of this article BTW also makes some rad lossy PNG tools. I took a 3MB website down to under 1MB recently, the majority of which were screenshots.
This is totally awesome. Nothing bothers me more than seeing that awkward noise around images I export from Photoshop. Someone mentioned this already, but I hope this finds its way into the apps I use.
On a totally unrelated note, Denny, the dude that dropped the first comment on that post, is not a stand-up guy.
Ladies and gentlemen, you are hereby urged NOT to download binaries from random links on the internet. This specific link may or may not be innocent (I have no idea), but how do you know it is not trojaned?
It's OK, the MD5 sum is 951b878016159eadd8bfca08c3670038. And as we all know from downloading software on the Internet, if a checksum is posted it must be safe.
Virus scanners are based on blacklists and some iffy heuristics. A clean result is not a guarantee of safety. Nearly any custom-built malware will slip by.
Windows is a huge pain for compiling software with dependencies, but I don't think MozJPEG has any. There are detailed build instructions for Windows included.
No offence intended, but you are, in this case, a stranger giving out candy.
Some strangers with candy have no sinister intentions, but because of the irreparable harm that those with sinister intentions will cause, kids should still avoid candy from all strangers.
(In some of the results "Police declined to tell", and some describe the same incident, but that's just the first page with the first search term that came to my mind).
I have a feeling that nobody would really bother with WebP for its compression, but does JPG/PNG have:
* Lossy compression with alpha channels.
* Efficient lossless compression of photo-like images.
* Efficient compression of photo-like and diagram-like images in the same format (and in the same image, e.g. screenshots containing photos).
* Good lossy compression of diagram-like images.
There's a draft for gracefully degrading JPEG eXTensions that adds all the features you want: http://www.jpeg.org/jpegxt/index.html (it works by encoding a classic JPEG plus a residual image hidden in JPEG metadata).
WebP is a bit of a hack: it has a JPEG-like algorithm for photos (VP8) and a custom PNG-like algorithm for lossless. Technically it's not much different from having JPEG and PNG and using the same filename extension for both.
JPEG 2000 and JPEG XR have truly scalable algorithms that can support both lossy and lossless.
I did. Last summer I converted all 35K images on my NSFW hobby site (check profile) to WebP, with no JPEG fallback or shabby JavaScript decoder (those don't work on very high-res images), and haven't looked back.
On my journey to 1000ms-to-glass with a site like mine, I'm going to go with the format that gives me dramatic size savings, thank you Google.
That said, I can see how it benefits Firefox users not to be able to render WebP... sigh.
It would be helpful if 4chan followed my lead by at least allowing users to post WebP with something like mod_pagespeed running.
> I'm going to go with the format that gives me dramatic size savings, thank you Google.
As far as I've seen, testing has shown that WebP is not dramatically better than JPEG, as long as you're using a clever encoder (like MozJPEG, which is what we're talking about). If you have evidence to the contrary, I'm sure the MozJPEG guys would appreciate a test-case!
> That said, I can see how it benefits Firefox users not to be able to render WebP... sigh.
Instead of spending energy on dubious WebP, Mozilla spends energy on improving JPEG (which benefits everybody now) and Daala (which will hopefully benefit everybody eventually). I think it's a pretty sensible trade-off.
An nginx redirect based on user agents to an apology page and a list of download links for WebP-friendly browsers. I used to include a link to a Firefox fork that supported WebP natively, but no one bothered.
I made a sort of Google+ companion to the site which I'd bump them onto but I still haven't gotten the hang of not getting banned.
Yes, but not when storage, bandwidth, money, a desire to deliver only the best user experience (or nothing) and pushing WebP are concerns.
By the way, when running an image-heavy site, it's remarkable how much bot/mass-downloader traffic (relative to humans) vanishes when you turn away Firefox user agents.