This saves us the trouble of maintaining our own implementation,
and instantly brings us to full WOFF2 feature parity with other
implementations.
Co-Authored-By: Andrew Kaster <akaster@serenityos.org>
No behavior change. No measurable performance difference either.
(I tried `hyperfine 'Build/lagom/bin/image --no-output foo.webp'`
for a few input images before and after this change, and I didn't
see a difference. I also tried moving both
Gfx::CanonicalCode::read_symbol() and
Compress::CanonicalCode::read_symbol() inline, and that didn't
help either.)
This is useful for finding the best matching color palette for an
existing bitmap. It can be used in PixelPaint, but also in encoders of
old image formats that only support indexed colors, e.g. GIF.
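To give a rough idea of one way this can work (a simple popularity
count; the actual LibGfx helper may use a different algorithm, and all
names and types below are illustrative):

```cpp
#include <algorithm>
#include <cstddef>
#include <cstdint>
#include <map>
#include <vector>

using ARGB32 = uint32_t;

// Returns up to `palette_size` of the most frequently used colors.
std::vector<ARGB32> most_frequent_colors(std::vector<ARGB32> const& pixels, size_t palette_size)
{
    // Count how often each color occurs.
    std::map<ARGB32, size_t> histogram;
    for (auto pixel : pixels)
        ++histogram[pixel];

    // Sort colors by decreasing frequency.
    std::vector<std::pair<ARGB32, size_t>> sorted(histogram.begin(), histogram.end());
    std::sort(sorted.begin(), sorted.end(),
        [](auto const& a, auto const& b) { return a.second > b.second; });

    std::vector<ARGB32> palette;
    for (auto const& entry : sorted) {
        if (palette.size() == palette_size)
            break;
        palette.push_back(entry.first);
    }
    return palette;
}
```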
* Matches how the loader is organized
* `compress_VP8L_image_data()` will grow longer when we add actual
compression
* Maybe someone wants to write a lossy compressor one day
No behavior change.
JPEG2000 is the last image format used in PDF filters that we
don't have a loader for. Let's change that.
This adds all the scaffolding, but no actual implementation yet.
This URL library ends up being a relatively fundamental base library of
the system, as LibCore depends on LibURL.
This change has two main benefits:
* Moving AK back more towards being an agnostic library that can
be used between the kernel and userspace. URL has never really fit
that description - and is not used in the kernel.
* URL _should_ depend on LibUnicode, as it needs punycode support.
However, it's not really possible to do this inside of AK as it can't
depend on any external library. This change brings us a little closer
to being able to do that, but unfortunately we aren't there quite
yet, as the code generators depend on LibCore.
.pam is a "portable arbitrary map" as documented at
https://netpbm.sourceforge.net/doc/pam.html
It's very similar to .pbm, .pgm, and .ppm, so this uses the
PortableImageMapLoader framework. The header is slightly different,
so this has a custom header parsing function.
Also, .pam only exists in binary form, so the ASCII form support
becomes optional.
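For reference, a .pam header is the magic "P7" followed by TOKEN VALUE
lines and a closing ENDHDR line (per the netpbm documentation above).
Here is a minimal parsing sketch using plain C++ streams; the names and
types are illustrative, not the loader's actual API:

```cpp
#include <istream>
#include <optional>
#include <sstream>
#include <string>

struct PAMHeader {
    int width = 0;
    int height = 0;
    int depth = 0;          // Number of channels per pixel.
    int max_value = 0;
    std::string tuple_type; // e.g. "RGB", "GRAYSCALE_ALPHA".
};

std::optional<PAMHeader> parse_pam_header(std::istream& in)
{
    std::string magic;
    if (!(in >> magic) || magic != "P7")
        return std::nullopt;

    PAMHeader header;
    std::string line;
    std::getline(in, line); // Skip the rest of the magic line.
    while (std::getline(in, line)) {
        if (line.empty() || line[0] == '#')
            continue; // Comments are allowed in the header.
        std::istringstream fields(line);
        std::string token;
        fields >> token;
        if (token == "ENDHDR")
            return header; // Binary pixel data follows immediately.
        if (token == "WIDTH")
            fields >> header.width;
        else if (token == "HEIGHT")
            fields >> header.height;
        else if (token == "DEPTH")
            fields >> header.depth;
        else if (token == "MAXVAL")
            fields >> header.max_value;
        else if (token == "TUPLTYPE")
            fields >> header.tuple_type;
        else
            return std::nullopt; // Unknown header token.
    }
    return std::nullopt; // Truncated header without ENDHDR.
}
```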
This is a simple container for 2d CMYK data.
This is a new class instead of a new format in Bitmap because:
* Almost all clients of Bitmap will want to deal with RGB data
* It keeps the ARGB32 typedef working
* Bitmap already does a lot of things
The idea is that a few select places will be able to use
CMYKBitmap and a color profile to do a better CMYK -> RGB conversion
than an image decoder could do, and then store the result in a Bitmap
for later display.
CMYKBitmap lacks some of Bitmap's features, such as shared
memory support, high-dpi scaling capability, etc.
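For context, this is the naive, profile-less CMYK -> RGB conversion
that a color profile is meant to improve on (illustrative types only,
not the actual CMYKBitmap API):

```cpp
#include <cstdint>

struct CMYK {
    uint8_t c, m, y, k;
};

struct RGB {
    uint8_t r, g, b;
};

// Naive conversion: each channel is attenuated by its ink coverage and
// by the key (black) component. A real conversion would go through an
// ICC profile instead.
RGB naive_cmyk_to_rgb(CMYK p)
{
    auto to_float = [](uint8_t v) { return v / 255.0f; };
    float k = to_float(p.k);
    RGB out;
    out.r = static_cast<uint8_t>(255.0f * (1.0f - to_float(p.c)) * (1.0f - k));
    out.g = static_cast<uint8_t>(255.0f * (1.0f - to_float(p.m)) * (1.0f - k));
    out.b = static_cast<uint8_t>(255.0f * (1.0f - to_float(p.y)) * (1.0f - k));
    return out;
}
```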
Before this change, we would only cache and reuse Gfx::ScaledFont
instances for downloaded CSS fonts.
By moving it into Gfx::VectorFont, we get caching for all vector fonts,
including local system TTFs etc.
This avoids a *lot* of style invalidations in LibWeb, since we now vend
the same Gfx::Font pointer for the same font when used repeatedly.
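A minimal sketch of the caching idea, with hypothetical names standing
in for Gfx::VectorFont / Gfx::ScaledFont: the vector font keeps a map
from requested size to the scaled font it already built, so repeated
lookups hand back the same pointer.

```cpp
#include <map>
#include <memory>

struct ScaledFont { /* rasterized glyphs for one size */ };

class VectorFont {
public:
    std::shared_ptr<ScaledFont> scaled_font(float point_size)
    {
        auto it = m_scaled_fonts.find(point_size);
        if (it != m_scaled_fonts.end())
            return it->second; // Same pointer as before => no invalidation.
        auto font = std::make_shared<ScaledFont>(/* rasterize at point_size */);
        m_scaled_fonts.emplace(point_size, font);
        return font;
    }

private:
    std::map<float, std::shared_ptr<ScaledFont>> m_scaled_fonts;
};
```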
This compression (tag Compression=2) is not very popular on its own, but
it is a base for implementing the CCITT3 2D and CCITT4 compressions.
As the format has no real benefits, it is quite hard to find an app that
will encode it for you. So I used the following program, which
calls `libtiff` directly:
```cpp
#include <algorithm>
#include <cstdint>
#include <iostream>
#include <tiffio.h>
#include <vector>

// An array containing 0s and 1s, of length width * height.
extern std::vector<uint8_t> array;

int main() {
    // From: https://stackoverflow.com/a/34257789
    TIFF* image = TIFFOpen("input.tif", "w");
    if (!image)
        return 1;

    int const width = 400;
    int const height = 300;

    TIFFSetField(image, TIFFTAG_IMAGEWIDTH, width);
    TIFFSetField(image, TIFFTAG_IMAGELENGTH, height);
    // 0 == PHOTOMETRIC_MINISWHITE, i.e. 0 means white.
    TIFFSetField(image, TIFFTAG_PHOTOMETRIC, 0);
    TIFFSetField(image, TIFFTAG_COMPRESSION, COMPRESSION_CCITTRLE);
    TIFFSetField(image, TIFFTAG_BITSPERSAMPLE, 1);
    TIFFSetField(image, TIFFTAG_SAMPLESPERPIXEL, 1);
    TIFFSetField(image, TIFFTAG_ROWSPERSTRIP, 1);

    std::vector<uint8_t> scan_line(width / 8 + 8, 0);
    for (int i = 0; i < height; i++) {
        std::fill(scan_line.begin(), scan_line.end(), 0);
        // Pack 8 pixels per byte, most significant bit first
        // (assumes width is a multiple of 8).
        for (int x = 0; x < width; ++x) {
            uint8_t eight_pixels = scan_line.at(x / 8);
            eight_pixels = eight_pixels << 1;
            // Invert, since 0 means white with this photometric
            // interpretation.
            eight_pixels |= !array.at(i * width + x);
            scan_line.at(x / 8) = eight_pixels;
        }
        int bytes = int(width / 8.0 + 0.5);
        if (TIFFWriteScanline(image, scan_line.data(), i, bytes) != 1)
            std::cerr << "Something went wrong\n";
    }
    TIFFClose(image);
}
```
According to the CSS font matching algorithm specification, it is
supposed to be executed for each glyph rather than for each text run,
as we currently do. This change partially implements this by having the
font matching algorithm produce a list of fonts against which each
glyph will be tested to find its suitable font.
Now, it becomes possible to have per-glyph fallback fonts: if the
needed glyph is not present in a font, we can check the subsequent
fonts in the list.
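A minimal sketch of that per-glyph fallback, with illustrative types
(the real code works on Gfx::Font and the matched font list from the
algorithm above):

```cpp
#include <set>
#include <vector>

struct Font {
    std::set<char32_t> glyphs; // Stand-in for a real cmap lookup.
    bool contains_glyph(char32_t code_point) const { return glyphs.count(code_point) > 0; }
};

// The matching algorithm produces a prioritized font list; each glyph
// then takes the first font in that list that actually contains it.
Font const* font_for_glyph(std::vector<Font const*> const& matched_fonts, char32_t code_point)
{
    for (auto const* font : matched_fonts) {
        if (font && font->contains_glyph(code_point))
            return font;
    }
    return nullptr; // No matched font has the glyph; use a last-resort fallback.
}
```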
We will need to use ColorSpace in TagTypes.h, and it can't include
Profile.h.
Also makes Profile.cpp a bit smaller.
No behavior change, pure code move.
This will allow us to generate code that handles and provides easy
access to metadata stored in TIFF's tags. The generator is a Python
script, and it outputs both TIFFMetadata.h and TIFFTagHandler.cpp files.
The generator will definitely need some updates to support all TIFF and
EXIF tags, but that will still be easier than writing everything
ourselves.
Some small modifications are needed in TIFFLoader.cpp to make it
compatible with the new `Metadata` class.
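To give an idea of the kind of code such a generator can emit (the
names, tag set, and value representation here are illustrative, not the
actual generated API):

```cpp
#include <cstdint>
#include <map>
#include <optional>
#include <variant>
#include <vector>

// Standard TIFF tag IDs.
enum class TIFFTag : uint16_t {
    ImageWidth = 256,
    ImageLength = 257,
    BitsPerSample = 258,
    Compression = 259,
};

using TagValue = std::variant<uint32_t, std::vector<uint32_t>>;

class Metadata {
public:
    void set(TIFFTag tag, TagValue value) { m_tags[tag] = std::move(value); }

    // Typed accessors, one per tag.
    std::optional<uint32_t> image_width() const { return scalar(TIFFTag::ImageWidth); }
    std::optional<uint32_t> image_length() const { return scalar(TIFFTag::ImageLength); }

private:
    std::optional<uint32_t> scalar(TIFFTag tag) const
    {
        auto it = m_tags.find(tag);
        if (it == m_tags.end() || !std::holds_alternative<uint32_t>(it->second))
            return std::nullopt;
        return std::get<uint32_t>(it->second);
    }

    std::map<TIFFTag, TagValue> m_tags;
};
```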
Before this change, we used Gfx::Bitmap to represent both decoded
images that are not going to be mutated and bitmaps corresponding
to canvases that could be mutated.
This change introduces a wrapper for bitmaps that are not going to be
mutated, so the painter can do caching: texture caching in the case of
the GPU painter, and potentially scaled-bitmap caching in the case of
the CPU painter.
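A minimal sketch of the wrapper idea, with hypothetical names: the
wrapper pins a bitmap that will never change, so a painter can attach
derived data (a GPU texture, a prescaled copy) keyed by its identity.

```cpp
#include <cstdint>
#include <memory>

struct Bitmap { /* pixel storage */ };

class ImmutableBitmap {
public:
    explicit ImmutableBitmap(std::shared_ptr<Bitmap const> bitmap)
        : m_bitmap(std::move(bitmap))
    {
    }

    Bitmap const& bitmap() const { return *m_bitmap; }

    // Stable identity for cache keys: since the pixels can't change, a
    // painter can safely cache a texture or scaled copy per instance.
    uintptr_t cache_key() const { return reinterpret_cast<uintptr_t>(m_bitmap.get()); }

private:
    std::shared_ptr<Bitmap const> m_bitmap;
};
```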
IFF was a generic container file format that was popular on the Amiga,
since it was the only file format supported by Deluxe Paint.
ILBM is an image format popular in the late eighties/nineties
that uses the IFF container.
This is a very first version of the decoder that only supports
byterun-compressed files with bpp <= 8.
Only the minimal chunks are decoded: CMAP, BODY, BMHD.
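For reference, the byterun scheme (ByteRun1, PackBits-style) is simple
enough to sketch here. This is a generic illustration, not the
decoder's actual code: a control byte n in 0..127 means "copy the next
n+1 bytes", n in -1..-127 means "repeat the next byte 1-n times", and
-128 is a no-op.

```cpp
#include <cstddef>
#include <cstdint>
#include <vector>

std::vector<uint8_t> byterun_decompress(std::vector<uint8_t> const& input)
{
    std::vector<uint8_t> output;
    size_t i = 0;
    while (i < input.size()) {
        int8_t n = static_cast<int8_t>(input[i++]);
        if (n >= 0) {
            // Literal run: copy the next n + 1 bytes.
            for (int j = 0; j <= n && i < input.size(); ++j)
                output.push_back(input[i++]);
        } else if (n != -128) {
            // Replicate run: repeat the next byte 1 - n times.
            if (i >= input.size())
                break;
            uint8_t value = input[i++];
            for (int j = 0; j < 1 - n; ++j)
                output.push_back(value);
        }
        // n == -128 is a no-op.
    }
    return output;
}
```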
I am planning to add support for the following variants:
- EHB (32 colours + the same 32 colours at half brightness)
- HAM6 / HAM8 (special modes that allowed displaying the whole Amiga
palette of 4096 / 262,144 colours)
- TrueColor (24bit)
Things that could be fun to do:
- Still images could be animated using color cycle information
Currently, the `isobmff` utility will only print the media file type
info from the FileTypeBox (major brand and compatible brands), as well
as the names and sizes of top-level boxes.
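For context, top-level boxes follow the standard ISOBMFF layout: a
32-bit big-endian size, a 4-character type, and an optional 64-bit size
when the 32-bit field is 1 (0 means the box runs to the end of the
file). A minimal header-reading sketch, with illustrative types:

```cpp
#include <cstddef>
#include <cstdint>
#include <optional>
#include <string>
#include <vector>

struct BoxHeader {
    std::string type;      // e.g. "ftyp", "moov".
    uint64_t size = 0;     // Total box size including the header.
    size_t header_size = 8;
};

std::optional<BoxHeader> read_box_header(std::vector<uint8_t> const& data, size_t offset)
{
    if (offset + 8 > data.size())
        return std::nullopt;
    BoxHeader header;
    uint32_t size32 = (uint32_t(data[offset]) << 24) | (uint32_t(data[offset + 1]) << 16)
        | (uint32_t(data[offset + 2]) << 8) | uint32_t(data[offset + 3]);
    header.type.assign(reinterpret_cast<char const*>(&data[offset + 4]), 4);
    header.size = size32;
    if (size32 == 1) {
        // Large box: a 64-bit size follows the type field.
        if (offset + 16 > data.size())
            return std::nullopt;
        header.size = 0;
        for (int i = 0; i < 8; ++i)
            header.size = (header.size << 8) | data[offset + 8 + i];
        header.header_size = 16;
    }
    return header;
}
```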
JPEG-XL is a new image format standardized by the same committee as the
original JPEG image format. It has all the nice features of recent
formats, and great compression ratios. For more details, look at:
https://jpegxl.info/
This decoder is far from being feature-complete, as it contains a grand
total of 60 FIXMEs and TODOs, but it's still a good start.
I developed this decoder in the Serenity way: I just tried to decode a
specific image while staying as close as possible to the specification.
Considering that the format supports a lot of options, and that we
basically support only one possibility for each of them, I'm pretty sure
that we can only decode the image I developed this decoder for.
Which is:
0aff 3ffa 9101 0688 0001 004c 384b bc41
5ced 86e5 2a19 0696 03e5 4920 8038 000b
This adds a decoder for the TinyVG vector format (https://tinyvg.tech/).
TinyVG is a very simple binary vector format, but it is good enough to
represent a lot of SVGs, without needing the full web engine.
The main use case for Serenity is for scalable icons (which .tvg easily
handles).
This encoder is very naive, as it only outputs SOF0 images and uses
pre-defined Huffman tables.
There is also a small bug with quantization that makes using it
over-degrade the quality.
This is an implementation of the scanline edge-flag algorithm for
antialiased path filling described here:
https://mlab.taik.fi/~kkallio/antialiasing/EdgeFlagAA.pdf
The initial implementation does not try to implement every possible
optimization in favour of keeping things simple. However, it does
support:
- Both evenodd and nonzero fill rules
- Applying paint styles/gradients
- A range of samples per pixel (8, 16, 32)
- Very nice antialiasing :^)
This replaces the previous path filling code, which only really applied
antialiasing along the x-axis.
There are some very nice improvements around the web with this change,
especially for small icons. Strokes are still a bit wonky, as they don't
yet use this rasterizer, but I think it should be possible to convert
them to do so.
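To give a flavour of the approach (a simplified illustration, not the
rasterizer's actual code; the 16-sample mask and function names are
assumptions): each pixel carries one flag bit per subpixel sample line,
edges toggle those bits, and the coverage is the fraction of set bits.

```cpp
#include <bit>
#include <cstdint>

// Accumulate edge flags across one scanline: crossing an edge toggles a
// sample bit, so XOR-prefixing the flags left to right yields an
// inside/outside mask per sample (this is the evenodd case; nonzero
// needs signed winding counts instead of single bits).
void accumulate_scanline(uint16_t const* edge_flags, uint16_t* sample_masks, int width)
{
    uint16_t running = 0;
    for (int x = 0; x < width; ++x) {
        running ^= edge_flags[x];
        sample_masks[x] = running;
    }
}

// With 16 samples per pixel, the final alpha is popcount / samples.
uint8_t coverage_to_alpha(uint16_t sample_mask)
{
    constexpr int samples_per_pixel = 16;
    return static_cast<uint8_t>((std::popcount(sample_mask) * 255) / samples_per_pixel);
}
```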
...and keep a forwarding header around in VP9, so we don't have to
update all references to the class there.
In time, we probably want to merge LibGfx/ImageDecoders and LibVideo
into LibMedia, but for now I need just this class for the lossy
webp decoder. So move just that one class over.
No behavior change.
Pure code move (except for removing `static` on the two public functions
in the new header), no behavior change.
There isn't much of a lossy decoder yet, but this will make implementing
it more convenient.
No behavior change.
This turns all the fill_path() code into member functions of the
painter, and moves them into a new FillPathImplementation.cpp. This
allows us to avoid polluting Painter.h with implementation details, and
makes the edit, compile, retry loop much shorter.
This might be useful for converting data from arbitrary profiles to
sRGB.
For now, this only encodes the transfer function and puts in zero values
for chromaticities, whitepoint, and chromatic adaptation matrix.
This makes the profile unusable for now. But I've spent a very long time
reading things and need to check in some code, and it's some progress.
The encoded transfer function exactly matches the one in GIMP's built-in
sRGB ICC profile (but not the Compact-ICC-Profiles v4 one or the
RawTherapee v4 one -- I'll add a comment about why later.)
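For reference, the sRGB transfer function being encoded is the usual
IEC 61966-2-1 curve; ICC v4 profiles commonly store it as a parametric
curve using the same constants. As a plain function:

```cpp
#include <cmath>

// Maps a nonlinear sRGB value in [0, 1] to linear light.
double srgb_to_linear(double srgb)
{
    if (srgb <= 0.04045)
        return srgb / 12.92;
    return std::pow((srgb + 0.055) / 1.055, 2.4);
}
```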
At the moment, this processes the RIFF chunk structure and extracts
the ICCP chunk, so that `icc` can now print ICC profiles embedded
in webp files. (And are image files really more than containers
of icc profiles?)
It doesn't even decode image dimensions yet.
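A minimal sketch of the chunk walk (illustrative only, assuming the
whole file is in memory): each RIFF chunk is a 4-byte FourCC, a 4-byte
little-endian size, the payload, and a pad byte when the size is odd;
for WebP the ICC profile lives in the "ICCP" chunk, so something like
find_chunk(bytes, "ICCP") hands the raw profile data to the ICC code.

```cpp
#include <cstddef>
#include <cstdint>
#include <cstring>
#include <optional>
#include <vector>

struct Chunk {
    char fourcc[5];
    std::vector<uint8_t> data;
};

std::optional<Chunk> find_chunk(std::vector<uint8_t> const& file, char const* wanted_fourcc)
{
    // Skip the outer header: "RIFF" + file size + "WEBP".
    size_t offset = 12;
    while (offset + 8 <= file.size()) {
        Chunk chunk {};
        std::memcpy(chunk.fourcc, &file[offset], 4);
        uint32_t size = file[offset + 4] | (file[offset + 5] << 8)
            | (file[offset + 6] << 16) | (uint32_t(file[offset + 7]) << 24);
        offset += 8;
        if (offset + size > file.size())
            return std::nullopt; // Truncated chunk.
        if (std::strcmp(chunk.fourcc, wanted_fourcc) == 0) {
            chunk.data.assign(file.begin() + offset, file.begin() + offset + size);
            return chunk;
        }
        offset += size + (size & 1); // Chunks are padded to even sizes.
    }
    return std::nullopt;
}
```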
The lossy format is a VP8 video frame. Once we get to that, we
might want to move all the image decoders into a new LibImageDecoders
that depends on both LibGfx and LibVideo. (Other newer image formats
like heic and avif also use video frames for image data.)
For example, consider the Pirate Flag emoji, which is the code point
sequence U+1F3F4 U+200D U+2620 U+FE0F. Our current emoji resolution does
not consider U+200D (Zero Width Joiner) as part of an emoji sequence.
Therefore fonts like Katica, which have a glyph for U+1F3F4, will draw
that glyph without checking if we have an emoji bitmap.
This removes some hard-coded code points and consults the UCD's code
point properties for emoji sequence components and variation selectors.
This recognizes the ZWJ code point as part of an emoji sequence.
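For illustration, a simplified version of the grouping this enables;
the real code consults UCD code point properties, and the variation
selector range check below is a stand-in for that lookup:

```cpp
#include <cstddef>
#include <vector>

constexpr char32_t zero_width_joiner = 0x200D;

bool is_variation_selector(char32_t cp) { return cp >= 0xFE00 && cp <= 0xFE0F; }

// Returns how many code points starting at `index` belong to one emoji
// sequence, assuming code_points[index] begins an emoji.
size_t emoji_sequence_length(std::vector<char32_t> const& code_points, size_t index)
{
    size_t length = 1;
    while (index + length < code_points.size()) {
        char32_t cp = code_points[index + length];
        if (is_variation_selector(cp)) {
            ++length;
        } else if (cp == zero_width_joiner && index + length + 1 < code_points.size()) {
            length += 2; // The ZWJ and the code point it joins.
        } else {
            break;
        }
    }
    return length;
}
```

With this, the Pirate Flag sequence U+1F3F4 U+200D U+2620 U+FE0F is
treated as a single emoji of length 4 instead of a lone U+1F3F4 glyph.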
The patch also contains modifications to several classes, functions,
and files related to `JPGLoader`.
Renames include:
- JPGLoader{.h, .cpp}
- JPGImageDecoderPlugin
- JPGLoadingContext
- JPG_DEBUG
- decode_jpg
- FuzzJPGLoader.cpp
- A few string literals and texts