

It’s actually pretty funny to think about other AI scrapers ingesting this nonsense into the training data for future models, too, where the last line isn’t enough to get the model to discard the earlier false text.


Most Costco-specific products, sold under their Kirkland brand, are pretty good. They’re always a good value, and they’re sometimes among the best in class, separate from cost.
I think Apple’s products improved when they started designing their own silicon chips for phones, then tablets, then laptops and desktops. I have beef with their operating systems but there’s no question that they’re better able to squeeze battery life out of their hardware because of that tight control.
In the restaurant world, there are plenty of examples of a restaurant having a better product because they make something in house: sauces, breads, butchery, pickling, desserts, etc. There are counterexamples, too, but sometimes that kind of vertical integration can result in a better end product.


Their horizontal integration is made more seamless by the vertical integration.
On an Apple laptop, they’re the OEM of the hardware product itself, while also being the manufacturer of the CPU and GPU and the operating system. For most other laptops that’s 3 or 4 distinct companies.


Yeah, getting too close turns into an uncanny valley of sorts, where people expect all the edge cases to work the same. Making it familiar, while staying within its own design language and paradigms, strikes the right balance.
I think cheaper consumer desktops with IDE hard drives worked out of the box, but some more exotic storage configurations (SCSI, anything to do with RAID) were a little harder to get going.
My first Linux distro was Ubuntu in 2006, with a graphical installer from the boot CD. It was revolutionary in my eyes, because WinXP was still installed using a curses-like text interface at the time. As I remember, installing Ubuntu was significantly easier than installing WinXP (and then wireless Internet support was basically shit in either OS at the time).


Kinda off topic, but now I’m wondering whether Europeans think of phone size (and laptops and screens) in terms of inches rather than centimeters?


More along the lines of a “pizza finder” service that scours different menus and shows the pizza options at a bunch of places, whether those places exclusively offer pizza, specialize in pizza with some other options, or just offer pizza as one of several options. It would be perfectly reasonable for such a service to only return results related to pizza, without any implicit suggestion that each place it returns only has pizza available.
Plenty of the AI functions on phones are on-device. I know the iPhone can do several kinds of text processing (summarizing, translating) offline, and they have an API for third party developers to use on-device models. And the Pixels have Gemini Nano on-device for certain offline functions.


End-to-end encryption protects the messages *between the ends*. If an “end” is compromised, the best E2EE technology isn’t going to protect confidentiality.
Just ask Pete Hegseth, who invited a journalist into an E2EE Signal chat. The journalist was an authorized “end” and could therefore read the conversation.
This change is about employers who already have full access to the “end” of the Android phone itself when that phone is in an enterprise managed state. Perfect encryption between that phone and other parties doesn’t change anything because the employer has full access to the phone itself.
And LUKS was already widely available on Linux as an alternative.
Yeah, I found LUKS and LVM to be more intuitive for creating encrypted partitions, and had that on my daily driver by around 2009 or so, so I never really felt the need to try Truecrypt.
Still a pretty limited palette, everyone wearing the same color shirts.
PNG tends to fail hard with textures. For example, my preferred theme in my chess app, which has some wood grain textures, generates huge screenshot file sizes (2MB), whereas the default might be less than 10% as large. Similarly, when I screenshot this image the file size jumps to 2MB for a 0.8 megapixel image.
Rendered textured scenes can easily overload the PNG compression algorithm to the point where the files are huge, and since Discord is historically associated with gaming, one can imagine certain video game screenshots blasting past that 40 MB limit.
I think HEIC plays nicely with how they store Live Photos: a container that holds both a still image and a video of the surrounding time context. HEIC for the still photo and HEVC for the video probably optimizes the hardware acceleration for fast, low-power processing of both parts of the data, and allows for a higher quality extraction of an alternative still photo from a different part of the video.
And maybe they want to have more third party support in place before they set JXL as a default. All the power and space savings in the world on capture might not mean as much if the phone has to do the work of exporting a JPEG or HEIC for each time that file interfaces with an app or the browser or whatever.
JPEG XL has a mode for losslessly transcoding any existing lossy JPEG into a smaller file, with no further loss of quality. Wikipedia has some description of general approaches for losslessly compressing JPEG files further.
I don’t know if webp uses any of these tricks, but I don’t see why it would be hard to imagine that compression artifacts from a 30-year-old format can be encoded more efficiently today.
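As a rough sketch of what that round trip looks like with the reference libjxl command-line tools (this assumes cjxl and djxl are installed and on PATH; the file names are made up):

    # Losslessly repack an existing JPEG as JPEG XL, then reconstruct it.
    # Assumes the libjxl tools (cjxl, djxl) are installed; paths are hypothetical.
    import os
    import subprocess
    from pathlib import Path

    src = "photo.jpg"  # an already-lossy JPEG, e.g. straight off a camera

    # By default, cjxl transcodes a JPEG input losslessly: it repacks the
    # existing DCT coefficients instead of decoding and re-encoding pixels.
    subprocess.run(["cjxl", src, "photo.jxl"], check=True)

    # djxl can reconstruct the original JPEG bit-for-bit from the .jxl file.
    subprocess.run(["djxl", "photo.jxl", "restored.jpg"], check=True)

    print(os.path.getsize(src), "->", os.path.getsize("photo.jxl"), "bytes")
    assert Path(src).read_bytes() == Path("restored.jpg").read_bytes()

The savings typically reported for this mode are on the order of 20%, and since the original JPEG is recoverable exactly, there’s no generational loss in adopting it for existing archives.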
Google didn’t kill JPEG XL. It might have set browser support back some, but there’s still a place for JPEG XL to take over.
All the modern video-derived formats (webp, heif/heic, avif) tend to be optimized for screen resolutions. But for print photography (including plain old regular photography that wants to keep open the option of eventually printing some of the images), the higher resolutions and higher quality stretch the limits of where those codecs actually perform well (in terms of file size, perceived quality, and the computational cost of encoding and decoding).
JPEG XL knocks the other modern formats out of the water at those print resolutions, color spaces, and quality levels. It’s not just for photography, either: medical imaging, archiving, printing, etc., all use much higher resolutions than what is supported on any screen.
And perhaps most importantly for future support, the iPhone now supports taking images in JPEG XL. If that becomes a dominant format for photographic workflows, to replace stuff like DNG and other raw formats, browser support won’t hold back the format’s adoption.
And if you already have compression artifacts, what use is lossless?
To further reduce file size without further reducing quality.
There are probably billions of files out there in the world already encoded in lossy JPEG, with no corresponding higher quality version actually available (e.g., a camera that captures the image and immediately saves it as JPEG). We shouldn’t simply accept that those file sizes are stuck forever; we can design codecs that further compress those files losslessly from there.
It was the Joint Photographic Experts Group that invented it, so Google had no ownership over it, unlike WebP.
No, JPEG called for submission of proposals to define the new standard, and Google submitted its own PIK format, which provided much of the basis for what would become the JXL standard (the other primary contribution being Cloudinary’s FUIF).
Ultimately, I think most of the discussion around browser support thinks too small. Image formats are used for web display, sure, but they’re also used for so many other things. Digital imaging is used in medicine (where TIFF dominates), print, photography, video, etc.
I’m excited about JPEG XL as a replacement for TIFF and raw photography sensor data, including for printing and medical imaging. WebP, AVIF, HEIF, etc. really are only aiming for replacing web distributed images on a screen.
If you screenshot computer/phone interfaces (text, buttons, lots of flat colors with adjacent pixels the exact same color), the default PNG algorithm does a great job of keeping the file size small. If you screenshot a photograph, though, the PNG algorithm makes the file size huge, because it’s just really poorly optimized for re-encoding photographic content that has already been through JPEG compression.
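A quick way to see that behavior for yourself, as a minimal sketch with Pillow (the image contents and sizes here are purely illustrative):

    # Compare PNG output size for flat, screenshot-like content vs.
    # noisy, photo-like content. Requires Pillow; exact numbers will vary.
    import io
    import random
    from PIL import Image

    W, H = 800, 600

    # Screenshot-like: long runs of identical pixels compress extremely well.
    flat = Image.new("RGB", (W, H), (240, 240, 240))

    # Photo-like stand-in: per-pixel noise defeats PNG's filters and DEFLATE.
    noisy = Image.new("RGB", (W, H))
    noisy.putdata([tuple(random.randrange(256) for _ in range(3))
                   for _ in range(W * H)])

    for name, img in (("flat", flat), ("noisy", noisy)):
        buf = io.BytesIO()
        img.save(buf, format="PNG")
        print(name, len(buf.getvalue()), "bytes")
    # Expect the flat image to land in the low KB range and the noisy one
    # near (or above) the raw 800*600*3 = 1.44 MB, since noise is
    # essentially incompressible.

Pure noise is a harsher case than real photo grain or JPEG artifacts, but it makes the gap obvious: PNG’s filters are built for flat runs and gradients, not for pixel-level texture.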
“AI drives 48% increase in Google emissions”
That’s not even supported by the underlying study.
Google’s emissions went up 48% between 2019 and 2023, but a lot of things changed in 2020 generally, especially in video chat and cloud collaboration, dramatically expanding demand for data centers for storage and processing. Even without AI, we could have expected data center electricity use to go up dramatically between 2019 and 2023.
In terms of usage of AI, I’m thinking “doing something a million people already know how to do” is probably on more secure footing than trying to go out and pioneer something new. When you’re in the realm of copying and maybe remixing things for which there are lots of examples and lots of documentation (presumably in the training data), I’d bet large language models stay within a normal framework.