I agree the number of long-term users will not match the download figures.
OTOH, this is an obscure distro, so the surprising numbers may be a sign that many more PC owners are trying out a more mainstream distro than Zorin on their old Windows 10 machines.
And since these "old" PCs are often not actually that old, and in perfect working condition, Linux may never have had this much opportunity for uptake.
I swear there are people who take a massive amount of pride in using languages or technology that are obscure. That used to be Rust, but now that it's become fairly mainstream, they have to find something new to move to.
I wonder how long it will be before we start seeing "Ada is the new Rust" YouTube videos.
JPEG-XL is both a lossy and a lossless codec. It is already being used in the DNG camera format, making RAW images smaller.
While lossy codecs are hard to compare and the rankings are up for debate, JPEG-XL is actually better as a lossless codec in terms of compression ratio and compression complexity. There is only one other codec that beats it, but it is not open source.
HALIC is by far the best lossless codec in terms of speed/compression ratio. If a lossy mode were similarly available, we might not be discussing all these issues. I think its developer stopped working on HALIC a long time ago due to lack of interest.
The same developer is also working on HALAC (High Availability Lossless Audio Compression). He recently released the source code for the first version of HALAC, and I don't think anyone cared.
As in, a clear way to detect whether a given file is lossy or lossless?
I was thinking that too, but on the other hand, even a lossless file can't guarantee that its contents aren't the result of going through a lossy intermediate format, such as a screenshot created from a JPEG.
I find it incredibly helpful to know that .jpg is lossy and .png is lossless.
There are so many reasons why it's almost hard to know where to begin. But it's basically the same reason why it's helpful for some documents to end in .docx and others to end in .xlsx. It tells you what kind of data is inside.
And at least for me, for standard 24-bit RGB images, the distinction between lossy and lossless is much more important than between TIFF and PNG, or between JPG and HEIC. Knowing whether an image is degraded or not is the #1 important fact about an image for me, before anything else. It says so much about what the file is for and not for -- how I should or shouldn't edit it, what kind of format and compression level is suitable for saving after editing, etc.
After that comes whether it's animated or not, which is why .apng is so helpful to distinguish it from .png.
There's a good reason Microsoft Office documents aren't all just something like .msox, with an internal tag indicating whether they're a text document or a spreadsheet or a presentation. File extensions carry semantic meaning around the type of data they contain, and it's good practice to choose extensions that communicate the most important conceptual distinctions.
> Knowing whether an image is degraded or not is the #1 important fact about an image for me
But how can you know that from the fact that it's currently losslessly encoded? People take screenshots of JPEGs all the time.
> After that comes whether it's animated or not, which is why .apng is so helpful to distinguish it from .png.
That is a useful distinction in my view, and there's some precedent for solutions, such as how Office files containing macros have an "m" added to their file extension (.docm, .xlsm).
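On the screenshot point: a PNG's JPEG ancestry is often still detectable, because JPEG's 8x8 DCT blocks leave faint seams at 8-pixel intervals. Here's a toy blockiness heuristic in Python (numpy/Pillow); the function name is mine, and it assumes the block grid is still aligned to the image origin, which cropping would break -- real detectors test all eight offsets:

    import numpy as np
    from PIL import Image

    def jpeg_blockiness(path):
        """Ratio of pixel differences across 8-aligned column boundaries
        (where JPEG block seams fall) to differences everywhere else.
        A ratio well above 1.0 suggests JPEG somewhere in the file's history."""
        img = np.asarray(Image.open(path).convert("L"), dtype=np.float64)
        diff = np.abs(np.diff(img, axis=1))   # horizontal neighbor differences
        cols = np.arange(diff.shape[1])
        seam = (cols % 8) == 7                # x where the seam between x and x+1 is on the 8-grid
        return diff[:, seam].mean() / diff[:, ~seam].mean()

Values near 1.0 mean no detectable 8-pixel periodicity; the harder the source JPEG was compressed, the further above 1.0 this tends to land.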
Obviously nothing prevents people from taking PNG screenshots of JPEGs. You can make a PNG out of an out-of-focus camera image too. But at least I know the format itself isn't adding any additional degradation over whatever the source was.
And in my case I'm usually dealing with a known workflow. I know where the files originally come from, whether .raw or .ai or whatever. It's very useful to know that every .jpg file is meant for final distribution, whereas every .png file is part of an intermediate workflow where I know quality won't be lost. When they all have the same extension, it's easy to get confused about which stage a certain file belongs to, and accidentally mix up assets.
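As a sketch of what that extension-based hygiene can look like in practice, here's a toy audit script; the directory layout, extension sets, and the audit function itself are my own assumptions, not any standard:

    from pathlib import Path

    # Assumed layout: work/ holds intermediate assets (should stay lossless),
    # dist/ holds final deliverables. Adjust to your own pipeline.
    LOSSY = {".jpg", ".jpeg", ".heic"}
    LOSSLESS = {".png", ".tif", ".tiff"}

    def audit(root="work"):
        """Flag files in the intermediate tree that carry a lossy extension."""
        for path in Path(root).rglob("*"):
            if path.suffix.lower() in LOSSY:
                print(f"suspect: {path} (lossy format in an intermediate tree)")

    audit()

If everything shared one extension with an internal lossy/lossless tag, a check like this would have to open and parse every file instead.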
>I find it incredibly helpful to know that .jpg is lossy and .png is lossless.
Unfortunately we have been through this discussion, and the author of JPEG-XL strongly disagrees with this. I understand where they are coming from, but I agree with you: it would have been easier to have the two separated in naming and extensions.
But JPEG has a lossless mode as well. How do you distinguish between the two now?
This is an arbitrary distinction. For example, why do mp3 and ogg (Vorbis) have different extensions? They're both lossy audio formats, so by that requirement the extension should be the same.
Otherwise, we should distinguish between bitrates with different extensions, e.g. .mp3128, .mp3192, etc.
In theory JPEG has a lossless mode (in the standard), but it's not supported by most applications (not even libjpeg) so it might as well not exist. I've certainly never come across a lossless JPEG file in the wild.
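For what it's worth, the mode is detectable from the bitstream itself: a JPEG's SOF (start-of-frame) marker identifies the coding process, and SOF3/SOF7/SOF11/SOF15 are the lossless ones per ITU T.81. A minimal Python sketch (the function name is mine; error handling is kept to a minimum):

    import struct
    import sys

    # SOF markers that indicate a lossless coding process (ITU T.81):
    # SOF3 sequential lossless, SOF7 differential lossless,
    # SOF11 arithmetic lossless, SOF15 differential arithmetic lossless.
    LOSSLESS_SOF = {0xC3, 0xC7, 0xCB, 0xCF}
    # All SOF markers are 0xC0-0xCF, minus DHT (0xC4), JPG (0xC8), DAC (0xCC).
    ALL_SOF = set(range(0xC0, 0xD0)) - {0xC4, 0xC8, 0xCC}

    def jpeg_mode(path):
        """Return 'lossless' or 'lossy' based on the first SOF marker found."""
        with open(path, "rb") as f:
            if f.read(2) != b"\xff\xd8":
                raise ValueError("not a JPEG file")
            while True:
                b = f.read(1)
                if not b:
                    raise ValueError("no SOF marker found")
                if b != b"\xff":
                    continue
                marker = f.read(1)[0]
                while marker == 0xFF:              # skip fill bytes
                    marker = f.read(1)[0]
                if marker in ALL_SOF:
                    return "lossless" if marker in LOSSLESS_SOF else "lossy"
                if marker == 0xDA:                 # SOS: entropy-coded data follows
                    raise ValueError("hit scan data before any SOF marker")
                if 0xD0 <= marker <= 0xD9 or marker in (0x00, 0x01):
                    continue                       # standalone or stuffed markers
                (length,) = struct.unpack(">H", f.read(2))
                f.seek(length - 2, 1)              # skip this segment's payload

    if __name__ == "__main__":
        print(jpeg_mode(sys.argv[1]))

In practice you'll essentially always see SOF0 (baseline) or SOF2 (progressive), which matches the "might as well not exist" point.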
File extensions also of course try to indicate technical compatibility as to what applications can open them, which is why .mp3 and .ogg are different -- although these days, extensions like .mkv and .mp4 tell you nothing about what's in them, or whether your video player can play a specific file.
At the end of the day it's just trying to achieve a good balance. Obviously including the specific bitrate in a file extension goes too far.
Legacy. It’s how things used to be done. Just like Unix permissions, the shared filesystem, drive letters in the file system root, prefixing URLs with the protocol, including security designators in the protocol name…
Be careful about ascribing reason to established common practices; it can lead to tunnel vision. Computing is filled with standards which are nothing more than “whatever the first guy came up with”.
If the alternative was putting the information in some hypothetical file attribute with similar or greater level of support/availability (like for filtering across various search engines and file managers) then I'd agree there's no reason to keep it in the file extension in particular, but I feel the alternative here is just not really having it available in such a way at all (instead just an internal tag particular to the JXL format).
Well yeah, you can turn any lossless format lossy by introducing an intermediate step that discards some amount of information. You can't practically turn a lossy format into a lossless format by introducing a lossless intermediate step.
Although, if you're purely speaking perceptually, magic like RAISR comes pretty close.
Think of all the use cases where the output is going to be ingested by another machine. You don't know that "perceptually lossless" as designed for normal human eyeballs on normal screens in normal lighting environments is going to contain all the information an ML system will use. You want to preserve data as long as possible, until you make an active choice to throw it away. Even the system designer may not know whether it's appropriate to throw that information away, for example if they're designing digital archival systems and having to consider future users who aren't available to provide requirements.
Here is a list of major ARM licensees, categorized by the type of license they typically hold.
1. Architectural Licensees (Most Flexible)
These companies hold an Architectural License, which allows them to design their own CPU cores (and often GPUs/NPUs) that are compatible with the ARM instruction set. This is the highest level of partnership and requires significant engineering resources.
Apple: The most famous example. They design the "A-series" and "M-series" chips (e.g., A17 Pro, M4) for iPhones, iPads, and Macs. Their cores are often industry-leading in single-core performance.
Qualcomm: Historically used ARM's core designs but has increasingly moved to its own custom "Kryo" CPU cores (which are still ARM-compatible) for its Snapdragon processors. Their recent "Oryon" cores (in the Snapdragon X Elite) are a fully custom design for PCs.
NVIDIA: Designed its own custom "Denver" CPU cores, and now ships the "Grace" CPU (built on Arm Neoverse cores) in its superchips focused on AI and data centers. They also hold an architectural license for their future roadmap.
Samsung: Uses a mixed strategy. For its Exynos processors, some generations use semi-custom "M" series cores alongside ARM's stock cores.
Amazon (Annapurna Labs): Designs the "Graviton" series of processors for its AWS cloud services, offering high performance and cost efficiency for cloud workloads.
Google: Has developed its own custom ARM-based CPU cores, expected to power future Pixel devices and Google data centers.
Microsoft: Reported to be designing its own ARM-based server and consumer chips, following the trend of major cloud providers.
2. "Cores & IP" Licensees (The Common Path)
These companies license pre-designed CPU cores, GPU designs, and other system IP from ARM. They then integrate these components into their own System-on-a-Chip (SoC) designs. This is the most common licensing model.
MediaTek: A massive player in smartphones (especially mid-range and entry-level), smart TVs, and other consumer devices.
Broadcom: Uses ARM cores in its networking chips, set-top box SoCs, and data center solutions.
Texas Instruments (TI): Uses ARM cores extensively in its popular Sitara line of microprocessors for industrial and embedded applications.
NXP Semiconductors: A leader in automotive, industrial, and IoT microcontrollers and processors, almost exclusively using ARM cores.
STMicroelectronics (STM): A major force in microcontrollers (STM32 family) and automotive, heavily reliant on ARM Cortex-M and Cortex-A cores.
Renesas: A key supplier in the automotive and industrial sectors, using ARM cores in its R-Car and RA microcontroller families.
AMD: Uses ARM cores in some of its adaptive SoCs (Xilinx) and for security processors (e.g., the Platform Security Processor or PSP in Ryzen CPUs).
Intel: While primarily an x86 company, its foundry business (IFS) is an ARM licensee to enable chip manufacturing for others, and it has used ARM cores in some products like the now-discontinued Intel XScale.
Sure, but I suspect for basically all of us (maybe Elon is surfing HN today), that literally means nothing. Few of us have the hundreds of millions required to design and fab a competitive SoC, and for those that do, the ARM licenses are easier to acquire than the knowledge of how to build a competitive system (see RISC-V). You might as well complain about TSMC not publishing the information on how to fab 2nm parts, or the code used to generate the mask sets.
For the rest of us, what matters is whether we can open digikey/newegg/whatever and buy a few machines, whether they are open enough for us to achieve our goals, and their relative costs. So that list of vendors is more appropriate, because they _CAN_ sell the resulting products to us. The problem is how much of their mostly off-the-shelf IP they refuse to document, resulting in extra difficulties getting basic things working.
Yes, I think most of us are clear that seL4 isn't Unix. But people continue to complain that anything with a POSIX layer is Unix-like, and therefore somehow 'bad'. My point was that virtually everyone who complains about this never, ever explains what would have been better to implement, just that it should have been different.