This treatment of memory safety is becoming almost a cargo cult at this point. If this were a binary issue, then clearly Rust wouldn't cut it because it is quite common in Rust (much more so than in, say, Java) to rely on unsafe code. So if you think Rust is good at memory safety, that means that you must believe that some level of unsafety is acceptable. The only question is how much, and what you're willing to pay to reduce it.
The reason we care about memory safety so much, compared to other invariants we'd like our programs to have, is that, as the article notes, a very high portion of vulnerabilities are due to memory-safety violations. This is why preventing or reducing such violations is important in the first place.
But if we look at vulnerability rankings [1][2], we see that Zig's memory safety covers the top weaknesses just as well as Rust, and much better than C. The vast difference in the complexity of these two languages is because Rust pays a lot to also prevent less dangerous vulnerabilities, outside the top 5.
So if Rust is good because it eliminates some very common dangerous vulnerabilities thanks to its memory safety, then Zig must also be good for eliminating the same ones. Calling it C-like because it doesn't eliminate some less common/dangerous vulnerabilities just because Rust does, is just a misunderstanding of why this is all important in the first place. (Plus, if it's important to reduce the security vulnerabilities due to memory safety violations, isn't it better to make avoiding the worst outcomes more approachable?)
In software correctness there are few easy choices. Everything boils down to how much you can and should pay to improve your confidence that a certain level of damage won't occur. It's a complicated subject, and trying to present it as a simple one does it a great disservice.
In fact, both Rust and Zig address some of the most common/dangerous vulnerabilities (ones more common than use-after-free) just as well as C does, which is to say barely, or not at all. I.e. there are vulnerabilities that neither of them eliminates which are worse than the ones Rust eliminates and Zig doesn't.
There is no doubt that Rust and Zig are meant to appeal to people with different aesthetic preferences, but the attempt to distinguish them by turning the matter of memory safety into a binary one simply doesn't make sense. The property of memory safety is itself not binary in either language, and the impact of memory safety is split between more and less important effects.
I understand why people wish to find objective metrics to prefer one language over another, but often such metrics are hard to come by, and extrapolation based on questionable assumptions is not really objective.
But if you choose to only focus on security weaknesses, and you choose to ignore the language design's impact on code reviews or the fact that allocations are much more visible in Zig than in C (which is not very objective, but perhaps you consider these harder to quantify), you would still have to conclude that there's a big difference - on your chosen metric alone - between Zig and C, and a rather small difference between Rust and Zig.
What I think really happens, though, is that most of the preference boils down to aesthetics, and then we desperately seek some objective measures to rationalise it.
> Much of Zig seems to me like "wishful thinking"; if every programmer was 150% smarter and more capable, perhaps it would work.
But if working harder to satisfy the compiler is something that requires less competence than other forms of thinking about a program, then why Rust? Why not ATS? After all, Rust does let you eliminate more bugs at compile time than Zig, but ATS lets you eliminate so many more. So, if this is an objective measure to reject Zig in favour of Rust, then it must also be used to reject Rust in favour of ATS.
Neither Rust nor Zig is anywhere near either extreme on compile-time guarantees in general and memory safety in particular. They're closer to each other on the spectrum than either one of them is to either C or ATS. They both compromise heavily. It's perfectly fine to prefer one compromise over the other, but a measure that would settle which of these compromises is objectively better is just not something we have at this time.
I have not forgiven, and never will forgive, Sound Blaster for using legal costs to destroy a competitor, Aureal.
Aureal made the most unbelievably amazing sound card, which used ray-tracing for sound, in hardware, to produce 3D sound as if you were actually there. The sound engine knew the geometry of the space you were in, in your game.
I played the original Half-Life using this, and it was peak gaming.
I can't feel enthusiasm for the accomplishments of a man who cackled with glee while taking food and medicine from the world's poorest children.
Before Trump and DOGE I was enthusiastic about SpaceX. I literally have Eric Berger's book on SpaceX directly in my line of sight on my bookshelf. Now I hope the fucking thing explodes.
> People don’t develop video codecs for fun like they do with software. And the reason is that it’s almost impossible to do without support from the industry.
Hmm, let me check my notes:
- Quite OK Image format: https://qoiformat.org/
- Quite OK Audio format: https://qoaformat.org/
- LAME (Ain't an MP3 Encoder): https://lame.sourceforge.io/
- Xiph family of codecs: https://xiph.org/
Some of these projects have standards bodies as supporters, but in all cases the bigger groups formed behind them only after they had made considerable effort. QOI and QOA were written by a single guy just because he was bored.
For example, FLAC is a worst-of-all-worlds codec for the industry to back: a streamable, seekable, hardware-implementable, error-resistant, lossless codec with 8 channels, 32-bit samples, and sample rates up to 655 kHz, with no DRM support. Yet we have it, and it rules consumer lossless audio while giggling and waving at everyone.
On the other hand, we have LAME: an encoder which also uses psycho-acoustic techniques to improve the resulting sound quality, and almost everyone is using it, because the closed-source encoders generally sound lamer than LAME at the same bit-rates. Remember, the MP3 format doesn't have a reference encoder. If a decoder can read the file and it sounds the way you expect, then you have a valid encoder. There's no spec for that.
> Are you really saying that patents are preventing people from writing the next great video codec?
Yes, yes, and yes. MPEG and similar groups openly threatened free and open codecs by opening "patent portfolio forming calls" to create portfolios to fight these codecs, because they are terrified of being deprived of their monies.
If patents and license fees are not a problem for these guys, can you tell me why all professional camera gear that can record video comes only with "personal, non-profit and non-professional" licenses on board, and you have to pay blanket extort ^H^H^H^H^H licensing fees to these bodies to take a video you can monetize?
For the license disclaimers in camera manuals, see [0].
To respond to some of the questions or those parts I personally find interesting:
The custom TUI library is so that I can write a plugin model around a C ABI. Existing TUI frameworks that I found and were popular usually didn't map well to plain C. Others were just too large. The arena allocator exists primarily because building trees in Rust is quite annoying otherwise. It doesn't use bumpalo, because I took quite the liking to "scratch arenas" (https://nullprogram.com/blog/2023/09/27/) and it's really not that difficult to write such an allocator.
Regarding the choice of Rust, I actually wrote the prototype in C, C++, Zig, and Rust! Out of these 4 I personally liked Zig the most, followed by C, Rust, and C++ in that order. Since Zig is not internally supported at Microsoft just yet (chain of trust, etc.), I continued writing it in C, but after a while I became quite annoyed by the lack of features that I came to like about Zig. So, I ported it to Rust over a few days, as it is internally supported and really not all that bad either. The reason I didn't like Rust so much is because of the rather weak allocator support and how difficult building trees was. I also found the lack of cursors for linked lists in stable Rust rather irritating if I'm honest. But I would say that I enjoyed it overall.
We decided against nano, kilo, micro, yori, and others for various reasons. What we wanted was a small binary so we can ship it with all variants of Windows without extra justifications for the added binary size. It also needed to have decent Unicode support. It should've also been one built around VT output as opposed to Console APIs to allow for seamless integration with SSH. Lastly, first class support for Windows was obviously also quite important. I think out of the listed editors, micro was probably the one we wanted to use the most, but... it's just too large. I proposed building our own editor and while it took me roughly twice as long as I had planned, it was still only about 4 months (and a bit for prototyping last year).
As GuinansEyebrows put it, it's definitely quite a bit of "NIH" in the project, but I also spent all of my weekends on it and I think all of Christmas, simply because I had fun working on it. So, why not have fun learning something new, writing most things myself? I definitely learned tons working on this, which I can now use in other projects as well.
TL;DR: in RTL simulation XiangShanV3 outperforms my current desktop (Zen 1) per cycle in the tested scalar benchmarks by 2x. It's probably closer to 1.5x on regular code.
This roughly matches the reported SPECint2006 scores:
* XiangShanV2: 9.55/GHz
* XiangShanV3: 15/GHz
* Zen2 [1]: 10.5/GHz at boost, 9/GHz at base frequency (closest to Zen1 I could find)
x86 has a parity flag. It only takes the parity of the lowest 8 bits though. Why is it there? Because it was in the 8086 because it was in the 8080 because it was in the 8008 because Intel was trying to win a contract for the Datapoint 2200.
Sometimes the short instruction variant is correct, but not if it causes a single instruction to break down into many uops, as microcode is ~1000x slower.
Oh, but you need to use those longer variants without extra uops to align functions to cache boundaries because they perform better than NOP.
Floats and SIMD are a mess with x87, AMX (with incompatible variants), SSE1-4 (with incompatible variants), AVX, AVX2, and AVX512 (with incompatible variants) among others.
The segment-and-offset way of dealing with memory is painful.
What about the weird rules about which registers are reserved for multiply and divide? Half the “general purpose” registers are actually locked in at the ISA level.
Now APX is coming up, and you get to choose between shorter instructions with 16 registers and two-register syntax, or longer instructions with 32 registers and three-register syntax.
And this just scratches the surface.
RISC-V is better in every way. The instruction density is significantly higher. The instructions are simpler and easier to understand while being just as powerful. Optimizing compilers are easier to write because there's generally just one way to do things, and that way is guaranteed to be well optimized.
It depends. There are a few levels; to know exactly, more information is needed.
Types 1 and 3a from my comment are the ones mostly used in modern tech, but large-scale production CPU SoCs are usually type 3b, or a hybrid of 3b and 2. Sometimes you can find type 2 in consumer electronics or, for example, in routers (because it's cheap in the range of a few thousand chips; 3b is effective at millions-plus; 3a in the range from 50k to millions).
Sometimes an application-specific instruction-set processor (ASIP) is also used; as far as I know, these are usually a standard design core with one-time-programmable ROM. I mention them only for a complete picture - they are not for our case, but sales could confuse them with a soft core.
1. "Genuine" soft core - an FPGA: a matrix of logic elements where RAM defines the connections (rows and columns). CPLDs are similar - a different architecture, but nearly the same idea, the main difference being FLASH (ROM) instead of RAM. Anything else could be considered a hard core.
2. Uncommitted Logic Array (ULA) - very similar to an FPGA, but instead of RAM it uses a matrix of metal wires, for example cut by laser. Usually up to 100% faster than an FPGA on the same process, with about half the power consumption. And as you may already understand, these days they are used when somebody has created a successful FPGA design and sees market potential to sell a few thousand chips.
The best-known ULAs were in the ZX80, ZX81 and Spectrum.
3. Custom chips - I prefer to call them "rendered", because you need a physical simulation package to get them to work.
3a. Standard-cell design - using standard libraries of elements, templates if you wish. Better than a ULA, but worse performance than the next type.
3b. Full-custom design - it is what the name suggests: using simulation to make working semiconductor structures. Its most important features: expensive and risky, but with enough time/money it can work real wonders, like the Atari/Commodore chips (except the gate-array or ULA custom chips, Gary and Gayle), or a modern CPU SoC if you wish.
It has never been about how much you eat... unless you truly do have an eating disorder where you are eating far in excess of your daily maintenance value (anywhere from 1800 to 2400 calories, depending on the individual).
What makes the difference is what you eat. Unmanaged insulin levels, even in so-called "healthy" individuals, lead to weight gain, especially in areas of the body associated with heart disease and stroke.
Your body requires multiple PhDs in multiple medical and biochemical disciplines to understand. Trying to slap CICO on everything is both disingenuous and gaslighting.
Want to lose weight? If you already have a healthy calorie intake: decrease simple/refined carbs, increase healthy fats, avoid unhealthy refined oils from plant sources (margarine, Crisco, other synthetic lard/fat replacements), increase meat protein if you're not getting at least 4 oz a day, and avoid grains entirely if need be (but certainly, less is more).
Still not working? Increase salt intake: low-salt diets interfere with the renin-angiotensin system. Even in people with high blood pressure, lowering sodium intake often drives blood pressure higher (a documented quirk of physiology), while increasing potassium decreases blood pressure reliably.
The easiest way to achieve this is just to eat a normal, ungimmicky, unbullshitted, whole-foods diet. Is it ultraprocessed? Are there more weird-ass chemicals on the box's ingredient list than not? Can you make this yourself, in your kitchen, following a basic recipe? No? Then don't eat it.
Yes I agree. That's a very informative page that we can thank archive.org for saving!
Over the years I have come across a lot of sites & threads that go in-depth on certain chips, but knowing the amount of work it would take to sort through them, and the fact that sometimes the information isn't quite as accurate as they would lead you to believe, I never started to compile it all into one big list. But maybe I should start.
Here are a few more sites you might find interesting:
Sorry to hear you're disappointed and it didn't meet your HIGH level of expectations. As I stated, this was MY attempt to make the MOST COMPLETE list available of EVERY KNOWN custom chip by NAME for the Amiga, with a few more added in for good measure! Again, BY NAME!! Not perfect, but pretty damn complete.
It wasn't meant to be the most comprehensive list with every version, date of manufacture, plant of origin, pin-outs, known bugs, voltage & timing charts, die shots, etc... for each of them. Maybe someday, but not today...
Hopefully I made it easy enough for anyone to point out errors or to add a bit of information. Just click the 'COMMENT' link.
Now I'll ask YOU... Do you happen to know of a site that has a MORE COMPLETE listing? If so PLEASE do share or maybe you have a list of your own? I would LOVE to compare.
Just trying to do my SMALL part for the community as I have been doing for the past 22 years...
Oh, and to your point, how about this page for starters. Sadly it only exists on archive.org. (For reference see SOURCES #)
> The next conversation unfortunately only brought more red flags. The first hint of impacting mGBA development had dropped: suddenly they were talking about delaying an mGBA release for a nebulous amount of time, directly contrary to what had been discussed prior. [...] The next conversation was suddenly about delaying until after the Pocket was released. At no point was such a thing discussed prior, but it was worded like it was. This was explained as putting a little bit of extra time after the release, though the reason was left implied; presumably they didn’t want mGBA stealing the Pocket’s thunder, as though that were at all a realistic scenario. And the amount of extra time proposed? Six months.
> By now it was clear to me that they didn’t respect me at all and there was no truth to the claim of the job not impacting my open source work. It all seemed to point to them seeing me as a source of cheap labor and then didn’t care at all how it impacted me, so long as I did the work for them. [It] really reflects on how little Analogue seems to actually care about the retro emulation community as a whole. In conversations with other emulator developers over the past it was spelled out that kevtris thinks of FPGA-based hardware emulation as inherently superior to software emulation, and is plenty willing to keep research he does towards the goal of perfecting his hardware solutions private, all while claiming that not only is it not even emulation (with an asterisk of course), it’s also the only route to perfect emulation. Neither of these claims is true.
> When I asked kevtris if he would release all of the documentation he had on GB/GBA he had said yes, after the Pocket shipped, but I’ve yet to see him release any of the documentation he’d promised for other projects, such as SNES, which have had products on the market for years now. The most I’ve seen is extremely basic overviews of a handful of obscure GBA behavior that, while valuable, is assuredly a tiny fraction of what he has.
The emulator writer scene lives on back-chatter. Analogue isn't even the only one... ask some developers what they think of RetroArch, who simply bundle up emulator cores... You won't get pretty answers.
You can load Black Magic Probe firmware on a bluepill and have JTAG for about $2. I also ported it to nRF52840 if you have an extra dongle laying around. Or esp8266 for debugging over wifi (wireless JTAG is super useful sometimes.)
Segger has been really successful at marketing "JTAG == JLink" but it is just not true.
If you are in Japan, I would highly suggest you visit the Toyota museum in Nagoya. Toyota started as a loom manufacturer and the museum has a ton of working artifacts (I recall one is from the 1600s).
They have a tour that shows you the full evolution of the loom technology including one operating by punch cards.
I had no idea until I visited the museum, shortly before COVID.
"The original Remove Richard Stallman post contained leaked communications from a private mailing list. In it, the author quotes an email from Stallman where he explains that Marvin Minsky likely wouldn’t have known that the woman on Jeffrey Epstein’s island was coerced:
…the most plausible scenario is that she presented herself to him as entirely willing. Assuming she was being coerced by Epstein, he would have had every reason to tell her to conceal that from most of his associates.
A paragraph later, the author summarizes Stallman’s view as:
…he says that an enslaved child could, somehow, be “entirely willing”.
This is the opposite of what Stallman said, but this lie was repeated by the press. An article in the Daily Beast said:
Stallman wrote that “the most plausible scenario” for Giuffre’s accusations was that she was, in actuality, “entirely willing.”
An article in Vice spread the same lie:
Early in the thread, Stallman insists that the “most plausible scenario” is that Epstein’s underage victims were “entirely willing” while being trafficked.
There are two possibilities here. Either the author of the Medium post was not capable of correctly parsing the sentence, or she didn’t care about truth and was leveling as many accusations as possible in the hope that one would stick. In other words: she is either foolish or malicious. The same goes for the writers of the Vice and Daily Beast articles. To describe what they did as journalism would be an insult to journalists."
"By satisfying the mob today, we are sacrificing our future. That’s the real risk."
> Im more curious what tiktok data amouts to and how it could be used by CCP
Let's pretend I hire two people: The first is a private detective, the other is an entertainer.
The private detective is able to follow you around and watch everything you do. They know when you eat, when you sleep, and even when you poop. They know who you talk to and for how long. They know everything that interests you and bores you, down to the microsecond (knowing what makes you pause). The PI gets to know you pretty well and honestly, probably better than many close friends.
The entertainer is your main source of entertainment. They offer a wide variety of things and they're highly addictive and prevent you from being bored. Since they are a major part of your day, they are a major influence of the information that you consume. Be this in comedy, politics, academic information, or whatever. That all depends on what interests you that day.
Now I've hired these two people and am able to direct them. The PI is my eyes and ears; the entertainer is my hands. If I want to make the most profit off of you, I can make deals with McDonald's and have the entertainer influence you that way (maybe do comedy bits about burgers), while the PI tracks how interested you are and how influential the entertainer is, allowing us to refine our techniques on a personal level. On the other hand, if I am interested in politics I can do the same. I know what makes you afraid. I know what makes you sad. I know what makes you angry. I know what makes you feel good.
Now we're just looking at you, a single person here. But I have a billion PIs and entertainers. I know all your friends, family members, and even your crushes. I know how close all these bonds are because I'm doing to them what I am doing to you. Consider this and then tell me that I don't have influence over you. I am a significant part of your environment. You may have free will, but you are also a product of your environment.
> My naive assumption is that Tiktok is a meme sharing platform and timesink
And that's why I have influence over you. The less important you think it is, the more influence I have since your guard is down. Same way comedians crack people up.
TBP is now proxy software that anyone can install[0] which talks to a central API[1] for torrent listings. Each provider can tailor the experience and add ads or donation buttons as they desire. You can search for "TPB proxy list"[2] to find a list of TPB sites. I currently use tpb.party which I find the fastest and least intrusive.
For those wanting to explore and learn about this type of hardware attack, check out a relatively new book published by "No Starch Press" called "The Hardware Hacking Handbook" [1].
Play around with fault injection and differential power analysis using easy-to-obtain hardware such as a Raspberry Pi.
A lot of Linux people have the impression that LibreSSL is largely incompatible with OpenSSL (not true), that the ABI breaks every six months (not true), or that it requires heavy patching of downstream software to maintain (not true anymore).
Years ago there was also a big article from Alpine, one of the distros that tried to switch to it and had to switch back. That now-outdated article seems to be the main citation for those opposed to even giving LibreSSL a chance now. In fact, Alpine is reconsidering a switch away from OpenSSL after the 3.x branch was shown to be such a disaster.
It's a standard thing to do in EE curricula; you normally do it in a one-semester class, and there are literally thousands of open-source synthesizable CPU cores on GitHub now. Some one-semester classes go so far as to design ASICs and, if they pass DRCs, get them fabbed through something like MOSIS or CMP.
To take three examples to show that designing a CPU is less work than writing a novel:
In all three cases, this doesn't include testbenches and other verification work, but as I understand it, that's usually only two or three times as much work as the logic design itself.
Maybe we should have a NaCpuDeMo, National CPU Design Month, like NaNoWriMo.
I haven't quite done it myself. Last time I played https://nandgame.com/ it took me a couple of hours to play through the hardware design levels. But that's not really "design" in the sense of defining the instruction set (which is, like Thacker's design, kind of Nova-like), thinking through state machine design, and trying different pipeline depths; you're mostly just doing the kind of logic minimization exercises you'd normally delegate to yosys.
In https://github.com/kragen/calculusvaporis I designed a CPU instruction set, wrote a simulator for it, wrote and tested some simple programs, designed a CPU at the RTL level, and sketched out gate-level logic designs to get an estimate of how big it would be. But I haven't simulated the RTL to verify it, written it down in an HDL, or breadboarded the circuit, so I'm reluctant to say that this qualifies as "designing a single CPU" either. (Since it's not 01982 anymore maybe you should also include a simple compiler backend before you say a new ISA is really designed?)
But I also wouldn't say I'm "well versed in the topic". I can say things about what makes CPUs fast or slow, but I don't know them from my own experience; I'm mostly just repeating things I've heard from people I judge as credible on CPU design. But what is that credibility judgment based on? How would I know if I was just believing a smooth charlatan who doesn't really know any more than I do? And I think Rob is in the same situation as I am, just worse, because he has even less experience.
Re "whether this is important in real world scenarios", in the context of high-end audio, there are four types of products:
1. Those that make clearly audible and measurable differences. Speakers, headphones, and upgrading from total rock-bottom near-broken electronics.
2. Those that make measurable differences that probably aren't audible. Various super-low distortion figures on amps and DACs, or going from 320kbps MP3 to FLAC. Almost certainly not detectable by human ears, but instruments can see a real, measurable improvement.
3. Those that make unmeasurable differences that are not audible. These are obviously worthless and snake oil.
4. Those that make unmeasurable differences that are audible. These are either magic (none that I've seen so far), reveal limits to our previous understanding of audio measurement (these probably have existed at some point, but I'm skeptical that any current products qualify), or are actually #3 in disguise.
I think the worst you can say about Schiit is that their stuff falls into #2 -- measurably, but not audibly, an improvement over some other thing -- and since they relentlessly refuse to make official claims about the audible properties of their products, I think that puts them into solid respectability as an audio company. The worst you can accuse them of, really, is unnecessary metaphorical gold-plating.
[1]: https://cwe.mitre.org/top25/archive/2024/2024_cwe_top25.html
[2]: https://cwe.mitre.org/top25/archive/2024/2024_kev_list.html