
No numbers in Chrome on iPhone, fyi.

I can’t tell if you’re joking, because I’m looking at the site in Chrome on iPhone and there are numbers.

So is there a list of the most popular apps that made use of the infected lotusbail npm package?

NPM shows 0 dependents among public packages. The 56k downloads figure could easily have been gamed by automation, and is therefore not a reliable signal of popularity.

If you like playing with old hardware, be aware that old CRTs have a gotcha that can getcha: they hold a charge that can shock you across the room, and they can hold that charge for weeks or more. Google how to discharge it before poking around in a CRT.

When I was about 12 I got an old TV in my room which I of course decided to take apart to figure out how it worked.

I was VERY smart and of course unplugged the TV before doing anything.

My flathead screwdriver brushed against the wrong terminal in the back, I was literally thrown several feet across the room, and the screwdriver was no longer usable: the tip had deformed and slightly melted.

I later found an electronics book that had a footnote mentioning grounding out the tube before going near it…


How does an electric shock throw someone across the room? What's the mechanism for the push?

I know a shock can paralyze (by contracting the muscles) and it can burn (by the Joule effect), but I've never seen one push.


AC current paralyzes by alternately contracting and relaxing your muscles, 60 times per second. This tends to lock you in place because the electricity is a higher voltage than your nerves and overrides any command you send every 60th of a second. It could take you several minutes to die, and you will be suffering in pain and terror the whole time as you are unable to let go…

DC current jolts you “across the room“ by contracting your muscles all at once. Of course the exact effect depends on your posture; sometimes it just makes you stand upright or pull your arms in. This tends to disconnect you from the source of the electricity, limiting the damage. Note that if you cannot actually jump all the way across the room then the jolt probably can’t knock you all the way across the room either. If you fall over your head could end up pretty far away from where it started, though, and if you lose consciousness even for a little while then that can affect your perception too. It could certainly throw the screwdriver all the way across the room.

If you pay attention to the special effects that show up in movies and television you’ll soon realize that they simulate shocks by putting the actor in a harness and then pulling on it suddenly. This sudden movement away from the source of the “shock” stops looking very convincing when you notice that the movement starts at their torso rather than in their legs and arms.


I have been electrocuted twice: once as a kid (which I don't remember, but my parents reminded me) and once as a teenager, which I definitely remember. My country's mains voltage was 240 volts at 50 Hz. I remember screaming uncontrollably as the current flowed through my arm and chest, but I managed to drop the live wire. The floor was parquet: wood.

Ouch, that is lucky.

I remember putting some keys into an electrical socket when I was quite young. My hand must have bridged live and neutral, so the current only flowed from thumb to forefinger rather than through my chest to my feet. But it was accompanied by a flash of light and an arc that I saw as a forked tongue. I told my mom that it had bitten me :)


That's fascinating, thanks for taking the time to write this

Also, what a horrifying way to die.


Agreed. I’m with Quark; I want to wake up in Heaven and have no idea how I got there.

Typically it forces your leg muscles to contract as the current flows to ground and you literally kick yourself into the air.

Exact same thing happened to me as a child. I do not remember the event, but I do remember waking up on the other side of the room.


"by contracting the muscles"

Yeah the tube is essentially a large capacitor :P

I also learned electronics by shocking myself often


It's not the tube (which is just a chamber for an electron gun); it's the high-voltage capacitors used to hold charge for the supply driving the electron gun.

It's the tube itself which forms the capacitor. I'm not aware of any sets that used a separate capacitor across the final anode voltage.

You either learn by shocking yourself, or die trying.

The survival selection is real in electronics.


Not just shock you across the room, but shock you straight into your next life.

So we are all in post-tube life?

I have a vague recollection that my little cousin was nearly ended when he managed to destabilize the stand that a CRT was sitting on, and it fell just behind him, but I may be entirely hallucinating that memory.

Regardless, there are multiple ways old CRTs can cause great harm.


My family bought our first one and we used to keep it on a carpeted floor. Boy, was that an electrifying experience.

Same goes for your microwave.

I filed my collaborative filtering patent in 1995, describing the basic way that your “desert island 5 favorite albums” list, and the 5-favorites lists from many other people, could be used to recommend music you would like. The patent is a nice tutorial on how it is done; check it out here if you’d like:

https://patentimages.storage.googleapis.com/9d/f9/19/08ac5ef...
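
If it helps, here is a rough Python sketch of the general user-based idea (toy album names and a plain overlap similarity; this is the textbook flavor of the technique, not the specific method claimed in the patent):

    # Toy sketch of "favorites list" collaborative filtering.
    # Album names and the similarity/scoring choices are illustrative only.
    from collections import Counter

    favorites = {
        "you":   {"Kind of Blue", "OK Computer", "Abbey Road", "Blue", "Rumours"},
        "alice": {"Kind of Blue", "OK Computer", "In Rainbows", "Blue", "Aja"},
        "bob":   {"Abbey Road", "Rumours", "Pet Sounds", "Tapestry", "Blue"},
        "carol": {"Nevermind", "Ten", "Dirt", "Superunknown", "Vs."},
    }

    def jaccard(a, b):
        # Overlap between two favorites lists: 0.0 (nothing shared) to 1.0 (identical).
        return len(a & b) / len(a | b)

    def recommend(user, k=3):
        mine = favorites[user]
        scores = Counter()
        for other, theirs in favorites.items():
            if other == user:
                continue
            sim = jaccard(mine, theirs)
            # Albums a similar user loves that you haven't listed yet,
            # weighted by how similar your lists are.
            for album in theirs - mine:
                scores[album] += sim
        return scores.most_common(k)

    print(recommend("you"))
    # Albums from the users with overlapping lists rise to the top,
    # e.g. "In Rainbows", "Aja", "Pet Sounds" (ties broken arbitrarily).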

Here is part of the story on my website… I’ll write it up better one of these days:

https://www.whiteis.com/similarities-engine

Yeah yeah it was a software patent. If that bugs you, you can take solace in the fact that I blew it executing on monetizing it. Microsoft ended up owning it and I went on to other adventures.

Here’s a list of the 456 US Patents that cite the Similarities Engine patent as prior art: https://www.whiteis.com/cites-to-se-patent


It is not clear whether using the ring requires you to wear a Pebble watch. Does it work with an Apple Watch instead? Maybe add that to your FAQ, OP.


First let’s have it create maybe 100 more entries, then have people vote on which are the best 30, THEN put all the effort into creating all the fake articles and discussions. As good as the current 30 are, maybe the set could still be made twice as good. And have a set of short “explain xkcd”-style entries somewhere so people can read up on what the joke is, when they miss a specific one. Then send it to The Onion and let them make a whole business around it or something.

Definitely one of the best HN posts ever. I mean come on!:

FDA approves over-the-counter CRISPR for lactose intolerance (fda.gov)


Save some of the not-top-30 posts, and add in a sprinkling of hiring posts, Show HNs, YC Summer 2035 acceptances/rejections, or product launches from founders who just vibe-coded something based on what this future HN universe presumably looked like six weeks earlier.


That one's a bit optimistic for the FDA.

But it nailed fusion and Gary Marcus lesssgoo


The Gary Marcus headline is perfect.


> …reused its embedding matrix as the weights for the linear layer that projects the context vectors from the last Transformers layer into vocab space to get the logits.

At first glance this claim sounds airtight, but it quietly collapses under its own techno-mythology. The so-called “reuse” of the embedding matrix assumes a fixed semantic congruence between representational space and output projection, an assumption that ignores well-known phase drift in post-transformer latent manifolds. In practice, the logits emerging from this setup tend to suffer from vector anisotropification and a mild but persistent case of vocab echoing, where probability mass sloshes toward high-frequency tokens regardless of contextual salience.

Just kidding, of course. The first paragraph above, from OP’s article, makes about as much sense to me as the second one, which I (hopefully fittingly in y’all’s view) had ChatGPT write. But I do want to express my appreciation for being able to “hang out in the back of the room” while you folks figure this stuff out. It is fascinating, I’ve learned a lot (even got a local LLM running on a NUC), and it has been very much fun. Thanks for letting me watch; I’ll keep my mouth shut from now on, ha!


Disclaimer: working and occasionally researching in the space.

The first paragraph is clear linear algebra terminology; the second looked like deeper subfield-specific jargon, and I was about to ask for a citation, since the words are definitely real but the claim sounded hyperspecific and unfamiliar.

I figure a person needs 12 to 18 months of linear algebra, enough to work through Horn and Johnson's "Matrix Analysis" or the more bespoke volumes from Jeffrey Humpherys, to get the math behind ML. Not necessarily to use AI/ML as a tech, which really can benefit from the grind towards commodification, but to be able to parse the technical side of about 90 to 95 percent of conference papers.


One needs about 12 to 18 hours of linear algebra to work though the papers, not 12 to 18 months. The vast majority of stuff in AI/ML papers is just "we tried X and it worked!".


You can understand 95+% of current LLM / neural network tech if you know what matrices are (on the "2d array" level, not the deeper lin alg intuition level), and if you know how to multiply them (and have an intuitive understanding why a matrix is a mapping between latent spaces and how a matrix can be treated as a list of vectors). Very basic matrix / tensor calculus comes in useful, but that's not really part of lin alg.
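
Here's a tiny numpy sketch of that "map between spaces / list of vectors" intuition, in case it helps (the sizes are made up):

    import numpy as np

    rng = np.random.default_rng(0)

    d_in, d_out = 4, 3                       # made-up sizes for the two spaces
    W = rng.standard_normal((d_in, d_out))   # the matrix: a map from R^4 to R^3
    x = rng.standard_normal(d_in)            # a vector in the input space

    y = x @ W                                # x mapped into the output space

    # Same computation viewed "as a list of vectors": each row of W is a vector
    # in the output space, and y is just the weighted sum of those rows, with
    # the entries of x as the weights.
    y_again = sum(x[i] * W[i] for i in range(d_in))
    print(np.allclose(y, y_again))           # True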

There are places where things like eigenvectors / eigenvalues or svd come into play, but those are pretty rare and not part of modern architectures (tbh, I still don't really have a good intuition for them).


I was about to respond with a similar comment. The majority of the underlying systems are the same and can be understood if you know a decent amount of vector math. That last 3-5% can get pretty mystical, though.

Honestly, where stuff gets the most confusing to me is when the authors of the newer generations of AI papers invent new terms for existing concepts, and then new terms for combining two of those concepts, then new terms for combining two of those combined concepts and removing one... etc.

Some of this redefinition is definitely useful, but it turns into word salad very quickly, and I don't often feel like teaching myself a new glossary just to understand a paper whose concepts I probably won't use.


This happens so much! It’s actually imo much more important to be able to let the math go and compare concepts vs. the exact algorithms. It’s much more useful to have semantic intuition than concrete analysis.

Being really good at math does let you figure out if two techniques are mathematically the same but that’s fairly rare (it happens though!)


> There are places where things like eigenvectors / eigenvalues or svd come into play, but those are pretty rare and not part of modern architectures (tbh, I still don't really have a good intuition for them)

This stuff is part of modern optimizers. You can often view a lot of optimizers as doing something similar to what is called mirror/'spectral descent.'


Indeed. "Spectral" describes the collection of eigenvalues!


Eigenvector/eigenvalue: a direction a matrix only stretches (without rotating it), and the amount of the stretch.
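
For example, with an arbitrary symmetric matrix in numpy:

    import numpy as np

    A = np.array([[2.0, 1.0],
                  [1.0, 2.0]])               # arbitrary symmetric example

    eigvals, eigvecs = np.linalg.eig(A)
    lam = eigvals[0]                          # an eigenvalue...
    v = eigvecs[:, 0]                         # ...and its eigenvector (a column)

    # Applying A to v doesn't change its direction, it only scales it by lam.
    print(np.allclose(A @ v, lam * v))        # True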


for anyone looking to get into it, mathacademy has a full zero-to-everything-you-need pathway that you can follow to mastery

https://mathacademy.com/courses/mathematics-for-machine-lear...


There is no mention of LLMs there?


if you want to use llms, just download one and play with it. if you want to understand llms enough to push research forward, learn the underlying math


100%


OP here -- agreed! I tried to summarise (at least to my current level of knowledge) those 12-18 hours here: https://www.gilesthomas.com/2025/09/maths-for-llms


> 12 to 18 months of linear algebra

Do you mean full-time study, or something else? I’ve been using inference endpoints but have recently been trying to go deeper and struggling, but I’m not sure where to start.

For example, when selecting an ASR model I was able to understand the various architectures through high-level descriptions and metaphors, but I’d like to have a deeper understanding/intuition instead of needing to outsource that to summaries and explainers from other people.


I was projecting that as classes taken across 2 to 3 semesters.

You can gloss the basics pretty quickly from things like Khan Academy and other sources.

Knowing linalg doesn't guarantee understanding modern ML, but if you then go read seminal papers like "Attention Is All You Need", you have a baseline to dig deeper.


It's just a long winded way of saying "tied embeddings"[1]. IIRC, GPT-2, BERT, Gemma 2, Gemma 3, some of the smaller Qwen models and many more architectures use weight tied input/output embeddings.

[1]: https://arxiv.org/abs/1608.05859
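
A minimal numpy sketch of what that output-side reuse looks like (toy shapes and names; real models share the parameter tensor rather than materializing a transpose, and the vector being projected would be the final transformer layer's context vector, here it's just a random stand-in):

    import numpy as np

    rng = np.random.default_rng(0)

    vocab_size, d_model = 10, 4
    E = rng.standard_normal((vocab_size, d_model))   # embedding matrix

    # Stand-in for the final-layer context vector, shape (d_model,).
    h = rng.standard_normal(d_model)

    # Untied: you'd learn a separate (d_model, vocab_size) output projection.
    # Tied: reuse the embedding matrix itself, i.e. project with E transposed.
    logits = h @ E.T                                  # shape (vocab_size,)

    probs = np.exp(logits - logits.max())
    probs /= probs.sum()                              # softmax over the vocabulary
    print(logits.shape, round(float(probs.sum()), 6)) # (10,) 1.0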


The turbo encabulator lives on.


It's a 28 part series. If you start from the beginning, everything is explained in detail.


As somebody who understands how LLMs work pretty well, I can definitely feel your pain.

I started learning about neural networks when Whisper came out, at that point I literally knew nothing about how they worked. I started by reading the Whisper paper... which made about 0 sense to me. I was wondering whether all of those fancy terms are truly necessary. Now, I can't even imagine how I'd describe similar concepts without them.


I consider it a bit rude to make people read AI output without flagging it immediately.


I'm glad I'm not the only one who has a Turbo Encabulator moment when this stuff is posted.


I was reading this thinking "Holy crap, this stuff sounds straight out of Norman Rockwell... wait, Rockwell Automation. Oh, it actually is"


The second paragraph is highly derivative of the adversarial turbo encabulator, which Schmidhuber invented in the 90s. No citation, of course.


Are you saying I should have attributed, or ChatGPT should have? I suppose I would have but my spurving bearings were rusty.


I have no idea what you’ve just said, so here is my upvote.


Ha! Just experienced this. It was very frustrating.


They really need to add a "punish the LLM" button.


Some services have the down thumb


I need something stronger than that.


I don’t understand. The link opens to a web page, and the download link is clearly labeled as a PDF. Why the warning? And why warn about PDFs in general? Have they been carrying zero-day embedded malware lately or something?


This comment chain gave me a fun idea to lightly troll people: just comment "Caution: <file format or file type>" on a thread with no further explanation and gaslight people into thinking there is some problem.


Afterwards, you can run into a theater and yell “fire!”


What a wild concept in this case:

With a little effort and research, someone could come up with a reasonable estimate that reads something like, “a typical 15-year-old reading through this comic once in a typical way would have cost the family X dollars”, and X might literally be $100k. Certainly well over $10k.


A careless 15-year-old would knock millions off the value.

