Hacker News | dinkelberg's comments

Not an example, but maybe this is interesting for folks who haven't really heard of the peptide business before. https://www.theguardian.com/wellness/2026/feb/05/injectable-...

Clickbait title. He didn't invent OnlyFans. He created a similar site which failed.


I think the punctuation makes it clear -- imagine "How I invented Facebook. In 2001." The full stop in the middle of the sentence breaks it and makes you realise he's speaking figuratively.


If the comma were strictly a pause for effect, sure. But it's grammatically correct and I would never have read it the way you suggested.


The article by mathematician John Kemeny, who among other things was an assistant to Albert Einstein at the IAS, describes four methods of applying mathematics to problems that are not innately about numbers (algebraic) or space (geometric). He divides such methods along two axes: whether they a) use no numbers or b) introduce artificial numbers, and whether they rely on 1) algebra or 2) geometry.

For geometry not using numbers, he shows how graph theory can be applied to the problem of social balance as defined by psychologist Fritz Heider. This example is based on work by Dorwin Cartwright and Frank Harary.

For algebra not using numbers, he chooses the theory of group actions, and applies it to a way of preventing incestuous relationships that was used in some cultures, which works by assigning each child a group that they are exclusively allowed to marry in. This example is based on work by André Weil and Robert R. Bush.

For geometry using numbers, he uses an adjacency matrix to show how to count the number of ways to send a message from one person to another in a network.
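The adjacency-matrix trick can be sketched in a few lines: the (i, j) entry of the k-th power of the matrix counts the walks of length k from i to j. The network below is a made-up example, not one from the article.

```python
# Toy 4-person network (hypothetical, not taken from Kemeny's article):
# A[i][j] = 1 if person i can send a message directly to person j.
A = [
    [0, 1, 1, 0],
    [1, 0, 1, 1],
    [1, 1, 0, 1],
    [0, 1, 1, 0],
]

def matmul(X, Y):
    """Multiply two square matrices of the same size."""
    n = len(X)
    return [[sum(X[i][k] * Y[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

# The (i, j) entry of A^k counts the walks of length k from i to j,
# i.e. the ways to relay a message through k - 1 intermediaries.
A2 = matmul(A, A)
A3 = matmul(A2, A)

print(A2[0][3])  # 2: two-step relays from person 0 to person 3
print(A3[0][3])  # 2: three-step relays from person 0 to person 3
```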

For algebra using numbers, he defines axioms for a distance function for rankings with ties, which can be shown to be unique (probably up to some isomorphism), and which can be used to derive a consensus ranking from a set of rankings. This appears to be the central piece of the article, as it is an example that he developed himself together with J.L. Snell and which had yet to be published.


Please don't do this here. Article summaries have always been eschewed on HN.


I would have liked a summary before reading.

Why is writing a summary a bad thing?


Because HN readers can't know if the summary is an accurate representation of the original article, nor what detail or nuance has been winnowed out in the summarizing process. But if there is a summary that seems "good enough" to form an opinion, then the discussion on HN will be based on the summary, not on the complete article. We see the same thing with editorialized titles.

A better way to get a taste of the article is to look over the HN discussion. The top comment(s) should give people a hint as to what it's about and whether it's worth the time to read the whole thing. Otherwise, just reading the HN discussion should be a good way to get the gist of it. But that only works if enough of the commenters have actually read the whole article rather than a summary.


Kemeny is an interesting fellow. He is part of the duo responsible for the BASIC language (at Dartmouth).

I found his book "Man and the Computer" particularly prescient.

https://en.wikipedia.org/wiki/John_G._Kemeny

https://archive.org/details/mancomputer00keme


Aren’t many algebraic results dependent on counting/divisibility/primality etc.?

Numbers are such a fundamental structure. I disagree with the premise that you can do mathematics without numbers. You can do some basic formal derivations, but you can’t go very far. You can’t even do purely geometric arguments without the concept of addition.


Addition does not require numbers. It turns out, no math requires numbers. Even the math we normally use numbers for.

For instance, here is commutativity defined on addition over non-numbers a and b:

a + b = b + a

What if you add a twice?

a + a + b

To do that without numbers, you just leave it there. Given commutativity and associativity, you probably want to normalize (or standardize) expressions so that equal expressions end up looking identical: for instance, by grouping repetitions of the same element together and ordering distinct elements in a standard way (a before b):

i.e. a + b + a => a + a + b

Here I use => to mean "equal, and preferred/simplified/normalized".

Now we can easily see that a + b + a and b + a + a are equal, since both normalize to a + a + b.

You can go on, and prove anything about non-numbers without numbers, even if you normally would use numbers to simplify the relations and proofs.

Numbers are just a shortcut for dealing with repetitions, by exploiting the common structure of, say, a + a + a and b + b + b. But if you do non-number math with those expressions, they still work. Less efficiently than if you can unify triples with a number 3, i.e. 3a and 3b, but by definition those expressions are respectively equal (a + a + a = 3a, etc.) and so still work. The answer will be the same, just more verbose.
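A tiny sketch of this normalization idea (my own illustration, not a standard formalism): represent a formal sum as a list of symbols, normalize by sorting, and note that numbers only enter as a compression of the repetitions.

```python
from collections import Counter

def normalize(terms):
    # Sorting is justified by commutativity and associativity:
    # reordering a formal sum never changes its value.
    return sorted(terms)

# a + b + a and b + a + a normalize to the same form, so they are
# equal without ever counting anything.
assert normalize(["a", "b", "a"]) == normalize(["b", "a", "a"])

# Numbers enter only as a compression of repetition: Counter collapses
# a + a + b into {a: 2, b: 1}, i.e. the shorthand "2a + b".
print(Counter(["a", "b", "a"]))
```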


>Numbers are just a shortcut for dealing with repetitions

An interesting explanation, I think I agree


That is not really a very deep result.


Lena Söderberg expressed her wish for her image to be "retired from tech" in 2019 (see the end of this clip, https://vimeo.com/372265771), when the above alternative image was published.


According to that blog post (https://security.googleblog.com/2024/09/eliminating-memory-s...), the vulnerability density for 5 year old code in Android is 7.4x lower than for new code. If Rust has a 5000 times lower vulnerability density, and if you imagine that 7.4x reduction to repeat itself every 5 years, you would have to "wait" (work on the code) for... about 21 years to get down to the same vulnerability density as new Rust code has. 21 years ago was 2004. Android (2008) didn't even exist yet.
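A quick back-of-the-envelope reproduction of that arithmetic, taking the 7.4x-per-5-years decay and the 5000x Rust figure as given (both are assumptions carried over from the comment, not measurements):

```python
import math

# Assumed inputs: new Rust code has 5000x lower vulnerability density,
# and aging C/C++ code improves by 7.4x every 5 years.
target_ratio = 5000
decay_per_period = 7.4
years_per_period = 5

# Solve decay_per_period ** periods == target_ratio for periods.
periods = math.log(target_ratio) / math.log(decay_per_period)
years = periods * years_per_period
print(round(years, 1))  # ≈ 21.3 years
```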


Remember that there are other types of vulnerabilities too. If there are fewer of those in old code, that may make up for the larger number of memory issues.


If you want to keep XSLT in browsers alive, you should develop an XSLT processor in Rust and either integrate it into Blink, Webkit, Gecko directly, or provide a compatible API to what they use now (libxslt for Blink/Webkit, apparently; Firefox seems to have its own processor).


There's no need; I believe they already have a polyfill for XSLT, which they could ship as part of the browser. Or compile libxslt to WebAssembly.


> you should

or a multi-trillion dollar company should.


Fair point. But probably not going to happen...


Counter question: How do you know it works?

A file manager better be rock solid, I don't want a bug to delete any files or do other shenanigans.


That is a valid question.

But that would apply to any app that deals with files like this one does.

This one is open source and we can run some code analysis on it, compile locally, etc. I am not well versed in security checks but I guess you get the idea.


For those who want to use Google's Android File Transfer app for Mac, which for some reason isn't regularly available from Google anymore, it's still available by direct download: https://dl.google.com/dl/androidjumper/mtp/current/AndroidFi...


What really grinds my gears is that I have devices that only work with AFT and not OpenMTP, like my Hisense A9. AFT will crash if you try to transfer hundreds of files, so I wish I could get rid of it, but I can't.

I also have a usb-c flash drive for copying as well.

Amazon has a great MTP app but it only works with Kindles.


Thank you, I've been looking for this for years.


Does the app itself still work?


I used it a year ago with macOS 14 or 15 and it worked. I had problems copying too many files at once (I don't remember the exact issue), which is why I only copy about 100 at a time.

Your mileage may vary.


Are you speaking of chroma subsampling, or is there a property of the discrete cosine transform that makes it more effective on luma rather than chroma?


Probably chroma subsampling: storing color at lower resolution than luminance to take advantage of the aforementioned sensitivity difference. Since each chroma channel is stored at 1/4 resolution, this alone can almost halve the file size.
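The arithmetic behind that claim, assuming 4:2:0 subsampling (both chroma channels at quarter resolution) and counting raw samples before any compression:

```python
# Rough sample accounting for 4:2:0 chroma subsampling. This counts raw
# samples only, before DCT or entropy coding, so it's an illustration of
# the halving claim rather than a statement about final JPEG file sizes.
width, height = 1920, 1080

full_rgb = 3 * width * height               # three full-resolution channels
luma = width * height                        # Y at full resolution
chroma = 2 * (width // 2) * (height // 2)    # Cb and Cr at 1/4 resolution each

subsampled = luma + chroma
print(subsampled / full_rgb)  # 0.5: half the samples before compression
```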

Saying it’s the insight that led to JPEG seems wrong, though, as DCT + quantization was (don’t quote me on this) the main technical breakthrough?


Chroma subsampling was developed for TV, long before JPEG.


Would you tolerate using DANE?

