Before LLMs (not a hard-set order): IDE/interface -> Stack Overflow -> Docs -> Library code -> GitHub

LLMs now slot in first or second, typically completely eliminating SO. Others still provide value.


Woah so I bet you are an LLM power user


One underdiscussed advantage is that an LLM makes knowledge language agnostic.

While this is less obvious to people who primarily consume en.wiki (as most things are well covered in English), for many other languages even well-understood concepts often have poor pages. And even the English wiki has large gaps that are covered in other languages (people and places, mostly).

LLMs get you the union of all of this, in turn viewable through arbitrary language "lenses".


Google will also have good results to report for this year's IMO, OpenAI just beat them to the announcement


I think Google did some official collaboration with the IMO and will announce later. Or at least that's what I read into the IMO official saying "AI companies should wait 1 week before announcing so that we can celebrate the human winners" and "to my knowledge OAI was not officially collaborating with IMO" ...


The conclusion is that research takes time to productize, and this is cutting-edge research. OAI employees stated that there isn't anything math-specific (think AlphaGeometry) about this model. It's a general system.


Honestly might be more indicative of how far behind vision is than anything.

Despite the fact that CV was the first real deep-learning breakthrough, VLMs have been really disappointing. I'm guessing it's in part because basic interleaved web text+image next-token prediction is a weak signal for developing good image reasoning.


Is anyone trying to solve OCR? I often think of that annas-archive blog post about how we basically just have to keep shadow libraries alive long enough until the conversion from PDF to plaintext is solved.

https://annas-archive.org/blog/critical-window.html

I hope one of these days one of these incredibly rich LLM companies accidentally solves this or something; it would be infinitely more beneficial to mankind than the awful LLM products they are trying to make.


You may want to have a look at Mistral OCR: https://mistral.ai/news/mistral-ocr
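
Untested sketch of what calling it from Python looks like, based on their announcement (the model name, client surface, and response shape are my assumptions):

    # Rough sketch: PDF -> markdown via Mistral OCR.
    import os
    from mistralai import Mistral

    client = Mistral(api_key=os.environ["MISTRAL_API_KEY"])

    resp = client.ocr.process(
        model="mistral-ocr-latest",
        document={
            "type": "document_url",
            "document_url": "https://example.com/scan.pdf",  # placeholder
        },
    )

    # Output comes back per page as markdown; stitch the pages together.
    print("\n\n".join(page.markdown for page in resp.pages))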


I'm assuming he means the "generate an image and order 500 stickers" one.


I've always wanted a sort of "semantic image store" that I can dump all my photos into and then search for content in English or by similarity metrics.

Have you played around with anything like that? Seems like a locally running CLIP model could do the job.


It's not exactly plain English, but tools like Photoprism run tagging models on their servers so you can search your pictures.


Thanks, I didn't know about Photoprism! My phone seems to do similar auto-tagging, but I found it not flexible enough.

Honestly, I might do a CLIP-powered version myself, something like the sketch below. I only need image ("similar picture") and language search; it doesn't seem that difficult.
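
A minimal version with sentence-transformers' CLIP wrapper (untested; the model name and paths are placeholders):

    # Local "semantic image store": embed photos once with CLIP, then
    # search by English text or by example image against the same index.
    from pathlib import Path
    import numpy as np
    from PIL import Image
    from sentence_transformers import SentenceTransformer

    model = SentenceTransformer("clip-ViT-B-32")  # runs locally

    # CLIP maps images and text into the same embedding space,
    # so one index serves both query types.
    paths = sorted(Path("photos").glob("*.jpg"))
    index = model.encode([Image.open(p) for p in paths],
                         normalize_embeddings=True)

    def search(query, k=5):
        # query: an English string, or a PIL image for "similar picture"
        q = model.encode([query], normalize_embeddings=True)
        scores = index @ q.T  # cosine similarity (embeddings are normalized)
        return [(paths[i], float(scores[i, 0]))
                for i in np.argsort(-scores[:, 0])[:k]]

    print(search("dog on a beach"))
    print(search(Image.open("photos/reference.jpg")))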


The ICEs are already plenty fast; the issue is that they share rail with the much slower and less reliable RBs. Any delay cascades, and you can't just make ICEs go faster to catch up.

On a €/(avg. ICE speed) basis, it likely makes more sense to invest directly in the RBs.


While DB is obviously involved, this test train included cars from a new design that Siemens is primarily aiming at the export market (e.g. US Brightline West, various projects in Asia, ...).


Yea. Who cares if you can hit 405 km/h if you are just going to get stuck for two hours behind a freight train, unable to move.


Still working on https://periplus.app, and recently started to see some traction.

It's an environment for open-ended learning with LLMs. Something like a personalized, generative Wikipedia. It offers generated courses, documents, exams, and flashcards.

Each document links to more documents, which are all stored in a graph you grow over time.
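
In rough pseudocode the structure is something like this (field names invented for illustration, not the actual schema):

    # Hypothetical shape of the document graph; names are illustrative only.
    from dataclasses import dataclass, field

    @dataclass
    class Doc:
        id: str
        title: str
        body: str                       # generated document text
        links: list[str] = field(default_factory=list)  # ids of linked docs

    graph: dict[str, Doc] = {}          # grows as you follow links over time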


This is great. I love this concept. Built something similar myself a few months back (just the course generation part): https://quickguide.site/

A few courses I generated using the above:

- https://dev.to/freakynit/network-security-cdn-technologies-a...

- https://dev.to/freakynit/aws-networking-tutorial-38c1

- https://dev.to/freakynit/building-a-minimum-viable-product-m...

- https://dev.to/freakynit/startup-metrics-5ed7


Supremely impressive, and I lean a bit towards the more AI-hesitant side.


I tried to get it to generate a foreign-language reading-comprehension course (and even included custom instructions to make the course generate reading-comprehension passages to emulate a test), but it just generated a course about _how_ to effectively read different kinds of texts, without actually generating the foreign-language passages themselves.


Yeah, it doesn't work for generating language-learning content yet. Something more aligned with what you'd find on Wikipedia tends to work best.

I'm thinking you could have it in the same interface eventually, but right now all the machinery & prompts assume decomposable, declarative knowledge.


Wow, I just tried this. Absolutely fantastic. I really hope you take this all the way; I will be sharing it with friends!


Edit: upgrading my review from fantastic to probably one of the best first experiences I've had with an LLM app. You got my money!

Do you have any socials? Would love to keep up with updates about this project


Thanks for the positive feedback (and the sub)!! Means a lot.

No socials so far, as I've mostly been posting updates on the Anthropic Discord. But I made an X account for it just now (@periplus_app) where I'll mirror the updates.

You can also reach me any time by email for bug reports, feature requests, etc.


Beeminder: Personal accountability through commitment contracts. It helps me stay on track with my goals, and often serves as a little extra "push" to do something useful even if I'm really low on willpower that day.

Anki: Maybe not underrated, but it seems like it only really took off in language-learning circles. I create a card for anything new that I'd like to retain, and have been doing so for almost 10 years now. It really multiplies the long-term value of sitting down and learning, since I can be relatively certain I'll keep the knowledge with me for a long time. Particularly useful for papers.


Yeah, I've heard about Anki.

