Hacker News | proof_by_vibes's comments

I'll take three hours instead and hop on a train to get brunch.

I agree that it is relative, but disagree with your conclusion. I think the relativity you have in mind is what we normally think of as a setting.


I've been iterating on sodium bindings in Lean4 for about four months, and now that I've gotten to Ristretto255 I can see why the author is excited about its potential. Ristretto is a tightly designed API that lets me build arbitrary polynomials on Curve25519, and I've been having a blast tinkering and experimenting with it! If the author by chance reads this, I just want to say thank you for your work!
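To make the "polynomials on Curve25519" idea concrete, here is a toy Python sketch of polynomial evaluation over the Ristretto255 scalar field (the prime order L of Curve25519's prime-order subgroup). This only does scalar arithmetic; in libsodium the corresponding point-level operations are the `crypto_scalarmult_ristretto255*` / `crypto_core_ristretto255*` family, which this sketch deliberately omits.

```python
# Prime order of the Ristretto255 group (= order of Curve25519's
# prime-order subgroup): 2^252 + 27742317777372353535851937790883648493.
L = 2**252 + 27742317777372353535851937790883648493

def poly_eval(coeffs, x):
    """Horner evaluation of sum(coeffs[i] * x**i) mod L.

    coeffs are given low-order first, so [1, 0, 1] means 1 + x^2.
    """
    acc = 0
    for c in reversed(coeffs):
        acc = (acc * x + c) % L
    return acc
```

A real construction would then map the resulting scalars onto group elements (e.g. commitments via scalar multiplication), which is exactly where Ristretto's clean encoding pays off.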


You have a public repo of this?


Yes: https://github.com/rj-calvin/sodium

The bindings are set and have a monadic interface, but there are some abstractions that still need refining and iterating: mostly, I want to be able to formalize keyboard input and eventually build a tactic framework for zero-knowledge proofs.


I've been writing [libsodium](https://doc.libsodium.org/) bindings in Lean4 and have ended up using `native_decide` quite liberally, mostly as a convenience. Can any Lean devs provide a more thorough interrogation of this? Should I go back and try to scrub its usage out of my library? Logically it seems consistent with what I'm trying to do with Lean4's FFI (i.e. you really do need to explicitly trust the Lean kernel since I'm adding nontrivial computation using a foreign cryptography library) but I'm curious if this isn't necessary and whether Lean devs would push back on its use.
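For context on the trust question, a minimal Lean 4 illustration (the numeric examples are arbitrary): `decide` produces a proof checked by Lean's kernel, while `native_decide` evaluates the proposition through compiled code, enlarging the trusted base to the compiler and runtime, which is essentially the same trust that `@[extern]` FFI bindings already require.

```lean
-- Kernel-checked: the decision procedure runs inside Lean's kernel.
example : 2 ^ 16 = 65536 := by decide

-- Compiled evaluation: much faster on large computations, but the proof
-- now trusts the Lean compiler and runtime in addition to the kernel.
example : 2 ^ 64 % 97 = 61 := by native_decide
```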


This is perfect. I'm currently creating a MUD and these are exactly the kind of fonts I want. Thanks for sharing!


I was personally looking for a bitmap font that resembled old fantasy games for use in a kernel. I was able to write a compile time constant parser for the .hex file format used here.

Do you have a link to the MUD you're working on?


Are there any experts who could help me bootstrap myself on the current literature on "world models"?


In this current generation, "world models" is basically a marketing term. You can research Gaussian splatting, novel view synthesis, neural radiance fields (NeRF), etc. I find MrNeRF good to follow: https://x.com/janusch_patas

There is another thing called world models that involves predicting the state of something after some action. But this is a very limited area of research. My understanding is that there just isn't much data of action -> reaction.

Same issue with Gaussian splatting/NeRF, really: very little data (relative to text/images/video) of text -> 3D splats. I'd guess what World Labs is doing is text -> image -> splats, but of course that's just speculation.
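The action -> reaction notion can be sketched very simply: a world model is any learned transition function next_state ≈ f(state, action) fit from logged triples. Here is a deliberately tiny version where f is linear and fit by least squares (real systems use deep networks; the dynamics matrices here are made up for illustration):

```python
import numpy as np

# Ground-truth linear dynamics (illustrative values only).
rng = np.random.default_rng(0)
A_true = np.array([[0.9, 0.1], [0.0, 0.8]])
B_true = np.array([[0.5], [1.0]])

S = rng.normal(size=(200, 2))          # observed states
A = rng.normal(size=(200, 1))          # actions taken
S_next = S @ A_true.T + A @ B_true.T   # resulting next states (noise-free)

# Fit next_state = [state action] @ W jointly by least squares.
X = np.hstack([S, A])
W, *_ = np.linalg.lstsq(X, S_next, rcond=None)
A_hat, B_hat = W[:2].T, W[2:].T        # recovered dynamics
```

The data-scarcity point above is exactly that, outside simulators and robot logs, very few datasets supply the (s, a, s') triples this kind of fit needs.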


> There is another thing called world models that involves predicting the state of something after some action. But this is a very limited area of research. My understanding is that there just isn't much data of action -> reaction.

Folks interested in this can look up Yann LeCun's work on world models and JEPA, which his team at Meta created. This lecture is a nice summary of his thinking on this space and also why he isn't a fan of autoregressive LLMs: https://www.youtube.com/watch?v=yUmDRxV0krg


I'm of the opinion that formalization is the biggest bottleneck of current generation LLMs. However, I don't think that this necessarily suggests that LLMs don't benefit from formal methods. Given existing abstractions, Lean4's exceptional tooling allows for more efficient iteration with LLMs and requires less human supervision since Lean's language server provides specific and actionable feedback whenever the LLM makes a mistake. I've also noticed that LLMs can reason about code written in Lean4 far more effectively than in Python, despite Python having orders of magnitude more training data than Lean.

Nonetheless, I concur that LLMs don't yet know how to translate a request stated in a prompt to a complete Lean4 interpretation. My practice so far has usually required me to first choose an existing reference file that is similar to my desired goals, and use this reference as "inspiration" for how the LLM should go about formalization.
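To illustrate the kind of localized, actionable feedback I mean, a tiny Lean 4 sketch (the definitions here are hypothetical):

```lean
def total (xs : List Nat) : Nat :=
  xs.foldl (· + ·) 0

-- Uncommenting the definition below yields a type error pinned to the
-- `· ++ ·` subterm: Lean reports the expected function type at that exact
-- position, rather than a vague whole-file failure. Feedback at this
-- granularity is something an LLM can respond to directly.
-- def bad (xs : List Nat) : Nat :=
--   xs.foldl (· ++ ·) 0
```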


Finally I find this argument. Agreed, and I'm baffled that people think that AI is what's going to "solve loneliness." Loneliness has already been solved by YouTube/Twitch. The brain is easily tricked into thinking that it is "being social" when it is subject to the effects of the parasocial relationships that are formed by these platforms. People's afternoons are rapidly becoming consumed by hours of YouTube where they come out of it with a brain telling them: "boy, that's enough social interaction for today!" Introversion has become an epidemic as a result.


It's not just streamers - fictional characters are also increasingly engineered to be this way. Besides the loot box aspect, many East Asian gacha games are built with parasocial relationships with the characters in mind, for one.

(See community controversies surrounding Girls' Frontline 2 and Snowbreak for examples.)


As a playwright, I've certainly thought about AI impacting the art. In fact, it was the very eloquence of ChatGPT's output that initiated all of this mania in the first place: not only was ChatGPT able to explain gauge theory to me with surprising accuracy, it was able to do so in perfect Elizabethan English, exactly as I had instructed it to.

There is a missing ingredient that LLMs lack, however. They lack insight. Writing is made engaging by the promise of insight teased in its setups, the depths that are dug through its payoffs, and the revelations found in its conclusion. It requires solving an abstract sudoku puzzle where each sentence builds on something prior and, critically, advances an agenda toward an emotional conclusion. This is the rhetoric inherent to all storytelling, but just as in a good political speech or debate, everything hinges on the quality of the central thesis—the key insight that LLMs do not come equipped to provide on their own.

This is hard. Insight is hard. And an AI supporter would gladly tell you "yes! this is where prompting becomes art!" And perhaps there is merit to this, or at least there is merit insofar as Sam Altman's dreams of AI producing novel insights remain unfulfilled. This condition notwithstanding, what merit exactly do these supporters have? Has prompting become an art the same way that it has become engineering? It would seem AlphaWrite would like to say so.

But let's look at this rubric and evaluate for ourselves what else AlphaWrite would like to say:

```python
    # Fallback to a basic rubric if file not found
    return """Creative writing evaluation should consider:
1. Creativity and Originality (25%) - Unique ideas, fresh perspectives, innovative storytelling
2. Writing Quality (25%) - Grammar, style, flow, vocabulary, sentence structure
3. Engagement (20%) - How compelling and interesting the piece is to read
4. Character Development (15%) - Believable, well-developed characters with clear motivations
5. Plot Structure (15%) - Logical progression, pacing, resolution of conflicts"""
```

It's certainly just a default, and I mean no bad faith in using it for rhetorical effect, but this default also acts as a template, and it happens to be informative to my point. Insight, genuine insight, is hard because it is contingent on one's audience and one's shared experiences with them. It isn't enough to check boxes. Might I ask what makes for a better story: a narrative about a well-developed princess who provides fresh perspectives on antiquated themes, or a narrative about a well-developed stockbroker who provides fresh perspectives on contemporary themes? The output fails to find its audience no matter what your rubric is.

And here lies the dilemma regarding the idea that prompts are an art: they are not. Prompts are not art for the simple fact that nobody will read them. What is read is all that is communicated, and any discerning audience will be alienated by anything generated from something as ambiguous as an English teacher's grading rubric.

I write because I want to communicate my insights to an audience who I believe would be influenced by them. I may be early in my career, but this is why I do it. The degree of influence I shall have measures the degree of "art" I shall attain, not whether or not I clear the minimum bar of literacy.


Testing in general is quickly being outmoded by formal verification. From my own gut, I see software engineering pivoting into consulting—wherein the deliverables are something akin to domain-specific languages that are tailored to a client's business needs.
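As a small illustration of the test-versus-proof distinction, in Lean 4 (where `List.reverse_reverse` is a standard-library lemma):

```lean
-- A unit test checks one input; `decide` evaluates just this case:
example : [1, 2, 3].reverse = [3, 2, 1] := by decide

-- A theorem covers every input at once, replacing an unbounded
-- family of such tests:
example (xs : List Nat) : xs.reverse.reverse = xs :=
  List.reverse_reverse xs
```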

