Hacker News | hombre_fatal's comments

> Is it a success? What would that mean

To answer this question, you consider the goals of a project.

The project is a success because it accomplished the presumed goals of its creator: humans find it interesting and thousands of people thought it would be fun to use with their clawdbot.

As opposed to, say, something like a malicious AI content farm which might be incidentally interesting to us on HN, but that isn't its goal.


A lot of projects have been successful like that. For a week. I guess "going viral" is sort of the success standard for social media, and this is itself a sort of social media. But that's more akin to TikTok videos than tech projects.

Guys, I can have my AI produce slop and DDoS whatever we want. Just give me a call. LOIC is definitely going to improve the world, surely.

Then eat protein-dense plant foods like tempeh and tofu.

Google “plants are trying to kill you”

The latest social media brain rot is that vegetables are bad for you.

Grifters in this space include Paul Saladino and Anthony Chaffee.


Yeah, LLMs clear every goalpost I had in mind years ago for what AGI would look like: the starship voice AI in Star Trek, or merely a chatbot that could handle arbitrary input.

Crazy how fast people acclimate to sci-fi tech.


The Mass Effect universe distinguishes between AI, which is smart enough to be a person—like EDI or the geth—and VI (virtual intelligence), which is more or less a chatbot interface to some data system. So if you encounter a directory on the Citadel, say, and it projects a hologram of a human or asari that you can ask questions about where to go, that would be VI. You don't need to worry about its feelings, because while it understands you in natural language, it's not really sentient or thinking.

What we have today in the form of LLMs would be a VI under Mass Effect's rules, and not a very good one.


Note that Mass Effect's world purposely muddies the waters between the two and blurs the lines. "Is this a VI or a real AI?" is left an open question in some cases so that the player can explore the idea.

Halo also draws a distinction, with "Smart AI" being what we would generally consider AGI or even super-AGI, as against "Dumb AI", which is purposely limited. Our current LLMs are similar to "Dumb AI" in shape but not remotely close in capability.

In both universes, an "AI" or similar system will not hallucinate. If they tell you something wrong or inaccurate, it's usually because they have been tampered with or because they have "gone crazy", which is an identifiable state that is not normal and not probabilistic.

Star Trek also makes distinctions. The ship's computer, for example, largely does not make deductions, and doesn't always operate in natural human language; instead it requires you to use specific phrasing. The Star Trek ship's computer is basically 20-year-old text-to-speech bolted onto Wikipedia and database queries, and that's mostly it. It cannot analyze data itself. Data and the fully conscious Sherlock Holmes are both capable of forming and testing a hypothesis on their own.

It's actually weird how many people don't seem to notice that. The ship's computer in Star Trek is purposely dumb and command-driven. It is not an agent, it does not think, and it does not understand natural human language. We had the Star Trek ship's computer decades ago.


Peter F. Hamilton's sci-fi novels do something similar: they differentiate between SI (Sentient Intelligence), which is basically its own being and is not used by people, as that would essentially be slavery, and, for general-purpose "AI", RI (Restricted Intelligence), which has strict limits placed around it.

The SI in Peter Hamilton's Commonwealth duology is pretty badass!

This is a great analogy.

The term AGI so obviously means something way smarter than what we have. We do have something impressive but it’s very limited.


The term AGI explicitly refers to something as smart as us: humans are the baseline for what "General Intelligence" means.

To clarify what I meant, “what we have” means “the AI capabilities we currently have,” not “our intelligence.”

I.e., what I mean is that we don’t have any AI system close to human intelligence.


I hated sitting around in school classrooms so much that I fantasized about a system that would just give me work to do and I would turn it in and go about my day.

This description of Alpha school might have been up my alley. AI learning certainly was.

Either way, we do need to think about options for self-motivated students.


Is this really for the students? Or is it for people who believe they (and by extension their children) are the "Alphas" of society? Are there other values we should be teaching than "superior people do this, and if you don't, you are inferior"? Time will tell, I guess. Crabs in buckets and all that.

1. Wordle's word list is going to be a lot more curated than TFA's word list because people want to guess words they use or have heard of, not "aahed".

2. Only a tiny group of people care to "card count" Wordle to rule out words that have already been played, because they think that sort of min-maxing is fun. Most people don't think about it at all, so whether Wordle reuses words every few years doesn't matter to them.


I will say that, having used the same starter word the whole time (one that has not come up yet), it's a little disappointing that it may now take even longer to appear.

You may want to swap out aahed if that's what you're rocking.

My favourite starter word has come and gone. So I’m in the opposite situation where I feel relieved to be able to go back to using it.

Have you checked it didn't come up before you started?

> Wordle's word list is going to be a lot more curated than TFA's word list because people want to guess words they use or have heard of, not "aahed"

The Times sure doesn't think that about the people who do Letter Boxed. One LB had "polymethylmethacrylate" in its dictionary.

I've saved the daily dictionaries since 2024-03-30, and that's the longest word out of the 93,393 total distinct words in the 674 dictionaries I've saved. They average 1,199.47 words per dictionary.
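For anyone curious how such a tally works, here's a rough sketch. The one-word-per-line file format is my assumption, not necessarily how the saved dictionaries are actually stored:

```python
from pathlib import Path


def dictionary_stats(paths):
    """Return (longest_word, distinct_count, avg_words_per_dictionary)
    for a set of saved daily-dictionary files, one word per line."""
    all_words = set()
    sizes = []
    for p in paths:
        words = [w for w in Path(p).read_text().split() if w]
        sizes.append(len(words))
        all_words.update(words)
    longest = max(all_words, key=len)
    return longest, len(all_words), sum(sizes) / len(sizes)
```

Pointing it at a directory of saved dictionaries would reproduce numbers of the shape quoted above (longest word, distinct count, average per file).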

They have some truly ridiculous words, such as "troughgeng". WTF is a troughgeng? Googling it gives a couple of pages in Chinese (or a similar-looking language) and a Scottish dictionary entry for "Throu", which in one of the examples of "throu" as an adverb lists a bunch of phrases it is used in, including:

> (8) througang, throw-, throoging, trough-geng, -geong (Sh., Ork.), (i) a going over or through; a passage (I.Sc. 1972); specif. (ii) a narration, a recital (of a story); (iii) a full rotation of crops, a shift; (iv) a thoroughfare, lane, passageway, corridor open at either end (Sc. 1808 Jam.; Sh. 1908 Jak. (1928); Rxb. 1923 Watson W.-B.; Ork., w.Lth., wm.Sc. 1972). Also attrib.; (v) = (5); (vi) energy, drive (Bnff. 1866 Gregor D. Bnff. 192);


> people want to guess words they use or have heard of, not "aahed"

That isn't a correct diagnosis; people have heard of aahed. You'll find it naturally in the expression "[someone] oohed and aahed".

People don't want aahed, and their instinct that it shouldn't count is reasonable, but unfamiliarity isn't the problem with it.


Ooh and aah aren't words, they're sounds (onomatopoeia). A sound is just a sequence of letters used for their phonological values.

You can spell the sound "ah" however you like: ah, ahh, aah, aahh, there's no wrong way to spell it.

If you write "the washing machine tringged when it finished", 'tring' is not a word, even though it follows the rules of English morphology; you could have written any sequence of letters that most faithfully reproduces the sound of the washing machine. You could have written katrigged or puh-tringged.



That is false; the fact that you can conjugate aah (or tring) into the past tense is sufficient to prove it's a word.

Ooh and aah most certainly are words. Is meow not a word? Can I spell it miough and sit smugly correct?

It's true that onomatopoeia isn't always a word, but in the particular case of "aah", I think that particular choice of letters is conventionalized enough that it is a word.

The Wordle list is available here (in addition to many other places): https://github.com/pseudosavant/ps-web-tools/blob/main/wordl...

Has anyone confirmed if they still use only this original list? I would think the NY Times could change the word list however they choose.

They changed some words pretty much right after the acquisition. There was some controversy when they started doing "themed" words (like Christmas stuff in December) vs more "random" words. Some words were also removed for having negative vibes/political liability

They removed WENCH from the list of upcoming solutions fairly quickly, but forgot to add it back to the list of available words so you couldn't use it as a guess for a little while. It made it back to the list eventually.

I believe these lists are more like what is described in the blog post: a dictionary of words, filtered to 5-letter words, no plurals, etc. It most likely has 99%+ of the words, but maybe includes some they don't actually use in Wordle.
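A minimal sketch of that kind of filter. The "no plurals" heuristic here (drop words ending in a single "s") is my guess at the approach, not any actual rule Wordle or the NYT uses:

```python
def wordle_candidates(words):
    """Filter a dictionary down to plausible Wordle answers:
    exactly 5 letters, alphabetic only, and not a likely plural
    (ends in 's' but not 'ss', a crude heuristic)."""
    return [
        w for w in words
        if len(w) == 5
        and w.isalpha()
        and not (w.endswith("s") and not w.endswith("ss"))
    ]
```

So "crane" and "glass" survive, while "words" is dropped as a likely plural, which is roughly the kind of filtering the post describes.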

claude -p "Make a menubar app with AppKit (Cocoa) that does X"

Bikeshedding over the language is a huge waste of time, too.

I haven’t written a line of Nix since I started using it, yet it defines three of my systems. I just read diffs that an LLM created when editing my config.

Making a big deal about the language substrate feels like someone still trying to argue over vim vs emacs. It’s trivial and uninteresting.


Something worth appreciating about LLMs and Moltbook is how sci-fi things are getting.

Sending a text-based skill to your computer, where it starts posting on a forum with other agents, gets C&C'd by a prompt injection, and has to be inoculated against hostile memes, is something you could read in Snow Crash next to those robot guard dogs.


I installed NixOS on my desktop and used Sway for a while before switching to Niri.

With Sway, I'm constantly having to find a place to open a new window (tuck it into the current workspace or create Yet Another One). Or I'd slot it into some tabbed group and forget.

With Niri, I hate to admit it, but even after a month I would get lost. I would lose track of where things were not just between workspaces, but even on the same workspace: was that one claude terminal I'm looking for scrolled off to the right or left?

I ended up writing my own Fuzzel tools so that I could do the macOS thing where I alt-tab between apps and then alt-tilde between windows of the same app.

But in the end I couldn't make it more productive than my macOS workflow: a global-hotkey iTerm2 window with 10 tabs, plus alt-tabbing between apps and alt-tilde within an app.
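A sketch of the kind of Fuzzel switcher described above: list niri's windows, pick one via fuzzel's dmenu mode, and focus it. The exact niri/fuzzel CLI flags (`niri msg --json windows`, `niri msg action focus-window --id`, `fuzzel --dmenu`) are assumptions based on my reading of those tools; check `niri msg --help` on your version:

```python
import json
import subprocess


def format_choices(windows):
    """Turn niri's JSON window list into 'id<TAB>app: title' menu lines."""
    return [
        f"{w['id']}\t{w.get('app_id', '?')}: {w.get('title', '')}"
        for w in windows
    ]


def switch_window():
    """Show all windows in fuzzel and focus the one the user picks."""
    windows = json.loads(
        subprocess.check_output(["niri", "msg", "--json", "windows"])
    )
    menu = "\n".join(format_choices(windows))
    choice = subprocess.run(
        ["fuzzel", "--dmenu"], input=menu, text=True, capture_output=True
    ).stdout
    if choice.strip():
        win_id = choice.split("\t", 1)[0]
        subprocess.run(
            ["niri", "msg", "action", "focus-window", "--id", win_id]
        )
```

Bound to a hotkey in the niri config, calling `switch_window()` gives roughly the alt-tab-by-fuzzy-search workflow the comment describes; filtering `windows` by `app_id` first would give the alt-tilde "same app only" variant.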


> was that one claude terminal I'm looking for scrolled off to the right or left?

Isn't that what the overview feature is for?

Video: https://github-production-user-asset-6210df.s3.amazonaws.com...


Kind of. In practice it's like that equivalent macOS view that shows all your windows: you don't want to use it that often.

Also, it zooms out too far to distinguish text. And if you configure it to not zoom out so far, it also loses its overview power.


I've had a pretty good experience setting up a launcher that can fuzzy-find among my open programs/windows: super+space, then "fi", pulls up my open Firefox. On macOS I have super+tab bring up Alfred with a fuzzy find through my open tabs. I need to get around to setting up something similar on my Linux DE.

I just start closing stuff when this happens. If I can't remember why a window is open, it probably won't hurt to close it.

A right-Cmd app switcher, with Caps Lock mapped to right Command: deterministic window switching is key.

I use Caps plus j/k/l/; chording to give me left/right quarter, half, two-thirds, and full widths, with k and l alone giving me different centered window widths. Caps+i switches screens and Caps+u rotates heights.

