precompute's comments

This is local, but I've found that external inference is fast enough, as long as you're okay with the possible lack of privacy. My PC isn't beefy enough to run Whisper locally without impacting my workflow, so I use Groq via a shell script. It records until I tell it to stop, then it either copies the transcript to the clipboard or types it in wherever the cursor last was.
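The script itself is trivial. Mine is a shell script, but the flow looks roughly like this Python sketch (the Groq endpoint/model name and the arecord/xclip/xdotool calls are assumptions about a typical Linux setup, not my exact script):

    #!/usr/bin/env python3
    # Sketch of the dictation flow: record until told to stop, transcribe via
    # Groq's OpenAI-compatible audio endpoint, then copy or type the result.
    # Endpoint, model name and external tools (arecord/xclip/xdotool) are
    # assumptions; swap in whatever your setup uses.
    import os
    import subprocess
    import tempfile

    import requests

    def record_until_stopped(path):
        # arecord writes a WAV file until we terminate it.
        rec = subprocess.Popen(["arecord", "-f", "cd", "-t", "wav", path])
        input("Recording... press Enter to stop.")
        rec.terminate()
        rec.wait()

    def transcribe(path):
        with open(path, "rb") as f:
            resp = requests.post(
                "https://api.groq.com/openai/v1/audio/transcriptions",
                headers={"Authorization": f"Bearer {os.environ['GROQ_API_KEY']}"},
                files={"file": f},
                data={"model": "whisper-large-v3"},
            )
        resp.raise_for_status()
        return resp.json()["text"]

    if __name__ == "__main__":
        with tempfile.NamedTemporaryFile(suffix=".wav") as tmp:
            record_until_stopped(tmp.name)
            text = transcribe(tmp.name)
        # Either park the transcript on the clipboard...
        subprocess.run(["xclip", "-selection", "clipboard"],
                       input=text, text=True, check=True)
        # ...or type it at the cursor instead:
        # subprocess.run(["xdotool", "type", "--clearmodifiers", text], check=True)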


What computer are you using? You really should give Parakeet a try, I find it runs in a few hundred milliseconds even on a Skylake i5 from 10 years ago.


I've been using a 42-key Corne for 5 years and I can't imagine using a regular keyboard. The way I sit is now different: my arms are spread apart instead of crowding in front of me. This alone helps me pay more attention to the work I'm doing because I'm not hunching over. It took a few months for the layout to sink in, but the ability to customize it was insanely helpful, and now the way I interface with a computer is exactly how I think it ought to be. The entire thing was $90 with a 3D-printed case that's still holding strong. I tented it with some Jenga blocks and it's flawless. I almost never look at the thing.

The Moonlander is way too large IMO. A 42-key layout is about perfect and requires ~zero wrist movement.

The Corne has three thumb buttons on each side, but it's effectively five per side because two can be pressed at the same time. So your layout can be [Mod1] [Layer] [Mod2], and you can easily use [Mod1]/[Mod2] with anything on [Layer]. And when you press [Mod1], a thumb key on the other hand becomes [Mod2]. So you basically get to use every possible combo. I have five mod keys this way: Shift, Ctrl, Alt, Super and Hyper. And multiple layer keys.


That's a broad statement that's untrue for most genres, at least right now. However, it is true for the YA lit / chick-lit and smut-disguised-as-fantasy genres. These already have authors pumping out a book every few months with different characters jammed into similar situations. And people have personalized these kinds of interactions already (CharacterAI and others). Literature is literature, whether human- or LLM-generated, and it will sell like hotcakes if it caters to the common denominator well enough.

It can also be seen in newer fanfics. If you visit AO3 you'll come across a few that are written entirely with LLMs, and the authors sometimes don't even bother reading them before publishing. The lower quality is almost always apparent from the get-go.

LLMs can be very useful for writing, but I don't see serious writers using them except for maybe checking facts / as a knowledge base. The lowest tier of readers was already one-shot by LLMs over a year ago.


Lots of stories on YouTube seem to be AI-written or heavily AI-assisted (with AI-generated voices, subtitles and pictures).

I was listening to "HFY Sci-Fi" at one point, but there are hundreds of channels doing "Getting revenge on the Boss" or 50 other story genres, each pumping out a new story every week. Some are taken from other sources, some are AI-generated.


I have a lot of experience with "LLM voice" as well, and none of that sounds even remotely LLM-written.

The smoking gun for LLM-written text is when the text is a "linked list": it can only ever directly reference the previous thing. That's not the case here. And the latest Hunger Games book isn't yet another Amazon-published slopfest. It's been through a couple of rounds of editing, at the very least.

I'm not saying RedditOP is completely off-kilter. There might be something to what he's saying. Maybe Suzanne Collins (the author of the book) has been consuming a lot of LLM-generated content. Or maybe she's just ahead of the curve and writing in a style that's likely to catch fire (no pun intended) [1].

[1]: Yes, I wrote this myself! And the entire reply!


> none of that sounds even remotely LLM-written.

Did we read the same extracts? The nonsensical actions and movements of the lovers in the entire train scene? The obnoxious call-and-response structure? The absurd comparison between a grandmother's skin and a spider's web because "silk"?

> It can only ever directly reference the previous thing. That's not the case here. And the latest Hunger Games book isn't yet another amazon-published slopfest. It's been through a couple of rounds of editing, at the very least.

I don't agree with your assessment here; "the previous thing" can be literally anything the user prompts. Are you suggesting that because none of the previous books in the series were written by AI, that's somehow an argument that the latest one can't be?


Okay, first, I haven't read that book. I'm going off the cherry-picked examples in the OP.

If an LLM was indeed used, the output was likely massaged to the point where it wouldn't be immediately obvious.

Now, my 2c: writing is sometimes atrocious, and sometimes authors jam in stupid things to maintain flow. If one person could make the connection between "silk", "spider", "weaving" and "grandmother", then another person could as well (even when one is "verifying" and the other "proving"). And using those properly, in context, and succinctly is far beyond most LLMs; it would require a fair amount of gambling, which would be out of character for someone whose writing prowess has been verified pre-LLMs.

As for what I mean by "directly reference the previous thing": LLMs can jam well-known (to them) phrases, sentences and structures onto an idea/request. However, they are unable to loop back over the particulars of that idea/request in a coherent manner, which leads to slopification at large output sizes and shows us the ceiling of the quality of writing an LLM can produce.


I use AI for world- and character-building in the RPG I DM, and this is for sure something an AI would write. The AI I use professionally does like linked lists, but when I ask it, without an MCP, to "write" a character and its background, catchphrases and what he would do in 3-4 different situations, you get paragraphs that look suspiciously like the first example.


LLMs can change their "voice" with a simple instruction. If detecting LLM-generated text were as easy as you think it is, then services in this space wouldn't suffer any false positives or false negatives.


Oh, I don't think it's easy or that one can be completely sure about it. And in my experience, LLMs aren't very good at changing their voice. If you have any examples to the contrary, I'd like to see 'em.


I don't have examples, I'm asking you to provide evidence which supports your extraordinary claim.

You claimed, with a high level of confidence, that the text isn't written by AI because it lacks obvious "tells" which you believe to be present in any LLM-generated text. But if the absence of these "tells" reliably indicated human writing, then LLM detectors would have a false-negative rate of approximately 0%, and they don't.


It's not necessary for the tell to be easily computable. It's not something that's true everywhere, but it has been true more often than not when I've tried to use LLMs. And I haven't seen an LLM-generated piece of writing that's sufficiently long and complicated to rule this tell out.


So this is just org-id?


That was an awesome article.


The Night Land by William Hope Hodgson.


Not called "Kinux" or "Linuks" or something? Missed opportunity.


or kinos ;p


kOS!


>Modos, a two-person startup with open-hardware roots, thinks it has cracked part of that problem with a development kit capable of driving an e-paper display at refresh rates up to a record 75 hertz.

Call me crazy, but I'd rather see these guys get a couple million than yet another ChatGPT wrapper.


On Debian, I use a Zeitgeist-compatible clipboard manager and a rofi menu to fuzzy-search/select the entries. Zeitgeist uses SQLite as well. It has only ever given me grief when I copied a very large image (>50MB) to the clipboard.
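The glue is tiny; a sketch of the idea (clipboard-history-dump is a hypothetical stand-in for whatever command your clipboard manager actually exposes; rofi and xclip are the real tools):

    #!/usr/bin/env python3
    # Sketch: dump clipboard history, fuzzy-pick an entry in rofi, put it back
    # on the clipboard. "clipboard-history-dump" is a placeholder for whatever
    # your Zeitgeist-backed clipboard manager provides.
    import subprocess

    history = subprocess.run(
        ["clipboard-history-dump"],  # placeholder: prints one entry per line
        capture_output=True, text=True, check=True,
    ).stdout

    picked = subprocess.run(
        ["rofi", "-dmenu", "-i", "-p", "clipboard"],
        input=history, capture_output=True, text=True,
    ).stdout.rstrip("\n")

    if picked:
        subprocess.run(
            ["xclip", "-selection", "clipboard"],
            input=picked, text=True, check=True,
        )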

