The company's feature on the front page was a great opportunity to point out their positive impact on the world. Instead, they focus on $BBB. Nice new site though.
I think that a warning of public ridicule may be fine. However, actually doing it is quite lowbrow IMO. I'm sad to see more and more otherwise admirable projects stoop to that (assuming they actually do it).
An unenforced threat is toothless. Publicly stating that we do not appreciate an XYZ PR that was AI-generated, low-effort, and submitted in bad faith is perfectly acceptable.
> The industry had reason to be optimistic that 2025 would prove pivotal. In previous years, AI agents like Claude Code and OpenAI’s Codex had become impressively adept at tackling multi-step computer programming problems.
Thanks for sharing. I always enjoy reading author-publisher process articles, as they get to the true behind-the-scenes story. I can relate to most things mentioned, and the terms seem identical to what I had when writing Modern Fortran with Manning. I also started with the intent to write for experts, but the publisher pushed for targeting beginners. The author can concede or (usually) give up the project.
One important aspect of this is that a typical first-book technical author knows the subject matter well, and sometimes knows how to write too (but usually not, as was my case), but does not know how to edit, typeset, publish, market, and sell well. That's what the publisher knows best. And of course, they want sales, and they understand that overall beginner books sell better than advanced/expert-level books.
I encourage the author to continue writing and self-publish, and let a publisher come along later to package and market a mostly finished product.
Neal delivers. I recently learned that viruses are not considered living beings, but I'm nevertheless happy they're included here because they're both relevant and interesting in this context.
Not that I'm qualified to reply, but I think this is debated. I seem to recall reading in "Immune" by Philipp Dettmer that there is an argument that a virus is analogous to a spore stage of life, and the virus begins "living" when it plants itself inside a cell full of "nutrients", sheds its skin, and begins consuming and replicating.
It is always going to be controversial, but after the discovery of prions, the needle shifted to "self-replicating means nothing, and viruses are also dead". Then scientists also found viruses large enough to get infected by other viruses, and parasitic cells missing most of the parts required for metabolism, so it is getting fuzzy again.
From what I remember from undergrad, the reason they're not life is that they lack their own metabolism; they use the metabolism of host cells. And metabolism needs to be a constant thing: they don't have any when outside a cell.
Hey, if they originated naturally and interact with the environment and reproduce, they are living beings. Mere human taxonomists can't just "classify" away the fact.
"Recent large-scale upticks in the use of words like “delve” and “intricate” in certain fields, especially education and academic writing, are attributed to the widespread introduction of LLMs with a chat function, like ChatGPT, that overuses those buzzwords."
OK, but please don't do what pg did a year or so ago and dismiss anyone who wrote "delve" as AI writing. I've been using "delve" in speech for 15+ years. It's just a question of where and how one learns their English.
Funny enough, I avoided the em dash, because everyone was using hyphens and I didn't want forensic linguistics bored. Now that AI got my FBI agents on welfare and em dashed the internet kaputt, now that I am liberated, I can't tell an em dash and hyphen apart, hand–written in my diary.
Genuine question, do you actually use the formal emdash in your writing? AIs are very consistent about using the proper emdash—a double long dash with no spaces around it, whereas humans almost always tend to use a slang version - a single dash with spaces around it. That's because most keyboards don't have an emdash key, and few people even know how to produce an actual emdash.
That's what makes it such a good giveaway. I'm happy to be told that I'm wrong, and that you do actually use the proper double long dash in your writing, but I'm guessing that you actually use the human slang for an emdash, which is visually different and easily sets your writing apart as not AI writing!
I have a Compose key binding in https://github.com/kragen/xcompose which maps Compose Space Minus to "—" with two thin spaces on each side of it, because I prefer the spaces. But HN rewrites the thin spaces to regular spaces, so on HN I just use "—" without the spaces, the way ChatGPT does, which is Compose Minus Minus Minus, and is in the standard Compose key bindings (if you map your keyboard to have a Compose key at all).
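For anyone curious what such bindings look like, an `~/.XCompose` file holds lines of this shape. The first entry is in the standard Compose table; the spaced variant is a sketch of the kind of custom mapping described above (the padding characters would be U+2009 thin spaces):

```
# Standard binding: Compose Minus Minus Minus -> em dash
<Multi_key> <minus> <minus> <minus>  : "—"   emdash

# Custom sketch: Compose Space Minus -> em dash padded with thin spaces
<Multi_key> <space> <minus>          : " — "
```

Note that X applications only consult this file when the input method is configured to read it (e.g. `XCOMPOSEFILE` or the default `~/.XCompose` location with the X11 input method).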
Same. I feel like I've been using "--" in my online writing for decades now. Take that, LLMs; I used it before it was cool... er... before it was a weak signal that a piece of text was written by an LLM.
In LaTeX (and probably smartypants, which is another of those bare pre-unicode ASCII to fancy text converters that can get stacked into markdown--but I can't remember if dash handling specifically is in there), "--" is en-dash and "---" is em-dash. The single "-" gives a hyphen, which is handled differently than an en-dash in typesetting.
So... that's just to say that people who are exposed to the sort of can't-unsee-it-now typesetting OCD that LaTeX and various popular extension packages within that ecosystem expose can learn to write "--" as en-dash.
It's sort of like being unable to return to the blissful state of not being hyperaware that Arial and Helvetica are different.
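For reference, the plain-TeX/LaTeX input conventions being described are just these ligatures in the source:

```latex
pages 10--12        % two hyphens: en-dash, for ranges
a dash---like this  % three hyphens: em-dash, conventionally set closed
a well-known fix    % single hyphen: ordinary hyphenation
```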
Macs and iDevices have been auto-transforming -- into — for well over a decade now, and on the iOS standard keyboard both – and — are just a single long press of the dash key away.
Microsoft Word does this too. I've recently started manually uncorrecting these corrections in my writing because of this new implication that I used ChatGPT.
Still less obvious than the emails I see sent out which contain emojis, so maybe I'm overthinking things...
This is a ridiculous maladaptive behavior. Word has been replacing dashes forever; consequently, the em dash has been unintentionally ubiquitous in business writing forever. That this character ever became a heuristic for AI is silly.
The dashes and the auto-capitalization are awful for technical writing. The Ctrl-Z becomes painful and annoying very quickly. Would that Word supported markdown.
It’s a per-app setting that sometimes needs to be set in the text field’s context menu. There are also a few apps that just don’t integrate with the macOS text system.
I would write it with Option-Shift-hyphen when I used macOS.
On Linux, I use Compose-hyphen-hyphen-hyphen.
I don't use it as often as I used to; but when I was younger, I was enough of a nerd to use it in my writing all the time. And yes, always careful to use it correctly, and not confuse it with an en-dash. Also used to write out proper balanced curly quotes on macOS, before it was done automatically in many places.
I always used to google search "emdash unicode" and copy-paste the character, but I guess now I'll save several minutes from my essay-writing by switching to the lazy single-dash typology that I don't like the look of. Soon I'm going to have to start throwing in speling errors and other things too.
> That's because most keyboards don't have an emdash key, and few people even know how to produce an actual emdash.
There’s a subculture effect: this has been trivial on Apple devices for a long time—I’m pretty sure I learned the Shift-Option-hyphen shortcut in the 90s, long before iOS introduced the long-press shortcut—and that’s also been a world disproportionately popular with the kind of people who care about this kind of detail. If you spend time in communities with designers, writers, etc. your sense of what’s common is wildly off the average.
> Genuine question, do you actually use the formal emdash in your writing?
"the formal emdash"?
> AIs are very consistent about using the proper emdash—a double long dash with no spaces around it
Setting an em-dash closed is separate from whether you use an em-dash at all. (And an em-dash is exactly what it says: a dash the width of the font's em. "Double long" is fine, I guess, if you consider the en-dash "single long", but not if, as you seem to, you take the standard width to be that of the ASCII hyphen-minus, which is usually considerably narrower than an en width in a proportional font.)
But, yes, most people who intentionally use em-dashes are doing so because they care about detail enough that they are also going to set them closed, at least in the uses where that is standard. (There are uses where it is conventional to set them half-closed, but that's not important here.)
> whereas humans almost always tend to use a slang version - a single dash with spaces around it.
That's not an em-dash (and its not even an approximation of one, using a hyphen-minus set open—possibly doubled—is an approximation of the typographic convention of using an en-dash set open – different style guides prefer that for certain uses for which other guides prefer an em-dash set closed.) But I disagree with your claim that "most humans" who describe themselves as using em-dashes instead are actually just approximating the use of en-dashes set open with the easier-to-type hyphen-minus.
It was an abuse of “slang” to mean “typographic approximation”; now what, exactly, did you think “and its not even an approximation of one, using a hyphen-minus set open—possibly doubled—is an approximation of the typographic convention of using an en-dash set open” meant?
I do use the hyphen-minus set open sometimes - I'd prefer em-dash closed everywhere, but sometimes it's difficult to type an em-dash, and if I'm having to use hyphen, a closed hyphen looks very wrong. Similarly, "--" is shorthand for en-dash as you say, and "---" (even closed) looks too busy.
I’ve used “real” em-dashes and en-dashes in my writing generally since I switched to using Macs about 20 years ago. Before that I used them for e.g. academic writing, which I mainly did in LaTeX, but not so often elsewhere.
They’re simple enough key combinations (on a Mac) that I wouldn’t be surprised if I guessed them. I certainly find it confusing to imagine someone who has to write professionally or academically not working out how to type them for those purposes at least.
I will use a double hyphen: -- which Microsoft Word and I think most word processors I've used will auto-replace with an em dash. I will sometimes even type the double hyphen to represent an em dash in places where it doesn't get replaced, like internet comments. I'm kind of surprised more people don't use two hyphens as em dash shorthand, to be honest.
IIRC, -- for emdash used to be common on Usenet, which is where I picked it up and still do it. But there's a word for us with Usenet experience -- old. (should have been a colon there, but...)
Most people probably don't. I'm an editor who's been working in print for years, so the keyboard shortcut for an em dash is muscle memory for me at this point. I have always been a Chicago Manual of Style person, so I don't place spaces around the em dash. AP style guide users do place a space around it.
I have --- set to autocorrect to —. I've been using it in formal writing for 30 years. When we were in high school, we had a "Dash Party" in English class, where we ate Twinkies and learned about the different dashes.
I would argue that LLMs overuse the emdash more because they overuse specific rhetorical devices, e.g. antithesis, than because they are being too correct about punctuation.
> Genuine question, do you actually use the formal emdash in your writing?
I’m not the person you asked, but I do.
> the proper emdash—a double long dash with no spaces around it
The spaces around it depend on style guide, it is not universal that they should not exist.
> That's because most keyboards don't have an emdash key
Nor do they have keys for proper quotes and apostrophes or interrobangs, yet it doesn’t stop people from using them. The keys don’t need to exist.
> That's what makes it such a good giveaway.
It’s not. It might be one signal but it is far from sufficient.
> I'm happy to be told that I'm wrong, and that you do actually use the proper double long dash in your writing
I do use the proper em-dash in my writing—and many other characters too—and my HN history is ample proof. I explained at length in another comment how I insert the characters, plus how simple it is if you use any Apple OS.
Nah, I've been using them correctly for years. My preference for them came by way of reading a lot in general, but especially from Salinger.
I do them without surrounding spaces, because that's... how you're supposed to use them, and it's also less typing.
They also used to be a really good shibboleth to tell if someone was using a Mac—the key combo on there is easy, and also easy to remember, so Mac users were far more likely than the median to employ em-dashes. It wasn't a sure tell, but it was pretty reliable.
I use en dash with two spaces and have done so before AI. But my comments here are from after GPT 4 released, so I guess I can't prove I didn't use AI to write them, although I don't think any AIs use that style. Here is one from February 2024: https://news.ycombinator.com/item?id=39386480. I don't like how "-" looks, it just looks like a minus sign and too short.
Been using shift+option+hyphen to make and use em-dashes (sans spaces) since at least 2005, when I got my first publishing job and also started blogging (so writing a ton more). I also use option+hyphen (en-dash) for date and number ranges. In my experience, ChatGPT consistently adds spaces around both.
I've had an AutoHotkey replacement for the proper em dash character for over 10 years, using shorthand characters which trigger the replacement. Whether spaces go around the dash is a difference in style (see: various publications' style guides), though I use the no-spaces style.
Inserting self-interjections and the like with the correct character would undoubtedly be more widespread if it were easier for most people to type.
I for one, use an actual em dash in my writing—or at least I used to. Option + Shift + the hyphen key on Mac. I never knew if I was using it correctly, but I'd learn to copy how I'd seen it used in books and articles and things. Now, I have an incessant paranoia around using it.
The same way it learned to act like a personal assistant, even though very few humans are personal assistants.
The LLM is first trained as an extremely large Markov model predicting text scraped from the entire Internet. Ideally, a well-trained such Markov model would use em dashes approximately as frequently as they appear in real texts.
But that model is not the LLM you actually interact with. The LLM you interact with is trained by something called Reinforcement Learning from Human Feedback, which involves people reading, rating, and editing its responses, biasing the outputs and giving the model a "persona".
That persona is the actual LLM you interact with. Since em dash usage was rated highly by the people providing the feedback, the persona learned to use it much more frequently.
Option + shift + hyphen or hold hyphen on any Mac or iPhone to get an em dash. I use them very frequently, because they're the correct character for the use case.
It depends on the keyboard layout. Some (US) do have it like you described, but others have the dashes reversed.
Both make sense, to a degree. On the one hand you can argue that the em-dash—being longer—should require an extra key, but on the other hand it has more uses, so it should not have the extra key, to be more accessible.
I've found that people who say this sort of thing rarely change their beliefs, even after being given evidence that they are wrong. The fact is, as numerous people have pointed out, Word and other editors/word processors change '--' to an em-dash. And the "slang version" of an em-dash is "I went to work--but forgot to put on pants", not "I went to work - but forgot to put on pants".
BTW, "humans almost always tend to use" is very poor writing--pick one or the other between "almost always" and "tend to". It wouldn't be a bad thing if LLMs helped increase human literacy, so I don't know why people are so gung ho on identifying AI output based on utterly non-substantive markers like em-dashes. Having an LLM do homework is a bad thing, but that's not what we're talking about. And someone foolishly using the presence of em-dashes to detect LLM output will utterly fail against someone using an editor macro to replace em-dashes with the gawdawful ' - '.
But do you call that latter thing you do “an em-dash”? Do you tell a peer “You should put an em-dash here” when what you mean is a “space en-dash space”?
P.S. Why would someone be "suspicious" of people doing their writing in Word and copying it into a comment field? Suspicious of what, exactly? What crime is being committed? The issue here is AI, not people's workflow methods. I have sometimes written lengthy comments in my editor (emacs) which gives me many more editing features, doesn't throw away my work with the wrong keystroke (not a problem at HN but it is at sites where the comment field is a pop-up), and doesn't randomly freeze or slow down radically (this seems to be a problem with my browser).
I reject everything else about that poorly reasoned "suspicious" response as well.
I type em dashes as double hyphens. Sometimes the software resolves them to a true em dash, but sometimes not.
I never use hyphens where em dashes would be correct.
I do have issues determining when a two-word phrase should or shouldn't be hyphenated. It surely doesn't help that I grew up in a bilingual English/German household, so that my first instinct is often to reject either option, and fully concatenate the two words instead.
(Whether that last comma is appropriate opens a whole other set of punctuation issues ... and yes, I do tend to deliberately misuse ellipses for effect.)
Reddit and HN are among the highest quality sources of training text and are probably weighted very heavily as "probably human" in the mainstream models.
Any source of text with huge amounts of automated and community moderation will be better quality than, say, Twitter.
That depends heavily on the subreddits you browse. There absolutely are places with high quality content, though it feels like they are getting sparser and sparser.
Not in that sense; high quality in the sense that there are a lot of actual, real people posting there, and those people tend to come from a pretty diverse set of backgrounds.
Perhaps on the smaller subreddits, but have a look at /r/all on any given day and it's obvious that real people, and diverse backgrounds, it is not. Every single subreddit that goes above a certain activity threshold collapses into the exact same state of astroturfed, mass-produced political slop targeted towards low IQ people.
Although I'm sure @stinkbeatle was joking, I should clarify that most LLMs are trained on books and online articles written by professional writers. That's why they tend to have a rich vocabulary and use things like hyphens.
I agree, HN is an amazing community with brilliant people and top quality content, but it's not enough to train an LLM.
Last thing. An LLM is just a tool, it can clean up your writing the same way a photo app can enhance your pictures. It took a while for people to accept that grandma's photos looked professional because they had filters. Same will happen with text. With ChatGPT, anyone can write like a journalist. We're just not used to grandma texting like one, yet :)
I often edit things in Word — I have a document that I can alt-tab to and type things. It has spellcheck, etc. that my browser window does not, and I’m not at risk of losing if I refresh or something. Then copy-paste back.
Word converts any - into an em dash based on context. Guess who’s always accused of being a bot?
The thing is, AI learned to use these things because good typographical style is well represented in its training set.
Dammit -- I use my dashes all the time (though always double them like here). I hope AI didn't ruin this for me.
(I learned to use dashes like this from Philip Dick's writings, of all places, and it stuck. Bet nobody ever thought of looking for writing style in PKD!).
I encountered the TeXbook at a young and impressionable age, and ever since I've used em- and en-dashes a bit more often than a style guide would suggest. Not to mention diareses, though those haven't been flagged as LLM stigmata yet.
My workaround (well, to be honest, I've always done this: I love a good em dash, they're terrifically satisfying to use, but I'm too lazy to type them), is to use two single dashes--like so.
Good. It's a crutch for poorly composed sentences or for prose intending to imitate the affect of poorly composed sentences. There's not a single sentence under the sun that needs an emdash. Commas and parentheses can do it all, and an excess of either is a sign of poorly edited prose.
I don't buy the pro-clanker pro-em dash movement that has come out of nowhere in the past several years.
> There's not a single sentence under the sun that needs an emdash
Sentences "need" very little, but without style and personality, writing becomes very boring. I suppose simplicity without any affectation works for raw communication of plain technical facts, but there's more to writing than that.
What's the error? I'd hyphenate "poorly-composed" (most wouldn't these days, but they can go to hell) and I think it's a bit too wordy for what it's communicating, but I don't see what I'd call an actual error.
I would personally avoid writing that "poorly composed sentences" have an "affect"—rather than the writer having or presenting an affect, or the sentences' tone being affected—as I find an implied anthropomorphizing of "sentences" in that usage, which anthropomorphizing isn't serving enough useful purpose, to my eye, that I'd want it in my writing, but I'm not sure I'd call that an error either.
What did you mean?
> Commas and parentheses can do it all, and an excess of either is a sign of poorly edited prose.
This attitude, however, is a disease of modern English literacy.
I meant "affect" and not "effect." You need to learn what affect means. I'm not asking you to learn about affect theory, but ffs no part of my sentence implied it meant "effect" and not "affect." Ugh. It doesn't even make sense. What would the "effect" of "poorly composed sentences" be? Only affect makes sense there.
Psychology: feeling or emotion.
Psychiatry: an expressed or observed emotional response. Restricted, flat, or blunted affect may be a symptom of mental illness, especially schizophrenia.
Obsolete: affection; passion; sensation; inclination; inward disposition or feeling.
Now let's replace that in my original phrase:
> prose intending to imitate the affect of poorly composed sentences
becomes
> prose intending to imitate the feeling or emotion of poorly composed sentences
My point was that the author is trying to convey a specific feeling by way of poorly composed sentences. Perhaps they want a colloquial feel or a ranting feel or a rambling one. An obvious example would be the massive run-on sentence in Ulysses.
For both of these examples who the fuck cares. I just evaluate AI writing people send me the same as any writing.
If they’re using AI to speed things up and deliver really clear and on point documents faster then great. If they can’t stand behind what they’re saying I will call them out.
I get AI written stuff from team members all the time. When it’s bad and is a waste of my time I just hit reply and say don’t do this.
But I’ve trained many people to use AI effectively and often with some help they can produce way better SOPs or client memos or whatever else.
It’s just a tool. It’s like getting mad someone used spell check. Which by the way, people used to actually argue back in the 80’s. Oh no we killed spelling bees what a lost tradition.
This conversation has been going on as long as I’ve been using tech which is about 4 decades.
Use of spell check is a net positive but it has led to some widespread errors, like people (and widely read publications) misspelling "led" as "lead" (pet peeve).
But yes, it's absurd to complain about LLMs resulting in increased literacy.
I think it's easier to just stop using em dashes, as much as I like them. People have latched on to this because it works a good amount of the time, so I don't think they will stop. I don't even think they should stop, because, well, it works a good amount of the time.
You're making the point that OP never actually uses the em dash, by surveying their HN comments, in order to defend the notion that no one actually used em dashes prior to their proliferation by LLMs? Or do you mean something else?
You can find an em dash in my comment history if you're curious. Despite what could be said about poor sample selection, consider the imbalance of the argument being made: the frequency of em dash use is disproportionate to the suspicion thrust upon a sample of writing. I.e., a single em dash is suspicious, regardless of how many times it might show up. Therefore, it's more likely that someone who uses em dashes—even if only rarely—will self-select to respond to a thread like this and feel compelled to defend themselves.
Haha yep. I never saw a single person use these in internet comments pre-2023. Plenty of hyphens to simulate it - like this - but not actual em dashes. No matter how many people swear up and down that they're so important.
I hang out in the chess.com daily puzzle chat, which has a large contingent of children, so I see quite a few accusations like "you're all a bunch of nerds". My standard response is "nerd" is what stupid people call smart people.
In your software ecosystem and among the people you've had this discussion with, they don't, but the computer magically replaces - with — in various places, so obviously some people do, in some places. And then you have nerds who still put two spaces after a period, or know the difference between ... and … Do they not exist either? The only reason you think they overwhelmingly don't is your biased lived experience.
My company currently has a guideline that includes “therefore” and similar words as an example of literary language we should avoid using, as it makes the reader think it’s AI.
It really made me uneasy, to think that formal communication might start getting side looks.
What’s worse is that this window might shift as writing becomes less formal and new material is included in the training corpus. By 2035 any language above a first grade reading level will be grounds for AI suspicion.
I sat in a meeting with professionals where one person asked for the presentation to be reworded at a fifth grade reading level. He said it with a straight face.
By 2035 we will live in a world full of TikTok videos where the ability to write will seem as absurd to people as it did to Not Sure in Idiocracy… this is hyperbole, ofc… but you know what I want to say.
Whenever there are commonly agreed upon and known tell-tale signs of AI writing, the model creators can just retrain to eliminate those cues. On an individual level, you can also try to put it in your personalization prompt what turns of phrase to avoid (but central retraining is better).
This will be a cat-and-mouse game. Content factories will want models that don't create suspicious output, and the reading public will develop new heuristics to detect it. But it will be a shifting landscape. Currently, informal writing is rare in AI generation because most people ask models to improve their phrasing with more sophisticated vocabulary and so on. Often it's non-native speakers, who then don't notice the pompousness; to them it just looks like good writing.
Usually there are also deeper cues, closer to the content's tone. AI writing often lacks the sharp edge of unapologetically putting a thought on the table. The models are more weaselly, conflict-avoidant, and hold a kind of averaged, blurred, millennial, Reddit-brained value system.
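As a toy illustration of the cat-and-mouse dynamic: a reader-side heuristic is just a scoring function over surface cues. Everything here (the word list, the equal weighting) is invented for this sketch, not a real detector, and it is exactly the kind of signal a retrained model would erase:

```python
import re

# Invented tell-words for this sketch only -- not a validated list.
TELL_WORDS = {"delve", "intricate", "tapestry", "garner"}

def naive_ai_score(text: str) -> float:
    """Crude surface-cue score: tell-word hits plus em dashes, per word."""
    words = re.findall(r"[a-z']+", text.lower())
    if not words:
        return 0.0
    tell_hits = sum(w in TELL_WORDS for w in words)
    em_dashes = text.count("\u2014")  # count literal em dash characters
    return (tell_hits + em_dashes) / len(words)
```

The moment a heuristic like this becomes common knowledge, generators can be tuned to minimize it, which is the shifting landscape described above.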
Words like that were banned in my English classes for being empty verbiage. It's a good policy even if it seems like a silly purpose. "Therefore" is clumsy and heavy handed in most settings.
I’m curious about this (I’m not a native speaker). What alternative would you use when you want to emphasize a cause-effect relationship, in an engineering context for example?
“Most times A happens before B, but this order it’s not guaranteed. Therefore, there is a possibility of {whatever}.”
Alternatives that come to mind are “as a consequence”, “as a result”, “this means that”, but those are all more verbose, not less.
A simple “so” could work, but it would make the sentence longer, and the cause-effect relationship is less explicit I think.
"He didn't send the letter. The lawsuit was dropped."
"He didn't send the letter therefore the lawsuit was dropped."
Two very different examples. "therefore" in the second example communicates a causal effect from the independent clause that isn't present in the first example.
I'm sure one could argue that context clues could imply that same connection and therefore "therefore" is redundant but I just don't agree with the premise.
Therefore is reasonable in that case, though it still reads a bit clumsy. "The lawsuit was dropped" seems like the most important part of that blurb, so leading with it flows better. "The lawsuit was dropped after he didn't send the letter" is so much nicer. You get to the point and explain it immediately after instead of giving the reader information you have to contextualize after. "Therefore" just reads as pedantic and overbearing in most situations in my opinion (and I guess my teacher's opinion too).
As I mentioned in a reply to the other comment, this often means you have your ordering mixed up.
As an example, here's what your original statement said (with some grammar corrected):
"Most times A happens before B, but the order is not guaranteed. Therefore, there is a possibility of {whatever}."
Here it is if you lead with the important outcome and provide the justification after, using a non-restrictive relative clause to add the fact that A often happens before B:
"There is a possibility of {whatever}, as, while A happens before B, the order is not guaranteed."
In my opinion, this is clearer in intent. It provides the important information immediately and then justifies it immediately after. The original sentence provides information without context and then contextualizes it using "therefore", which comes across a bit pedantic to me. I am a native American English speaker though, and the tone of prose does vary depending on the culture of the person reading it.
Therefore isn't empty verbiage. It's just communication, it's a conjunctive adverb. Therefore implies causation or at least some connection between clauses.
I could see arguing that starting a sentence or paragraph with "Therefore, " repeatedly in one essay is empty but tbh your teacher just sounds jaded.
"The Dwarves tell no tale; but even as mithril was the foundation of their wealth, so also it was their destruction: they delved too greedily and too deep, and disturbed that from which they fled, Durin's Bane" - J.R.R. Tolkien spoken by Gandalf, 1954
Dismissing individual cases of use of those words is probably wrong, but noticing an uptick in broad popularity is very relevant and clear evidence of LLMs influencing language.
> Dismissing individual cases of use of those words is probably wrong, but noticing an uptick in broad popularity is very relevant and clear evidence of LLMs influencing language.
Can't it also be evidence that more and more writing is LLM generated?
I wouldn't say it's exactly "buzzwords", although their presence can be one signal out of many, but a particular style and word choice that makes it easy to detect AI-generated text.
Imagine the most vapid, average, NPC-ish corporate drone that writes in an overly positive tone with fake cheerfulness and excessive verboseness. That's what AI evokes to me.
The opposite is someone who is trying to tell you something but assumes you already know what they're trying to tell you and that you will ask questions if you don't understand.
It saves time, but it means people have to say when they don't understand, and some find that too much of a challenge.
I know my lexicon has expanded with five-letter words. Coffee and Wordle kick off the morning, and I've got to believe many other folks do the same. It would be fun to know how much that silly puzzle is impacting things. Love it when my Bride gives me the side eye and tries to pass off NORIA as something she uses all the time.
Sure. Heuristics are a thing, though. I love my non-ChatGPT en/em dashes (Option+hyphen for en, Option+Shift+hyphen for em on a Mac makes them convenient, given you know they exist and care), but alas, when you suddenly see them everywhere, you do take notice.
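A toy illustration of that kind of heuristic, for the curious. The word list and the rate-per-1000-words scoring here are invented for the example; a real detector would need base-rate statistics over human and machine text, not a hand-picked list.

```python
import re

# Hypothetical "AI tell" words, chosen only for illustration.
TELL_WORDS = {"delve", "garner", "meticulous", "surpass", "tapestry"}

def tell_rate(text: str) -> float:
    """Count tell-word and em-dash occurrences per 1000 words."""
    words = re.findall(r"[a-z']+", text.lower())
    if not words:
        return 0.0
    hits = sum(1 for w in words if w in TELL_WORDS)
    hits += text.count("\u2014")  # literal em-dash characters
    return 1000.0 * hits / len(words)

print(tell_rate("Let us delve into this meticulous tapestry."))
print(tell_rate("I fixed the bug and pushed the patch."))
```

The obvious flaw, as the thread points out, is that plenty of humans legitimately score high on any such list.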
It's funny, because it was the "em-dashes mean AI" thing that finally reminded me to deal with the fact that the extension I had been using for typographical dashes (and other things) in desktop browsing, the main place I used them on my desktop, had been broken for a while, and to get around to adding keyboard shortcuts instead.
Same here. I frequently use "garner", "meticulous" and "surpass", along with copious usage of the em-dash to indicate breaks in the chain of thought. These are not buzzwords. They're words.
What I do worry about is the rise of excessive superlatives: e.g. rather than saying, "okay", "sounds good" or "I agree", saying "fantastic!", "perfect!" or "awesome!". I get the feeling this disease originated in North America and has now spread everywhere, including LLMs.
Funnily enough, I was using the word "superlative" more as an adjective than as the noun that refers to the grammatical form of an adjective, if that makes sense.
> Moria. You fear to go into those mines. The Dwarves delved too greedily and too deep. You know what they awoke in the darkness of Khazad-dûm. Shadow and flame.
Didn’t realise Tolkien used ChatGPT way back when. What a hack.
There needs to be a clear, succinct name for this phenomenon of accusing a person or their work of being AI without proof. This is going to do more damage than AI performing human tasks: just the mere suspicion that they probably didn't do the thing themselves. Anyone, particularly artists, who is "too good" at their craft is going to have their recognition stolen from them.
Unfortunately, new attention on a topic sometimes impacts it retroactively. I have been in the drone world for ~10 years, and the past 2 years it has been a shitshow that only brings bad attention, ruining the fun hobby for everyone.
"Delve" is especially bad because the uptick was due to World of Warcraft introducing "Delves". When I see something like this that uses delve as an example, you can bet the research is going to be poor.
I play WoW daily and this is what I always think of when someone brings up the word "delve". It's unclear if Brann would summon more or less nerubians if he were piloted by ChatGPT though.
In the "opinion" of ChatGPT, my style of writing is "academic". I'm not exactly sure why. Perhaps I draw from a vocabulary or turns of phrase that aren't necessarily characteristic of colloquial speech among native speakers. Technically, English wasn't my first language, so perhaps this is something like the case with RP English in Britain. Only foreigners speak it, so if you speak RP, then you aren't a native Brit.
In any case, it's possible to misuse, abuse, or overuse words like "delve", but to think that the mere use of "delve" screams "AI-generated"...well, there are some dark tunnels that perhaps such people should delve into less.
> In the "opinion" of ChatGPT, my style of writing is "academic".
It may simply be glazing. If you ask it to estimate your IQ (if it complies), it will likely say >130 regardless of what you actually wrote. RLHF taught it that users like being praised.
And, if you want to have some fun, you could give it your writing sample - but say that it's from a random blog post you found online. See what it tells you on that.
It really is a shame that an average user loves being glazed so much. Professional RLHF evaluators are a bit better about this kind of thing, but the moment you begin to funnel in-the-wild thumbs-up/thumbs-down feedback from the real users into your training pipeline is the moment you invite disaster.
By now, all major AI models are affected by this "sycophancy disease" to a noticeable degree. And OpenAI appears to have rolled back some of the anti-sycophancy features in GPT-5 after 4o users started experiencing "sycophancy withdrawal".
I wonder if someone will build a personalized social media simulator where you are the most popular person, a top celebrity: you get the most likes, everyone posts selfies with you (generated with editing models like Gemini's Nano Banana), whatever dumb opinion you have is affirmed as genius, and so on. Like a UI clone of a site like Instagram, but with text and images populated by AI, a mix of simulated real celebrities and randomly generated NPCs.
People get hooked on the upvote and like counters on Reddit and social media, and AI can provide an always agreeing affirmation. So far it looks like people aren't bothered by the fact that it's fake, they still want their dose of sycophancy. Maybe a popularity simulator could work too.
Not everyone appreciates having his speech characterized as "academic" - in certain circles, it's viewed rather poorly - so I'm not convinced of the glazing hypothesis.
ChatGPT certainly makes distinctions. If I give it a blog post written by a philosophy professor, I get "formal, academic, and analytical". If I feed it an article from The Register, I get "informal and conversational". The justifications it gives are accurate.
"Academic" may simply mean that your writing is best characterized as an example of clearly written prose with an expository flavor, and devoid of regional and working class slang as well as any colloquialisms. Which, again, points to my RP comparison.
The first question matters because frying an AI with RL on user feedback means that the preferences of an average user matter a lot to it.
The second question matters because any LLM is incredibly good at squeezing all the little bits of information out of context data. And the data you just gave it was a sample of your writing. Give enough data like that to a sufficiently capable AI and it'll see into your soul.
But is it really 100% positive? If you write a paper, sounding academic is fine, but not necessarily if you write a novel. Especially if you try to blend in or mimic a certain style.
That assumes the characterization is perceived as flattering, or that enough data on me would allow it to "think" it would be to me. Generally, given the anti-intellectual bias in American popular culture, I'm on the fence about that. But then, what are the biases of the corpus ChatGPT was trained on?
For context, I was asking GPT to rewrite some passage in the style of various authors, like Hemingway or Waugh. I didn't even ask it for an assessment of my writing; I was given that for free.
In retrospect (this was a while ago), I think the passage may have been expository in character, so perhaps it is not much of a mystery why it was characterized as "academic". (When I give it samples similar to mine now, I get "formal, academic, and analytical tone". Compare this to how it characterizes an article from The Register as written in an "informal and conversational tone", in part because of the "colloquial jargon" and "pop culture references".) So my RP comparison is sensible. And there's the question of social class as well. I don't exactly speak like a construction worker, as it were.
Even if, for some reason, you think LLMs are fit for evaluating writing style (I don't), I'd at least ask Gemini Pro and Claude Opus to see if there's consensus among the plausible-sounding bullshit generators.
The honest answer is we need to change our language because of AI in situations where it may be ambiguous about whether we are human or AI, e.g. online.
In my native language, I tend to use more sophisticated, academic, or professional vocabulary. But when I speak or write in English, I usually stick to simpler words because they’re easier for most people, both native and non-native speakers, to understand. For years, I’ve avoided using the kind of advanced vocabulary I normally would in my native language when writing in English, mainly because I didn’t want it to come across as something written by a bot.
And in writing, I like using long dashes—but since they’ve become associated with ChatGPT’s style, I’ve been more hesitant to use them.
Now that a lot of these “LLM buzzwords” have become more common in everyday English, I feel more comfortable using them in conversation.
Fair enough, but if you know your audience may be dismissive of your writing and its message if you use such words, it behooves you to steer clear of AI-slop words. IIRC, such offenses in school writing are tagged PWC (poor word choice).
The thing is virtually every single thing that gets presented as an "AI tell" is just "a word, punctuation mark, or pattern of presenting information more common in a training set which includes a high volume of formal writing and professional presentations than it is in the experience of people whose reading and writing is mostly limited to social media and low-effort listicle-level online 'journalism'."
So, yeah, if your target audience are the people who take those "AI tells" seriously and negatively react to them, definitely craft your writing to that audience. But also, consider if that is really your target audience...
> So, yeah, if your target audience are the people who take those "AI tells" seriously and negatively react to them, definitely craft your writing to that audience. But also, consider if that is really your target audience
Nowadays if you write anything you only have two audiences
The first audience is people who care what you are saying
The second audience is AI scrapers
People who do not care what you have to say will have an AI summarize it for them, so they aren't your audience
It's one of the few books that I went into totally blind, and then hate-finished just so that I could confidently condemn it.
I've deleted a paragraph or two to avoid unilaterally taking everything too off topic, but I'll just say that the book is a self-contradictory artifact of hypocrisy that disrespects the reader.
I also went into that book blind. I was in grade 12 and some organization was offering scholarships to people who wrote an essay about the book. I had a twice-daily 45-minute bus ride to fill, so it seemed like an easy win.
Probably not the type of organization to give a scholarship to those who write an essay critical of the work.
Myself, I read it at age 12 and bought its premise at the time. Therefore I mentally categorize Ayn Rand devotees as people with the maturity I had at 12. That's a pretty low bar they're failing to clear.
As someone who writes above a fifth grade reading level, this whole thing has been so depressing. It's like Idiocracy-level. People are going to assume I'm using AI because I use the word "intricate"? ffs.
I mean, what's actually fascinating is that Paul Graham didn't predict that this distinction, the ability to tell AI from humans, would go away over time as chatbots rub off on humans.
And it does it in an unusual way. After browsing around on the website and noticing that the back-button history was just the same site name repeated many times, I worried about my history being needlessly polluted by this website. But when I opened my browser history, there were just a handful of URLs in there, each representing a screenshot I had actually clicked on.
So, yes, it does break the back button, but it doesn't pollute the actual history.