
That’s pretty neat. It reminds me of how VAEs work: https://en.wikipedia.org/wiki/Variational_autoencoder


I’m guessing a good chunk of the page is AI generated - em dashes and random emojis.


Statements like this always feel a bit rude to me—as a Chinese, I use em dashes (in Chinese texts) on a daily basis and insert them in English texts when I see fit.

A bit of background: Em dashes “—” (or, very often, double em-dashes “——”) are to Chinese texts what hyphens “-” are to English texts [0]. We use them in ranges “魯迅(1881-1936)”, in name concatenations “任-洛二氏溶液(Ringer-Locke solution)”, to express sounds “呜——”火车开动了 (roughly, “Chouuuuuuuuu——”, the train pulls away), and in place of sentence breaks like this——just like em dashes in English texts. They are so commonly used that most Chinese input methods map Shift+- (i.e., the underscore key “_”) to a double em-dash. As a result, while I see many English speakers having to resort to weird sequences like “Alt + 0151” for an em-dash, a huge population in the world actually has no difficulty typing em-dashes. What a surprise!

As for this article, obviously it was translated from its Chinese version, so, yeah, I don't see em-dashes as an AI indicator. And as for the weird emoji “🕊” (U+1F54A), I'm fairly certain that it comes from the Chinese idiom “放鸽子” (to stand someone up, or, literally, to release doves/pigeons), which has evolved into “鸽了” (pigeon'ed), a humorous way to say “delayed, sorry!”.

[0] https://zh.wikisource.org/wiki/标点符号用法


Totally agree, I don't think em dashes are a particularly useful AI tell unless they're used in a weird way. Left to my own devices (as a native English speaker who likes em dashes and parentheticals), I often end up with at least one em dash every other paragraph, if not more frequently.

On another note, it may be useful to you to know that in most English dialects, referring to a person solely by their nationality (e.g., when you wrote "as a Chinese") is considered rude or uncouth, and it may mark your speech/writing as non-native. It is generally preferable to use nationalities as adjectives rather than as nouns (e.g., "as a Chinese person"). The two main exceptions are when employing metonymy, such as when referring to a nation's government colloquially (e.g., "the Chinese will attend the upcoming UN summit"), or when using the nationality to indicate broad trends among the population of the nation (e.g., "the Chinese sure know how to cook!"). I hope this is considered a helpful interjection rather than an unwelcome one, but if not, I apologize!


> referring to a person solely by their nationality (e.g., when you wrote "as a Chinese") is considered rude or uncouth

I don’t think that applies when referring to yourself, as the parent did.


Thank you! It would indeed require extra effort for me to notice issues like this, and it is very nice of you to have pointed it out!

Speaking of personal devices, I also have a dedicated key binding for en dashes “–” (because, well, I already have a whole tap layer for APL symbols, and it costs nothing to add one more). Since we're on HN, I believe many people here can easily do that if they wish to, so I too don't think en/em dashes are very telling, especially on HN.

(...and of course we have an xkcd for it: https://xkcd.com/3126/ )


Automatic translation, for sure, as evidenced by this sentence in the two's complement section:

> In fact, complement is a concept in counting systems, and the Chinese term for it is "complement".


This comment has become more robotic than the thing it criticises. People use em-dashes and emoji all the time! They are easy to type; on Apple operating systems you can even produce em-dashes accidentally, since by default typing two hyphens converts them into one. Those by themselves aren't sufficient to detect LLM writing, so please stop propagating that wrong idea. And emoji?! Human communication over the internet is littered with them; they're insanely popular and have their own jargon and innuendos.


Some folks actually were taught to use em-dashes as part of their normal writing, especially those who have taken a technical writing course.

I dislike that people think you're an AI if you're using proper typography. :(


Just writing multiple paragraphs with compound-complex sentences makes people think you're an AI. :(


Given "AI" is over 50% of all content now, even if you flip a coin chances are pretty good some article contains slop.

https://www.youtube.com/watch?v=vrTrOCQZoQE

The Ubuntu repositories curate both the legacy Logisim and the more modern logisim-evolution fork:

  sudo apt-get install logisim

  sudo snap install logisim-evolution

Microcap 12 was made free, is still available from the archive.org web cache, and runs just fine in Wine64:

https://web.archive.org/web/20230214034946/http://www.spectr...

Microcap will handle both Analog and Digital simulations.

KiCad now also supports SPICE, and reportedly should import the free LTspice libraries. I have yet to find a use case for the KiCad sim option... so YMMV.

https://www.kicad.org/discover/spice/

Best of luck, =3


It might be "proper" but I never liked it.

Many proper uses of the em-dash put two words visually together—despite being parts of two distinct units separated by the em-dash.

I much prefer using a normal dash with a space on each side - like this.


Totally agree with this view. Why use an extra character when we already have a dash - just to add a pixel or two on either side? The way an em-dash visually connects two words is not pleasant either; I prefer to have a space between them. For writing English, the ASCII character set is plenty.


Most AI tells are like this. I have been taught in marketing training to list things in groups of three, because that's punchy, sufficiently succinct, and very memorable. Now this is strongly associated with AI.

After all, AI didn't pick up these habits out of nowhere - all the tells are good writing advice and professional typography, but used with a frequency you would only see in highly polished texts like marketing copy.


This is the exact reason I left Cursor for Claude Code. Night-and-day difference in reliability. The Windows experience might be especially bad, but Cursor would constantly hang or otherwise fail when trying to run commands. I also had to babysit it and tell it to continue on mid-sized tasks.


They've improved performance dramatically in the last few weeks, might have fixed your issues.


It's clear they've been shipping a lot of Windows updates.


It does seem significantly better on Windows. I'll give it another chance over the next couple weeks.


It isn’t present on mobile (at least on iOS) unfortunately. I use it all the time on the web though, and it’s very useful.


And now it's gone on web this morning. I'm not sure if something in the conversation blocks it or if it's part of some A/B test.


I've found I do get small bursts of 10x productivity when trying to prototype an idea - much of the research on frameworks and such just goes away. Of course, that's usually followed by struggling for an hour or two to make a seemingly small change. It seems like the 10x number is just classic engineers underestimating tasks - making estimates based on a peak productivity that never materializes.

I have found for myself it helps motivate me, resulting in net productivity gain from that alone. Even when it generates bad ideas, it can get me out of a rut and give me a bias towards action. It also keeps me from procrastinating on icky legacy codebases.


This looks interesting, but I have no idea what I’m looking at with the original paper. Could someone provide a simple summary that doesn’t rely on knowledge of Quadratic Voting?


Quadratic Voting and Quadratic Funding have some ideas in common, but they refer to separate concepts. To learn more about these topics, I would probably check out the website for RadicalxChange. IIUC RxC is the main public body attempting to realize the theoretical benefits of QF and related ideas.

Here's an explanation of Quadratic Funding from their website[1], which I guess they now refer to as "Plural Funding":

  Plural Funding (also known as Quadratic Funding or QF) is a more democratic and scalable form of matching funding for public goods, i.e. any projects valuable to large groups of people and accessible to the general public.
  
  “Matching funding” is a model of funding public goods where a fund from governments or philanthropic institutions matches individual contributions to a project. Plural Funding optimizes matching funds by prioritizing projects based on the number of people who contributed. This way, funds meant to benefit the public go towards projects that really benefit a broad public, instead of things that only have a few wealthy backers. In Plural Funding, [total funding] for a proposal is [the square root of each contribution to it → summed up, then squared.] Plural Funding strongly encourages people to make contributions, no matter how small, and ensures a democratic allocation of funds meant to benefit the public.
[1] https://www.radicalxchange.org/wiki/plural-funding/

EDIT: formatting


So I make 1,000,000 separate donations in the amount of $0.01?


Both QF and QV rely on verifying identity, so that would be counted as one donation of $10,000. The entire point is letting the number of people balance in some way against the amount of money, so allowing one person to count multiple times breaks the system.


I used to think employer-run drives to contribute to the company's preferred charity were bad before QF; they're bound to get a lot worse with it, since it gives power to employers with lots of employees and a willingness to encourage them to donate as little as a penny.


So I have to give it out and have people donate it on my behalf?


Total funding in this case would be: (sqrt(.01) * 1000000)^2 = 10 trillion dollars.


(10 billion, surely?)


Oh right: (sqrt(.01) * 1000000)^2 = 1.0e10


Here's a very brief summary of what Quadratic Funding is (which is distinct from Quadratic Voting):

Quadratic Funding is a mechanism where individuals voluntarily contribute funds for some public good (e.g. an open source software project), and then these are matched such that the total funding amount is equal to the square of the sum of the square roots of the individual contributions. Under certain assumptions, this formula results in an optimal outcome, where each individual contributes an amount that maximizes their individual utility (given what others are contributing), and total utility for society is also maximized.
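
To make that formula concrete, here's a minimal sketch in Rust; the contribution amounts and the function name are made up for illustration, not taken from any real QF implementation:

  // Quadratic Funding total: square the sum of the square roots
  // of the individual contributions.
  fn quadratic_funding_total(contributions: &[f64]) -> f64 {
      let sum_of_roots: f64 = contributions.iter().map(|c| c.sqrt()).sum();
      sum_of_roots * sum_of_roots
  }

  fn main() {
      // Ten people giving $1 each vs. one person giving $10.
      let broad_support = vec![1.0_f64; 10];
      let single_backer = vec![10.0_f64];

      // (10 * sqrt(1))^2 = 100 vs. (sqrt(10))^2 = 10: the same $10 attracts
      // ten times the total funding when it comes from many small donors.
      println!("broad:  {}", quadratic_funding_total(&broad_support));
      println!("single: {}", quadratic_funding_total(&single_backer));
  }

Under this rule a project's match grows with the number of distinct contributors, which is also why (as noted elsewhere in the thread) it only works if one person can't split a donation across many fake identities.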


A more plain-English explanation from Vitalik's blog:

https://vitalik.eth.limo/general/2019/12/07/quadratic.html


Better to learn about Quadratic Voting, and ignore the magical thinking of Quadratic Funding.


The book Smarter Faster Better introduced me to the concept of disfluency - the idea that extra friction such as awkward fonts, new environments, different tools, etc. will pull you out of autopilot mode and force you to think in new ways. I haven't seen references to it elsewhere, but it's changed how I approach problems and learning over the last 9 years. Switching to a notebook is one great way I trigger this as well.


Interesting. I've noticed the same, and sometimes I change my Emacs theme just to get a fresh perspective. Sometimes I also disable syntax highlighting while typing so I won't get distracted.


> Sometimes I also disable syntax highlighting when typing

Was waiting for someone to comment this. It's a somewhat well-known strategy: if you have to read through a bunch of boring code you don't want to work with and find it hard to focus on, turn off the syntax highlighting, and somehow the brain stops glossing over/skimming the code and starts to pay more attention.

I found it led to a marginal difference at best, as with most strategies. It does work well initially though; I guess the brain can no longer use the colors it's used to for grouping things together and the like.


Along similar lines, sometimes I print out code on paper and make notes with a pen, sitting far away from my computer. Of course there's no, or very minimal, syntax highlighting then.


A bit extreme on my end, but I've got the spare hardware for it - when I get into a rut I change operating systems, so I'm bouncing back and forth between macOS and Windows or Linux.

I'm adept at using both but the change adds just enough friction and visual differences to spark creativity, or a little productivity boost.


I have syntax set to off in my .vimrc, so there's no syntax highlighting at all. And it's off all the time, not just while typing: even while moving through or changing code, or just reading it.

I find it much better that way.


Thanks for sharing. I just added Smarter Faster Better to my reading list, along with Annie Murphy Paul’s The Extended Mind.

Both seem to explore how breaking out of autopilot can unlock better thinking, which is exactly what I’ve been noticing in my own routines.



Interesting. For me it's the opposite, e.g. just changing keyboard layouts between PC and Mac breaks my brain hard, and I feel useless.


I experience something similar, and I think of discomfort tolerance like a muscle: the more I (am forced to) use it, the less strain I experience when using it.

I am naturally prone to optimizing friction away--autistic engineer--but have come to realize regularly putting myself in uncomfortable positions professionally and personally works for me as a form of exposure therapy.

Nowadays, in the event I'm thrust into such an unfamiliar situation against my will, I'm still functional.

An enormously valuable knock-on effect was coming to the realization that the things I enjoy most in life are those which came as a surprise, and which I would simply have avoided had I not been intentional about pushing my own boundaries.


When I was dual booting my Mac between macOS and Windows I used to swap the keyboard and mouse at the same time. I found it helped with handling the differences between the two operating systems.


This is a somewhat wasteful one, but when I really really can't focus or make progress on untangling an issue or if I just want to fully understand a file, I will print out my code on paper and go through it with a red pen line by line. It's rare, but it works just like editing an essay. I notice things I wouldn't otherwise.


That used to be common some years ago, with or without a red pen or any pen.


I’m curious about this as well, especially since all coding assistants I’ve used truncate long before 1M tokens.


Something I’ve found true of Claude, but not other models, is that when the benchmarks are better, the real world performance is better. This makes me trust them a lot more and keeps me coming back.


I've actually shifted most of my personal dev to Rust now, and so this vtable hack has become less relevant. Rust makes it very easy to avoid virtual functions. If I had to redo this bit of code in Rust, a trait would boil any sort of update function down to its concrete type and give (I'd expect) great performance with all the convenience of virtual functions.
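
For the curious, here's a rough sketch of the kind of thing I mean; the Particle type and the update signature are placeholders I made up, not code from the article:

  // Hypothetical example, not the real codebase.
  trait Update {
      fn update(&mut self, dt: f32);
  }

  #[derive(Clone)]
  struct Particle {
      x: f32,
      vx: f32,
  }

  impl Update for Particle {
      fn update(&mut self, dt: f32) {
          self.x += self.vx * dt;
      }
  }

  // Generic over the concrete type: monomorphized, so the call is direct
  // (and usually inlined) - no vtable lookup.
  fn step_all<T: Update>(items: &mut [T], dt: f32) {
      for item in items {
          item.update(dt);
      }
  }

  // The dynamic-dispatch version, for comparison; this one does go through
  // a vtable, much like C++ virtual functions.
  fn step_all_dyn(items: &mut [Box<dyn Update>], dt: f32) {
      for item in items {
          item.update(dt);
      }
  }

  fn main() {
      let mut particles = vec![Particle { x: 0.0, vx: 1.0 }; 3];
      step_all(&mut particles, 0.016);

      let mut boxed: Vec<Box<dyn Update>> = Vec::new();
      boxed.push(Box::new(Particle { x: 0.0, vx: 1.0 }));
      step_all_dyn(&mut boxed, 0.016);

      println!("{}", particles[0].x);
  }

The generic version is what I meant by boiling the update function down to its concrete type, while the Box<dyn> version is the closest analogue to the C++ vtable approach.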

