Hacker News | luxcem's comments

At least for the time being, AI "workers" belong to someone. That person is represented and pays taxes.


I've been using Django for the last 10+ years; its ORM is good-ish. At some point there was a trend to use SQLAlchemy instead, but it was not worth the effort. The Manager interface is also quite confusing at first. What I find really great is the migration tool.
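For the curious, a minimal sketch of the Manager pattern that confuses people at first (model and field names here are made up): a Manager is the query entry point hanging off the model class, and you can attach several of them.

  # Hypothetical models.py: a custom Manager bakes in a reusable filter.
  from django.db import models

  class PublishedManager(models.Manager):
      def get_queryset(self):
          # Narrow the default queryset to published rows only.
          return super().get_queryset().filter(is_published=True)

  class Article(models.Model):
      title = models.CharField(max_length=200)
      is_published = models.BooleanField(default=False)

      objects = models.Manager()      # default manager: all rows
      published = PublishedManager()  # Article.published.all() -> published rows

The migration tool is the makemigrations/migrate pair: edit the model, and Django diffs it against the recorded migration history and writes the schema change for you.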


Since Django gained mature native migrations there is a lot less point to using SQLAlchemy in a Django project, though SQLAlchemy is undeniably the superior and far more capable ORM. That should be unsurprising though: SQLAlchemy is more code than the entire django package, SQLAlchemy + Alembic is roughly five times as many LOC as django.db, and both are code of similar "density".
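To make the contrast concrete, a rough SQLAlchemy 2.0-style sketch of the same kind of query as the Manager example above (table and column names again made up). Everything is explicit (engine, session, select), which is both where the extra capability lives and where the learning curve comes from:

  # Hypothetical SQLAlchemy 2.0-style equivalent of the Manager query above.
  from sqlalchemy import create_engine, select
  from sqlalchemy.orm import DeclarativeBase, Mapped, mapped_column, Session

  class Base(DeclarativeBase):
      pass

  class Article(Base):
      __tablename__ = "article"
      id: Mapped[int] = mapped_column(primary_key=True)
      title: Mapped[str]
      is_published: Mapped[bool] = mapped_column(default=False)

  engine = create_engine("sqlite:///demo.db")
  Base.metadata.create_all(engine)

  with Session(engine) as session:
      # No implicit default manager: you build the SELECT yourself.
      published = session.scalars(
          select(Article).where(Article.is_published.is_(True))
      ).all()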


Makes sense as sqlalchemy’s docs are also 5x as confusing.


Organize an in-person meeting and proceed to a Voight-Kampff test.


It's called Symbiogenesis [0] and it's not at all a wild theory. But it's limited to cell components, not multiple organs fusing to create something as complex as a mammal.

[0] https://en.wikipedia.org/wiki/Symbiogenesis


> but I can't put my finger on why

For me it's the contrast between the absolutely tone-deaf messages of the PR author and the patience, maturity, and guidance in the maintainers' messages.


The whole issue, as clearly explained by the maintainers, isn't that the code is incorrect or not useful, it's the transfer of the burden of maintaining this large codebase to someone else. Basically: “I have this huge AI-generated pile of code that I haven't fully read, understood, or tested. Could you review, maintain, and fix it for me?”


It's a tell, a common language quirk of LLMs, especially ChatGPT:

- a slow-loading app isn’t just an annoyance. It’s a liability.

- The real performance story isn’t splitting hairs over 3ms differences, it’s the massive gap between next-gen and React/Angular

- The difference [...] isn’t academic. It’s the difference between an app that feels professional and one that makes our users look bad in front of clients.

- This isn’t a todo list with hardcoded arrays. It’s a real app with database persistence.

- This isn’t just an inconvenience. It’s technofeudalism.

- “We only know React” isn’t a technical constraint, it’s a learning investment decision.

- The real difficulty isn’t learning curve, it’s creating an engineering culture.

- This isn’t some toy todo list. It’s a solid mid-complexity app with real database persistence using SQLite.

- The App Store isn’t a marketplace, it’s a fiefdom.


You don't even need training data; a bot that plays against itself à la AlphaZero will eventually generate more data than there are actual games.
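A toy version of that loop, with random-policy Nim standing in for the real thing (AlphaZero pairs self-play with MCTS and a learned network, which this sketch deliberately omits):

  # Toy self-play data generator: every run mints new positions
  # from nothing but the rules of the game.
  import random

  def self_play_nim(stones=15):
      # Play one game of Nim against itself; record (position, move) pairs.
      history = []
      while stones > 0:
          move = random.choice([m for m in (1, 2, 3) if m <= stones])
          history.append((stones, move))
          stones -= move
      return history

  dataset = [pair for _ in range(1000) for pair in self_play_nim()]
  print(len(dataset), "positions generated without a single human game")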


From [1],

  I asked a few students to read aloud the titles of some essays they’d submitted that morning.  
  For homework, I had asked them to use AI to propose a topic for the midterm essay. Most students had reported that the AI-generated essay topics were fine, even good. Some students said that they liked the AI’s topic more than their own human-generated topics. But the students hadn’t compared notes: only I had seen every single AI topic.  
  Here are some of the essay topics I had them read aloud:

  Navigating the Digital Age: How Technology Shapes Our Social Lives, Learning, and Well-Being  
  Navigating the Digital Age: A Personal Reflection on Technology  
  Navigating the Digital Age: A Personal and Peer Perspective on Technology’s Role in Our Lives  
  Navigating Connection: An Exploration of Personal Relationships with Technology  
  From Connection to Disconnection: How Technology Shapes Our Social Lives  
  From Connection to Distraction: How Technology Shapes Our Social and Academic Lives  
  From Connection to Distraction: Navigating a Love-Hate Relationship with Technology  
  Between Connection and Distraction: Navigating the Role of Technology in Our Lives  

  I expected them to laugh, but they sat in silence. When they did finally speak, I am happy to say that it bothered them. They didn’t like hearing how their AI-generated submissions, in which they’d clearly felt some personal stake, amounted to a big bowl of bland, flavorless word salad.

[1] https://lithub.com/what-happened-when-i-tried-to-replace-mys...


This also happens with cover letters and CVs in recruiting now. Even if the HR person is not the brightest bulb, they figure out the MO after reading 5 cover letters in a row that all more or less tell the same story.


CVs were always BS tho - on both sides.


Yeah I've been trying to write a short press bio for a musical project recently and it's next to impossible not to make it sound AI generated.


I will tell you my cover letter secret*, which has gotten me a disproportionate number of interviews**:

Do NOT write a professional cover letter. Crack a joke. Use quirky language. Be overly familiar. A dash of TMI. Do NOT think about what you are going to say, just write a bunch of crazy-pants. Once your intro is too long, cut the fat. Now add professional stuff. You are not writing a cover letter, you are writing a caricature of a cover letter.

You just made the recruiter/HR/person doing interviews smile***. They remember your cover letter. In fact they repeat your objectively-unprofessional-yet-insightful joke to somebody else. You get the call. You are hired.

This will turn off some employers. You didn't want to work for them anyway.

* admittedly I have not sought work via resume in more than 15 years. ymmv

** Once a friend found a cover letter I had written in somebody's corp blog titled "Either the best or worst cover letter of all time" (or words to that effect). In it I had claimed that I could get the first 80% of their work done on schedule, but that the second 80% and third 80% would require unknown additional time. (note: I did not get the call)

*** unless they are using AI to read cover letters, but I repeat: you didn't want to work for them anyway.


If these topics are word salad, colleges might have been training word saucier chefs way before GPT-2 became a thing.


It's not just that it's word salad, it's also that it's exactly the same. There's a multi-trillion dollar attempt to replace your individuality with bland amorphous slop """content""". This doesn't bother you in the slightest?


I now have a visceral reaction to being told that I'm ABSOLUTELY RIGHT!, for example. It seemed an innocuous phrase before -- rather like em dashes -- but has now become grating and meaningless. Robotic and no longer human.


I'm launching a new service to tell people that they are absolutely, 100% wrong. That what they are considering is a terrible idea, has been done before, and will never work.

Possibly I can outsource the work to HN comments :)


This sounds like a terrible idea that has been done before and will never work.


BrandonM as a Service?


After the reply to his infamous comment¹, BrandonM responded with “You are correct”.

https://news.ycombinator.com/item?id=9224

¹ Which people really should read in full and consider all the context. https://news.ycombinator.com/item?id=27068148


Yeah BrandonM got unfairly maligned but it's still funny.


You're exactly right, this really gets to the heart of the issue and demonstrates that you're already thinking like a linguist.


For what most of us are using it for (generating code), that's not a bad outcome. This audience might have less of a problem with it than the general population.

Whether we have the discipline to limit our use of the tool to its strengths... well, I doubt it. Just look at how social media turned out.

(Idle thought: I wonder if a model fine-tuned on one specific author would give more "original" titles).


This is the default setting. The true test would be if LLMs CAN'T produce distinct outputs. I think this problem can be solved by prompt engineering. Has anyone tried this with Kimi K2?
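If anyone wants to run that test, here is a hedged sketch using the openai Python client against any OpenAI-compatible endpoint (Kimi K2 is usually served behind one; the model name and prompt below are placeholders, not recommendations):

  # Hypothetical diversity probe: n independent samples at high temperature,
  # with the prompt explicitly banning the house style.
  from openai import OpenAI

  client = OpenAI()  # assumes OPENAI_API_KEY (or a compatible base_url) is set
  resp = client.chat.completions.create(
      model="gpt-4o-mini",  # placeholder; swap in the model under test
      temperature=1.2,      # push sampling away from the most likely phrasing
      n=8,                  # eight independent completions to compare
      messages=[{
          "role": "user",
          "content": "Propose one midterm essay topic about our relationship "
                     "to technology. No 'Navigating X: Y' style titles, and "
                     "do not use the words connection or distraction.",
      }],
  )
  for choice in resp.choices:
      print(choice.message.content)

If the eight answers still rhyme with each other, that says more than any single sample would.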


I had the exact same thought! Wow!

Now tell me, which one of us is redundant?


This was on HN's frontpage previously too; I immediately thought that this comic would say more or less the same thing. Perhaps both came from an AI? :D

But in another paragraph, the article says that the teacher and the students also failed to detect an AI-generated piece.

The ending of the comic is a bit anti-climactic (aside from the fact that one can see it coming), as similarities between creations are not uncommon. Endings, guitar riffs, and whole styles have been invented twice independently. For instance, the mystery genre was apparently created independently by Doyle and Poe (Poe, BTW, in The Philosophy of Composition [1], also claims that good authors start from the ending).

Two pieces being similar because they come from the same AI, versus because two authors were inspired and influenced by the same things and didn't know about each other's works: the difference is thin. An extrapolation of this topic is the sci-fi trope (e.g. Beatless [2]) about whether or not the emotions that an android simulates are real. But this is still sci-fi; current AIs are good con artists at best.

[1] https://en.wikipedia.org/wiki/The_Philosophy_of_Composition

[2] https://en.wikipedia.org/wiki/Beatless


So if I understand it correctly, they only asked for a midterm essay topic? It wasn't steered towards these topics in any way, for instance by asking for a midterm essay topic for (teacher)'s Technology and Society class?


I don't get this in the comic either: Why are you devastated that the idea you copied word-for-word is unoriginal? I don't understand what they expected.


If it seems obvious from where you are, then the target audience must not be where you are. In particular young students definitely lack context to critique and a big anonymous sampling like this is a great exercise.


I can understand not realizing that ChatGPT would give a bunch of similar sounding article titles to everyone, and I can understand being a little embarrassed that you didn't realize that. But why would you feel a "personal stake" in the output of an LLM? If you feel personal stake in something, you definitely should not be using an LLM for it.


Again, the statement "if you feel a personal stake in something, you definitely should not be using an LLM for it" is a learned response. To folks just forming their brains, LLMs are a natural extension of technology. Like PaulG said, his kid was unimpressed because "Of course the computer answers questions, that's what it does".

The subtlety of it, and the "obvious" limitations of it, are something we either know because we grew up watching tech over decades, or were just naturally cynical and mistrusting and guessed right this time. Hard earned wisdom or a broken clock being right this time, either way, that's not the default teenager.


Because you thought that you had collaborated with the LLM, not that it had fed you ideas. Have you and a partner both believed you contributed more than 50% of a project's work? Like that.


This isn't an inherent property of LLMs, it's something they have been specifically trained to do. The vast majority of users want safe, bland, derivative results for the vast majority of prompts. It isn't particularly difficult to coax an LLM into giving batshit insane responses, but that wouldn't be a sensible default for a chatbot.


I think, more so than the users, it is the companies running the LLMs themselves who want the responses to be safe, so as not to jeopardize their brand.


The very early results for "watercolour of X" were quite nice. Amateurish, loose, sloppy. Interesting. Today's are... well, every single one looks like it came off a chocolate box. There's definitely been a trend towards a corporate-friendly aesthetic. A narrowing.


Are you sure? Yes, LLMs can be irrelevant and incoherent. But people seem to produce results that are more variable even when staying relevant and coherent (and "uncreative").


the business wants it this way, not the user.


That's a cute story. I asked ChatGPT to suggest "a topic for a midterm essay that addresses our relationship to technology", since that was all the information he gave us. It came up with:

  The Double-Edged Sword: How Technology Both Enhances and Erodes Human Connection
  The Illusion of Control: How Technology Shapes Our Perception of Autonomy
  From Cyberspace to Real Space: The Impact of Virtual Reality on Identity and Human Experience
  Digital Detox: The Human Need for Technology-Free Spaces in an Always-Connected World
  Surveillance Society: How Technology Shapes Our Notions of Privacy and Freedom
  Technology and the Future of Work: Human Adaptation in the Age of Automation
  The Techno-Optimism Fallacy: Is Technology Really the Solution to Our Problems?
  The Digital Divide: How Access to Technology Shapes Social Inequality
  Humanizing Machines: Can Artificial Intelligence Ever Understand the Complexity of Human Emotion?
  The Ethics of Technological Advancements: Who Decides What Is ‘Ethically Acceptable’?

They're still pretty samey and sloppy, and the pattern of Punchy Title: Explanatory Caption is evident, so there's clearly some truth to it. But I wonder if he hasn't enhanced his results a little bit.


I think he picked the most similar ones out of all the submissions from the entire class. But also, if you generate a list, maybe the AI ensures some diversity in that list, but if every student generates the same list, that still shows a lack of originality.


Or the students have enhanced the results by picking the very samey outcomes out of a more varied pool of suggestions.


I think you're just proving the point with these examples.


If you really want to use encryption under a state where it's forbidden and communications are monitored, you'd rather hide your encrypted messages inside cat pictures and TikTok videos, because an obviously encrypted blob might trigger warnings and draw attention.

In the end it's not about making encryption technically impossible but illegal, and if you use it you'll be prosecuted.
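For illustration, the classic toy version of the cat-picture route is least-significant-bit steganography. The sketch below uses Pillow; the file names are placeholders, a real setup would encrypt the payload before embedding, and PNG matters because JPEG recompression destroys the low bits:

  # Toy LSB steganography: hide bytes in the low bit of each RGB channel.
  from PIL import Image

  def embed(png_in, png_out, payload: bytes):
      # Payload is NUL-terminated, so it must contain no 0x00 bytes itself.
      img = Image.open(png_in).convert("RGB")
      bits = ((byte >> i) & 1 for byte in payload + b"\x00" for i in range(8))
      # Overwrite each channel's lowest bit; the default keeps the original bit.
      img.putdata([tuple((c & ~1) | next(bits, c & 1) for c in px)
                   for px in img.getdata()])
      img.save(png_out)

  def extract(png_path) -> bytes:
      bits = [c & 1 for px in Image.open(png_path).convert("RGB").getdata()
              for c in px]
      out = bytearray()
      for i in range(0, len(bits) - 7, 8):
          byte = sum(bit << j for j, bit in enumerate(bits[i:i + 8]))
          if byte == 0:
              break
          out.append(byte)
      return bytes(out)

  embed("cat.png", "cat_with_note.png", b"meet at dawn")
  print(extract("cat_with_note.png"))  # b'meet at dawn'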


Me personally, I will use chained encoding because technically and legally that is not encryption. I am fine with drawing attention. If my adversaries wish to spend a gazillion mega-bucks trying to win the arms race of decoding my chained encoding to see my mid-wit comments and pictures of a moose, then I am doing a good job. When they change the laws to prevent encoding then we move on to another technique. There are nearly infinite ways to limit communication to a group of people and evade fuzzy scans.
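Concretely, here is roughly what such a chain looks like (the particular steps are arbitrary; the point is that the recipe, shared out of band, is the secret, and none of the individual steps is encryption):

  # Arbitrary encoding chain: layered reversible transforms, no key.
  import base64, codecs, zlib

  def encode(text: str) -> bytes:
      data = codecs.encode(text, "rot13").encode()
      data = zlib.compress(data)
      data = base64.b85encode(data)
      return base64.b32encode(data)

  def decode(blob: bytes) -> str:
      data = base64.b32decode(blob)
      data = base64.b85decode(data)
      data = zlib.decompress(data)
      return codecs.decode(data.decode(), "rot13")

  print(decode(encode("mid-wit comment, moose picture attached")))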


Respectfully, you completely miss the point. You personally, you will still be able to use proper encryption.

It's mainstream platforms who won't be. Those platforms will be mandated to scan their own communications.

There is absolutely no reason to do this weird encoding stuff. Nobody says that it is illegal to encrypt stuff properly.


Well in that case I will use my silly chained encoding and their fuzzy scans of files on the mainstream platforms will have to figure out what to do with it.

For what it's worth, I myself do not use these platforms. I just want to get people thinking about mitigating options. I use my own self-hosted forums, chat servers, SFTP servers, chan servers, voice chat servers, and so on. Even then it can be useful to obfuscate text and files in the event someone is using a fondle-slab. I try to discourage fondle-slabs.


> I just want to get people thinking about mitigating options.

You should refrain from making dangerous suggestions, I think. Some people may actually need proper encryption.


My suggestion has always been to use PGP or OTR for individual messages or individual files, and dm-crypt plain mode with a random cipher/hash/mode combo for filesystems, using a 240-to-480-character passphrase, which can also be layered and chained.

This is just an alternative if people believe they are not permitted to encrypt something. The threat vector in this topic is fuzzy scanning, local and remote; ChatControl uses fuzzy scanning. Encoding can do just as good a job of mitigating fuzzy scans as any level of encryption, and even manual intervention should take a lot of effort, on the order of brute-forcing a simple encryption password. If we are being honest, encrypted files are most often protected by a weak password, the cipher/hash are already disclosed, and the key space is usually small. LUKS, for example, discloses cipher, hash, and mode, making brute force just a factor of compute power. If an app is chain-encoding and the chain is shared out of band, I suspect it will take orders of magnitude more compute time to cycle through every possible combination of encoding and compression.
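Back-of-envelope arithmetic for that claim, with made-up but plausible counts (this is illustrative, not a survey of what dm-crypt actually ships):

  # Illustrative combinatorics: an undisclosed recipe multiplies the work.
  ciphers, hashes, modes = 10, 8, 5  # hypothetical dm-crypt plain options
  encodings, max_chain = 12, 6       # hypothetical encoding steps / chain length

  luks_combos = 1                    # cipher/hash/mode readable in the header
  plain_combos = ciphers * hashes * modes
  chain_combos = sum(encodings ** n for n in range(1, max_chain + 1))

  print(f"LUKS recipe guesses: {luks_combos}")
  print(f"dm-crypt plain recipe guesses: {plain_combos}")
  print(f"encoding chains up to length {max_chain}: {chain_combos:,}")

For dm-crypt plain, each recipe guess still has to be tested against the whole passphrase search, so the factors compound rather than add.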

For fun has anyone decoded my simple message in the thread?

