injidup's comments | Hacker News

That is the decision of artists who sign with a mega corp. Any Tom, Dick, or Harry can create a Spotify account, load up their warbling autotuned ditty, written by themselves (or AI), on any theme, in any genre, and wait for fame or fortune to appear or not. You can take your 70%, or whatever the exact number is, with no middleman if you like.

Unfortunately, the number of people producing music, and the quantity they produce, is far greater than the number of people able to consume it. And culture is simply network effects: you listen to what your friends and family listen to. Thus only a small number of artists ever make it big in a cultural sense.

And one of the cheat codes for cracking the cultural barrier is to use a mega corp to advertise for you, but of course the devil takes his cut.

Anyway, AI is coming for all these mega corps. If you haven't tried Suno (and many of you have), it's amazing how convincingly it can crack specific genres and churn out quality music. Call it slop if you like, but the trajectory is obvious.

As a consumer you will get your own custom music feed, singing songs about YOUR life or desired life. You will share those on your social media accounts; some will go viral, most will die.

Content creation as a career is probably dead.


(a) You can’t directly upload to Spotify. You need an intermediary in the shape of a distributor, whether that’s a label or a DIY platform like DistroKid.

(b) Spotify introduced a threshold of 1,000 streams before they pay anything. This disincentivises low-quality warbling autotuned ditties, as they are unlikely to pass that threshold. (It’s more nuanced: 1,000 streams from a handful of accounts won’t count, as that could easily be gamed.)

(c) Suno and Udio have been forced into licensing deals with the major record companies. The real threat will come when we see an open-source, Qwen- or DeepSeek-style genAI for music creation.


> Any tom dick or harry can create a Spotify account, load their warbling autotuned ditty written by themselves ( or AI ) on any theme, in any genre and wait for fame or fortune to appear or not

No, you literally can't.


A 2D sketcher with constraints is kind of similar. For example, take the equation

A = B + C

where A, B, and C are the lengths of three parallel lines. Within the sketcher you can drag the length of any one of those lines and the other two will adjust to keep the constraint satisfied.
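A minimal sketch of that idea (my own toy code, not from any real CAD kernel): a single-constraint solver that keeps A = B + C whichever length the user drags.

```python
# Toy parametric solver for the single constraint A = B + C.
# Hypothetical names; a real sketcher solves many simultaneous constraints.
def solve(lengths, changed):
    """Adjust the other lengths so A == B + C still holds.

    lengths: dict with keys "A", "B", "C"; changed: the key the user dragged.
    Strategy here: if A was dragged, keep B fixed and let C absorb the change;
    if B or C was dragged, recompute the sum A.
    """
    if changed == "A":
        lengths["C"] = lengths["A"] - lengths["B"]
    else:
        lengths["A"] = lengths["B"] + lengths["C"]
    return lengths

state = {"A": 10.0, "B": 6.0, "C": 4.0}
# User drags B from 6 to 7; A adjusts to keep the constraint.
print(solve(dict(state, B=7.0), "B"))  # {'A': 11.0, 'B': 7.0, 'C': 4.0}
```

Which variable "absorbs" a change is itself a design decision; real sketchers pick based on which parameters are pinned.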


Yes! I'd really like to make something graphical in this same idea space next. See g9.js for example, or parametric CAD software like FreeCAD, which kinda does what you said.

Most “AI is rubbish” takes treat it as an open-loop system: prompt → code → judgment. That’s not how development works. Even humans can’t read a spec, dump code, and ship it. Real work is closed-loop: test, compare to spec, refine, repeat. AI shines in that iterative feedback cycle, which is where these critiques miss the point.


What’s tough for me is figuring out where people are realizing significant improvements from this.

If you have to set up good tests [edit: and gather/generate good test data!] and get the spec hammered out in detail and well-described in writing, plus all the ancillary stuff like access to any systems you need, sign-offs from stakeholders… dude that’s more than 90% of the work, I’d say. I mean fuck, lots of places just skip half that and figure it out in the code as they go.

How’s this meaningfully speeding things up?


Why the word "sustainable" in here? It's like every product pitch these days needs the word "sustainable" in it to pass legal.


It's simple. Booking.com will fuck you over and have all sorts of fine print to cover themselves. However, I can simply recommend that if they do something like cancel a confirmed booking, don't bother contacting customer support. Simply get on Facebook and start swearing and causing a huge fuss till they sort it out. They will tell you 100 times that they are very sorry, and they would love to help, but they just can't, and they feel horrible about it all, but "the policy" forbids them doing anything that could smell like genuine customer service. Simply raise the temperature of agitation, just as this customer did, and eventually Booking.com will buckle.

I had exactly the same case. I had a non-cancellable room booked for an event, and a week or two before the event it was cancelled. Booking tried to claim they were not an agent, that they were not part of the contract, and that they cared very deeply. Customer support in English cost €1 per minute and they kept putting me on hold. Eventually I just went to Facebook and asked GPT to generate incrementally more and more offensive posts directed at their social media account. It's much cheaper than their customer support line, and it actually reaches someone who can do something.


Opens instantly on my machine. It takes the same amount of time as neovim.

Now if you want to complain about something, then VS Code takes 12 seconds to load.


You take a photo of an AI-generated photo. What's your proof worth then?


Yes, IIRC if you measure an image signal (here: the displayed image) with a sensor at twice its resolution, there won't be any artifacts.

Nyquist–Shannon sampling theorem.

But if the Sony sensor also measures depth information, this attack vector will fall flat. Pun intended.
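A toy 1-D illustration of the sampling-theorem point (my own sketch, nothing to do with the Sony sensor): a sine at frequency f is only captured faithfully when sampled above 2f; at or below that rate it aliases.

```python
import math

def sample(freq_hz, sample_rate_hz, n=8):
    """Sample a unit sine of the given frequency at the given rate."""
    return [math.sin(2 * math.pi * freq_hz * i / sample_rate_hz)
            for i in range(n)]

f = 10.0
ok = sample(f, 4 * f)   # 40 Hz sampling: above the 20 Hz Nyquist rate
aliased = sample(f, f)  # 10 Hz sampling: every sample lands at the same phase

# The 10 Hz tone aliases to a constant (DC) signal: all samples are ~0.
print(all(abs(v) < 1e-9 for v in aliased))  # True
```

The 2-D analogue is the moiré pattern you see when a camera re-photographs a screen whose pixel grid beats against the sensor grid.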


When rocking your Meta Ray-Ban McDonald's Tesla XR AR 0009fNG Plus Reality Engine contact lens implants, it will be important to cross-reference your experiences with what really happened.


Yep this is coming soon. You'll be required to own and operate wearables to participate in the social web, or post photos anywhere.


I have this annoying problem. When not plugged into my car I can say

"Hey google remind me to call in sick tomorrow"

and it will put an entry into the calendar for me.

But if it is plugged into my car the same request will elicit an

"I'm sorry please talk to your system administrator to get this feature enabled"

This has gone on for years with Google Assistant and Workspace accounts. There is always some random interlock in some byzantine permissions table that gets triggered at the most inconvenient moment, and you can't do what you could do yesterday.


Shouldn't you be getting the LLM to also generate test cases to drive the code, and enforcing coding standards on the LLM so it generates small, easily comprehensible software modules with high-quality inline documentation?

Is this something people are doing?


The problem is similar to that of journalism vs social media hoaxes.

An LLM-assisted engineer writes code faster than a careful person can review it.

Eventually the careful engineers get run over by the sheer amount of work to check, and code starts passing reviews when it shouldn’t.

It sounds obvious that careless work is faster than careful work, but there are psychological issues in play: management's expectation of AI as a speed multiplier, personal interest in being perceived as someone who delivers fast, engineers' concern about being seen as a bottleneck for others…


> expectation by management of ai as a speed multiplier

In many cases, it's more than expectation. For top management especially, these are the people who have signed off on massive AI spending on the basis that it will improve productivity. Any evidence to the contrary is not just counter to their expectations - it's a giant flashing neon sign screaming "YOU FUCKED UP". So of course organizations run by those people are going to pretend that everything is fine, for as long as anything works at all.

And then the other side of this is the users. Who have already been conditioned to shrug at crappy software because we made that the norm, and because the tech market has so many market-dominant players or even outright monopolies in various niches that users often don't have a meaningful choice. Which is a perfect setup for slowly boiling the frog - even if AI is used to produce sloppy code, the frog is already used to hot water, and already convinced that there's no way out of the pot in any case, so if it gets hotter still they just rant about it but keep buying the product.

Which is to say, it is a shitshow, but it's a shitshow that can continue for longer than most engineers have the emotional capacity to sustain without breaking down. In the long term, I expect AI coding in this environment to act as a filter: it will push the people who care about quality and polish out of the industry, and reward those who treat clicking "approved" on AI slop as their real job description.


I have no issue getting LLMs to generate documentation, modular designs, or test cases. Test cases require some care; just like humans, LLMs are prone to making the same mistake in both the code and the tests, and they are particularly prone to not understanding whether it's the test or the code that's wrong. But those are solvable problems.

The thing I struggle with more when I use LLMs to generate entire features with limited guidance (so far only in hobby projects) is the LLM duplicating functionality or not sticking to existing abstractions. For example, if in existing code A calls B to get some data, and now you need to do some additional work on that data (e.g. enriching or verifying it), that change could be made in A, made in B, or you could make a new B2 that is just like B but with that slight tweak. Each of those could be appropriate, and LLMs sometimes make hilariously bad calls here.
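To make the three options concrete, here's a hypothetical sketch (all names invented) of where a new "verify the data" step could land when A currently calls B:

```python
def fetch_records():  # "B": the existing data source
    return [{"id": 1}, {"id": 2}]

def verify(records):  # the new work that needs to happen somewhere
    return [r for r in records if "id" in r]

# Option 1: change A, the caller verifies after fetching.
def report():  # "A"
    return verify(fetch_records())

# Option 2: change B, fold verification into fetch_records itself,
# which silently affects every other caller of B.

# Option 3: add "B2", a near-duplicate of B with the tweak. This is the
# call LLMs often make, and it's how functionality quietly gets duplicated.
def fetch_verified_records():  # "B2"
    return verify(fetch_records())

print(report())  # [{'id': 1}, {'id': 2}]
```

None of the three is always wrong; the point is that choosing between them requires knowing the codebase's conventions and B's other callers, which is exactly the context an LLM with limited guidance lacks.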


The LLM will generate test cases that do not test anything or falsely flag the test as passing, which means you need to deeply review and understand the tests as well as the code it's testing. Which goes back to the point in the article, again.


>which means you need to deeply review and understand the tests as well as the code it's testing

Yes...? Why wouldn't you always do this, LLM or not?


I came back to this. NOBODY mentioned speckit!

https://github.com/github/spec-kit

""" Spec-Driven Development flips the script on traditional software development. For decades, code has been king — specifications were just scaffolding we built and discarded once the "real work" of coding began. Spec-Driven Development changes this: specifications become executable, directly generating working implementations rather than just guiding them. """

The takeaway is that instead of vibecoding you write specs and you get the LLM to align the generated code to the specs.


I just wrote a reply elsewhere, but we got a new vibe-coded (marketing) website. How is an LLM going to write test cases for that? And what good will they do? I assume it will also change the test cases when you ask it to rewrite things.


> How is an LLM going to write test cases for that?

"Please generate unit tests for the website that exercise documented functionality" into the LLM used to generate the website should do it.


The people who are doing that aren't writing these blog posts. They're writing much better code & faster, while quietly internally panicking a bit about the future.


You're probably talking about the infamous Dark Matter Developers [1]. When the term was coined, I thought there were many of them; now, seeing how many developers are here on HN (including myself), I doubt there are many left /s.

The quote that is interesting in the context of fast-paced LLM development is this:

> The Dark Matter Developer will never read this blog post because they are getting work done using tech from ten years ago and that's totally OK

[1] https://www.hanselman.com/blog/dark-matter-developers-the-un...

