In order to make better AI tools for generating specific parts of a song, you ideally want models that understand what good music sounds like when put together. These "generate whole songs" models are a predecessor to more specific tooling. These tools are slowly moving downstream (look at the evolution of Suno) and will almost certainly end up as just one part of the music production workflow. We increasingly have better tools to break full tracks down into stems and to convert stems to and from MIDI and lyrics.
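
As a rough sketch of what that stem/MIDI tooling looks like in practice, here's a minimal example assuming demucs v4's Python API and Spotify's basic-pitch; the filenames are placeholders and exact signatures may vary by version:

    # Sketch: full track -> stems -> MIDI. Assumes `pip install demucs basic-pitch`
    # and that the demucs v4 Python API (demucs.api) is available.
    import demucs.api
    from basic_pitch.inference import predict

    # Separate a full mix into stems (drums, bass, vocals, other).
    separator = demucs.api.Separator(model="htdemucs")
    _, stems = separator.separate_audio_file("full_track.mp3")  # placeholder filename
    for name, wave in stems.items():
        demucs.api.save_audio(wave, f"{name}.wav", samplerate=separator.samplerate)

    # Transcribe one stem to MIDI for further editing in a DAW.
    # predict() returns (model_output, PrettyMIDI object, note_events).
    _, midi_data, _ = predict("other.wav")
    midi_data.write("other.mid")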

There are lots of potential musicians/producers who can write a catchy tune, pen lyrics, do MIDI work, etc., but maybe can't play or don't own the instruments they want to use (they could be disabled), or don't have a great singing voice. These AI tools lower the bar so more people can create music at a higher level. They can also act as an improvisational partner, letting you explore more of the musical space faster.

As a personal anecdote of where AI might be useful: as a hobby I occasionally participate in game jams, sometimes working on music/sound effects to stretch my legs away from my day job. One game jam game I worked on was inspired by a teammate's childhood in Poland, so I listened to a bunch of traditional Polish music and created a track inspired by it. I'm pretty happy with how it came out, but with current AI I'm sure I could have improved the results significantly. If I were making it now, I could upload the tracks I wrote, see how the AI might bring them closer to something that sounds authentic, and use that to help me rewrite the parts of the melody that were lacking. Then I could pipe in my final melody with its inauthentic MIDI instrument (I neither own nor play traditional Polish stringed instruments) and use the AI to produce something much closer to my target, with a more organic feel.


