
DALL-E won't steal jobs. But Stable Diffusion (a publicly available model similar to DALL-E) will flood the job market once people bolt on various GUIs. E.g., https://www.reddit.com/r/StableDiffusion/comments/wyduk1/sho... is a Photoshop plugin that uses Stable Diffusion as a backend for interactive image generation. Stuff like this will make it much easier for more people to become >90th-percentile illustrators (as defined today). This means the market for many digital arts degrees is going to become glutted and salaries are going to crash like a rock.
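
For a rough sense of what such a plugin sits on top of, here is a minimal sketch of the kind of inpainting call a GUI could make under the hood. This assumes the Hugging Face diffusers library; the model ID, prompt, and file names are illustrative, not what that particular plugin actually does:

  # Hypothetical backend call a plugin could make (diffusers inpainting).
  import torch
  from PIL import Image
  from diffusers import StableDiffusionInpaintPipeline

  pipe = StableDiffusionInpaintPipeline.from_pretrained(
      "runwayml/stable-diffusion-inpainting", torch_dtype=torch.float16
  ).to("cuda")

  canvas = Image.open("canvas.png").convert("RGB")  # current artwork
  mask = Image.open("mask.png").convert("RGB")      # white = region to regenerate

  result = pipe(
      prompt="watercolor forest clearing, soft morning light",
      image=canvas,
      mask_image=mask,
  ).images[0]
  result.save("canvas_filled.png")

The GUI's job is mostly selection and masking; the heavy lifting is a single call like this behind the scenes.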


The standard could also change. It has before. Today's tools already let countless people do things that would've required huge expense three decades ago. We've recently been in a fairly grounded/realistic style in many types of commercial art; maybe more imaginative imagery will start to grab the eye if this more clip-art-y sort of thing becomes easier and easier. (Imaginative in the "not along the existing trends" sense, not just purely "nothing like this has ever been conceived before.")

A DIY website today can look better than the most professional one from 20 years ago, and that took away the jobs of folks who just wrote HTML to lay things out on a page. But very few people want to put "the same level of thing that anyone without any artistic talent can whip up" as their public-facing website, so the question is what the most skilled masters of the new tools end up doing to differentiate themselves. (For any particular digital artist who doesn't want to learn new tools, that's still bad, though... ;) )


This reminds me of how VisiCalc / Lotus 1-2-3 / Microsoft Office made many tasks traditionally done by a dedicated office worker semi-obsolete. A middle or upper manager no longer had to call on a tabulator to calculate figures or make a spreadsheet. Yet for all the power of these applications, the need for actual accountants never went away.


I feel like the number of secretaries and receptionists has collapsed, however, under the weight of everyone learning how to type, office suites, voice menus, and good-enough speech-to-text.

So the concern isn't without merit.


The decline in “secretaries” might be more related to the shrinking of the women’s pay gap and the empowerment of women.


I don't understand the advantage everyone here is seeing. The produced artwork in that Reddit thread is a digital collage. The perspective is mangled, and the end result looks like a GeoCities page and a MySpace page had a Miyazaki-themed baby.

I understand it's a tech demo. The tech just patches holes in the artwork with flat shapes that match a brushstroke style. It's Clone Stamp 2.0. Stable Diffusion doesn't understand how to integrate images together; it doesn't understand how to create images with all the techniques artists use to make a coherent image.

If you took the basic shapes of a Miyazaki film, frame by frame, like a young girl with a pig, or a young girl with a dragon, and fed those shapes into the filter, you wouldn't get the Miyazaki film back out of the algorithm. SD flattens, freezes, and arbitrarily recreates.

This isn't art, this is a frozen heart. There is a sneaking idea among the hearts of our youth that you could capture and recreate all vision in words. It isn't possible.


All art is gonna start looking exactly the same if it's just a collage of generated images.


I wonder if this will happen to frontend and backend developers by the exact same principle: have the bulk of the lifting done by some Stable Diffusion equivalent for the CRUD API/frontend, then test and modify the edge cases.

On your last point about salaries: this will increase salaries for those with a lot of experience, but you are correct that for the vast majority of digital art graduates, demand is going to plummet.

I really fear for the young Generation Z; it appears the bar to entry is rising as these AI tools automate the bulk of what would have been their work.

It's akin to how GitHub Copilot generates a ton of boilerplate work, something that used to be delegated to junior devs.


Yep, it's eating its way upward from junior to intermediate. And even if that doesn't take away the whole senior role, it can certainly do a good chunk of the day-to-day work. We haven't even begun to see the market effects of GitHub Copilot or Stable Diffusion yet.


That demo video is to me further confirmation that adding computers to creative activities sucks the joy out of them and makes them bland, boring and pointless.

Illustration has been reduced to dragging pictures around, using the eraser tool and typing commands in a box: artistic dystopia.

Not only does that not deserve to steal anyone’s job, it should be ejected into space and forever forgotten.


Unfortunately the tech is already here and won’t disappear. Most people will have to become AI-assisted image creators. Like you say, it will make the work joyless and much more precarious. People will be required to create much more output in the same amount of time. It will be more stressful and busy, and the time saved will not go to them but to their bosses.

AI is probably going to do that with many jobs, and while everybody will point out how well the industrial revolution went (we moved to knowledge work), for those who had to stay in the factories it was not so great after all. The craftsmen proud of their work became cogs in a joyless process. Entirely replaceable.

If this is gonna happen with knowledge workers… great. Maybe the Luddites were right.


> This means the market for many digital arts degrees is going to become glutted and salaries are going to crash like a rock.

That's the important part, isn't it? Lots of other people will lose their jobs.

But no, let's talk more about what "real art" is or how art will exist in some form or another because you can't kill an idea.


> Lots of other people will lose their jobs.

I don’t understand what jobs people are talking about.

Making a living as an illustrator comes from a long stream of networking and brand building; that path is reserved for a crazy small minority, and being talented and good at drawing is just the entry ticket to participate in the battle royale.

If you just need a nice illustration, you can already catch any random guy at a drawing board and pay them nothing close to a living salary, more like enough for two lattes at a Starbucks. An AI plugin doesn’t seem to me to change much of that situation.


Downvoted because the sarcastic bent of the comment makes its actual intended meaning somewhat unclear.


Right. With all due respect to illustrators, my observation of the evolution of web design: in the 2000s it started with a graphic designer creating the web pages; THEN Bootstrap was enough, with some small changes from the graphic designer.

We can criticise Stable Diffusion's quality and/or creativity, but in the end what matters is the market and time-to-find. If people find the graphic solution in just one second, they don't need to think further, except for outliers.

Another question is when this will happen for developers. I am not talking only about AI but about more robust frameworks that quickly solve a development project without coding. It could be good frameworks more than AI. Once we reach that point, ideas and business will be the core more than (development) execution.


> This means the market for many digital arts degrees is going to become glutted and salaries are going to crash like a rock.

Art degrees are notoriously undervalued... There are jokes about it going far back in time.

Social media has also done a lot of damage to art-based professions and the opportunities therein over time, as artists are required to endlessly share their work for free in order to promote it and stay relevant... Actually, from what I can see, social media has probably devalued art far beyond what AI could do.

Most of the AI image generation services "sample" real objects from undisclosed sources, and then do most of their work to cover up or alter the sampled object. There will eventually be issues of duplicated works and copyright infringement that will take it all down a notch if you ask me.

Just like in the music industry now, many people are finding samples within the footprints of a lot of works, triggering Content ID issues. When this happens on a much larger scale, the only music of value will come from human artists who can create completely unique works with keen composition that is aware of human wants and needs. If you ask me, AI is still pretty far away from being able to do that right now. From what I can observe, there are still humans behind each "AI-driven tool" actively tweaking the nuances of the AI to make it look more "sentient"... I'll only begin to worry about AI when it runs with all hands off, and when it can update and develop itself (completely on its own). We keep redefining the term, but I don't think it should be co-mingled with technology advancement driven by constant human intervention behind the curtains.

For more context, check out the story of "FN Meka", a massive blunder of a project launched by Capitol Records to introduce a (fake) AI music artist, which melted down for them just this month.


It will most likely cause a "genericization" like "temp music" caused in Hollywood movie music.

See "Every Frame a Painting" for "The Marvel Symphonic Universe": https://www.youtube.com/watch?v=7vfqkvwW2fs


That has already happened with visuals in movies too. People are maybe not so sensitive about it. But at least it's still people creating those.

Spotify now has floods of AI-generated ambient music pretending to be composed by humans. This is coming for visuals now.


These AI models will make an impact in many domains and affect the distribution of job types available in art, legal, software, and other professions.

To prepare for the upcoming major changes, we need to urgently make creativity and innovation much more prominent in curricula across the board. Kids and adults will need to learn to adapt so they can thrive with such AI tools in the market.

Skills such as creativity, collaboration, communication, and innovation using new technologies will be even more crucial to a large number of people in the coming years.

==============

I found this article to be a good discussion of many points involved. The author demonstrates a pretty good understanding of the powers and limitations of these narrow AI models, although I'd say DALL-E-like models do have some level of understanding of how people use objects, based on both their textual and visual training data.

https://betterprogramming.pub/dall-e-2-will-disrupt-art-deep...

"...What these systems have is the meta-ability of learning to paint — anything in any style, already invented or to be invented. This is a crucial difference and it’s what distinguishes DALL·E 2 from any previous disruptive single-purpose technology.

...Artists can escape a camera because it’s static in its abilities. It can take pics and nothing else. ... AI wouldn’t replace just one style, it would replace all styles.

....

The most important one is that it doesn’t understand that the objects it paints are a reference to physical objects in the world. If you ask DALL·E 2 to draw a chair it can do it in all colors, forms, and styles. And still, it doesn’t know we use chairs for sitting down. Art is only meaningful in its relationship with either the external physical world or our internal subjective world — and DALL·E 2 doesn’t have access to the former and lacks the latter.

...

This means it can’t understand the underlying realities in which its paintings are based and generalize those into novel situations, styles, or ideas it has never seen before.

This limitation is key because we humans can do it very easily. ..."


>it can’t understand the underlying realities in which its paintings are based and generalize those into novel situations, styles, or ideas it has never seen before

As long as we keep putting a greater emphasis on the final output than we do on the process that led to the output, this "limitation" won't be a factor.


> And still, it doesn’t know we use chairs for sitting down.

I imagine if I gave it the prompt "chair being used by woman" then a majority of the pics would have someone sitting in it.

I realize your point is more of a "Chinese Room" philosophical objection. But it's important to note the level of sophistication these models have.

I'll also add that Nvidia is currently doing research on 3D-model-based generation, so that the model is aware of the 'skeleton' of the subject before illustrating it.
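
As a back-of-the-envelope way to check the "chair being used by a woman" point empirically, one could generate a small batch and eyeball how many show someone sitting. A minimal sketch, assuming the open Stable Diffusion text-to-image pipeline from the diffusers library as a stand-in (DALL-E 2 itself is API-gated); the model ID and batch size are illustrative:

  # Rough sanity check: does this prompt tend to produce people sitting in chairs?
  import torch
  from diffusers import StableDiffusionPipeline

  pipe = StableDiffusionPipeline.from_pretrained(
      "CompVis/stable-diffusion-v1-4", torch_dtype=torch.float16
  ).to("cuda")

  images = pipe(
      ["chair being used by a woman"] * 8,  # small batch to eyeball
      num_inference_steps=30,
  ).images
  for i, img in enumerate(images):
      img.save(f"chair_{i}.png")

Not a rigorous test of "understanding", but it grounds the claim in something observable rather than intuition.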


I’ve seen some hilarious DALL-E images, like a Razer gaming coffin, that are exactly what you would expect and also not; I would say fairly novel images.


This is always the assumption with a new technology, and it relies on people assuming the absolute minimum about the person's role. It seems you might believe that a creative's role is to sit and generate MS Office-style clip art.

Instead let's take a short journey into the role of an illustrator with an example job.

An author hires an illustrator to produce a series of illustrations for their new children's story. The illustrator reads the story and discusses ideas with the author; they talk through and present styles (or make a new one) that match the feel of the story and the progression of the story arc. They'll craft mood boards and multiple storyboards to best capture the story, then work through revisions with the author and finally proceed to artwork. Their skills extend beyond drawing; their skill is rather in knowing how to craft and present in a visually engaging way. The picture is the end point, but everything else is 80-90% of the job. I know watercolour artists whose stunning visuals take less than 10 minutes to paint; their whole job is about the steps that led to that point.

However, let's get back to the illustrator. The artwork will then be tailored to suit the medium: for a book this may mean moving the imagery away from the spine area or where thumbs typically rest; for digital, this could involve developing layered sub-elements to support animation or parallax, or ensuring that screen size isn't going to ruin the experience, and so on. At this point we've engaged other kinds of creatives to suit the specific technicalities of the output.

I won't go on, but the process does continue from here; it's a highly tailored process and every job is clearly different. AI text-to-image doesn't do any of this. It currently can't even do consistent images (hopefully that will change).

Now we have people on the internet who have never worked in a creative field and haven't really experimented with Stable Diffusion enough to see its obvious limitations, all while telling the world that these people will be unemployable and that their craft will be worthless.

I think my simple example above helps outline that the drawing of a picture is just one part of the role. Here I've used an illustrator as an example, but it could have just as easily been a branding designer with a logo, a character designer, a story-teller, a prototyper and so on and so forth. All of these people ultimately produce a picture, but the "job" is everything that leads up to that point.

AI will merely be a new tool for these people, and they'll be excited to receive it.


In the short term, attorneys at any major commercial enterprise are going to look at Stable Diffusion's license and shudder.

The license is well-intentioned, but the use restrictions are both broad and vague enough that they may add extra, unnecessary liabilities that one doesn't generally expect from a graphic-artist contractor.


Eh, it's a collage using a bunch of work by skilled craftsmen, made by someone skilled in a craft.

OP's point was that the tools get better but still need someone with skills to use them.


Or the power law will get worse and Sayori alone will hog the entire market.



