I am infuriated every time someone mentions the threat AI poses. The media and others rant about some existential crisis mankind faces: a superintelligent machine, a killer robot, or some other ridiculous nonsense.
The truth is that AI models, right now, are amplifying sensational and provocative content across our shared information sphere. AI chooses what information we get when we search on the internet or look at our social media feeds. Models trained from data collected from social media can be used to manipulate public opinion. Any data derived from contentious social interactions across platforms can be employed in psychological operations against other groups by hostile actors to generate strife in a select population.
These threats are very real, and people are suffering from them today.
All the rest of the AI fear mongering is such a distraction for the general populace that it almost functions as a straw-man threat or controlled opposition crammed into a headline:
"Should we be worried about the military's new Skynet project? Experts say, 'No'."
But when someone tries to explain the threat posed by something as mundane as Facebook, there is general disbelief that it could be a catalyst for something like genocide.
We should be reminded of this every time this subject comes up.
But, to answer your question:
No position which requires depth or a broad scope of knowledge across fields will be at risk. You cannot replace a software engineer with AI. Artists will experience a shift in the market, and they stand to lose if copyright cannot protect their work from being assimilated into training sets without permission. There will also be new positions opening in different industries to employ, train, and maintain these newer generative systems. Artists I know have already been working with generative models like Stable Diffusion because they find it intriguing. Overall, it will not be catastrophic.