I agree a lot with the sentiments here, and I think people who want to avoid being filmed should have that right. But as someone who doesn't mind (and is younger), I suppose I could share my rationalization for it (as flawed as it may be).
One often-mentioned reason is the fear that your likeness will somehow end up in something significant or viral. That makes sense; it's the most invasive and significant violation. We "risk becoming the side character in someone else's parasocial relationship", as another commenter put it. I wouldn't want that either, but I derive some comfort from one main observation: virality doesn't scale. A lot of the worry comes from the fact that "everyone is filming now" and "everything is shared now". That's true, but the likelihood of any given clip becoming popular, or even being seen at all, goes down as the volume goes up. That alone is enough for me not to be that worried, at least not by the increased prevalence of public filming and photography.
On the other hand, this does nothing to limit the effects of data harvesting and government espionage, which are worries I do take seriously.
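To make the "virality doesn't scale" intuition a bit more concrete, here is a back-of-envelope sketch; every number in it is invented for illustration, it just shows how a roughly fixed attention budget dilutes each individual clip's odds as upload volume grows:

    # Back-of-envelope sketch: if the "attention budget" for viral clips stays
    # roughly fixed while upload volume grows, any one clip's odds shrink.
    # All numbers are made up for illustration.
    viral_slots_per_day = 100                      # assumed fixed attention budget
    for clips_per_day in (1_000_000, 10_000_000, 100_000_000):
        odds = viral_slots_per_day / clips_per_day
        print(f"{clips_per_day:>11,} clips/day -> ~{odds:.6%} chance any given clip breaks out")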
It's interesting that you mentioned being younger. One thing I've noticed is that as people accumulate different experiences and social groups (not necessarily just because of age), they often develop different "personas" depending on the environment. In one setting, you might be an enthusiast sharing a video about a hobby, while in another you might be a CEO interacting with your team, shareholders, partners, or customers where you naturally behave differently. The challenge is managing these "many worlds" without them colliding. One solution that's becoming more feasible now is the ability to modify your appearance and voice depending on the context.
I've faced this problem with almost every task in my life, from the creative stuff already mentioned to less obvious things, like socializing (seeing what you said wrong without knowing how you could have said it better). Because of this, the only things I have been able to bear "practicing" are ones outside of any public view, ones where my taste was nonexistent. Code is one of them. We don't see much code (good or bad) in public, and so it's one of the few areas where my taste could only improve after I had produced the work and could see its failures, rather than during.
I think both of these systems can, should, and do coexist. When doing research or development, precision is more important than immediate graspability. In general conversation, we operate on rules of thumb, and appearances really do sometimes matter more than rigor. Colloquial speech then takes precedence, even if it is imprecise.
This article is nice because it is both interesting in the purely rigorous, phylogenetic sense and highlights this divide between precise definitions and the words we find useful (most of all in that catchy title!).
Yes and no. People also exhibit these biases, but because degree matters, and because we have no other choice, we still trust them most of the time. That is to say, bias isn't always completely invalidating. I wrote a slightly satirical piece, "People are just as bad as my LLMs", here: https://wilsoniumite.com/2025/03/10/people-are-just-as-bad-a...
Personally, I have been very pleased with the results despite the limitations.
Like many here (I suspect), I have had several users comment that the AI processes I have defined have made a meaningful impact on their daily lives, often saving them double-digit hours of effort per week. Progress.
> [...] as something that happened to Facebook/Meta rather than something driven by Facebook/Meta to satisfy Wall Street. Social media did not naturally evolve into what it is today:
As soon as you have any platform that says "hey, you there with an email address, you can put content on here that can be seen by anyone in the world", you will slowly end up with a scene that looks like all the sites we have now. Advertisers and influencers will show up, invited or not. There are only two options to avoid this:
1. Aggressively tune your algorithm against pure engagement and toward proximity.
2. Explicitly disallow broad reach of content.
And when I say aggressively, I really mean it. If people can "follow" others unilaterally, even showing only "followed" content will still lead to most people seeing mostly high-engagement posts rather than their friends' posts.
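As a rough, purely illustrative sketch of why the tuning has to be so aggressive (the scoring formula, field names, and numbers are all invented, not any real platform's code):

    # Hypothetical feed-ranking sketch. Viral engagement values can dwarf anything
    # a friend posts, so only a very large proximity weight keeps friends on top.
    from dataclasses import dataclass

    @dataclass
    class Post:
        author: str
        engagement: float   # normalized likes/shares, can be huge for viral posts
        proximity: float    # 1.0 = close friend, 0.0 = stranger you happen to follow

    def score(post: Post, proximity_weight: float) -> float:
        return post.engagement + proximity_weight * post.proximity

    posts = [
        Post("viral_stranger", engagement=50.0, proximity=0.0),
        Post("close_friend", engagement=0.5, proximity=1.0),
    ]

    for w in (1.0, 10.0, 100.0):
        top = max(posts, key=lambda p: score(p, w))
        print(f"proximity_weight={w:>5}: top post is {top.author}")

Only at the largest weight does the friend's post outrank the viral one, which is what "aggressively tune toward proximity" ends up meaning in practice.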
At what point (degree of intervention) does something go from "natural" to "driven"? It's a hard question, but one thing's for sure: a Facebook that didn't allow high-engagement content would already be dead.
I wrote about this a bit on my blog [1], but to add to this idea:
> Maybe prostitution to the owners will be the only available job.
Pre-industrial revolution, labor was in a sense cheaper than it was during the industrial revolution, and one genuinely interesting set of professions that existed then was domestic service: butlers, maids, cooks, under-butlers, and so on. Would it be inconceivable to see the return of these kinds of jobs, ones where the value of human labor is so low that working for someone wealthy in their own home is common again?
I suppose that house servants performed two functions: menial tasks and emotional labour. Competence at both was essential. A house servant who was great at making beds but was super annoying or had no emotional intelligence probably didn't last very long.
The menial tasks component will probably be automated away. Which leaves the emotional labour. Put another way, our job will be to be liked by the people with power. Liked enough to be kept around.
The only job left in the future is that of the pet.
I feel like there is a big risk here, which others have already mentioned; in fact, this risk has already been realized w.r.t. npm packages: https://news.ycombinator.com/item?id=41178258
While some mistakes are probably inevitable (as happened with the Tea protocol), sourcing a wide range of metrics from multiple sources, fixing bugs, and building reasonable guardrails can prevent them from repeating.
For instance, given that my algo-donating aims to support the global OSS supply chain (not to distribute any crypto tokens like Tea did), it could potentially even focus only on "old" repos. They carry higher maintenance-related risks, but it would take years for anyone to game that kind of target area for donations.
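To make that concrete, here is a minimal sketch of what an age-plus-multiple-sources guardrail could look like; the field names, thresholds, and data are all assumptions for illustration, not the actual system:

    # Hypothetical guardrail sketch for choosing donation targets: only repos
    # with years of history and real usage signals from more than one source pass.
    from dataclasses import dataclass
    from datetime import datetime, timezone

    @dataclass
    class Repo:
        name: str
        created_at: datetime
        metrics: dict          # e.g. {"downloads": ..., "dependents": ...} from independent sources

    MIN_AGE_YEARS = 5          # "old" repos: faking this history would take years
    MIN_AGREEING_SOURCES = 2   # require more than one metric source to show real usage

    def eligible(repo: Repo, now: datetime) -> bool:
        age_years = (now - repo.created_at).days / 365.25
        agreeing = sum(1 for value in repo.metrics.values() if value > 0)
        return age_years >= MIN_AGE_YEARS and agreeing >= MIN_AGREEING_SOURCES

    now = datetime.now(timezone.utc)
    candidates = [
        Repo("long-lived-lib", datetime(2014, 3, 1, tzinfo=timezone.utc),
             {"downloads": 1_000_000, "dependents": 1200}),
        Repo("fresh-lookalike", datetime(2025, 6, 1, tzinfo=timezone.utc),
             {"downloads": 500_000, "dependents": 0}),
    ]
    print([r.name for r in candidates if eligible(r, now)])   # -> ['long-lived-lib']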