UBI isn't about making financial sense; it's about keeping the last traces of society duct-taped together before it all collapses. Remove all pathways to a middle-class life and you're left with a populace on the precipice of violent revolt.
What I mean is that it simply would not work. The math doesn't add up. It would lead directly to Weimar levels of hyperinflation, which is a far worse outcome.
Depends on how you define it, I suppose. The State of Alaska has been providing a universal basic income to its residents for almost half a century now, and that seems to work just fine.
Not enough to live on. In 2022, the payment was $3,284 per eligible resident, and the 2023 payment was $1,312. I could not easily find the 2024 figure. This is paid once a year, not monthly.
Also - importantly - it is not a payment funded by income-tax revenue. It is a distribution of proceeds from a productive business.
Think of it this way: the entire world pays Alaska residents for the use of their oil, as a sort of tax that is worked into every energy intensive step of industry or petroleum-derived material.
Alaska's oil is only ~1% of world oil production, but its population is approximately 0.01% of world population, so Alaska residents get approximately 100x what the global per-capita oil dividend would be. The oil industry is approximately 2.5% of global GDP. Stack these multipliers together and we could expect a global per-capita dividend on total GDP of between roughly $525 and $1,325 per person per year. Exceeding this (as we did with PPP "loans" during COVID) would have compounding economic effects that lead to hyperinflation.
This is napkin math with spherical-cow assumptions, and other factors would further limit UBI dividends below this. But it shows that with existing national dividend systems as a model, we can't even get within an order of magnitude of the low end of what UBI proponents are advocating.
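Spelled out, the arithmetic looks like this - a minimal sketch using only the rough figures above, so treat the outputs as order-of-magnitude estimates:

```python
# Rough figures quoted above; every input is an approximation.
pfd_payments = [1312, 3284]    # Alaska PFD, $/resident/year (2023, 2022)
alaska_oil_share = 0.01        # Alaska: ~1% of world oil production
alaska_pop_share = 0.0001      # Alaska: ~0.01% of world population
oil_share_of_gdp = 0.025       # oil industry: ~2.5% of global GDP

overweight = alaska_oil_share / alaska_pop_share   # ~100x per-capita oil exposure
for pfd in pfd_payments:
    world_oil_dividend = pfd / overweight          # global per-capita oil dividend
    world_gdp_dividend = world_oil_dividend / oil_share_of_gdp  # scale oil -> all of GDP
    print(f"${world_gdp_dividend:,.0f} per person per year")
# -> $525 and $1,314: roughly the $525-$1,325 range cited above
```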
I'm familiar with it. My point is that one needn't allow the perfect to be the enemy of the good: a UBI does not have to offer a full living income to be worth doing.
I'm an artist who's always struggled to learn how to code. I can pick up computer science concepts, but when I try to sit down and write actual code, my brain just pretends it doesn't exist.
Over like 20 years, despite numerous attempts, I could never get past a few beginner exercises. I viscerally can't stand the headspace that coding puts me in.
Last night I managed to build a custom CDN to deliver cool fonts to my site a la Google Fonts, and create a gorgeous site with custom injected CSS and JavaScript (while grokking most of it), and the best part ... it was FUN! I have never remotely done anything like that in my entire life, and with ChatGPT's help I managed to do it in like 3 hours. It's bonkers.
AI is truly what you make of it, and I think it’s an incredible tool that allows you to learn things in a way that fits how your brain works.
I think schools should have a curriculum that teaches people how to use AI effectively. It's truly a force multiplier for creativity.
This is actually what I'm most excited about: in the reasonably near future, productivity will be related to who is most creative and who has the most interesting problems rather than who's spent the most hours behind a specific toolchain/compiler/language. Solutions to practical problems won't be required to go through a layer of software engineer. It's going to be amazing, and I'm going to be without a job.
> productivity will be related to who is most creative and who has the most interesting problems rather than who's spent the most hours behind a specific toolchain/compiler/language.
Why stop at software? AI will do this to pretty much every discipline and artform, from music and painting, to law and medicine. Learning, mastery, expertise, and craftsmanship are obsolete; there's no need to expend 10,000 hours developing a skill when the AI has already spent billions of hours in the cloud training in its hyperbolic time chamber. Academia and advanced degrees are worthless; you can compress four years of study into a prompt the size of a tweet.
The idea guy will become the most important role in the coming aeon of AI.
Also, since none of us will have any expertise at all anymore, everything our AI makes will look great. No more “experts” pooping our parties. It’s gonna be awesome!
Why would you be out of a job? Nothing he described is something that someone is being paid to do. Look at everything he needs just to match a fraction of your power.
Consumer apps may see fewer sales as people opt to just clone an app using AI for their own personal use, customized to their preferences.
But there’s a lot of engineering being done out there that people don’t even know exists, and that has to be done by people who know exactly what they’re doing, not just weekend warriors shouting stuff at an LLM.
> But there’s a lot of engineering being done out there that people don’t even know exists, and that has to be done by people who know exactly what they’re doing, not just weekend warriors shouting stuff at an LLM.
Except that the expectations of them are going to be higher and higher, possibly with downward pressure on compensation. I guess it'll find an equilibrium somewhere eventually.
So your understanding is that the chief value a software engineer provides is experience utilizing a specific toolchain/compiler/language to generate code. Is that correct?
I think much of HN has a blind spot that prevents them from engaging with the facts.
Yes, AI currently has limitations and isn't a panacea for cognitive tasks. But in many specific use cases it is enormously useful, and the rapid growth of ChatGPT, AI startups, etc. is evidence of that. Many will argue that it's all fake, that it's all artificial hype to prop up VC valuations, etc. They literally will see the billions in revenue as not real, same with all the real people upskilled via LLMs in ways that are entirely unique to the utility of AI.
I would trust many people's evaluations of the impacts of AI if they could at least engage with reality first.
Well, if you use LLMs to write a book and claim to be a writer, that is crossing a line. A writer doesn't just write the book; they do a lot of thinking about the concepts of the book, and their writing style is somewhat unique to themselves. It's deceptive to pretend you're not using AI when you are. Using it is fine, as long as you disclose it.
To me, the progress achieved so far has been overhyped in many respects. The numbers out of Google that 25% of the code being generated is AI, or some similarly high figure? BS. It's gamified statistics that look at code completions (not AI trying to solve a problem) vs. what's accepted, and it's likely hyperinflated even then.
It works better than you at UI prototypes when you don't know how to do UI (and maybe faster even if you do). It doesn't work at all on problems it hasn't seen. I literally just saw a coworker staring at code for hours, getting completely off track trying to correct AI output instead of stepping through the problem step by step the way we thought the algorithm should work.
There's a very real difference between where it could be in the future and what you can usefully do with it today, and you have to be very careful to use it correctly. If you don't know what you're doing and AI helps you get it done, cool; but keep in mind that you also won't know whether it has catastrophic bugs, because you don't understand the problem and the conceptual shape of the solution well enough to know if what it did is correct. For most people there's not much difference, but for those of us who care, it's a huge problem.
I'm not sure if this post is ragebait or not but I'll bite...
If anything, HN is in general very much on the LLM hype train. The contrarian takes tend to be from more experienced folks working on difficult problems that very much see the fundamental flaws in how we're talking about AI.
> Many will argue that it's all fake, that it's all artificial hype to prop up VC valuations, etc. They literally will see the billions in revenue as not real
That's not what people are saying. They're noting that revenue is meaningless without looking at cost. And it's true: investor money is propping up extremely costly ventures in AI. These services operate at a substantial loss. The only way they can hope to survive is the promise of future pricing power: that they can one day (the proverbial next week) replace human labor.
> same with all the real people upskilled via LLMs in ways that are entirely unique to the utility of AI.
Again, no one really denies that LLMs can be useful in learning.
This all feels like a strawman; it's important to approach these topics with nuance.
I was talking to a friend today about where AI would actually be useful in my personal life, but it would require much higher reliability.
This is very basic stuff, not rewriting a codebase, creating a video game from text prompt or generating imagery.
Simply put, I would like to be able to verbally prompt my phone with something like "make sure the lights and AC are set so I'll be comfortable when I get home, follow up with that plumber if they haven't gotten back to us, place my usual grocery order plus some berries plus anything my wife put on our shared grocery list, and schedule a haircut for the end of next week, some time after 5pm".
Basically 15-30 minutes of stupid daily personal time-sucks that can all be accomplished via smartphone.
Given the promise of IoT, smart home, LLMs, voice assistants, etc.. this should be possible.
This would require giving it access to my calendar and location, the ability to navigate apps on my phone, read and send email and texts, and spend money. Given the current state of the tools, if there is even a 0.1% chance it changes my contact card photo to Hitler, replies to an email from my boss with an insult, purchases $100,000 in bananas, or sets the thermostats to 99F, then I couldn't imagine giving an LLM access to all of those things.
Are we 3 months, 5 years, or never away from that being achievable? These feel like the kind of things previous voice assistants promised 10 years ago.
because the preachers preach how amazing it is on their greenfield 'i built a todo list app in 5 minutes from scratch' and then you use it on an established codebase with a bigger context than the llm could ever possibly consume and spend 5x more time debugging the slop than it would've taken you to do the task yourself, and you become jaded
stop underestimating the amount of internalized knowledge people can have about projects in the real world, it's so annoying.
an llm can't ever possibly get close to it. there's some guy in a team in another building who knows why a certain weird piece of critical business logic was put there 6 years ago, the llm will never know this and won't understand this even if it consumed the whole repository because it would have to work there for years to understand how the business works
A completely non-technical saleslady on our team prototyped a whole JS web app that generated data based on user inputs (and even generated PDFs), which solved a problem our customers were having and our devs hadn't had time to address yet.
This obviously was a temporary tool we'd never let touch our GitHub repo, but it still very much worked and solved a niche problem. It even looked like our app, because the LLM could consume screenshots to copy our designs.
I'm on board with vibe coding = non-maintainable, non-tested, mostly useless code by non-devs. But on the plus side, it will lead many, many people to learn basic programming and fill many tiny gaps not solved by bigger, more serious pieces of code. Especially once people start building infrastructure and tooling around these non-devs, like hosting, deployment, webhook integrations, etc.
Do people actually learn when using these tools though? I mean, I’m sure they can be used to learn, just like TikTok could be used to read John Stuart Mill. But I doubt that’s what it’s going to be used for in real life.
If the barrier to entry is lower then more people will engage with it. Everything in life is about incentives. This is a hugely powerful tool for people working in the information industry, which is most people with office jobs. A sales person who can overcome a simple customer objection without a major time investment with devs is a sales person who makes more $$ and gets more promotions.
Most people in practice won't, they'll stick to what they know, but there's tons of semi-nerds on the edges who are going to flourish in the next decade. Which is great news for the economy.
Engaging with it is different than "learning" though, that's specifically what I was talking about. LLMs seem to be interesting because they're a technology that doesn't encourage you to learn. I know people who talk to ChatGPT, copy code, run it, paste errors into ChatGPT, copy output, run it, etc. They're not really learning anything, they're a glorified console through which ChatGPT can interact with their machine. I'm not saying that that's exactly what happened in your story. I just think that learning will be the exception, not the rule.
But that's not good. You don't want Bob to be the gatekeeper for why a process is the way it is.
In my experience working with agents helps eliminate that crap, because you have to bring the agent along as it reads your code (or process or whatever) for it to be effective. Just like human co-workers need to be brought along, so it’s not all on poor Bob.
Totally. Especially when I'm debugging something for colleagues or friends: given a domain and a handful of ways of doing it, if I'm familiar with it, I generally already have a sense of why it's failing or falling short. This has nothing to do with the codebase, any given language notwithstanding. It comes from years and decades of exposure to systems and their idiosyncratic behaviors, and examples that strangely rear their heads in notable ways.
These notable ways may not be commonly known or put into words, but they persist nevertheless.
This post is about a specific, complex system that stretches from operating system to the physical world, as well as some philosophical problems.
What you're describing is a dead simple hobby project that could be completed by a complete novice in less than a week before the advent of LLMs.
It's like saying "I'm absolutely blown away by microwaves, I can have a meal hot and ready in just a few minutes with no effort or understanding. I think all culinary schools should have a curriculum that teaches people how to use microwaves effectively."
Maybe the goal of education should be giving people a foundation that they can build on, not making them an expert in something with a low skill ceiling and diminishing returns.
I'm actually a huge fan of Bret Victor and I felt like he's kinda missing the dynamic, adaptable nature of AI that allows non-technical people like me to finally access the system layer of computation for our creative ends.
In other words, in many ways, AI (or rather LLMs) is the very thing that Bret Victor has spent his whole career imagining and creating: a computing interface that closes the gap between human imagination and creation. But here, he's focusing on the negatives while neglecting, IMHO, the vast potential of AI to allow people to connect, create, and express themselves. As in truly having a PERSONAL computer.
At Dynamicland, he was attempting to build a system that non-technical people like me can interface with in a way that makes sense to us.
Taking your unnecessarily disparaging microwave analogy: using ChatGPT, I can understand it, reprogram it, and do fun stuff, like, I don't know, set up a basketball hoop that sets the timer based on how many shots I make, despite having limited or no technical background. I can tell ChatGPT my crazy vision, and it will give me a step-by-step approach, with proper resources, and respond in a way that I can grok to build this thing.
THIS is why I'm awestruck.
My anecdote is just my personal reaction to the post. Besides, what’s wrong with people expressing themselves freely here?
> I'm actually a huge fan of Bret Victor and I felt like he's kinda missing the dynamic, adaptable nature of AI that allows non-technical people like me to finally access the system layer of computation for our creative ends.
To "grok" something is to understand it on a deep, fundamental level. Following a checklist from an LLM and the thing you're doing eventually working isn't grokking.
To be clear, I'm very glad that you and others can throw together new projects. Your excitement seems genuine, and more excitement in the world is good. And perhaps you'll be one of the minuscule minority who will use LLMs to really get to a deeper level of understanding of new things.
But I wonder if your excitement may be misleading you here and making it harder for you to grok Bret Victor's post, on any level. I don't think Victor is interested in computing in the way you think he is. There's a world of difference between being able to cobble a web project together and the kinds of philosophical shifts a project like Dynamicland is proposing and enacting.
In the interest of people expressing themselves freely, I'd go so far as to say it's particularly surprising to read all this from "an artist". There was a time when being an artist implied the person had reflected and read and thought about larger perspectives across a range of subjects - philosophy, science, religion, etc.
Here, in this instance, I can't help feeling there's some crunchy irony in the fact that a deeply radical (scientifically, artistically, technologically, socially) project like Dynamicland is met by an artist excited to be able to plug web services into each other, claiming, from the very heart of the cultural slop wars, that the Dynamicland people might be confused and that maybe LLMs are the real answer.
Respectfully, consider that maybe the perspective from which they're viewing the problem is simply much deeper than what you've been able to grasp so far. I don't mean it disparagingly or cynically, in fact it's great news, you've vistas to explore here!
I suggest reading more from Dynamicland directly, and Bret's website too, plus a few of Bret's talks; Alan Kay is very good. There's tons of stuff if you get into it. Don't neglect the history of computing; it's full of amazing ideas.
I'm following my own advice there at the end and browsing around a bit more, and one book dynamicland links to on their bookshelf (https://dynamicland.org/2024/Roots/) is "Tools for Thought" (https://www.rheingold.com/texts/tft/) which has this amazing blurb which reminded me of this thread:
> Tools for Thought is an exercise in retrospective futurism; that is, I wrote it in the early 1980s, attempting to look at what the mid 1990s would be like. My odyssey started when I discovered Xerox PARC and Doug Engelbart and realized that all the journalists who had descended upon Silicon Valley were missing the real story. Yes, the tales of teenagers inventing new industries in their garages were good stories. But the idea of the personal computer did not spring full-blown from the mind of Steve Jobs. Indeed, the idea that people could use computers to amplify thought and communication, as tools for intellectual work and social activity, was not an invention of the mainstream computer industry nor orthodox computer science, nor even homebrew computerists. If it wasn't for people like J.C.R. Licklider, Doug Engelbart, Bob Taylor, Alan Kay, it wouldn't have happened. But their work was rooted in older, equally eccentric, equally visionary, work, so I went back to piece together how Boole and Babbage and Turing and von Neumann — especially von Neumann — created the foundations that the later toolbuilders stood upon to create the future we live in today. You can't understand where mind-amplifying technology is going unless you understand where it came from.
This was the most surprising/disturbing/enlightening part of the post, IMO. Surprising: this person literally had no clue! Disturbing: this person literally had no clue? Enlightening: this person literally did not need a clue.
My takeaway as an AI skeptic is AI as human augmentation may really have potential?
I feel like AI makes learning way more accessible; at least it did for me, evoking a childlike sense of curiosity and joy for learning new things.
I'm also working on a Trading Card Game, where I feed it my drawings and it renders them into a final polished form based on a visual style that I spent some time building in ChatGPT. It's like an amplifier/accelerator.
I feel like, yes, it can augment us, but at the end of the day it depends on our desire to grow and learn. Otherwise, you will end up with the same result as everybody else.
I enjoy the fun attitude, I had a family member state something similar, but I always warn: with powerful AI comes serious consequences. Are we ready for those consequences? How far do we want AI to reach into our lives and livelihood?
We managed to survive the nuke and environmental lead (two examples of humanity veering in drastically wrong directions).
We are never ready for seismic changes. But we will have to adapt one way or another; might as well find a good use for it and develop awareness, the way a child would around handling knives.
We cannot predict what the consequences will be, but as a species we are pretty good at navigating upheavals and opportunities. There are no guarantees that human ingenuity will always save the day, but evolution has bestowed us with risk-taking and curiosity, so we won't stop.
No, we are not good at that. We have dire warnings of extreme catastrophes heading our way for decades now and instead of fixing what is broken, we collectively decide to race to our extinction faster.
Modern humans have been around and successful for tens of thousands of years. We might be a genetic dead end in the relatively near term, but I bet we, and at least one branch of our descendants, will live for many more tens or hundreds of thousands of years.
Makes sense. I know what I built is nowhere near actual software development. Still I was able to quickly learn how things work through GPT.
Since I've literally been working on this project for two days, here's a somewhat related answer to your question: I've been using ChatGPT to build art for the TCG. Initially I was resistant, and upset that AI companies were hoovering up people's work wholesale for training data (which is why I think now is an excellent time to have a serious conversation about UBI, but I digress).
But I finally realized that I could develop my own distinctive 3D visual style by feeding GPT my drawings and having it iterate in interesting directions. It's fun to refine the style by having GPT simulate an actual camera lens and lighting setup.
But yes, I've used AI to make numerous stylistic tweaks to my site, including building out a tagging system that allows me to customize the look of individual pages when I write a post.
Hope I’ll be able to learn how to build an actual complex app one day, or games.
Genuine question, how would you feel about reading the dual to your comment here?
"I'm a computer scientist who's always struggled to learn how to paint." "Last night I managed to create a gorgeous illustration with Stable Diffusion, and best part ... it was FUN!" "Art hasn't felt this fun for a long time."
Maybe CDN isn’t the right term after all, see I’m not a software engineer!
But basically, I wanted a way to have a custom repository of fonts a la Google Fonts (I found their selection kinda boring) that I could pull from.
I ran the fonts through Transfonter to convert them to .woff2, set up a GitHub repository (which is not designed for people like me), spun up an instance on Netlify, then wrote custom CSS for my ghost.org site.
The thing that amazes me is that, aside from my vague whiff of GitHub, I had absolutely no idea how to do this. Zilch. Nada. ChatGPT gave me a clear step-by-step plan and exposed me to Netlify, how to write CSS injections, and how ghost.org tagging works from the styling side of things. And I'm able to have a back-and-forth dialogue with it, not only to figure out how to do it, but to understand how it works.
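For anyone curious, the end result of that kind of setup boils down to a couple of CSS rules like these - a minimal sketch, where the URL and font name are placeholders rather than my actual setup:

```css
/* Load a self-hosted .woff2 font from a static host (e.g. a Netlify site),
   then apply it site-wide. */
@font-face {
  font-family: "MyCustomFace";   /* placeholder name */
  src: url("https://my-fonts.netlify.app/mycustomface.woff2") format("woff2");
  font-display: swap;            /* show fallback text while the font loads */
}

body {
  font-family: "MyCustomFace", sans-serif;   /* fall back to a system font */
}
```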
Sounds more like a Continuous Integration / Continuous Deployment (CI/CD) pipeline - defined as a set of practices that automate the process of building, testing and deploying software. Or rather, fonts in this case.
A Content Delivery Network (CDN) is a collection of geographically scattered servers that speeds up delivery of web content by being closer to users. Most video/image services use CDNs to efficiently serve up content to users around the world. For example, someone watching Netflix in California will connect to a different server than someone watching the same show in London.
Yes, it's an ever patient teacher that's willing to chop up any subject matter into bits that are just the correct shape and size for every brain, for as long and as deep as you're willing to go. That's definitely one effective way to use it.
Nice! That's what we all want, but without the content reappropriation, surveillance, and public-discourse meddling.
Think of a version that is even more fun, won't teach your kids wrong stuff, won't need a datacenter full of expensive chips, and won't hit the news with sensationalist headlines.
Nothing to be embarrassed about, different strokes for different folks.
However, Monty Python was far from lazy. They cleverly deconstructed the repressive British culture of the time, mocking class and authority, uptight educational institutions, pointless bureaucracies, religious hypocrisy, and the violent glorification of British history.
Their humor can be pretty crass by today’s standards, but if you approach their work as an absurdist, subversive satire, they’re one of the best that have ever done it.
I’ve always found their underlying message to be “don’t take things too seriously and enjoy life.”
As for what's funny, they're just absurd. A flying bunny that rips knights' heads off, an accidental messiah whose basic common sense is interpreted by the masses as a direct edict from God, a brilliant deconstruction of bullshit bureaucracy in the form of the Ministry of Silly Walks. Things like that.
But if that doesn’t tickle your fancy, that’s okay too.
As somebody who has been in and out of mental hospitals, this is very familiar.
At one point in my early twenties, I was literally thinking about killing myself every single waking second.
Finally I decided to throw in the towel, and got around to planning my end.
Sat there, ran through various scenarios in my head. All the way up to putting the gun in my mouth and pulling the trigger.
My imagination always terminated in blackness. Nothingness.
Then I realized I didn’t want to die, but didn’t want to live this way. And promised myself that I would do everything in my power to heal.
It took me about 15 years to get where I am now. Stable and mostly at peace with myself, but it was an arduous and painful journey - with many terrifying moments where I couldn’t see a way out.
I’m incredibly grateful that I managed to figure it out and I wouldn’t wish that agony on anybody.
I'm really glad to hear you made it through such a challenging time in your life. Your strength and determination are incredibly inspiring. I'm curious to know more about your healing process. Did you work on identifying and addressing the root causes of your pain, perhaps through therapy or counseling? Sometimes, personal issues can stem from our familial relationships. Did you find that to be the case in your journey? I'm interested to learn more about how you managed to overcome your struggles and achieve the peace you have today.
It was a long winding journey. Generational trauma is definitely a real thing.
I tried a lot of things, from therapy to drugs to sex clubs, was grasping at straws, trying to understand what was happening.
It was a complicated and personal journey, but my biggest breakthrough occurred when I surrendered to the healing powers of my body and spirit.
I had to just stop and do nothing, and allow all the hurt, shame, and fear to come up to be processed. My ego fought like hell to keep them suppressed.
My mind was a storm of self-abuse, thoughtforms that told me that I was worthless and to be destroyed. I had to learn how to accept them and return to my heart (my emotional center) and just practice sitting with my feelings. Over time these thoughts dissipated, and I saw that they were driven by unrecognized emotional energy and signals from my body.
I was fortunate that I managed to create a situation that allowed me to lie fallow for a long time to heal.
There were many layers, but the crux of my issues was being severed from my authentic self. This happened because I grew up in a household with a lot of emotional abuse and coercive control, to the point where I suppressed myself and created a persona to survive.
If you read accounts of cult-abuse survivors, it's like their real self is always trying to break through, but they have been trained or misled into ignoring themselves.
My experience was very similar.
I could write more about this, but pivotal points were recognizing that I had agency, learning how to trust my body and emotions over my ego, and understanding that at our core, we are love.
Had to learn how to love myself unconditionally basically; a constant practice.
And a big part of the self-love journey was learning how to truly take care of myself. How to eat properly, how to rest, cutting out toxic people from my life, becoming clear with my boundaries, forgiving myself, accepting my pace of progress, and so on.
Once I recognized the pain in my heart, I just kept returning to it, and over time that guided me towards the truth.
The part that resonates with me is the need to see a third alternative between death and the status quo. It is easy to feel trapped and without agency, and this leads to a downward spiral. In my experience and opinion, the key to finding a way out is reclaiming agency and belief that you have the power to change. Willpower is a tricky thing. If you think you have it you do, and if you don't think you have it you don't. It is largely a mystery to me how people go from one to the other.
In my case, one thing that helped is realizing I had choices and wasn't as trapped as I felt. I could leave my family, and my job if I wanted. I could always quit it all and live in a shack somewhere reading books. Just knowing I had the power of choice and wasn't trapped gave me the strength to make progress.
I now think trained helplessness and philosophies that view humans only as a product of their environment are extremely dangerous.
I'm also curious about this, but unfortunately I think that's where it will end for me in practice. Even if xylitol is effective, it seems the xylitol you'll get in any products (at least in the US) is going to be an industrial byproduct from sketchy, unregulated sources, just with greenwashed packaging. If there is a gum or rinse out there that is transparent and credible about its source of xylitol, and it's a source you can trust with your health, I'd love to try it out.
(I admit that you can probably say the same thing about any toothpaste you can buy in the US. But those have at least some additional benefits from regulation.)
The initial data on xylitol looks promising for reducing cavities! Particularly for stimulating saliva production (dry mouth is a large contributor to cavities risk) and as a sugar substitute that cavity-causing species can't digest into acid. I'm excited to see more research come out and compare it with our data.
My main problem is that, at this point, the value of the entire collective creative output of humanity should go to the living, not a select few.
IMHO, AI companies should pay into some kind of UBI fund / sovereign wealth fund.
Time for capitalism to evolve, yo!