At the same time, there is nothing stopping him from handing the government the difference between what he has paid in the past and what he would have paid with the typical tax rates he's talking about. Talk is cheap.
It depends on your objective. Is your objective to take money from billionaires? Then, sure, he could just give it.
Or is your objective to fund the federal government? Mitt Romney doesn't have that much money. (All the billionaires together don't, at least not for long. But they go much further than Romney alone would.)
I love writing with a dash—it packs a punch! On Writing Well by William Zinsser, who taught writing at Yale and Columbia, says this about the dash:
Somehow this invaluable tool is widely regarded as not quite proper—a bumpkin at the genteel dinner table of good English. But it has full membership and will get you out of many tight corners. The dash is used in two ways. One is to amplify or justify in the second part of the sentence a thought you stated in the first part. "We decided to keep going—it was only 100 miles more and we could get there in time for dinner." By its very shape the dash pushes the sentence ahead and explains why they decided to keep going. The other use involves two dashes, which set apart a parenthetical thought within a longer sentence. "She told me to get in the car—she had been after me all summer to have a haircut—and we drove silently into town." An explanatory detail that might otherwise have required a separate sentence is dispatched along the way.
Love this quote. In my youth, I attempted several times to use () for a thought within a larger sentence. My writing teacher at the time hated it! I never knew the correct way to do that or how to articulate it. "A parenthetical thought within a longer sentence" — I was so close!
This sounds like a comment from someone who doesn't have visibility into how good the models are getting and how close they are to fully autonomous, production-grade software development.
This is an easy theory to test: if AI were anywhere close to a senior engineer, we'd see the cost of software development drop by a corresponding amount, or quality going up. Not to mention delivery would become faster. With LLMs accessible to the general public, I'd also expect to see this in the open-source world.
I see none of that happening - software quality is actually in freefall (though AI is not to blame here; this began even before the LLM era), delivery doesn't seem to be any faster (not a surprise - writing code has basically never been the bottleneck, and the push to shove AI everywhere probably slows delivery across the board), nor cheaper (all the money spent on misguided AI initiatives actually adds cost).
It is a super easy bet to take with money - software development is still a big industry, and if you legitimately believe AI can do 90% of a senior engineer's work, you can start a consultancy, undercut everyone else, and pocket the difference. I haven't heard of any long-term success stories with this approach so far.
TL;DR: Code is the easy part, and at least in the last few years it was rarely the bottleneck, so even if we get rid of coding we don't suddenly deliver infinite software. The "what to build" usually takes longer than building it. Output will only go up where coding was holding things up or was the main portion of time spent delivering software (hint: in my experience it usually isn't even 20% of delivery time). There are many other stages to the SDLC, and lots of processes even before then for large-scale systems.
On your point about a consultancy: many software dev consultancies will see their work dry up. There won't be success as you describe - after all, if your consultancy can do it, so can an LLM, so why do I need you as the middleman? Just get Claude/Gemini/etc. to do it for small things; you are already seeing this effect in graphic design, copywriting, and other small creative skills. For large things with real complexity and judgement you need domain experts, guardrails, and other non-coding roles again - that slows things down considerably, but it's still better to be in those jobs than in anything requiring intelligence now.
As a result, coding could easily be automated entirely and we may only see, for example, a 20% increase in total "large" software velocity. As I mentioned in another comment, it will be the people in the chain who produce little value but are required for other reasons (e.g. compliance, due diligence, sales, consultants, etc.) who remain and become the bottleneck. The people techies thought offered little value, made up inefficiencies, and didn't contribute at all - they get the last laugh in the end, and they have AI to thank for it.
Personally, in my team I know we are seeing significant improvement, to the point where hiring is no longer considered; I'm worried about our senior staff, even. Anything that is labor, rather than deciding "what to do", I feel I no longer need nearly as much help with. This is across many components in a large public org. I feel like I only need two staff now, and that's more to understand the problem and decide what to do than to actually do it, plus a backup for accountability. If I hire more, it's only because I can't keep up with the AI and am burning out, and I won't, because I don't want to "hire to fire" later on if we run out of product work. It makes me anxious, and I can't honestly recommend anyone make this their career anymore; anything else feels like false hope at this point.
This sounds like a comment from someone who has tested it in a limited capacity, such as small blog sites or side projects that did not need to be maintained.
When a company wants to develop a product and has to choose whether to build, buy, or partner, this kind of acquihire is a great move. Buy the builders with unique domain expertise and have them build it from scratch internally. If you buy a company with a fully baked product, the post-merger integration risks organ rejection.