Spaghetti without tomato sauce? That's like pissing in the morning without farting. Sure, it'll get the main job done but it's not the same pure pleasure.
Fwiw, you can neutralize the acidity of tomato sauce with a little bit of baking soda. Start with a pinch, stir, wait thirty seconds, and taste to see if you need more.
My guess is that the pasta mentioned above is spaghetti "aglio olio e peperoncino" (garlic, olive oil, red pepper), so not just olive oil.
Could be the recipe with the highest taste-to-effort ratio you can find, something that even a drunk student can pull off at 4 in the morning, so they probably just continued the tradition from their university years.
Butter emulsifies into a sauce just from the residual heat of the spaghetti (and some mechanical action - stirring, pan flip, etc)
Oil needs a bit more help, otherwise it's just grease on noodles. The starchy water the pasta was cooked in can do most of the heavy lifting there, but the addition of garlic helps too.
Almost zero; the non-zero ones are about supporting legacy software/hardware. Also, I would not touch C on any DOS platform, since the great Turbo Pascal exists for it. Nothing else even compares to it.
No, it didn't. Or rather, it did for the run-of-the-mill coder-camp wannabe programmer, which is what you sound like. For me it's the opposite. That's because I don't do run-of-the-mill web pages; my work is very specific, and the so-called "AI" (which is actually just googling with extra spice on top, I don't think I'll see true AI in my lifetime) is too stupid to do it. So I have to break the work down into several sessions, giving only partial details (divide and conquer), otherwise it will confabulate stupid code.
Before this "AI" I had to do the mundane boilerplate tasks myself. Now I don't. That's a win for me. The grand thinking and the whole picture of the projects is still mine, and I keep trying to hand it to the "AI" from time to time, but each time it spits out BS. It also helps that, as a freelancer, my stuff gets used by my client directly in production (no manager above me, who has a group leader, who has a CEO, who has the client's IT department, which finally has the client as the end user). That's another good feeling. Corporations with layers upon layers suck the joy out of programming. Freelancing allowed me to avoid that.
I'm curious: could you give me an example of code that AI can't help with?
I ask because I've worked across different domains: V8 bytecode optimizations, HPC at Sandia (differential equations on 50k nodes, adaptive mesh refinement heuristics), resource allocation and admission control for CI systems, a custom UDP network stack for mobile apps https://neumob.com/. In every case I can remember, the AI coding tools of today would have been useful.
You say your work is "very specific" and AI is "too stupid" for it. This just makes me very curious: what does that look like, concretely? What programming task exists that can't be decomposed into smaller problems?
My experience as an engineer is that I'm already just applying known solutions that researchers figured out. That's the job. Every problem I've encountered in my professional life was solvable: you decompose it, you look up an algorithm (or an approximation), you implement it. Sometimes the textbook says the math is "graduate-level" but you just... read it and it's tractable. You linearize, you approximate, you use penalty or barrier methods. Not a theoretically optimal solution, but it gets the job done.
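(To make the "penalty method" hand-wave concrete, here's a toy sketch I made up, not tied to any real project: minimize (x - 2)^2 subject to x <= 1 by adding a quadratic penalty for violating the constraint and growing its weight.)

```python
def penalized_minimum(mu):
    # For F(x) = (x - 2)^2 + mu * max(0, x - 1)^2 the minimizer sits in the
    # violated region (x > 1), where F'(x) = 2(x - 2) + 2*mu*(x - 1) = 0
    # gives x = (2 + mu) / (1 + mu).
    return (2 + mu) / (1 + mu)

for mu in [1, 10, 100, 1000]:
    print(f"mu = {mu:4d} -> x* ~= {penalized_minimum(mu):.4f}")
# 1.5000, 1.0909, 1.0099, 1.0010: the answer slides toward the true optimum x = 1.
```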
I don't see a structural difference between "turning JSON into pretty HTML" and using OR-tools to schedule workers for a department store. Both are decomposable problems. Both are solvable. The latter just has more domain jargon.
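(For what it's worth, the department-store scheduling side really is just a handful of constraints once you write it down. Here's a minimal sketch with OR-tools' CP-SAT solver; the workers, shifts, and coverage numbers are toy data I invented.)

```python
from ortools.sat.python import cp_model

workers = ["ana", "bob", "cai", "dee"]
shifts = ["mon_am", "mon_pm", "tue_am", "tue_pm"]
needed_per_shift = 2          # toy coverage requirement
max_shifts_per_worker = 3     # toy fairness cap

model = cp_model.CpModel()
assign = {(w, s): model.NewBoolVar(f"{w}_{s}") for w in workers for s in shifts}

for s in shifts:   # every shift gets enough staff
    model.Add(sum(assign[w, s] for w in workers) >= needed_per_shift)
for w in workers:  # nobody is overloaded
    model.Add(sum(assign[w, s] for s in shifts) <= max_shifts_per_worker)
model.Minimize(sum(assign.values()))  # use as few assignments as possible

solver = cp_model.CpSolver()
if solver.Solve(model) in (cp_model.OPTIMAL, cp_model.FEASIBLE):
    for (w, s), var in assign.items():
        if solver.Value(var):
            print(w, "->", s)
```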
So I'm asking: what's the concrete example? What code would you write that's supposedly beyond this?
I frequently see this kind of comment in AI threads: that there are more sophisticated, AI-proof kinds of programming out there.
Let me try to clarify another way. Are you claiming that, say, 50% of total economic activity is beyond AI, or that it's some niche role that only contributes 3% to GDP? Because it's very different whether this "difficult" work is everywhere or only in a few small pockets.
Did you play Assassin's Creed Valhalla? In it there is a board game called Orlog. Go and make that game multiplayer so you can play with your spouse/son/friend. Come back to me once you're done and then we'll see how much time it took you.
Or remake the Gwent board game that is in Witcher 3.
Make either of them a mobile game so you can enjoy it in the same room with the person you love. Also make sure you can create multiple decks (for Gwent) / multiple starting gods (for Orlog), and that you just select one at the start and hit "ready to play" (or whatever). You'll know what I mean once you understand either of these games.
Good luck getting either of them made in one session without breaking the big picture into a million pieces, while keeping that big picture in your head.
I'm trying to understand where our viewpoints differ, because I suspect we have fundamentally different mental models about where programming difficulty actually lives.
It sounds like you believe the hard part is decomposing problems - breaking them into subproblems, managing the "big picture," keeping the architecture in your head. That this is where experience and skill matter.
My mental model is the opposite: I see problem decomposition as the easy part - that's just reasoning about structure. You just keep peeling the onion until you hit algorithmically irreducible units. The hard part was always at the leaf nodes of that tree.
Why I think decomposition is straightforward:
People switch jobs and industries constantly. You move from one company to another, one domain to another, and you're productive quickly. How is that possible if decomposition requires deep domain expertise?
I think it's because decomposition is just observing how things fit together in reality. The structure reveals itself when you look at the problem.
Where I think the actual skill lived:
The leaf nodes. Not chipping away until you are left with "this is a min-cut problem" - anyone off the street can do that. The hard part was:
- Searching for the right algorithm/approach for your specific constraints
- Translating that solution into your project's specific variables, coordinate system, and bookkeeping
Those two things - search and translation - are precisely what AI excels at.
What I think AI changed:
I could walk into any building on Stanford campus right now, tap a random person (no CS required!) on the shoulder, and they could solve those leaf problems using AI tools. It no longer requires years of experience and learned skills.
I think this explains our different views: If you believe the skill is in decomposition (reasoning about structure), then AI hasn't changed much. But if the skill was always in search and translation at the leaf nodes (my view), then AI has eliminated the core barrier that required job-specific expertise.
Does this capture where we disagree? Am I understanding your position correctly?
Challenge accepted. If you get back to me I'll livestream this on Saturday.
I want to be crystal clear about what I'm claiming.
My Claim:
AI assistance has effectively eliminated specialized job skills as a barrier. Anyone can now accomplish what previously required domain expertise, in comparable time to what a pre-AI professional would take.
Specifically:
- I've never written a game. I've never used a browser rigid-body physics library. Never written a WebGL scene with Three.js before. Zero experience. So I should fail, right?
- I think I could recreate the full 3D scene, hand meshes, rigging, materials, lighting - everything you see in that screenshot - using AI assistance
- I'm not going to do ALL of that in a couple hours, because even a professional game developer couldn't do it all from scratch in a couple hours. They would have assets, a physics engine, a rendering engine, textures, etc., all because they were creating Orlog inside a larger game that provides all those affordances.
- But I could do it in the same timeframe a professional would have taken pre-AI
My interpretation of your challenge:
You're claiming that writing the multiplayer networking and state management for a turn-based dice game is beyond what AI can help a "run of the mill coder camp wanna be programmer" (your words for me) accomplish in a reasonable timeframe. That even with a simple 2D UI, I lack the fundamental programming skills to write the multiplayer networking code and manage state transitions properly.
So here's what I'll build:
A multiplayer Orlog game with:
- Full game logic implementing all Orlog rules and mechanics
- Two players can connect and play together
- Observers can join and watch
- Game state properly synced and managed across clients
- Real dice physics simulation (because otherwise the game feels boring and unsatisfying; I'll grant you that point). But I'll have the server pick the dice roll values to avoid cheating. (Easiest trick in the book: just run the physics offscreen first, find the face that lands up, remap the textures, then replay on screen; see the sketch just after this list.) I'll use a library for the simple rigid-body physics engine, because you couldn't write one from scratch in 3 hours either.
- Visual approach: simple 2D/indie cartoon-style game UI, with dice rolling in a separate physics area (really just a dice-roll view composited at the bottom of the screen; the results animate onto the 2D board, but in reality they are totally separate systems)
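Here's roughly what that offscreen-roll trick looks like, as a Python sketch (my own illustration, not the final code): the physics step hands back the die's final orientation, you work out which face ended up on top, and you swap face textures so the server-chosen value is the one showing when the identical roll is replayed on screen.

```python
# Assumes the offscreen physics step returns the die's final orientation
# as a 3x3 rotation matrix (list of rows).

FACE_NORMALS = {  # local outward normal of each face, keyed by the value painted on it
    1: (0, 0, 1), 6: (0, 0, -1),
    2: (0, 1, 0), 5: (0, -1, 0),
    3: (1, 0, 0), 4: (-1, 0, 0),
}

def face_up(rotation):
    # The rotated normal's world-z component says how much that face points up.
    def world_z(n):
        return sum(rotation[2][i] * n[i] for i in range(3))
    return max(FACE_NORMALS, key=lambda face: world_z(FACE_NORMALS[face]))

def texture_remap(rotation, server_value):
    """Return {face: value_to_paint} so the replayed roll shows server_value on top.
    Simplification: a plain two-face swap (a real die keeps opposite faces
    summing to 7, which this ignores)."""
    up = face_up(rotation)
    remap = {face: face for face in FACE_NORMALS}
    remap[up], remap[server_value] = server_value, up
    return remap

# Example: the recorded roll ends with face 3 up, but the server rolled a 5.
final_orientation = [[0, 1, 0], [0, 0, 1], [1, 0, 0]]
print(face_up(final_orientation))            # -> 3
print(texture_remap(final_orientation, 5))   # face 3 now carries the value 5
```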
What I need from you:
1. Is this the right interpretation? You're claiming the networking/state management is beyond what AI can help me accomplish?
2. Time predictions:
- How long would a competent game developer take to build what I've described?
- How long will it take me?
3. At what point do I prove my point? What's the minimum deliverable you'd accept as having completed the challenge?
Are you willing to make concrete, falsifiable predictions about this specific challenge?
Go ahead and do it. Put the code on GitHub at your convenience. I don't care about your livestream; you can do it whenever. What I want is an app that I can install on my Android smartphone, that my son can install as well, so we can play together. I am giving you a generous one week, which should be more than enough for your "AI can do it easier". I did it in a month, and I put in about 120 hours during that time. If you get close to those 120 hours or go over, you fail this "challenge accepted". Have fun.
I remember when I learned Java after TP. I went to the team's Java go-to guy and asked "how do I declare a global variable?". His response of "there is no such thing, everything is a class" was the start of my hatred towards Java. It only grew from there :)
It has a MediaTek SoC, and custom ROMs for these chips are scarce. If you look at the supported devices on the Lineage wiki, you'll see only 2 out of 550 devices have a MediaTek SoC[0]; most of them are Qualcomm.
And iirc from the xda forums, even for Xiaomi phones with a Qualcomm SoC it isn't certain anyone will try to make a custom ROM. Xiaomi just releases too many devices to have support for all of them.
A lot of people here in the comments are asking what the implementation would look like, given this is HN and the crowd here is technically inclined. No idea how they will do it, but I can tell you how I did it in 2008.
See, in 2008 one of my projects had a client with a lot of venues around the continental US and Mexico, and those venues had spotty internet connections (think ski resort venue, remote, with internet delivered by antennas that weather could knock out). Meaning when the internet was not available, any card transaction was a no-go. This was a problem to be solved, so my client asked if there was a way to make offline credit payments.
So here is my implementation: read the credit card details and deliver the goods -> store the card details in a local database, encrypted -> check online connectivity -> when the internet was back, try to charge the card. If it went through, all was done: the details were erased from local storage, everybody happy. If it failed, retry, 5 times per day, for 5 different days. After 25 tries, blacklist the credit card, forward the information to the legal department, and mark that credit card as not acceptable from then on. So if you screwed the client with a bad credit card, you could only do it for 5 days maximum. And you also had a legal department on your ass. Meaning you got a fake card, good for you, keep it up, because now you are also on the Secret Service's radar (most people don't know, but the Secret Service, not the FBI, gets involved in this). In the 8 years I was involved in this project, the number of times an issue was raised to the legal department was something like under 5. So most folks actually pay, and the few that got retried probably had a temporary problem with their funds and eventually got back on track. For those under 5, I think all of them eventually cut a deal with legal without the issue going further up. Sorry guys, no juicy story involving the Secret Service here.
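If it helps, the retry/blacklist flow above boils down to something like this sketch (the helper names and the in-memory queue are stand-ins of mine; the real thing kept the card details encrypted in a local database):

```python
MAX_ATTEMPTS = 25            # 5 tries per day, for 5 different days
pending = []                 # queued offline transactions
blacklist = set()            # cards we will no longer accept

def has_connectivity():      # stand-in: really a check against the payment gateway
    return True

def charge_card(card, amount):   # stand-in: really the online charge attempt
    return False                 # pretend it keeps failing, to show the unhappy path

def notify_legal(txn):           # stand-in for forwarding to the legal department
    print("legal notified about", txn["card"])

def queue_offline_sale(card, amount):
    """Goods already delivered; store the (encrypted) details for later."""
    pending.append({"card": card, "amount": amount, "attempts": 0})

def retry_pass():
    """One of the five daily passes over the offline queue."""
    if not has_connectivity():
        return                       # still offline, try again later
    for txn in list(pending):
        if charge_card(txn["card"], txn["amount"]):
            pending.remove(txn)      # paid: erase the stored details, done
            continue
        txn["attempts"] += 1
        if txn["attempts"] >= MAX_ATTEMPTS:
            blacklist.add(txn["card"])   # not acceptable from now on
            notify_legal(txn)
            pending.remove(txn)

queue_offline_sale("4111-xxxx-xxxx-1111", 49.99)
for _ in range(MAX_ATTEMPTS):
    retry_pass()
print("blacklisted:", blacklist)
```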
This probably worked because the goods were priced at around $50 or less, so at most you'd screw the company I worked for out of something like $500. And most likely this would not work for a big retailer like Amazon, where you can purchase thousands of dollars' worth in a single transaction.
But it had the advantage that it worked with all cards, debit or otherwise, Visa/MasterCard or whatever. If I were on the implementation side nowadays for the Swedish bank in this article, I would probably do it the way somebody else already proposed here in the comments: have the card also contain an electronic signature, which means a lot more scrutiny to get it released, which means, yeah, your privacy is fucked to Alpha Centauri and back if you try anything shady.
Wait until we get to know another species; then we will not just fill that Unicode space, we will ditch any UTF-16 compatibility so fast it will make your head spin on a swivel.
Imagine the code points we'll need to represent an alien culture :).