> What’s the point in generating writing or generating art if it gives next to zero feelings of accomplishment?
That's how I feel with programming, and sometimes I feel like I'm taking crazy pills when I see so many of my colleagues using AI not only for their job, but even for their week-end programming projects. Don't they miss the feeling of..... programming? Am I the weird one here?
And when I ask them about it, they answer something like "oh but programming is the boring part, now I can focus on the problem solving" or something like that, even though that's precisely what they delegate to the AI.
You're not crazy at all. I engineer pretty-big full stack systems for a living, as a lone coder. I relish when I actually sit down and write the code. To turn a customer concept into animated UI functionality. To write a cron task that auto-generates a weekly prize contest. To hand-craft SQL and add a new feature that lets people see 10 years of data in a new way, on an old codebase.
I've let Claude run around my code and asked it for help, etc. Once in a while it's able to diagnose some weird issues - like last month, it actually helped me figure out why PixiJS was creating undefined behavior after textures were destroyed on the GPU, in a very specific case. But the truth is, I wouldn't hire an intern or an employee to write my code because they won't be able to execute exactly what I have in mind.
Ironically, in my line of work, I spend 5x as many hours thinking about what to build and how to build it as I do coding it. The fun part is coding it. And, that's the only time I charge for. I may spend 10 hours thinking about how to do something, drawing diagrams, making phone calls to managers and CEOs, and I won't charge any of that time. When I'm ready to sit down and write the code:
I go to a bar.
I turn my phone off.
I work for 6 hours, have 4 drinks, and bill $300 per hour.
I don't suspect that the kind of coding I'm doing, which includes all the preparation and thought that went into it, and having considered all edge cases in advance, is going to be replaced by LLMs. Or by the children who use LLMs. They didn't have much of a purchase on taking my job before, anyway... but sadly the ones who are using this technology now have almost no hope of ever becoming proficient at their profession.
I agree with this on a subjective level, but it also makes me think of blacksmiths at country fairs. I'm sure they deeply enjoy the work, but industrialized CNC and other technology replaced them long ago. It's a craftsman's hobby now.
What you describe sounds very pleasant and I am sure it leads to great results. I kind of envy you.
However, these two things are different: the kind of work that feels fulfilling, meaningful and even beautiful, versus: delivering the needed/wanted product.
A vibe coded solution that basically works, for a quarter of the cost, has advantages.
The notion of intentionally making work harder and less productive as some sort of protest against society seems so bizarre and self defeating. No one else will even notice or care. There will be zero broader impact.
I agree. But I also think that workflows like noduerme's might be driven by his own preferences as much as by the needs of the customer. I am sure it is a good process, but it is also something that feels good for the developer himself. So there will be a drive to use it for personal rather than business reasons.
I don't know about that. Too much is allowed to not endure. I don't want to push on that point too hard, because I get what you are saying: things that are worth something will persist. Still, it would be nice if we didn't have the ridiculous churn of stuff.. that does nothing but gather dust only to be thrown away.
Fully disagree. First, I question the value of something merely enduring. But that aside, implicit in what you're saying here is that the "skill of the swing," so to speak, doesn't matter, whereas only the quantity of swings is what matters. Baseball players clearly negate this.
The $300/hr rate is, to be honest, quite cheap when you don't charge all the time that went into meetings and preparing to write the code. It's probably more like $60/hr if you included all that. However, I don't need to account for my whereabouts the rest of the time, and I can just show a log of the code in progress if I'm ever asked about the time I bill.

Of course when you actually sit down and begin to write something new, you begin actually thinking about modules and namespaces and consolidating functions and which things you can streamline, and so on... which is why it's fun. You may change your mind several times as you realize that all of this behavior should go into the parent class or something like that. [I have a special $150/hr rate I sometimes bill for "yak shaving" - clients appreciate it, actually.] But then it's just about painting something which you already have in your mind.

I prefer to be paid for my painting, by the hour, rather than ever charging a project rate. I'm always concerned that my consulting is going to be mistaken for wasted time. I never want to be accused of wasting a client's time or overbilling; but they understand that when I sit down to write it, it will be done right the first and last time, and that it could not be done any faster or better than that.
Coding is not making a thing that appears to work. It's craftsmanship. It's quite difficult to convince a client that something which appears to work as a demo is not yet suitable or ready for production. It may take 20 more hours before it's actually ready to fly. Managing their expectations on that score is a major part of the work as well.
> And when I ask them about it, they answer something like "oh but programming is the boring part, now I can focus on the problem solving" or something like that, even though that's precisely what they delegate to the AI.
This I think I can explain, because I'm one of these people.
I'm not a programmer professionally for the most part, but have been programming for decades.
AI coding allows me to build tools that solve real world problems for me much faster.
At the same time, I can still take pride and find intellectual challenges in producing a high quality design and in implementing interesting ideas that improve things in the real world.
As an example, I've been working on an app to rapidly create Anki flashcards from Kindle clippings.
I simply wouldn't have done this over the limited holiday time if not for AI tools, and I do feel that the high level decisions of how this should work were intellectually interesting.
That said, I do feel for the people who really enjoyed the act of coding line by line. That's just not me.
This phrase betrays a profoundly different view of coding to that of most people I know who actively enjoy doing it. Even when it comes to the typing it's debatable whether I do that "line by line", but typing out the code is a very small part of the process. The majority of my programming work, even on small personal projects, is coming up with ideas and solving problems rather than writing lines of code. In my case, I prefer to do most of it away from the keyboard.
If AI were a thing that could reliably pluck the abstract ideas from my head and turn them into the corresponding lines of code, i.e. automate the "line by line" part, I would use it enthusiastically. It is not.
It's not the typing, obviously, you're right. I think the parent is talking about it being an "intellectual exercise" to organize their thoughts about what they wanted to see as a result, whereas we who enjoy programming enjoy the exercise of breaking down thoughts into logical and algorithmic segments, such that no edge cases are left behind, and such that we think through the client's requirements much more thoroughly than they thought through them themselves. A physician might take joy in finding and fixing a human or animal malady. A roofer might take joy in replacing a roof tile, or a whole roof. But what job besides coding offers you the chance to read through the entire business structure of.. a lawyer, a doctor, a roofing company, a bakery.. and then decide how to turn their business into (a) a forward-facing, customer-friendly website, and (b) a lean data-gathering machine and (c) a software suite and hosting infrastructure and custom databases tailored to their exact needs, after you've gleaned those needs from reading all their financials and everything they've ever put out into the world?
The joy of writing code is turning abstract ideas into solid, useful things. Whether you do most of it in your head or not, when you sit down to write you will find you know how you want to treat bills - is it an object under payroll or clients or employees or is it a separate system?
LLMs suck at conceptualizing schema (and so do pseudocoders and vibe coders). Our job is turning business models into schemata and then coding the fuck out of them into something original, beautiful, and useful.
Let them have their fun. They will tire of their plastic toy lawnmowers, and the tools they use won't replace actual thought. The sad thing is: They'll never learn how to think.
> The sad thing is: They'll never learn how to think.
Drawing a sense of superiority out of personal choices or preferences is a really unfortunate human trait; particularly so in this case since it prevents you from seeing developments around you with clarity.
I agree with the person you're answering. LLM-assisted coding is like reading a foreign language with a facing translation: most students who do this will make the mistake of thinking they've translated and understood the original text. They haven't. People are abysmal at maintaining an accurate mental accounting of attribution, authorship, and ownership.
I disagree. When you add an abstraction layer, the user of that layer continues to write code. That's not the case when people rely heavily on LLMs. They're at best reading and tweaking the model's output.
That's not the only way to use an LLM. One can instead write a piece of code and then ask the tool for analysis, but that's not the scenario that people like me are criticizing or concerned about -- and it's not how most people imagine LLMs will be used in the future, if models and tools continue to improve. People are predicting that the models will write the software. That's what people like me and the person I agreed with are criticizing and concerned about.
I'm uncomfortable with the idea not because it's outside of my area of comfort but because people don't understand code they read the way they understand code they write. Writing the code familiarizes the writer with the problem space (the pitfalls, for instance). When you haven't written it, and you've instead just read it, then you haven't worked through the problems. You don't know the problem space or the reasons for the choices that the author made.
To put this another way: you can learn to read a language or understand it by ear without learning to speak it. The skills are related, but they're separate. In turn, people acquire and develop the skills they practice: you don't learn to speak by reading. Junior engineers and young people who learn to code with AI, and don't write code themselves, will learn, in essence, how to read but not how to write or 'speak;' they'll learn how to talk to the AI models, and maybe how to read code, but not how to write software.
> If AI were a thing that could reliably pluck the abstract ideas from my head and turn them into the corresponding lines of code, i.e. automate the "line by line" part, I would use it enthusiastically.
So I take it you don't let coding agents write your boilerplate code? Do you instead spend any amount of time figuring out a nice way to reduce boilerplate so you have less to type? If that is the case, and as intellectually stimulating as that activity may be, it probably doesn't solve any business problems you have.
If there is one piece of wisdom I could impart, it's that you can continue enjoying the same problem solving you are already doing and have the machine automate the monotonous part. The trick is that the machine doesn't absorb abstract ideas by osmosis. You must be a clear communicator capable of articulating complex ideas.
Be the architect, let the construction workers do the building. (And don't get me started, I know some workers are just plain bad at their jobs. But bad workmanship is good enough for the buildings you work in, live in, and frequent in the real world. It's probably good enough for your programming projects.)
From the way you describe it, our process does not sound that different, except that this
> If AI were a thing that could reliably pluck the abstract ideas from my head and turn them into the corresponding lines of code, i.e. automate the "line by line" part, I would use it enthusiastically. It is not.
... is exactly how this often works for me.
If you don't get any value out of this at all, and have worked with SOTA tools, we must simply be working in very different problem domains.
That said I have used this workflow successfully in many different problem domains, from simple CRUD style apps to advanced data processing.
Two recent examples to make it more concrete:
1) Write a function with parameter deckName that uses AnkiConnect to return a list of dataclasses with fields (...) representing all cards in the deck.
Here, it one-shots it perfectly and saves me a lot of time sifting through crufty, incomplete docs.
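For anyone who hasn't used AnkiConnect, the generated function was roughly along these lines (a reconstruction, not the exact output, and the dataclass fields here are illustrative stand-ins for the ones in my actual prompt):

    import json
    import urllib.request
    from dataclasses import dataclass

    ANKI_CONNECT_URL = "http://localhost:8765"  # AnkiConnect's default endpoint

    @dataclass
    class Card:
        card_id: int
        deck: str
        question: str
        answer: str

    def anki_request(action, **params):
        """Send one AnkiConnect action and return its result, raising on error."""
        payload = json.dumps({"action": action, "version": 6, "params": params}).encode()
        with urllib.request.urlopen(ANKI_CONNECT_URL, payload) as resp:
            reply = json.load(resp)
        if reply.get("error"):
            raise RuntimeError(reply["error"])
        return reply["result"]

    def cards_in_deck(deck_name: str) -> list[Card]:
        """Return a Card for every card in the named deck."""
        card_ids = anki_request("findCards", query=f'deck:"{deck_name}"')
        infos = anki_request("cardsInfo", cards=card_ids)
        return [
            Card(
                card_id=info["cardId"],
                deck=info["deckName"],
                question=info["question"],
                answer=info["answer"],
            )
            for info in infos
        ]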
2) Implement a function that does resampling with trilinear interpolation on 3d instance segmentation. Input is a jnp array and resampling factor, output is another array. Write it in Jax. Ensure that no new instance IDs are created by resampling, i.e. the trilinear weights are used for weighted voting between instance IDs on each output voxel.
This one I actually worked out on paper first, but it was my first time using Jax and I didn't know the API and many of the parallelization tricks yet. The LLM output was close, but too complex.
I worked through it line by line to verify it, and ended up learning a lot about how to parallelize things like this on the GPU.
At the end of the day it came out better than I could have done it myself because of all the tricks it has memorized and because I didn't have to waste time looking up trivial details, which causes a lot of friction for me with this type of coding.
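For a sense of the final shape, here is a heavily simplified sketch of the approach (not the code I ended up with; the coordinate convention and edge handling are glossed over):

    import jax.numpy as jnp

    def resample_instances(labels, factor):
        """Resample a 3D instance-ID volume `labels` (a jnp integer array) by
        `factor`, using trilinear weights to vote between neighbouring IDs
        rather than interpolating the IDs themselves."""
        in_shape = jnp.array(labels.shape)
        out_shape = tuple(int(round(s * factor)) for s in labels.shape)

        # Continuous source coordinate for every output voxel.
        grids = jnp.meshgrid(
            *[(jnp.arange(n) + 0.5) / factor - 0.5 for n in out_shape],
            indexing="ij",
        )
        coords = jnp.stack(grids)                   # (3, D, H, W)
        lo = jnp.floor(coords).astype(jnp.int32)    # lower corner per axis
        frac = coords - lo                          # trilinear fraction per axis

        # Gather the 8 corner IDs and their trilinear weights.
        corner_ids, corner_w = [], []
        for dz in (0, 1):
            for dy in (0, 1):
                for dx in (0, 1):
                    offs = jnp.array([dz, dy, dx]).reshape(3, 1, 1, 1)
                    idx = jnp.clip(lo + offs, 0, (in_shape - 1).reshape(3, 1, 1, 1))
                    corner_ids.append(labels[idx[0], idx[1], idx[2]])
                    corner_w.append(jnp.prod(jnp.where(offs == 1, frac, 1.0 - frac), axis=0))
        ids = jnp.stack(corner_ids)                 # (8, D, H, W)
        w = jnp.stack(corner_w)                     # (8, D, H, W), sums to 1 over axis 0

        # Weighted vote: score each corner by the total weight of corners sharing
        # its ID, then keep the ID of the highest-scoring corner.
        same = ids[:, None] == ids[None, :]         # (8, 8, D, H, W)
        scores = (w[:, None] * same).sum(axis=0)    # (8, D, H, W)
        winner = jnp.argmax(scores, axis=0)
        return jnp.take_along_axis(ids, winner[None], axis=0)[0]

The point of the voting step is that the output can only ever contain IDs already present among the eight neighbours, so resampling never invents new instances.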
I do read it. In my experience the project will quickly turn into crap if you don't. You do need to steer it at a level of granularity that's appropriate for the problem.
Also, as I said, I've been coding for a long time. The ability to read the code relatively quickly is important, and this won't work for early novices.
The time saving comes almost entirely from typing less, Googling around for documentation or examples less, and not having to do long debugging sessions to find brainfart-type errors.
I could imagine that there's a subset of ultra experienced coders who have basically memorized nearly all relevant docs and who don't brainfart anymore... For them this would indeed be useless.
I mean, I'm curious what kind of code it's saving you time on. For me, it's worse than useless, because no prompt I could write would really account for the downwind effects in systems that have (1) multiple databases with custom schema, (2) a back-end layer doing user validations while dispatching data, (3) front-end visual effects / art / animation that the LLM can't see or interpret, all working in harmony. Those may be in 4 different languages, but the LLM really just can't get a handle on what's going on well enough. Just ends up hitting its head on a wall or writing mostly garbage.
I have not memorized all the docs to JS, TS, PHP, Python, SCSS, C++, and flavors of SQL. I have an intuition about what question I need to ask, if I can't figure something out on my own, and occasionally an LLM will surface the answer to that faster than I can find it elsewhere... but they are nowhere near being able to write code that you could confidently deploy in a professional environment.
I'm far more in the camp of not-AI than pro-LLM, but I gave Claude the HTML of our Jira ticket and told it we had a Jenkins pipeline from which we wanted to update specific fields on the ticket using Python. Claude correctly figured out how we were calling Python scripts from Jenkins, grabbed a library, and one-shotted the solution in about 45 seconds. I then asked it to add a post pipeline to do something else, which it did, and managed to get it perfectly right.
It was probably 2-3 hours work of screwing around figuring out issue fields, python libraries, etc that was low priority for my team but causing issues on another team who were struggling with some missing information. We never would have actually tasked this out, written a ticket for it, and prioritised it in normal development, but this way it just got done.
I’ve had this experience about 20 times this year for various “little” things that are attention sinks but not hard work - that’s actually quite valuable to us
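To make it concrete, the core of that kind of glue script is only a few lines. Something along these lines (a sketch, not the actual script; the Jira URL, credential handling, and field ID are placeholders):

    import os
    import sys
    import requests

    JIRA_URL = "https://jira.example.com"  # placeholder instance
    AUTH = (os.environ["JIRA_USER"], os.environ["JIRA_TOKEN"])  # however creds get injected

    def update_issue_field(issue_key: str, field_id: str, value: str) -> None:
        """Set a single field on a Jira issue via the REST API."""
        resp = requests.put(
            f"{JIRA_URL}/rest/api/2/issue/{issue_key}",
            json={"fields": {field_id: value}},
            auth=AUTH,
            timeout=30,
        )
        resp.raise_for_status()

    if __name__ == "__main__":
        # e.g. invoked from a Jenkins pipeline step with the ticket key and a value
        update_issue_field(sys.argv[1], "customfield_10011", sys.argv[2])

None of it is hard; it's the field IDs, auth plumbing, and API quirks that eat the 2-3 hours.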
> It was probably 2-3 hours work of screwing around figuring out issue fields
How do you know AI did the right thing then? Why would this take you 2-3 hours? If you’re using AI to speed up your understanding that makes sense - I do that all the time and find it enormously useful.
But it sounds like you’re letting AI do the thinking and just checking the final result. This is fine for throwaway work, but if you have to put your name behind it that’s pretty risky, since you don’t actually understand why AI did what it did.
Because I tested it, and I read the code. It was only like 40 lines of python.
> Why would this take you 2-3 hours?
It's multiple systems that I am a _user_ of, not a professional developer of. I know how to use Jira, I'm not able to offhand tell you how to update specific fields using python - and then repeat for Jenkins, perforce, slack. Getting credentials in (Claude saw how the credentials were being read in other scripts and mirrored that) is another thing.
> This is fine for throwaway work, but if you have to put your name behind it that’s pretty risky, since you don’t actually understand why AI did what it did.
As I said above, it's 30 lines of code. I did put my name behind it, it's been running on our codebase on every single checkin for 6 months, and has failed 0 times in that time (we have a separate report that we check in a weekly meeting for issues that were being missed by this process). Again, this isn't some massive complicated system - it's just gluing together 3/4 APIs in a tiny script, in 1/10 of the time it would have taken me to do it. Worst case scenario is it does exactly what it did before - nothing.
I've used it for minor shit like that, but then I go back and look at the code it wrote with all its stupid meandering comments and I realize half the code is like this:
const somecolor='#ff2222';
/* oh wait, the user asked for it to be yellow. Let's change the code below to increase the green and red */
/* hold on, I made somecolor a const. I should either rewrite it as a var or wait, even better maybe a scoped variable! */
hah. Sorry I'm just making this shit up, but okay. I don't hire coders because I just write it myself. If I did, I would assign them all kinds of annoying small projects. But how the fuck would I deal with it if they were this bad?
If it did save me time, would I want that going into my codebase?
I've not found it to be bad for smaller things, but I've found that once you start iterating, it quickly devolves into absolute nonsense like what you talked about.
> If it did save me time, would I want that going into my codebase?
Depends - and that's the judgement call. I've managed outsourcers in the pre-LLM days who, if you leave them unattended, will spew out unimaginable amounts of pure and utter garbage that is just as bad as looping an AI agent with "that's great, please make it more verbose and add more design patterns". I don't use it for anything that I don't want to, but for the many things that just require you to write some code that is getting in the way of solving the problem you actually want to solve, it's been a boon for me.
I've also not had great experiences with giving it tasks that involve understanding how multiple pieces of a medium-large existing code base work together.
If that's most of what you do, I can see how you'd not be that impressed.
I'd say though that even in such an environment, you'll probably still be able to extract tasks that are relatively self contained, to use the LLM as a search engine ("where is the code that does X") or to have it assist with writing tests and docs.
Your conclusion is spot on. Fuzz generators excel at fuzzy tasks.
"Convert the comments in this DOCX file into a markdown table" was an example task that came up with a colleague of mine yesterday. And with that table as a baseline, they wrote a tool to automate the task. It's a perfect example of a tool that isn't fun to write and it isn't a fun problem to solve, but it has an important business function (in the domain of contract negotiation).
I am under the impression that the people you are arguing with see themselves as artisans who meticulously control every bit of minutiae for the good of the business. When a manager does that, it's pessimistically called micromanagement. But when a programmer does that, it's craftsmanship worthy of great praise.
Same way you test code you wrote by hand. In-place and haphazardly, until you have it write unit tests so you can have it done more methodically. If it hallucinates a library or function that doesn't exist, it'll fail earlier in the process (at compilation).
I've used Claude to write code, and it is much harder to test that code than it is to test code "haphazardly" as I write it myself. Reason being, I can test mine after each new line I write and make sure that line is doing what I intend it to do. After Claude writes a whole set of functions, it could take hours to test all the potential failure modes.
BTW, if it doesn't take you hours to test the failure modes, you're not thinking of enough failure modes.
The time savings in writing it myself has a lot to do with this. Plus I get to understand exactly why each line was written, with comments I wrote, not having to read its comments and determine why it did something and whether changing that will have other ramifications.
If you're doing anything larger than a sample React site, it's worth taking the time to do it yourself.
Well, you could also generate the tests by CC, check them to make sure they’re legitimate, then let it implement it?
The main key in steering Claude this month (YMMV) is basically giving it tasks that are localized, can be tested, and are not too general. Then you kinda connect the dots in your head. Not always, but you can kinda get the gist of what works and what doesn't.
> AI coding allows me to build tools that solve real world problems for me much faster.
But it can't actually generate working code.
I gave it a go over the Christmas holidays, using Copilot to try to write a simple program, and after four very frustrating hours I had six lines of code that didn't work.
The problem was very very simple - write a bit of code to listen for MIDI messages and convert sysex data to control changes, and it simply couldn't even get started.
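For reference, what I wanted is only a handful of lines with a library like mido. This is a hand-written sketch with a made-up sysex layout, just to show how small the problem is:

    import mido

    SYSEX_PREFIX = (0x7D, 0x01)  # hypothetical manufacturer/device bytes

    # Listen on the default MIDI input, translate matching sysex messages into
    # control changes, and send them out the default output.
    with mido.open_input() as inport, mido.open_output() as outport:
        for msg in inport:
            if msg.type != "sysex" or tuple(msg.data[:2]) != SYSEX_PREFIX:
                continue
            # Assume the next two data bytes carry the controller number and value.
            control, value = msg.data[2], msg.data[3]
            outport.send(mido.Message("control_change", control=control, value=value))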
I'm sure someone is about to jump in and tell you why you're doing it wrong, but I'm in a similar position to you. I spent the last few days using the AI to help me pull together evidence for our ISO audit and while it didn't do a bad job, it was rife with basic errors. Simple things like consistently formatting a markdown document would work 9/10 times with the other time having it ignore the formatting, or deciding to rewrite other bits of the document for no reason.
Yeah, unfortunately the quality of tooling varies heavily, ranging from producing garbage to producing working code. Claude Code got significantly better in the last couple of months, and it's been noticeable. I've been trying to plug LLMs into my workflow throughout the year, to make sure I don't fall behind the industry. And this last month was when it "clicked". It works in large and small projects as long as you kinda know how to localize the tasks.
I know "try this other tool" is probably an eye-roll-worthy response, but as someone who's not a programmer but is in IT and has to write some scripts every once in a while and has a lot of AI-heavy dev friends - all I've ever heard about Copilot is that it's one of the worst.
I recently used Claude for a personal project and it was a fairly smooth process. Everyone I know who does a lot of programming with AI uses Claude mostly.
> "oh but programming is the boring part, now I can focus on the problem solving" or something like that, even though that's precisely what they delegate to the AI.
Take game programming: it takes an immense amount of work to produce a game, problems at multiple levels of abstraction. Programming is only one aspect of it.
Even web apps are much, much more than the code backing them. UIUX runs deep.
I'm having trouble understanding why you think programming is the entirety of the problem space when it comes to software. I largely agree with your colleagues; the fun part for me, at this point in my career, is the architecture, the interface, the thing that is getting solved for. It's nice for once to have line of sight on designs and be able to delegate that work instead of writing variations on functions I've written thousands if not tens of thousands of times. Often for projects that are fundamentally flawed or low impact in the grand scheme of things.
I don't know why people build houses with nail guns, I like my hammer... What's the point of building a house if you're not going to pound the nails in yourself?
AI tooling is great at getting all the boilerplate and bootstrapping out of the way... One still has to have a thoughtful design for a solution, to leave those gaps where you see things evolving rather than writing something so concrete that you're scrapping it to add new features.
You can pick apart a nail gun and see how exactly it works pretty easily. You can't do that with LLMs. Also, a nail gun doesn't get less accurate the more nails you shoot one after another; an LLM does get less accurate the more steps it goes through. Also, a nail gun shoots straight and not in random directions, as that would be considered dangerous. An LLM does shoot in random directions: the same prompt will often yield different results. With a nail gun you can easily pull the plug, and you won't have to spend an unreasonable amount of time verifying that the nail got placed correctly; with LLM output you have to verify everything, which takes a lot of time. If an LLM really is such a great tool for you, I fear you are not verifying everything it does.
If the boilerplate is that obvious why not just have a blueprint for that and copy and paste it over using a parrot?
Also, I don't have a nail gun subscription, and the nail gun vendor doesn't get to see what I am doing with it.
You mention a thousand ways the analogy breaks when you take it too far, but you didn't address the actual (correct) point the analogy was making: Some people don't enjoy certain parts of the creative process, and let an LLM handle them. That's all.
> Some people don't enjoy certain parts of the creative process,
Sure
> and let an LLM handle them.
This is probably the disputed part. It is not a different way of development, and as such it should not be presented like that. In software, we can use ready-made components, choose between different strategies, build everything in a low-level language etc. The trade-offs coming with each choice is in principle knowable; the developer is still in control.
LLMs are nothing like that. Using a LLM is more akin to management of outsource software development. On the surface, it might look like you get ready-made components by outsourcing it to them, but there is no contract about any standard, so you have to check everything.
Now if people would present it like "I rather manage an outsourcing process than doing the creative thing" we would have no discussion. But hammers and nails aren't the right analogies.
>LLMs are nothing like that. Using a LLM is more akin to management of outsource software development.
You're going to have to tell us your definition of 'Using a LLM' because it is not akin to outsourcing (As I use it).
When I use Claude, I tell it the architecture, the libraries, the data flows, everything. It just puts the code down, which is the boring part, and happens fast.
The time is spent mostly on testing, finding edge cases. The exact same thing if I wrote it all myself.
> 'Using a LLM' because it is not akin to outsourcing (As I use it).
The things you do with an LLM are precisely what many other IT firms do when outsourcing to India. Now you might say that this would be bonkers, but that is also why you hear so often that LLMs are the biggest threat to outsourcing rather than to software development in general. The feedback cycle with an LLM is much faster.
> I don't see how this is hard for people to grasp?
I think I understand you, and I think you have/had something else in mind when hearing the term outsourcing.
I don't think people use an LLM and say "I wrote some code", but they do say "I made a thing", which is true. Even if I use an LLM to make a library, and I decide the interfaces, abstractions, and algorithms, it was still me who did all that.
> Using a LLM is more akin to management of outsource software development.
This is a straw man argument. You have described one potential way to use an LLM and presented it as the only possible way. Even people who use LLMs will agree with you that your weak argument is easy to cut down.
You can't stretch it until it breaks and then say "see? It broke, it wasn't perfect". It works for the purpose it was made, and that's all it needed to work for.
This appears to misunderstand both construction and software development, nail guns and LLMs are not remotely parallel.
You’re comparing a deterministic method of quickly installing a fastener with something that nondeterministically designs and builds the whole building.
Nail guns are great. For nails that fit into them and spaces they fit into. But if you can't hit a nail with a hammer, you're limited to the sort of tasks that can be accomplished with the nail guns and gun-nails you have with you.
That's the problem with solving a casually made metaphor instead of sticking to the original question. Since when is AI assisted coding only when you do 100% AI and not a single line yourself? That is only the extreme end! Same with the nails actually. I doubt the builders don't also have and use hammers.
> This is the way with many labor-saving devices.
I think that's more the problem of people using only the extremes to build an argument.
Sure, but I prefer to work on projects that are fundamentally sound and high impact. Indeed, I have certainly noticed a pattern that very often ai enthusiasts exalt its capabilities to automate work that appears to be of questionable value in the first place, apart from the important second order property of keeping the developer sheltered and fed.
Programming is a ton of fun. There are competing concerns though.
I recently wrote a 17x3 reed-solomon encoder which is substantially faster on my 10yo laptop than the latest and greatest solution from Backblaze on their fancy schmancy servers. The fun parts for me were:
1. Finally learning how RS works
2. Diving in sufficiently far to figure out how to apply tricks like the AVX2 16-element LUT instruction
3. Having a working, provably better solution
The programming between (2) and (3) was ... fine ... but I have literally hundreds of other projects I've never shipped because the problem solving process is more enjoyable and/or more rewarding. If AI were good enough yet to write that code for me then I absolutely would have used it to have more time to focus on the fun bits.
It's not that I don't enjoy coding -- some of those other unshipped projects are compilers, tensor frameworks, and other things which exist purely for the benefit of programmer ergonomics. It's just that coding isn't the _only_ thing I enjoy, and it often takes a back seat.
I most often see people with (what I can read into) your perspective when they "think" by programming. They need to be able to probe the existing structure and inject their ideas into the solution space to come up with something satisfactory.
There's absolutely nothing wrong with that (apologies if I'm assuming too much about the way you work), but some people work differently.
I personally tend to prefer working through the hard problems in a notebook. By the time the problem is solved, its ideal form in code is obvious. An LLM capable of turning that obvious description into working code is a game changer (it still only works like 30% of the time, and even then only with a lot of heavy lifting from prompt/context/agent structure, so it's not quite a game changer yet, but it has potential).
If you're curious, you might also be interested in Cauchy-Reed Solomon coding. This converts Galois field operations into XORs by treating elements of GF(2^n) as bit matrices. The advantage then is that instead of doing Galois field operations, you can just xor things for much better performance. The canonical paper is https://web.eecs.utk.edu/~jplank/plank/papers/CS-05-569.pdf.
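The core idea fits in a few lines of Python (a toy illustration of the trick, not the paper's actual construction):

    POLY = 0x11D  # a common GF(2^8) reduction polynomial used in RS codes

    def gf_mul(a, b):
        """Reference GF(2^8) multiply (Russian peasant with reduction)."""
        r = 0
        while b:
            if b & 1:
                r ^= a
            a <<= 1
            if a & 0x100:
                a ^= POLY
            b >>= 1
        return r

    def bit_matrix(a):
        """Column j is a * x^j; multiplication by `a` is linear over GF(2)."""
        return [gf_mul(a, 1 << j) for j in range(8)]

    def mul_via_xors(matrix, b):
        """Multiply by the fixed element using nothing but XORs."""
        r = 0
        for j in range(8):
            if (b >> j) & 1:
                r ^= matrix[j]
        return r

    a, b = 0x57, 0x83
    assert gf_mul(a, b) == mul_via_xors(bit_matrix(a), b)

In a real encoder you precompute the bit matrix once per coefficient, so the hot loop is nothing but XORs over whole words of data.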
I enjoy the programming, and the problem solving, but only sometimes the typing. Advent of Code last month was fun to do in Common Lisp, I typed everything but two functions myself, and only consulted with the subreddit and/or the AI on a couple problems. (Those two functions were for my own idea of using A-star over Morton Numbers, I wrote about those numbers with some python code in 2011 and didn't feel like writing the conversion functions again. It didn't work out anyway, I had to get the hint of "linear programming" and a pointer to GLPK, which I hadn't used before, so I had the AI teach me how to use it for standard sorts of LP/MIP problems, and then I wrote my own Lisp code to create .lp files corresponding to the Advent problem and had GLPK execute and give the answers.)
If it's a language I don't particularly enjoy, though, so much the better that the AI types more of it than me. Today I decided to fix a dumb youtube behavior that has been bugging me for a while, I figured it would be a simple matter of making a Greasemonkey script that does a fetch() request formed from dynamic page data, grabs out some text from the response, and replaces some other text with that. After validating the fetch() part in the console, I told ChatGPT to code it up and also make sure to cache the results. Out comes a nice little 80 lines or so of JS similar to how I would have written it, setting up the MutationObserver and handling the cache map and a promises map. It works except in one case where it just needs to wait longer before setting things up, so I have it write that setTimeout loop part too, another several lines, and now it's all working.

I still feel a little bit of accomplishment because my problem has been solved (until youtube breaks things again anyway), the core code flow idea I had in mind worked (no need for API shenanigans), and I didn't have to type much JavaScript. It's almost like using a much higher level language. Life is too short to write much code in x86 assembly, or JavaScript for that matter, and I've already written enough of the latter that I feel like I'm good.
Let me explain my perspective as I do vibe coding for some side projects.
AI (even when it works correctly) is a thing that fills in the blanks.
Still, the value depends on how much work you put in.
A recent one is an interactive visualization of StarCraft 2 balance changes (https://github.com/stared/sc2-balance-timeline).
Here I could have done it myself (and spent way more time than I want to admit on refactoring, so the code looks OK-ish), but it's unlikely I would have had enough time to do so. I had the idea a few years ago, but it was just too much work for a side project. Now I did it - my focus was high-level, on WHAT I want to do, with constant feedback on how it looks, tweaking it a lot.
Another recent one is a "project for one": a Doom WAD launcher (https://github.com/stared/rusted-doom-launcher). Here I wouldn't have been able to do it myself, as I am not nearly as proficient in Rust, Tauri, WADs, etc. But I wanted to create a tool that makes launching custom Doom maps as easy as installing a game on Steam.
In both cases the pattern is the same - I care more about the result itself than its inner workings (OK, for the viz I DO care). Yes, it takes away a lot of the experience of coding oneself. But it is not something entirely different - people have asked the same "why use a framework instead of writing it yourself", "why use Python when you could have used C++", "why visit StackOverflow when you could have spent 2 days finding the solution yourself".
With side projects it is OUR focus on what we value. For someone it is writing low-level machine code by hand, even if it won't be that useful. For someone else, making cute visuals. For someone else still, having an MVP that "just works" to test a business idea.
That's a great SC2 balance tool, kudos for that! I've been out of touch with the scene - have the recent balance changes been good for the pro scene? I only watch ASL these days, no time for GSL
For watching current games, I cannot recommend anyone better than Lowko (https://www.youtube.com/@LowkoTV) - he covers the main matches and makes commentary in a style I like.
> I see so many of my colleagues using AI not only for their job, but even for their week-end programming projects
When writing code in exchange for money the goal is not to write code, it's to solve a problem. Care about the code if you want but care about solving the problem quickly and effectively more. If LLMs help with that you should be using them.
On personal projects it depends on your goal. I usually want the tool more than whatever I get from writing code. I always read whatever an LLM spits out to make sure I understand it and confirm it's correct but why wouldn't I accelerate my personal tool development as well?
"I love complicated mathematical questions, and love doing the basic multiplication and division calculations myself without a calculator. I don't understand why people would use a calculator for this."
"I love programming, and don't understand why people would use C++ instead of using machine lamguage. You get deep down close to the hardware, such a good feeling, people are missing out. Even assembly language is too much of a cheat."
On the other hand - people still knit, I assume for the enjoyment of it.
Each level of programming abstraction has taken us a step further from the bare metal, as they used to say. In machine code you specify what goes where at the memory/register level. Assembly gives you human-readable mnemonics. C abstracts away direct control, but you still manually allocate memory. Java and C# abstract away memory management but you still declare types. Python and JavaScript abstract away type declarations but you still define variables and program structure. With AI, you define your end goal in plain language, and then you either have to understand the code to edit it, or literally depend on the machine to fix everything.
In a sense it's like SQL or MiniZinc: you define the goal, and the engine takes care of how to achieve it.
Or maybe it's like driving: we don't worry about spark advance, or often manual clutches, anymore, but LLMs are like Waymo where your hands aren't even on the steering wheel and all you do is specify the destination, not even the route to get there.
> Don't they miss the feeling of..... programming? Am I the weird one here?
Our company is "encouraging" use of LLMs through various carrots and sticks; mostly sticks. They put out a survey recently asking us how we used it, how it's helped, etc. I'll probably get fired for this (I'm already on the short list for RIFs due to being remote in a pathological RTO environment and being easily the eldest developer here), but I wrote something like:
"Most of us coders, especially older ones, are coders because we like coding. The amount of time and money being put spent to make coders NOT CODE is incredible."
> That's how I feel with programming, and sometimes I feel like I'm taking crazy pills when I see so many of my colleagues using AI not only for their job, but even for their week-end programming projects. Don't they miss the feeling of..... programming? Am I the weird one here?
I've played with using LLMs for code generation in my own projects, and whilst it has sometimes been able to solve an issue - I've never felt like I've learned anything from it. I'm very reluctant to use them for programming more as I wouldn't want my own skills to stagnate.
ADHD is a thing for some people though. What works for one person might not work for other people. I have a friend who spends probably 8-10 hours a day writing code, every single day. I just can't do this personally, therefore, my ideas/projects never actually go anywhere.
AI tools allow me to do a lot of stuff within a short time, which is really motivating. They also automatically keep a log of what I was doing, so if I don't manage to work on something for weeks, I can quite easily get back in and read my previous thinking.
It can also get very demotivating to read 10 stackoverflow discussions from Google searches that don't solve my problem. This can cause me to get out of 'the zone' and makes it extremely hard to continue. With AI tools, I can rephrase my question if the answer isn't exactly what I was looking for and steer towards a working solution. I can even easily get in-depth explanations of provided solutions to figure out why something doesn't work.
I also have random questions pop up in my brain throughout the day. These distract me from my task at hand. I can now pop this question into an AI tool and have it research the answer, instead of being distracted for an hour reading up on brake pads or cake recipes or the influence of nicotine on driving ability.
I do use ChatGPT for side projects, but only as a last resort and always as a discussion partner, not a code writer. I always tell it beforehand "no code, just discussion". The fun is in figuring out as much as possible by myself and writing the implementations, and I'm not paying someone to take my fun away.
But again my projects are more research than product, so maybe it’s different.
It depends. My job and hobbies are closer to mechatronics than programming. I do like programming; I appreciate the feeling of cracking a hard problem or getting something to work for the first time, and I tend to prefer programming things myself when it makes sense.
But, I almost never do something "for the programming". Programming is just an ingredient to make the thing I actually want. This is why I use Solidworks and not OpenSCAD for most 3D modeling, for example. I've learned many things from it but I can't honestly say I'm in it for the programming.
I like programming. Quite a bit. But the modern bureaucratic morass of web technologies is usually only inspiring in the small. I do not like the fact that I have to balance so many different languages and paradigms to get to my end result.
It would be a bit like a playwright aficionado saying “I really love telling stories through stage play” only to discover that all verbs used in dialogue had to be in Japanese, nouns are a mix of Portuguese and German, and connecting words in English. And talking to others to put your play on, all had to be communicated in Faroese and Quechua.
I’m convinced that some people are overly susceptible to the “preference optimized” nature of AI output and end up completely blind to its quality and usefulness.
Not to say it’s useless garbage, there is some value for sure, but it’s nowhere near as good as some people represent it to be. It’s not an original observation, but people end up in a “folie a deux” with a chatbot and churn out a bunch of mediocre stuff while imagining they’re breaking new ground and doing some amazing thing.
You can ask, in the same vein, why use Python instead of C? Isn't the real joy of programming in writing effective code with manual memory management and pointers? Isn't the real joy in exploring 10 different libraries for JSON parsing? Or in learning how to write a makefile? Or figuring out a mysterious failure of your algorithm due to an integer overflow?
I mean we're programmers. Even though it's much more popular these days the very nature of what we do makes us "weird". At least compared to the average person. But weird isn't bad.
(Why are people doing it if they find it so boring? And why side projects?! I know it pays well but there are plenty of jobs that do. I mean my cousin makes more as a salesman and spends his days at golf courses. He's very skilled, but his job is definitely easier)
> "oh but programming is the boring part, now I can focus on the problem solving"
I also can't comprehend people when they say this.
For starters it's like saying "I want someone else to practice the scales for me, that way I can focus on playing songs." The fun part can't really happen without the hard part.
Second, how the fuck do you do the actual engineering when you're not writing the code? I mean sure, I can do a lot at the high level but 90% of the thinking happens while writing. Hell, 90% of my debugging happens while writing. It feels like people are trying to tell me that LLMs are useful because "typing speed is the bottleneck". So I'm left thinking "how the fuck do you even program?" All the actual engineering work, discovering issues, refining the formulation, and all that happens because I'm in the weeds.
The boring stuff is where the best learning and great ideas come from. Isn't a good programmer a lazy one? I'd have never learned about something like functors and template metaprogramming if I didn't ever do the boring stuff like write a bunch of repetitive functions thinking "there's got to be a better way!" No way is an LLM going to do something like that because it's a dumb solution until a critical mass is reached and it becomes a great solution. There's little pressure for that kind of progress when you can generate those functions so fast (I say little because there's still pressure from an optimization standpoint but who knows if an LLM will learn that unprompted)
Honestly coding with LLMs feels like trying to learn math by solely watching 3Blue1Brown videos. Yeah, you'll learn something but you'll feel like you learned more than you actually did. The struggle is part of the learning process. Those types of videos can complement the hard work but they don't replace it.
If you have yet to use it then you have no idea if it’s useful.
We can agree all day long about the pitfalls of the technology, but you’ve never used it so you don’t know if it’s causing you more work or replacing you.
Everyone is so fixated on the output as the commodity, whether it’s a blog post or a piece of code, that they fail to see the interaction itself as the locus of value. You can still do your rewarding work in a chat session, it can force you to think, challenge your ideas, and if you introduce your own spices into the soup it won't taste like slop. I like to explain my ideas until the LLM "gets it" and then ask it to "formalize" them in a nice piece of text, which I consume later as a meditation to deepen my thinking. I can't stand passive media anymore, need to be able to push back to feel satisfied, but this is only possible on forums and in AI chats.
This is the attitude of someone who uses hand tools when power tools are available. Yes, you lose the personal touch, but you also lose the potential efficiency. Still need to measure twice and cut once though.
I don't think this analogy really works. A power tool would be a programming language with a better set of abstractions, or a good library that solves a hard problem.
AI is like delegating to a junior programmer that never learns or gets better.
I don't like your analogy because there are good reasons for amateurs not to use the power tools (for real-world crafting). They are expensive and you can hurt yourself easier. This is very unlike using AI to help you build something faster.
Maybe a better analogy might be a car with an automatic transmission, although that doesn’t capture the pitfalls of AI very well. It could be argued that a good automatic transmission has none of the serious downsides that AI has.
Still, the general idea is sometimes getting stuff faster with less effort more automatically is more important than the “reward” of doing it yourself.
If we're bringing up cars, the parallel to draw is with GPS and navigation. Do I know how to get anywhere without technology to guide me? Have I broken my brain because I've offloaded navigation to technology?
Eh, it depends on a lot more factors. Perhaps only in the most extreme case, like the first time you take a new route.
I use a GPS all the time, but only because it also shows me traffic, red light cameras, and potential hazards. I memorized the route after the first 2-3 drives but I keep using the gps for the amenities.
That said, I’m old enough to have used printed map directions and my time in Boy Scouts gave me the skills to read a paper map too.
I'm with you. I've said it before, but: LLMs have made clear who does things for the process, and who does things for the result (obviously this is a spectrum, hardly anyone is 100% on either end).
The amount of people who apparently just want the end result and don't care about the process at all has really surprised me. And it makes me unfathomably sad, because (extremely long story short) a lot of my growth in life can be summed up as "learning to love the process" -- staying present, caring about the details, enjoying the journey, etc. I'm convinced that all that is essential to truly loving one's own life, and it hurts and scares me to both know just how common the opposite mindset is and to feel pressured to let go of such a huge part of my identity and dare-I-say soul just to remain "competitive."
I'm tired of people saying Steam on Linux just works. It doesn't.
Tried running Worms: instant crash, no error message.
Tried running Among Us: instant crash, had to add cryptic arguments to the command line to get it to run.
Tried running Parkitect: crashes after 5 minutes.
These three games are extremely simple, graphically speaking. They don't use any complicated anti-cheat measure. This shouldn't be complicated, yet it is.
Oh and I'm using Arch (BTW), the exact distro SteamOS is based on.
And of course, as always, those for which it works will tell you you're doing-it-wrong™.
These games are all rated gold or platinum on protondb, indicating that they work perfectly for most people.
Hard to say what might be going wrong for you without more details. I would guess there's something wrong with your video driver. Maybe you have an nvidia card and the OS has installed the nouveau drivers by default? Installing the nvidia first-party drivers (downloaded from the nvidia web site) will fix a lot of things. This is indeed a sore spot for Linux gaming, though to be fair graphics driver problems are not exactly unheard of on Windows either.
Personally I have a bunch of machines dedicated to gaming in my house (https://lanparty.house) which have proven to be much more stable running Linux than they were with Windows. I think this is because the particular NIC in these machines just has terrible Windows drivers, but decent Linux drivers (and I am netbooting, so network driver stability is pretty critical to the whole system).
AoE2:DE is rated gold even though multiplayer is broken for everyone, and it lags. By now someone has posted a very complex workaround to the MP issue, but it was gold even before that.
BeamNG (before a very recent native Linux beta) was gold despite a serious fps drop and a memory leak that would crash it any time there was traffic.
> Installing the nvidia first-party drivers (downloaded from the nvidia web site) will fix a lot of things
Interesting. I saw somewhere else that you're using Debian. Is that as opposed to Nouveau, or to the proprietary drivers from the Debian repos?
I'm currently testing daily-driving my desktop on Linux with an NVIDIA GPU, and the Arch wiki explicitly recommends the drivers from their repos. Arch is rolling, though, so the repo drivers are supposedly much more up to date than Debian's. I'll keep your comment in mind if I run into anything.
I am not familiar with Arch, so my advice might be wrong for Arch.
But I have a lot of experience on Debian and Ubuntu trying to use the packages that handle the nvidia driver installation for you. It works OK. But one day on a lark I tried downloading the blob directly from nvidia and installing that way, and I was surprised to find it was quite smooth and thorough, so I've been doing it that way ever since.
> Installing the nvidia first-party drivers (downloaded from the nvidia web site) will fix a lot of things.
Crazy—it used to be that nvidia drivers were by far the least stable parts of an install, and nouveau was a giant leap forward. Good to know their software reputation has improved somewhat
Nouveau has never been good for gaming. Not their fault (they had to reverse engineer everything), but it was only really ever viable for mostly 2D desktops in my experience.
Sure, but nvidia has always been seen as a liability for basic operation of the computer. Their driver quality is notoriously as bad as it gets. Nouveau fixed this.
I keep hearing people assert this but I've been using nvidia drivers on Linux for 25 years and aside from the pain of getting them installed, they have always just worked with no issues at all.
Whereas every time I install Debian fresh and temporarily get the Nouveau drivers, basic desktop graphics are slow (no hardware acceleration) and crash-prone.
Everyone says this but it is not my experience at all. Every time I try AMD cards I run into weird problems. The Nvidia drivers are a pain to install and tend to break randomly on kernel updates, but once built properly they always just work for me...
Did you use the proprietary AMD drivers? You need to use the open source drivers. As far as I know these should be the default on all distros, so just click through the OS installer, install Steam, and start gaming. Don't touch the drivers.
In my most recent attempt to use AMD, my problems were:
1. I needed to install a bleeding-edge kernel version in order to get support for the very new AMD card I had purchased, which was a bit of a pain on Debian. (With NVidia, the latest drivers will support the latest hardware on older kernels just fine.)
2. AMD can't support HDMI 2.1 in their open source drivers. Not their fault -- it's a shitty decision by the HDMI forum to ban open source implementations. But I was trying to drive an 8k monitor and for other reasons I had to use HDMI, so this was a deal-breaker for me. (This is actually now solvable using a DP->HDMI dongle, but I didn't discover that solution at the time.)
But every time I've tried to use AMD the problems have been different. This is just the most recent example.
Obviously I'm using the open source drivers, since the entire point of everyone's argument for AMD on Linux is the open source part.
The root problem may just be that I'm deeply familiar with the nvidia linux experience after 25 years of using it whereas the AMD experience is unfamiliar whenever I try it, so I'm more likely to get stuck on basic issues.
I used an Nvidia card for about a year, and I did get it working for the most part, but there was definitely some glitchiness I don't encounter with AMD.
For example, I use the Gamescope/Tenfoot interface on my system, and the actual menu for that is extremely glitchy on Nvidia drivers. On AMD it's absolutely perfect (I suspect because Valve develops this interface around an AMD card).
This has been my experience too, when I upgraded my GPU, I wanted to switch to Linux full time, so I went with AMD because everywhere people kept saying NVIDIA GPUs had a lot of issues, but it turned out to be the opposite. With my old card, I just have to install the proprietary NVIDIA driver, zero issues.
I think people are still clinging to old "wisdom" that hasn't been true for decades, like "updating breaks Arch", go figure.
FYI the ProtonDB medal system is not a good measurement of a game’s performance. The founder has admitted in the past that he would have used a different rating system if he had to do it again. That’s why he created the newer “Click to Play” rating system. Only 13% of the top 1,000 games are rated Tier 1, and even that doesn’t guarantee Windows-like performance, as there are tens of thousands of issue reports across those games.
I imagine the people saying “it just works” are saying it because it does, at least for them.
SteamOS is based on Arch, but customized and aimed at specific hardware configurations. It’d be interesting to know what hardware you’re using and if any of your components are not well supported.
FWIW, I’ve used Steam on Linux (mostly PopOS until this year, then Bazzite) for years and years without many problems. ISTR having to do something to make Quake III work a few years ago, but it ran fine after and I’ve recently reinstalled it and didn’t have to fuss with anything.
Granted, I don’t run a huge variety of games, but I’ve finished several or played for many hours without crashes, etc.
I use OpenSUSE Tumbleweed, and I've never had trouble running a game that's rated gold or above. I've even gotten an Easy AntiCheat game to work correctly.
I've been gaming on Linux exclusively for about 8 years now and have had very few issues running Windows games. Sometimes the Windows version, run through Proton, runs better than the native port. I don't tend to be playing AAA games right after launch day, though, so it could be that my taste in games is shaping my experience.
I just bought another second-hand Dell workstation (I admit I used to hate those) and can't wait to install SteamOS when it is released to the public. I don't care about AAA gaming, but the integrated card should be able to handle most games from ten years ago.
I don't have your other games, but I do have a few Worms games and they worked out of the box for me with GE Proton on NixOS.
I'm not saying "you're doing it wrong", because obviously if you're having trouble then that is, if nothing else, bad UX design, but I am genuinely curious what you're doing differently than me. I have an extremely vanilla NixOS setup that boots into GameScope + Tenfoot, I drive everything with a gamepad, and it works about as easily as a console does for me.
If anything this is the challenge with PC as a platform being so varied, any random software/hardware/config variation could bring a whole load of quirks.
That probably includes anything that isn't a PC in a time capsule from when the game originally released, so any OS/driver changes since then, and I don't think we've reached the point where we can emulate specific hardware models to plug into a VM. One of the reasons the geforce/radeon drivers (e.g. the geforce "game ready" branding) are so big is that they carry a whole catalogue of quirk workarounds, for when a game's renderer is coded badly or to make it a better fit for the hardware, which also lets them advertise +15% performance in a new version. Part of the work for wine/proton/dxvk is going to be replicating that instead of doing a blunt translation strictly to the standards.
Yeah, I think Linus himself pointed out that the desktop is the hardest platform to support because it's unbelievably diverse and varied.
With regards to Linux I generally just focus on hardware from brands that have historically had good Linux support, but that's just a rule of thumb, certainly not perfect.
If you need to use GE Proton then Proton doesn't "Just Work" by any definition. It "Just Works" if you jump through these other hoops and include community fixes. And then if the GE version doesn't work maybe you have to try Experimental or sometimes even Hotfix. All very much not "Just Works". Windows users don't have to download community versions of the Steam runtime in order to play certain games and then play whack-a-mole with versions whenever a new game comes out. There must be a vast difference between what "Just Works" means to folks who have used Linux for years and years and folks who come from Windows. This comment from the Far Cry 5 ProtonDB page highlights this perfectly I think: "Play great with some script. Disable hyperthreading in bios to speed up loading game"
Edit: Just tested another game for "Just Works" status.
Platinum support. 14 minutes before my first crash. Latest NixOS w/latest NVidia drivers. I have had luck on most games I play. But they also always seem to require some sort of effort to tweak settings to get it into a playable state. I'm sure I could spend 15-30 minutes researching KCD2 Steam Proton issues and get it resolved. That's effort I wouldn't have to make on Windows.
> That's effort I wouldn't have to make on Windows.
It's weird the amount of amnesia people seem to have with all the bullshit associated with Windows and Windows gaming. I think you're being absolutely ridiculous if you're claiming that things consistently work better with Windows, even for Windows games. There's been plenty of times when I had to do shit like disable "Data Execution Prevention" [1], or install weird wrappers for older games [2], or if I'm very lucky I have to go find the executable, right click on it and run with compatibility mode.
I don't think you're lying or anything, but I do think you're wrong; I have spent many hours debugging bullshit with Windows to make games run, especially older games but not always. For example, I couldn't get Chronicles of Riddick Assault on Dark Athena to work on Windows (when it was new) initially because of a weird bugginess associated with Securom and Windows. That was a headache of fighting with registry files and reinstalling to eventually get it to work. I had many issues with stability when playing Borderlands on Windows, to the point that it became a running joke with my friends (despite having an up to date video card and plenty of memory). These aren't the most up to date examples because I got fed up enough that I ran away to Linux.
I admittedly don't play through a lot of newer games, but I do occasionally play through them, and I played through the entirety of Marvel's Spider-Man and Miles Morales, including DLC, without any crashes and as far as I can tell literally no issues at all. I was also able to install and play Resident Evil Village, though I haven't beaten that one yet (but I have played for more than fifteen minutes). Obviously sample size of one though.
GE Proton was something I installed when I was installing the rest of my SteamOS stuff with a custom built machine, so I agree it's not "Just Works" but it was a one time thing I did and never thought about it again. I'm not sure what it buys me over regular Proton honestly; I installed it because someone told me to install it and it's not been an issue.
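For anyone wondering, a manual GE-Proton install is usually just extracting the release tarball into Steam's compatibilitytools.d directory and restarting Steam, roughly:

    mkdir -p ~/.steam/root/compatibilitytools.d
    # the release name here is a placeholder; grab the current one from the GE-Proton releases page
    tar -xf GE-Proton9-XX.tar.gz -C ~/.steam/root/compatibilitytools.d

After a Steam restart it shows up in the compatibility dropdown in a game's properties; tools like ProtonUp-Qt automate the same thing.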
Nvidia drivers are definitely pretty hit-or-miss on Linux though (mostly miss), no argument on that. I usually just buy AMD and it's a non-issue.
I don't really care if you run Windows, if you like it then you're of course free to make your own bad decisions, but I just think you're wrong if you claim Windows doesn't have its share of bullshit involving basically anything graphical.
> And of course, as always, those for which it works will tell you you're doing-it-wrong™ .
This sounds like you are rejecting help because you have made up your mind in frustration already.
Because you are doing it wrong. If you want an OS that just works, you should use Ubuntu or Fedora. Why is SteamOS based on Arch then? Because Valve wants to tweak things in it and tinker with it themselves to get it how they like.
You don't.
So use an OS that requires less from you and that tries to just work out of the box, not one that is notorious for being something you break and tinker with constantly (Arch).
I've been using Arch for 15 years, it's not like I'm suddenly discovering the concept of the distro.
But when something crashes with no error message whatsoever, it makes it a tiny bit harder to troubleshoot.
Especially when so many people answer, just like I had predicted, "works on my machine". Which would only be a gotcha if I had implied it worked on no machine whatsoever. Which I didn't.
I'll tinker some more and I'll be sure to post my findings if I get these games to work.
Well then, look at the logs? Sure, it's not as in-your-face, but steam/proton does log. I'm fairly sure that, at most, setting a command invocation parameter and looking at the game logs and system logs will show you the exact problem, and given that these games run just fine for a lot of people, the fix is probably trivial.
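Concretely, the usual first step is to put something like this in the game's launch options in Steam:

    PROTON_LOG=1 %command%

Proton then writes a log to ~/steam-<appid>.log, and that plus the system journal is usually enough to spot the missing library or the failing call.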
Counterpoint: I don't have to look at logs, chase down obscure error reports, and spend my weekend debugging something that works flawlessly on Windows. We shouldn't have to do that.
You don't have to look at logs either in case of games and hardware combos that do run flawlessly on Linux, which is a huge chunk of all games.
But feel free to try running a game on Windows with a missing video card driver (which is likely the sort of thing you're missing here) and the like. I would say it's an even worse experience.
Possibly because you won't have as many logs to look at on Windows; you're merely given the option of sending a proprietary dump blob to the developer's bug tracker, and then hoping they eventually fix whatever mystery issue is affecting you. God help you if it's the overzealous DRM or anticheat from some other game that likely isn't present on their QA machines.
I am using Arch and all the games I played on Steam (at least 20, not the ones mentioned above) worked perfectly.
One thing that I do though is get most games at least one year after release, when probably many issues are fixed. I had tons of issues many years ago, with buggy games bought immediately after release (on Windows back then), so now I changed strategy...
Arch is nice if you want to tinker. Based on your reasoning, I wouldn't recommend it.
But if you still want something Arch-based, I would recommend EndeavourOS, or, for an even simpler/better distro, Bazzite.
You are definitely doing it wrong; I rarely have issues, and when I do I just switch compatibility tools. I play multiple indie games and Marvel Rivals, and I played lots of Among Us on my machine in 2020. Running Pop OS.
As an Arch (btw) user myself, yes, you're doing something wrong.
Arch won't hold your hand to ensure everything required is installed, because many dependencies are either optional (you have to read the pacman logs) or just hidden (because they're bundled inside the game itself). Valve actually does a great job providing a "works everywhere" runtime, since their games are distributed in a flatpak-like fashion, but things can slip through the cracks.
The compositor can have an effect. The desktop settings. The GPU drivers. What's installed as far as e.g. fonts go. RAM setup, with or without swap.
As for SteamOS, the real difference is that despite being Arch-based, you're not installing Arch, but SteamOS: a pre-packaged, pre-configured Arch Linux, with a set of opinionated software and its own pre-made config files, for a small set of (1) devices. It's not really Arch you're installing, but a full-blown distro that happens to be Arch-based.
That said, I understand your frustration as I've hit this many times on a laptop with dual graphics. Getting PRIME to run with the very first drivers that supported it was fun. Oh and I'm likely to hit the same walls as you since I just switched my gaming rig to Arch. GLHF!
Well, but many games just work. Actually, I try starting the games without any tweaks before heading over to protondb.com, and often they run just fine.
But it is also true that many games still require minor tweaks. For example, just last week, I found out that I had to enable hardware acceleration for the webview within Steam, just to be able to log in to Halo Infinite. It was just clicking a checkbox, but otherwise, the game would not have been playable.
But then again, I am always surprised when I find out you run into those kinds of issues on Windows as well.
Yeah, same here. I sometimes google "wine WoW issues" and every time there are recent threads, so I don't even try. Linux still has a long way to go before it becomes a gaming platform.
The games don't fail to run because they are so "graphically powerful"; they fail to run because you chose to set up your system without the necessary runtime.
There are people who make stripped-down versions of windows. Is it fair to say that because these releases exist that windows isn't "just works" either?
The author makes this error every single time, in both articles by him I've read today. For some reason, as a person whose native language is not English, this particular error pisses me off so much.