Hacker News | crq-yml's comments

There's a cruel truth to electing to use any dependency for a game: any of it may turn out to be a placeholder for the final design. If the code that's there aligns with the design you have, maybe you speed along to shipping something, but every large production ends up doing something custom somewhere, somehow, whether that's in the binary or through scripts.

But none of the code is necessary to do game design either, because the code just reflects the symbolic complexity you're trying to put into it. You can run a simpler scene, with simpler graphics, and it will still be a game. That's why we have "10-line BASIC" game jams, and why they produce interesting and playable results. Making it commercial quality is more a matter of getting the necessary kinds and quantities of feedback to find the audience and meet their expectations. Sometimes that means using a big-boy engine to get a pile of oft-requested features, but I've also seen it be completely unimportant. It just depends a lot on what you're making.


I think the main reason not to go full-throttle into "vibes -> machine code" (to extrapolate past doing it in C) is that we have a history of building nested dolls of bounded error into our systems. We do that with file systems, process separation, call stacks, memory allocations, and virtual machines.

Now, it is true that vibe coding produces a larger quantity of lower-level code than we would stomach writing on our own. But that has consequences for the resulting maintenance challenge, since the system as a whole is less structured by its boundaries.

I think a reasonable approach when using the tools is to address problems "one level down" from where you'd ordinarily do it, and to allow yourself to use something older where there is historical source for the machine to sample from. So, if you currently use Python, maybe try generating some Object Pascal. If you use C++, maybe use plain C. If there were large Forth codebases I'd recommend targeting that since it breaks past the C boundary into "you're the operator of the system, not just a developer", but that might be the language that the approach stumbles over the most.


Solus. Same install for five years running, rolling release, no breakage.


You will still need the tool, but the interface to it may start to change.

A lot of the editing functions for 3D art play some role in achieving verisimilitude in the result - that it looks and feels believably like some source reference, in terms of shapes, materials, lights, motion and so on. For the parts of that where what you really want to say is "just configure A to be more like B", prompting and generative approaches can add a lot of value. It will be a great boost to new CG users and allow one person to feel confident in taking on more steps in the pipeline. Every 3D package today resembles an astronaut control panel because there is too much to configure and the actual productions tend to divvy up the work into specialty roles where it can become someone's job to know the way to handle a particular step.

However, the actual underlying pipeline can't be shortcut: the consistency built by traditional CG algorithms is the source of the value within CG, and still needs human attention to be directed towards some purpose. So we end up in equilibriums where the budget for a production can still go towards crafting an expensive new look, but the work itself is more targeted - decorating the interior instead of architecting the whole house.


I believe Lisp is better understood than Forth these days, in that most of the "big ideas" that were built in it have since been borrowed and turned into language features elsewhere. We have a lot of languages with garbage collection, dynamic types, emphasis on a single container type, some kind of macro system, closures, self-hosting, etc. These things aren't presented with as much syntactic clarity outside of Lisp, but they also benefit from additional engineering that makes them "easy to hold and use".

Lisp appeals to a hierarchical approach, in essence. It constrains some of the principal stuff that "keeps the machine in mind" by automating it away, so that all that's left is your abstraction and how it's coupled to the rest of the stack. It's great for academic purposes since it can add a lot of features that isolate well. Everyone likes grabbing hierarchy as a way to scale their code to their problems, even though its proliferation is tied to current software crises. Hierarchical scaling provides an immediate benefit (automation everywhere) and a consequent downside (automation everywhere, defined and enforced by the voting preferences of the market).

Forth, on the other hand, is a heavily complected thing that doesn't convert into a bag of discrete "runtime features" - in the elementary bootstrapped Forth, every word collaborates with the others to build the system. The features it does have are implementation details elevated into something the user may exploit, so they aren't engineered to be "first class", polished, or easy to debug. It remains concerned with the machine, and its ability to support hierarchy is less smoothly paved since you can modify the runtime at such a deep level. That makes it look flawed or irrelevant (from a Lisp-ish perspective).

But that doesn't mean it can't scale, exactly. It means that the typical abstraction it enables is to build additional machines that handle larger chunks of your problem, while the overall program structure remains flat and "aware" of each machine you're building, where its memory is located, its runtime performance envelope, and so on. It doesn't provide the bulldozers that let you relocate everything in memory, build a deep call stack, call into third-party modules, and so on. You can build those, but you have to decide that they're actually necessary instead of grabbing them in anger because the runtime already does it. This makes it a good language for "purposeful machines", where everything is really tightly specified. It has appealing aspects for real-time code, artistic integrity, verification, and long-term operation. Those are things the market largely doesn't care about, but there is a hint of the complected nature of Forth in every system that aims for them.


Bloat mostly reflects Conway's law, and the outcome of that law is that you're building towards the people you're talking to.

If you build towards everyone, you end up with a large standard like Unicode or IEEE 754. You don't need everything those standards have for your own messages or computations - sometimes you find them counter to your goal, in fact, and they end up wasting transistors - but they are convenient enough to be useful defaults, convenient enough to store data that will be reused for something else later, and therefore they are ubiquitous in modern computing machines.

And when you have the specific computation in mind - an application like plotting pixels or ballistic trajectories - you can optimize the heck out of it and use exactly the format and features needed and get tight code and tight hardware.

But when you're in the "muddled middle" of trying to model information - and maybe it uses some standard stuff, but your system is doing something else with it, and the business requirements are changing, and the standards are changing too, and you want it to scale - then you end up with bloat. Trying to be flexible and break the system up into modular bits doesn't really stave this off so much as it creates a Whack-a-Mole of displaced complexity. Trying to use the latest tools and languages and frameworks doesn't solve it either, except where they drag you into a standard that can successfully accommodate the problem. Many languages find their industry adoption case when a "really good library" comes out for them, and that's a kind of informal standardization.

When you have a bloat problem, try to make a gigantic table of possibilities and accept that it's gonna take a while to fill it in. Sometimes along the way you can discover what you don't need and make it smaller, but it's a code/docs maturity thing. You don't know without the experience.


One of the things I remember about myself and others as young people emerging in the years around Y2K was that we were taught presumption at every opportunity. Pat answers from the elite circles were to be found for everything, and the referential aspects of pop culture were built on that; they could critique it, make satire of it, but they couldn't imagine a world without it, and therefore the conversation had a gravity of the inevitable and inescapable. Piece by piece, that has been torn down in tandem with the monoculture. A lot of it has been subsequently called out as something toxic, or an -ism, or otherwise diminishing.

Every influencer now has this dance they do with intellectual statements where, unless they intentionally aim to create rhetorical bait, they don't make bold context-free claims. They hedge and address all sorts of preliminaries.

At the same time, the entry points to culture have shifted. There's a very sharp divide now, for example, between the online posting of fine art, decorative art, and commercial art, and "the online art community" - influencer-first artists posting primarily digital character illustrations on social media. The first three are the legacy forms (and the decorative arts are probably the least impacted by any of this), but the last invokes a younger voice that is oblivious to history - they publish now and learn later, so their artistic conversation tends to be more immature, but it comes with a sense of identity that generally mimics the influencer space. Are they making art or content? That seems to be the foundational struggle.


I believe it's more nuanced than that. Artists, like programmers, aren't uniformly trained or skilled. An enterprise CRUD developer asks different questions and proposes different answers compared to an embedded systems dev or a compiler engineer.

Visual art is millennia older and has found many more niches, so - besides there being a very clear history and sensibility for what is actually fundamental versus industry smoke and mirrors - for every artist you encounter, the likelihood that their goals and interests happen to coincide with "improve the experience of this software" is proportionately lower than in development roles. Calling it drudgery isn't accurate, because artists do get the bug for solving repetitive drawing problems and sinking hours into rendering out little details, but the basic motive for it is also likely to be "draw my OCs kissing", with no context of collaboration with anyone else or of building a particular career path. The intersection between personal motives and commerce filters a lot of people out of the art pool, and the particular motives of software filter them a second time. The artists with leftover free time may use it for personal indulgences.

Conversely, it's implicit that if you're employed as a developer, there is someone else you are talking to who depends on your code and its precise operation, and the job itself is collaborative, with many hands potentially touching the same code and every aspect of it discussed to death. You want to solve a certain issue that hasn't yet been tackled, so you write the first attempt. Then someone else comes along and tries to improve on it. Because of that, the shape of the work and how you approach it remains similar across many kinds of roles, even as the technical details shift. As a result, you end up with a healthy amount of free-time software made to a professional standard simply because someone wanted a thing solved and picked up a hammer.


That's verisimilitude. We were doing that with representational art well before computers, and even doing stipple and line drawing to get "tonal indications without tonal work". Halftone, from elsewhere in the thread, is a process that does something similar. When you go deeper into art theory, verisimilitude comes up frequently as something that is both of practical use (measure carefully, use corrective devices and appropriate drafting and markmaking tools to make things resemble their observed appearance) and also something that usually isn't the sole communicative goal.

All the computer did was add digitally equivalent formats that decouple the information from its representation: the image can be little dots or hex values. Sampling theory lets us perform further tricks by defining correspondences between time, frequency, and amplitude. When we resample pixel art using conventional methods of image resizing, it breaks down into a smeary mess because it relies on certain artifacts of the representational scheme that differ from a photograph, which assumes a continuous light signal.

Something I like doing when drawing digitally is to work at a high resolution using a non-antialiased pixel brush to make black and white linework, then shrink it down for coloring. This lets me control the resulting shape after it's resampled (which, of course, low-pass filters it and makes it a little more blurry) more precisely than if I work at target resolution and use an antialiased brush; with those, lines start to smudge up with repeated strokes.
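
To make that concrete, here is a toy numerical sketch in C (the 8x8 "linework" bitmap is made up, not anything from the comment above) of why the shrink step acts as a low-pass filter: naive nearest-neighbour sampling keeps hard black-and-white pixels but can drop strokes entirely, while a 2x2 box filter turns stroke coverage into intermediate grey values, i.e. controlled antialiasing.

    /* Toy demo: downscaling 1-bit linework from 8x8 to 4x4.
       Nearest-neighbour keeps hard pixels but can drop strokes;
       a 2x2 box filter low-pass filters coverage into grey values. */
    #include <stdio.h>

    #define SRC 8
    #define DST 4

    int main(void) {
        /* hypothetical linework: 1 = black stroke, 0 = white paper */
        int src[SRC][SRC] = {
            {0,0,0,1,1,0,0,0},
            {0,0,1,0,0,1,0,0},
            {0,1,0,0,0,0,1,0},
            {1,0,0,0,0,0,0,1},
            {1,0,0,0,0,0,0,1},
            {0,1,0,0,0,0,1,0},
            {0,0,1,0,0,1,0,0},
            {0,0,0,1,1,0,0,0},
        };

        printf("nearest neighbour (hard pixels, strokes can vanish):\n");
        for (int y = 0; y < DST; y++) {
            for (int x = 0; x < DST; x++)
                printf("%d    ", src[y * 2][x * 2]);   /* keep one sample, discard the rest */
            printf("\n");
        }

        printf("\n2x2 box filter (low-pass: fractional stroke coverage):\n");
        for (int y = 0; y < DST; y++) {
            for (int x = 0; x < DST; x++) {
                int sum = src[y*2][x*2]   + src[y*2][x*2+1]
                        + src[y*2+1][x*2] + src[y*2+1][x*2+1];
                printf("%.2f ", sum / 4.0);            /* 0.25 steps of grey */
            }
            printf("\n");
        }
        return 0;
    }

A real resampler uses a better kernel than a box filter, but its effect on hard-edged linework is the same in kind.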


I'm a bit agnostic about the specific solution these days. In general, early binding (so: static memory and types, formalized arenas and handles, in-line linear logic with few or no loops or branches) debugs more readily than late binding (dynamic memory allocations and types, raw pointers, virtual functions, runtime configuration). The appeal of late binding is in deferring the final computation so that your options stay open, while the converse is true of early binding - if you can write a program that always returns a precomputed answer, it's easier to grasp and verify.
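
As an illustration only (the "square a value" job here is hypothetical, not something from the comment), the same result computed early-bound versus late-bound in C might look like this:

    #include <stdio.h>
    #include <stdlib.h>

    /* Early binding: the answers are precomputed, the memory is static,
       and the whole thing can be verified by inspection. */
    static const int squares[8] = { 0, 1, 4, 9, 16, 25, 36, 49 };

    /* Late binding: the operation arrives through a function pointer and
       the result lives in heap memory that someone has to manage. */
    typedef int (*op_fn)(int);
    static int square(int x) { return x * x; }

    int main(void) {
        printf("early: %d\n", squares[5]);   /* nothing to configure or free */

        op_fn op = square;                   /* could be swapped at runtime */
        int *result = malloc(sizeof *result);
        if (!result) return 1;
        *result = op(5);
        printf("late:  %d\n", *result);
        free(result);
        return 0;
    }

The late version buys flexibility you may never use, and pays for it in the number of ways it can go wrong.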

When we compare one method of early binding with another, it's probably going to be a comparison of the granularity. Arenas are "stupid simple" - they partition out memory into chunks that limit the blast radius of error, but you still make the error. Ownership logic is a "picky generalization" - it lets you be more intricate and gain some assurances if you put up with the procedures necessary, but it starts to inhibit certain uses because its view into usage is too narrow with too many corner cases.
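
A minimal sketch of the "stupid simple" arena idea in C (names and sizes are just placeholders), showing the two properties above - the chunk bounds the blast radius, and everything allocated from it is retired with one reset:

    #include <stdint.h>
    #include <stdlib.h>
    #include <string.h>

    /* Bump-pointer arena: hand out slices of one chunk, free them all at once. */
    typedef struct {
        uint8_t *base;
        size_t   cap;
        size_t   used;
    } Arena;

    static Arena arena_new(size_t cap) {
        Arena a = { malloc(cap), cap, 0 };
        return a;
    }

    static void *arena_alloc(Arena *a, size_t n) {
        n = (n + 7) & ~(size_t)7;                 /* keep allocations 8-byte aligned */
        if (!a->base || a->used + n > a->cap)
            return NULL;                          /* failure is contained to this chunk */
        void *p = a->base + a->used;
        a->used += n;
        return p;
    }

    static void arena_reset(Arena *a)   { a->used = 0; }  /* "free" everything at once */
    static void arena_destroy(Arena *a) { free(a->base); }

    int main(void) {
        Arena frame = arena_new(64 * 1024);       /* e.g. per-frame scratch space */
        char *msg = arena_alloc(&frame, 32);
        if (msg) strcpy(msg, "scratch data");
        /* ... do the frame's work ... */
        arena_reset(&frame);                      /* one call retires every allocation */
        arena_destroy(&frame);
        return 0;
    }

Ownership systems track each of those slices individually; the arena only asks you to agree that they all die together.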

If we take Go's philosophy as an example of "what do you do if you want idioms for a really scalable codebase" - though you can certainly argue that it didn't work - it's that you usually want to lean on the stupid simple stuff and customize what you need for the rest. You don't opt into a picky Swiss Army Knife unless there's a specific problem to solve with it. Larger codebases have proportionately smaller sections that demand intricacy because more and more of the code is going to itself be a method of defining late binding, of configuring and deferring parts of processing.

That said, Rust lets you fall back to "stupid simple"; it just doesn't pave the way to make that the default.

