spicyjpeg's comments | Hacker News

Bayer dithering was also employed heavily on the original PlayStation. The PS1's GPU was capable of Gouraud shading with 24-bit color precision, but the limited capacity (1 MB) and bandwidth of VRAM made it preferable to use 16-bit framebuffers and textures. To make the resulting color bands less noticeable, Sony added the ability to dither pixels written to the framebuffer on the fly, using a 4x4 Bayer matrix hardcoded in the GPU [1]. On a period-accurate CRT TV fed through a cheap composite video cable, the picture would get blurred enough to hide the dithering artifacts; an emulator or a modern LCD TV, on the other hand, will quickly reveal them, resulting in the distinct grainy look that is often replicated in modern "PS1-style" indie games.
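
For anybody curious about the mechanics, here is a minimal C sketch of 4x4 ordered dithering applied to a single 8-bit channel being truncated to 5 bits. This only illustrates the idea, it is not the GPU's actual circuitry; the offset table below is derived from the standard 4x4 Bayer index matrix, see [1] for the exact values baked into the GPU.

    /* Illustrative sketch of 4x4 ordered dithering, not the GPU's actual
     * implementation. A per-pixel offset in the -4..+3 range, picked from a
     * Bayer-pattern table using the pixel's screen coordinates, is added to
     * each 8-bit channel before it is truncated to 5 bits. */
    #include <stdint.h>

    static const int8_t DITHER_TABLE[4][4] = {
        { -4,  0, -3,  1 },
        {  2, -2,  3, -1 },
        { -3,  1, -4,  0 },
        {  3, -1,  2, -2 }
    };

    uint8_t ditherChannel(int value, int x, int y) {
        value += DITHER_TABLE[y & 3][x & 3];

        /* Saturate, then drop the 3 least significant bits (8-bit -> 5-bit). */
        if (value < 0)   value = 0;
        if (value > 255) value = 255;

        return (uint8_t) (value >> 3);
    }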

Interestingly enough, even though the GPU is completely incapable of "true" 24-bit rendering, Sony decided to ship the PS1 with a 24-bit video DAC and the ability to display 24-bit framebuffers anyway. This ended up being used mainly for title screens and video playback, as the PS1's hardware MJPEG decoder retained support for 24-bit output.

[1]: https://psx-spx.consoledev.net/graphicsprocessingunitgpu/#24...


While great on paper, zero-knowledge-proof-based systems unfortunately have a fatal flaw. Due to the fully anonymous nature of verification tokens, implementations must have safeguards in place to prevent users from intercepting them and passing them on to someone else; in practice, this will likely be accomplished by making both the authenticator and the target service mobile apps that rely on device integrity APIs. This would ultimately result in the same accessibility issues that currently plague the banking industry, where it is no longer possible to own a bank account in most countries without an unmodified, up-to-date phone and an Apple or Google account that did not get banned for redeeming a gift card.

Furthermore, if implementers are going to be required to verify users per-session rather than only once during signup, such a measure would end up killing desktop Linux (if not desktop PCs as a whole) by making it impossible for any non-locked-down platform to access the vast majority of the web.


I'm unsure how applicable these risks are here. The proofs appear to be bound to the app, which in turn is bound to the user's face/fingerprint (required to unlock it).

If we truly want to point out the ridiculousness of Italian tech regulations, the influencers' registry, the temporary ChatGPT ban from a few years back or even the new AI regulations cannot hold a candle to the 22-year-old war on... arcade games.

A poorly written regulation from 2003 basically lumped all gaming machines operated in public settings together with gambling, resulting in extremely onerous source code and server auditing requirements for any arcade cabinet connected to the internet (the law goes as far as specifying that the code shall be delivered on CD-ROMs and compile on specific outdated Windows versions), as well as other certification burdens for new offline games and for conversions of existing machines. Every Italian arcade has remained more or less frozen in time ever since, with the occasional addition of games modded to state on the title screen that they are a completely different cabinet (such as the infamous "Dance Dance Revolution NAOMI Universal") in an attempt to get around the certification requirements.


I guess they were inspired by a very similar law in Greece from 2002 [0], where, in an attempt to outlaw illegal gambling in arcades, a poorly written law outlawed all games (the article mentions it applied to public places, but IIRC the law covered both public and private spaces, and the government pinky promised that they would only act on public places). I remember reading that some internet cafes were raided by the police too :-P.

[0] https://en.wikipedia.org/wiki/Law_3037/2002


An arcade stuck in the early 00s would be my ideal third space though.


Have you seen Arcade Time Capsule? It is a very accurate recreation of a classic arcade, with games you can actually play if you provide the ROMs.

https://www.youtube.com/watch?v=5LOtkGN138Q


Not the OP, but I tried it when it came out. VR headset technology wasn't good enough for screens within screens and it was nauseating more than anything.

There's also an impedance mismatch between the headset controllers and the physical ones in the game. Ideally, I should be able to use my own fightstick in an augmented reality configuration.


The Quest 3 is good enough, and the Galaxy XR is incredibly high resolution. But it isn't really an ideal way to play arcade ROMs long term, more a way to enjoy the nostalgia.


How is the Galaxy XR? I want one but I can't justify it if it doesn't connect to my non-Samsung work laptop.


I got it for $75 a month for two years. Visual clarity is incredible, monitor-replacement level, but comfort is meh, so I bought a Studioform Creative head strap which helped a lot. You can use Virtual Desktop to connect to any computer easily.

I'm a sysadmin so I bought it to see if it would work when I want to ssh into systems I'm physically near in racks. It has worked really well for this.



Custom Flash players were actually relatively common in game development during the mid-to-late 2000s, as Flash provided a ready-to-go authoring solution for UI and 2D animation that artists were already familiar with. Autodesk's Scaleform was probably the most popular implementation, but a number of AAA developers had their own in-house libraries similar to Doom 3's; some of them, such as Konami's "AFP" [1], are still in use to this day (the latest game to use it, Sound Voltex Nabla, was released last month).

[1]: https://github.com/DragonMinded/bemaniutils/blob/trunk/beman...


It is actually much worse than that. Much like banking, the push for digital government services in many countries has ended up more or less requiring every citizen to own an up-to-date, non-jailbroken iOS or Android device. If you block your phone from accessing Apple or Google servers (or if it's 6 years old, is a dumb phone or runs GrapheneOS), the support staff will just tell you to walk to your closest Best Buy equivalent and grab the cheapest Android device you can find; in the name of "security" there often is no fallback option, and when there is one it's SMS 2FA, which is (understandably) rate limited to three uses per year.

If your phone gets stolen, meanwhile, you may find yourself unable to log into the police's portal for reporting it.


It has been enabled mainly by the advent of streamlined tooling to assist with 1:1 byte-by-byte matching decompilations (https://decomp.me/ comes to mind), which allows new projects to get off the ground right away without having to reinvent basic infrastructure for disassembling, recompiling and matching code against the original binary first. The growth of decompilation communities and the introduction of "porting layers" that mimic console SDK APIs but emulate the underlying hardware have also played a role, though porting decompiled code to a modern platform remains very far from trivial.

That said, there is an argument to be made against matching decompilations: while their nature guarantees that they will replicate the exact behavior of the original code, getting them to match often involves fighting the entropy of a 20-to-30-year-old proprietary toolchain, hacks of the "add an empty asm() block exactly here" variety and in some cases fuzzing or even decompiling the compiler itself to better understand how e.g. the linking order is determined. This can be a huge amount of effort that in many cases would be better spent further cleaning up, optimizing and/or documenting the code, particularly if the end goal is to port the game to other platforms.
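
To give an idea of what such hacks look like in practice, here is a made-up example: the empty asm statement does nothing at runtime, but acts as an optimization barrier that keeps the compiler from reordering or merging the surrounding statements, which is sometimes all it takes to reproduce the instruction order of the original binary.

    /* Hypothetical example of a matching hack. The empty asm statement is a
     * no-op at runtime, but the compiler treats it as a barrier, so inserting
     * it in the right spot can nudge register allocation and instruction
     * ordering into lining up with the target binary. */
    void UpdateEntityFlags(int *flags, int value) {
        flags[0] = value;
        __asm__(""); /* here purely to match the original code */
        flags[1] = value + 1;
    }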


The PS1's GPU does not support perspective correction at all; it doesn't even receive homogeneous 3D vertex coordinates, instead operating entirely in 2D screen space and leaving both 3D transformations and Z-sorting to the CPU [1]. While it is possible to perform perspective-correct rendering in software, doing so in practice is extremely slow and the few games that pull it off are only able to do so by optimizing for a special case (see for instance the PS1 version of Doom rendering perspective-correct walls by abusing polygons as "textured lines" [2]).
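
To illustrate, here is a bare-bones sketch of what submitting a flat-shaded triangle looks like from the CPU's side (no DMA, no status checks; register address and packet layout as documented in psx-spx). Note the complete absence of Z or W anywhere.

    /* Minimal sketch of drawing a flat-shaded triangle by writing a GP0
     * packet (command 0x20: monochrome opaque three-point polygon) directly
     * to the GP0 register. A real program would append these words to a DMA
     * packet instead and wait for the GPU to be ready first. The vertices
     * are plain 2D screen-space coordinates: no Z, no W. */
    #include <stdint.h>

    #define GP0 (*(volatile uint32_t *) 0x1f801810)

    void drawFlatTriangle(
        uint32_t bgrColor, int x0, int y0, int x1, int y1, int x2, int y2
    ) {
        GP0 = 0x20000000 | (bgrColor & 0xffffff);               /* command + color */
        GP0 = ((uint32_t) (y0 & 0xffff) << 16) | (x0 & 0xffff); /* vertex 1 (Y|X)  */
        GP0 = ((uint32_t) (y1 & 0xffff) << 16) | (x1 & 0xffff); /* vertex 2        */
        GP0 = ((uint32_t) (y2 & 0xffff) << 16) | (x2 & 0xffff); /* vertex 3        */
    }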

[1]: https://github.com/spicyjpeg/ps1-bare-metal/blob/main/src/08... - bit of a shameless plug, but notice how the Z coordinates are never sent to the GPU in this example.

[2]: https://fabiensanglard.net/doom_psx/index.html


It's funny that the PS1 got so famous for 3d games, when its 'GPU' was entirely 2d.

I guess the main thing the console brought to the table that made 3d (more) feasible was that the CPU had a multiplication instruction?


A little more than just a multiplication instruction (the 68000, used in, say, the Sega Mega Drive, had one of those too). Have a look at https://www.copetti.org/writings/consoles/playstation/, and in particular, read about the GTE - it offered quite a bit of hardware support for 3D math.

Also, even though it didn't handle truly 3D transformations, the rasterizer was built for pumping out texture mapped, Gouraud shaded triangles at an impressive clip for the time. That's not nothing for 3D, compared to an unaccelerated frame buffer or the sprite/tile approach of consoles past.


It's not just a multiplication instruction. The CPU is equipped with a fixed-point coprocessor, the geometry transformation engine [1], dedicated to accelerating the most common computations in 3D games and capable of carrying them out much faster than the CPU alone could. For instance, the GTE can apply a transformation matrix to three vertices and project them in 23 cycles, while the CPU's own multiplier takes up to 13 cycles for a single multiplication and 36 (!) for a division. Combined with a few other "tricks" such as a DMA unit capable of parsing linked lists (which lets the CPU bucket sort polygons on the fly rather than having to emit them back-to-front in the first place), it allowed games to push a decent number of polygons (typically around 1-3k per frame) despite the somewhat subpar performance of the data-cache-less MIPS R3000 derivative Sony chose.
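
To make the linked list trick a bit more concrete, here is a heavily simplified sketch of an ordering table in plain C. The names and structures are made up for illustration and do not match any particular SDK; on real hardware the list nodes are packed header words (length plus 24-bit next pointer) consumed by the DMA engine rather than walked by the CPU.

    /* Simplified sketch of Z bucket sorting with an ordering table. Each
     * bucket is a singly linked list of GPU packets; the buckets are later
     * traversed from the farthest to the nearest, so polygons come out
     * roughly back-to-front without the CPU ever doing a full sort. */
    #include <stddef.h>
    #include <stdint.h>

    #define OT_LENGTH 1024 /* number of depth buckets */

    typedef struct Packet {
        struct Packet *next;
        uint32_t      data[15];
    } Packet;

    static Packet *orderingTable[OT_LENGTH];

    /* Prepend a packet to the bucket matching its (quantized, averaged) Z. */
    void otInsert(Packet *packet, int z) {
        if (z < 0)          z = 0;
        if (z >= OT_LENGTH) z = OT_LENGTH - 1;

        packet->next     = orderingTable[z];
        orderingTable[z] = packet;
    }

    /* Stand-in for kicking off the GPU DMA transfer: walk buckets far-to-near. */
    void otIssue(void (*sendPacket)(const Packet *)) {
        for (int z = OT_LENGTH - 1; z >= 0; z--) {
            for (const Packet *p = orderingTable[z]; p; p = p->next)
                sendPacket(p);

            orderingTable[z] = NULL;
        }
    }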

If you have some basic familiarity with C, you can see both the GTE and the Z bucket sorting of GPU commands in action in the cube example I linked in the parent comment.

[1]: https://psx-spx.consoledev.net/geometrytransformationengineg...


Darn I posted the same thing in another thread


If anybody here wants to learn more about console graphics specifically, I think the original PlayStation is a good starting point since it's basically the earliest and simplest 3D-capable (though it would be more correct to say triangle capable, as it does not take Z coordinates at all!) GPU that still bears a vague resemblance to modern shader-based graphics pipelines. A few years ago I wrote a handful of bare metal C examples demonstrating its usage at the register level [1]; if it weren't for my lack of spare time over the last year I would have added more examples covering other parts of the console's hardware as well.

[1]: https://github.com/spicyjpeg/ps1-bare-metal


Thanks for sharing! Pikuma also has a PS1 graphic programming course that I plan to take in the future.


I'm late to the party but, as a prolific contributor to PSn00bSDK and the PS1 homebrew scene in general, I feel obliged to shamelessly plug my own "PlayStation 1 demystified at the absolute lowest level" repo:

https://github.com/spicyjpeg/ps1-bare-metal

It's still very much a work in progress - I have only covered a tiny fraction of what the console's hardware can do - but I find it fascinating to explore how little code you actually need to get started on such a simple platform, even with no external SDKs or tools (aside from a completely standard MIPS GCC toolchain).


Flat triangles and trapezoids are sometimes used internally by these GPUs as building blocks for other polygons, possibly because the logic to split triangles and quads into flat trapezoids took less die space than a rasterizer capable of handling three edge equations at a time rather than just two.

While exposing these lower level internal primitives typically did not make sense for general purpose graphics accelerators, some in-house embedded implementations did actually go further and only supported flat trapezoids, relying on the CPU to preprocess more complex shapes. For instance, the "SOLO" ASIC used in second-generation WebTV boxes took this approach [1] (among other interesting cost-cutting measures such as operating natively in YUV color space).
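
For illustration, here is a minimal sketch of the kind of CPU-side preprocessing this implies: a triangle (vertices sorted by Y) is split at its middle vertex into a flat-bottom and a flat-top half, each of which only needs two edge slopes to rasterize. The Trapezoid structure is hypothetical and not the command format of any actual chip.

    /* Illustrative triangle-to-trapezoid split for a trapezoid-only
     * rasterizer. Which of the two edges ends up on the left depends on the
     * triangle's winding, so a real implementation would also sort them. */
    typedef struct {
        float topY, bottomY;  /* vertical extent of the trapezoid    */
        float xA, xB;         /* X of the two non-flat edges at topY */
        float slopeA, slopeB; /* dX/dY of those edges                */
    } Trapezoid;

    /* Assumes y0 <= y1 <= y2; returns the number of trapezoids emitted (0-2). */
    int splitTriangle(
        float x0, float y0, float x1, float y1, float x2, float y2,
        Trapezoid out[2]
    ) {
        if (y2 <= y0)
            return 0; /* degenerate (zero-height) triangle */

        /* X of the long edge (v0-v2) at the middle vertex's height. */
        float splitX = x0 + (x2 - x0) * ((y1 - y0) / (y2 - y0));
        int   count  = 0;

        if (y1 > y0) /* flat-bottom half: apex at v0, base between v1 and split */
            out[count++] = (Trapezoid) {
                .topY   = y0,                      .bottomY = y1,
                .xA     = x0,                      .xB      = x0,
                .slopeA = (x1 - x0) / (y1 - y0),   .slopeB  = (splitX - x0) / (y1 - y0)
            };

        if (y2 > y1) /* flat-top half: top between v1 and split, apex at v2 */
            out[count++] = (Trapezoid) {
                .topY   = y1,                      .bottomY = y2,
                .xA     = x1,                      .xB      = splitX,
                .slopeA = (x2 - x1) / (y2 - y1),   .slopeB  = (x2 - splitX) / (y2 - y1)
            };

        return count;
    }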

[1]: http://wiki.webtv.zone/misc/SOLO1/SOLO1_ASIC_Spec.pdf (warning: 200 MB scan)

