
This is a problem for all capturing closures though, not just Rust's. A pure fn-ptr arg can't carry state, and if there's no user-data arg there's no way to make a trampoline. If C++ were calling a C API with the same constraint it would have the same problem.

Well, capturing closures implemented like C++ lambdas or Rust closures, anyway. The executable-stack crimes (runtime-generated trampolines) do get you a thin fn-ptr with state.
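
Roughly, the trick with a user-data pointer looks like this in Rust (a minimal sketch; all the names here are invented for illustration, not from any particular API):

    use std::ffi::c_void;

    // Shape of a C registration function that *does* take a user-data pointer.
    type RawCallback = unsafe extern "C" fn(event: i32, user_data: *mut c_void);
    type RegisterFn = unsafe extern "C" fn(cb: RawCallback, user_data: *mut c_void);

    // Trampoline: a plain fn-ptr the C side can call, which recovers the Rust
    // closure from the user-data pointer and forwards the call to it.
    unsafe extern "C" fn trampoline<F: FnMut(i32)>(event: i32, user_data: *mut c_void) {
        let closure = unsafe { &mut *(user_data as *mut F) };
        closure(event);
    }

    // The caller must keep `closure` alive for as long as callbacks can fire.
    unsafe fn register_with_state<F: FnMut(i32)>(register: RegisterFn, closure: &mut F) {
        unsafe { register(trampoline::<F>, closure as *mut F as *mut c_void) };
    }

With a bare `extern "C" fn(i32)` and no user-data argument there is nowhere to smuggle the captured environment through, which is the whole problem.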


You're still going to run into problems with Mercator, because under Mercator the poles project to infinity, so you'd need an infinitely large texture or to special-case the poles. Many renderers do the latter, so it is viable!

There isn't a zero-tradeoff 2D solution; it's all just variations on the "squaring the circle" problem. An octahedral projection would be a lot better, as there are no singularities and no infinities, but you still have non-linear distortion. Real-time rendering with such a height map would still be a challenge: an octahedral projection relies on texture sampler wrapping modes, and for any real-world dataset you can't make a hardware texture big enough (even virtual) to sample from, so you'd have to do software texture sampling.
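
For reference, the mapping itself is simple; it's the sampling across the fold seams that hurts. A minimal sketch of the standard octahedral encode, nothing renderer-specific:

    // Unit direction -> UV in [0, 1]^2. No poles, no infinities, but non-linear
    // distortion and fold seams that hardware wrap modes normally paper over.
    fn octahedral_encode(dir: [f32; 3]) -> [f32; 2] {
        let [x, y, z] = dir;
        let inv_l1 = 1.0 / (x.abs() + y.abs() + z.abs());
        let (mut u, mut v) = (x * inv_l1, y * inv_l1);
        if z < 0.0 {
            // Fold the lower hemisphere outward across the diagonals.
            let (ou, ov) = (u, v);
            u = (1.0 - ov.abs()) * ou.signum();
            v = (1.0 - ou.abs()) * ov.signum();
        }
        [u * 0.5 + 0.5, v * 0.5 + 0.5] // remap [-1, 1] to [0, 1]
    }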


Seems like a cool project.

I don't understand why they're calling out the FPS of an empty scene as a useful number compared to Unity though. Ignoring that this engine will have a fraction of the features of Unity (the likely reason for the FPS number in the first place), it's just a useless benchmark because it's an empty scene. `while (true) {}` will get you the same thing.

I wish they'd highlight how the engine helps you make a game rather than arbitrary performance numbers on microbenchmarks that don't generalize to a real game project. You can absolutely be faster than Unity, but "9 times faster than Unity out of the box" is not a number people should take seriously without the context of where it comes from.

I wish them well though. I'm always interested to see more work in implementing engines in GC languages. I'm personally quite interested to see what can be done in modern GC languages like Go or (modern) C# which provide lots of tools to limit GC pressure. Being able to write C-like code where needed in an otherwise managed runtime is a really powerful tool that can provide a best-of-both-worlds environment.


Agreed. Unity has a ton of features that even Godot lacks. (Unreal also has a ton of features they all lack).

I know a lot of different languages and frameworks, from C/C++ up, so I say this: the language is never the issue. Language is just syntax. Ease of use is everything.

I've been wanting to make a game for a long time. I've toyed with OpenGL/DirectX at multiple points since the 90s, even going so far as to create a really cool tech demo/engine that scales with CPU core count. However, those days are in the past. I really want to avoid writing a ton of graphics and sound code and instead focus on the game logic and the game itself.

The above is one of the reasons I'm finding it hard to get into Godot, even though I *really* like the project (and wish I could fund it; alas, I'm unemployed, permanently). Unity just happens to be robust enough that I might be able to scrape together a prototype. It has built-in things like a terrain editor, tons of educational material, an asset store to get some placeholder assets, etc.

I saw a comment mentioning how Warcraft 2 was so awesome because it had a great editor. Starcraft had an amazing editor too, and so did Neverwinter Nights. We need something like that with a good license to build games. Every engine that even somewhat approaches that area blows up in complexity, has a super restrictive license, or doesn't allow you to build your own executables.

RPGMaker is actually pretty decent for 2D games, but the fixed resolution, the constant dramatic shifts between versions, previous licensing issues (I haven't looked at their current license), and more make it a no-go for a serious commercial game... and it doesn't do 3D.

Sorry for the rant. Don't even get me started on how much more complicated the transitions from DX8-DX12 or OpenGL 1.x/2.x -> Anything else have been. ;)


There's a cruel truth to electing to use any dependency for a game, in that all of it may or may not be a placeholder for the final design. If the code that's there aligns with the design you have, maybe you speed along to shipping something, but all the large productions end up doing things custom somewhere, somehow, whether that's in the binary or through scripts.

But none of the code is necessary to do game design either, because that just reflects the symbolic complexity you're trying to put into it. You can run a simpler scene, with less graphics, and it will still be a game. That's why we have "10-line BASIC" game jams and they produce interesting and playable results. The aspect of making it commercial quality is more tied to getting the necessary kinds and quantities of feedback to find the audience and meet them at their expectations, and sometimes that means using a big-boy engine to get a pile of oft-requested features, but I've also seen it be completely unimportant. It just depends a lot on what you're making.


> I know a lot of different languages and frameworks, from C/C++ up, so I say this: the language is never the issue. Language is just syntax. Ease of use is everything.

TBH the weird C# version Unity uses has been an issue multiple times =)


"I'm authoring a new Golang based firmware for a retrocomputer I literally pulled out of the trash that nobody will use"

HN: "Ah you're sweet"

"I'm authoring a new Golang based game engine that doesn't have the featureset of things with $100 million to $3 billion of product development"

HN: "Hello, human resources?"


This would be a much better comment if it weren't for the snark and fake quotes.

> "Unity has a ton of features that even Godot lacks. (Unreal also has a ton of features they all lack)."

Hard to interpret w/out more detailed comparison but stack-ranking their featurefulness, is it Unreal, Unity, Godot?


Unity can't really be said to have less or more features than Unreal IMO. Each has features lacked by the other, and neither lacks anything really major. But if I had to pick one for being the most featureful, I'd pick Unreal. Unreal has a built-in visual programming language* and some very advanced rendering tech you might have heard about. Unity has tons of features for 2D games lacking in Unreal and supports WebGL as a build target.

(Though imo unity is the better engine. Unreal has so many bugs and so much jank that to make a real game with it you basically need a large enough team that you can have a dedicated unreal-bug-fixer employee.)

*Technically Unity has a visual scripting language too but IIUC it's tacked on and I've never heard of anyone actually using it.


Speaking of bugs, I remember the last time I thought maybe I'd try making a simple game in Unity I gave up when I couldn't stop clipping through walls.

The collision was clearly working, just some n% of the time you'd end up on the wrong side of the clean flat rectangle you were walking into.

I lost interest pretty soon after that.


That's not a bug, just an inherent problem with using very thin colliders with a discrete collision detection system. It's not a Unity problem, and Unity allows you to configure continuous collision detection to prevent tunneling: https://docs.unity3d.com/6000.3/Documentation/Manual/Continu...

Game development is full of domain knowledge like this, because games need to make compromises to keep simulation updates completing at no lower than 30 Hz.
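
A toy 1D version of what's happening, just to make the tradeoff concrete (numbers invented):

    // Discrete detection only tests the end-of-step position, so a fast object can
    // jump clean over a thin wall; a swept (continuous) test checks the whole path.
    fn discrete_hit(pos: f32, vel: f32, dt: f32, wall: f32, thickness: f32) -> bool {
        let next = pos + vel * dt;
        next >= wall && next <= wall + thickness
    }

    fn swept_hit(pos: f32, vel: f32, dt: f32, wall: f32, thickness: f32) -> bool {
        let next = pos + vel * dt;
        pos.min(next) <= wall + thickness && pos.max(next) >= wall
    }

    fn main() {
        // 1000 units/s at 60 Hz moves ~16.7 units per step, far more than a 0.1-unit wall.
        let (pos, vel, dt, wall, t) = (0.0f32, 1000.0, 1.0 / 60.0, 5.0, 0.1);
        assert!(!discrete_hit(pos, vel, dt, wall, t)); // tunnels straight through
        assert!(swept_hit(pos, vel, dt, wall, t));     // the swept test catches it
    }

Continuous detection fixes it at the cost of a more expensive sweep per moving body, which is exactly the kind of compromise being talked about.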


That's fair. I just remember being very frustrated that what felt like a basic feature I was implementing per the tutorial broke immediately, in such a fundamental way, in such a simple situation, and that I couldn't find any sort of explanation or solution at the time. Possibly it was fully my fault and the info was readily available!

I think it's really just the trappings of game development being full of tribal knowledge.

The tutorial probably should have instructed you to create box colliders for walls (giving the physics engine a greater area to catch collisions) rather than a simple plane/quad, which would lead to exactly the issues you had, or at least explained why a box works better than a plane.

I guess you have to balance the necessary information against overload in a tutorial, or at least include an aside or additional reading that helps explain this kind of internalized knowledge.


I'd say Unreal > Unity > Godot for feature space.

For performance/practicality, it's Unity > Godot > Unreal. Building something in Unreal that simply runs with ultra low frame latency is possible, but the way the ecosystem is structured you will be permanently burdened by things like temporal anti-aliasing way before you find your bearings. Unreal and Unity are at odds on MSAA support (Unity has it, Unreal doesn't). MSAA can be king if you have an art team with game dev knowledge from 10+ years ago. Unreal is basically TAA or bust.


I see it as overhead for the base engine. Though yes, even Unity has some post-processing built in that would affect performance.

But I will always correct the cardinal sin of "using FPS to measure performance". Especially for an empty scene, this is pretty much snake oil. 200 fps is 5 milliseconds; 1800 fps is a little over half a millisecond. Getting back roughly 4.4 milliseconds doesn't mean much if any real work will add it right back.
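
Spelling out that arithmetic (illustrative only):

    fn frame_time_ms(fps: f64) -> f64 {
        1000.0 / fps
    }

    fn main() {
        println!("{:.2} ms", frame_time_ms(200.0));  // 5.00 ms
        println!("{:.2} ms", frame_time_ms(1800.0)); // 0.56 ms
        // The "9x" headline buys back ~4.4 ms, which any real workload eats immediately.
    }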


Still not "9 times faster", and still seems disingenuous, but here is one comparison that is at least given with some more context: https://x.com/ShieldCrush/status/1943516032674537958


They're not using a library like SDL for windowing and input as far as I can tell. All the MacOS interfaces are in Objective-C or Swift, which I would wager (I've never used Go fwiw) aren't as easy to bind to from Go code.

MoltenVK has some extra interfaces you need to integrate with too, it's not a completely hands off setup.


Are any relevant GPUs VLIW anymore? As far as I'm aware they all dropped it too, moving to scalar ISAs on SIMT hardware. The last VLIW GPU I remember was AMD TeraScale, replaced by GCN where one of the most important architecture changes was dropping VLIW.

Sadness. Tons of functions from the standard library are special-cased by the compiler. The compiler can elide malloc calls if it can prove it doesn't need them, even though strictly speaking malloc has side effects by changing the heap state. Just not useful side effects.

memcpy will get transformed and inlined for small copies all the time.
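
A Rust-flavored analogue of the same idea (whether the allocation actually disappears depends on the compiler and optimization level, so treat this as illustrative):

    // The heap allocation here can legally be removed: the pointer never escapes and
    // the result doesn't depend on heap state, so the only side effect is an uninteresting one.
    pub fn sum_fixed() -> i32 {
        let boxed = Box::new([1, 2, 3, 4]);
        boxed.iter().sum() // an optimizer can fold this to a constant with no allocation at all
    }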


How does it know what isn't visible? Can it handle glass? Frosted glass? Smoke? What if I can't see the player but I can see their shadow? What if I can't see them because they're behind me but I can hear their footsteps? What if I have 50ms ping and the player is invisible after turning a corner because the server hasn't realized I can see them yet?

To answer all those questions you either have to render the entire game on the server for every player (not possible) or make the checks conservative enough that cheaters still get a significant advantage.
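
To make "conservative" concrete, here's a hedged sketch of the kind of server-side check involved. The `line_of_sight` query and all parameters are assumptions, not any real engine's API, and it deliberately ignores shadows, audio, and translucency, which are exactly the hard cases listed above:

    // Decide whether `target` must be replicated to `viewer`. Erring on the side of
    // sending is the only safe option, which is why cheaters still gain something.
    fn should_replicate(
        viewer_eye: [f32; 3],
        target_pos: [f32; 3],
        viewer_ping_ms: f32,
        target_max_speed: f32,
        line_of_sight: impl Fn([f32; 3], [f32; 3]) -> bool, // assumed occlusion query
    ) -> bool {
        // How far the target could move during the viewer's round trip.
        let slack = target_max_speed * (viewer_ping_ms / 1000.0);
        // Probe points around the target expanded by that slack; if any is visible,
        // send the data anyway.
        let offsets = [
            [0.0, 0.0, 0.0],
            [slack, 0.0, 0.0], [-slack, 0.0, 0.0],
            [0.0, slack, 0.0], [0.0, -slack, 0.0],
            [0.0, 0.0, slack], [0.0, 0.0, -slack],
        ];
        offsets.iter().any(|o| {
            let p = [target_pos[0] + o[0], target_pos[1] + o[1], target_pos[2] + o[2]];
            line_of_sight(viewer_eye, p)
        })
    }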


GeforceNow begs to differ.

I know, not the same, but IMHO the future of anticheat. We just need faster fiber networks.


Yeah. Stadia worked well in ideal conditions, so for people lucky enough to live that life, the technology's there.


I never understood why Google gave up so early on cloud gaming. Clearly it is the future; the infrastructure will need to develop, but your userbase can grow by the day.

I live somewhat remotely on an island group, and even though I have 500 Mbit fiber, my latency to the nearest GeForce NOW datacenter is 60-70 ms (which is my latency to most continental datacenters, so not Nvidia's fault). That makes it unplayable for e.g. Battlefield 6 (I tried, believe me), but I have been playing Fortnite (which is less aim-sensitive) for 100+ hours with it.


It has been. It's been server-side for decades. It's common industry knowledge that the client can't have authority. But server-side anti cheat can't stop aimbots or wall hacks. Client side anti cheat isn't about stopping you from issuing "teleport me to here plz server" commands, it's about stopping people from reading and writing the game's memory/address space.

If you wanted to teleport (and the server was poorly implemented enough to let you) you could just intercept your network packets and add a "teleport plz" message. Real cheats in the wild used to work this way. However a wallhack will need to read the game's memory to know where players are.

What modern anti cheat software does is make it difficult for casual cheats to read/write the game's memory, and force more sophisticated cheats down detectable exploit paths. It's impossible to prevent someone from reading the memory on untrusted hardware, but you can make it difficult and detectable so you can minimize the number of cheaters and maximize the number you detect and ban.

Linux is incompatible with client anti-cheat because there is no security boundary that can't be sidestepped with a custom compiled kernel. Windows is Windows, with known APIs and ways to read process memory that can be monitored. Secure boot means only Microsoft's own built kernels can boot and you now have a meaningful security boundary. Monitor what kernel drivers are loaded and you can make it harder for cheaters to find ways in. Sure you can run in a VM, but you can also detect when it happens.

Sure we can just run with no client side anticheat at all (functionally what Linux always is unless you only run approved, signed kernels and distros with secure boot) but wallhacks and aimbots become trivial to implement. These can only really be detected server side with statistical analysis. I hope you don't ban too many innocent people trying to find all the cheaters that way.
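
For a flavor of what that statistical analysis looks like, here is a toy sketch with entirely made-up thresholds; the false-positive problem is the whole point:

    // Flag players whose headshot ratio sits implausibly far above the population.
    fn z_score(rate: f64, population_mean: f64, population_std: f64) -> f64 {
        (rate - population_mean) / population_std
    }

    fn is_suspicious(headshots: u32, kills: u32, mean: f64, std: f64) -> bool {
        if kills < 50 {
            return false; // not enough data to judge anyone fairly
        }
        let rate = headshots as f64 / kills as f64;
        // Very conservative cutoff: misses subtle cheaters to avoid banning good players.
        z_score(rate, mean, std) > 4.0
    }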


> Linux is incompatible with client anti-cheat because there is no security boundary that can't be sidestepped with a custom compiled kernel. Windows is Windows, with known APIs and ways to read process memory that can be monitored. Secure boot means only Microsoft's own built kernels can boot and you now have a meaningful security boundary. Monitor what kernel drivers are loaded and you can make it harder for cheaters to find ways in. Sure you can run in a VM, but you can also detect when it happens.

OK I'll just compile a custom ReactOS build that lets me sidestep that boundary.


Depends on the program, and it can be a very useful tool.

Rust has Mutex::get_mut(&mut self) which allows getting the inner &mut T without locking. Having a &mut Mutex<T> implies you can get &mut T without locks. Being able to treat Mutex<T> like any other value means you can use the whole suite of Rust's ownership tools to pass the value through your program.

Perhaps you temporarily move the Mutex into a shared data structure so it can be used on multiple threads, then take it back out later in a serial part of your program to get mutable access without locks. It's a lot easier to move Mutex<T> around than &mut Mutex<T> if you're going to then share it and un-share it.

Also, it's impossible to construct a Mutex without moving at least once, as Rust doesn't guarantee return value optimization. All moves in Rust are treated as memcpys that 'destroy' the old value. There's no way to even write 'let v = Mutex::new(value)' without a move, so it's also a hard functional requirement.
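
A small self-contained example of that pattern (own it, share it, take it back; no locks in the serial phases):

    use std::sync::{Arc, Mutex};
    use std::thread;

    fn main() {
        // Serial phase: we own the Mutex exclusively, so no locking is needed.
        let mut counter = Mutex::new(0u32);
        *counter.get_mut().unwrap() += 1;

        // Shared phase: move the Mutex (by value) behind an Arc and hand it out.
        let shared = Arc::new(counter);
        let handles: Vec<_> = (0..4)
            .map(|_| {
                let shared = Arc::clone(&shared);
                thread::spawn(move || *shared.lock().unwrap() += 1)
            })
            .collect();
        for h in handles {
            h.join().unwrap();
        }

        // Back to serial: un-share it and get the value with no lock at all.
        let mut counter = Arc::try_unwrap(shared).unwrap();
        *counter.get_mut().unwrap() += 1;
        assert_eq!(counter.into_inner().unwrap(), 6);
    }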


Taking games designed for desktop GPUs and running them on mobile GPUs with tile-based-deferred-rendering hardware will be a disaster. Mobile GPU designs will choke on modern games as they're designed around hardware features that mobile GPUs either don't have, or that run very slowly.

Peak theoretical throughput for the GPUs you find in ARM SoCs is quite good compared to the power draw, but you will not get peak throughput for workloads designed for Nvidia and AMD GPUs.


Isn't the GPU on Apple Silicon machines a tile-based "mobile" GPU design? Many of the hardware features that traditional GPUs have and mobile GPUs lack can be easily "faked" with GPU-side general compute.


But even the most powerful apple silicon GPU is terrible compared to an average Nvidia chip


While I agree with the general point, this statement is factually incorrect - apple's most powerful laptop GPU punches right about the same as the laptop SKU of the RTX 4070, and the desktop Ultra variant punches up with a 5070ti. I'd say on both fronts that is well above the average.


There is no world where Apple silicon is competing with a 5070ti on modern workloads. Not the hardware, and certainly not the software, where Nvidia's DLSS is in a league of its own, with AMD having only just shipped AI upscaling and started approximating ray reconstruction.


Certainly, nobody would buy an Apple hoping to run triple-A PC games.

But among people running LLMs outside of the data centre, Apple's unified memory together with a good-enough GPU has attracted quite a bit of attention. If you've got the cash, you can get a Mac Studio with 512GB of unified memory. So there's one workload where apple silicon gives nvidia a run for their money.


Only in the size of model it can run, not speed of token generation.


Apple's MetalFX upscaler is pretty similar to DLSS (and I think well ahead of AMD's efforts on this front).

Ray tracing outside of Nvidia is a disaster all round, so yeah, nobody is competing on that front.


Ray tracing support on the newest AMD chips is getting good enough. They are still behind Nvidia but definitely not a disaster anymore


That simply isn't true. I have an RTX 4070 gaming PC and an M4 MacBook Pro w/ 36GB shared memory. When models fit in VRAM, the RTX 4070 still runs much faster. Maybe the next generation M5 chips are faster but they can't be 2-4x faster.


GP said laptop 4070. The laptop variants are typically much slower than the desktop variants of the same name.

It's not just power budget, the desktop part has more of everything, and in this case the 4070 mobile vs desktop turns out to be a 30-40% difference[1] in games.

Now I don't have a Mac, so if you meant "2-4x" when you said "much faster", well then yeah, that 40% difference isn't enough to overcome that.

[1]: https://nanoreview.net/en/gpu-compare/geforce-rtx-4070-mobil...


Are there real world game benchmarks for this or are these synthetic tests?


Only a few, because it's not easy to find contemporary AAA games with native macOS ports. Notebookcheck has some comparisons for Assassins Creed: Shadows and Cyberpunk 2077[1]

[1]: https://www.notebookcheck.net/Cyberpunk-AC-Shadows-on-Apple-...


And it will consume almost as much power as the Nvidia GPU to do so.


A $4.5k M4 Max barely competes, in FPS in Cyberpunk at the same settings, with an entry-level ~$1k laptop with a 4060. For AI it's even worse: on Nvidia hardware you get double-digit FPS for real-time inference of e.g. Stable Diffusion, whereas on the M2 Max I have you get at best 0.5 FPS.


Snapdragon doesn't do tile based deferred rendering the way Apple does (or did). Snapdragon does (or did) a form of tile-based rendering, but it is a completely different design, with completely different performance tradeoffs.


What about the Switch 2 (Nvidia's Tegra) line? The one in the Switch 2 is using the Ampere architecture.

That should be feasible, no?


Linux support is in a terrible state for Nvidia chips. Not going to happen.


You don't have to use tile-based rendering on these chips anymore. They can directly draw to the entire screen.


You can, but the immediate-mode path is slower and uses significantly more power. Mobile GPUs are not good at modern desktop game workloads where significant portions of the frame are compute shaders. They're generally very memory-bandwidth starved, and general compute sidesteps most of the optimizations the hardware has made to work around this.

