
In a world where Crabs are trying to rewrite everything in their favourite Crab Speak, it's nice to see the reverse. I wonder if it could be used to translate the Rust compiler itself to C :-D

https://github.com/Rust-GCC/gccrs/tree/master/gcc/rust

There's a Rust compiler in C++ in case that's any good to you


You are trying to make emacs?

Sad to see these same people were behind GlusterFS.

Well, maybe they are using that experience to build something better this time around? One can hope...

Sure, but trying to close-source what has been open source for a decade, or trying to reduce features, is very strange. I thought those people had higher standards.

This is not a world model; it is at best a reimplementation of the NVIDIA prior art around NeRF / 3D Gaussian Splatting and monocular depth, wrapped in a nice product and workflow. What they're actually shipping is an offline asset generator: you feed it text, images, or video, it runs depth/structure estimation and neural 3D reconstruction, and you get a static splat/mesh world you can then render or simulate in a real engine. That's useful and impressive engineering, but it's very different from a proper "world model" in the RL/embodied-AI sense. Here there are no online dynamics, no agent loop, and no interactive rollouts; it's closer to a high-end NeRF/GS pipeline plus tooling than to something like Google's Genie 2/3, which actually couples generative rendering with action-conditioned temporal evolution. Calling this a "world model" feels more like marketing language than a meaningful technical distinction.

In fact, my definition of a world model is closer to what Demis has hinted at in his discussions: that video-gen models like Veo can intuit physics from video training data alone suggests there is an underlying manifold in reality that is essentially computable and is thus being simulated by these models. Building such a model would essentially mean building a physics engine of some kind that predicts this manifold.


Exactly. It sure is around the corner, because they are talking about AGI (Actually Getting Investments).


People are specialists, not generalists; creating an AI that is a generalist and claiming it has the cognitive abilities of a "well-educated" adult is an oxymoron. And if such a system could ever be made, my guess is it won't be more than a few-billion-parameter model (under 5B) that is very good at looking up stuff online, forgetting stuff when not in use, and planning and creating or expanding the knowledge in its nodes, much like a human adult would. It will be highly sample-efficient. It won't know 30 languages (although it has been seen that models generalize better with more languages), it won't know the whole of Wikipedia by heart, and it won't even remember minor details of programming languages and such. Now that is my definition of an AGI.


Yup this is the way.


LLMs as translators of COBOL code to Java or Go should be attempted. And shut down the IBM mainframe rent-seeking business for good.


The soon-to-be-released GCC 15 will contain a COBOL frontend. Also, other non-mainframe compilers have existed for a long time, both proprietary and FOSS.

Thus, availability of a compiler is but a small piece of the puzzle. The real problem is the spider web of dependencies on the mainframe environment, as the enterprise's business processes have become intertwined with the mainframe system over decades.


Which is why I think cross-compiling to other dependencies and porting to other languages is a better solution. Many of these dependencies could be hardware-specific. As long as the core business solutions could be ported, it would be a win for everyone stuck in decades of vendor lock-in.


I think the point was you could do that in COBOL; the vendor lock-in won't go away just because you change language. It goes away when you decide to refactor the code to vendor-agnostic solutions.


You're absolutely right that switching languages alone doesn't solve the problem. The real issue isn't COBOL itself but the deep entanglement of business logic with the mainframe ecosystem: things like CICS, IMS, and even the way data is stored and processed. But I still think there's a path forward, and I'll share a thought experiment based on my experience working alongside colleagues who've spent years maintaining these systems.

I've seen firsthand how much frustration COBOL can cause. Many of my colleagues didn't enjoy writing it; they stuck with it because it paid well, not because they loved the work. The language itself isn't the hard part; it's the decades of accumulated technical debt and the sheer complexity of the environment. Over time, these systems become so intertwined with business processes that untangling them feels impossible. But what if we approached it incrementally?

Imagine taking an existing COBOL codebase, say for a large insurance system, and identifying the core business logic buried within it. These are the rules and conditions that power critical operations, like calculating premiums or processing claims. Now, instead of trying to rewrite everything at once, you build a parallel backend in a modern language like Java or Go. You don't aim for a literal translation of the COBOL code; you focus on replicating the functionality in a way that makes sense in a modern context. For example, replace hardcoded file operations with database calls, or screen-based interactions with REST APIs, as in the sketch below.
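To make that concrete, here is a minimal sketch in Go of what one such replicated rule behind a REST endpoint might look like. Everything in it is a hypothetical stand-in (the /quote route, the field names, the premium formula); the real rule would be extracted from the legacy COBOL, not invented:

    package main

    import (
        "encoding/json"
        "net/http"
    )

    // QuoteRequest mirrors the inputs the old COBOL screen would have
    // collected. The field names are hypothetical placeholders.
    type QuoteRequest struct {
        Age        int     `json:"age"`
        VehicleVal float64 `json:"vehicle_value"`
    }

    type QuoteResponse struct {
        Premium float64 `json:"premium"`
    }

    // calcPremium stands in for the business rule buried in a COBOL
    // paragraph; the actual numbers would come from the legacy code.
    func calcPremium(q QuoteRequest) float64 {
        base := q.VehicleVal * 0.02
        if q.Age < 25 {
            base *= 1.5 // hypothetical young-driver surcharge
        }
        return base
    }

    func main() {
        http.HandleFunc("/quote", func(w http.ResponseWriter, r *http.Request) {
            var q QuoteRequest
            if err := json.NewDecoder(r.Body).Decode(&q); err != nil {
                http.Error(w, err.Error(), http.StatusBadRequest)
                return
            }
            json.NewEncoder(w).Encode(QuoteResponse{Premium: calcPremium(q)})
        })
        http.ListenAndServe(":8080", nil)
    }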

Most mainframe customers already use middleware like MuleSoft or IBM z/OS Connect, which can route requests to both systems simultaneously. For every write operation, you update both the mainframe's DB2 database and a modern relational database like Postgres. For every read operation, you compare the results from both systems. If there's a discrepancy, you flag it for investigation. Over time, as you handle more and more business scenarios, you'd start covering all the edge cases. This dual-system approach lets you validate the new backend without risking critical operations.
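As a rough illustration, the routing layer's job reduces to something like the following Go sketch. The Store interface is hypothetical (one implementation would wrap DB2, the other Postgres), and in practice the middleware handles this rather than hand-written code, but the shape of the logic is the same:

    package main

    import "log"

    // Store abstracts a backend. One implementation would wrap the
    // mainframe's DB2, the other the new Postgres schema; both are
    // hypothetical placeholders here.
    type Store interface {
        Write(key, value string) error
        Read(key string) (string, error)
    }

    // DualStore fans every write out to both systems and compares
    // reads, treating the legacy system as the source of truth.
    type DualStore struct {
        Legacy, Modern Store
    }

    func (d *DualStore) Write(key, value string) error {
        if err := d.Legacy.Write(key, value); err != nil {
            return err // legacy is still the system of record
        }
        if err := d.Modern.Write(key, value); err != nil {
            log.Printf("modern write failed for %q: %v", key, err) // flag, don't fail
        }
        return nil
    }

    func (d *DualStore) Read(key string) (string, error) {
        legacy, err := d.Legacy.Read(key)
        if err != nil {
            return "", err
        }
        if modern, merr := d.Modern.Read(key); merr != nil || modern != legacy {
            log.Printf("discrepancy on %q: legacy=%q modern=%q err=%v", key, legacy, modern, merr)
        }
        return legacy, nil // always serve the legacy answer during migration
    }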

Of course, this process isn't without its struggles. Testing is a huge challenge because mainframe systems often rely on implicit behaviors that aren't documented anywhere. My colleagues used to joke that the only way to understand some parts of the system was to run it and see what happened. That's why rigorous testing and monitoring are essential: you need to catch discrepancies early, before they cause problems. There's also the cultural side of things. People get attached to their mainframes, especially when they've been running reliably for decades. Convincing stakeholders to invest in a multi-year migration effort requires strong leadership and a clear case for ROI.

But I think the effort is worth it. Moving off the mainframe isn't just about saving money, though that's a big part of it. It's about future-proofing your organization. Mainframes are great at what they do, but they're also a bottleneck when it comes to innovation. Want to integrate with a third-party service? Good luck. Need to hire new developers? Most of them have never touched COBOL. By transitioning to a modern platform, you open up opportunities to innovate faster, integrate with other systems more easily, and attract talent who can actually work on your codebase.

In the end, this isn't a quick fix; it's a long-term strategy. But I believe it's achievable if you take it step by step. Start small, validate constantly, and gradually build up to a full replacement. What do others think? Are there better ways to tackle this problem, or am I missing something obvious?


I don't think you're missing anything fundamental, but I've worked with systems written in Fortran, C, C++ and Python that have the same problems. I suspect the systems I'm working on in Python & Rust will have the same issues if they last 10+ years.


No, not for the foreseeable future. In fact, this is the absolute hardest possible code translation task you can give an LLM.

COBOL varies greatly; the dialect depends on the mainframe. Chatbots will get quite confused about this. AI training data doesn't have much true COBOL; the internet is polluted with GnuCOBOL, which is a mishmash of a bunch of different dialects, minus all the things that make a mainframe a mainframe. So it will assume the COBOL code is more modern than it is. In terms of generating COBOL (e.g. for adding some debugging code to an existing system to analyze its behavior), it won't be able to stay within the 80-column limit due to tokenization; it will just be riddled with syntax errors.

Data matters, and mainframes have a rather specific way they store and retrieve data. Just operating the mainframe to get the data out of an old system and into a new database in a workable & well-architected format will be its own chore.

Finally, the reason these systems haven't been ported is because requirements for how the system needs to work are tight. The COBOL holdouts are exclusively financial, government, and healthcare -- no one else is stuck on old mainframes for any other reason. The new system to replace it needs to exactly match the behavior of the old system, the developer has to know how to figure out the exact confines of the laws and regulations or they are not qualified to do the task of porting it. All an LLM will do is hallucinate a new set of requirements and ignore the old ones. And aside from just knowing the requirements on paper, you'd need to spend a good chunk of time just checking what the existing system is even doing, because there will be plenty of surprises in such an old system.


There have been COBOL compilers targeting the JVM and .NET for as long as those technologies have existed.

There are also modern compilers targeting IBM mainframes, including Go, C++, Java, PHP, ...

Also, outside the DevOps and CNCF application space, very few people bother with Go, especially not the kind of customers that buy IBM mainframes.


Apart from that, COBOL is only part of the reason for running on a mainframe. The other part is the orchestration and "resilience" of the mainframe platform.

You can run COBOL on x86; there are at least two compilers.


Resilience == redundancy; it has been successfully replicated by almost every organisation without mainframes. M-MANGA (Meta, Microsoft, Apple, Netflix, Google, Amazon) infrastructure is quite resilient.


> it has been successfully replicated by almost every organisation without mainframes

yeah but how much did it cost?

At a large financial news company I worked for, the AWS opex was £4m. All to put text on a page. You know, a solved problem. We spent years fucking about with fleet to make something "resilient" (then with early, shitty k8s).

The opex for FAANG is astronomical. Facebook spends something like 40 billion a year on infra. That's without staffing costs. (Manifold, the S3 clone, is a bastard to use.)

My point is, which I should make more obvious: if you buy the right mainframe, you can literally blow one of them up and not lose any uptime. Yes, it's expensive.

But hiring a team to fuck about with k8s on AWS is going to cost more, and never be finished, because truly redundant systems are hard to design and build, even by distributed-systems experts.


So you are arguing that renting a mainframe is cheaper than datacenter hardware?


All I see is Lisp:

    #Operator:
    - Operand 1
    - Operand 2


I used it in production for a client last year and compared it with a Go implementation. The concurrency support of Crystal on a 40-core machine beats the socks off Go: Crystal maxes out all the cores, both on file I/O and in overall speed of execution. On the time taken to do the same task, Crystal beats Go by a significant margin while being so easy to write. Go is an easy language to write, but Crystal is much easier to read because of its Ruby syntax inheritance.
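For context, the Go side of a comparison like that usually boils down to a fan-out worker pool. Here is a minimal sketch of the kind of harness I mean, with a file-hashing workload as a hypothetical stand-in for the client's actual task:

    package main

    import (
        "crypto/sha256"
        "fmt"
        "os"
        "path/filepath"
        "runtime"
        "sync"
    )

    func main() {
        paths := make(chan string)
        var wg sync.WaitGroup

        // One worker per core, each hashing files pulled off the channel.
        for i := 0; i < runtime.NumCPU(); i++ {
            wg.Add(1)
            go func() {
                defer wg.Done()
                for p := range paths {
                    if data, err := os.ReadFile(p); err == nil {
                        fmt.Printf("%x  %s\n", sha256.Sum256(data), p)
                    }
                }
            }()
        }

        // Walk the tree on the main goroutine, feeding the workers.
        filepath.WalkDir(".", func(p string, d os.DirEntry, err error) error {
            if err == nil && !d.IsDir() {
                paths <- p
            }
            return nil
        })
        close(paths)
        wg.Wait()
    }

The Crystal version is structurally similar with spawn and Channel, which is part of why the comparison felt so direct.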


I can second the point about the concurrency syntax! I did a small comparison with Go a few years back [1].

In my tests I got the feeling it causes a lot fewer gotchas with blocking goroutines and such. Not sure why that is, though.

Surprised to hear about the multi-core performance though! That is super interesting.

[1] https://livesys.se/posts/crystal-concurrency-easier-syntax-t...


What is the reason for such an advantage over Go? The latter is known for being geared towards concurrency, and also for having a very fast GC.


My guess is LLVM. Crystal uses LLVM for code generation, while Go has its own compiler backend.


Go can also compile via LLVM (https://github.com/goplus/llgo), though I am not sure how fast it is.


Doesn't look like it fully supports the language or ecosystem, with only partial support for version 1.21.


Its Ruby-like syntax is awesome. It reads like pseudocode.


Man, I wish for something like Crystal but with a little more Go-like syntax.

I hate OOP though. I may be wrong, but Crystal AFAIK is OOP. I wish for a non-OOP, Go-esque Crystal alternative.


> something like Crystal but with a little more Go-like syntax... I wish for a non-OOP, Go-esque Crystal alternative.

I don't understand... Isn't that golang?


Roc?

They're unfortunately in the middle of a compiler rewrite from Rust to Zig right now, though.


I think V might be what you’re looking for?


