There is no way you could recreate a convincing enough 90s-era codebase of a Japanese videogame + its associated tools + scripts and commented-out codepaths with current AI tools.
I wouldn't be too sure about that. The original decompilations of Mario 64 and Ocarina of Time were done mostly by hand because LLMs weren't really around yet, but these kinds of projects seem perfectly suited for handing the gritty work off to AI: There is a clear output (exact binary recreation) and a straightforward path to get there (look at this assembly code and produce some C code from it). The decompilation of Twilight Princess jumped from very little to basically 100% of core code in the past year alone: https://github.com/zeldaret/tp
I have no doubt that this would be possible for MGS2 as well.
I don't think it's impossible, but it would take a lot of time and a lot of money; likely more time than good enough models have been commercially available.
I have been working on an incremental decompilation-based reimplementation (basically how OpenRCT2 was done) of Worms Armageddon for the past 2 months with a lot of help from LLM tools, primarily Claude Code and Ghidra MCP. I've worked on it almost every day, reaching Claude Code Max 5x's 5-hour session limit multiple times every day. Suffice it to say that, as a software-rendered, sprite-based 90s PC game, Worms Armageddon is several orders of magnitude simpler than MGS2. Despite that, I think it will be 2-3 more months of work before I can compile a fully independent version of the game.
This is despite the game being an almost ideal candidate for automated RE, as it uses deterministic game logic with built-in checksum checks in replays and multiplayer. I've downloaded all the speedruns I could find for the game (as replay files) and I've retrofitted the replay system into a massively parallel test framework, which simulates over 600 games in about 30 seconds. So Claude can port all game logic independently without much need for manual testing; the replay tests can almost guarantee perfect correctness.
MGS2 doesn't have anything like that, so every ported function requires extensive manual testing. Even with LLM tools, an accurate decomp could take years (unless you're willing to spend thousands of $currency per month on it).
This is really cool! Your process is compelling, and your choice of game is excellent. I'd like to read a long blog post about your entire journey from the beginning to a working binary once you get there.
As it happens I do have the habit of writing very long blog posts - though none on OpenWA so far. The OpenWA readme file serves as a bit of an introduction, though it's already a month old.
Keep your eyes open for Sonic R too. Sadly, a lot of the online Sonic community has been toxic to the dev for being transparent about using Claude for the majority of the disassembly, even though he's a very talented developer with plenty of credits to his name, and the work took only a few weeks compared to the year-plus a fully manual effort would have required.
Having followed his bsky during his announcement: he started off pre-emptively dissing haters that... didn't even exist yet. Constantly posting memes about how everyone was dissing him and how AI was totally superior (and then posting his angry sessions with Claude when it got something wrong), when most other users were just saying "that's cool man". The thing that made him quit bsky was a (now-deleted) thread someone posted criticizing the weird crash-outs. I think if he'd been more... normal about the whole thing, people would have received the project quite a bit more positively.
Decompilation to C (and even C++!) has been done automatically for 2-3 decades at least. I am not sure what has changed in recent years other than people playing fast and loose with copyright (and GitHub allowing it, likely because their LLMs also stand to benefit). Introducing LLMs here is only going to introduce errors, delays and likely push you away from a reliable result.
The challenge here is readability. Reading the TP source leak you link I think it's even behind the current state of the art, as it's barely above assembly. This is where I suspect even the smallest of LLMs may help, since you don't care that much if it introduces errors.
>Decompilation to C (and even C++!) has been done automatically for 2-3 decades at least.
Only in a very rudimentary sense and definitely not in a working compilation (much less binary equivalent) sense. LLMs have turned this from a gimmick for static analysis into something that actually works pretty well for recompilation projects.
> Only in a very rudimentary sense and definitely not in a working compilation (much less binary equivalent) sense.
Working is the easy part; the hard part is getting something that classifies as readable C. LLMs do not really help reach the "working compilation" part but benefit from it.
We are way past "working compilation" when it comes to LLMs. They are already really good at writing readable, compilable code. The big problem with LLMs is making sure the output binary actually does what you wanted it to do. But if you define the goal not merely as instructions in a vague, unspecific human language but rather as recreating a given set of binary instructions after compilation, this big drawback goes away. So in a sense they are better suited for recompilation projects than for developing new applications.
My point is that we were past "working compilation" well before LLMs, and I do not think anything about LLMs helps with it; at best, agents drive these existing tools with the same efficiency. I disagree that they're good at writing compilable code, but agree on the readable part.
Which decompiler reliably produced working, high level C/C++ from assembly? I would have loved to use this thing you are describing here 15 years ago. Compilation is inherently lossy, so any system that could have given you this would have needed pretty heavy LLM-like features anyways.
>I disagree that they're good at writing compilable code
That was never part of the discussion, because as explained several times now it is irrelevant in this case. The existence of the original binary means all you need to do is match up things, which can be automated completely.
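As a toy illustration of what "matching up things" means mechanically: extract the bytes of the recompiled function and diff them against the original binary's bytes. The function below is a hypothetical sketch; real matching projects (e.g. the decomp.me workflow) diff at the instruction level and tolerate things like register-allocation differences, but the automatable core is the same:

```cpp
#include <algorithm>
#include <cassert>
#include <cstdint>
#include <vector>

// Hypothetical core of an automated matching loop: compare the bytes of a
// recompiled function against the corresponding bytes from the original
// binary. Returns -1 on a full match, else the offset of the first
// differing (or missing) byte, which is fed back to the LLM as the error.
long first_mismatch(const std::vector<uint8_t>& original,
                    const std::vector<uint8_t>& recompiled) {
    size_t n = std::min(original.size(), recompiled.size());
    for (size_t i = 0; i < n; ++i)
        if (original[i] != recompiled[i]) return static_cast<long>(i);
    if (original.size() != recompiled.size())
        return static_cast<long>(n);  // one function is longer than the other
    return -1;  // byte-for-byte match: the candidate source can be accepted
}
```

With the original binary as ground truth, "did the LLM get it right" collapses to this kind of comparison, which is exactly why iteration can be automated completely.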
I do not understand what is so hard about "generating working code". Even the free version of Hex-Rays was doing it 15 years ago, and I have written one at my company that I have used for over 30 years. It's actually ... trivial?
The problem is readability. No one in their right mind would call what they generate "C++". Mine still interjects assembler from time to time (and not the new version that GCC supports, but the older MSVC style).
LLMs absolutely do not help with the generate-working-code part, because this is an exact problem that neither needs nor benefits from an LLM (other than maybe automating stupid iteration?). They can help with the readability part, because once you already have a working skeleton it doesn't matter that much if they make mistakes, as they are easy to detect.
I already asked, but I guess I'll need to ask again: Please show me this tool. Hex-rays is certainly the wrong answer, because the decompiled C code usually needs tons of manual cleaning, fixing datatypes and reconstructing function prototypes before you can compile. And even then you can't be sure about functional (much less binary) equivalence. If anything, all these traditional decompilers focused on readability, not recompilability. But even there they were much worse than LLMs.
If what you said was true, the projects mentioned above wouldn't have needed years of arduous work before the age of LLMs came to be.
I get the point, but note that (custom) datatypes and function prototypes are for readability. They are not required for working or functionally equivalent code.
Absolutely. This is just the delusion of a vibe coder at best. Not just with the current generation of AI tools, but essentially never. The conversion from C, C++, Rust or whatever, through preprocessing (macros etc.), through IR generation, through compile-time optimizations, through link-time optimizations, to the generated machine code is a one-way street for low-level languages. You can get a pretty close higher-level approximation that matches the flow/logic/structure, but the code will never be anywhere near close to the original source. I could write the same C++ program in 3 different ways and get identical assembly, so how do you go back to the exact source? The answer is that you don't.
Here's the same simple program, written in 3 different ways, producing identical binary compatible code: https://godbolt.org/z/qWrc8fEnn
How does the AI know whether it should produce snippet #1, #2, or #3? It does not. It cannot.
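For a concrete flavor of the same point (these are not the commenter's exact godbolt snippets, just an analogous example): three source-level spellings of the same loop, which an optimizing compiler such as gcc -O2 will typically reduce to identical machine code, leaving a decompiler no principled way to pick the original spelling:

```cpp
#include <cassert>
#include <cstddef>

// Three source-level variants of the same sum. Under gcc -O2 these
// typically lower to identical machine code, so the mapping from source
// to assembly is many-to-one and cannot be inverted uniquely.
int sum_index(const int* a, std::size_t n) {
    int s = 0;
    for (std::size_t i = 0; i < n; ++i) s += a[i];
    return s;
}

int sum_pointer(const int* a, std::size_t n) {
    int s = 0;
    for (const int* p = a; p != a + n; ++p) s += *p;
    return s;
}

int sum_while(const int* a, std::size_t n) {
    int s = 0;
    std::size_t i = 0;
    while (i < n) { s += a[i]; ++i; }
    return s;
}
```

All three are functionally identical, which is the counterpoint made in the replies below: a tool that recovers *any one* of them has recovered the program, even if it cannot recover the author's original spelling.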
Who cares? Who said anything about recreating the exact code? You will get usable, compilable, and surprisingly readable source code, in your language of choice, that yields the functional equivalent of the binary.
Barring obvious edge cases that could show up but don't usually, like intentional race conditions. Timing is the one area where things get iffy.
That is quite incredible if true. I need to read a bit into that. Can you point towards relevant literature/examples? Also: please see my questions in my comment on your other reply.
That's pre-2026 thinking. At this point, with the ability to lash IDA or similar tools to an agentic harness, there is no longer any such thing as a closed-source binary.
I’m interested in how LLMs handle obfuscated code. Throw LLM with IDA MCP at EasyAntiCheat_EOS.sys or the like (as the most common examples of heavily obfuscated software) and see how far they can get.