This post reads like an accidental advertisement for approaches like Verus [1], which couple the implementation and verification so you can't end up with a model that diverges from the actual implementation. I'm personally much more optimistic about the Verus approach, but I freely admit that's my builder bias speaking.
I go to the theater rarely, but recently watched Project Hail Mary in-theater and quite recommend it. There _have_ been some great films made in the last while, among a sea of derivatives.
(E.g., you may find the new Dune films too violent, but they were great. And the moral is not very subtle in them. :)
The history of people trying to design GPU- or ASIC-resistant proof-of-work functions is long and mostly unsuccessful. I haven't looked into RandomX; it's possible they've succeeded here (or possible that, with alt-coin mining profitability tanking after Ethereum moved to proof-of-stake, it just wasn't worth it).
Hmmm. That's not the reason we changed it. We just got tired of tweaking things to prevent ASICs.
I'll add that there was such a large influx of miners at the outset, that (statistically) it seems any crippling of the original algorithm was fairly futile - the edge was both short-lived and minimally impactful. We're over a decade later, and nobody mining in the first month (even with that unfair advantage) was able to gain any meaningful percentage of Monero's emission.
I'll add that RandomX has proven that it is indeed possible to create a GPU and ASIC-resistant PoW algorithm. I'd encourage you to dig in further - the closest to an "ASIC" is a multi-CPU miner (Bitmain X9) with a bunch of RISC-V CPUs in it.
Sorry, I was not quite saying my fun was the reason, but that the failure to create something GPU/ASIC resilient was the more general underlying cause.
But be careful about "proven" in that last sentence - the absence of a solution isn't exactly proof, it's more of a proof that _either_ it is possible to create an ASIC-resistant algo _or_ it has not been worthwhile to ASIC-ify it given the economics of mining XMR and the research & NRE required to do so. I haven't the foggiest which of those two it is, mind you, just that there are a few remaining valid explanations.
Showing one example is proof that something is possible.
In this case the claim was ASIC-resistant PoW is possible, and the proof has been the historical behavior of miners after years of RandomX. Nobody said it would be eternally or entirely resistant to optimizations...
The limit we set at the beginning was "no one can design a custom device for RandomX with more than a 2:1 efficiency advantage over general purpose CPUs". That is and will forever remain true.
In reality, no one has been able to build any device for RandomX that isn't actually a CPU. The closest thing to a "mining ASIC" is just a bunch of RISC-V cores.
I was intending to comment on poor wording or poor reasoning (I assume the former), not Monero.
I think what the evolving Monero team has done, for many years since inception, is wonderful to the point of inspiring. The thoughtful approaches it has consistently taken over many upgrades reflect a much greater level of competence, responsible goals, clever design, and a clearer consistent vision than all but a few alternative systems.
(Including better choices than Bitcoin, which seems to have completely elevated code stability over any problem resolution (user privacy/safety, transaction scalability, environmental damage, etc.). Stability over all non-critical features including ergonomics (i.e. transaction times) is a very strong, but legitimate choice. But not stability over basic failures/limitations relative to current function.)
Disclaimer/scope: I do not own Monero or any other cryptocurrency, but have in the past. My comments are purely about technology, with no financial/investment dimension.
Monerites - what’s the state of play for fpga mining? I did not see anything in the light documentation of RandomX that looked like it was tuned to be “awkward” for a good sized fpga.
FPGAs have no particular advantage. You could dedicate a chunk of their resources to implement a softcore CPU but it'd be several times slower than a real CPU.
The random programs change too quickly to just implement them directly on an FPGA. Reprogramming the entire chip like that takes too much time.
RandomX is designed so that if you design a RandomX ASIC then you've designed a CPU. It writes and then executes random programs. To minimize the possible efficiency gains from matching the instruction set architecture, the same program is executed several thousand times, reducing the relative overhead of translating it to a different ISA.
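The amortization argument can be sketched numerically (all numbers below are made-up placeholders, not RandomX's actual parameters):

```rust
// Toy cost model: translating one random program to a foreign ISA
// costs `t` units; executing it once costs `e` units. Running the
// same program `n` times amortizes the translation cost, so the
// translator's share of total work shrinks toward zero.
fn translation_share(t: f64, e: f64, n: f64) -> f64 {
    t / (t + e * n)
}

fn main() {
    // Suppose translation costs 100x a single execution:
    println!("{:.3}", translation_share(100.0, 1.0, 1.0));    // run once: ~99% overhead
    println!("{:.3}", translation_share(100.0, 1.0, 8192.0)); // run 8192x: ~1% overhead
}
```

Under this (assumed) model, matching the ISA in custom hardware can only save the translation share, which is why repeated execution blunts any ASIC's edge.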
I partially heat my home by running the default Monero client on old Xeons (heat ejects near my desktoes). As I only mine when it's cold outside (when I'd otherwise use resistive heating), there is no actual net electricity cost. IMHO it's not "worth it" for an individual to buy equipment specifically to mine crypto... but if you already have an old machine AND you heat without a heatpump, it's a free hobby/heater.
----
To anybody else that is syncing a fresh Monero blockchain copy (i.e. installing the official client), I recommend using the custom node flag ` --db-sync-mode safe ` — which is slower but corruption-avoiding — before the node's initial bootup. Without safe mode, any halt of the client will [most likely] corrupt the local blockchain (losing days of DL/verification).
Also, if you use an SSD for storing any blockchain (as recommended by monero team... but not by me), know that its lifespan will be greatly reduced from the constant IO/access. Personally, I recommend safemode (see above) on a 7200RPM spinner (HDDs effectively don't wear during IO/access).
----
What are your thoughts on running xmrig vs. the default getmonero.org client? Would you in general agree that monero remains ASIC-resistant?
A heat pump would arguably be more efficient for society (can provide 4x heat for 1x energy), but if you make enough money on the mined monero I guess it might be rational.
Would be curious if the marginal savings from a heat pump would allow you to buy more monero than you mine with this energy.
The only heatpumps in my house are airconditioner[†] and waterheater (mild winter climate doesn't justify replacing HVAC — but if/when it dies, I'll put in a modern minisplit) [ƒ].
[†] It's an older model, without a reversing valve (circa early 2000s).
[ƒ] ...and refrigerator (thanks /u/twic)
----
>if the marginal savings from a heat pump would allow you to buy more monero than you mine with this energy.
In a colder climate, DEFINITELY (to a point: see /u/nerdsniper's great point, below).
----
I replaced a 300W "toe heater" with this rig; by directing heat to only where it's needed (i.e. muh'toes), I can heat the entire house less (whether resistive or heatpump).
Actually, I suspect heating-by-monero-mining is more likely to economically beat heat pumps only in the very coldest climates. Heat pump efficiency goes down when the temperature delta between inside and outside is very large. Below 0F or so, it's quite difficult to find heat pumps that will work sufficiently well, and generally they transition to resistive heating.
Caveat: I'm only talking about marginal advantage, ignoring the capital costs of the Xeon servers or the heat pump itself.
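The marginal comparison can be made concrete with a small break-even model (the prices and COP values below are illustrative assumptions, not real data): heating H kWh with a heat pump of coefficient of performance `cop` burns H/cop kWh of electricity, while a mining rig burns the full H kWh but returns some mined value per kWh.

```rust
// Break-even sketch: on marginal cost, the rig beats the heat pump
// when mined value per kWh exceeds the electricity the heat pump
// would have saved, i.e. v > p * (1 - 1/cop).
// p = electricity price per kWh, v = mined value per kWh burned.
fn mining_beats_heat_pump(cop: f64, p: f64, v: f64) -> bool {
    v > p * (1.0 - 1.0 / cop)
}

fn main() {
    // Illustrative numbers only: $0.15/kWh power, COP-4 heat pump.
    // The rig must mine more than $0.1125 per kWh to win:
    println!("{}", mining_beats_heat_pump(4.0, 0.15, 0.12)); // true
    println!("{}", mining_beats_heat_pump(4.0, 0.15, 0.10)); // false
    // A cold snap that drops the COP to 1.5 lowers the bar to $0.05:
    println!("{}", mining_beats_heat_pump(1.5, 0.15, 0.06)); // true
}
```

This matches the thread's intuition: the lower the achievable COP (very cold climates), the easier it is for mining heat to pencil out.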
For hobbyists such as myself, the capital cost of already-owned (and obsolete) Xeons is infinitely less than replacing an otherwise-functional (albeit cooling-only) AC in a sub-tropical rainforest climate (like mine), which only has a few weeks of annual frost (snow "sticks" once every decade).
As far as placement of the machine: underneath your computer desk is ideal, as this directed heating allows you to keep the house's thermostat a few degrees cooler.
----
If anybody were to ask me "what would you BUY to mine monero@home," I would definitely tell them to [instead] buy a heatpump-heater, -watertank, -&c (presuming they don't have each, already).
Just use a Linux laptop with a working battery so you never have to worry about power outages or other system crashes. In that case, you don't need safe sync mode, and you don't have to kill your SSD.
Working battery ≠ avoiding system crashes | my local node has a UPS, and still Monero's client is dicey (Mac & Linux distros).
Particularly on its initial sync, Monero's daemon is flakeyAF.
If you (e.g.) don't allow `sync in background` (why is this not the default behavior?!), the official Monero client is notorious for locking up on wakeup. Once you kill the process, your local blockchain is [most likely] unusable.
Another reason to use safe-sync is (e.g.) if your system (Linux or whatnot) decides to update/restart during the several days an initial sync takes.
----
Just out of curiosity, why do you abuse an SSD so (safe-mode, or not)?
For SSD-diehards, I'd recommend getting a very large size because this'll last longer, presuming the drive wear-levels.
> Once you kill the process, your local blockchain is [most likely] unusable.
Totally false. LMDB is perfectly crash-proof in that scenario and killing the process never damages the DB. The only thing that's not guaranteed is turning off syncs, in the face of an OS crash/power outage.
If you don't sync, you're not abusing the SSD. If you run on Windows, the OS is too unstable to use without safe sync mode though.
This is a well-documented failstate. Usually results in "unable to connect to 127.0.0.1:18081" errorlog, which is most-commonly due to a corrupt database/blockchain (from hardstop/kill).
In order of crashout likelihood: Windows >> MacOS > Linux
>If you don't sync, you're not abusing the SSD.
If you don't sync then you're not (cannot be) a fullnode / network verifier / ringsigner.
----
>LMDB is perfectly crash-proof
It is my understanding that once your initial sync has completed, the default monero node behavior is to then automatically enable the safe sync mode (the flag I described above).
This may be old behavior... I go way back (years beyond a decade). My only modern use in xmrworld is as a personal foot-heating ATM.
> If you don't sync then you're not (cannot be) a fullnode / network verifier / ringsigner.
I was talking about database sync, not blockchain sync. You don't need to use safe sync mode if you don't have to worry about machine crashes. And just killing the process will never corrupt the blockchain DB.
> This may be old behavior... I go way back
On this particular point, I go way back further than you.
>On this particular point, I go way back further than you.
Definitely know who you are (LMDB programmer &c), and your contributions to world & crypto tech – thanks (few people can claim, like yourself, that their code exists on BILLIONS of machines). I've ALSO been in cryptospace longer than Monero's existence... you are hands down a better programmer than myself (I'm a blue-collar electrician), with much more name recognition.
----
BUT: You aren't listening to my "layperson user behavior report" about a common and known behavior of the default getmonero.org node. I would love to help you&team better stabilize/configure the default client behavior/options...
Please don't let hubris stand in our way of spreading the gospel of crypto. If you want me to sign some custom statement with a BTC hash (dating back to the early times) just send me a postcard (I no longer use email). But you shouldn't need that to listen.
>just killing the process will never corrupt the blockchain DB
I would love to show you how easy this is to reproduce, even on fresh installs of Ubuntu and/or MacOS on otherwise-stable hardware (never tried Windows... easier?).
----
Loved your 2019 talk on "is XMR still ASIC-proof" – is it still, in 2026, in your opinion? Your line about ~"our goal is to make a hash algorithm so dynamic that if you designed an ASIC processor for it... it'd essentially just be a CPU"~ – classic quotable.
>> just killing the process will never corrupt the blockchain DB
> I would love to show you how easy this is to reproduce, even on fresh installs of Ubuntu and/or MacOS on otherwise-stable hardware (never tried Windows... easier?).
If it's so easy to reproduce, you should be able to screen record a session with two terminal windows:
1. in the first, run monerod syncing the blockchain
2. from the second, send a `kill -9` to monerod
3. restart monerod in the first
And then we should see the error message you're referring to.
Awesome; let me rsyncd this bitch and then I'll try to help out. I'll do another with a brand new fresh install (do you have a preferred Linux variant? otherwise it'll be Ubuntu 24).
Will also provide the perplexity.ai chatlogs that I used to both find other similar crashouts and resolve my issues. Again, I am not a programmer but have been accepting crypto (with client discount) since 2012.
Thanks again for your contributions to this community.
----
If you're able – do you happen to know why the default configuration doesn't sync in background? This is just wild... anybody installing XMR-adjacent software isn't going to expect this behavior.
...the re-sync (after stopping due to timeout) is always where my issue appears.
The LMDB backend (I know it's your baby) could possibly be having a freakout when Linux comes back online (from e.g. sleep)?! I genuinely don't know – but am happy to demonstrate my frustrations, which I've had reproducibly on both Linux and MacOS distros (across YEARS).
I don't know what the current versions do, it's been a while since I touched that code.
I have no reason to lie, I'm not selling anything. Bitmain is selling mining hardware, take a look at their claims. They've had 7 years to try to crack it.
There was a proposal on Ethereum that didn't succeed (ProgPoW) since they were already in the late stage of transitioning to PoS. Ethereum did quite a good job at keeping the ASIC advantage moderate (the speedup was 100% max - not orders of magnitude). RandomX is basically ProgPoW that succeeded. You might be interested in Chia's Proof of Space and Time... and how it collapsed!
I don't know if PoW-based approaches make much sense in the modern environment anyhow, even very clever ones that provide ASIC resistance. Ethereum has been doing real proof of stake (and not delegated proof of stake, which is both easier and terrible from a system safety perspective) for quite a while and it's seemingly cheap, effective, and robust.
This was a super interesting read, and it highlights exactly the strength of cryptocurrencies. They turn game theory in their favor, so egoistic players (I don't mean this in an offensive tone) contribute to making it stronger and safer for everyone else.
They kinda do - I'll admit honestly that the final game I played in the cryptocurrency space was played solely to profit. (It was a minor, uh, **coin that didn't have a lot of redeeming value to start with.) Though it turns out the incentives remained somewhat aligned: I ended up providing the developer with some security bug fixes to make sure someone couldn't mess with the cash cow. :)
(To be clear: We were just optimizing mining; in the process of looking for ways to mine it faster, I found some security bugs and fixed them. We weren't exploiting the bugs, that crosses a line for me.)
They had to design a specialized verification function, which I imagine would be the easy way to break it.
The brilliant part of Bitcoin is that it uses very widely known crypto primitives - verification is the same as getting the right seed (you just happen to be told what the right seed is, rather than having to pay for it to be discovered).
Correct; both Bitcoin and Monero use Hashcash as PoW, only differing in the choice of hash function. Verification is only different from a solution attempt in asymmetric (i.e. non-Hashcash) PoW, such as Cuckoo Cycle or (the poorly named) Equihash.
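That symmetry (verification runs the exact same function as a solution attempt) can be sketched in a few lines; std's DefaultHasher stands in here for a real cryptographic hash such as SHA-256 or RandomX, and must never be used for actual PoW:

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

// Hashcash-style PoW: hash(header, nonce) must fall below a target.
fn pow_hash(header: &str, nonce: u64) -> u64 {
    let mut h = DefaultHasher::new(); // stand-in for a crypto hash
    header.hash(&mut h);
    nonce.hash(&mut h);
    h.finish()
}

// Mining: brute-force the nonce (expensive, probabilistic).
fn mine(header: &str, target: u64) -> u64 {
    (0..).find(|&n| pow_hash(header, n) < target).unwrap()
}

// Verification: one call to the very same function (cheap).
fn verify(header: &str, nonce: u64, target: u64) -> bool {
    pow_hash(header, nonce) < target
}

fn main() {
    let target = u64::MAX / 1000; // ~1 in 1000 nonces qualifies
    let nonce = mine("block header bytes", target);
    assert!(verify("block header bytes", nonce, target));
    println!("nonce {nonce} verifies");
}
```

In asymmetric schemes like Cuckoo Cycle, by contrast, verify() would be a different (much smaller) function than the search.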
Authors are from STMicroelectronics, Politecnico di Torino, Freie Universität Berlin, and Inria. The paper examines writing firmware for an IoT sensor platform. From the abstract:
> Two teams concurrently developing the same functionality (one in C, one in Rust) are analyzed over a period of several months. A comparative analysis of their approaches, results, and iterative efforts is provided. The analysis and measurements on hardware indicate no strong reason to prefer C over Rust for microcontroller firmware on the basis of memory footprint or execution speed. Furthermore, Ariel OS is shown to provide an efficient and portable system runtime in Rust whose footprint is smaller than that of the state-of-the-art bare-metal C stack traditionally used in this context. It is concluded that Rust is a sound choice today for firmware development in this domain.
One of the authors commented below that the “teams” were actually persons and the Rust person was an intern.
This is even less serious than the typical pattern of grabbing random students for experiments and then drawing conclusions about the general population.
Not sure about your life experiences, but every new, from-scratch project I have undertaken has looked like 1-2 or at most 3-4 people on good terms who really pulled their weight. The rest weren't exactly dead weight, but the management overhead they caused ate up most of the productivity they brought to the table.
> Rust is evolving far too fast to be used in code which needs to run for years to decades down the line.
Code doesn’t stop running on existing hardware when the language changes in a future compiler. You can still use the same old toolchain.
I’ve done a lot of embedded development in a past life. Keeping old tool chains around for each old platform was standard.
I would much rather go through the easy process of switching to an older Rust tool chain to build something than all of the games we played to keep entire VMs archived with a snapshot of a vendor tool chain that worked to build something.
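For what it's worth, rustup makes that pin a one-file affair; a sketch (the version and component are example values):

```toml
# rust-toolchain.toml, checked into the repo root: rustup reads this
# and automatically installs/uses the pinned compiler for every build.
[toolchain]
channel = "1.45.2"        # exact historical release, example value
components = ["rustfmt"]  # optional extras, illustrative
```

Compare that with archiving a vendor VM: the pin travels with the source tree.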
I remember a coworker having to fight with an old platform's build not working because our user/group IDs were bigger than 2^16. I can't remember which utility was causing the problem, I'd have to guess tar. This is when we learned to play the archive a VM game.
I can't imagine there's much overlap between "we will need to update this firmware for the next decade" and "let's bet the farm on the documentation being perfect, and all the downloads still available".
I know a defence company that has a bunch of vaxes stored in low oxygen environments because they legally have to be able to provide software updates to firmware they’ve written for the next 20 or so years and it was written on a vax.
They had some great stories trying to get something or other running again where they had to fly one of the original designers over to hand solder a board back into action.
How we do that today is a bit of an interesting problem I don’t think they’ve convincingly solved; basically maintaining nightly builds forever — a couple 1U’s of Kubernetes in deep storage ain’t gonna do it, and you’re not gonna be able to solder a Xeon back to life...
I know I’d rather be trying to get a load of C99 rebuilt for some MIPS or other after 20 years than some random version of Rust.
> I know I’d rather be trying to get a load of C99 rebuilt for some MIPS or other after 20 years than some random version of Rust.
Rust 1.0 is 11 years old and it's still trivial to compile Rust code from then. I doubt that will change in the next 9 years.
C is an absolute nightmare in comparison. I tried to compile some old C code I had for Nordic nRF51 chips, only a few years after the chips became available. I gave up. Broken links, missing documentation, etc. etc. I can see why other people here are saying it's standard practice to archive a VM. Not really necessary for Rust.
> Rust 1.0 is 11 years old and it's still trivial to compile Rust code from then. I doubt that will change in the next 9 years.
Maybe it's trivial to compile Rust code but it's not trivial to build a project with dependencies. I'm trying to get my feet wet with an official USB example project from Embassy on my RP2040. It doesn't work in the latest git repo for some unknown reason (might be my fault, probably is, but it's not obvious to me).
I'm assuming it worked at some point, maybe something changed and someone forgot to update something somewhere (there are lots of example projects). So I thought I'd "git bisect" until I find a working version and go from there. Well, I cannot get it to build against anything older than a year ago and that version also isn't working for me. It's dependency and Rust edition hell.
My guess is that when Rust code gets 30 years old, the problem would more likely be that you can't find an already compiled compiler that will work for that old code, and that the compilers themselves need bootstrapping. So you'll just fast-forward the code to work on a new compiler instead.
Yes, they are immutable. It's only possible to "yank" a specific version, which will prevent new dependencies, but it will still be available for download for existing dependencies.
> I know a defence company that has a bunch of vaxes stored in low oxygen environments because they legally have to be able to provide software updates to firmware they’ve written for the next 20 or so years and it was written on a vax.
So uh, will these ever make it to an auction site you think?
I have an Acer Chromebook with Celeron N3060 CPU and it runs the SIMH VAX emulator with 64MB for the VAX at the same speed as a Vaxstation 4000/60 and likely the disk is much faster.
I like OpenVMS and am slowly learning more about it; no reason to wait until you see those hit eBay :-)
People work supporting past embedded codebases and developing new code intended to run on them for decades.
If the toolchain moves on, the product is stuck on whatever architecture, and developers are stuck running an emulator/docker of some particular vintage of Debian sid.
The good news is that C also seems contaminated with the "move fast, break things" philosophy. The modern code writer is not able to make things that last more than a couple of months.
Previous versions of the Rust compiler don't just up and disappear just because I moved to a new workstation or setup a new build server. I understand it's not optimal to rely on a download always being available, but even then, that is not at all exclusive to any single language. Why would earlier versions of Rust be susceptible to this but not something like gcc? I don't see it.
What you are describing happens all the time. Usually the toolchain provider will continue updating a list of known issues for some time after EOL. Beyond that you have third parties that do it for decades, if the platform is big enough. They collect bug reports from the industry, investigate them, then create lists that you subscribe to. Those lists include detailed examples, explanations, and usually linter rules to detect code that could trigger the bug.
The truth is: If the toolchain was good enough to ship your product, has time to go EOL, and then you do a patch that surfaces an esoteric toolchain bug, then the odds are that you'll know exactly what triggered the bug and you can work around it by writing different code.
Because even if the newer shinier compiler/toolchain had the issue fixed, most companies wouldn't upgrade to it at that point. It's almost never desirable to change your toolchain for a shipping product, you're just introducing more unknowns.
> Because even if the newer shinier compiler/toolchain had the issue fixed, most companies wouldn't upgrade to it at that point. It's almost never desirable to change your toolchain for a shipping product, you're just introducing more unknowns.
This reaction to toolchain stability is quite defensive, and was needed for C, but isn't universally needed. C toolchain updates could break your product because of how loose the C language can be; I've had code that had benign undefined behaviour, until a toolchain update brought in an optimisation that broke it.
Another outcome of a toolchain update could be "no bugs introduced, existing bugs in your codebase now found by diagnostics".
Fortunately there are more than two compiler versions in the world.
It's easy to install different rust toolchains. You could increment the toolchain version forward and find the next toolchain to include the fix, or even backport the fix to a custom toolchain if you want.
The comments acting like Rust is breaking code all the time are also pretty lost. I've been developing Rust since +/- the 1.0 days and this isn't a common occurrence. When something does need to change it's usually a tightening up of something that was incorrect in the past, and it's easy to fix.
Some of these comments act like everything is going to collapse at any moment and the old code will be unusable, which is pretty ridiculous
Coding around compiler bugs is pretty standard practice in usecases where you have some random vendor compiler that is based on some ancient gcc if you’re lucky or completely in-house if you’re not.
Rust uses "Editions" (e.g., 2015, 2018, 2021, 2024) to introduce breaking changes without splitting the ecosystem. Every edition remains supported by newer compiler versions _indefinitely_. The only churn is on projects targeting "nightlies" but there's no reason you can't target a stable one for projects that need that stability.
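Concretely, the edition is just a line in the manifest, and a current compiler still honors old values; a hypothetical crate (names illustrative):

```toml
# Cargo.toml for a long-lived project: a current rustc still builds
# edition-2015 code as specified, alongside 2024-edition dependencies.
[package]
name = "legacy-firmware"   # illustrative
version = "0.1.0"
edition = "2015"           # old editions stay supported indefinitely
rust-version = "1.56"      # optional MSRV declaration for consumers
```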
Do you recall which libraries? Use of nightly fell off a cliff after 2018. Looking at the bottom of https://lib.rs/stats#rustc-usage, ~8% of all crates.io requests came from a nightly newer than the one corresponding to 1.86. That's an upper bound, as using a nightly compiler doesn't mean that a nightly compiler was needed. The prevalence of nightly is also niche-specific. If you're in embedded it is likely you need to use some nightly-only features that haven't been stabilized, but if you have an OS chances are that you don't.
> That's an upper bound, as using a nightly compiler doesn't mean that a nightly compiler was needed.
To be fair it's not even a lower bound, as using a stable compiler doesn't imply the absence of nightly-only features (as in Cargo features, the ones you can enable on crates you depend on).
For the purposes of this discussion the question is not whether or not a crate exposes optional features that require a nightly compiler, but whether or not a crate makes use of the nightly compiler mandatory, which has become extremely rare in my experience. Perhaps it's more common in some embedded use cases, but if people want to make that assertion, I would ask that they either mention which libraries they're specifically talking about or which nightly features they're specifically referring to.
I think the divide is apps vs libraries: a library that requires their dependants to set an environment variable opting out of stability guarantees is unlikely to gain adoption, but applications that do so are more common, like Firefox.
> For the purposes of this discussion the question is not whether or not a crate exposes optional features that require a nightly compiler, but whether or not a crate makes use of the nightly compiler mandatory
In my opinion what matters is the functionality. If it's provided by a nightly-only crate or as a nightly-only feature of an otherwise non-nightly-only crate it doesn't really matter.
But I agree that this has become more and more rare.
You have the same issue with C, no? C is upgrading versions, compilers have changed, hardware evolves, and some things from the past aren't supported as well anymore.
I'm curious why I've seen this sentiment repeated in so many places, I learned Rust once 5 years ago and I haven't had to learn any new idioms and there have been no backwards incompatible changes to it that required migrating any of my code.
I think people don't like the JavaScript treadmill. People want to think about using tools and getting proficient with them rather than relearning tools. I'm not saying Rust is like that, but I do feel that way about Python and JavaScript. Those are dynamic languages, but it is what all this editions stuff evokes. It's an "if it were stable, it wouldn't be changing" sort of thing.
> using tools and getting proficient with them rather than relearning tools
This attitude works in carpentry, but not in software. You need to get proficient, but your tools will keep evolving, like everything else in the software world.
This attitude doesn't even work in carpentry, depending on the timeframe you look at, tools have changed over time. You can still use a hand saw, where a table saw would be just as suitable, or have a SawStop(tm) and reduce the likelihood of losing a finger.
In carpentry, you still do a lot of work with a hammer, which did not change materially for the last 70 years. Programming tools did change very, very much since 1956, even though some still retain the recognizable shape (e.g. Lisp or Fortran).
That's exactly the point. This is not normal even in software.
You can, in fact, learn C exactly once. Or any number of other languages. The entire argument being made here is that the world you're suggesting is a problem. Software developers should not have to continually relearn their tools and it is abnormal to suggest they should.
I've seen C written by people who learned it "exactly once", in let's say the 2000s. They're the same people who insist that all the safety & linting introduced since was pointless.
I'll take C written by people who've learned and improved since then any & every day of the week.
I have never even heard of the linked repo, and it does not appear to be overly popular. Nor have I ever heard of "witness types" or seen code that attempts to make use of them. And no, any new borrow checker would not require some new approach to iterators. This entire comment reads like a non sequitur. Where on Earth did you get any of this from?
To be very fair, there are legitimate gripes here; they're small but worth covering. And then there's a heap of nonsense.
1: The edition system allows Rust to literally mutate the language. The 2024 edition (if you begin a new Rust project today) has different rules from the 2021 edition, from the 2018 edition, and from the Rust 1.0 "2015 edition". These changes aren't exactly huge, but they are real, and at corporate scale you would probably want to add, say, a one-day internal seminar to learn what's new in a new edition if you want to adopt that edition. For example we hope the 2027 edition will swap out the 1..=10 syntax to be sugar for the new core::range::RangeInclusive<i32> rather than today's core::ops::RangeInclusive<i32>, and this swap delivers some nice improvements.
2: Unlike C++, the Rust stdlib unconditionally grows for everybody in new compiler releases. So even if you stuck with the 2015 edition all the time since Rust 1.0, when you use a brand new Rust compiler you get the standard library as it exists today in 2026, not how it was in 2015 when you began coding. If you decided you needed a "strip_suffix" method for the string slice reference type &str, you might have written a Rust trait, say ImprovedString, and implemented it for &str to give it your strip_suffix method. Meanwhile in Rust 1.45 the Rust standard library &str also gained a method for the same purpose with the same name, and so now what you've written won't compile due to ambiguity. You will need to modify your software to compile it on Rust 1.45 and later.
3: Because Rust is a language with type inference, changes to what's possible which seem quite subtle and of no consequence for existing code may make something old you wrote now ambiguous, because what once had a single obvious type no longer does. This is more surprising than case 2, because now it seems as though this should never have compiled at all. Types A and B already existed; before, the compiler inferred type A, now it insists B might also be possible, and it may be quite a tangle to discover why B was not a possibility until this new version of Rust. If the compiler had rejected your code when you wrote it in 2015 as ambiguous, you'd have grunted and written what you meant, but at this distance in time it may be hard to remember: did you mean B here?
Now the nonsense: there's a vague superstition that Rust is constantly changing while good old C is absolutely stable. Neither belief is anywhere near true. If you really need certainty you should freeze actual hardware and software, or at the very least build a VM; then nothing changes, because you changed nothing. If you'd be comfortable upgrading to a new CC version, you shouldn't be scared of upgrading the Rust tools.
strip_suffix won't break with new compiler versions. Anything explicitly imported takes precedence over the prelude; otherwise everything would be a breaking change and would have to wait for an edition.
Switch to Rust 1.50 and now it's silently calling the stdlib strip_suffix. I actually wasn't expecting it to be silent, and obviously if the two have exactly the same behaviour (mine panics instead, to show which one we're calling) you wouldn't even notice, but it is a change.
Oh, wow. I am wrong. So much of the Rust community must be wrong too, as this is commonly mentioned when discussing breakage. This is awful.
But on the other hand, it could be a bug, as the trait resolver is commonly mentioned as the buggiest part of the language. I'm scared of the breakage if they fix it, though.
Probably the key thing you misunderstood is that &str wasn't from the prelude. It's a type in the actual Rust language; that's why it has a lowercase name like u16 or bool.
So we didn't bring str::strip_suffix in from the prelude in preference to our custom trait; we made a string literal, and those have type &'static str -- an immutable reference to a string which lives forever. So the "prelude doesn't win" rule does not apply for &str, because it didn't come from the prelude.
If we were talking about a type which implements Iterator, for example, new Iterator features would come from Iterator, which is in the prelude, and since you didn't specifically ask for Iterator, the things you did ask for beat Iterator. But here the language's primitive type grew new methods, a thing which Rust does but many languages don't - Rust has methods on pointers and bytes and everything, whereas a language like Java or C++ can only put methods on "classes", not the ordinary primitive types.
The reason it works with `String` is that trait methods get priority over applying autoderef (which is needed to go from `&String` to `&str` to select `str::strip_suffix`). If, however, you already have a `&str`, then autoderef isn't needed and the inherent method wins over the trait method. At no point does the prelude come into play.
The code won't magically stop running because the Rust community continued evolving the language. The old toolchains will be available if there's a compatibility change.
Probably just depends on what you are doing. Library support could move forward, and new features and security updates for libraries that are not part of core Rust could be an issue if they don't work on older compiler versions.
Might not matter for a lot of embedded, but if you are doing something like exposing functionality via a webserver or something that would be network-connected, then security updates in third-party libraries may be important.
For example, it would be really easy for me to run old code that's pinned to something like Python 3.7, but if libraries have moved on to newer Python 3.x without backwards compatibility, then I'm stuck using the out-of-date versions or backporting myself.
Very surprised to hear that, since editions are exactly the kind of mechanism Rust is using to make sure software will keep working unchanged for decades.
The Rust compiler can build a 2024 edition application which depends on a 2015 edition library, which in turn depends on a 2018 edition library.
Every crate can upgrade at their own pace, or even never at all.
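For instance, the edition is just a per-crate key in Cargo.toml (the crate name here is made up):

```toml
# Cargo.toml of a library that never moved off the first edition
[package]
name = "legacy-lib"   # hypothetical crate name
version = "0.1.0"
edition = "2015"
```

A 2024-edition binary can depend on this with an ordinary `[dependencies]` entry; the compiler handles the mixed-edition build transparently.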
I've only tried Rust for small hobby projects, but I did experience weird code rot: you just leave the code there and after a while it does not compile. Might have something to do with how Cargo manages dependencies.
Do you remember more specifics? I've seen four cases:
- a project with no Cargo.lock, where there have been breaking changes in a dependency that wasn't specified precisely enough in Cargo.toml; fixing this requires some finessing of dependencies, but it's possible to get the project building without any code changes
- a project with a proper dependency tree specified, but where a std change caused inference to break specific older versions of a crate in your tree (time 0.35 comes to mind); this requires similar changes to the above
- a project that relies on UB in stable code that should always have been disallowed and has since been fixed; this is tricky: in a dependency, an updated version will likely exist; in your own project you'd have to either change your code or use the older toolchain, knowing that the code might not be doing what you want it to do (this happened a handful of times pre-1.20)
- an older project, with the proper dependency versions specified, being built on a newer platform; I saw this with someone trying to build a project untouched since 2018 on an ARM Mac: the toolchain for it didn't exist back then, and the macOS-specific lib they were using didn't know about the platform either. Newer versions of the library do, of course, but that required updating to a set of libs that would be compatible too.
All of these cases are quite rare. You could encounter all of them at the same time, and that would be annoying, enough to make someone doing it for fun say "fuck it" and drop it. You can also get hit by lightning.
But between Cargo.lock which should allow your project to build on newer toolchains, and access to all prior toolchains, your project should continue to build forever on the same platform.
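To illustrate the first case: without a checked-in Cargo.lock, what you build depends on how tightly Cargo.toml constrains versions (crate names and versions here are only examples):

```toml
[dependencies]
# A caret requirement: any semver-compatible 0.3.x may be selected on a
# fresh build, so a newly published release can change what you compile.
time = "0.3"
# An exact pin: the same version is selected on every machine, lockfile
# or not, at the cost of manual upgrades.
serde = "=1.0.100"
```

Committing Cargo.lock gives you the exact-pin behaviour for the whole tree without having to pin in Cargo.toml.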
I'd add pinning a rust toolchain version (using rust-toolchain.toml or similar) in addition to Cargo.lock
Rustc does have fairly frequent (every ~18 months or so) minor breaking changes between versions. These are often related to type inference, usually affect only a very small number of crates, and are usually mitigated by publishing patch versions of those crates that don't run into the issue. But if you have the patch version locked with a lockfile, that won't help you, and there is an increased likelihood of the build failing, so it's best to lock down the rustc version too.
Luckily pinning the rustc version is very easy to do.
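For reference, the whole pin is one small file that rustup reads automatically (the version and components are examples):

```toml
# rust-toolchain.toml at the repository root; rustup installs and uses
# the pinned toolchain transparently for every cargo/rustc invocation.
[toolchain]
channel = "1.75.0"
components = ["rustfmt", "clippy"]
```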
---
On regular projects this kind of issue can usually also be fixed by upgrading to the latest rustc and running `cargo update`. But conservative embedded projects may have legitimate reasons for not wanting to upgrade rustc to the latest version, and parts of the ecosystem's disregard for MSRVs mean that running `cargo update` on an older rustc has a high chance of causing build breakage due to MSRV issues.
Either you used nightly (explicitly non-stable) Rust instead of the default stable Rust; or you used dependencies that have been yanked due to security issues; or you didn't commit your lockfile and implicitly upgraded everything by having to generate a new lockfile, because you used a really wide range of compatible versions.
All of these options require you to go out of your way to enable breakage.
You could also be in the super-unlucky state of using something that was later proved unsound in std, which is the only case where Rust will break your code on stable. (Misused unsafe in std.)
I've had issues compiling Python 3.12 on Arch Linux when the Python 3.12 -> Python 3.13 transition happened, and a few important packages broke. So I had to compile an older version of gcc and build Python 3.12.
So, it can happen in any programming language, and to any large project.
Rust allows me to handle this easily with a rust-toolchain.toml file, so this concern is kinda overblown imo.
This is not a Rust issue but an inherent issue with dependencies in all languages. External dependencies rot.
For Rust code in serious industrial use cases or firmware, it's always best to minimize dependencies as much as possible to avoid this. Making local copies of dependencies is also a thing for certain use cases.
There is a difference between C and Rust culture. Embedded C projects rarely have external dependencies, and in the rare cases where there are dependencies (e.g. most projects use vendor SDKs nowadays), they are pinned, and there is an expectation of API compatibility anyway.
Rust, on the contrary, incentivises using dependencies, and embedded software especially is hard to write without external packages (e.g. cortex-m-rt, bytemuck and many others).
Another difference is the complexity of the language when it comes to low-level programming. E.g. bytemuck, which I mentioned before, solves a problem that is hard to even explain to a C developer.
I think a big difference is that the less unsafe you want in your own code, the more you rely on crates to provide a safe abstraction for unsafe code in a centralized place where soundness holes are likely to be found.
Of course it was always understood that you could have bugs in C libraries and some of them may include memory unsafety, but the culture is very different when there's no explicit way to demarcate the parts of the code most deserving of scrutiny.
> Might have something to do with how Cargo manages dependencies
Build against the lockfile to use the same versions.
Unless they were pulled from upstream, they won’t suddenly stop building against the same compiler version. Rustup makes it easy to switch compiler versions to get back to the same one you used, too.
Even if a crate is yanked, if you have the version in a lock file it will still download and build. (This was done precisely after seeing the left-pad incident.)
I'm sure you have ways to entirely purge a crate. And the situation will arise that you need to do so, in which case all the old code will, indeed, break.
Vendoring is the only solution to this but it's really discouraged in rust-land and there is no first-party support for it. You can kind of manually vendor your deps with cargo, and there are third-party tools. But compare that to Go-land, where `go mod vendor` gets you 95-100% of the way there.
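For what it's worth, cargo does ship a `vendor` subcommand (`cargo vendor vendor` copies all dependency sources into ./vendor); the manual part is checking the source replacement it prints into `.cargo/config.toml`, roughly:

```toml
# .cargo/config.toml after running `cargo vendor vendor`
[source.crates-io]
replace-with = "vendored-sources"

[source.vendored-sources]
directory = "vendor"
```

With that in place, builds read crates from the vendored directory instead of the network.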
> I'm sure you have ways to entirely purge a crate.
No, the lesson from left-pad that every centralized package manager learned is that you cannot allow users to remove uploaded packages at their leisure. Outright code removal can only be done manually by the admins themselves, and it's unlikely to happen outside of some legal compulsion.
> Vendoring is the only solution to this but it's really discouraged in rust-land and there is no first-party support for it.
Firefox explicitly opts out of stability guarantees by using nightly features on a stable compiler in an unsupported manner, not dissimilar to using an unstable GNU extension in C. But good example of the caveat that if you're not using stable, then yes, you have no stability guarantees.
I'm curious what the concern is, given the Rust edition mechanics in place. Each crate gets to define the language edition it is compiled with. Even if dependencies upgrade to later editions, they can still be linked against by crates on an older edition.
As for the broader crate ecosystem: if crates you depend on drop support for APIs you depend on, you could get stuck on older unsupported releases. Though that is no different a problem than in any other language.
Code in all languages bitrots. Even if your dependencies are "done", the language is unchanging, and the toolchain mature, a vendor can introduce a new platform and all of a sudden your code won't compile anymore, because IBM introduced a new RISC server platform, or macOS changed the definition of time_t, or Windows blocked direct win32.DLL access (I know, a stretch), and your older libraries didn't know about it.
Stretch or not, MDAC can no longer be installed on Windows. (The Microsoft Data Access Components are a rollup of database interface libraries from when they seemed to do around one a year, to the point that Spolsky remarked on it[1].) This means a significant corpus of old but still 32-bit line-of-business apps no longer runs, like anything written in VB6 or VBA that needs to access a database.
Not counting compiler specific extensions from GCC, clang, Microsoft, Intel, NVidia, AMD, IBM, Oracle, Apple, Green Hills, TI, Microbit, Mikroe, and many more C compiler vendors that could have been used to compile a specific project during the last 50 years.
We have Rust code in a living code base that is more than 5 years old and it's required maybe one touch in the last 5 years to fix some issues due to stricter rules. It was simple enough it could have been automated.
Aren't there companies that still use C89 for their production systems? I don't know any in particular, but I have read comments here on HN implying as much. Just do the same for Rust: stick to the one major version you started with instead of trying to update the toolchain regularly.
I've often found that trying to compile decade-old C code with a current toolchain and current libraries will have issues. It isn't always clear what versions the code expects (there's no equivalent to a lockfile), newer C compilers and standards can break old code, and newer libraries especially can break old code. It might still build if you could recreate exactly what it expects, but that becomes increasingly difficult if you weren't compiling it a decade ago and didn't archive exactly what worked then.
I had a surprisingly positive experience with Gemini optimizing some mathy MPS code. It did far better than claude.
Of course, when I tried it on something else it rewrote every line in the file for no good reason, applied changes directly when I told it just to plan, etc.
Do we really care if an LLM regurgitates information already available in public about the design of nuclear weapons? They're not being trained on restricted material.
(My personal guess is that you don't want them answering questions about some things because you don't want people to try it and blow themselves up, or poison themselves. That's probably much more pertinent to making drugs or conventional bombs, since presumably the average internet user doesn't have a stockpile of HEU sitting around. It's kind of like the reason The Anarchist Cookbook is a bad idea: using its recipes is likely to be quite hazardous to the cook!)
I'd personally prefer that to be limited to the sort of person who can understand the science, not "anyone with an LLM" - having an "intelligent", "reasoning" assistant who can help you through anything you don't understand does lower the bar quite a lot, and I would prefer there to be a fair amount of friction.
It's not like the material isn't out there - if you want to learn about this stuff, an LLM will happily point you towards Wikipedia and other public sources, it's just not going to walk you through the assembly.
Oh, I dunno - I haven't downvoted it, but if I did, it would be for the idea that you "have to" give money to someone you don't want to just for a slight improvement. That's garbage. You don't have to. It's okay--no, it's _good_--to give your ethics a role in your decisionmaking.
openat appeared in Linux in 2006 but not in FreeBSD until 2009; Go started being developed in 2007, so it probably missed the opportunity by a year. It would have been the right thing to change the os module at some point in the last 18 years, however.
LA proper seems to have a density of 3000/km^2 according to Wikipedia
A perhaps more interesting case is the Utsunomiya Light Rail. Utsunomiya has a density of around 1200/km^2.
What they ended up doing was building a new tram with exactly one line. The main thing they did was make sure the tram comes frequently, including off peak.
End result is people rely on the tram line and the tram is making good money, being operationally profitable (still gotta pay back construction costs of course).
Utsunomiya is obviously not exactly greater LA, but Utsunomiya has on average 2.25 cars per household[0]. It has traffic issues and people feel the need to own a car. And yet the tram line is finding success because transportation is a local issue, not a global one!
You can solve transportation issues in crowded areas. Few reasonable people are lamenting that you don't have a train between Madison, WI and Chicago every 15 minutes. Many are simply lamenting that, even at the local level, PT in many places is leaving a lot on the table despite there being real chances of success!
Smaller focused PT has proven itself to work time and time again, and compounds on other PT projects in the area.
I've been testing it out by using it to create the quizzes for a course I'm teaching this semester. My conclusion is that it's well worth finding a way to try it out. Drastically reduces the amount of boilerplate.
(I haven't yet tried to write a full paper in it.)
[1] https://github.com/verus-lang/verus