As someone who never really viewed systemd as a problem, I'm starting to think the systemd "haters" were actually right, at least somewhat...
Viewing Poettering as some kind of malicious entity undermining projects sounds like a conspiracy theory. But now with him working for Microsoft, his actions do look a lot like the "embrace, extend, and extinguish" pattern to me. Yes, yes, "Microsoft <3 Linux", of course...
And now I am supposed to cheer for the groundwork for the creation of an almighty authority with the ability to "sanction" some (parts of) operating systems, but not others?
I always thought this outcome was obvious. Systemd controls everything that happens before Linux boots. It controls everything that happens after Linux boots. Might as well call it GNU/Systemd at this point. It's the silent revolution no one wanted. The name itself implies a manifest destiny because System D is 100x greater than System V and they intentionally break POSIX compliance too. Now that the guy who owns the systemd project works for Microsoft, in addition to the fact that the Linux kernel now needs to be a Windows executable in order to boot, that really tells you all you need to know.
This is unfair. MS keeps making it harder and harder to run anything that is not MS-signed (TM) on Secure Boot hardware; any distro that does not bow to the whims of MS is thus likely to be relegated to obscurity or die due to lack of users capable of installing it.
No, MSFT wants to sell software workers the signing keys they need to boot and use the cloud each day. It is their worldview as the largest OS monopoly company, and antithetical to the individual's right to own their own equipment. A similar move to tractors that phone home constantly.
While they have the largest desktop OS monopoly, Google is currently the largest OS company overall, with 43% of the total market going to Android and 29% to Windows.
I keep hearing that Microsoft is making things "harder", but I've yet to see any evidence that this is true.
All I can see is that Microsoft mandates that laptop hardware sold with Windows installed have Secure Boot capability and that the MS signing keys are shipped in the firmware. This has been true for, what, over 10 years?
Huh, wasn't aware of that. Reading through, it looks like this is just talking about Microsoft "Secured-core" hardware, which seems to be hardware specifically marketed to provide a Microsoft-certified boot chain.
I'm not sure that's the same thing as Microsoft making it harder in general for people to bring and use their own keys (or just turn it off altogether).
It's a bit more complicated. The UEFI drivers and Linux distributions are signed by the same certificate, the "Microsoft 3rd party UEFI Certificate".
UEFI drivers can be option ROMs on PCIe cards, commonly found on graphics cards. If you were to leave this certificate out of your boot chain, how would you validate these drivers? Well, you can't.
This results in you not having any GPU output, and your device is "bricked" until you can hopefully piggyback on something else. It's not really a proper brick.
This is just a design flaw in my opinion: Microsoft taking the easy route of being the org responsible for signing UEFI code when there were no other options (Let's Encrypt wasn't a thing in 2010/2012). And I don't think Microsoft envisioned themselves in the position they are currently in.
There are workarounds though: you can read the drivers loaded during boot from the TPM event log and enroll each driver into the Secure Boot allow list (the `db` variable). But this isn't necessarily future-proof if anything changes.
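To make the workaround concrete, here is a minimal Python sketch of the hash-and-enroll idea. It just walks an assumed ESP mount point and prints plain SHA-256 digests as candidates; real `db` hash entries are Authenticode digests of the PE image, and option ROM drivers would have to be extracted from the measured-boot event log (e.g. with tpm2_eventlog) rather than read off disk, so treat this purely as an illustration:

```python
# Toy sketch only: list EFI binaries under an assumed ESP mount point and
# print their SHA-256 digests as candidates for a Secure Boot allow list.
# Real `db` hash entries are Authenticode digests of the PE image, and the
# actual enrollment is normally done with tooling such as efitools or sbctl.
import hashlib
from pathlib import Path

ESP = Path("/boot/efi")  # assumed mount point; adjust for your system

def sha256_of(path: Path) -> str:
    return hashlib.sha256(path.read_bytes()).hexdigest()

for binary in sorted(ESP.rglob("*.efi")):
    print(f"{sha256_of(binary)}  {binary}")
```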
As others have said, it's not a full-on brick (though I can see how it can be a pain to fix).
However, the third-party MS certificate can be signed with your own keys, as described on that same page [1].
I don't have a need for that (my PCs don't need any option ROMs), but I did sign MS's main Windows key so that I can dual boot Win11 / Arch (which MS doesn't sign) with my own key registered in the UEFI.
The Microsoft policies are unclear. I was around when Fedora came up with its current Secure Boot policies and may even have contributed to a somewhat maximalist interpretation. Fedora and others operate under the assumption that Secure Boot needs to prevent the execution of unsigned code in ring 0 (or the Arm equivalent), but is this actually required for a signature from Microsoft? Unclear. Plenty of distributions had their shims signed without kernel module signing implemented in the distribution.
It doesn't mention kernel mode. The focus on kernel mode may not even be appropriate because many bad things can happen without kernel mode (wiping local disks, operating the hardware beyond safe parameters).
Devuan is a distro preconfigured to install without systemd and with enhancements to several packages to require less of systemd. Debian can still be configured to use sysvinit, it's just a bit of a mess.
Because systemd has moved forward while the alternatives haven't.
Systemd is not making the alternatives more difficult to maintain; the alternatives are simply becoming more and more difficult to maintain on their own. Matter of perspective, really.
I disagree. SystemD has the embrace-extend-extinguish approach, where it gobbles up services and turns them into its own hard dependencies, from DNS to device managers, etc.
It's a very MS approach instead of the UNIX one: do one job.
I don't think you're very familiar with the difference between systemd the project and the systemd-* services under that project. They're not that coupled together, including the two you've just listed.
I fail to see how the ability to have all the bits on my unencrypted UEFI partition be signed is such a doom-and-gloom scenario.
Having a world-writeable initrd (that is, an fs image with executables to be run as root before any other security settings have been enabled) has always struck me as a bad idea.
Yes, I'm aware that this technology, like literally every other technology ever invented (from the pointy stick all the way up), can be used for evil. That is a human problem, not a "this technology exists" problem...
> ...the groundwork for the creation of an allmighty authority with the ability to "sanction" some (parts of) operating systems, but not others
"We must backdoor/ban all crypto because imagine what the criminals would do with it!"
Never attribute to malice what could be explained by...
Well, Poettering isn't stupid, really; he's clearly very talented and intelligent. But I don't think he's doing it with intent so much as he doesn't really think ahead to what the system he's creating is turning into. Again, I'd say most of the developer world has this issue, just see the post a few days ago from the guy who volunteers to help elderly folks with their computers.
If I was Lennart I'd gladly go along with whatever evil scheme MS has in mind to exact my revenge on all the neckbeards that have attacked him over the years.
But the issue isn't the technology, it's that Microsoft is the only trusted CA for secure boot.
So if we want to actually change this, we should aim to create another CA, not attack systemd or Secure Boot (though relying on Microsoft was a stupid design choice, it's too late now).
This stuff predates Poettering. If distros are going to support booting from an encrypted rootfs, then it's better that it is actually secure... In the end, locks are neutral so long as the locks on your stuff only accept keys you control.
I don't see the problem. I want this tech. I paid good money for a laptop with a TPM and a secure UEFI config, let secure boot work for me instead of for Microsoft.
I don't trust these groups or corporations so I'll stick to custom keys for this stuff, but I think having a working group with a signed boot loader is much better than the current situation (Microsoft signs a shim that basically bypasses secure boot on some devices and then Linux users set up a Rube Goldberg machine of 4 different levels of bootloader to get verified boot working).
I'm running the MS shim right now and I don't see why the systemd shim would be worse in any way.
>System ready for easy remote attestation, to prove validity of booted OS, configuration and local identity
>“Democratize” use of PCR policies by defining PCR register meanings, and making binding to them robust against updates, so that external projects can safely and securely bind their own data to them (or use them for remote attestation) without risking breakage whenever the OS is updated.
In what world is this a good thing? This is a one-way street to an all-seeing police state. I never had a problem with systemd from a technical perspective, but it looks like Poettering is drinking the TCB Kool-Aid by the barrel.
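For anyone who hasn't touched a TPM, here is a toy Python model (my own sketch, not something from TFA) of what a PCR is and why "binding to PCR values" is both powerful and brittle across updates:

```python
# Toy model of a TPM PCR: a register that can only ever be "extended".
#   new_pcr = SHA256(old_pcr || SHA256(measured_component))
import hashlib

def extend(pcr: bytes, component: bytes) -> bytes:
    return hashlib.sha256(pcr + hashlib.sha256(component).digest()).digest()

pcr = bytes(32)  # PCRs start out as all zeros
boot_chain = [b"firmware 1.2", b"bootloader 252", b"kernel 6.1 + cmdline"]
for component in boot_chain:
    pcr = extend(pcr, component)

print("final PCR value:", pcr.hex())
# A secret (e.g. a disk encryption key) can be sealed so the TPM only
# releases it when the PCR holds exactly this value. Change any single
# component (say, a kernel update) and the value, and thus the policy,
# no longer matches - which is the breakage TFA wants to make manageable.
```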
If I were responsible for a large enough fleet of machines in my enterprise, had to deal with hundreds of users of varying technical knowledge, and was at the same time going to be blamed for the eventual ransomware attack, I would absolutely want to make sure that the only software that gets to run is the software I want running.
This (especially) includes the machine's firmware and kernel because that's where malware could effectively hide itself from countermeasures deployed on the machines directly.
If I can then also make sure that the various admin interfaces in our network can only be used by machines in a known-good state, I would sleep ever so much better knowing that the various hacks we have seen happening to 1Password, Uber, etc this year cannot happen on my network.
I would even say that this is helpful for my users because they will never risk being "the one who let the ransomware in".
This isn't about your own private machine. This is about corporation-owned machines in an enterprise network and as we see with nearly biweekly news articles about large-scale ransomware attacks, private data leaks and compromised employee machines, I would argue that the currently employed solutions clearly don't work.
> If I was responsible for ... the only software that gets to run is the one I want running.
Not wishing to pick on you personally, but the above paragraph is a wonderful example of the sort of logic going around that bothers me. You trace a faultless journey from responsibility to desiring total control.
That's not what responsibility is.
You're describing the feeling of culpability within a brutal regime - where Vader simply force-chokes a lieutenant for "failing me once too often".
Responsibility involves leadership, which involves not stripping every subordinate of their agency, dignity and humanity. It involves trusting people. Sadly, computers make a "zero trust" ecosystem far too easy now, and that's how they can destroy our society. Good computing is figuring out ways to preserve liberal democratic society while also improving it.
The problem isn’t that you can’t trust people. The problem is that people are defenseless in the face of malicious hackers with vastly more expertise and zero day vulnerabilities.
It’s not that you can’t trust your employees. It’s that you can’t trust them to defend themselves from being mugged.
The forcing function for all this removal of freedom is defense against malicious hackers. The removal of freedom is not the goal so much as a side effect.
It works the same way in the real world. We would not need borders or armies or police if everyone were nice.
Good. It's always best to have an optimistic view of our fellows; that's how we build good social structures.
> The problem is that people are defenceless in the face of malicious hackers
So. Make them not defenceless. We arm them with education and the other tools they need to defend themselves. Digital Self Defence (or Digital Literacy 2.0 if you want a fluffier title) is the project I am committed to. Defensive tools belong in the hands of users.
> with vastly more expertise
We can rebalance the theatre in two ways: by giving people more defensive capability, knowledge and rights, and by attacking the knowledge base and knowledge value of malicious actors. We must recognise that many of our own institutions play a part in the problem, from vendor malware and backdoors to security disinformation. Cyber-law needs radical reform to give end-users better security rights, and "surveillance capitalism" needs dragging to the dock.
> zero day vulnerabilities.
Starting maybe with an all-out assault on "zero days", including the companies, agencies and re-sellers of them, using the law.
> It's that you can't trust them to defend themselves from being mugged.
Part of one's job, then, is to enable them to defend themselves. You cannot follow your children around for the rest of their lives in case bullies pick on them. You need to teach them fighting skills so they won't be doormats. That's the reality of the digital workplace today. Also, don't give your kids gold Rolex Oyster watches and diamond rings to mooch around scuzzy neighbourhoods with. Limit assets, practice compartmentalisation. Half an ounce of sensible opsec is worth a ton of authoritarian technical non-solutions.
> The removal of freedom is not the goal so much as a side effect.
The removal of freedom is NEVER an acceptable "side effect" of any security action. Security and freedom are not diametrically opposed. Otherwise "the terrorists win" and one may as well join the ranks of malicious actors and directly attack our own people (which is the stance many US agencies have taken since 2001 toward baby and bath-water alike).
> real world ... borders armies police
But this isn't the real world. It's a digital one, which is different. The old military model of perimeters and weapons isn't working, and smart people in cybersecurity know that. The collateral damage of that broken model is our digital economy and liberal democracy itself (which are intimately linked in the American/Western mind if you believe one jot in things like startups and entrepreneurialism). We'll simply have to do better than handing over the responsibilities of our elected guardians to unelected companies considered by some [1] to be criminally motivated. We still have laws, schools and hopefully enough common sense to avoid that.
This, simply put, doesn't work in an enterprise context. I'd argue that relying on people for security doesn't work in any context, and that it's about as effective as relying on programmers to not code memory leaks, but in companies where there are people whose _only_ use of a desktop computer is at work, as is increasingly common nowadays, it's simply not possible to get half the people out there to learn more than is absolutely required to do their job. Go look at /r/sysadmin or /r/talesfromtechsupport: for some people it's a critical emergency if their desktop icons are rearranged, because then their muscle memory to click on Chrome is broken. And that's not to mention executives who believe they're above things like "phishing awareness campaigns". Technical measures are much cheaper and vastly more effective than social measures directed at people who don't want to learn.
It takes inner work to overcome a bleak view of people. It's too easy to eschew compassion and faith in fellow humans to improve themselves. I think the rehabilitation of our digital world asks us to give people greater "benefit of the doubt".
Perhaps the hopeless people you describe are the product of a hopeless context - the "enterprise context" you mention. That culture seems to encourage intellectual sloth and shrugging ambivalence, and you give good examples.
> Go look at /r/sysadmin or /r/talesfromtechsupport
Please, I'd rather not :)
> executives who believe they're above things like "phishing awareness campaigns"
My goodness, I've met them. They're basically paid too much. I had to make a humint/influence piece for a major bank in London. Cybersec referred to it as "the cock problem": basically mostly male with high six-figure salaries whose security weakness was that they "don't give a fuck". Same kind of fellas who would get pissed-up and drive their Aston Martin DB into a children's play-park. Major liability. They've learned they can transgress with impunity and just use money to solve it. What do they care if other people's money gets hacked? Not the best model for business, which requires a more upright mind-set to excel at. But banks don't exactly need to compete in a real business world.
> Technical measures are much cheaper and vastly more effective
Only in the short term. You'll need to forever keep your technical measures one step ahead of the mixture of malice and idiocy out there. A changed culture replicates and even grows itself.
I don't want my users to be clueless. I want them to better understand the technical systems they use every day so they can employ them more effectively. But they and I have a job as well. When I was in tech support, I spent hours some weeks with one particular user working on her software and teaching her more effective use of her computer. But I could only do that during slow weeks: I didn't have enough time to do that every week, and I certainly couldn't do it for everyone in the building. If we'd wanted to teach the whole building how to use computers, we would've needed probably a good quarter of the employees teaching other employees, during which time those employees under training wouldn't've been able to perform the work they were hired to do. We tried making documentation for users to read, but while you can lead a horse to water, you can't make him drink.
But ultimately, while I always wanted people to take training courses in computer use, I still implemented security measures, because I knew that people were at the company to work, not to take training courses. I want security measures in place for the same reason I want computers in the first place: so that I can let the computer worry about it for me. I don't want to have phishing awareness campaigns when I can just make the spam filter more effective. I don't want to teach people not to click the flashing green download button when I can just roll out an ad blocker across the entire organization. I don't want to send an emergency email blast to my coworkers telling them not to download TOTALLY_NOT_A_VIRUS.PDF.EXE from the chain email going around, and keep one step ahead of the mixture of malice and idiocy out there, when I can just set up a system to scan email attachments for viruses in the first place. Yes, I want a more informed and educated workplace, and I worked to make that happen, but I want computers to work for me and do the hard part as well.
Sounds like you're a good sysadmin and mentor. There's no possible way you should have to take on the burden of schooling everyone. You probably already go far beyond the call of duty.
What we need to do with Digital Self Defence is raise awareness of civic cyber-security needs from school age 5 upwards (my government in the UK has started on this project). We also need to screen for digital literacy.
I admire your positive view of people, but I feel like you vastly overestimate both the ability to learn and the willingness to care of a huge chunk of the population. If we're talking about corporate environments with hundreds of thousands of employees, you will absolutely always have malicious or drunk or just plain uncaring people who will not care at all what happens when they click that one blinking link in that one special email from the long-lost relative.
The goal of teaching people how technology works is admirable and necessary, but it is no defence against a motivated attacker. The attackers will always have a massive information advantage just because they have so much more time to acquire it.
>This isn't about your own private machine. This is about corporation-owned machines in an enterprise network and as we see with nearly biweekly news articles about large-scale ransomware attacks, private data leaks and compromised employee machines, I would argue that the currently employed solutions clearly don't work.
The fundamental issue is that you can't have one without the other, and that's what bothers me. It's not like only corporate-grade laptops come with TPMs now (like they did in the past).
SafetyNet on my phone is the same thing, and it's very much "about my own private machine". Seeing the reactions, and the lack thereof, to these developments makes me think this will end terribly.
AFAIK, Secure Boot can be disabled, both in the BIOS and in the kernel.
On some machines that's not the case due to contracts with Microsoft, but those already can't run Linux in the first place, so you probably won't buy them for the purpose of running Linux.
The suggestions in the original post do not change anything about this.
I am mainly worried about remote attestation encroaching on territory which was traditionally under the user's control. And once this tech reaches critical mass, sure, you can disable it, but that also means turning your machine into a glorified paperweight that can't access arbitrary websites and software.
You see this on Android: my banking app requires that the phone is running an OS approved by a big vendor. Their website, especially the mobile version, is getting more and more tedious to use.
I want to be able to access all services with whatever client I please. Not be required to run approved software and hardware that puts them in control.
It sounds like you're demanding to use someone's service on your own terms. I'm not sure why it's obvious that the service provider is under any obligation to entertain your desires.
So.... vote with your feet and choose a bank that shares your values more closely?
> So.... vote with your feet and choose a bank that shares your values more closely?
That never worked and will not work now. You can't choose something that's not being offered.
EDIT:
Also,
> It sounds like you're demanding to use someone's service on your own terms.
Well, sort of; a service agreement is a relationship, and TPM/remote attestation is a huge overreach by the corporate party. It's only fair to want to push back on what's becoming an abusive relationship.
> That never worked and will not work now. You can't choose something that's not being offered.
Why do you think you're entitled to something that's not being offered?
I think there's a general point here about how retail banking is not really an ideal free market, because the barrier to entry is too high. If no bank is providing a set of services that fit the compromises you want to make, then either your problem is with the government (who arguably over-regulate the space) or not enough people share your values to warrant creating a banking business around it.
> Well, sort of; a service agreement is a relationship, and TPM/remote attestation is a huge overreach by the corporate party. It's only fair to want to push back on what's becoming an abusive relationship.
On the flip side, the bank has to manage a complicated series of risks, and their investors might not want to pay the mitigation costs of the risks created by allowing anyone to use their service however they like. Hence the ToS.
> Why do you think you're entitled to something that's not being offered?
It's not that I feel entitled, and more that I'm required to use a bank to not starve, or at least to participate in the modern society. I'm not in this relationship by choice, I'm forced into it at metaphorical gunpoint by the economy.
And, it still means the advice to "vote with your feet" is invalid. I have no one to vote for.
> there's a general point here about how retail banking is not really an ideal free market, because the barrier to entry is too high.
There's an even more general point here, that this applies to pretty much every good and service. Not just banking, not just phones, not just computing. It applies to cars and clothes and food and housing just as well. Hence me saying that voting with your feet/wallet doesn't work in general. Mature markets end up supply-driven - vendors collectively decide what choices are available; the only way to have a vote is to become a vendor yourself, which is a long, hard and risky process.
> On the flip side, the bank has to manage a complicated series of risks, and their investors might not want to pay the mitigation costs of the risks created by allowing anyone to use their service however they like. Hence the ToS.
Sure. There are conflicting interests here. But the right answer isn't one party dictating how the entire relationship works. That's abuse.
> It's not that I feel entitled, and more that I'm required to use a bank to not starve, or at least to participate in the modern society. I'm not in this relationship by choice, I'm forced into it at metaphorical gunpoint by the economy.
At my last job, it was impossible to be paid except by direct deposit into a bank account. At certain times during the past couple years, it was impossible to go into my bank to do any transacting. I feel like I should be able to have a job without needing a smart phone and an app.
>I'm not sure why it's obvious that the service provider is under any obligation to entertain your desires.
They are not obliged by law in any way. However, leading up to this point, whatever happens on my side of the network socket has been at my discretion. Disrupting this dynamic just bothers me.
You have a point that the problem isn't the technology but the policy. However the fact is that most banks are moving in this direction so it leaves a consumer little choice.
1. If the user can supply private keys that are treated as first class citizens, then yay!
2. If the user can't supply private keys, or the user's private key is treated as a second class citizen, then boo.
Given who is driving TPM usage right now (i.e. Microsoft), my intuition says situation 2 is far more likely. I'm continually surprised by the number of people who think this is unreasonable and/or paranoid.
The TPMs are already in our machines because Microsoft requires them and CPU vendors include them in their chips. They aren't used by default and don't hinder us from doing anything. If I'm already trusting Ubuntu to ship me an OS, why would I have a problem with having them sign it as well?
Secure Boot is a scary technology, but as long as we can disable it and provide our own keys, I don't see the problem. And if we lose those abilities, then I'm certain it won't be Poettering or systemd to blame.
It's not a "theoretical DRM use case" if you stop thinking in terms of movie piracy for a moment, and consider what's happening with banking apps on mobile platforms.
Thanks to remote attestation, the custom ROM scene for Android is pretty much dead now, because there's little point in customizing the OS when doing so automatically makes important services no longer accessible from the phone.
This is not limited to consumer transactions like banking apps, and deployment is never guaranteed to be the result of a rational decision-making process. I have now had more than one employer who deployed remote attestation in a bring-your-own-device ecosystem. They do this because they are told by some vendor tutorial that it is more secure, but they end up disabling user devices by accident. My current IT department told me to buy a new phone if my current one couldn't pass a SafetyNet check.
The problem is that if you are inside a project developing something like this, you can clearly visualize your expected use case, but it's impossible to know what other use cases will be contrived. Of course in this case, we have predicted a few major malfeasances and I think anyone supporting a project like this needs to know the harmful use cases and be responsible for developing with them in mind.
That's the entire point of conflict: to me, that's an overreach by the bank, who's now dictating things out of scope of the relationship between us.
The traditional boundary is one drawn by device ownership: my device, my rules; their device, their rules. I.e. my phone can do whatever, their servers can refuse working with me.
Remote attestation is at best a way for simplifying their own service, at a big cost to users' freedom. In reality, it's a bit of that, plus mostly making sure the customers are locked into a bank-controlled channel that can be used to upsell more financial products.
> The traditional boundary is one drawn by device ownership: my device, my rules; their device, their rules.
Your device your rules seems to cover this, too: they’re saying they don’t want to do business under other rules – the app is just showing the UI for that policy client side.
It’s frustrating, but that seems not unreasonable given the massive ratio of people being compromised versus intentionally customizing the OS. I miss the free-for-all of the 90s in some ways, but statistically all of the non-IT people I know would hand root on their phone over to some guy in another country based on a prompt on a webpage.
> Your device your rules seems to cover this, too: they’re saying they don’t want to do business under other rules – the app is just showing the UI for that policy client side.
I guess I see two problems here:
1) They shouldn't be allowed to make that policy in the first place. Their policy effectively dictates what devices I can own and what I can do with them, which is unreasonable when the devices in question are general-purpose computers.
2) My device is being used against me and against my wishes; I'm not sure who's most to blame here - the bank for taking advantage of the API, Google for providing it, the phone vendor for implementing it and locking me out of it, or Google again for effectively forcing the phone vendor to lock me out of it.
> I miss the free-for-all of the 90s in some ways but statistically zero of the non-IT people I know wouldn’t hand root on their phone over to some guy in another country based on a prompt on a webpage.
There must be some other way of handling it. We wouldn't have the Fisher-Price tech of the 2020s without the free-for-all of the 90s! The ability for technically proficient users to do whatever the hell they want with their own general-purpose computers is fundamental to developing new technologies, products and services, and better alternatives to existing ones. Between Fisher-Price consumer tech and locked-down corporate tech (because security!), it looks like we're heading for an era where a general-purpose computer will be something available only to a few certified employees in corporate-sponsored labs.
If I grab the LineageOS sources right now and build an image for my phone, I get about a year's worth of security updates the vendor never shipped, and lose at least a dozen shady apps trying to talk to various Chinese cloud providers.
In terms of practical security, custom ROMs are often safer than the forgotten and obsolete software left on a device when updates stop rolling out. Sandbox escapes and exploits to get root access are more realistic threats than airport security flashing a different OS through your phone recovery or some kind of conspiracy to stuff malware into an open source repository.
As long as you take care to only install software from reliable sources (i.e. download.lineageos.org, F-Droid.org) I don't see the problem.
And soon "potentially vulnerable hardware" will be anything except AT&T devices, and we'll be back to closed networks except this time the government does not have the balls to split any of the mega-corporations.
I would argue the blue pill attacks _are_ theoretical for the majority of users, while DRM overreach is at least a daily occurrence.
The complexity and bugs of these "protections" are likely to cause more actual security issues AND headaches for average users than the "evil maid" scenario these blog posts like to speak of.
Humanity has taken a great step forward in the last 50 years or so: usually you can trust the police and governments everywhere on the planet today to always do the right thing. So why this irrational fear and this anti-state ideology?
> usually you can trust the police and governments everywhere on the planet today to always do the right thing
You jest, but a weaker form of "you can usually trust the police and governments today to do the right thing" is very much true for regular citizens in civilized countries.
The two problem cases are 1) people working to keep those governments in check, which is a necessary function for their operation but one that, by definition, said governments don't like, and 2) the private sector. The private sector can, and will, abuse everything it can get its hands on, to the extent permitted by the most strict reading of the letter of the law.
This is where computing as a whole is headed because it's the ONLY way to provide defense in depth against cyberattacks. Signed code path from first boot to user space code. Remote attestation because you can NEVER trust the client. Microsoft and especially Apple are already doing this. Linux needs an answer, and "no, it's too user hostile" is NOT a valid answer; it will just make Linux an untenable security risk.
It's like I keep saying, the Wild West 90s internet is long gone and nothing will bring it back. This is where things are headed, because the risks of the status quo are too great. Suck it up and get on with your life.
>the Wild West 90s internet is long gone and nothing will bring it back. This is where things are headed, because the risks of the status quo are too great. Suck it up and get on with your life.
The "Wind West internet" is very much alive, but as it was in the 90s, its hidden and far away from the "normal" users. And regarding the part with the "suck it up and get on with your life" - I always was a "Don Quijote" type personality, i don't think i will change this attitude...
> This is where computing as a whole is headed because it's the ONLY way to provide defense in depth against cyberattacks.
Yeah, sure - in a fantasy dream world. But the real world has somewhat more sophisticated cyberattackers who will find a way in. And at the same time, normal users and power users of computers will have to deal with all the total-control-state-class "security" measures.
This looks pretty much the same as "think of the children" censorship.
You don't need signing; you need a way to make changing the boot path impossible for anyone who is not the user or the OS (or at least tamper-evident on reboot).
Simply storing hashes that are set on first boot (much like SSH does with known host hashes), and allowing the user/OS to reset the hashes, will be sufficient. M1 does this, as far as I know.
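A minimal Python sketch of that trust-on-first-use idea, assuming a hypothetical pin store and a couple of example boot file paths; the hard part in practice is keeping the pin store somewhere the booted OS can't silently rewrite:

```python
# Trust-on-first-use pinning of boot components, in the spirit of SSH's
# known_hosts: record hashes the first time, warn on any later change,
# and let the owner deliberately re-pin after an intentional update.
import hashlib
import json
from pathlib import Path

PIN_FILE = Path("boot-pins.json")                    # hypothetical pin store
BOOT_FILES = ["/boot/vmlinuz", "/boot/initrd.img"]   # example paths

def digest(path: str) -> str:
    return hashlib.sha256(Path(path).read_bytes()).hexdigest()

pins = json.loads(PIN_FILE.read_text()) if PIN_FILE.exists() else {}
for path in BOOT_FILES:
    current = digest(path)
    if path not in pins:
        pins[path] = current                 # first boot: trust and record
    elif pins[path] != current:
        print(f"WARNING: {path} changed since it was pinned")

PIN_FILE.write_text(json.dumps(pins, indent=2))
```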
> Remote attestation because you can NEVER trust the client.
IRL, I don’t carry a weapon everywhere I go, because it’s clearly overkill. For this exact reason, overkill measures like remote attestation should not be a thing on general purpose computers.
I fail to see what all the panic is about. All of the SystemD tools mentioned here (iirc) don't actually rely much on SystemD proper, and especially systemd-boot and the boot stub are SystemD in name only (I use both).
But regardless, this entire article is about how to have an actually secure boot on Linux (and not remote attestation), something which is certainly good for the user. Otherwise you're actually more easily susceptible to malicious actors stealing your data on the system you "trust".
None of the steps listed here require trusting anyone other than the TPM (which is admittedly a flaw so long as we cannot audit them), and you can even use your own keys in pretty much all cases (provided, as discussed in other comments, MS hasn't f'd that up). I personally use a boot-stub based booting method with my own SB keys (but ironically, don't encrypt my root, so go figure), so I can vouch for the fact that it was actually quite painless to set up. Please don't jump down the slippery slope before you actually try these methods and realize that this is just as easy to deploy (you only need to disable secure boot first) and certainly a more secure option than using shim and an unsecured initrd.
Also, I don't understand where remote attestation entered the conversation here, and I also don't see why that can't be a community-based thing (à la Let's Encrypt, which is now everyone's CA) where you can choose your providers or even roll it yourself.
> I personally use a boot-stub based booting method with my own SB keys
Same here - stub, kernel, initrd and embedded cmdline all in a signed UKI on the ESP. I do encrypt my root however, so I wouldn't go as far as "painless" for the grub->efibootmgr switch (but I also switched initramfs generator so... always keep a rescue stick around).
But it's all about ownership and trust. I control the keys - hence I am the owner of my computer - and I don't trust e.g. Microsoft[1] to not eventually try to fuck me over. But that's not the important part.
> Also, I don't understand where remote attestation entered the conversation here, and I also don't see why that can't be a community-based thing (à la Let's Encrypt, which is now everyone's CA) where you can choose your providers or even roll it yourself.
Remote attestation is mentioned five times in TFA and is where this can get really pernicious - indirectly limiting user choice because $safety_critical_industry (e.g. banking) only allows "the corporate keys" (likely including a few Linuxes too, but something like Gentoo couldn't be). They'll even have very good and completely valid security reasons for not allowing arbitrary user keys, but they'd lock me down to approved choices remotely. A reverse AGPL if you will.
Of course, workarounds will exist: "just multiboot", "just use multiple devices", "just choose the bank that allows you to whitelist your key" (assuming there is one, it's nice to dream) - but user freedom is reduced without malicious intent being strictly necessary anywhere in the process.
That's focusing on the negatives with my paranoiac hat on, of course.
> but ironically, don't encrypt my root, so go figure
Oh man, that takes me back. The last time I went down that rabbit hole was ~2015; I tried to implement a "fully encrypted" setup and started with Ubuntu (I know, I know). Something something LUKS.
I spent ~2 days tinkering with it and never got it to work; something with the setup flow was totally broken if you also tried to encrypt root (or boot? idk, like I said it's been _years_).
I also remember fun problems with grub. I was trying to dual-boot and the windows partition was using hardware-bitlocker (samsung SSD). Some kind of weird interaction was going on between grub, whatever the windows bootloader is called, and my motherboard's EFI I think. Anyways grub ended up fucking bitlocker up and I almost lost all my gaming saves. Had to use some kind of arcane recovery process to get back to being able to even _insert_ the key so the SSD would unlock and windows could continue booting.
Ended up saying screw it after a few days and just going back to windows for my gaming PC, vowing to never try dual-boot again.
Huh? I've been using LUKS for FDE (with unencrypted /boot) way before 2015, and it's never been a problem. Debian has offered an option to set it up during installation, and it was 100% smooth sailing from there.
More recently, you can even set up GRUB to ask you for the passphrase, so even /boot is encrypted, you only need a tiny 2MB partition at the front to hold the bootloader.
Well, I remember getting completely stuck at one part, and the cause was my Samsung SSD's funny behavior in the "locked" state.
I had to set some kind of kernel flag or something (sorry, it's been years) to get it to ignore the drive until I unlocked it, as there was some kind of tight-loop where it would just keep trying to connect infinitely and not progress/fail/timeout.
I've been meaning to get back into linux again but it's going to be on a pristine/new machine.
Well the work is in setting it up. Once it's working you just enter a password. I'm sure hardware FDE support in linux has come a long way since I last tried.
There shouldn't be that level of work required to set it up. The tools are lacking if it's not just a couple of commands or toggles to get a fully encrypted disk with a secure boot chain.
A couple of check-ins ago, when I poked my head back in on my old pal, the Linux Desktop, to see how it was doing, I picked Ubuntu and, because I'd gotten used to it on Mac, selected disk encryption at install.
This worked fine.
Months later (in this case I just left it connected to a TV and occasionally used it for gaming or whatever) I let it do an OS upgrade and it forgot how to read its disk when it rebooted. I spent maybe 30 minutes trying to fix it (former heavy Gentoo user, so I'm comfortable troubleshooting boot—among other—issues) without making any progress at all, then decided that was the end of that particular check-in with desktop linux. Maybe next time (spoiler: nope, though for different reasons. But maybe the next next time...)
I haven't tried encrypting a Linux root disk since.
I mean I love the idea right? Software FDE comes with a performance hit, as well as increased wear on the SSD.
With hardware FDE the data written to the raw flash is always encrypted, the AES key is just 0 by default. Macs work this way AFAIK. With "hardware bitlocker" you just change that key (on a fresh drive).
Full performance, better security. Seems awesome huh? Well being outside the happy-path when it comes to hardware configs on free software is just asking for trouble...
I'm OK with making sure the software I'm running is the software I thought I was running. But because trusted boot runs so deep, and is intentionally hard to get around, it's vital that the implementation is trustworthy.
I don't trust Poetteringware. Poettering's team has a record of foisting technology on users, resulting in the need for e.g. the Devuan fork. I wish this work were being done by just about any other team than Poettering's.
> I don't trust Poetteringware. Poettering's team has a record of foisting technology on users, resulting in the need for e.g. the Devuan fork.
They have been developing software that enough people have deemed useful to include in their distributions. Some have disagreed and have made other choices. No one was forced to do anything, there has been no "foisting", and the "need" for Devuan is a subjective opinion.
There is really no need to transform purely technical arguments into personal attacks. This just discourages participation in free software development.
Systemd was designed in a way that was more tightly coupled than the alternatives and made adopting it an all-or-nothing proposition, and other projects (particularly Gnome) were also tightly coupled to it. It was absolutely foisted on people: a lot of people didn't want it but found they were nevertheless obliged to install it. The whole thing abused the goodwill of the free software community: systemd folks added systemd-dependent patches to other software, taking advantage of the norm of accepting such contributions, while refusing patches that made systemd compatible with other systems (e.g. non-Linux). And the end result was a state where you can no longer fork and replace components piecemeal - the whole free software ethos, the very reason GNU was built as a Unix-like system in the first place - which does far more to discourage participating in free software development than any mere internet argument.
The tight coupling, the non-portability, all of that are technical choices that can be debated on their own merits without the need to attribute malevolent intentions to the developers.
Projects merged changes because they wanted them, not because their goodwill was abused to make them merge anything. People got systemd on their OSes because they chose OSes whose developers chose to move to systemd.
It's not like Lennart comes to your home with a gun if you install OpenBSD.
> all of that are technical choices, that can be debated on their own merit without the need to attribute malevolent intentions to the developers
The ramifications of those technical choices on the software ecosystem are so well understood [0], that there is no point in discussing "their own merits" in a vacuum.
What's most relevant to my professional and waning hobbyist interests is the shape and trajectory of the software ecosystem as a whole. To talk about systemd without talking about how its developers interact with the software ecosystem is to talk about nothing.
[0] I graduated before systemd was a glint in Poettering's eye, but we somehow still covered it in school. Not only coupling vs cohesion in the abstract, but also how various design decisions (including init system arguments of old!) interacted with the ebb and flow of unix-like OS evolution.
> The tight coupling, the non-portability, all of that are technical choices, that can be debated on their own merit
They can't. GNU was a political project from day 1, with explicitly political goals; the unix-like design is for political, not technical, reasons.
> Projects merged changes because they wanted them, not because their goodwill was abused to make them merge anything.
Citation needed. If you maintain a widely-used open-source project there's a pretty strong social norm/pressure to merge contributions that don't have anything obviously wrong with them, even if the functionality they implement is something you don't actually want or need.
> People got systemd on their OSes because they chose OSes whose developers chose to move to systemd.
Because they chose OSes whose non-technical leadership chose to move to systemd, in violation of the project constitution, and then had the rump technical committee rubber-stamp it once the principled technical leadership had resigned in disgust and the decision was already a fait accompli, the way I remember it.
> It's not like Lennart comes to your home with a gun if you install OpenBSD.
Some will rob you with a six-gun, and some with a fountain pen. No, Lennart won't hack into your computer with an SSH exploit, but he'll get the software you're using (like Gnome) to push updates that stop it working on your computer, and so the end result ends up much the same.
> There is really no need to transform purely technical arguments into personal attacks. This just discourages participation in free software development.
While I agree with you in general, for some reason this particular developer tends to take decisions that have very extensive consequences and make choice extremely difficult.
There is a huge difference between a developer who creates a superior project that everybody loves to use, so it gets mass adoption, and one who makes a product that gets pushed by their employer on everyone whether they want it or not. I don't want to get into details as the subject has been beaten to death, but as for Systemd* there was the case of integration with graphical login that made choice difficult. Had the author been more sensitive to this issue and cooperated a bit without being stubborn, we wouldn't have had Devuan and all that mess. This is exactly NOT the way to do things in open source.
*PulseAudio was simply broken, but it's not the fault of the author that distros picked up alpha-quality software.
For applications, tools, even most libraries - yes. But this was one of the critical elements of the system and they had to fork the entire distro because the way it was done actually made the choice more limited.
>But this was one of the critical elements of the system and they had to fork the entire distro because the way it was done actually made the choice more limited.
This isn't very accurate. When Debian decided to switch to systemd, they also agreed to support other inits in the distribution.
This wasn't good enough, so Devuan itself was forked off before this decision was made. The end result is that Debian had fewer people to support the init-script alternatives. It became a self-fulfilling prophecy.
I'm confident Debian could support other inits if there were more Debian devs available to work on it. But because people left for Devuan, the pool became smaller.
Even the last decision on inits from Debian says that the focus should not solely be on systemd.
> they also agreed to support other inits in the distribution.
That's true, but with systemd being the only init that packages had to support. Accordingly, many package maintainers chose to only support systemd.
So if you want to run Debian without systemd, you have to be prepared for your fave packages to drop support for the other inits. It follows that you can't rely on the Debian package repository. So to support a Debian-like system without systemd, you have to fork the whole repository.
Devuan has init-script support for the packages in its repository, so it's open to the Debian maintainers to pull the scripts in; but they only want to support one init system, understandably. And given that nearly all other distros have systemd as the default init, it's natural that developers and maintainers are pleased with the systemd hegemony.
I'm just sorry that Debian made the decision it did. But Debian has always been the developers and maintainers; only incidentally the users. They were entitled to make that decision, and I think their process was exemplary.
[Edit] That "exemplary" process: actually I think it shouldn't have been pushed to the technical committee. It should have been a simple general resolution from the start. But I think the result would have been the same, and I found the debates very illuminating.
I didn't make a personal attack on Poettering; my objection is to the software his team produces. And I wasn't making any technical argument; I don't know enough about TPM and secure boot to do that.
My point was a political one, I guess: this is more software that runs very deep in the system, coming from a team that has a record of producing software that is hard to opt-out of.
For PulseAudio on Debian, you have to take firm steps to ensure the package manager doesn't reinstall it. Much the same goes for systemd. I assume it will be much harder to opt out of a secure boot stack released by that team. I believe that's on purpose: they could have made it easier to run without those packages if they'd wanted to. I think it's clear that they wanted the opposite.
systemd has everything and a kitchen sink, it is a big blob of software.
Finding out why the heck my resolv.conf contains some 127.0.0.x entry and not the real nameserver was the final thing that made me question the sanity of distro maintainers and the author of the two worst pieces of software: PulseAudio and systemd.
(I was very surprised that it was the same person).
Also brittle and unreliable. Every distribution needed custom-made init.d scripts, incompatible with every other distribution. No, thanks.
> systemd has everything and a kitchen sink, it is a big blob of software.
You can still pick and choose. Systemd is an umbrella project.
> Finding out why the heck my resolv.conf contains some 127.0.0.x entry and not the real nameserver
systemd-resolved is exactly one of the things that you can pick and choose.
> was the final thing that made me question sanity of distro maintainers
And there are good reasons for that:
1) /etc/resolv.conf cannot be extended; you cannot express all the configuration that a resolver needs today (e.g. per-subnet zones), nor is it dynamic enough (user-managed network links that go up and down).
2) Applications are not supposed to use resolv.conf directly anyway; if they do, they are broken. The local NSS config may be such that they won't be able to find all hostnames, or the above-mentioned per-subnet zones, which they won't be able to resolve. Hence, they are supposed to use libc, which does the right thing. Even the Go runtime respects this and thunks into libc for NSS.
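A small Python illustration of the difference (my own sketch, Linux paths assumed): the libc path picks up nsswitch.conf, the systemd-resolved stub, per-link DNS zones and so on, while parsing /etc/resolv.conf yourself only sees whatever happens to be written there:

```python
# The right way: ask the platform resolver. Python's socket.getaddrinfo()
# goes through libc, so nsswitch.conf, the systemd-resolved stub, per-link
# DNS zones, /etc/hosts, mDNS plugins, etc. are all honoured.
import socket

for family, _, _, _, sockaddr in socket.getaddrinfo("example.org", 443):
    print(family, sockaddr)

# The broken way: read /etc/resolv.conf and talk to that server directly.
# With systemd-resolved you will typically just find 127.0.0.53 here, and
# you bypass everything NSS knows about (split DNS, local zones, ...).
with open("/etc/resolv.conf") as f:
    print([line.split()[1] for line in f if line.startswith("nameserver")])
```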
I know my reasons for feeling worried about it but I wonder why you also dislike the idea of end user operating systems making it trivial for applications to take advantage of remote attestation?
Because it can snowball from "you may not access Widevine content on an unapproved device", which is of little significance to "you may not connect to the internet on a non government approved device" which is extremely dangerous.
To preempt the "but slippery slope!" counters, where we're currently at is "you may not use your bank's mobile app on a phone that isn't running genuine, unmodded, unrooted version of the hardware and OS it came with". This is the reality of modern Android devices. And it's not worse only because, like with any API, adoption of new-and-improved Safenet / Play $forgot-the-name APIs takes time.
The slippery slope is not a fallacy if the slope is, in fact, slippery.
At least so far my bank hasn't prevented me from using my web browser on linux to access the banking website. But I am certain they will once Windows 11 gets enough traction in a few years.
Yes, websites are still the one place where users have some degree of parity with the vendors. They are, however, being actively phased out. It's a long process, but it's clear. New banks and quasi-banks often don't even have a web portal. Old-school banks have been actively pushing apps as the primary interface. In recent years, they've been pushing apps as preferred auth mechanism for confirming logins and transactions initiated off-app. How long before it'll become the only mechanism?
However, my bank requires second factor for authenticating, when using the banking website. And that second factor used to be SMS, but nowadays it is their banking app on a mobile device. Exactly that one that has to run on "genuine, unmodded, unrooted version of the hardware and OS it came with".
It's good to know that other people can see the writing on the wall too. I've told people that this is being made possible. While it is not a guarantee that the availability of the technology will lead to the use of the technology in such a manner, the fact that it will be possible means it's just a matter of time before someone insists on doing this under the pretext of security.
The saving grace is that attestations can be forwarded but I'm sure even this will eventually get solved. (And the sheer fact they can be forwarded doesn't diminish the dangers of this use of the technology.)
>The saving grace is that attestations can be forwarded
That would require you to have root on a "trusted" device so you can extract the challenge-response that proves you are indeed "trusted". Which is becoming increasingly harder with each iteration.
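For anyone unfamiliar with the mechanics being discussed: an attestation exchange is roughly "verifier sends a fresh nonce, device returns its measurements bound to that nonce under a key the owner can't touch". A toy Python sketch follows; the HMAC with a made-up key is only a stand-in for the asymmetric quote signature a real TPM or SafetyNet-style attester produces:

```python
# Conceptual attestation flow (toy!). HMAC with a made-up key stands in
# for the signature of a real attestation key that the device owner
# cannot extract and whose certificate chains back to the vendor.
import hashlib
import hmac
import os

ATTESTATION_KEY = b"burned-into-the-hardware"      # hypothetical
EXPECTED_PCR = hashlib.sha256(b"approved OS image").digest()

def device_quote(nonce: bytes, measured_pcr: bytes) -> bytes:
    # What the device returns: its measurements bound to the fresh nonce.
    return hmac.new(ATTESTATION_KEY, nonce + measured_pcr, hashlib.sha256).digest()

# Verifier side (e.g. the bank's backend):
nonce = os.urandom(16)                             # fresh nonce stops replays
quote = device_quote(nonce, EXPECTED_PCR)          # device reports its PCR
expected = hmac.new(ATTESTATION_KEY, nonce + EXPECTED_PCR, hashlib.sha256).digest()
print("device accepted" if hmac.compare_digest(quote, expected) else "device rejected")
# Because the nonce is fresh, old answers cannot be replayed; "forwarding"
# an attestation needs a live, cooperating device that still passes the
# check, which is exactly what the parent says is getting harder to arrange.
```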
Windows and AMD/Intel do remote attestation already. This is not a theoretical development; Windows does this. Practically nobody uses Linux as their daily driver, and even then the government wouldn't care. DRM and "trust" solutions are already a problem on free operating systems (like the 720p cap on Netflix content).
This isn't a slippery slope, this is a vertical surface that was erected over seven years ago when SGX hit the market, just standing there.
Luckily the content I watch on Netflix is also freely available through torrents so I don't usually notice the difference, but the DRM fight was lost long ago. Whatever evil things the government is planning on enacting won't be beaten by someone telling the police officers that they use Arch btw.
What are the "advantages of remote attestation", if any, besides enforcing DRM and preventing sideloading/jailbreaking? Preventing those who need access to proprietary systems from using any distro except the 1-2 certified ones?
So that you can be confident that your software hosted with a random cloud provider has been faithfully launched? This is why Intel is keeping SGX in their Xeon processors despite killing it off in the consumer-oriented series: servers benefit greatly from the ability to prove to clients that they are well behaved.
I mean, I can see the advantages on a corporate network, or on just any network that you control. I may want to ensure that none of the devices on my home network have been deeply compromised, so I may want to take advantage of the technology. But I feel like making this technology a normal part of a user-facing operating system risks normalising it to the point that companies will start abusing it to e.g. lock access to online resources to Windows/Mac only (potentially even leading to ISPs using it to require you to use approved operating systems to connect to the internet under the pretext of improved security).
Because it is the end of electronic personal property, and the permanent removal of the ability to opt out of whatever ToS MS or Google decide you must sign to participate in society?
It's called "defense in depth". A signed, attestable boot path is how you get defense in depth against malware attacks. Period. All computing is headed this way.
If you were responsible for the security of your enterprise network, would you vehemently fight for your users' right to run the ransomware executable they just got sent over email?
I'm aware of that, and those computers do not need to use any of these features. But if you're a FOSS project, why should you be advocating for security features to not exist when they provide value for a significant part of your user base?
Smh, the FUD is suffocating in here. And really, a round of applause for everyone butting in to say they don't need this. Do you have a habit of doing this for all features?
Sorry, but I expect my Linux install to be at least as secure as my Windows (Pro) installation. And without this, it's not. It's that simple. In a few years, most of you will be benefiting from this; it will become table stakes and the FUD will subside.
It's hard to really read this thread. If you care about user freedom, this sure ain't got nothing to do with it. Other than giving me the freedom to have a more secure computer.
Just sad to see the FUD cause people to just get so activated without even understanding the stakes at hand. (Think. It's not like MS is watching this feature to decide whether or not to allow user-key-enrollment on the ARM Surface).
Great, that's cool. I'm concerned about accessibility to general computing too. But this article is about an optional feature in a piece of FOSS software that is only as good or bad as the distro using it.
Please understand: you devalue your point by demonstrating either that you don't understand the layering at play here and where the real risks are, or that you don't care to advocate in any sort of meaningful or compelling way. Instead, you just repeated FUD and waved your arms more. Again, about something we probably agree on, but which is simply not at hand here.
I mean, the issue that you're worried about? How about the millions of capable smartphones being tossed because of closed bootloaders? Android manufacturers, Google's Nest and Chromecast products, and Apple's iOS devices are far greater offenders here, and more insidious ones. My 3-year-old phone that works damn fine is considered literal garbage by Google (no security updates means I use Lineage, which means I can't bank, watch Netflix, use credit card apps, and more). Microsoft has never used security features to invalidate my old hardware. Ever.
Again, totally tangential to whether or not systemd has first-rate support for this (mostly because at this point it's about UX and distro mechanics, so that users and distros can freely secure their computers; the functionality to just lock-er-down has been here for a decade). This issue is about capitalism, economic incentives, social politics, etc. But I don't expect that conversation to play out on this site much.
Finally, Pixel and Chromebook prove that secure computing AND user freedom CAN be respected and can be user-empowering. Please, if you care, advocate for their models. (Which, again, I can also do on all x86 Microsoft devices and Microsoft Surface ARM, at least, today).
"Considered attack scenarios and considerations:
Evil Maid: [.. ] physical access to a storage device should [not] enable an attacker to read the user’s plaintext data [..]. [or] allow undetected modification/backdooring of user data or OS (integrity), or exfiltration of secrets."
Am I misunderstanding, or is this the only attack scenario listed?
Seems like a lot of work, complexity, and potential problems of all sorts to fix... something quite minor?
I mean: 1) Get a lock? 2) Will it actually stop an Evil Maid? I'm thinking of stuff like physical key-loggers, hidden cameras, etc.
> Seems like a lot of work, complexity, and potential problems of all sorts to fix... something quite minor?
Probably because the main goal isn't actually solving that problem. That's just the excuse used to let the trojan horse in and convince people to hand complete control of their devices over to corporations.
It's possible to encrypt all of /boot while using SecureBoot. That was my solution. It's a little annoying to type the passphrase at boot, but it's entirely doable.
I did the whole TPM thing previously on a Debian-only machine and it worked great until it stopped for some reason and I haven't had time to revisit. But on my latest machine I wanted to make sure Windows with SecureBoot worked and I was in a rush. So it's encrypted /boot for now.
I figure the threat is now that someone could clone the drive and brute force the passphrase and then tamper with /boot.
>Most popular Linux distributions generate initrds locally, and they are unsigned, thus not protected through SecureBoot (since that would require local SecureBoot key enrollment, which is generally not done), nor TPM PCRs.
You can sign the initrd and check it using (signed) GRUB, but yes, you need local key enrollment. Maybe making that easier is the solution, though? Instead of relying on Microsoft to be charitable with its keys.
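Conceptually the check is nothing more than verifying a detached signature over the initrd image before handing control onward, roughly like the sketch below (using the Python cryptography package with hypothetical file paths; GRUB's own verification uses GPG-style detached signatures rather than this API):

```python
# Rough sketch of a detached-signature check over an initrd image.
# Purely illustrative -- not how GRUB implements its signature verification.
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import padding
from cryptography.hazmat.primitives.serialization import load_pem_public_key

def initrd_is_trusted(initrd_path: str, sig_path: str, pubkey_path: str) -> bool:
    """Return True if the detached RSA signature over the initrd verifies."""
    with open(initrd_path, "rb") as f:
        data = f.read()
    with open(sig_path, "rb") as f:
        signature = f.read()
    with open(pubkey_path, "rb") as f:
        pubkey = load_pem_public_key(f.read())
    try:
        pubkey.verify(signature, data, padding.PKCS1v15(), hashes.SHA256())
        return True
    except InvalidSignature:
        return False
```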
>No rollback protection (no way to cryptographically invalidate access to TPM-bound secrets on OS updates)
Revoke the key used to sign either grub, the kernel or the initrd?
>Unified Kernel Image
UKIs are systemd-specific and mostly a joke AFAICT. Linux already supports embedding an initrd into its EFI binary. Why make a new thing?
The main object of this whole article is to make PCRs contain hashes of the current system state. Its only advantage is that it can be used to restrict some TPM access. To do that, it tries to introduce a new way of doing things which adds nothing for the average user. This is mainly useful for distributions that want complete control over the boot process (most Linux distributions do not). A distribution can already do most (everything?) of what is suggested using rotating keys/key revocation locally, but this would introduce the possibility of forced attestation of local state. It's a plus for big organizations, but I fail to see how this improves the state of Linux for the user. At best it's an alternative to local signing (which is already possible); at worst it's an entry point into attestation of local state (DRM).
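Mechanically, "PCRs contain hashes of the current system state" just means each measured component is folded into the register by hashing the previous value together with the component's digest, so the final PCR value commits to the whole ordered boot chain. A simplified model (real firmware extends digests into hardware PCRs, per hash bank):

```python
# Simplified model of how a TPM PCR accumulates boot measurements.
import hashlib

def extend(pcr: bytes, measurement: bytes) -> bytes:
    """PCR_new = SHA-256(PCR_old || SHA-256(measurement))"""
    digest = hashlib.sha256(measurement).digest()
    return hashlib.sha256(pcr + digest).digest()

pcr = bytes(32)   # PCRs start at all zeroes
for component in (b"firmware", b"bootloader", b"kernel", b"initrd"):
    pcr = extend(pcr, component)

print(pcr.hex())  # changes if any component, or their order, changes
```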
I can’t tell if Poettering hasn’t read Brave New World, so the title is innocuous, or he has, and the title is frightening.
That said, I’m in the camp of: this is good, and a lot of the comments here are FUD (aka Poettering probably hasn’t read Brave New World, or at least this isn’t that).
IMHO we should maintain GNU/Linux/BSD systems as tools that can free us, not entangle us or turn us into guinea pigs.
The world is already full of proprietary systems, including the ones produced by the employer of systemd's lead developer. WSL has improved a lot over the past years; he should be focusing on that, or at least using it as a testing ground.
Not confining new technology like systemd has led to an endless stream of CVEs to deal with over the past years; this could have been avoided by not allowing a tech prototype to bypass community adoption and impose itself as an ego-driven standard.
I've never understood the need for all the complexity.
Security should be built from the ground up around two physical ports. One is file storage, like an SD card, and one is a hasher/checker for the blobs that are read from that storage device. One of the blobs should be the CPU microcode. The system can be expanded with additional (tag, blob) pairs.
There. Solved every solvable use case.
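In code, the checker is basically just this (a toy sketch; all names and values are made up):

```python
# Toy sketch of the proposed (tag, blob) checker: the device only releases
# blobs whose hashes match an allow-list baked in at provisioning time.
import hashlib

def sha256(blob: bytes) -> str:
    return hashlib.sha256(blob).hexdigest()

# Hypothetical storage contents and the allow-list derived at provisioning time.
storage = {"cpu-microcode": b"microcode blob", "bootloader": b"bootloader blob"}
allowed = {tag: sha256(blob) for tag, blob in storage.items()}

def read_blob(tag: str) -> bytes:
    """Hand a blob to the host only if its hash matches the allow-list."""
    blob = storage[tag]
    if sha256(blob) != allowed.get(tag):
        raise ValueError(f"blob {tag!r} failed integrity check")
    return blob

print(read_blob("bootloader"))
```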
Okay, I lied. I do understand why there is a need for all the complexity. Some sysadmins and DRM believers think the complex solution makes their lives easier by being remotely controlled and being "good enough".
Because nobody could ever care enough to tinker with 'baked in' secrets, and if they do, it's above your pay grade.
Nice try :). Computing built out of cryptographically secure cons cells.
But no, this doesn't solve anything, because the root problem doesn't have anything to do with technology. The root problem is that people need to work together despite having conflicting goals and interests.
Like, e.g., the major issue with trusted computing isn't trusted computing. Everyone would like their devices to be secure against unwanted software doing bad things. The conflict is over who owns the device in the first place. I'd like to own my PC, the very one I'm writing this comment on. So does Intel, Dell, Realtek, Microsoft, my bank, every news site, half a dozen governments, and countless criminals specialized in exploiting digital technology. Whether or not TC is good for me hinges entirely on the answer to the ownership question.
As for complexity - it comes from the industry incrementally evolving technological solutions to what's a philosophical/sociological problem that doesn't have a theoretical solution yet (and might be unsolvable in general).
Depends. In theory it is fine, in practice it is a big ball of mud.
I'm starting to like Apple's approach more. It allows you to have multiple OSes installed, each with a different level of trust. In UEFI, the equivalent would be having Secured-core-like security for one installed instance, 3rd-party-UEFI-enabled Secure Boot for another, and no Secure Boot for yet another installed OS. Unfortunately, in UEFI it is all-or-nothing instead.
Right, it was secured (from a foreclosure auction). My collection is fairly modest, and getting more dated every day, so it was a bit of a curveball to run into something without the usual BIOS. I think I sent it to Goodwill.
I applaud Poettering's systemd projects as a masterful career move at the same time that I shake my fist at them, and at how they build a path to undermine free computing.
They recentralize, and fragilize, the free software ecosystem.
They build features that are not requested, with the premise of small, incremental benefits, while taking away choice by making the pre-existing standard of interoperable, decoupled components difficult-to-impossible to continue to use.
This development pattern enforces deeper integration with their one blessed way, and it tramples on the culture of the free software ecosystem that the projects take part in, in all but ultimate aim. They do things that should not be done, although he is effective at creating and pursuing opportunities that advance the interests of Red Hat and now Microsoft.
I like that free software has let me build and explore things without getting sign-off and approval from an authority, while the existence and strength of free software means I get to enjoy some of its benefits even when using devices made by companies that do not pay forward their free/open-software founding roots: specifically, the developer escape hatches for system configuration that macOS and Windows still need to provide for competitive reasons.
The ease of use, the breadth of adoption, and the creeping (and unnecessarily coupled) integration with widely used software are all small pennies, which "no one should object to", thrown out to entice us in front of the steamroller of centralized control over plausibly all running software (boot, OS, and applications), a steamroller that makes the founding fears of the free software community more likely to come to pass.
My workaround for this problem was to use an unencrypted USB flash drive for the EFI and /boot partitions. It also contains the encryption keys; the main reason was to avoid entering passwords.
Works pretty well; the only problem is that a kernel upgrade fails if the USB key is not mounted.
When I travel, there is no way to decrypt my computers, since the keys are with me. When I arrive home, I insert the keys, start all the computers, upgrade packages (automated), and remove the keys...