The problem is Office. It is really a two-sided problem.
1) Libre/OpenOffice was never satisfactory. It is ugly and it crashes. I've tried using it many times since 1999, and it gets worse and worse. A pity, because StarOffice was a perfectly usable alternative back then.
2) Office-like products are addictive in corporate environments. People love to make ransom-note style documents and presentations. An old typewritten document seems way more professional in direct comparison... People add so many 'hairs' to documents that not even Office 365 can open them anymore, so you have to resort to native MS Office. And sometimes it must even be the Windows version, because the Mac version does things a little differently and messes up a really complicated document. (The second-to-last version for Mac consistently broke when review mode and table changes were mixed, and I worked in an environment where review mode was extensively employed.)
Any migration away from Windows would have to tackle these problems before even touching the operating system. The rest of the desktop UI is fine in Linux; people mostly use the browser anyway. Google Docs is an option - no documents more complicated than GDocs can handle should be allowed.
I would actually prefer that documents were all generated, either by software or at least written in a diff'able markup like HTML or LaTeX. But I know this is impossible to impose on the layman.
I've observed the same but regarding fancy Office documents my experience has mostly been with amazingly complex Excel spreadsheets.
People who have no programming background, no knowledge of modern tools like jupyter somehow are very proficient at crunching data and making graphs in Excel.
I've seen some outlandish things, like all invoices being generated from Excel templates with data gathering done in VBScript. Or an entire datacenter mapped out in Excel, like a poor man's RackTables.
Granted, the VBScript was done by a programmer, but that's a tiny slice of what I've seen done in Excel.
So if you're going to force the people I work with into Google Docs, then you need to offer an alternative they can understand: either a devops department that can script these things for them easily, or training in new tools.
Basically, M$ tried hard over the years to convert them back; they even sent Ballmer, CEO at the time, to convince them, but failed. Then M$ moved their German headquarters to Munich, and the city's mayor changed (now from another party that is M$-aligned). The mayor engaged an M$-owned agency to assess the current state, and they suggested switching to Windows 10 and Office, which will cost 25m+ more. Now you can put one and one together. That's exactly what the news outlet Heise.de did: they did a lot of objective investigative research and uncovered the scandal. Guess what happened? ...Nothing.
Google Docs (and other cloud services) as the enabler of the "open source" desktop is a bit ironic though, considering it is neither open source nor do we even have control over the binaries.
>Google Docs is an option - no documents more complicated than GDocs can handle should be allowed.
At least in my experience, GDocs is rapidly eating into Office. Deployment is cheaper and easier: all you need to do is send a link and someone can edit your document. It works everywhere.
It’s OK on generic hardware. By “generic” I mean a CPU & GPU that are more than 2 years old, USB mice/keyboards with a standard set of keys/buttons, maybe USB mass storage, no printers, no webcams, no custom hardware.
Agreed, compatibility with MS Office is always "the stumbling block" for the average user. When the Munich transition was made, LibreOffice (OpenOffice) didn't handle Office compatibility too well. These days compatibility is much better. But that is not the whole issue: too many people sling .docx files around without really understanding what they are doing. Document collaboration more typically means emailing files back and forth and, at best, using the review functionality.
In my experience even reasonably computer-literate people need to be asked, repeatedly, to send me documents as .PDF, not .DOCX. I hate to think how hard that would be when dealing with hundreds of uninterested public servants.
> I would actually prefer that documents were all generated, either by software or at least written in a diff'able markup like HTML or LaTeX. But I know this is impossible to impose on the layman.
I agree with that. Regarding MS Office compatibility however, you don't need to stick with Libre/OpenOffice. There is also SoftMaker Office.
Yeah, for software engineers, scientists, 3D animators and web developers, Linux on the desktop is arguably superior, but for typical office users it's a train wreck, unless they are using Emacs org-mode, which requires a significant investment in training.
I worked with one of the architects of that (many moons ago). He was a fervent believer in OSS and its use in the enterprise. From what I could glean, they struggled with user interface issues - end users didn't like having to relearn how to use their very-similar-but-not-quite-the-same-word-processor software. As soon as he left (and it was never clear which came first) they decided to return to the Microsoft fold.
From what I could glean, they struggled with user interface issues - end users didn't like having to relearn how to use their very-similar-but-not-quite-the-same-word-processor software.
I find this very interesting. We are all familiar with less tech-savvy users who are only comfortable with what they know.
However, Google Docs is a fierce competitor of Microsoft Office. Android/iOS replaced Windows for a lot of regular users. Chrome crushed the competition, including Internet Explorer.
Why are these examples different? Is it because these alternatives did not try to emulate Microsoft's products, and hence didn't create the impression that 'it's almost the same, but glitchy'? Or was the experience of these products simply so much better than Microsoft's counterparts?
I have sort of an underlying philosophical theory that open-source projects that start out as attempts to clone a popular commercial package will ultimately fail. The reasoning goes like this: even if you achieve a good clone, the commercial project will move along and you will not be able to keep up. Your desire to catch a user base that is resistant to change will actually encourage you not to keep up; at first this looks like a kind of success, but a few years down the road you will realize that you are terribly behind.
As an example, look at LibreOffice. It is widely marketed by OSS enthusiasts as "just like Office" and "just as good as Office," but it struggled to meet feature parity while remaining stable early on. Then, MS transitioned Office to the Ribbon UI. This was absolutely hated by a fairly large group of users, and it became a marketing point, at least in my circles, for LibreOffice that it had not made such a transition. This pulled in more users and looked like a success story. Years later, though, Office users have adapted to the Ribbon UI and discovered its advantages, and when you compare Office to LibreOffice, LibreOffice seems hopelessly out of date.
I think KDE is another example of this phenomenon as it was commonly seen as the best way to transition from Windows. But as a project it seriously struggled to retain interest after Microsoft continued to evolve their desktop environment in big ways, and now that I think they've successfully divorced themselves from any "clone of Windows" objectives they're still not really able to make up for the lost steam.
For just a last example, I would point to The GIMP, which has successfully navigated the transition from "a kind of clone of Photoshop" to "a kind of clone of a very old version of Photoshop."
Regarding catching up, you are right. It is always difficult, if not impossible, to catch up with proprietary systems. I always wondered why the OSS community followed that path for so long instead of focusing on what the user really needs and wants. Both GNOME and KDE, for instance, made the big mistake of ditching their very successful previous versions in favor of something many users didn't like.
Fortunately, and that's a big point for Linux, the user decides the further direction. KDE has become very usable again, and Gnome is on the way. In the Windows and Mac ecosystems the users have no choice.
Regarding Limux, I followed the Limux project pretty closely. I think this project failed only for political reasons. This conjecture is supported by the fact that most reports about Limux in Germany published only opinions of technically clueless political leaders while the opinions of the actual users were barely mentioned, if mentioned at all.
The point is, Limux had to fail by any means necessary, since MS could never afford a Limux success. If Limux had been successful (and it actually looked promising), it would have been an example for the whole world to follow -- if only for reasons of license costs and source control.
I wholeheartedly agree with all of that except GIMP. The UI in GIMP is horrendous, and the only reason it's successful, IMO, is that it fills the gap left by an expensive and confusing product; lots of people need some of the functionality but don't use it for their job.
I use it quite often (and loathe the interface), but I'll never pay for Photoshop because I don't need the pro tool, and after 10+ years of GIMP I don't want to shake things up.
To me it's exactly the opposite. I am glad that KDE remained a usable desktop and did not follow the Win 8 GUI disaster.
I also appreciate that LibreOffice sticks to the classical menu system and doesn't overload the window with a multitude of icons, most of which are barely used.
As you implicitly identify, the problem is that LibreOffice isn’t better than Word.
Users learn new things all the time, and are strongly motivated to do so when it offers real benefits, and hold a real grudge when it’s for cost savings (or an ideological concept they don’t identify with).
A FOSS MS Office killer would be more like Google docs or Etherpad, and would be way easier to open source across platforms, and maintain collaboratively.
It could offer huge benefits beyond authoring and collaboration, such as open APIs so departments can automate bureaucratic review processes, or publish to blogs and websites.
Spend money to get your system into schools, because a huge proportion of workers will adamantly HATE anybody who tries to make them learn something new after their first few years working.
Apple figured it out. Microsoft figured it out. Google is figuring it out. GNU hasn't quite gotten there.
No, what workers hate is when they have to learn something new for no reason. If you give them something new that actually improves their lives, makes it easier for them to do their jobs, they'll embrace it (usually).
That doesn't happen often, because employers and decision makers rarely look at things from the perspective of the employee, preferring instead to make their own jobs easier at the expense of others.
That's a good point, although when you maliciously box out competitors from being easily compatible with your product once your audience is locked in...
Well, it starts to smell like antitrust behavior, honestly. These changes WOULD make people's lives better and easier, if they were allowed to be compatible with the products those people were previously using. To prevent that, incumbents do everything possible to stop any new/different systems from interacting with the ones that they sell.
And the whole point is that that would not be an issue if people were willing to hold their noses for a damned second and migrate to systems that ARE accessible and open and easy to use with various products from a plethora of different vendors. That way, the barrier of 'pointlessly changing things' will be vastly reduced in the future.
You say the managers are short-sighted and lazy, but so are the workers. It's only human.
Of course. We are familiar with the concept that, in general, people will act in their own short term self interest. An open, compatible computing world sounds great to you and me because we are computer people with heavy investment in the future of computing. This is not the case for most workers or their managers and executives.
Yeah, but what I'm saying is that a major reason why it doesn't wind up working for most people is because large incumbent players abuse their market position to a staggering degree, in order to lock all of those people into their platform.
They rely on the fact that most workers will resist changing from their systems, no matter what the executives want. That's how you get hooks into a client's flesh and roughly drag them out of any semblance of a free market. It just seems like gobsmackingly reprehensible behavior.
Being comfortable with what you know is one thing. The other is that "they are just unwilling to learn something different" is a comfortable narrative if you are an advocate, but oftentimes too much of a simplification -- at least based on what I observed when geeks interacted with users.
Google Docs, Android and iOS, for that matter, worked hard on making everything intuitive for the average person. They are easy to start with. They are less frustrating than Microsoft products for the use cases they are meant for. In other words, someone with influence and power on those teams made sure users' needs were cared about in a gazillion small details. And they look good.
Many geeks are quick to write off users and assume they are dumb, instead of trying to understand what those people do or need. As far as the geek is concerned, the user is doing some stupid generic thing in a word processor. If something does not work, it is an unimportant edge case that the user should not care about, because the geek doesn't need it. People who say it is all about users being lazy generally never valued what those users do, and don't consider half their needs legitimate. I mean, people were "just not willing to learn something new" when they didn't jump from Excel to OpenOffice, but I swear to god that Excel was both faster and easier to use at the time, and OpenOffice really lacked features that people I knew actually used.
Very-similar-but-not-quite-the-same word-processor software may mean that the report you used to generate within an hour suddenly takes three and still does not look good enough. Clunky may mean twice as much clicking for often-performed tasks. It may mean that often-used features are awfully complicated or hidden from sight. Or that it is generally slower in things that are visible while fast in things that don't matter.
You can't just write users off as lazy or stupid to their faces, and you can't get offended and then aggressive when they talk about their frustrations (which is what I have seen geeks do).
> Google Docs is a fierce competitor of Microsoft Office.
Only if one is happy to hand all one's private data to Google, and only if one's Microsoft Office knowledge amounts to simple typewritten letters and basic data in tabular format.
Google Docs and MS Office probably have better UX testing protocols than OpenOffice et al. FLOSS tends to skimp on tests because there are lots of people willing to be coders and nobody who wants to do product testing. Unfortunately that's a hard problem to fix, I think, although elementaryOS seems to have something up their sleeve.
A lot of Linux stuff feels "unpolished", even though lots of people have tried to polish it. But without data, the gatekeeper of the repository only has their natural discretion when determining which contributor is best at polishing the UI. What is true is that this isn't usually devastating to the user experience: anyone who has enough goodwill towards the idea of FLOSS can use free software UIs and be happy about it, but without that, the rough edges might be putting people off.
If you collaboratively produce documents at your workplace, Google Docs wins by a large margin. Microsoft products have way too many issues when multiple people are editing the same large (100-200 page) document.
Edit: The takeaway for FOSS is that the product has to be lower frustration than the alternative for it to win out.
That sounds like a nightmare. Google Docs is barely powerful enough for writing a three-page letter. I can't imagine writing a collaborative 200-page document with it.
LaTeX + git would be the ideal, but my coworkers would shoot me for suggesting it.
The main issue with a wiki is that it's hard to give one to a customer for long-term use. A PDF works much better for this, so when the customer wants to review the hundreds of thousands they spent on consulting a decade ago, they can.
Divide your document into chapters, have one tex file per chapter, plus a driver file that runs the whole document (by \include'ing the individual chapter files).
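A minimal sketch of that driver-file layout (file and chapter names here are invented for illustration):

```latex
% main.tex -- the driver file that assembles the whole document
\documentclass{report}

% \includeonly{chapters/methods}  % uncomment to compile just one chapter

\begin{document}
\include{chapters/introduction}  % each chapter lives in its own .tex file
\include{chapters/methods}
\include{chapters/results}
\end{document}
```

Because each chapter is its own file, git diffs and merges stay confined to the chapter someone actually edited, and `\includeonly` lets a collaborator rebuild only their chapter while keeping page numbers and cross-references consistent.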
Exactly. I shudder at the thought of training an office manager to use LaTeX or git.
Open source has all the technical pieces to do a good job here, but they are rarely integrated seamlessly into a program where someone can be highly successful just by clicking on things that look like what they want.
Not at all. If you do basic coordination of who is editing which section when, merge conflicts are exceedingly rare. I've collaborated on multiple 10-ish page documents (research papers) with 3+ people using LaTeX+git and it has worked just fine.
Or you can use Google Docs, where the whole document lives together, basic coordination is not needed, and no one has to be disciplined about how often they push, etc.
Coordination is a bother; not having to do it is a big advantage.
Chrome was so much better than IE or Firefox when it came out. Android/iOS replacing Windows happened because, before that, Windows was the only option; then something came along that better suited people's needs, and they rolled with it.
LibreOffice, on the other hand, is a clusterfuck of awkward, clunky, unintuitive, and dated UI design. It's not even comparable to MS Office or Google Docs in terms of usability, and that's before you add the extra layer of unfamiliarity. And now Google Docs doesn't cost money, so why would you bother using LibreOffice unless you were forced to?
Chrome was so much better than IE or firefox when it came out.
While I never saw Chrome as better than Firefox, that's just a matter of taste. I don't think the average user's sense of quality had anything to do with its rise. It was a very heavy marketing campaign that, if I remember correctly, included TV ads, installer links on Google-owned websites, and even spyware-style side installs that came with popular programs.
Edit:
~10 years ago I definitely remember having to support users who accidentally installed Chrome and didn't even realize something had changed, except that they couldn't use some awful IE-only webpage they needed.
I switched to Chrome pretty early on. Firefox was pretty unusable then. Typically, with more than 8 tabs or so, some damn ad in some damn tab would call Flash, and the whole thing would hang.
Chrome used a separate process per tab and put Flash in its own process so it wouldn't hang the entire browser. I was quite impressed and told everyone I knew how much better it was.
Even today I get the occasional query from a windows or mac user who wants the "good" browser installed instead of IE or Safari. Yes, novices still can't install their own browser no matter how easy it is.
Windows and Office have changed so much in the last years that I thought it would be just a perfect setting for people to move to Linux and LibreOffice.
But lots of people stick to whatever has an MS logo on it.
I had an interesting conversation once with a relative who wanted Windows on a tablet. He argued that it would be easier to configure Wifi. I just had to point out that he couldn't configure Wifi on Windows either. Still, he thought it would be easier.
There is a lot of hyperbole, idealism and straight-out lying around this story, from both sides unfortunately.
All idealism aside, it is important to acknowledge that there were (always will be?) issues with documents made in MS Office sent by e.g. other departments or by citizens. After all, the workers are supposed to handle applications and such.
The departments were promised free Linux based software solutions made from scratch to cover their workflows. This has actually nothing to do with Linux itself, but it seems that software never fulfilled the requirements.
It appears that Microsoft spent quite some time and money convincing the decision bodies and the workers that none of these problems would have happened with MS Windows. All idealism aside, if you are going to keep handling MS Office documents even partially, there is no way Linux will be satisfactory.
(Not necessarily from anyone officially involved, but you could hear it from the advocates on many forums, and still can. The German Heise forum, for example, is very bad for this, unfortunately.)
<strikethrough>
Your statement is the kind of thing I was stabbing at with my critique: a hidden (shallow) argument that going with LiMux is worse than going with MS, and that it would have been wiser and cheaper to stick with MS in the first place?
Why can't such a thing (arguably a complex matter) be discussed in a mature way, without unfounded arguments that are just reiterated anecdotal wisdom and mostly stabs at the other guys? This is not a we-vs.-them thing! And you should stop believing everything you're told by the guys in your group.
</strikethrough>
I was taking this the wrong way. Nevertheless, we should stop arguing with the idealists and their idealistic, unfounded arguments, which basically take the discussion down to an unsolvable level. Let's discuss objective (measurable, or at least verifiable) arguments instead.
"Let's discuss objective (measurable or at least verifiable) arguments instead."
Agreed.
And I think, if this had been done from the beginning, especially by the Linux crowd, then maybe they would still have Limux, or would be on their way to having a stable variant by now.
I mean, with all that money it definitely should have been possible to adjust LibreOffice in the needed ways by now.
But instead there were wrong expectations and then disappointment.
Well, the expectation that everything would be and work the same. Particularly regarding interoperability with existing workflows (MS Office), this was either a lie or incompetence.
I was hoping for references and not vague assumptions that there were such claims. If you really have some quotes, as you mention even "particularly regarding interoperability", just refer to them.
If you look at the history, you see they did a study comparing going with MS vs. open solutions already before 2003: https://en.wikipedia.org/wiki/LiMux So where are the lies in that timeline?
If somebody does not want a software system, or in some cases, wants to sabotage it, this is not a technical problem. No technical solution will fix it.
It might be fixable, but not with a software update.
Sometimes you have to update the people to make the software work. I’ve watched a company’s staff sabotage an ERP system because it made them more accountable.
Yes, but at some point we need to ask why people don't want Linux. Apple's macOS & iOS, Android, even Windows 8 and 10 introduced radical changes; it was a bumpy road, but in the end people did start using them.
It is a technical problem. Linux on the desktop is just worse than both Windows and macOS, in terms of available software (can we stop pretending that LibreOffice is even remotely comparable to MS Office...), basic drivers and OS services (WiFi etc.), and UI design.
To a certain extent, I think the lack of applications, driver compatibility, and "familiarity" are excuses trotted out by Linux Desktop evangelists.
Don't get me wrong, they definitely are a problem, but they'd be solved problems if people actually wanted to work with a Linux Desktop. The thing is, people don't.
I grew up in the 90s, and I was even a Linux and OSS evangelist during my stupid teenager years. I really wanted to make a Linux Desktop work. I still do want a viable OSS desktop. But what I found was that the community built up around Linux, and the Linux Desktop in particular, seems intent on making things suck as much as possible. I can hypothesize a wide variety of reasons why, and I suspect anyone who's spent a significant amount of time working with OSS communities can as well, but the upshot is that pretty much nothing they do is conducive to producing a good product. What's worse is that they largely refuse to recognize any problems they might actually have, technical or otherwise, preferring instead to believe they are a perfect beacon of The Way Things Should Be, that everyone else is doing it wrong, and blaming Windows's dominance for their failures.
I think you're sincere, but your post sounds like a pretty good troll. Lightly attacks a wide group of people while not being overly aggressive and doesn't actually make any concrete point (like point out those problems) so it's easy for others to read in it their own opinions and defend it.
Not OP, but to bring this into concrete terms: the multiple incompatible UI toolkits, and especially the GTK rewrites, couldn't have helped.
I remember first starting out, trying to decide if I should use KDE or GTK. And this choice would affect how well your apps worked, in ways that are totally non-obvious if you aren't a UI programmer.
The vocal response you get on the internet when you ask why any product is failing is usually some variation of: "It failed because the company/developers didn't do things I wanted!" However, I believe the truth is less subjective than that.
>Yes, but at one point we need to ask why people don't want linux
The long tail of software, plus the fact that the alternatives are good enough. I happen to be a Linux guy, but when I bought my computer, it came with Windows. If I use it to browse the web, play games or type Word documents, why should I switch?
Back in the win95/98/3.1 days, Linux's main advantage was stability. How many old-timers spent hours working on a Doc or WordPerfect file, to lose it all because of a GPF? Linux was such a breath of fresh air back then.
Honestly, the "year of the Linux desktop" ended when XP came out.
No, the year of the Linux desktop ended a few years later:
- Laptops became hugely popular, and Linux was never great at plug-and-play hardware handling.
- PulseAudio was part of the fix that wasn't
- KDE 4 and GNOME 3 happened (even though the Plasma desktop was stabilized soon, things like Akonadi and Nepomuk managed to destroy much of the appeal of KDE software's unique tight integration)
- systemd happened. Mentioning this will trigger rabid supporters among the HN crowd, but the fact remains that this is at least a PR disaster on all accounts.
The end result of the changes of the last decade is that Linux installations have become extremely opaque even to experts while effectively staying just as brittle as before.
GNOME 3 was mostly controversial for the "old timers", who were upset that GNOME turned 180 degrees (it would be like lxqt making their next release look like... GNOME 3). Same thing with systemd.
Of course someone got triggered by the presence of the word "systemd". I am surprised it didn't take longer. But that was not my point at all.
The fact is that all attempts to fix the Linux user space have essentially failed to deliver. The additional complexity added over these years did effectively nothing to move the Linux desktop into the 21st century, let alone this decade.
I was extremely hopeful around 2003 to 2005 that Linux might carve out a corner of the desktop market. Now I find myself pondering switching to Windows full time after dual-booting for two decades.
"Additional complexity" is the ticket. The Linux desktop tried so hard to chase some strawman of the "average user" over the past two decades, while at the same time keeping its identity as a Unix system, that it ended up building a nightmarish Rube Goldberg machine that is completely opaque in its operation even to a lot of the people claiming to be experts in using it.
They sacrificed "simple" on the altar of "easy" and, unsurprisingly, it didn't get them anywhere. Simple is easy to work with, and easy to reason about. That means that even when you screw up, it will be pretty obvious why, and when it breaks it will be pretty obvious to fix. It also usually means it'll be pretty flexible. DOS was simple, but not easy, for instance. Windows has been trending this way too.
Of course, if you ask me, the Linux community always preferred complexity anyway. Take package managers: they only exist to manage complicated dependency relationships. Instead of asking why they have such complicated dependency relationships in their software, or why that causes so many problems for them, they instead wrote a package manager to handle the whole complicated mess; and that package manager was such a rousing success that no one ever wrote another one because it had so elegantly and completely solved the problem. Sigh
> Take package managers: they only exist to manage complicated dependency relationships.
Most dependency relationships are very simple. The package manager isn't there as a bandaid over complexity, it's there so I don't have to do repetitive simple downloading by hand. A system could also come with everything installed, or all packages could include their dependencies, but both of those are huge wastes of space without being much simpler.
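To illustrate the point that most dependency resolution is simple, here is a rough sketch (package names are made up) of the core of what a package manager computes: an install order where every dependency comes before the things that need it, and shared dependencies are installed only once.

```python
# Sketch of a package manager's core job: ordering installs so that
# dependencies come first. No version constraints or cycle detection --
# this only illustrates the simple common case discussed above.

def install_order(pkg, deps, order=None):
    """Return a list of packages with each dependency before its dependents."""
    if order is None:
        order = []
    for dep in deps.get(pkg, []):
        if dep not in order:
            install_order(dep, deps, order)  # resolve the dependency first
    if pkg not in order:
        order.append(pkg)  # install pkg once all its deps are in place
    return order

# Hypothetical dependency graph: two packages share one library.
deps = {
    "editor": ["gui-toolkit", "spellcheck"],
    "gui-toolkit": ["libc"],
    "spellcheck": ["libc"],
}

print(install_order("editor", deps))
# libc appears once, before everything that needs it
```

The shared `libc` is resolved a single time even though two packages depend on it, which is exactly the deduplication the thread is debating against bundle-everything approaches.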
What you're saying is that it's a walled garden, an app store. Well then it sucks, because of all the problems with those, including but not limited to not being very open for developers to get into without jumping through hoops, and therefore having limited application choices based on whatever the package repository maintainer decided to do or not do.
All packages including their dependencies is the right way to do it (AppBundles, AppDirs, PortableApps, whatever you want to call that methodology). It's 2017; you can't tell me that sharing a 400k libsomebullshit.so between the 2 applications that use it is worth all this added complexity.
Bonus: It's easy as hell for developers to deploy this way. Of course, it does require a defined and stable base system to rely on, and we know how much the Linux community loathes that concept.
Still, kudos to AppImage for at least trying to do the right thing.
Package management very much does imply a walled garden. You can install packages from anywhere... as long as they conform to your package format... and they aren't incompatible with the way your distro's repository does things... and they aren't too old... or too new... did we mention there's like, 200 distros?
Yeah, totally flexible that.
And since when the hell can you choose where to install packages? Unless you're talking about very niche things like GoboLinux, or stupid workaround shenanigans with chroot and/or large amounts of symlinks. It's possible this is a thing that has changed and I never noticed, because nobody ever does it. However, I doubt that very much, given OSS developers' love of hardcoding paths at compile time.
People routinely distribute Electron Apps, which are bundled with an entire browser. Minecraft comes with the entire JRE bundled. Doom 2016, as far as I know, doesn't do anything like that but is something like 70GB anyway. 200MB is not a big deal, and for my money the complexity and limitations introduced by a package manager more than outweigh the extra space lost in the few cases where a large dependency might be used by more than one application.
> You can install packages from anywhere... as long as they conform to your package format... and they aren't incompatible with the way your distro's repository does things... and they aren't too old... or too new...
You have to have a standard to make it work. But you need standards to make portable apps work too. There's no reason a package standard has to be more complex.
> And since when the hell can you choose where to install packages?
My argument is that it's possible to make a package standard that can do it, and be simple. It's not inherent to package managers that these things be hard.
> the complexity and limitations introduced by a package manager more than outweigh the extra space
In some cases, sure, but I'd have a bunch of OS installs that suddenly triple in size or more and that's not worth it to me.
The package manager is mostly a win. I almost never have problems installing things on Linux. I remember lots of problems (usually driver related) for Windows, back in the day (disclaimer: last version of Windows I ever touched was Windows 7).
The problem for me is this weird concept of tying my application updates to my OS updates. If I want the latest Visual Studio I don't need to update my whole OS first.
But without adding untrusted repos, updating clang somehow requires updating everything.
> If I want the latest Visual Studio I don't need to update my whole OS first.
"apt-get install visual-studio" would do this, it would only update dependencies of visual studio, which is pretty identical to what a stand alone visual studio installer would do.
> The problem for me is this weird concept of tying my application updates to my OS updates.
The only alternative seems to be bundling what should be in the OS with the application, which is why your printer driver is a 600MB download. This way is much more secure, though, because the OS pushes security updates; having random apps with random old binaries on your computer is a growing security threat. The Equifax breach was an example of this: had the library they were using been provided by the OS, it would have been much more likely to be patched.
> But without adding untrusted repos updating clang Somehow requires updating everything.
What do you mean here? Updating clang should only require a handful of developer tools to be updated.
> When I install VS I don't want other applications sharing dependencies with it to get upgraded or change.
Generally they won't change, you'll just get the security updates you would/should have installed anyway.
> No because the newer clang is not in the repo.
It sounds like you're adding a more bleeding-edge repository that contains much more than just clang, is that correct? On Debian-based distros I believe this can be done by pinning, but it's been quite a while since I've done that; from memory it was a bit of a pain to set up correctly.
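For anyone curious, the pinning setup being recalled above looks roughly like this on a Debian-style system (a sketch; the suite name and priority value are just the conventional ones):

```
# /etc/apt/preferences.d/unstable  (sketch)
# Keep everything from unstable at a low priority so it never
# upgrades packages on its own during a normal dist-upgrade.
Package: *
Pin: release a=unstable
Pin-Priority: 100
```

Then you pull just the one package explicitly with `apt-get -t unstable install clang`. Whether the result is actually installable still depends on that clang not dragging in a newer libc, which is exactly the cascade complained about upthread.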
To understand the complaint I think you need to step back a bit. When someone wants to install a program on their box they are told to run: apt-get install XYZ (or through a GUI wrapper). You install it and everyone is happy.
A year later a new version comes out. The only way to get it is to upgrade your distro to a version that includes it in the repository.
Sure, you have other alternatives like PPAs, .debs, and good old fashioned make install, but now you're off the happy path and into expert land.
The same applies to Linux: try to use a more modern compiler in an LTS release.
It is as easy as compiling from source, assuming the source is available, until you start the cascade of updating dependencies just to be able to compile it and make it run.
I haven't used desktop Linux in a while, but when I did it was common to install Firefox and OpenOffice via the package manager. The Ubuntu apt repository had thousands of desktop applications. Has that changed?
Sure, you can install Firefox or LibreOffice using apt. You can also download them. You probably won't install an IDE (such as Eclipse, IntelliJ, etc.) using a package manager, but by downloading the latest version from the project's website.
Ok, but then we haven't found a Windows advantage.
I think Windows is a pain to use [1]; everything about it seems second-rate to me. My theory is that familiarity matters a lot: now that it's been years since I last touched a Windows PC, everything seems weird, difficult and awkward about it. Someone equally unfamiliar with both Windows or a user-friendly Linux like Ubuntu wouldn't have a harder time with the latter.
"These things don't have to work like this" is a fine sentiment -- and one with which I agree on principle! -- but I suspect package managers themselves started that way: "this is unnecessarily difficult, let's fix it". My bet is that with almost every solution, you'll trade one set of problems for another :)
----
[1] To be fair, I'm talking about Windows circa Windows 7. Just as I'm annoyed by people who claim "you cannot game on Linux" because they last touched a distro a decade ago and keep talking about problems from the past, so I should admit I've no idea how usable Windows is these days. Maybe it got better... just like Linux did.
I wasn't making an argument for Windows' advantages, though there are many. To list a few: modern display server and compositing window manager that can handle multi-gpu multi-monitor seamlessly, doesn't explode when you add or remove a display, can recover from graphics driver crashes and install and use graphics drivers without a reboot, and doesn't drop all your windows while doing so.
When people say you can't game on Linux, they mean it the same way one might say you can't game on a phone: it isn't impossible, it's just that you'll have a worse experience, and there's nothing Linux offers to them to justify the extra hoops to jump through and other downsides. If you've made "gamer" part of your identity, that's probably pretty important to you.
But my point was really, who gives a shit how Windows does it? Aim to be better than Windows where you can instead of trying to offer a cheap Windows knock-off.
And honestly, package managers don't fix anything, they just cover it up with automation and bring a lot of limitations along. For instance, you can't install 2 different versions of the same application (unless the package repository is specifically designed so you can do that for certain applications), you can't install applications to alternate paths, and you can't move applications around. Until AppImage came along Linux had no concept of a portable application, which is usually a pretty trivial thing in Windows.
But instead of saying "wait, why are we installing applications this way? can we just do that differently and eliminate the problem?" they went ahead and wrote a complicated centralized database that tracks nearly every file on the system and its relationships just to try and avoid issues that, frankly, have no reason to exist in the first place. DOS had tons of applications and not a goddamned one of them required a package manager to get installed.
Sorry if this is a little ranty, it just irks me that so often OSS developers (Linux Desktop devs especially) seem to choose the most complex possible answer to any given problem.
I've no problem with multi-monitor setups with Linux, but it doesn't matter because nobody cares about seamless multi-gpu multi-monitor setups, compositing window managers, or systemd (which someone else bizarrely mentioned in the comments elsewhere, as if it mattered) or any other such technicalities -- that's not the reason regular people use or don't use an OS. We believe this sort of stuff matters because we're nerds and know too much for our own good, but it honestly doesn't matter to regular users. They want to use their PCs to watch movies, browse the internet, play the occasional game, edit some simple documents, use a spreadsheet, and get some stuff done. It certainly doesn't matter for the kind of uses discussed in TFA: administrative software.
> [gaming on Linux] isn't impossible, it's just that you'll have a worse experience, and there's nothing Linux offers to [gamers] to justify the extra hoops to jump through and other downsides
"Worse" is relative. Worse than Windows? True. OS X is also worse than Windows for gaming, but it's still praised by a lot of people. When people say "you can't game on Linux", they are simply mistaken or believe things that were true a decade ago. I know this because I own and play lots of AAA games on Linux. Sometimes I have to manually edit a config file or install a library... the horror! (Still nothing compared to getting some games to run on DOS back in the day... do you remember HIMEM, DOS4GW and all the other black magic you had to do just to get some goddamn games to run?)
> If you've made "gamer" part of your identity, that's probably pretty important to you.
If you've made "gamer" part of your identity, none of this matters. You either own tons of consoles and/or have a very expensive PC setup. You're also in the minority, so again: who cares (in the context of this discussion)?
I confess I find the rest of your rant about package managers very hard to follow. Your comparison to DOS seems bizarre to me. DOS was nothing at all like a modern OS, and applications have changed (and multiplied!) since the old days, as I'm sure you remember.
> They want to use their PCs to watch movies, browse internet, play the occasional game, edit some simple documents, use a spreadsheet, and get some stuff done.
Except for the getting stuff done part, those are tablet users (I mean, they really only need a browser and may as well be using a tablet or ChromeOS). This sort of strawman "average" user, and the obsession with catering to them while at the same time insisting on your own superiority, is what has doomed the Linux Desktop to what it is today. You can tell yourself all you want that it's everyone else's fault Linux Desktop has a market share south of 4%, but it won't make it true.
Also digging the "works fine for me!", straight out of the Linux Evangelist playbook from 1995.
I never argued the Linux Desktop was superior to anything else, just that I liked it better, and that I found Windows (circa version 7) less useful, buggier and harder to use. I concede that it has probably gotten better in later versions (certainly, it used to be that a graphics crash crashed the whole system!), but I've had enough of Windows and no reason to use it. You make a good point about tablets, but that's a good argument against general purpose PCs, be it Windows, OS X or Linux.
The average user obviously doesn't use Linux. I'm just saying that they could, because Linux excels at the kinds of tasks they usually need to do (including, but not limited to, casual gaming, installing apps, using a spreadsheet, etc). The reasons why they don't have nothing to do with multi-gpu multi-monitor setups, which the average user doesn't own. It probably has more to do with familiarity, with the relative marketshares of Windows vs Linux, with the lack of good enough Office-like suites, etc -- but I'm not even arguing this is the complete list of reasons, and the issue certainly merits more exploration.
I'm not trying to "blame" anything on anyone, either. I'm just disagreeing with you and saying you're (in my opinion) not making any good arguments for your assertions, for example about package managers.
> Also digging the "works fine for me!", straight out of the Linux Evangelist playbook from 1995.
This is not even an argument. It seems you're trying to pick a fight for reasons I don't understand.
PS: though I would say "owning hundreds of AAA games for Linux" is an actual argument on my part, and not a "play" from any "playbook".
You wrote that "the fact remains that this is at least a PR disaster on all accounts."
No, it is not a PR disaster, and I don't even use systemd. Or maybe, because I don't use it and am not a member of the community that fought that fight, I can freely say it has had zero impact on Linux desktop PR.
I think KDE5 was an even bigger disaster than KDE4. It completely trashed KDE's famously great support for localization, and half the applications still haven't been ported over. And somehow they managed to shoot themselves in the foot with notifications and then drop the ball handling it.
For all of KDE4's problems, and it did have problems, KDE5 just feels like half a desktop.
I play plenty of AAA games on Ubuntu (example: the latest Mad Max), so that's definitely not it. Also, a lot of computer users are not interested in games at all.
The issue is "the long tail". For example, if my favorite game (extra points if it's from 2000 and abandonware) doesn't work on Linux, why should I switch?
If my camera which I already own isn't on Linux, why should I switch?
If I'm running an office and I suspect I'll need a Windows program in the future (almost guaranteed, if you want some off-cloud accounting or billing or business-logic software), why should I switch?
> if my favorite game [...] doesn't work on Linux, why should I switch?
That's easy: you should not switch to Linux because of games. You should switch because you like it better than Windows; if you also happen to like gaming, you will find plenty of games for Linux. Your favorite game might not be available, but then again, you should not switch (to any OS!) because of games.
This is not exclusively a problem for Linux either; it also happens to Mac users. Still, the Mac has many fans, right?
>That's easy: you should not switch to Linux because of games. You should switch because you like it better than Windows; if you also happen to like gaming, you will find plenty of games for Linux. Your favorite game might not be available, but then again, you should not switch (to any OS!) because of games.
That's my point. For the "masses", there isn't much there (see how much they care about privacy), and something to lose.
I first started using Linux because I liked unix, programming and open source; privacy wasn't a concern.
I'm not sure it's an OS "for the masses". I know it works for people who just want to edit simple documents, browse the web, watch movies and play the occasional game.
Note that I was replying to MrFurious' comment about "videogames and MS Office". This may come as a surprise to us techies, but "the masses" do NOT game on PCs at all. Gamers on any platform at all are not particularly mainstream (they certainly weren't until relatively recently). I'm willing to bet most people who game at all either play on consoles or play casual games (which usually work on Linux). Very few people play AAA on their PCs, comparatively speaking. Serious PC gamers who build monstrous high-end machines won't be comfortable with Linux, sure -- but they are not the mainstream.
MS Office and related apps are a reasonable point. I get by with Libre Office, but I'm willing to concede the point it's not on par with MS Office. But gaming? I do not believe gaming is the reason people don't use Linux.
I use Linux every day and practically only play Football Manager, but on Steam there are practically no AAA games. Mad Max, from 2015? No FIFA, no Call of Duty, none of the latest AAA games. That's the reality.
You're simply wrong; there are plenty of AAA games on Steam, GOG and Humble Bundle for Linux. Plenty. It's just not the case anymore that you cannot game on Linux. Also, it's 2017; 2015 is not old.
Here are some AAA games that I own, all running great on a Linux laptop:
- XCOM: Enemy Unknown (the remake)
- Alien: Isolation (impressive graphics)
- Insurgency
- Mad Max
- Victor Vran
- Portal 2
- L4D2
- Dawn of War 2
- SOMA
- Firewatch (awesome graphics)
- Invisible Inc
- Transistor
- Divinity: Original Sin (impressive graphics)
- Hand of Fate
- Shadowrun & all expansions.
... and plenty more. I just picked some that I like (you won't see racing or sports games because I'm not into them), I actually own hundreds of games. Some newer, some older, some low spec, some hardware-demanding.
In this day and age, pretending that gaming on Linux is hard is just FUD. You may get the games later than on Windows, and not every game, but still there are plenty.
XCOM: Enemy Unknown - 2012
Alien: Isolation - 2014
Insurgency: 2014
Mad Max: 2015
Victor Vran: 2015
Portal 2: 2011
L4D2: 2009
Dawn of War 2: 2009
SOMA: 2015
Firewatch: 2016
Invisible Inc: 2015
Transistor: 2014
Divinity: Original Sin - 2014
Hand of Fate: 2015
Shadowrun & all expansions: 2013?
Oh, with Linux you can play old games (and indie games) and comfort yourself thinking that Linux is the best platform for a gamer and that 2015 is not old in the video game world; meanwhile, your friends play Horizon Zero Dawn on PS4 or the latest FIFA 18 on Windows 10.
Do you really think a gamer will swap Windows 10 for Linux to play ancient games while being unable to play the latest AAA games?
Finally, I have games like L4D2 on Linux, but if I want to play games like GTA or Assassin's Creed I need Windows.
> Oh, with Linux you can play old games (and indie games) and comfort yourself thinking that Linux is the best platform for a gamer and 2015 is not old in the video game world
It's 2017 now. These games are not old and certainly not ancient. Most of the games I listed are AAA. You said "you cannot game on Linux" and I've proven you wrong. I just listed some games I like, I'm not about to list my whole collection for you to ridicule it.
I never said Linux was the best platform for a gamer, and certainly not the best platform for playing this year's games, but that was not your initial remark either, was it?
> Really do you think that a gamer will change windows 10 for linux
No, but since nobody was arguing this, it seems you're talking to yourself?
Please try to address the actual point, otherwise I won't bother answering you anymore.
Unfortunately, updating the way people look at things is a lot harder than updating code. It takes years of work to convince people to buy-in to things, and a lot of it is advertising and promoting, not just education and training.
Maybe they would have had better luck if they didn't create their own distro? That seems like a lot of unnecessary overhead, unless it's just marketing for "Ubuntu + preselected set of packages"
> Along the way, it was reported that 20% of the users of Limux were not happy or satisfied with it
I would guess you could survey users of any computer/OS combination and get that response. That means 80% of users are happy (or at least, not unhappy) which is phenomenally good, really.
As technical folks, we tend to forget that its all about politics and the hidden under-belly of politics is money, lots of it. MS and their consultant partners have a massive vested interest in keeping organisations subservient to their vision of IT. Limux and open source are certainly in the public's better interests, but that runs contrary to the plutocrats' interests. Guess who wins?
> "Do we sometimes harm these migrations by volunteering?" Migrations to free software are generally
> driven by individuals, either inside a public administration or by a parent for a school. Those
> individuals start bringing free software in and do lots of work (for free) to make it all work.
> Problems arise and there is no budget to bring in others to help out; people burn out and then
> everything fails. Instead of coming to the conclusion that not having a budget led to free software
> failing, the organizations often decide to "get a budget and do it right". He thought it might make
> more sense to try to get the budget for the free-software project, instead of volunteering.
When reading I thought this was a rather insightful passage. I've noticed this at varying levels by helping others out (family) or by watching others put in overtime for small fixes: if you don't set a budget, then free == charity == not as good as the free market could do in most people's eyes. Does anyone have any additional anecdotes or data into how this happens? I get that people attribute cost to value, even if the two aren't related at all, but I wonder how this affects institutions at a higher level.
>Limux will be replaced with Microsoft clients. It doesn't make sense, he said, because the city already had a strategy to move away from desktops to "bring your own device" and other desktop alternatives.
There are no desktop alternatives at work. You BYOD for checking email not doing work.
And phones and tablets are much more proprietary than desktops.
The thing is, I love Linux on a desktop (if by desktop we mean not-a-server, and thus laptops are in context as well); in fact I prefer it over OSX... right up until I plug in an external monitor, or try to do a video chat only to realize a driver literally does not exist for my MBP camera, or my IDE just kinda flickers randomly for no discernible reason, etc.
Walled-garden hardware like Apple has set up certainly has advantages, but given that Windows is installed on a far greater variety of hardware than Linux distros, I'm kinda surprised it's more stable. I guess other than the obvious fact that everybody working on a given Linux distro is doing it for free (right? that's my impression).
Most of my troubles vanished when I switched from Ubuntu to Debian. Only been about two weeks but loving it so far. Might finally be able to abandon OSX!
(note: I'm a total noob when it comes to OSs and whatnot - not that kind of programmer!)
>I guess other than the obvious fact that everybody working on a given linux distro is doing it for free (right? that's my impression).
People working on most components of the desktop Linux stack are being paid for it. Think kernel -> video drivers -> systemd -> wayland -> GNOME. All of those groups have paid employees. They are evidently understaffed.
Red Hat, Intel, AMD, SUSE, Samsung, Canonical etc.
Obviously, a lot of paid work is done on low-level plumbing, because Linux is hugely profitable as a server, infrastructure, or embedded operating system for a lot of companies. Graphics infrastructure (KMS, Wayland, X11, etc.) is useful for companies who embed Linux in various appliances (set-top boxes, etc.). Red Hat probably invests in GNOME because of Red Hat Enterprise Linux Workstation and dogfooding.
There is a LOT that can be done to improve desktop linux, but I think in this case, at least according to the article, the difficulties were largely political, not technical.
In the general case, though, I think the biggest problem is that the community that uses and works on the Linux Desktop doesn't actually want it to improve.
If you're an open source developer, for instance, why would you work to improve or maintain software X when you can create an alternative that does things slightly differently? That looks a lot more impressive on a resume. And I'd swear many Linux Desktop users wish it was even more complicated and fragile either because they love troubleshooting or just so they can virtue signal that they're leet. I mean, why else would /r/unixporn exist?
I think this is largely an issue with X rather than users wanting to show off, I think wayland (while not perfect) can really help with developing easy to use desktop enviroments.
Also, not everybody who uses "complicated" setups cares about "leetness"; personally I use i3 (plus a lot of mods and customizations) because nothing compares to the ease of use and functionality for me.
I personally want more Linux desktop adoption; I've switched my parents over and am trying to convert my brother.
One of the things that has struck me with getting used to i3, even after a decade and a half "faking it" by mostly running apps maximized anyway, is that it is changing my thinking, not least due to the API. E.g. I recently started writing a text editor (because, why not), and it struck me that I really don't have a reason to support multiple windows. I added "splitting buffers" today by having it exec i3-msg to start another copy of itself.
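The i3-msg trick is worth spelling out, since it shows how much the WM's IPC can replace in-app window management. A rough sketch (it needs a running i3 session; `myed` is a stand-in for the hypothetical editor, while the i3-msg commands themselves are real i3 IPC):

```
# Ask i3 to split the focused container vertically, then spawn a
# second editor instance in the new slot; i3 does all the window
# management the editor would otherwise have to implement itself.
i3-msg 'split v'
i3-msg "exec --no-startup-id myed \"$FILE\""
```

The editor never needs its own notion of windows or splits; it just shells out to the WM and opens another copy of itself.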
I think what is really hampering Linux on the desktop is trying to play catch-up with Windows and OS X instead of rethinking the processes.
I suspect you'd have been better off switching them to ChromeOS. Most people do not need a desktop in 2017. There was only a brief window in the late 90s and early aughties when "most" people did, but that was only because of the internet. We have better consumer-oriented devices for that now. Desktops are for people who do work, hobbyists, developers, and the kind of gamer who is basically the hotrod-guy of the computing world. If someone actually wants to make the Linux Desktop a good desktop OS, they'll have to stop trying to make things "easy" for the strawman of the "average" user and instead make something for those people.
While I largely agree with your observation, the problem with ChromeOS is that not everybody wants to ship their behavioral data in real-time to Google. So, there is is still a place for a basic Linux-based family desktop that just runs a privacy-friendly browser.
That's not a desktop, it's a kiosk. Linux is actually fine for these kinds of single-tasking embedded use-cases.
But seriously, who does that? I don't know anyone who primarily uses a desktop computer just to browse the web. That's what tablets and phones are for. ChromeOS is just one of those with a bigger screen and a keyboard, which is why it is perfect for them.
"Family desktop" was only a thing in the brief period between the internet becoming popular with consumers and the first iPhone. Outside of that period, multi-user desktops (or microcomputers, if you recall the 80s) are an extreme edge case.
Don't forget about the educational market. ChromeOS is very popular in schools because of how locked down it is, and computer labs in schools are one of the few environments where multi-user desktops thrive.
> If you're an open source developer, for instance, why would you work to improve or maintain software X when you can create an alternative that does things slightly differently? That looks a lot more impressive on a resume.
One could argue that this is one of the driving causes behind the current JavaScript churn madness.
That wasn't my read of the situation. Bottom line is, the administration realized that the promised cost savings might not materialize, and that most of the IT problems could be resolved more cheaply by just moving to a better version of Windows.
> the difficulties were largely political, not technical
Not all of the difficulties were political. Quoting from the article: "... Along the way, it was reported that 20% of the users of Limux were not happy or satisfied with it; other reports had the number at 40%. ..."
My last two laptops have given me the best software/hardware experience ever, even better than Tiger/Leopard days on MacBooks, and they run Linux.
Three tricks: i) cherry pick hardware (all Intel); ii) minimal userspace (tiled wm + firefox + emacs + terminal); iii) distros you build & customize from the ground up (arch/slack/nix/guix).
All those together remove a lot of complexity vs a regular desktop setup, so few moving parts that can break. Furthermore, the ability nix gives to set up the system / configure packages declaratively and have the possibility to rollback is a total killer feature.
Moving to MacOS or Windows would feel like a huge step backwards due to this feature alone. The absence of decent tiling window managers on those platforms is also a dealbreaker for me.
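For anyone who hasn't seen it, the declarative setup mentioned above looks roughly like this on NixOS (a sketch; the package selection is just an example):

```nix
# Sketch of a NixOS configuration.nix. The whole system is described
# here instead of built up by imperative installs.
{ config, pkgs, ... }: {
  environment.systemPackages = with pkgs; [ firefox emacs ];
  services.openssh.enable = true;
}
```

`nixos-rebuild switch` builds a new system generation from this file; if the result misbehaves, `nixos-rebuild switch --rollback` (or picking an older generation from the boot menu) reverts it. That's the rollback killer feature being described.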
Netbooks were each released with their own Linux flavour; thanks to the lack of a stable driver ABI, it was almost impossible to replace it with a more common distribution.
I have an Asus 1516B, sold with Ubuntu, on which I had to fight Firefox to get proper hardware acceleration and to make it wake properly after hibernation.
A Dell developer laptop could have been another option, but they are always unavailable in European shops.
Intel GPUs are no match for proper 3D acceleration; case in point, what they just announced with AMD.
Windows can also roll back without issues; just keep snapshots and the shadow file system configured.
> A Dell developer laptop could have been another option, but they are always unavailable in European shops.
Never ever order from the Dell website.
Always go to either (a) a Dell distributor or (b) Dell directly (most likely you'll pay less than the price on the website).
Under notebooks, 13", the simple version is the newest one; besides the fact that it's a little bit noisy it's really, really great, and it even works perfectly with the Dell WD15 dock.
Let me ask you something: doesn't it bother you that something like Nix is necessary for the problems Nix claims to solve? Why is a system so fragile as to require what Nix provides?
If you stay in the bleeding edge some setups might break. E.g. package A doesn't work with library B anymore unless you switch back to another version.
I have very rarely experienced this problem, and in 8 years time I have never had to roll back any critical packages like the kernel, firefox, emacs or any system utilities.
It's not something that bothers me because it's so rare, and I understand it's the product of very complicated setups and bleeding edge configurations.
Nix is mostly valuable for simple declarative configurations, the ability to alter package build recipes quickly, and for package and setup reproducibility.
That's my whole point though: why does this library conflict occur at all? Why do you need such a convoluted tool just to have a reproducible setup? Doesn't something about the nature of that situation seem off to you? Like, Nix solves a problem that maybe only exists because the system itself is fundamentally flawed?
The problem is that after a decade of FOSS zealotry (1994-2007) I couldn't really go back to Windows. Windows is now fundamentally incompatible with how my brain works. I have used macOS desktops from 2007 onwards, though I am annoyed by general bugginess (Preview.app seems to have degenerated even further from the already stripped version in Sierra) and Apple's Mac price inflation.
At home I have started using the Dell Linux workstation I had lying around (mostly as a headless server) more and more over the last few months. I run Wayland and replaced the nVidia card with an AMD FirePro, and it has mostly been smooth sailing.
(For some reason, despite being a rolling distro, Arch seems to be more stable than most distributions I have used in the past.)
Of course, I wouldn't dare to switch to Linux on laptops (yet).
FWIW, I just set up a sweet Arch + Xmonad rice on my mid-2012 MBP and it's working really, really well. Almost negligible footprint (<200 MB for sure, with a few background daemons). Arch has truly been a breath of fresh air; every package that I installed has a reason. No cruft.
Government users do not generally need sophisticated desktops. Most of them probably handle the same documents and work with the same systems all day, every day with few deviations.
You can pry my Linux desktop out of my cold dead hands. I've been using it for almost 10 years now, and I couldn't imagine having to deal with Windows telemetry and terrible performance, or Apple's outrageous prices. Yes, it takes longer to set up, but my desktop setup fits me like a glove. I always feel slightly handicapped when I have to use Windows at work.
There is one thing that would make me switch, and that's Apple's IMEs for Chinese characters, which are a lot more sophisticated than the Linux offerings.
Anyone who read the article would know that. They'd also know LiMux comes with a Linux desktop client, which is referenced about 10 times in the article.
Uhh... what? The only significant performance difference is in graphics & media, which is where Windows handily bests Linux. What performance are you talking about?
> So maybe Windows 10 beats Gnome 3 (although I don't know if it's true, I don't have Windows), but it definitely can't beat "Linux"
This gets back to the question of what performance are you even talking about. It sounds like you're talking about the performance of random apps here rather than of the OS or any of its fundamentals.
No matter what DE you run, though, you're not fixing Linux's graphics problems, so Windows will always beat Linux on that front until the driver & compositor situation radically improves. Wayland simply can't compete with WDDM today, and for some god unknown reason you still get X11 on many distros by default anyway.
Running on a Raspberry Pi isn't some monumental achievement, either. The Pi has quite a lot of power. Windows IoT Core runs just fine on it, and despite the name it still has a graphics stack and all of that (more "Windows lite" than "IoT core").
Windows 10 must be amazing then, because from Win2k to Win7 I've dual booted and always found Linux to be much faster and smoother for everything. "Random apps" form the bulk of what we do with operating systems. Windows being more performant at some level is all very well and interesting, but I don't care, because it's sluggish when I browse the internet, watch videos, or use IDEs.
Graphics? Like what, exactly? Often when I see frame-rate comparisons for Nvidia cards on Linux vs. Win10, they are pretty similar, with each side winning some. Media? Like video playback? Even a Raspberry Pi can play back 1080p. Any random Linux desktop with an Nvidia card can play back 4K with low CPU usage.
Taking several seconds to a minute to open the goddamn Start menu. It can take a few seconds if you've got an HDD, a minute if your internet connection drops at the wrong time. Pauses and stutters in many other places. It's pretty clear MS only dogfooded Windows 10 on high-end devices.
You probably haven't used it since pre-Vista if that is your experience. Microsoft changed a lot of internals that really improved the situation on Windows. For instance, WDDM can recover from a graphics driver crash, with the added benefit of allowing you to install and use new graphics drivers without rebooting or even losing your open windows.
Yes. Absolutely. UI design doesn't work in the OSS "bazaar model". It needs a pope who is generally regarded as infallible, and deferred to even when not. Alternatively, a rather strict set of rules like Apple's HIG might work, possibly created and enforced in a process like Wikipedia's.
I've repeatedly cringed when watching some eager young guy set his grandfather up with Linux. It may have improved by now, but having once seen a 75-year-old type commands into a terminal, as dictated over the phone, to install Flash has thoroughly ruined any instinct to proselytize I may have had.