This is out of the question for any normal user, and it's not reliable at all either. When you install a jailbreak you are by default giving up on a plethora of apps that do proper integrity checks; they won't work at all even if you try to bypass them (banking apps especially).
Older iOS versions have other problems as well. Safari is not an app that you can update, it's part of the OS. A lot of websites will simply stop working for you if your version is too old.
The Newsroom post is just better; I don't understand the need for this.
Are you going to verify that the alternative article is accurate and doesn't misinterpret the official information (I mean in general, not about this announcement specifically)?
I find it irritating when people quote semver like it’s some kind of law.
It’s a random protocol someone came up with that some other people decided to follow. It’s nowhere near a de facto standard for version numbers: plenty of software has non-semver version numbers (in fact, this applies to all software I have worked on so far in my career!)
I think it's pretty clear semver doesn't really have a place in modern software. Except for Microsoft, no one cares about backwards compatibility, which semver is all about. All software is continually developed, every release contains both new features and bug fixes, which violates semver. The vast majority of software has a meaningless "1." or "0." tacked to the front to try to satisfy semver, until the project gets bored of typing it and just drops it.
I think most software should just have date-based versioning. It fits the development models we actually use far better, and actually communicates useful information to the user, unlike semver. Are you running a kernel from 2016? Might wanna update that.
SemVer has significant benefits in the package-manager world.
Many argue "why don't you just read the changelog?", but that is not the point. A package manager should know whether it can safely update a dependency to a later version, without someone always manually pinning a suitable version.
It is everywhere: in Arch Linux, Debian, pip, and Cargo.
They all rely on version numbers that should themselves describe the impact of a change. If there is no standard, updating is always risky: on Debian it means manually testing every package; on Arch you accept the risk. If everyone followed a versioning schema, you could trust the number in most cases.
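As a rough sketch of what a package manager gets out of this, here is the kind of check it can do on version numbers alone, assuming publishers follow SemVer. The function names are illustrative, not any real package manager's API:

```python
# Minimal sketch: judging update safety from SemVer numbers alone.
# Assumes publishers actually follow SemVer; helper names are hypothetical.

def parse(version: str) -> tuple[int, int, int]:
    """Split 'MAJOR.MINOR.PATCH' into integers."""
    major, minor, patch = (int(p) for p in version.split("."))
    return major, minor, patch

def safe_to_update(current: str, candidate: str) -> bool:
    """Under SemVer, an update is safe if the major version is unchanged
    and the candidate is not older than what we have (0.x is always risky)."""
    cur, cand = parse(current), parse(candidate)
    if cur[0] == 0 or cand[0] == 0:        # 0.x.y: anything may break
        return False
    return cand[0] == cur[0] and cand >= cur

print(safe_to_update("1.4.2", "1.5.0"))   # True: minor bump, same major
print(safe_to_update("1.4.2", "2.0.0"))   # False: major bump may break us
```

This is the whole value proposition: no human reads a changelog here, yet the manager can still refuse the `2.0.0` jump automatically.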
My original message was a bit of a joke, which some missed, but SemVer has a place and a need.
> All software is continually developed, every release contains both new features and bug fixes, which violates semver.
A combination of them is not a violation. The overall impact of the change should be described with the correct increment; it does not matter how you categorise the content of the change.
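The "largest impact wins" rule above can be sketched in a few lines. This is a hypothetical illustration of the SemVer bump rules, not any tool's real release logic:

```python
# Sketch: a release mixing features and fixes doesn't violate SemVer;
# you bump the number for the largest impact in the release.

def next_version(version: str, breaking: bool, new_feature: bool) -> str:
    """Bump MAJOR for breaking changes, MINOR for new features, else PATCH."""
    major, minor, patch = (int(p) for p in version.split("."))
    if breaking:
        return f"{major + 1}.0.0"
    if new_feature:
        return f"{major}.{minor + 1}.0"
    return f"{major}.{minor}.{patch + 1}"

print(next_version("1.4.2", breaking=True,  new_feature=True))   # "2.0.0"
print(next_version("1.4.2", breaking=False, new_feature=True))   # "1.5.0"
print(next_version("1.4.2", breaking=False, new_feature=False))  # "1.4.3"
```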
It's often a change in an interface. If you build from source, old code simply won't compile against the new, incompatible interface, and won't even get to the testing stage.
The impact comes from the dependencies you are using.
Depending on how the software is built, it can be noticed early, at build time.
But it can also surface only at runtime, e.g. with dynamic libraries, which is hopefully caught at the testing stage.
However, how it impacts downstream, the dependents of your software, is something you cannot test. You can only inform them about it. For people, with a changelog.
For automated systems, with the version number. But if you do not follow systematic version numbering, you fail to inform automated systems: they update to the later version of your software and the dependents break.
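Concretely, an automated system informed only by version numbers might resolve an update like this. A hypothetical sketch (illustrative names, not a real resolver), assuming publishers follow SemVer:

```python
# Sketch: a resolver picking an update from version numbers alone.
# If publishers follow SemVer, "newest release with the same major" is safe;
# if they don't, the number tells the resolver nothing and dependents break.

def parse(v: str) -> tuple[int, ...]:
    return tuple(int(p) for p in v.split("."))

def newest_compatible(current: str, available: list[str]) -> str:
    """Return the highest available version sharing current's major version."""
    major = parse(current)[0]
    candidates = [v for v in available if parse(v)[0] == major]
    return max(candidates, key=parse)

releases = ["1.2.0", "1.3.5", "1.4.0", "2.0.0"]
print(newest_compatible("1.2.0", releases))  # "1.4.0": 2.0.0 is excluded
```

Note the asymmetry: the resolver never reads your changelog; the major number is the only channel you have to it.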
That's "Linux" the kernel, not the operating system. No one building a 'modern' desktop OS on top of that kernel cares. Download a 20-year-old Linux binary and try running it on a random up-to-date Linux distro. Unless it's a trivial program, it will almost certainly fail.
> no one cares about backwards compatibility, which semver is all about.
Interesting. I'd characterise it more as 'responsibly breaking backwards compatibility'. Just a tool to communicate the nature of changes, a really (really) brief summarising changelog - what's included, breaking changes, non-breaking new features, or just misc. fixes?
I quite like it, but I tried to phrase my top-level comment not to be about it specifically. It would have surprised me to learn that Linux used 'semver' - that doesn't mean there's no semantic meaning to the version numbers it uses though.
> backwards compatibility, which semver is all about
It's not about maintaining backwards compatibility. It's about making it explicit when you break compatibility. Whether you do or not is completely up to you - semver doesn't care.
And what's more, the second unofficial motto of the Linux kernel is "break kernel space"; they don't make any attempt at keeping in-kernel APIs stable. So if they followed semver for userspace, they'd be on 1.9000.0, if they followed semver for kernel space they'd be on 9000.0.0.
There's value, albeit less in the minor-vs-patch distinction, I think. So you could imagine a world in which SemVer came first, Linux uses it, and we're on 1.5.19 but usually drop the "1." because we've agreed there'll never be a v2.
Some developers (younger ones?) seem to think that the SemVer spec is a law of the universe, when in reality it's just something a GitHub guy put into the mix in the 2010s.
While it is not the law, it is the only reasonable attempt to solve dependency issues. If the software gets a major release whenever the author feels like it, then the version itself tells nothing about the changes, and dependent software can never estimate the impact of an update. I agree that the Linux kernel is a bit different, since everything sits on top of it, but still.
Semver is a bandaid on a gaping chest wound, best summed up as "major version changes may break everything, or they may not, while minor version changes probably won't break anything, but they might."
It's more of a philosophy than a law, and it can't really be relied on that much; as often the developers themselves can't accurately predict what is a major breaking change and what isn't.
The Linux kernel itself has some baggage from the 2.4/2.6 era that Linus is explicitly walking away from.
Actually, testing is the bandaid.
You always face the same dilemma when updating a software's dependencies: do they break something? Of course, you could manually check every piece of software's changelog every time to see whether a change has some impact.
The problem is that, in Linux for example, one dependency might have hundreds of dependents. Are you going to check all of them manually?
If standardised versioning automatically conveyed this information, a lot of time and testing would be saved.
> often the developers themselves can't accurately predict what is a major breaking change and what isn't.
There is a difference between a breaking change and a bug.
All breaking changes are predictable. If you modify the API, it may break things. If you don't modify the API but it breaks anyway, that is just a bug.
If you add a feature and don't modify the API, but something breaks, it is a bug.
You are not expected to apply SemVer to bugs, because you can't predict them.
Docker containers don't work for most self-hosted solutions, since most self-hosting OSes are security-focused and use FreeBSD instead of Linux to get away from some security vulnerabilities. Docker is a pretty large security vulnerability.
It's better than windows, sure - but I think everyone would agree that shouldn't be the bar.
It's not as easy as "just run the Docker image". Maybe it is if you only want to run a single one, but as soon as you want to run multiple it becomes a very complex job of configuring nginx and Let's Encrypt. It took me several hours to work out how to host Nextcloud and get the nginx config working.
Wow. Thanks for that insight. I went the middle ground and am using a shared hosting provider with great tutorials on how to get things running.
Nextcloud took 5 minutes (or 15 if one includes setting up an SSH key in the web frontend for my account). WordPress was 3 minutes, Matomo also 5 including configuration.
I know that I am using a central service and am not self hosting. But for > 13 years this setup "just works".
I had a masquerading server at home once (back in the early 2000s) and updating, securing and just maintaining it was a hassle.
So to me the current setup is stable, mostly secure (and more secure than I could make it) and balances my needs for control and stability and ease of use quite well.
A bit on how this played out: what Intel calls their first 10 nm node, which in its third iteration has been named Intel 7, was more aggressive than TSMC's first 7 nm node and failed for many years to produce chips economically. Both use 193 nm UV lithography; TSMC then made a node more aggressive than Intel's using some EUV from ASML, and both TSMC nodes worked.
Now TSMC is two major nodes ahead, with 3 nm risk production credibly scheduled for this year and mass production next year, and with Intel said to be buying a lot of that capacity. What was Intel's still-delayed 7 nm node, the first to use EUV, is now named Intel 4. I've not looked at it closely, but something equivalent to TSMC's 3 nm nodes appears to be scheduled as Intel 20A (for angstrom) and 18A, sometime in 2024 and 2025.
I have yet to see anything that convinces me Intel will regain its ability to make state of the art logic chips, which for many generations was one of their most important advantages, allowing them to beat "smarter" CPU designs with their own CPUs being manufactured 1-2 nodes ahead of everyone else. It will be interesting if some day the true story of how this happened is revealed.
Doesn't this mean Intel is done? Intel has never designed chips with innovation that matches the quality of AMD, Apple, and Nvidia. I just don't see how they survive as a fabless company.
> Doesn't this mean Intel is done? Intel has never designed chips with innovation that matches the quality of AMD, Apple, and Nvidia.
That's frankly not true; for example, see them (along with a few others, not including the above companies) reviving the out-of-order execution technology IBM first developed for its System/360 Model 91, which Intel first used in the Pentium Pro.
Culturally I doubt they can shift to fabless, although we'll be able to tell if they're really trying by watching for mass layoffs of managers. I'm also very concerned they didn't recruit a fab guy to be their CEO, although I wonder if no one qualified was willing to take the position. Another argument against that is that their placeholder CEO, who'd previously been their CFO, actually had a clue; I suppose he wasn't part of one of the factions involved in the fab debacles. He decided to go fabless, and his strategy was junked by the new CEO.
In general I don't see a path to victory for them, given my guess that they won't be able to regain their ability to move to new nodes that can make economical chips. Though I haven't investigated the third iteration of their 10 nm node, now Intel 7; I need to spend more time reading SemiWiki etc., and also see whether there's any sign they're going to get their first EUV node, Intel 4, working in any practical time frame. It's already delayed.
On the other hand, AMD has done it again, this time spending a huge amount of their capital on an FPGA company; at best that means they believe they don't have anything better to devote resources to, which is bad news for their AMD64 ecosystem. If you've followed their history going back decades, it was said a while ago that over the long term they've never made money for their stockholders, and they've repeatedly gained serious leads only to blow them; raise your hand if you once used one of the 100 MHz 486DX4s. Just like Intel's cultural problems, I suspect this aspect of AMD's culture will give Intel breathing room sooner or later, as it did when the K8 got long in the tooth.
Apple isn't even in the running unless they decide to start selling their ARM SoCs to third parties. Who gives a damn about those if you're not in the Apple ecosystem, which has its own problems? If Nvidia is allowed to buy ARM, I expect they will destroy that ecosystem, or certainly make it uncompetitive, including by eliminating competition they would otherwise face, thus likely making them complacent.
So what is your final say? You say they have a chance at executing an IBM-scale pivot, but a low one? You also say that AMD's historic inability to generate profits will give Intel some breathing room?
So, intel will recover because it will gain breathing room and it will benefit from a pivot?
It seems like a stretch.
For one, shifting a company used to accommodating internal node users into one that has to retool and shift to TSMC/Samsung seems like a major ask. For two, it seems like major profits are around the corner for AMD. This is a company that has historically had <1% of the markets Intel participates in. Those numbers have grown by multiples, and there is no sign they will stop.
Would I count Intel out? Maybe not. Maybe the US government will provide a lifeline. But there is not a doubt in my mind that Intel could stumble again and lose the manufacturing advantage permanently. That seems like a mistake that would lead to another 5 years of market-share loss.
I certainly have no "final say," I'm just pointing out some of the forces including internal cultural ones that I believe may have effects on the outcomes.
IBM has nothing to do with the business or technical questions, I'm referring to the https://en.wikipedia.org/wiki/Tomasulo_algorithm which was first used in the late 1960s in the floating point unit of the IBM System/360 91 and much later used for the Pentium Pro.
AMD's historical inability to generate profits pertains to its historical inability to succeed for very long; it's an independent measure of that phenomenon. So no matter how well AMD is doing right now, it's in their DNA to fail any time now. That doesn't mean it's going to happen, but I'll repeat that paying a huge sum for an FPGA company is a very bad sign, much worse than their purchase of ATI, which did not end up helping their CPU efforts and if anything distracted from them. Another sign AMD "will stop" is the reportedly lower-quality ecosystem in general, but I have no good reading on that.
Your comment about a "major ask" just repeats why I previously said, "Culturally I doubt they can shift to fabless although we'll be able to tell if they're really trying by seeing mass layoffs of managers...."
How much of AMD's "major profits around the corner" will be put into the FPGA unit? It was an all-stock deal, so at least they won't have to pay down debt on it, but to restate my main point: it's a vote of no confidence in their CPU business, and doing it comes at tremendous opportunity cost.
I've never heard that money was the core problem behind Intel's failure to move to new nodes, so the US government doesn't have the right sort of lifeline to provide. Intel is currently stumbling, and I personally guess that won't stop, but that's based on very thin data and what I see the new CEO trying to do.
Or not, if the denials about the rumor about them buying Global Foundries directly from its owner Mubadala Investment Company are correct. If managed well, which is never the way to bet in tech company acquisitions, that would give them invaluable expertise in the business of being a foundry.
I know they have had fabs, but they decided to go with TSMC recently, and that's what I was wondering about. Going with TSMC doesn't mean TSMC does their designs too, does it?
It may support that, but the tethered hotspot mentioned previously works perfectly well without this library for me on my Mac. When tethered together and hotspot enabled, I can use the iPhone's cellular data with no wifi on the Mac at all.