Ya, there's no way this is the "final" release. Maybe by the core Python team, but it will be forked to fix bugs. Ten years from now there will still be Python 2 code running critical infrastructure at various companies, and the most responsible path to address discovered issues in the runtime will not be "rewrite the application to work in Python 3!" but "upgrade the interpreter to this community-vetted fork of 2.7.18".
Mumble mumble something about conflating languages with implementations.
What's the use of Python 2 if you can't use libraries[1]?
It will only get more difficult to maintain your app.
[1] https://python3statement.org/ - note that many libraries didn't even wait until 2020. It is a lot of work to maintain code with Python 2 cruft. Not all packages are listed there; for example, Django has been Python 3 only since 2.0 (it's currently at 3.0).
> What's use of Python 2 if you can't use libraries[1]?
Unless some Python 3 fanatic goes out of his way to write a Python 2 library-deleting virus, the existing code won't disappear. Also, some of these pledges only limit feature releases; afaik numpy planned to still provide a long-term support version with bugfixes for Python 2. It also helps that Python already comes with a lot of built-in bells and whistles, so third-party libraries aren't always necessary either.
Sorry, I wasn't clear: nothing happens if your application doesn't change, but if it does, sooner or later you'll be forced to upgrade your dependencies (it could be a bug you just found, or maybe a performance improvement you need). If the updated version won't work on your Python, it will be tough. You'll have the choice of either migrating your app to Python 3 or forking the library and backporting fixes.
You might be lucky and someone else might do that for you, but it will get harder and harder with time. Already, according to a JetBrains survey in 2019 (I believe), about 80% of people surveyed use Python 3.
As for numpy, I just checked[1] and the only wheels they are providing for the latest version are for 3.5+; the package also says that it is Python 3 only.
The problem isn't that Python 2 is bad. Python 2 is a fantastic language. The problem is that the maintainers of Python decided to break backwards compatibility and force library developers to support what are essentially two different programming languages.
I don’t think there was any clear path around that, though. The single biggest change was that Python 2 pretended that text and binary data were the same datatype, where Python 3 correctly makes you distinguish between the two. There’s not really a great way to roll out that major change without breaking tons of stuff along the way. And, well, if you’re already making a backward-incompatible version, here’s this checklist of other breaking changes you might as well bring along for the ride.
And that raises an obvious question: why didn’t every other programming language immediately break backwards compatibility when UTF-8 became a de facto standard?
> And, well, if you’re already making a backward-incompatible version, here’s this checklist of other breaking changes you might as well bring along for the ride.
Sorry, that doesn’t track. Treating quoted strings as UTF-8 by default instead of ASCII-or-arbitrary-bytes would have been a small migration that would not have taken over a decade to complete.
Because many of these languages were created when Unicode already existed. Someone listed Java and JavaScript; both of them started from the point that Python 3 tries to reach.
When Python was written in 1989, Unicode didn't exist yet.
As for your second argument, many people bring up Go, which had the amazing idea of treating everything as UTF-8, and it works great. They don't realize that Go is pretty much doing the same thing that Python does (ignoring how the string is represented internally, since that shouldn't really be the programmer's concern).
Go clearly distinguishes between strings (the string type) and bytes (the []byte type): to use a string as bytes you have to cast it to []byte, and to convert bytes to a string you need to cast them to string.
That's the equivalent of doing variable.encode() to get bytes and variable.decode() to get a string.
What Python 3 introduced is two types, str and bytes, with any implicit casting between them blocked. That's exactly the same thing Go does.
The only difference is an implementation detail: Go stores strings as UTF-8, so casting doesn't require any work; the casts exist just so the compiler can catch errors. Go also ignores environment variables and always uses UTF-8. Python has an internal[1] representation and does do conversion. It respects LANG and other variables and uses those for stdin/out/err. Initially, when those variables were undefined it assumed us-ascii, which created some issues, but I believe that was fixed and UTF-8 is now the default.
[1] Python 3 actually tries to be smart and uses UCS1 (Latin-1), UCS2, or UCS4 depending on what characters are contained. If a UTF-8 conversion was requested, it will also cache that representation (as a C string) so it won't redo the conversion next time.
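The Go-cast/Python-method correspondence above can be sketched in a couple of lines of Python 3 (the string here is just a made-up example):

```python
# Go's []byte(s) and string(b) casts correspond to Python 3's explicit
# encode()/decode(); neither language converts between the types implicitly.
s = "naïve"
b = s.encode("utf-8")      # like []byte(s) in Go, but performs a real UTF-8 encode
print(b)                   # b'na\xc3\xafve'
print(b.decode("utf-8"))   # like string(b) in Go
```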
> Because many of these languages were created when Unicode already existed. Someone listed Java and Javascript, both of them started from the point that python 3 tries to bring.
That was me in a parallel thread. Java and JavaScript internally use UTF-16 encoding. I also mentioned C, which treats strings as byte arrays, and C++, which supports C strings as well as introducing a string class that is still just byte arrays.
> As for your second argument, many people bring out Go, that had such amazing idea of using everything as UTF-8 and it works great.
Has Go ever broken backwards compatibility? Let me clarify my second argument: if you are going to break backwards compatibility, you should do so in a minimal way that eases the pain of migration. The Python maintainers decided that breaking backwards compatibility meant throwing in the kitchen sink, succumbing to second system effect, and essentially forking the language for over a decade. The migration from Ruby 1.8 to 1.9 was less painful, though in fairness I suppose the migration from Perl 5 to Perl 6 was even more painful.
Actually migrating from Perl5 to Raku may be less painful than migrating from Python2 to Python3 for some codebases.
That is because you can easily use Perl5 modules in Raku.
use v6;
use Scalar::Util:from<Perl5> <looks_like_number>;
say ?looks_like_number( '5.0' ); # True
Which means that all you have to do to start migrating is make sure that the majority of your Perl codebase is in modules and not in scripts.
Then you can migrate one module at a time.
You can even subclass Perl classes using this technology.
Basically you can use the old codebase to fill in the parts of the new codebase that you haven't transferred over yet.
---
By that same token you can transition from Python to Raku in much the same way. The module that handles that for Python isn't as featureful as the one for Perl yet.
use v6;
{
# load the interface module
use Inline::Python;
use base64:from<Python>;
my $b64 = base64::b64encode('ABCD');
say $b64;
# Buf:0x<51 55 4A 44 52 41 3D 3D>
say $b64.decode;
# QUJDRA==
}
{
# Raku wrapper around a native library
use Base64::Native;
my $b64 = base64-encode('ABCD');
say $b64;
# Buf[uint8]:0x<51 55 4A 44 52 41 3D 3D>
say $b64.decode;
# QUJDRA==
}
{
use MIME::Base64:from<Perl5>;
my $b64 = encode_base64('ABCD');
say $b64;
# QUJDRA==
}
{
use Inline::Ruby;
use base64:from<Ruby>;
# workaround for apparent missing feature in Inline::Ruby
my \Base64 = EVAL 「Base64」, :lang<Ruby>;
my $b64 = Base64.encode64('ABCD');
say $b64;
# «QUJDRA==
# »:rb
say ~$b64;
# QUJDRA==
}
I just used four different modules from four different languages, and for the most part it was fairly seamless. (Updates to the various `Inline` modules could make it even more seamless.)
So if I had to I could transition from any of those other languages above to Raku at my leisure.
Not like Python2 to Python3 where it has to mostly be all or nothing.
> That was me in a parallel thread. Java and JavaScript internally use UTF-16 encoding. I also mentioned C, which treats strings as byte arrays, and C++, which supports C strings as well as introducing a string class that is still just byte arrays.
C and C++ don't really have Unicode support, and most C and C++ applications don't support Unicode. There are libraries you need to use to get that kind of support.
> Has Go ever broken backwards compatibility? Let me clarify my second argument: if you are going to break backwards compatibility, you should do so in a minimal way that eases the pain of migration. The Python maintainers decided that breaking backwards compatibility meant throwing in the kitchen sink, succumbing to second system effect, and essentially forking the language for over a decade. The migration from Ruby 1.8 to 1.9 was less painful, though in fairness I suppose the migration from Perl 5 to Perl 6 was even more painful.
Go is only 10 years old; Python is 31. And in fact Go has had some breaking changes, for example in 1.4 and 1.12. Those are easy to fix since they show up during compilation. Python is a dynamic language, and unless you use something like mypy you don't have that luxury.
Going back to Python: what was broken in Python 2 is that the str type could represent both text and bytes, and the result was that most Python 2 applications were broken (yes, they worked fine with ASCII text, but broke in interesting ways whenever Unicode was used). You might say: so what, why should I care if I don't use Unicode? The problem was that mixing these two types, with the implicit casting Python 2 did, made it extremely hard to write correct code even when you knew what you were doing. With Python 3 it takes no effort.
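A minimal illustration (with a made-up string): Python 2 coerced str and unicode silently and only blew up at runtime on non-ASCII data, while Python 3 refuses the mix up front:

```python
# In Python 2, "abc" + u"def" silently coerced and only failed on non-ASCII
# input; Python 3 rejects any str/bytes mix immediately with a TypeError.
try:
    "légende" + b" bytes"
except TypeError as exc:
    print("refused:", exc)
```

The error surfaces at the point of the mix, not deep in some later codepath that happens to receive non-ASCII input.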
There is a good write up by one of Python developers why python 3 was necessary[1].
> Going back to python, what was broken in Python 2 is that str type could represent both text and bytes...
You know, it’s astounding to me that you managed to quote my entire point and still didn’t even bother to acknowledge it, let alone respond to it.
If they had to break backwards compatibility to fix string encoding, that’s fine and I get it. That doesn’t explain or justify breaking backwards compatibility in a dozen additional ways that have nothing to do with string encoding.
Are you going to address that point or just go on another irrelevant tangent?
There is no migration from Perl 5 to Perl 6, but mainly because Perl 6 has been renamed to Raku (https://raku.org using the #rakulang tag on social media).
That being said, you can integrate Perl code in Raku (using the Inline::Perl5 module), and vice-versa.
Fundamentally, the "right place" here differs between Windows and Linux.
On Windows, command line arguments really are unicode (UTF-16 actually). On Linux, they're just bytes.
In Python 2, on Linux you got the bytes as-is; but on Windows you got the command line arguments converted to the system codepage.
Note that the Windows system codepage generally isn't a Unicode encoding, so there was unavoidable data loss even before the first line of your code started running (AFAIK neither sys.argv nor sys.environ had a unicode-supporting alternative in Python 2).
However, on Linux, Python 2 was just fine.
Now with Python 3 it's the other way around -- Windows is fine but Linux has issues.
However, the problems for linux are less severe: often you can get away with assuming that everything is UTF-8. And you can still work with bytes if you absolutely need to.
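The "you can still work with bytes" part rests on the surrogateescape error handler: on Linux, Python 3 decodes argv and environ with it, so undecodable bytes survive as lone surrogates and can be round-tripped back. A sketch with a made-up byte string:

```python
# b"caf\xe9" is Latin-1, not valid UTF-8; under surrogateescape the bad
# byte becomes the lone surrogate \udce9 instead of raising, and encoding
# with the same handler recovers the original bytes exactly.
raw = b"caf\xe9"
arg = raw.decode("utf-8", "surrogateescape")          # what sys.argv would hold
print(arg.encode("utf-8", "surrogateescape") == raw)  # True
```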
> On Windows, command line arguments really are unicode (UTF-16 actually)
No, they're not. Windows can't magically send your program Unicode. It sends your program strings of bytes, which your program interprets as Unicode with the UTF-16 encoding. The actual raw data your program is being sent by Windows is still strings of bytes.
> you can still work with bytes if you absolutely need to
In your own code, yes, you can, but you can't tell the Standard Library to treat sys.std{in|out|err} as bytes, or fix their encodings (at least, not until Python 3.7, when you can do the latter), when it incorrectly detects the encoding of whatever Unicode the system is sending/receiving to/from them.
> AFAIK neither sys.argv nor sys.environ had a unicode-supporting alternative in Python 2)
That's because none was needed. You got strings of bytes and you could decode them to whatever you wanted, if you knew the encoding and wanted to work with them as Unicode. That's exactly what a language/library should do when it can't rely on a particular encoding or on detecting the encoding--work with the lowest common denominator, which is strings of bytes.
> In your own code, yes, you can, but you can't tell the Standard Library to treat sys.std{in|out|err} as bytes,
Actually you can: you should use sys.std{in,out,err}.buffer, which is binary[1].
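For example (the byte string is just an arbitrary UTF-8 sequence), the .buffer attribute is the underlying binary stream, so raw bytes can bypass the text layer entirely:

```python
import sys

# sys.stdout is a text stream (str only); sys.stdout.buffer is the raw
# binary stream underneath it, accepting bytes with no encoding step.
sys.stdout.buffer.write(b"\xf0\x9f\x90\x8d\n")  # UTF-8 encoding of U+1F40D
sys.stdout.buffer.flush()
```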
> or fix their encodings (at least, not until Python 3.7, when you can do the latter), when it incorrectly detects the encoding of whatever Unicode the system is sending/receiving to/from them.
I'm assuming you're talking about scenario where LANG/LC_* was not defined, then Python assumed us-ascii encoding. I think in 3.7 they changed default to UTF-8.
> Actually you can, you should use sys.std{in,out,err}.buffer,
That's fine for your own code, as I said. It doesn't help at all for code in standard library modules that uses the standard streams, which is what I was referring to.
> I think in 3.7 they changed default to UTF-8
Yes, they did, which is certainly a saner default in today's world than ASCII, but it still doesn't cover all use cases. It would have been better to not have a default at all and make application programs explicitly do encoding/decoding wherever it made the most sense for the application.
> That's fine for your own code, as I said. It doesn't help at all for code in standard library modules that uses the standard streams, which is what I was referring to.
I'm not aware of what code you're talking about. All the functions I can think of expect streams to be provided explicitly.
> Yes, they did, which is certainly a saner default in today's world than ASCII, but it still doesn't cover all use cases. It would have been better to not have a default at all and make application programs explicitly do encoding/decoding wherever it made the most sense for the application.
I disagree; it would be far more confusing if stdin/stdout/stderr were sometimes text and sometimes binary. If you meant that they should always be binary, that's also suboptimal. In most use cases a user works with text.
All the places in the standard library that explicitly write output or error messages to sys.stdout or sys.stderr. (There are far fewer places that explicitly take input from sys.stdin, so there's that, I suppose.)
> it would be far more confusing when stdin/stdout/stderr were sometimes text sometimes binary
I am not suggesting that. They should always be binary, i.e., streams of bytes. That's the lowest common denominator for all use cases, so that's what a language runtime and a library should be doing.
> If you meant that they should always be binary that's also unoptimal. In most use cases an user works with text.
Users who work with text can easily wrap binary streams in a TextIOWrapper (or an appropriate alternative) if the basic streams are always binary.
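That wrapping is a one-liner; sketched here against an in-memory binary stream rather than a real stdout, with a made-up string:

```python
import io

# Wrap an always-binary stream for text use: the encoding becomes an
# explicit application decision instead of a runtime-detected default.
raw = io.BytesIO()
text = io.TextIOWrapper(raw, encoding="utf-8", newline="")
text.write("héllo\n")
text.flush()
print(raw.getvalue())  # b'h\xc3\xa9llo\n'
```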
Users who work with binary but can't control library code that insists on treating things as text are SOL if the basic streams are text, with buffer attributes that let user code use the binary version but only in code the user explicitly controls.
> Mumble mumble something about conflating languages with implementations.
So your claim is that "Python 2" is a language spec, not an implementation? And that there will be future releases of this language spec? I doubt it.
I agree it's likely that there will be people wasting their time maintaining an interpreter fork, but that will not be Python-the-language (a trademarked term BTW), it will be a fork of the implementation.
No, my claim is that, while ceasing development of the language Python 2 is wholly sensible, ceasing development of the implementation Python 2 (CPython specifically) is not (due to the almost certain existence of latent bugs). My "mumble" at the end was meant exactly in reference to that.
I suppose one could argue that the CPython implementation is the language specification. (And I seem to recall hearing that notion somewhere years ago.) In which case, it would not be possible to freeze development of the language without freezing the implementation as well. There are various reasons I wholeheartedly disagree with such a characterization, but I guess there's some self-consistency there at least.
> ceasing development of the language Python 2 is wholly sensible, ceasing development of the implementation Python 2 (CPython specifically) is not
Development of CPython 2 has ended, bugs and all. It's past its end of life, this is well known and has been known for a long time. Any remaining bugs are the problem of the users, not the responsibility of the former developers.
Sure people will fork it and do stuff with those forks, but those will no longer be new versions of CPython, they will be new versions of some-fork-of-CPython.
> So your claim is that "Python 2" is a language spec, not an implementation? And that there will be future releases of this language spec in the future?
PyPy maintains a Python 2 implementation and will continue to do so.
Does anyone know what the main reason is for not updating from python 2? I'm genuinely curious as I don't really know any modules that won't work under Python 3 and I can't really come up with any other blocking changes that would make upgrading that hard.
I did the work for a reasonably sized project recently - a few hundred thousand LOC. It was long, boring, risky work. Let me rattle off some of the tasks.
Audit all strings coming in and going out for encoding issues. Update all dependencies to their python 3 equivalent. Replace dependencies that hadn’t been updated (typically older django dependencies). Use python-future to bulk update incompatibilities. Changes to metaclasses were annoying. Force all uses of pickle to use protocol version 2. I documented some more during the migration on Twitter https://twitter.com/jarshwah/status/1209381850822496256?s=21
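The pickle pinning mentioned above is a common cross-version trick; a minimal sketch with a made-up payload (not data from the project described):

```python
import pickle

# Protocol 2 is the newest pickle protocol Python 2 can read, so pinning it
# lets Python 2 and Python 3 processes exchange pickles during a migration.
payload = {"user": "alice", "scores": [1, 2, 3]}
blob = pickle.dumps(payload, protocol=2)
print(pickle.loads(blob) == payload)  # True
```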
We began getting the code base into a compatible position about 1.5 years earlier. A final push of 3-4 weeks of work got it over the line, with many bug fixes after the deployment.
Other older larger systems will have similar problems at a larger scale.
This isn’t a condemnation by the way. Python 3 is better. The only reason we held out so long was because of the business justification. Once we couldn’t wait any longer it got prioritised.
Many internal tools, for one; platforms, etc. Hard to tell.
One industry example is https://vfxplatform.com/ - they just (this year) moved to Python3, but with some delays, from the site:
The move to Python 3 was delayed from CY2019 to CY2020 due to:
No supported combination of Qt 5.6, Python 3 and PySide 2 so Qt first needed to be upgraded.
Upgrade of both Qt and Python in the same year was too large a commitment for software vendors and large studios.
Python 3 in CY2020 is a firm commitment, it will be a required upgrade as Python 2 will no longer be supported beyond 2020. Software vendors are strongly encouraged to provide a tech preview release in 2019 to help studios with testing during their Python migration efforts.
Active development of Python 2.7 stopped in 2015; that was the time to start migrating. It seems like this application would never have been updated if 2 hadn't been EOL'd.
There are still Classic Visual BASIC programs out there that haven't been ported to VB.Net or C# yet because of how huge they are and how hard they are to port; the companies cannot afford to hire developers to do it for them. The same is true of many old languages like COBOL.
I heard that some places are still using Turbo Pascal for DOS and have to stick with 32-bit machines because 64-bit can't run 16-bit DOS code.
Yes, and you similarly can continue using Python 2.7.18 for the next 10 years; no one expects Microsoft to continue releasing new versions of classic VB. A lot of Python users have weird expectations.
Lots of code is simply unmaintained. The guy who wrote it is gone; it's still running fine, so nobody is touching it. Businesses don't want to take the risk and spend the money to upgrade it. Maybe you don't realize the insane amount of code that is in this state!!
If you depend on an unmaintained codebase where the original developers are no longer available, then that's a substantial business risk by itself.
Too many software development projects are treated as one-off events where people commission them and assume they will work forever without updates. Software requires maintenance, and people who commission software development projects without planning on how they are going to be maintained in the future are taking on risk. Any risk involved in updating that abandoned code in future is a consequence of that decision.
If that's the case, then 2.7.18 will continue working and it's probably a bad idea to port it. A lot of work for minimal gain.
But if you're actively changing the code, maintenance will get more and more expensive. With packages dropping Python 2 support, if you discover a bug in one of your dependencies and the fix is in a version that no longer works on Python 2, you'll need to backport the fix (and maintain your fork) or migrate your code.
This is in part due to a lack of foresight, but you can run into all sorts of weird issues that you'd never think of, like this one: we have a feature in our REST API that can return lists of items as CSV instead of JSON (yes, I know, it sounds weird). It requires no effort from our backend services; the api proxy takes care of it. Unfortunately, something changed with dict enumeration order between python 2 and 3, and so when we first tried to upgrade, the CSV files being spit out had a new column ordering, which of course would have broken customer code that relied on it.
The string-handling changes, while necessary, are also a bear to deal with. Since python is dynamically typed, you need to work to find all the places where you need to add a ".decode()" or ".encode()". If you don't have excellent test coverage already, you're going to miss some, and it'll be a game of whack-a-mole until you get them all... assuming you have actually gotten them all.
> something changed with dict enumeration order between python 2 and 3
Dicts were by definition unordered until Python 3.7 [0], so you were relying on undefined behaviour. If you need an ordered dictionary and support Python 3.6 or below, you should use OrderedDict [1].
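The OrderedDict fix for the CSV scenario described above is a one-line change; a sketch with made-up column names:

```python
from collections import OrderedDict

# Before Python 3.7, plain-dict iteration order was an implementation
# detail, so code that emits CSV columns should pin the order explicitly:
row = OrderedDict([("id", 1), ("name", "widget"), ("price", 9.99)])
print(",".join(row.keys()))  # id,name,price
```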
> something changed with dict enumeration order between python 2 and 3
Enumeration order of dict keys was never guaranteed (until Python 3.7), even on 2. So basically that code relied on undocumented CPython behaviour that was strongly advised against, i.e. it was broken already. 3 simply made the brokenness more visible.
I imagine the reason for not upgrading from Python 2 is the same reason you don't upgrade your car just because there's a new model out. (Or maybe you're the type who does, but I guess you can hopefully understand why others don't do that.)
I think the biggest issue seems to be that you need to migrate the whole thing at once. If it could be done incrementally it would be easier.
There are ways, though: you can incrementally adapt a code base to work on both Pythons. pylint with the py3k option and mypy can also help. There's also the six package, but many people seem to have had good luck with futurize.
There's also something I tried a while ago that surprisingly worked (although it might not work that well on a larger codebase?): basically you can use Cython (not to be confused with CPython) to compile Python 2 code and then include it in Python 3, which would enable migration file by file.
Your existing code works perfectly for now. Management fails to understand why they need to budget a team to upgrade when the project doesn't bring anything new to the table.
Our migration to Python 3 occurred with a new generation of the product. No new development is happening on the Python 2 product, and customers are being migrated off. The vendor who runs our old product (GAE) has promised ongoing Python 2 support, so there’s literally no reason to spend the time or money to migrate it, no matter how long it takes for the last customer to get off the old product.
In my work, it's software that offers a Python 2 module for scripting. I tried the naïve "upgrade" of copying the module into my Python 3 module library, but no dice. The software checked the version of Python, saw it wasn't 2.7.10 (yeah), and raised an error.
Because there's nothing to update to. Everyone who was working Python circa 2.7.0 has switched to working on a new, different language which they insist on misleadingly calling "Python 3" rather than come up with a new name like the Perl -> Raku folks did.
I think you are trolling, but in case you aren't: can you elaborate on what is so different in Python 3? Granted, it is not a drop-in replacement, but it is 99.9% the same thing.
I think describing it as porting to a new language is misleading: on most of my projects, most of the work is a few minutes — run modernize/futurize, check the tests, etc. If the original developers were really sloppy about how they handle encoding, it can take longer but most of the problems I've seen have very little to do with Python rather than the fact that something still running Python 2 likely has significant technical debt issues — especially things like not having test coverage which make it a lot harder to ship changes.
Completely agreed — I would just argue that the “Python 2 vs. 3” argument is a distraction. Java hasn't had as breaking a change but there are still a ton of places running Java 6 or 7 because they like skimping on developers more than getting security updates.
Our product took about a year to port, from getting the go-ahead to the eventual production switch. Running modernize / futurize was like 0.01% of the work.
You're right though, we're fighting our way out of technical debt and switching to python 3 was absolutely necessary. It's forced us to sort out a lot of sketchy string / bytes handling. We do, mercifully, have ~90% test coverage.
I don't remember us writing any new tests specifically for the python 3 port.
One issue with the test suite was that it made heavy use of a thing called django_any that isn't supported in Python 3, so we decided to replace it with Factory Boy. We have about 500 Django models that needed new factories. Factory Boy works quite differently, and it was a lot of work to make the factories behave similarly to the old ones where possible, and to update most of our ~4000 tests for the new behaviour.
So that was one issue. It was tempting to just patch django_any, but we decided to tackle the technical debt instead.
There isn’t anything inherently wrong with still using 2.7.x. Just don’t expect updates. For new code, using 3.7 is probably the best bet at this time.
Red Hat, Ubuntu, etc. are going to support Python 2 for the duration of the operating system releases which shipped it. I would assume that Anaconda, et al. will have similar options for paid customers.
Red Hat has committed to keeping Python 2 on life support until 2024 as part of Red Hat Enterprise Linux 8 [1] so you can get security fixes for Python 2 until then if you use CentOS 8.
Canonical will not provide long-term support for Python 2 as part of Ubuntu 20.04 LTS. In Ubuntu 20.04, Python 2 is a "universe" package [2] that does not receive updates from Canonical. This means you will only get Python 2 security update guarantees on Ubuntu 18.04 LTS, until April 2023.
Debian is making an active effort [3] to remove Python 2 and packages that depend on it for its next release. It'll likely support Python 2 as part of Debian Buster until 2024.
Note that if you're reading this to delay your move to Python 3 by another few years, you're doing it wrong. This list shows even all slow enterprise-y distros have a deadline for Python 2, not that you can stretch your stuff for a couple of more years :)
I believe the biggest thing to worry about is your application dependencies. If you only depend on packages that come with your system, you're probably fine (although I've noticed these are largely ignored; even if there are bugfixes, they don't update them).
Otherwise, even if your Python has security patches for the next 4 years, it won't do you any good when you find a bug in one of your dependencies and the bugfix is in a version that's Python 3 only.
Thank you for providing the extra details — I especially agree with your conclusion: go to your boss and say “even if we pay, we're looking at a drop dead date no later than 2024”.
Distro maintainers will be patching security bugs for the foreseeable future. Do you seriously think that if there is a security bug found today, Debian maintainers will be like "ah, tough luck, I suppose people need to upgrade to py3"?..
Broadly, distros have been ripping out Python2 left and right in advance of 2020. Debian may have a longer support cycle than most and still have Python2 in stable or oldstable.
> Do you seriously think that if there is a security bug found today, Debian maintainers will be like "ah, tough luck, I suppose people need to upgrade to py3"?..
> During DebConf19 we¹ have tried to figure out how to manage Python 2 and PyPy module removal from Debian and below is our proposal. [0]
Debian are in the midst of a large project [1] to remove Python 2 as quickly as they possibly can. Whilst some bugfixes may happen, Debian are already telling you in no uncertain terms that Python 2 is on its way out.
Python's open source. Anyone can do security updates. Teams at RedHat, Debian, Oracle, etc, will be doing security updates for many decades I'm sure. You may have to pay.
A huge amount of the work of distro maintainers is actually just this kind of backporting and applying security fixes. You're right that technically, the python foundation (or whoever owns the trademark) could come after redhat for making these kinds of changes but it's very doubtful they would.
If redhat decided to add new features to python 2.7, I'm sure the PSF would make a stink
You can, that's what RedHat is doing. It will still be Python 2.7.18 + security patches.
You're probably confusing it with Tauthon (a Python 2.7 with backported Python 3 features) that tried to position itself as Python 2.8. By backporting those changes they essentially created a third version of Python, incompatible with the other two.
> As such, stating accurately that software ... is compatible with the Python programming language, or that it contains the Python programming language, is always allowed.
3.8 seems to be much more twitchy about exact versions of dependencies, so I've had problems running the AWS cli stuff on 3.8 at times, because there's no set of non-conflicting dependencies. (oftentimes due to minor/patch level version mismatches)
I keep having issues with 3.8 and many dependencies. Two months back, I started out a new project in 3.8 and two days in was downgrading it due to compatibility problems with Pillow and a couple others.
What's typically happening is that a wheel package was missing. When that happens, pip tries to compile the package from source; to do that it requires extra dependencies, such as a compiler, python-devel, and other *-devel packages, and because those weren't available it failed. This is very common when a new major version is released: it requires the authors of C-based packages to build wheels, which makes installation easier and avoids the extra dependencies.
Looks like Pillow has a wheel for 3.8 now, since April 2nd, so it might work now (no compilation needed). I don't know the other packages so I can't check them. Psycopg2 would probably be another one with this issue (also fixed, on April 6th).
Caveat about 2.7: HTTPS switched from non-validating by default to validating by default (in 2.7.9). That matters if your internal systems use SSL and you have a bunch of self-signed certs, for example.
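The same ssl API exists in Python 3, so the difference can be sketched there; the unverified context is the documented escape hatch (PEP 476) for exactly the self-signed-cert case, though it drops all verification:

```python
import ssl

# Since 2.7.9, stdlib HTTPS clients validate certificates by default;
# an unverified context restores the old behaviour (use with care).
default_ctx = ssl.create_default_context()
print(default_ctx.verify_mode == ssl.CERT_REQUIRED)  # True

unverified = ssl._create_unverified_context()
print(unverified.verify_mode == ssl.CERT_NONE)       # True
```

Either context can be passed to urllib's `urlopen(url, context=...)` to control validation per request.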
It will be decades before the final Python 2 program goes offline.