You're right, nothing is stopping anybody from putting these modules on PyPI. If somebody's already using PyPI dependencies, then one more is no big deal.
But there's a really big jump from "This tool is self-contained" to "This tool needs X from PyPI". It ripples into deployment, source control, everything.
I try very hard to make tools that don't need anything except for Python's stdlib. That's based on a perception of trust: that the stdlib will be supported, and represents Python's only stable dependency.
Once the team starts to erode that trust, Python's compatibility story will start to break.
If you want to step up and be the maintainer of one or more of these modules in the standard library, I'm pretty sure you would be welcomed. PEP 594 is basically just a formalisation of the fact that no one has volunteered to do this yet.
Arguments that opponents of removal "can just step up as maintainers" are completely bogus. As mentioned elsewhere, this is more about eroding trust in the stdlib as a stable baseline. This is exactly the kind of thing that will teach people not to trust Python as a stable foundation.
Also: if maintenance of aging modules becomes tiresome, maybe that is a strong hint to keep backwards compatibility in Python proper.
Not at all bogus. If "eroding trust in stdlib as a stable baseline" is an important issue to you, then you can do something about it by helping provide the resources that allow old modules to be kept.
Core language backwards compatibility is a non-issue. Approximately zero percent of maintenance costs go towards that.
Not to mention Python has an entirely open proposal process where anyone, at any point, could have commented and expressed a desire to keep these libraries in the standard library. It seems that, after this entire process, not enough people need them in the standard library.
If your implication here is that the very public backwards compatibility statement, the mailing lists where PEPs are discussed and approved/rejected, and the website python.org are a "locked filing cabinet", it may interest you to learn that this exact PEP was discussed over two threads and nearly 1000 posts on Hacker News: https://news.ycombinator.com/item?id=19948642, https://news.ycombinator.com/item?id=19985802.
That has no bearing at all on the PEP process and is a gross mischaracterization of it. It's clearly documented and open. If you subscribed to any Python development mailing list, or even the PEP GitHub repo, you would have seen this issue. You would have seen links to the discussion, which was open for _three years_ and had over 100 comments. This was not a decision that was hidden in any way from the Python userbase.
It wasn't a strong enough argument to overcome the final decision that cgi and cgitb are modules which would never be accepted into Python today. There were three years to discuss this issue and make a case--was that process not enough?
cgi, nntp, telnet, etc. are old and obsolete. And not just "5 years out of fashion/not the trendy new thing" obsolete--these things are decades out of date and have major or glaring security issues. If you're publishing brand new code in 2022 that still depends on Python's built-in cgi support, you have far greater problems than having to pull in a third-party dependency now.
Python changes in breaking ways--just look at the Python 2 to 3 transition. You will have to learn to deal with it. There is no explicit guarantee that Python will continue to work in exactly the way it has always worked for you, forever.
what's the major glaring security issue of cgi that you would consider a dealbreaker for a simple http-based service?
(this is an honest question, because when I saw this list I actually have a python script running on an apache web server that is using the cgi module to access post variables, and I never saw any need to change that, as any other "more modern" solution would be much more bothersome to configure and maintain.)
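(For what it's worth, the simple POST-variable case migrates without much pain. Here's a sketch of that part written against urllib.parse, which PEP 594 points to as part of cgi's replacement--read_post_vars is an illustrative helper name, not anyone's actual script:)

```python
import os
import sys
from urllib.parse import parse_qs

def read_post_vars(environ=None, stream=None):
    """Read application/x-www-form-urlencoded POST fields, CGI-style.

    environ/stream default to the real CGI environment; they're
    parameters here only so the function can be exercised offline.
    """
    environ = os.environ if environ is None else environ
    stream = sys.stdin.buffer if stream is None else stream
    length = int(environ.get("CONTENT_LENGTH") or 0)
    body = stream.read(length).decode("utf-8")
    # Like cgi.FieldStorage.getlist(): each field maps to a list of values.
    return parse_qs(body)
```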
What is your auth and session story? Because I doubt you're implementing OIDC, JWT, etc. in a cgi script vs. relying on a modern framework that does all the work for you. HTTP basic auth? Well, there's your security issue right there.
So presumably you are happy to have a web accessible endpoint running on your server that will pop a new process and start executing a local script (which might be reading/writing files, other services, etc.) without checking any permissions, enforcing any cross site scripting or CSRF protections, etc. It might be fine for the most trivial application. For anything that actually changes state or has side effects, I would be very concerned and it would never pass a serious security review.
Firstly, HTTP basic auth is not a problem over HTTPS (and it will be HTTPS more often than not, given modern browsers are bound to criminalise HTTP soon). Even then, putting an authenticating reverse proxy in front of such a site is dead-simple, and that can use whatever auth is required.
Secondly, chances are that if you're building a CGI site, you won't be exposing it to the outside world at all, because it's internal/personal jank that's built to do a single job, not to look nice. If it's meant to look nice and handle the stress of being used by a public userbase, then it won't be CGI in the first place.
It's a massive footgun and no sane security review would let a production service pass with HTTP basic auth. You're one misconfigured TLS proxy away from major security breaches and issues.
That only holds if you believe one of these:

* basic auth only exists at the proxy layer to protect apps with no auth, or
* commonly deployed auth systems are secure without TLS

...because both are false. Check out Rails' Devise support for HTTP basic auth, which is perfectly secure. And note that basically zero auth systems in the wild use a PAKE, so they are entirely dependent on TLS to secure the password transmission in flight.
How many "ands" should a theoretical security threat have before you should just ignore the guy proposing it?
E.g.
If mercury is in retrograde and the stars are aligned and it's Tuesday and the hacker is in position with Wireshark running on windows XP with a tethered pine phone, and we accidentally open all our ports, then we're completely vulnerable --> Ignore that guy.
I don't really see the connection between cgi and http basic auth. These seem orthogonal issues.
(And fwiw the usecase I have does not have any authentication and doesn't need any. It's taking a POST var and doing things with it, but there are no secrets involved.)
WebOb uses cgi for parsing multipart uploads/forms...
It's not obsolete, and it is heavily used, it's working software that works well. Ripping it out just means now there will be N copies floating around that will all need to get patched/fixed instead of just a single version included with Python.
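(For reference, here is roughly what each of those N copies has to reimplement--a sketch of multipart parsing on top of the email package, the replacement path PEP 594 suggests. parse_form_data is an illustrative name, not WebOb's actual code, and it only handles the simple text-field case:)

```python
from email.parser import BytesParser
from email.policy import default

def parse_form_data(body: bytes, content_type: str) -> dict:
    """Parse a multipart/form-data body via the stdlib email package.

    A Content-Type header is prepended so the MIME parser can see the
    boundary parameter; each part's form-field name comes from its
    Content-Disposition header.
    """
    raw = f"Content-Type: {content_type}\r\n\r\n".encode("ascii") + body
    msg = BytesParser(policy=default).parsebytes(raw)
    fields = {}
    for part in msg.iter_parts():
        name = part.get_param("name", header="content-disposition")
        fields[name] = part.get_content()
    return fields
```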
Well apparently it's not heavily enough used that any of its current users were motivated enough to discuss the PEP and make a strong case for keeping it in the standard library. The discussion was open for _three years_ and had hundreds of comments: https://discuss.python.org/t/pep-594-removing-dead-batteries... The outcome is that there isn't enough usage to merit this being in the standard library.
I suspect there's very little overlap between users of development software and participants in the bureaucracy that designs the software. The requisite personality types are very different.
If you're developing software that depends on the Python standard library, it is on you to subscribe to the relevant mailing lists so you can be notified of security issues, deprecations, changes, etc. There is no bureaucracy here standing in the way of being aware of what's happening, or even commenting on the changes.
Years ago I used to do that. But nowadays I use so much external stuff it's impossible. So I'm actually immensely grateful to the Python team for including so much in the standard library and maintaining it for decades--because for me, at least until now, it's been the one part I could relax about a bit, confident that competent people will take care of everything and I won't need to spend much time analyzing the situation.
Seems to me that this is exactly the problem. The python team is saying they really can't maintain these libraries to a high standard and need to remove them from the standard library. I can see both sides but really that makes a lot of sense to me.
Is my comment. It's the 6th comment down. Clearly I didn't make a strong enough case, but I certainly did advocate for it and mention that it is in use.
Since NNTP was my gateway to open source, I'm somewhat sad to see Python's nntplib module die... but NNTP is quite far along on its way out. If you were asking a decade ago, NNTP would be a relatively niche community that potentially had some relevance, but posting volume not associated with alt.binaries has declined precipitously in the past decade alone (judging from aioe.org's statistics, in 2022 it's a third of what it was in 2012, and even that feels generous).
I can understand some concern about the impact of removal of support for deprecated things where it potentially impacts archival scenarios (e.g., uuencoded text), but dropping NNTP from the standard library is probably the right call for Python. It's not like the protocol is so complicated you need a library to do it for you, anyways. ;-) (To be fair, I often use NNTP as my example protocol for implementing stuff because of its simplicity.)
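(To illustrate the simplicity: the wire format boils down to a status line plus dot-terminated multi-line blocks, per RFC 3977. A sketch with illustrative helper names--a real client would add the socket handling and command sending around these:)

```python
def parse_status(line: bytes):
    """Split an NNTP status line like b'200 server ready' into (code, text)."""
    code, _, text = line.rstrip(b"\r\n").partition(b" ")
    return int(code), text.decode("utf-8", "replace")

def read_multiline(fp) -> list:
    """Read a dot-terminated multi-line response, undoing dot-stuffing.

    Per RFC 3977: a lone '.' line ends the block, and data lines that
    start with '.' arrive with the dot doubled.
    """
    lines = []
    for raw in fp:
        line = raw.rstrip(b"\r\n")
        if line == b".":           # terminator line
            break
        if line.startswith(b".."):
            line = line[1:]        # undo the server's dot-stuffing
        lines.append(line)
    return lines
```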
Telnet the protocol is secure if you either (a) use the Telnet ENCRYPTION option (such as with Kerberos ktelnet), or (b) run it over TLS (standard practice in IBM mainframe environments, in which Telnet is still heavily used). Most Telnet clients and servers, Python's telnetlib included, never implemented either. But people present the protocol itself as insecure, when really it is just that the majority of implementations never bothered with the security extensions which have been defined for it. Telnet-over-TLS is just as secure as SSH.

Telnet also has the advantage of various standardised extensions which don't exist in SSH, such as IBM 3270 and 5250 block-mode terminal support, the SUPDUP protocol used by PDP-10s and Lisp Machines, serial port control (enabling you to configure baud rate/parity bits/etc if you Telnet to a serial port server), and Kermit. Now, you may not use any of those features--in which case SSH will suit you perfectly fine--but some people do. And, in principle, someone could define equivalent extensions to SSH, but by and large nobody has done so, and even if somebody did, they wouldn't be standard, so little or no software would support them.