this post was submitted on 13 Jul 2024
114 points (98.3% liked)

top 36 comments
[–] csm10495@sh.itjust.works 28 points 4 months ago (1 children)

It's exciting, but man, there are lots of assumptions in native Python code built around the GIL.

I've seen lists, etc. modified by threads on the assumption that the GIL locks for them. Testing this end-to-end for any production deployment can be a bit of a nightmare.
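
To illustrate the pattern being described, here's a minimal, hypothetical sketch (not code from any specific project): a compound update like counter += 1 spans several bytecodes and was never guaranteed to be atomic even with the GIL; the explicit lock is what keeps it correct on free-threaded builds.

    import threading

    counter = 0
    results = []
    lock = threading.Lock()

    def worker(n):
        global counter
        for _ in range(n):
            # `counter += 1` is a read-modify-write; the GIL only made the race
            # window small, it never guaranteed atomicity. The lock does.
            with lock:
                counter += 1
                results.append(counter)

    threads = [threading.Thread(target=worker, args=(10_000,)) for _ in range(4)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    print(counter)  # 40000 with the lock; without it, possibly less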

[–] overcast5348@lemmy.world 40 points 4 months ago* (last edited 4 months ago) (3 children)

My company makes it super easy for me - we're just going to continue on python 2.7 and add this to the long list of reasons why we're not upgrading.

Please send help.

[–] Corbin@programming.dev 3 points 4 months ago (1 children)

You may be pleased to know that PyPy's Python 2.7 branch will be maintained indefinitely, since PyPy is also written in Python 2.7. Also, if you can't leave CPython yet, ActivePython's team is publishing CPython 2.7 security patches.

[–] overcast5348@lemmy.world 3 points 4 months ago

We already have contracts in place to get security patches. That's usually the InfoSec team's problem anyway.

As a developer, my life gets hard due to library support. We manage internal forks of multiple open source projects just to make them python 2 compatible. A non-trivial amount of time is wasted on this, and we don't even have it available for public use. 🤷‍♂️

[–] verstra@programming.dev 2 points 4 months ago (2 children)

Why would you not be upgrading due to a new feature of Python? Do you not like new features, or was that a badly worded sentence?

[–] nickwitha_k@lemmy.sdf.org 10 points 4 months ago

Because using an exceedingly insecure version is cheaper until an inevitable compromise makes it expensive.

[–] magikmw@lemm.ee 4 points 4 months ago

More work, more debt. The more debt you have the harder it is to let go.

[–] fubarx@lemmy.ml 2 points 4 months ago

Python 2.7 and iOS mobile programmers stuck on Objective-C could start a support group.

[–] BB_C@programming.dev 20 points 4 months ago (1 children)

While pure Python code should work unchanged, code written in other languages or using the CPython C API may not. The GIL was implicitly protecting a lot of thread-unsafe C, C++, Cython, Fortran, etc. code - and now it no longer does. Which may lead to all sorts of fun outcomes (crashes, intermittent incorrect behavior, etc.).

:tabclose
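
For anyone experimenting with this, a small sketch (assuming CPython 3.13+, which added these introspection hooks; they are not mentioned in the comment above) for checking whether you're on a free-threaded build and whether the GIL is actually off at runtime:

    import sys
    import sysconfig

    # 1 on free-threaded builds, 0/None otherwise.
    print(sysconfig.get_config_var("Py_GIL_DISABLED"))

    # Added in CPython 3.13: False when running without the GIL. Note that an
    # incompatible extension module can cause the GIL to be re-enabled at runtime.
    if hasattr(sys, "_is_gil_enabled"):
        print(sys._is_gil_enabled())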

[–] vrighter@discuss.tchncs.de 4 points 4 months ago

Those libraries include pretty much all popular libraries. It's just impossible to write performant code in Python.

[–] roadrunner_ex@lemmy.ca 8 points 4 months ago (1 children)

I'm curious to see how this whole thing shakes out. Like, will removing the GIL be an uphill battle that everyone regrets even suggesting? Will it be so easy that we wonder why we didn't do it years ago? Or, most likely, somewhere in the middle?

[–] brettvitaz@programming.dev 3 points 4 months ago (1 children)
[–] roadrunner_ex@lemmy.ca 9 points 4 months ago (2 children)

Yes, testing infrastructure is being put in place and some low-hanging fruit bugs have already been squashed. This bodes well, but it's still early days, and I imagine not a lot of GIL-less production deployments are out there yet - where the real showstoppers will potentially live.

I'm tentatively optimistic, but threading bugs are sometimes hard to catch.

[–] FizzyOrange@programming.dev 2 points 4 months ago (1 children)

threading bugs are sometimes hard to catch

Putting it mildly! Threading bugs are probably the worst class of bugs to debug.

It's definitely debatable whether this is worth the risk of nearly impossible-to-debug bugs. Python is very slow, and multithreading isn't going to change that: 4x extremely slow is still extremely slow. If you care remotely about performance you need to use a different language anyway.

[–] Womble@lemmy.world 5 points 4 months ago (2 children)

Python can be extremely slow, but it doesn't have to be. I recently rewrote a stats program at work and got a ~500x speedup over the original Python and a 10x speedup over the C++ rewrite of that. If you know how Python works and avoid performance foot-guns like nested loops, you can often (though not always) get good performance.
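
As a hedged illustration of the foot-gun being described (not the actual stats program): the same computation written as nested pure-Python loops versus a single vectorised NumPy expression.

    import numpy as np

    def pairwise_sq_dist_loops(a, b):
        # Pure-Python nested loops: the classic performance foot-gun.
        out = [[0.0] * len(b) for _ in range(len(a))]
        for i, x in enumerate(a):
            for j, y in enumerate(b):
                out[i][j] = (x - y) ** 2
        return out

    def pairwise_sq_dist_numpy(a, b):
        # Same result, pushed into NumPy's compiled loops via broadcasting.
        a = np.asarray(a, dtype=float)
        b = np.asarray(b, dtype=float)
        return (a[:, None] - b[None, :]) ** 2

On inputs of a few thousand elements each, the vectorised version is commonly one to two orders of magnitude faster, which is the kind of gap being described here.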

[–] FizzyOrange@programming.dev 1 points 4 months ago (4 children)

Unless the C++ code was doing something wrong, there's literally no way you can write pure Python that's 10x faster than it. Something else is going on there. Maybe the C++ code was accidentally O(N^2) or something.

In general Python will be 10-200 times slower than C++. 50x slower is typical.

[–] bitcrafter@programming.dev 6 points 4 months ago (1 children)

Unless the C++ code was doing something wrong there’s literally no way you can write pure Python that’s 10x faster than it. Something else is going on there.

Completely agreed, but it can be surprising just how often C++ really is written that inefficiently; I have had multiple successes in my career of rewriting C++ code in Python and making it faster in the process, but never because Python is inherently faster than C++.

[–] FizzyOrange@programming.dev 1 points 4 months ago

Yeah, exactly. You made it faster through algorithmic improvement. Like for like, Python is far, far slower than C++, and it's impossible to write Python that is as fast as C++.

[–] Womble@lemmy.world 5 points 4 months ago (1 children)

Nope. If you're working on large arrays of data you can get significant speedups using well-optimised, vectorised BLAS functions (via numpy), which beat simply written C++ that operates on each array element in turn. There's also Numba, which uses LLVM to JIT-compile a subset of Python for compiled performance, though I didn't reach for that in this case.

You could link the BLAS libraries into C++, but it's significantly more work than just importing numpy in Python.
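
A minimal sketch of the Numba route mentioned above (a hypothetical example, not the commenter's program): @njit(parallel=True) compiles the function with LLVM, and prange spreads the outer loop across threads.

    import numpy as np
    from numba import njit, prange

    @njit(parallel=True)
    def row_means(a):
        out = np.empty(a.shape[0])
        for i in prange(a.shape[0]):  # iterations are distributed across threads
            out[i] = a[i].mean()
        return out

    x = np.random.rand(10_000, 1_000)
    print(row_means(x)[:5])  # first call includes JIT compilation time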

[–] FizzyOrange@programming.dev 0 points 4 months ago (2 children)

numpy

Numpy is written in C.

Numba

Numba is interesting... But a) it can already do multithreading so this change makes little difference, and b) it's still not going to be as fast as C++ (obviously we don't count the GPU backend).

[–] HyperCube@kbin.run 1 points 4 months ago (1 children)

Numpy is written in C.

So you get the best of both worlds then: the speed of C and the ease of use of Python.

[–] FizzyOrange@programming.dev 3 points 4 months ago

Sure but that's not relevant to the current discussion. The point is that removing the GIL doesn't affect Numpy because Numpy is written in C.

[–] Womble@lemmy.world -2 points 4 months ago* (last edited 4 months ago) (1 children)

Numpy is written in C.

Python is written in C too, so what's your point? I've seen this argument a few times and I find it bizarre that "easily able to incorporate highly optimised Fortran and C numerical routines" is somehow portrayed as a point against Python.

Numpy is a de facto extension to the Python standard that adds first-class support for single-type, multi-dimensional arrays and functions for working on them. It is implemented in a mixture of Python and C (about 60% Python according to GitHub), interfaces with Python's C API, and links in specialist libraries for operations. You could write the same statement for parts of the Python standard library; is that also not Python?

It's hard to overstate just how much simpler development is in numpy compared to C++. In this example the new Python version was less than 50 lines and was developed in an afternoon; the C++ version was closing in on 1,000 lines over 6 files.

[–] FizzyOrange@programming.dev 3 points 4 months ago (1 children)

Python is written in C too, what’s your point?

The point is that eliminating the GIL mainly benefits pure Python code. Numpy is already multithreaded.

I think you may have forgotten what we're talking about.

the new python version was less than 50 lines and was developed in an afternoon, the c++ version was closing in on 1000 lines over 6 files.

That's a bit suss too tbh. Did the C++ version use an existing library like Eigen too or did they implement everything from scratch?

[–] Womble@lemmy.world 1 points 4 months ago* (last edited 4 months ago)

I was responding to your general statement that Python is slow and so there is no point in making it faster. I agree that removing the GIL won't do much to improve the execution speed for programs making heavy use of numpy or other things that call outside it.

That’s a bit suss too tbh. Did the C++ version use an existing library like Eigen too or did they implement everything from scratch?

It was written entirely from scratch, which is kind of my point: a well-written Python program can outperform a naive C implementation and is vastly simpler to create.

If you have the expertise and are willing to put in the effort, you likely can squeeze that extra bit of performance out by dropping to a lower-level language. But for certain workloads you can get good performance out of Python if you know what you are doing, so calling it extremely slow and saying you have to move to another language if you care about performance is misleading.

[–] Corbin@programming.dev 1 points 4 months ago (1 children)

You're thinking of CPython. PyPy can routinely compete with C and C++, particularly in allocation-heavy or pointer-heavy scenarios.

[–] FizzyOrange@programming.dev -1 points 4 months ago

I am indeed thinking of CPython because a) approximately nobody uses PyPy, and b) this article is about CPython!!

In any case, PyPy is only about 4x faster than CPython on average (according to their own benchmarks), so it's only going to be able to compete with C++ in specific circumstances, not in general.

And PyPy still has a GIL! Come on dude, think!

[–] nickwitha_k@lemmy.sdf.org 0 points 4 months ago (1 children)

You're both at least partly right. The only interpreted language that can compete with compiled languages for execution speed is Java, and it has the downside of being Java.

That being said, you might be surprised at how fast you can make Python code execute, even pre-GIL changes. I certainly was. Using multiprocessing and code architected to run massively parallel, it can be blazingly fast. It would still be blown out of the water by similarly optimized compiled code, but it is worth serious consideration if you want to optimize for iterative development.
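
A hedged sketch of that multiprocessing pattern (the workload and names here are invented for illustration): independent, CPU-bound units of work fanned out across one worker process per core.

    from multiprocessing import Pool

    def simulate(seed):
        # Stand-in for one independent, CPU-bound unit of work.
        acc = seed
        for i in range(100_000):
            acc = (acc * 1103515245 + 12345 + i) % 2**31
        return acc

    if __name__ == "__main__":
        with Pool() as pool:  # defaults to one worker process per CPU core
            results = pool.map(simulate, range(100))
        print(len(results), max(results))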

My view on such workflows would be:

  1. Write iteration of code component in Python.
  2. Release.
  3. Evaluate if any functional changes are required. If so, goto 1.
  4. Port component to compiled language, changing function calls/imports to make use of the compiled binary alongside the other interpreted components.
  5. Release.
  6. Refactor code to optimize for compiled language, features that compiled language enables, and/or security/bug fixes.
  7. Release.
  8. Evaluate if further refactor is required at this time, if so, goto 6.
[–] FizzyOrange@programming.dev 1 points 4 months ago (1 children)

The only interpreted language that can compete with compiled for execution speed is Java

"Interpreted" isn't especially well defined but it would take a pretty wildly out-there definition to call Java interpreted! Java is JIT compiled or even AoT compiled recently.

it can be blazingly fast

It definitely can't.

It would still be blown out of the water by similarly optimized compiled code

Well, yes. So not blazingly fast then.

I mean it can be blazingly fast compared to computers from the 90s, or like humans... But "blazingly fast" generally means in the context of what is possible.

Port component to compiled language

My extensive experience is that this step rarely happens because by the time it makes sense to do this you have 100k lines of Python and performance is juuuust about tolerable and we can't wait 3 months for you to rewrite it we need those new features now now now!

My experience has also shown that writing Python is rarely a faster way to develop even prototypes, especially when you consider all the time you'll waste on pip and setuptools and venv...

[–] nickwitha_k@lemmy.sdf.org 2 points 4 months ago (1 children)

"Interpreted" isn't especially well defined but it would take a pretty wildly out-there definition to call Java interpreted! Java is JIT compiled or even AoT compiled recently.

Java is absolutely interpreted, supposing that AoT compilation isn't being used. The code must be interpreted by the JVM (an interpreter and JIT compiler) in order to produce binary data that can run on any system, the same as any interpreted language. It is a pretty major stretch, in my mind, to claim that it's not. The simplest test would be: "Does the program require any additional programs to provide the system with native binaries at runtime?"

It definitely can't.

Well, yes. So not blazingly fast then.

I mean it can be blazingly fast compared to computers from the 90s, or like humans... But "blazingly fast" generally means in the context of what is possible.

I find that context marginally useful in practice. In my experience it is prone to letting perfect be the enemy of good and premature optimization.

My focus is more on tooling, however, so I might be coming from a very different place. In my contexts, things are usually measured against existing processes and tooling, and frequently on a human scale. Do something in 5 seconds that usually takes a human 15 minutes and that's an improvement of more than two orders of magnitude.

My extensive experience is that this step rarely happens because by the time it makes sense to do this you have 100k lines of Python and performance is juuuust about tolerable and we can't wait 3 months for you to rewrite it we need those new features now now now!

You're not wrong. I'm actually in the process of making such a push where I'm at, for the first time in my career. It helps a lot if you can architect it so that you can have runner and coordinator components as those, at their basics, are simple to implement in most languages. Then, things can be iteratively ported over time.

My experience has also shown that writing Python is rarely a faster way to develop even prototypes, especially when you consider all the time you'll waste on pip and setuptools and venv...

That's... an odd perspective to me. Pip and venv have been tools that I've found to greatly accelerate dev setup and application deployment. Installing any third-party dependencies in a venv with pip means that one can pip freeze later and dump directly to a requirements.txt for others (including deployment) to use.

[–] FizzyOrange@programming.dev 2 points 4 months ago (1 children)

Pip and venv have been tools that I’ve found to greatly accelerate dev setup and application deployment.

I'm not saying pip and venv are worse than not using them. They're obviously mandatory for Python development. I mean that compared to other languages they provide a pretty awful experience and you'll constantly be fighting them. Here's some examples:

  • Pip is super slow. I recently discovered uv which is written in Rust and consequently is about 10x faster (57s to 7s in my case).
  • Pip gives terrible error messages. For example it assumes all version resolution failures are due to requirements conflicts, when actually it can be due to Python version requirements too so you get insane messages like "Requirement foo >= 2.0 conflicts with requirement foo == 2.0". Yeah really.
  • You can't install multiple versions of the same dependency, so you end up in dependency resolution hell (depA depends on foo >= 3 but depB depends on foo <= 2).
  • No namespace support for package names so you can't securely use private PyPI repositories.
  • To make static typing work properly with Pyright and venv and everything, you need some insane command like pip install --config-settings editable_mode=compat --editable ./mypackage. How user friendly. Apparently when they changed how editable packages were installed they were warned that it would break all static tooling, but did it anyway. Good job guys.
  • When you install an editable package in a venv it dumps a load of stuff in the package directory, which means you can't do it twice to two different venvs.
  • The fact that you have to use venvs in the first place is a pain. Don't need that with Deno.

There's so much more but this is just what I can remember off the top of my head. If you haven't run into these things just be glad your Python usage is simple enough that you've been lucky!

I’m actually in the process of making such a push where I’m at, for the first time in my career

Good luck!

[–] nickwitha_k@lemmy.sdf.org 1 points 4 months ago (1 children)

Oh my. Yeah, I have seen these before indeed. IMO, that's also a sure sign that a compiled language needs to come into the picture (port the simplest conflicting component), especially if, for some reason, multiple dependency versions are needed (I have hit that in particular myself).

I've not yet had the pleasure of working with Rust much, but that's the target for the next version that we start, so it will be fun.

[–] SatouKazuma@programming.dev 2 points 4 months ago

Rust is a lovely language if you're okay getting deep into the nuts and bolts.

[–] vrighter@discuss.tchncs.de -2 points 4 months ago

you must have written some really really horrible c++

[–] Socsa@sh.itjust.works 1 points 4 months ago

The reality is just that, moving forward, some kinds of Python code will have the same race conditions as most other languages, and that's OK.

[–] fubarx@lemmy.ml 1 points 4 months ago* (last edited 4 months ago)

I have a project ready to try this out. It's a software simulator, and each run (typically 10-10,000 iterations) can be done independently, with the results aggregated and shown at the end. It's also instrumented to show CPU and memory usage, and on macOS you can watch how busy each core gets (hint: PEGGED in multicore mode).

Can run it single-threaded, then with multiprocessing, then with multi-core and time each one. Pretty happy with multicore, but as soon as the no-GIL/subinterpreter version is stable, will try it out and see if it makes any difference. Under the hood it uses numpy and scipy, so will have to wait for them.
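
A hedged sketch of that kind of comparison (not the simulator itself): timing the same independent iterations serially, with threads, and with processes. On a stock build the thread version gains little for CPU-bound work because of the GIL; on a free-threaded build it should start to scale as well.

    import time
    from concurrent.futures import ProcessPoolExecutor, ThreadPoolExecutor

    def one_iteration(seed):
        # Stand-in for one CPU-bound simulation run.
        acc = seed
        for i in range(200_000):
            acc = (acc * 6364136223846793005 + i) % 2**63
        return acc

    def timed(label, fn):
        start = time.perf_counter()
        fn()
        print(f"{label}: {time.perf_counter() - start:.2f}s")

    if __name__ == "__main__":
        seeds = list(range(64))
        timed("serial", lambda: [one_iteration(s) for s in seeds])
        with ThreadPoolExecutor() as ex:
            timed("threads", lambda: list(ex.map(one_iteration, seeds)))
        with ProcessPoolExecutor() as ex:
            timed("processes", lambda: list(ex.map(one_iteration, seeds)))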

Edit: on my todo list is to try it all out in Mojo. They make pretty big performance gain claims.