Am I the only one who gets triggered by the spelling "Pypy" and "Pypi" instead of "PyPy" and "PyPI"? :P
Python
Welcome to the Python community on the programming.dev Lemmy instance!
Events
Past
November 2023
- PyCon Ireland 2023, 11-12th
- PyData Tel Aviv 2023, 14th
October 2023
- PyConES Canarias 2023, 6-8th
- DjangoCon US 2023, 16-20th (!django)
September 2023
- PyData Amsterdam, 14-16th
- PyCon UK, 22-25th
August 2023
- EuroSciPy 2023, 14-18th
- PyLadies Dublin, 15th
July 2023
- PyDelhi Meetup, 2nd
- PyCon Israel, 4-5th
- DFW Pythoneers, 6th
- Django Girls Abraka, 6-7th
- SciPy 2023, 10-16th, Austin
- IndyPy, 11th
- Leipzig Python User Group, 11th
- Austin Python, 12th
- EuroPython 2023, 17-23rd
- Austin Python: Evening of Coding, 18th
- PyHEP.dev 2023 - "Python in HEP" Developer's Workshop, 25th
Python project:
- Python
- Documentation
- News & Blog
- Python Planet blog aggregator
Python Community:
- #python IRC for general questions
- #python-dev IRC for CPython developers
- PySlackers Slack channel
- Python Discord server
- Python Weekly newsletters
- Mailing lists
- Forum
Python Ecosystem:
Fediverse
Communities
- #python on Mastodon
- c/django on programming.dev
- c/pythorhead on lemmy.dbzer0.com
Projects
- Pythörhead: a Python library for interacting with Lemmy
- Plemmy: a Python package for accessing the Lemmy API
- pylemmy: enables simple access to Lemmy's API with Python
- mastodon.py: a Python wrapper for the Mastodon API
Feeds
Nice read. Thanks!
For what it is worth, here is my take on the article. It is a really overwhelming list, and a nice read-through, but for those who are interested, the most useful components discussed were probably:
- CPython. Of course, that is what we all use.
- PyPy. An interesting accelerator if you do not need numpy and a lot of other common libraries. The speedup is maybe 9X in my experience, whereas C code or good use of Numba can often get 100X.
- MicroPython. I have not tried it, but it seems cool if you need a really small Python. Presumably not exactly compatible, because of missing libraries.
- Pyston. I have not tried it, but it seemed interesting from the article's discussion of the "pyston_lite_autoload" package. I have no idea how useful it is.
- Cython. There is a lot of hoopla about this one. It is good software, but in my experience you do not get much of a speedup until you statically declare types. When I did that I got about 24X, and after playing with prange and the OpenMP features I got 75X. Not a bad speedup, but it does not look so good compared with writing C code or using Numba: those speedups seem easier to get, and I got as much as 121X using them instead. Cython is just complex to use and still does not deliver your full entitlement with respect to speed, or at least that was my experience.
- Numba. Numba and numpy used in the correct situations can give 121X speed improvements and performance similar to parallelized and vectorized C code; for some reason it was actually faster than my C code. This combo is super. Everyone should know about Numba (see the sketch after this list).
- Nuitka. A very handy deployment tool, but in my experience basically the same speed as CPython: I got about a 9% improvement, which is almost nothing. So do not be fooled into thinking that it will give you big speed improvements. Still, it is a very nice tool as part of your packaging and deployment process.
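Since I keep recommending Numba, here is a minimal sketch of the pattern I mean. The function and the toy workload are my own invention for illustration; `@njit` with `parallel=True`/`fastmath=True` and `prange` are the real Numba features doing the work:

```python
import numpy as np
from numba import njit, prange

# Toy example: JIT-compile a parallel reduction over a numpy array.
@njit(parallel=True, fastmath=True)
def sum_of_squares(a):
    total = 0.0
    for i in prange(a.size):  # prange tells Numba this loop may run in parallel
        total += a[i] * a[i]  # Numba recognizes this as a reduction
    return total

x = np.random.rand(10_000_000)
sum_of_squares(x)         # first call pays the JIT compilation cost
print(sum_of_squares(x))  # later calls run the compiled machine code
```

The big speedups come from exactly this shape of code: tight numeric loops over arrays, compiled for your own CPU.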
Since I talked about C code: there are three ways to integrate C code into Python: ctypes, CFFI, and the standard C extension method. I found ctypes to be about 107X, CFFI 108X, and the standard method about 112X for my code, on my hardware, with the C compiled using auto-parallelization, auto-vectorization, fast math, and maybe other settings. My point is that the speeds are about the same, though the standard method is a little faster, so you can pretty much do whichever is easiest.
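For the ctypes route, this is roughly what the call looks like. The library name `mylib.so` and the `dot` function are hypothetical, and the `gcc` flags in the comment are just one way to get the auto-parallel/auto-vectorize/fast-math build mentioned above:

```python
# Assumes a shared library built from C, e.g.:
#   gcc -O3 -march=native -ffast-math -fopenmp -shared -fPIC mylib.c -o mylib.so
# exposing:  double dot(const double *a, const double *b, size_t n);
import ctypes
import numpy as np

lib = ctypes.CDLL("./mylib.so")  # hypothetical library
lib.dot.restype = ctypes.c_double
lib.dot.argtypes = [ctypes.POINTER(ctypes.c_double),
                    ctypes.POINTER(ctypes.c_double),
                    ctypes.c_size_t]

a = np.random.rand(1_000_000)
b = np.random.rand(1_000_000)

# Hand the numpy buffers straight to C, no copying.
dbl_p = ctypes.POINTER(ctypes.c_double)
result = lib.dot(a.ctypes.data_as(dbl_p), b.ctypes.data_as(dbl_p), a.size)
print(result)
```

CFFI and the standard C extension API end up making the same kind of call; the differences are mostly in how much wrapper boilerplate you write, which fits the near-identical timings.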
Anyway, those are my thoughts. I hope they make some sense.
And then there is f2py for calling Fortran code from Python!
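For anyone curious, a minimal f2py round trip looks roughly like this (the file, module, and subroutine names are all just for illustration):

```python
# Suppose vecnorm.f90 contains:
#   subroutine vecnorm(a, n, res)
#     integer, intent(in) :: n
#     real(8), intent(in) :: a(n)
#     real(8), intent(out) :: res
#     res = sqrt(sum(a * a))
#   end subroutine vecnorm
#
# Build a Python extension module from it (f2py ships with numpy):
#   python -m f2py -c vecnorm.f90 -m fortmath
import numpy as np
import fortmath  # hypothetical module built by the command above

x = np.array([1.0, 2.0, 3.0])
# f2py turns intent(out) arguments into return values and infers n from a.
print(fortmath.vecnorm(x))  # sqrt(14) ~= 3.742
```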
The info about Nuitka being a similar speed is good to note, since everyone assumes that by compiling something you automatically get a massive speed boost.
This is related to the fact that a lot of people hear about Faster CPython and go "ooh, will Python be getting a JIT compiler?", as if that's the magic weapon that will improve everything, while in reality loads of different changes are needed.
(If anyone's interested, Mark Shannon gave a good talk on Faster CPython at PyCon this year, now available on YouTube.)
Yes, there are a lot of assumptions, incorrect information, or at least misleading stuff out there, so I am always interested in learning about easy and hard ways to make things better. For most things I do, Python is fast enough, but sometimes it is not.
The things I find misleading about what people often say about Python are that it is not that slow, and that you can always just use a library like numpy to solve speed issues. I found both to be more or less untrue, in the sense of getting C-like speeds. On my code, Python was indeed slow, like 1% of C speed. The surprising thing for me was that numpy helps a lot, but not as much as you would think: I only got to 5-10% of C speed with numpy (a rough timing sketch of that gap is below). This is because libraries are often generically compiled, and to get good speed you really need C code compiled for your specific hardware with vectorization, auto-parallelization, and fast math at least. Generic libraries are just not going to be that fast.

Another thing people push is using GPUs. That also is not very effective unless you have a very expensive card, most notably a GPU designed specifically for compute, or an array of them. The GPU performance of my workstation is significantly less than the throughput of my CPU.

There are interesting hardware limitations too. My AMD Ryzen 7 workstation would have twice the speed if it had quad-channel memory rather than the much more common dual-channel memory, since fully optimized code is memory-I/O bound at about half the CPU throughput. This must be why the AMD Ryzen Threadrippers seem to use quad-channel memory.
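To make the "numpy helps, but only so much" point concrete, here is a rough timing sketch. The absolute numbers will vary wildly by machine, but the roughly two-orders-of-magnitude gap between the pure-Python loop and the compiled numpy reduction is the typical shape of the result:

```python
import time
import numpy as np

x = np.random.rand(10_000_000)

def py_sum_squares(a):
    # Pure-Python loop: every iteration goes through the interpreter.
    total = 0.0
    for v in a:
        total += v * v
    return total

t0 = time.perf_counter()
py_sum_squares(x)
t1 = time.perf_counter()
np.dot(x, x)  # the same reduction, done in numpy's compiled code
t2 = time.perf_counter()

print(f"pure Python: {t1 - t0:.3f} s")
print(f"numpy:       {t2 - t1:.3f} s")
```

Even then, how fast that numpy call runs depends on how the library binary was compiled, which is the generic-compilation point above.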
There are ways around a lot of this. Numba, for example, can be incredible, as can writing your own C code and carefully compiling it; the careful compile is critical. Maybe one could do the same with some stock libraries by carefully compiling them. A lot of the other stuff people talk about, such as PyPy, Cython, and Nuitka, just does not work very well in terms of speed for the effort.
Thanks. That's a big article, and some extra commentary and streamlining there is welcome!
Thanks. I love Python and have used it since about 1998. The two areas where I have always found it a little lacking are a) creating an app that you can actually give to someone, and b) computational speed when needed, so I am always interested in those two areas. A year or so ago I looked at a lot of the tools that the article describes, but one or two that were mentioned are new to me. I think I will have to try them when I get some time.