PyCon 2024 showcased a number of ways to speed up the pokey Python programming language, including sub-interpreters, immortal objects, just-in-time compilation and more.
Usually, but when it isn’t, you’ve got a bottleneck. Multithreaded performance is a major weak point if you need to do any processing that isn’t handled by one of the libraries.
Then you need to break up your problem into processes. Python doesn’t really do multi-threading (hopefully that changes with the GIL going away), but most things can scale reasonably well in a process pool if you manage the worker queue properly (e.g. RabbitMQ works well).
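A minimal sketch of that process-pool approach, using only the standard-library multiprocessing module, might look like the following. The work function, chunk sizes and pool size are invented for illustration; in a real setup a broker such as RabbitMQ would feed the workers rather than an in-memory list.

```python
# Hypothetical sketch: fan CPU-bound work out to a process pool instead of threads.
from multiprocessing import Pool

def crunch(chunk: list[int]) -> int:
    # Stand-in for a CPU-bound task that the GIL would serialize under threads.
    return sum(x * x for x in chunk)

if __name__ == "__main__":
    # Split the input into chunks so each worker process gets an independent piece.
    chunks = [list(range(i, i + 10_000)) for i in range(0, 100_000, 10_000)]
    with Pool(processes=4) as pool:
        results = pool.map(crunch, chunks)  # each chunk runs in its own process
    print(sum(results))
```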
It’s not as good as proper threading, but it’s a lot simpler and easier to scale horizontally. You can later rewrite certain parts if hosting costs become a larger issue than dev costs.
A process pool means extra copying of data between processes, which incurs a huge cost, and it’s made worse by the fact that parallel-processing-friendly workloads often consist of large amounts of data.
Yup, which is why you should try to limit the copying by designing your parallel processing algorithm around it. If you can’t, you would handle threading with a native library or something and scale vertically instead of horizontally. Or pick a different language if it’s a huge part of your app.
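One way to limit that copying, sketched here under the assumption that the payload is a NumPy array, is to place the data in a multiprocessing.shared_memory block and hand workers only the block’s name, shape and dtype. The worker and array names below are illustrative, not part of anyone’s quoted setup.

```python
# Hypothetical sketch: avoid pickling a large array to each worker by sharing it.
from multiprocessing import Pool, shared_memory

import numpy as np

def worker(args):
    name, shape, dtype = args
    shm = shared_memory.SharedMemory(name=name)  # attach to the existing block
    try:
        view = np.ndarray(shape, dtype=dtype, buffer=shm.buf)
        return float(view.sum())  # read the data in place; nothing large is copied
    finally:
        shm.close()

if __name__ == "__main__":
    data = np.arange(1_000_000, dtype=np.float64)
    shm = shared_memory.SharedMemory(create=True, size=data.nbytes)
    try:
        # Copy the data into shared memory once, then pass only metadata to workers.
        np.ndarray(data.shape, dtype=data.dtype, buffer=shm.buf)[:] = data
        args = (shm.name, data.shape, str(data.dtype))
        with Pool(processes=4) as pool:
            totals = pool.map(worker, [args] * 4)
        print(totals)
    finally:
        shm.close()
        shm.unlink()
```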
But in a lot of cases, it’s reasonable to stick with Python and scale horizontally. That has value if you’re otherwise a Python shop.