generator based concurrent programming – redux

Earlier in November, I wrote up a small concurrent programming exercise using Kamaelia. Michael Sparks commented on it and suggested some improvements, which was really cool. He's the guy who makes Kamaelia, and he helped me a few months back when I was first investigating "hey, will this work?"

So I took his code updates and wanted to share the difference in execution speed:

10 concurrent objects, 1000 loops:
python timeit.py -s "import hackysack" "hackysack.runit(10,1000)"

Originally: 10 loops, best of 3: 127 msec per loop
Updated Code: 10 loops, best of 3: 142 msec per loop

100 concurrent objects, 1000 loops:
python timeit.py -s "import hackysack" "hackysack.runit(100,1000)"

Originally: 10 loops, best of 3: 587 msec per loop
Updated Code: 10 loops, best of 3: 180 msec per loop

1000 concurrent objects, 1000 loops:
python timeit.py -s "import hackysack" "hackysack.runit(1000,1000)"

Originally: 10 loops, best of 3: 6.05 sec per loop
Updated Code: 10 loops, best of 3: 550 msec per loop

10000 concurrent objects, 1000 loops:
python timeit.py -s "import hackysack" "hackysack.runit(10000,1000)"

Originally: 10 loops, best of 3: 60.4 sec per loop
Updated Code: 10 loops, best of 3: 4.26 sec per loop
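The same measurements can also be driven from Python directly rather than from the timeit command line. This is just a sketch of the measurement harness, not the benchmark itself: since the hackysack module isn't reproduced here, a stand-in workload function takes its place.

```python
import timeit

def runit(objects, loops):
    # stand-in workload; the real benchmark calls hackysack.runit,
    # which schedules `objects` concurrent generators for `loops` passes
    return sum(i % objects for i in range(loops))

# roughly: python timeit.py -s "import hackysack" "hackysack.runit(10,1000)"
times = timeit.repeat(lambda: runit(10, 1000), repeat=3, number=10)
best = min(times) / 10  # best-of-3, per-loop, matching timeit's report
print("%.3f msec per loop" % (best * 1000))
```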

So what does this all mean? Mostly that using generators for this sort of shared, in-process concurrent execution is possible and effective; the updated code scales far better as the number of concurrent objects grows. Stackless Python still kicks its ass hands down, but the benefit you get right now is that you can use a stock Python distribution and still get the tasklet coding style that makes some things really effective.
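The tasklet style boils down to each task being a generator that yields to hand control back to a scheduler. This is a minimal sketch of that idea on stock Python; the names here (`scheduler`, `counter`) are illustrative and not Kamaelia's actual API.

```python
def counter(name, limit, log):
    # one "tasklet": does a slice of work, then yields control
    for i in range(limit):
        log.append((name, i))
        yield

def scheduler(tasks):
    # round-robin: advance each generator one step until all are exhausted
    tasks = list(tasks)
    while tasks:
        for task in tasks[:]:
            try:
                next(task)
            except StopIteration:
                tasks.remove(task)

log = []
scheduler([counter("a", 2, log), counter("b", 2, log)])
print(log)  # the two tasks interleave: a, b, a, b
```

Kamaelia adds inboxes and outboxes on top of this so components communicate by message passing rather than a shared list, but the cooperative yield-to-scheduler loop is the core of it.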

Michael even has some recent fiddling that uses forked Python processes, talking between them, to get multiple cores into the game.
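I haven't seen the details of his approach, but the general shape of it in stock Python would be something like the stdlib's multiprocessing module: fork worker processes that each do their own work and report results back over a queue. A hypothetical sketch, not Michael's code:

```python
from multiprocessing import Process, Queue

def worker(q, n):
    # each forked process computes independently on its own core,
    # then reports its result back through the shared queue
    q.put(sum(range(n)))

def run_workers(counts):
    q = Queue()
    procs = [Process(target=worker, args=(q, n)) for n in counts]
    for p in procs:
        p.start()
    for p in procs:
        p.join()
    return sorted(q.get() for _ in procs)

if __name__ == "__main__":
    print(run_workers([10, 100]))
```

Since each process has its own interpreter and GIL, the generator-based tasklets inside each process can keep their cooperative style while the processes themselves run truly in parallel.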

Published by heckj

Developer, author, and life-long student. Writes online at https://rhonabwy.com/.
