Michael Sparks has been a busy guy – obviously continuing to think and fiddle in the concurrent Python space quite a bit. He posted some thoughts and initial API concepts for software transactional memory on his blog yesterday, and then today kicked out a message to the Kamaelia list with those concepts roughly cemented into place as a stand-alone Axon implementation.
My knowledge of transactional memory is, frankly, pretty limited. At OSCON 2007 I listened, incredibly enthusiastically, to Simon Peyton-Jones talk about it. (Slides available too.) He’s a great speaker, and I got the gist of the talk, but I wasn’t ready (and still haven’t gathered myself) to make the leap into one of the new-kid languages (Haskell, in particular) to try all this stuff out.
So at a brief reading of the API and description, it looks like Michael has implemented the critter. So far, all the fiddling I’ve done has taken advantage of just a single core – in short, the little tasklets aren’t running concurrently; they’re explicitly sharing back and forth. Michael’s clearly thought further down the road there and determined that it would be nice to get all 8 cores of an Octo-Mac into the action. (Well, I expect that’s not QUITE what he thought, but that was my immediate translation of it.)
Mix Axon components across tasklets and threads and you can do it; Axon’s got all the pieces to drive that thing hard. But as soon as you do that, you lose the safety net that I counted on earlier and need something exactly like STM.
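To make that concrete, here’s a minimal sketch in plain Python (nothing Axon-specific; the counter and bump names are mine) of the kind of lost-update race you inherit the moment two real threads do read-modify-write on shared state without coordination:

```python
import threading

counter = 0  # shared state, no coordination at all

def bump(times):
    global counter
    for _ in range(times):
        value = counter      # read
        value += 1           # modify
        counter = value      # write back -- if another thread updated
                             # counter in between, its work is silently lost

threads = [threading.Thread(target=bump, args=(100_000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(counter)  # often noticeably less than 400000: updates went missing
```

The GIL only makes individual bytecodes atomic; it does nothing for a read-modify-write sequence like this, which is exactly the gap an STM-style commit step is meant to close.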
Glad to hear that it’s of interest 🙂
Incidentally, if you only use the boxes interface in Kamaelia, and limit usage of services to generator components, then Kamaelia is threadsafe (though a service can still be a thread). A service is something like a Backplane, the Selector, PygameDisplay, etc.
The border case is accessing a service from a thread. It’s *just* possible to allocate a service as it’s shutting down in a threaded environment. It’s incredibly unlikely, but possible. This is due to the use of the co-ordinating assistant tracker. (The CAT is to Kamaelia systems what environment variables are to Unix programs – or what hormones are to the nervous system in biology (ish).)
The reason for implementing STM (it needs a friendlier name really – versioned shared variables, maybe?) is to eradicate that corner case. The fact that it’s also useful for enabling multicore applications on Python VMs without a GIL is really quite neat too. (E.g. on IronPython threads can run on separate CPUs, so this provides a means for natural scaling there, I suppose.) Not sure if Mono (and hence IronPython) runs on an Octo-Mac though 🙂
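For anyone wondering what “versioned shared variables” amounts to in practice, here is a rough sketch of the idea in plain Python (the Store, Value and checkout names are made up for illustration, not the actual Axon.STM API): each checkout remembers the version it saw, and a commit only succeeds if nobody else has committed in the meantime; otherwise you re-read and retry.

```python
import threading

class ConcurrentUpdate(Exception):
    """Raised when someone else committed since our checkout."""

class Store:
    """Toy versioned-variable store: the heart of the STM idea."""
    def __init__(self):
        self._lock = threading.Lock()
        self._values = {}                      # name -> (version, value)

    def checkout(self, name, default=None):
        with self._lock:
            version, value = self._values.get(name, (0, default))
            return Value(self, name, version, value)

    def _commit(self, name, seen_version, new_value):
        with self._lock:
            current_version, _ = self._values.get(name, (0, None))
            if current_version != seen_version:
                raise ConcurrentUpdate(name)   # somebody got there first
            self._values[name] = (current_version + 1, new_value)
            return current_version + 1

class Value:
    """A checked-out copy of one variable, tagged with the version we saw."""
    def __init__(self, store, name, version, value):
        self._store, self._name, self._version = store, name, version
        self.value = value

    def set(self, value):
        self.value = value

    def commit(self):
        self._version = self._store._commit(self._name, self._version, self.value)

# Typical usage: retry on conflict instead of silently losing an update.
store = Store()
while True:
    v = store.checkout("hits", default=0)
    v.set(v.value + 1)
    try:
        v.commit()
        break
    except ConcurrentUpdate:
        continue   # someone else updated "hits"; re-read and try again
```

The point is that the lock is only held for the instant of the version check and swap, never while your code is working on the value, so readers and writers never block each other for long and a clashing write is detected rather than lost.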
Hm… If this was running on a multicore machine under IronPython, with multiple schedulers running in their own threads (and hence across CPUs) and microprocesses distributed across them, then using STM as the implementation of the CAT would allow for naturally scaling parallelism without any faff at all. Potentially without any real modification to existing Kamaelia apps at all.
Pity I’ve only got a single core 🙂 It’d be a lovely proof of concept though.
Hey… I don’t seem to have your email address stored anywhere and wasn’t sure if you’d been checking… I believe there is going to be an Anita memorial of sorts at the blogger meetup tomorrow night. So, wanted to let you know… //t
Thanks Tara