post-WWDC – more device collaboration?

It’s been two weeks since WWDC as I’m writing this. I certainly haven’t caught all the content from this year’s event, or even fully processed what I have learned. I see several patterns evolving, hear and read the various rumors, and can’t help but wonder. Maybe it’s wishful thinking, or I’m reading the tea leaves incorrectly, but a few ideas keep popping into my head. One of them is that I think we would benefit from – and have the technology seeds for – more and better collaboration between Apple devices.

One of the features built on SwiftUI this year leverages a pattern of pre-computed visualizations – the timeline idea long used for complications on the Apple Watch, now generalized to Widgets. The notion of a timeline is ingenious and a great way to work around the constrained compute capability of compact devices. The feature leans heavily on SwiftUI’s declarative views; that is, views derived from data.
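
To make that concrete, here’s a rough sketch of the timeline shape in WidgetKit’s TimelineProvider – the entry, provider, and view names are mine, just for illustration. The provider hands the system a batch of dated entries up front, and the system renders the matching SwiftUI view at each date without calling back into the app.

```swift
import WidgetKit
import SwiftUI

// A single pre-computed "frame" of the widget: a date plus the data to show.
struct MessageEntry: TimelineEntry {
    let date: Date
    let message: String
}

// The provider hands the system a batch of entries up front; the system
// renders them at the right times without waking the app for each update.
struct MessageProvider: TimelineProvider {
    func placeholder(in context: Context) -> MessageEntry {
        MessageEntry(date: Date(), message: "placeholder")
    }

    func getSnapshot(in context: Context, completion: @escaping (MessageEntry) -> Void) {
        completion(MessageEntry(date: Date(), message: "Snapshot"))
    }

    func getTimeline(in context: Context, completion: @escaping (Timeline<MessageEntry>) -> Void) {
        // Pre-compute the next few hours of entries in one shot.
        let now = Date()
        let entries = (0..<4).map { hour in
            MessageEntry(date: Calendar.current.date(byAdding: .hour, value: hour, to: now)!,
                         message: "Hour +\(hour)")
        }
        completion(Timeline(entries: entries, policy: .atEnd))
    }
}

// The view is plain declarative SwiftUI, derived entirely from the entry's data.
struct MessageWidgetView: View {
    let entry: MessageEntry
    var body: some View {
        Text(entry.message)
    }
}
```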

Apple devices have frequently worked with each other – iPhone and Apple Watch pairing, the Sidecar mechanism available between a Mac and an iPad, and of course AirPlay. They offload data, and in some cases they offload computation. There’s creative caching involved, and even the new App Clips feature seems like a variation on this theme – code that can be trusted and run for small, very focused purposes.

Apple has made some really extraordinary wearable computing devices – the Apple Watch and AirPods, leveraging Siri – a very different take from the smart speakers of Google Home and Amazon’s Alexa. This year’s update to Siri, for example, adds support for on-device translation as well as dictation.

Now extrapolate out just a bit farther…

My house has a lot of Apple devices in it – laptop, watch, AirPods, several iPads, and that’s just the stuff I use. My wife has a similar set. The wearable bits are far more constrained, and they’re with me all the time – but they’re not always able to do everything themselves. And sometimes they just conflict with each other – especially when it comes to Siri. (Go ahead – say “Hey Siri” in my house and listen to the chorus of responses.)

So what about collaboration and communication between these devices? It would be fantastic if they could share sufficient context to make the interactions even more seamless: a way to leverage the capabilities of a remote device (my phone, tablet, or even laptop) from the AirPods and a Siri request. They could potentially even hand off background tasks (like tracking a timer), or know which device has been used most recently to better infer context for a request. For example, when I’m cooking I usually want a timer on my watch, not my phone – but “Hey Siri” is not at all guaranteed to get it there.
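
Some of the raw ingredients already exist. Handoff, for instance, lets one device advertise an in-progress activity that another signed-in device can pick up – here’s a small sketch of flagging a running timer that way. The activity type string and the userInfo keys are made up for illustration, not an existing convention.

```swift
import Foundation

// Sketch only: advertise a running kitchen timer as a Handoff activity so a
// nearby device on the same account could offer to continue it.
let timerActivity = NSUserActivity(activityType: "com.example.kitchen-timer")
timerActivity.title = "Kitchen timer"
timerActivity.userInfo = [
    "durationSeconds": 600,
    "startedAt": Date()
]
timerActivity.isEligibleForHandoff = true
timerActivity.becomeCurrent()   // marks this as the activity other devices can continue
```

What’s missing is the layer above this – something that decides which device I most likely want the timer on in the first place.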

If the devices could also know about and share each other’s capabilities, the whole set would become even smarter and more effective. And depending on which rumors you’re excited by, they might be able to move heavier computation off the more power-constrained devices (the wearables) onto nearby, power-efficient processors that aren’t physically connected. That could mean generating visuals like Widgets, or perhaps the inverse – running a machine learning model against transmitted LiDAR updates to identify individual objects and their traits from a point cloud or a computed 3D mesh.
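
None of that is an announced API, but as a thought experiment, today’s MultipeerConnectivity framework already covers the discovery half: a capable device could advertise what it can do, and a constrained one could browse for a peer that fits the work it wants to offload. The service type and capability keys below are invented for illustration.

```swift
import MultipeerConnectivity

// Thought-experiment sketch, not an Apple design: a plugged-in, more capable
// device advertises its capabilities; a constrained wearable browses nearby
// and picks a peer that matches the work it wants to hand off.
let capablePeer = MCPeerID(displayName: "living-room-ipad")
let capabilities = [
    "neuralEngine": "true",
    "lidar": "false",
    "pluggedIn": "true"
]
let advertiser = MCNearbyServiceAdvertiser(peer: capablePeer,
                                           discoveryInfo: capabilities,
                                           serviceType: "dev-collab")
advertiser.startAdvertisingPeer()

// On the constrained device: look for nearby peers offering the same service.
// (A real implementation would set delegates, invite a peer into an MCSession,
// and stream work and results back and forth.)
let wearablePeer = MCPeerID(displayName: "watch")
let browser = MCNearbyServiceBrowser(peer: wearablePeer, serviceType: "dev-collab")
browser.startBrowsingForPeers()
```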

It’ll be interesting to see where this goes – I hope that distributed coordination, a means for users to allow it, and a way for developers to build for it are all somewhere in the near future.

