Adding thread safety in Swift 3

One of the pieces that I’ve brushed up against recently, but didn’t understand in any great depth, was how to make various sections of code thread-safe. There are some excellent articles out there on the topic, so if you’re looking and found this, let me provide a few references:

In June 2016 Matt Gallagher wrote Mutexes and closure capture in Swift, which was oriented towards how to optimize beyond the “obvious advice” – a great read for the more curious.

The “usual advice” referenced a StackOverflow question – What is the Swift equivalent to Objective-C’s @synchronized? – a technique that spans Swift 2 and 3. The gist is to leverage the libDispatch project, creating a dispatch queue to handle synchronization and access to shared data structures. I found another question and answer on StackOverflow a bit easier to read and understand: Adding items to Swift array across multiple threads causing issues…, and it matched more of the technique that you can spot in the Swift Package Manager code.

One of the interesting quirks in Swift 3 is that let myQueue = DispatchQueue(label: "my queue") returns a serial queue, as opposed to the concurrent queue you get by invoking DispatchQueue.global(). I’m not sure where in the formal documentation that default appears – the guides I found to Grand Central Dispatch are all still written primarily for C and Objective-C, so the mapping of the Swift libraries to those structures wasn’t at all obvious to me. I particularly liked the answer that described this detail, as it was one of the better descriptions mapping to the Apple developer guides on concurrency (which desperately need to be updated to relate to Swift, IMO).
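
To make the technique concrete, here is a minimal sketch of the serial-queue approach those answers describe – the type and method names are my own invention for illustration:

import Dispatch

// All access to `items` funnels through one serial queue, so appends
// and reads can never interleave across threads.
final class SynchronizedArray<T> {
    private var items: [T] = []
    // DispatchQueue(label:) returns a serial queue by default
    private let queue = DispatchQueue(label: "my queue")

    func append(_ item: T) {
        queue.sync { self.items.append(item) }
    }

    var count: Int {
        return queue.sync { self.items.count }
    }
}

Every caller funnels through the same queue, so the work is serialized – a little latency traded for safety.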

Circling back to Matt Gallagher’s piece, the overhead of the scope capture in passing through the closure was more than he wanted to see – although it was fine for my needs. Nonetheless, his details are also available on GitHub at CwlUtils, which has a number of other interesting pieces and tidbits that I’ll circle back to look at in depth later.

spelunking Swift Package Manager

In the slowly building sequence of my swift dev diaries, I wrote about how to set up a swift development environment, and noted some details I gleaned from the SwiftPM slack channel about how to make a swift 3.0/3.1 binary “portable”. This likely will not be an issue in another year, as the plans for SwiftPM under swift 4 include being able to make statically compiled binaries.

Anyway, the focal point for all this was an excuse to learn and start seriously using swift, and since I’ve been a lot more server-side than not for the past years, I came at it from that direction. With the help of the guys writing swift package manager, I picked a few bugs and started working on them.

Turns out Swift Package Manager is a fairly complex beast. Like any other project, a seemingly simple bug leads to a lot of tendrils through the code. So this weekend, I decided I would dive in deep and share what I learned while spelunking.

I started from the bug SR-3275 (which was semi-resolved by the time I got there), but I decided to try and “make it better”. SwiftPM uses git and a number of other command line tools, and before it uses them, it would be nice to check that they’re there – and if they’re not, to fail gracefully, or at least informatively. My first cut at this improved the error output and exposed me to some of the complexity under the covers. Git is used in several parts of SwiftPM (as are other tools), and that usage has definitely grown organically with the code, so it’s not entirely consistent. SwiftPM includes multiple binary command-line tools, and several of them use Git to validate dependencies, check them out if needed, and so forth.

The package structure of SwiftPM itself is fairly complex, with multiple separate internal packages and dependencies. I made a map because it’s easier for me to understand things when I scrawl them out.

(image: a map of the swiftpm package dependencies)

swiftpm is the PDF version (probably more readable), which I generated by taking the output of swift package --dump-package, converting it to a graphviz digraph, and rendering it with my old friend OmniGraffle.

The command line tools (swift build, swift package, etc.) all use a common package, Commands, which in turn relies on the various underlying pieces. Where I started was nicely encapsulated in the package SourceControl, but I soon realized that what I was fiddling with was a cross-cutting concern. The actual usage of external command line tools happens in Utility, the relevant component being Process.swift.

Utility has a close neighbor: Basic, and the two seem significantly overlapping to me. From what I’ve gathered, Utility is the “we’re not sure if it’s in the right place” grouping, and Basic contains the more stabilized, structured code. Process seems to be in the grey area between those two groupings, and is getting some additional love to stabilize it even as I’m writing this.

All the CLI tools use a base class called SwiftTool, which sets a common structure. SwiftTool has an init method that sets up the CLI arguments and processes the ones provided into an Options object, which then gets passed down into the relevant code that does the lifting. ArgumentParser and ArgumentBinder do most of that lifting. Each subclass of SwiftTool has its own runImpl method, using the relevant underlying packages and dependencies. run invokes the relevant code in runImpl, and if any errors get thrown, deals with them (regardless of which CLI was invoked) with a method called handle in Error – which mostly prints some (hopefully useful) error message and then exits with an error code.
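
As an illustration of that structure (my own paraphrase, not SwiftPM’s actual source), the shape is roughly:

import Foundation

class SwiftTool {
    // each subclass overrides runImpl() with the real work for its command
    func runImpl() throws {}

    // run() wraps runImpl() and routes any thrown error through one
    // common handler, so every tool reports failures the same way
    func run() {
        do {
            try runImpl()
        } catch {
            handle(error)
        }
    }

    // print a (hopefully useful) message and exit with an error code
    func handle(_ error: Swift.Error) {
        print("error: \(error)")
        exit(1)
    }
}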

My goal is to check for required files – that they exist and are executable – and report consistently and clearly if they’re not. From what I’ve learned so far, that seems best implemented by creating a relevant type conforming to Swift.Error (often an enum) and throwing when things are awry, letting the _handle implementation in Commands.Error take care of printing things consistently.
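
A minimal sketch of that shape (the names here are my own illustration, not SwiftPM’s actual types):

import Foundation

// a hypothetical error type for a missing command line tool
enum ToolError: Swift.Error {
    case missing(tool: String)
}

func checkTool(at path: String) throws {
    // throw, and let the common handler print the message consistently
    if !FileManager.default.fileExists(atPath: path) {
        throw ToolError.missing(tool: path)
    }
}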

So on to the file checking parts!

The “file exists” check seemed to have two implementations, one in Basic.Filesystem and another in Basic.Pathshims.

After a bit more reading and chatting with Ankit, it became clear that Filesystem was the desired common way of doing things, and Pathshims the original “let’s just get this working” stuff. It is slowly migrating and stabilizing over to Filesystem. There are a few pieces of the Filesystem implementation that directly use Pathshims, so I expect a possible follow-up will be to collapse those together, and finally get rid of Pathshims entirely.

The “file is executable” check didn’t yet exist, although there were a couple of notes that it would be good to have. Similar checking was in UserToolchain, including searching the environment paths for the executable. It seemed like it needed to be pulled out from there and moved to a more central location, so it could be used in any of the subclasses of SwiftTool.
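
The path-searching piece is roughly this shape – again a hypothetical sketch under my own names, not the actual UserToolchain code:

import Foundation

// walk each directory in PATH, returning the first executable match
func lookup(executable name: String) -> String? {
    let searchPath = ProcessInfo.processInfo.environment["PATH"] ?? ""
    for dir in searchPath.components(separatedBy: ":") {
        let candidate = dir + "/" + name
        if FileManager.default.isExecutableFile(atPath: candidate) {
            return candidate
        }
    }
    return nil
}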

At this point, I’ve put up a pull request to add in the executable check to Filesystem, and another to migrate the “search paths” logic into Utility. I need to tweak those and finish up adding the tests and comments, and then I’ll head back to adding in the checks to the various subclasses of SwiftTool that could use the error checking.

Sabbatical

I took this panoramic over a month ago now, when the crocuses weren’t even opening yet. A pretty spectacular weekend, and the view of Lake Union and east to the Cascades was amazing.

It seemed like a damn good photo to start my sabbatical with. As I’m posting this, I’m just beginning a sabbatical. I’ve been wanting to do this for over a decade, and this year it’s feasible. Learning, Playing, and Travel are the top contenders for my time. I have been making lists for nearly three months about “what I could do” – I’ll never get to it all, but it makes a good target.

I’ve started some art classes – which I’m inordinately nervous about, but have been trying to work on for months now. I figured a full-out class would enforce some structure around what I’m doing, and hopefully lead to some interesting things. I have been loving the new camera on the iPhone 7 and posting photography experiments to Facebook and Twitter. It’ll be interesting to see if I can fuse a few different arts together.

I can’t even begin to think of not being a geek, so I am also contributing to a couple of open source efforts. I’m now a contributor to Apple’s Swift project: just fixing a bug or two at this point, but it’s a start, and it gives me a reason to dig more into the language. I may do the same with Kubernetes a bit later, and expand on learning the Go language.

Travel is also in the plan – although like usual I’ll post more about where I’ve been than “where I am”, so don’t expect updates until after the fact for the travel. Lots of things to see on the horizon…

HOW TO: making a portable binary with swift

Over the weekend I was working with Vapor, trying it out and learning a bit about the libraries. Vapor leverages a library called LibreSSL to provide TLS to web services, so when you compile the project, you get a binary and a dynamic library that it uses.

The interesting part here is that if you move the directory that contains the “built bits”, the program ceases to function, reporting an error that it can’t find the dynamic library. You can see this in my simple test case, with the code from https://github.com/heckj/vaportst.

git clone https://github.com/heckj/vaportst
swift build -c release
mv .build/release newlocation
./newlocation/App version

then throws an error:

dyld: Library not loaded: /Users/heckj/src/vaportst/.build/release/libCLibreSSL.dylib
 Referenced from: /Users/heckj/src/vaportst/newlocation/App
 Reason: image not found
Abort trap: 6

It turns out that with Swift 3 (and the Swift 3.1 that’s coming), the build embeds the static path to the dynamic library in the binary, and there’s an interesting tool called install_name_tool that can change that from a static path to a dynamic one.

Norio Nomura was kind enough to give me the exact syntax:

install_name_tool -change /Users/heckj/src/vaportst/.build/release/libCLibreSSL.dylib @rpath/libCLibreSSL.dylib .build/release/App
install_name_tool -add_rpath @executable_path .build/release/App

This tool is specific to MacOS; the Linux dynamic loader works slightly differently. On Linux, as long as the dynamic library is in the same directory as the binary, or the environment variable LD_LIBRARY_PATH is set to the directory containing the dynamic libraries, it’ll get loaded just fine.

Swift today doesn’t provide a means to create a statically linked binary (it is an open feature request: SR-648), although it looks like that may be an option in the future – the comments in the bug show progress towards this goal. Static linking makes the whole issue of dynamic loading moot, at the cost of larger binaries – an incredible boon when dealing with containers, particularly looking towards running “server side swift”.

Kubernetes community crucible

I’ve been watching and lurking on the edges of the Kubernetes community for this past cycle of development. We are closing in on the feature freeze for the 1.6 release, and it is fascinating to watch the community evolve.

These next several releases are a crucible for Kubernetes as a project and community. They have moved from Google to the CNCF, but the responsibility transition is still in progress. They have made their first large moves and efforts to handle the explosive growth of interest in the project and the corresponding expansion of the community. The Kubernetes 1.6 release was supposed to be “a few features, but mostly stability” after the heavy changes that led into 1.5, recognizing that a lot of change has hit and stabilization is needed. This is true technically as well as for the community itself.

The work is going well: the SIGs are delegating responsibility pretty effectively, and most everyone is working to pull things forward. That is not to say it is going perfectly. The crucible isn’t how the project dealt with the growth, but how it deals with the failures and faults of the efforts to handle that growth. One highlight: although 1.6 was supposed to be about stability, test flakiness has risen again. The conversation in last week’s community meeting highlighted it, and discussed what could be done to shift back to reliability and stability as a key ingredient. Other issues include the project-wide impacts that SIGs can have; resolving those has been much of the community project and product manager focus over the past months.

This week’s DevOpsWeekly newsletter also highlighted some end user feedback: one post enthusiastically pro-Kubernetes, another not so much. The two posts are sizzling plates of feedback goodness for the future of Kubernetes – explanations of what’s effective, what’s confusing, and what could be better – product management gold.

  • Thoughts on Kubernetes by Nelson Elhage is an “enthusiastic for the future of the project” account from a new user who clearly did his research and understood the system, pointing out rough edges and weak points on the user experience.
  • Why I’m leaving Kubernetes for Swarm by Jonathan Kosgei highlights the rougher-than-most edges that exist around the concept of “ingress” when Kubernetes is used outside of the large cloud providers, noting that Docker Swarm has stitched together a pretty effective end-user story and experience here.

I think the project is doing a lot of good things, and I have been impressed with the efforts, professionalism, and personalities of the team moving Kubernetes forward. There is a lot of passion and desire to see the right thing done, and a nice acknowledgement of our myopic tendencies at times when holistic or strategic thoughts are needed. I hope the flaky tests get sorted without as much pain as the last round, and that the community grows from the combined efforts. I want to see them alloyed in this crucible and come out stronger for it.

Benchmarking etcd 3.0 – an excellent example of how to benchmark a service

Last week, the CoreOS team posted a benchmark review of etcd 3.0 on their blog. Gyu-Ho Lee was the author, and clearly the primary committer to the effort – and he did an amazing job.

First and foremost, the benchmark is entirely open, clear, and reproducible. All the code for the effort is in a git repository dedicated to the purpose, dbtester; the test results (also in the repository) and how to run the tests are all detailed there. The benchmarking code was created to benchmark this specific kind of service, and was used to compare etcd, ZooKeeper, and Consul. CoreOS did a tremendous service making this public, and I hope it gave them a concrete dashboard for their development improvements while they iterated on etcd to 3.0.

What Gyu-Ho Lee did within the benchmark is what makes this an amazing example: he reviewed the performance of the target against multiple dimensions. Too many benchmarks, especially ones presented in marketing materials, are simple graphs highlighting a single dimension – and utterly opaque as to how they got there. The etcd3 benchmark reviews etcd, ZooKeeper, and Consul against multiple dimensions: memory, CPU, and disk I/O. The raw data that backed the blog post is committed into the repo under “test-results”. It is reasonably representative (writing 1,000,000 keys to the backend) and tracked time to complete, memory consumed, CPU consumed, and disk I/O consumed during the process.

I haven’t looked at the code to see how re-usable it might be – I would love to see more benchmarks with different actions, and a comparison to how this operates in production (in cluster mode) – but these wishes are just variations on the theme, not a complaint about the work done so far.

As an industry, as we build more with containers, this kind of benchmarking is exactly what we need. We’re composing distributed services now more than ever, and knowing the qualities of how these systems or containers will operate is as critical as any other correctness validation efforts.

How to measure your open source project

Before I get into the content, let me preface this: it is a WORK IN PROGRESS. I have been contributing and collaborating using open source for quite a while now, and my recent work is focused on helping people collaborate using open source, as well as on how to host and build open source projects.

I am a firm believer in “measure what you want to do” – to the point that even indirect measures can be helpful indicators. Most of this article isn’t something you want to react to on a day-to-day basis – this is about strategic health and engagement. A lot of the information takes time to gather, and even more time to show useful trends.

Some of this may be blindingly obvious, and hopefully a few ideas (or more) will be useful. These nuggets are what I’ve built up to help understand the “success” of an open source project I am involved with or running. I did a number of Google searches for similar topics when I was establishing my last few projects – there weren’t too many articles or hints out there like this, so hopefully this is a little “give back”.

There are two different – related but very separate – ways to look at your project in terms of “success”:

  • Consumption
  • Contribution

Consumption

Consumption is focused around “are folks effectively using this project”. The really awesome news in this space is that there’s been a lot of thinking about these kinds of measures – it’s the same question that anyone managing a product has been thinking about, so the same techniques to measure product success can be used.

If your project is a software library, then you will likely want to leverage the public package management tooling that exists for the language of your choice: PyPI, CPAN, NPM, the Swift Package Catalog, Go Search, etc. The trend of these package managers is heading towards relying on GitHub, Bitbucket, and GitLab as hosting locations for the underlying data.

Some of the package indices provide stats – npm, for example, can track downloads through NPM for the past day, week, and month. A number of the package indices also have some measure of popularity, where they try to relate consumption by some metric (examples: http://pypi-ranking.info/alltime, NPM’s count of how many dependents and stars a package has, and go-search’s count of stars on the underlying repo).

The underlying source system, like GitHub, often has some metrics you can look at – and may have the best detail. GitHub “stars” are a social indicator that’s useful to pay attention to, and any given repository has graphs that provide some basic analytics as well (although you need to be an admin on that repo to see them, under the URI graphs/traffic). GitHub also recently introduced the concept of “dependents” in its graphs – although I’ve not seen it populated with any details as yet. Traffic includes simple line charts for the past two weeks covering:

  • Clones
  • Unique Cloners
  • Views
  • Unique Visitors

Documentation hosted on the Internet is another excellent place to get some metrics, and is a reasonable correlation to people using your library. If you’re hosting documentation for your project through either GitHub Pages or ReadTheDocs, you can embed a Google Analytics code so that you can get page views, unique users per month, and other standard website analysis from those sites.

I’ve often wished to know how often people were looking at just the README on the front page of a GitHub repository, without any further setup of docs or GitHub Pages, and Ilya Grigorik made a lovely solution in the form of ga-beacon (available on github). You can use this to host a project on Heroku (or your PaaS or IaaS of choice), and it’ll return an image and pass the traffic along to Google Analytics to get some of those basic details.

If your project is a more complete service – most specifically, if you have a UI or a reference implementation with a UI – that offers another possibility for collecting metrics. Libraries from a variety of languages can send data to Google Analytics, and with many web-based projects it’s as easy as embedding the Google Analytics code. This can get into a bit of a grey space about what you’re monitoring and sending back. In the presentation entitled “Consider the Maintainer”, Nadia Eghbal cited a preference towards more information, which I completely agree with – and if you create a demo/reference download that includes a UI, it seems quite reasonable to track metrics on that reference instance’s UI to know how many people are using it, and even what they’re using. Don’t confuse that with evidence about how people are using your project, though – stats from a reference UI are far more about people experimenting and exploring.

There is also StackOverflow (or one of the Stack Exchange variants) where your project might be discussed. If your project grows to the level of getting questions and answers on StackOverflow, it’s quite possible to start seeing tags either for your project or for specifics of your project – at least if the project encourages Q&A there. You can get basic stats per tag – “viewed”, “active”, and “editors” – as well as the number of questions with that tag, which can be a reasonable representation of consumption as well.

Often well before your project is big enough to warrant a StackOverflow tag, Google will know about people looking for details about your project. Leverage Google Trends to search for your project’s name; you can even compare it to related projects if you want to see how you’re shaping up against possible competition, although pinning it down by query terms can be a real dark art.

Contribution

Measuring contribution is a much trickier game, and often more indirect, than measuring consumption. The obvious starting point is code contributions back to your project, but there are other aspects to consider: bug reports, code reviews, and the conversational aspects – helping people through a mailing list, IRC, or Slack channel. When I was involved with OpenStack there was a lot of conversation about what it meant to be a contributor and how to acknowledge it (in that case, for the purposes of voting on technical leadership within the project). Out of those conversations, the Stackalytics website was created by Mirantis to report on contributions across quite a large number of dimensions: commits, Blueprints (OpenStack’s feature coordination document) drafted and completed, emails sent, bugs filed and resolved, reviews, and translations.

Mirantis expanded Stackalytics to cover a number of ancillary projects: Kubernetes, Ansible, Docker, Open vSwitch, and so on. The code for Stackalytics itself is open source, available on github at https://github.com/openstack/stackalytics. I’m not aware of other tooling that provides this same sort of collection and reporting – it looks fairly complex to set up and run, but if you want to track contribution metrics, it is an option.

For my own use, the number of pull requests submitted per week or month has been interesting, as has the number of bug reports submitted per week or month. I found it useful to distinguish between “pull requests from the known crew” and external pull requests – trying to separate a sense of “new folks” from the existing core. If your project has a fairly rigorous review process, then the time between creation and close (or merge) of a PR can also be interesting. Be a little careful with what you imply from it, though, as causality for time delays is really hard to pin down and very dependent on your process for validating a PR.

There’s a grey area for a few metrics that lie somewhere between contribution and consumption – they don’t really assert one or the other cleanly, but they are useful in trying to track the health of a project: mailing lists and chat channels. Consider the number of subscribers to project mailing lists or forums, or the number of people subscribed to a project Slack channel (or IRC channel). The conversation interfaces tend to be far more ephemeral – the numbers can vary up and down quite a bit, by minute and hour as much as over days or weeks – but within that ebb and flow you can gather some trends of growth or shrinkage. It doesn’t tell you about quality of content, but just that people are subscribed is a useful baseline.

Metrics I Found Most Useful

I’ve thrown out a lot of pieces and parts, so let me also share what I found most useful. In these cases, the metrics were all measured weekly and tracked over time:

  • Github repo “stars” count
  • Number of mailing list/forum subscribers
  • Number of chat channel subscribers
  • Number of pull requests submitted
  • Number of bugs/issues filed
  • Number of clones of the repository
  • Number of visitors of the repository
  • Number of “sessions” for a documentation website
  • Number of “users” for a documentation website
  • Number of “pageviews” for a documentation website
  • Google Trend for project name

I don’t have any software to help that collection – a lot of this is just my own manual collection and analysis to date.

If you have metrics that you’ve found immensely valuable, I’d love to know about them and what you found useful from them – leave me a comment! I’ll circle back to this after a while and update the list of metrics from feedback.

Updates:

Right after I posted this, I saw a tweet from Ben Balter hinting at some of the future advances that Github is making to help provide for Community Management.

And following on that, Bitergia poked me about their efforts to provide software development analytics, and in particular their own open source effort to provide software that collects metrics on what it means to collaborate: Grimoire Lab.

I also spotted an interesting algorithm that was used to compare open source AI libraries based on github metrics:

Aggregate Popularity = (30*contrib + 10*issues + 5*forks)*0.001
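
In code form (my own transcription of the formula above, in Swift):

func aggregatePopularity(contributors: Int, issues: Int, forks: Int) -> Double {
    // the quoted weights: 30 per contributor, 10 per issue, 5 per fork
    return Double(30 * contributors + 10 * issues + 5 * forks) * 0.001
}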

Setting up a Swift Development Environment

Over the holidays I thought I should start learning something new – a new language or something – and I’ve been wanting to get back to doing some iOS development for a while, so it seemed a good time to start digging into Swift. While I’m making all the mistakes and learning, I thought I’d chronicle the effort. I am taking a page from Brent Simmons’ Swift Dev Diaries, which I found immensely useful and informative. So this is the first of my swift dev diaries.

To give myself something concrete to work on, I thought I might help out with open source Swift itself. The Swift Package Manager caught my eye as a starting place. For getting-started resources, we have the github repo http://github.com/apple/swift-package-manager, the mailing list swift-build-dev, and a bug tracker. And then on the mailing list, I saw there was also a slack channel specifically for swiftpm (unofficial and all that).

The bug I decided to tackle as a starting point looked reasonably constrained: SR-3275 – originally a crash report, but updated to reflect a request for improved output about why a failure occurred. The bug report led me to want to verify my update on MacOS and Linux, which prompted this post – it includes a bit of “how to” for developing Swift Package Manager on Linux as well. The same setup could be used for any “server side swift”.

There are instructions on how to set things up, but they’re slightly behind as of the latest development snapshot. I set up a Vagrant based development environment, starting with IBM-Swift’s initial server-side swift work, and updated it to make a development snapshot work. The work itself is under a branch called swift-dev-branch, updated to start with Ubuntu 16.04 and adding in a number of dependencies that aren’t documented in the swift.org documentation as yet.

If you want to try out the Vagrant setup, you can grab my Vagrantfile from git and give it a whirl – I’ll happily take updates. You’ll need to have Vagrant and Virtualbox pre-installed.

git clone https://github.com/heckj/vagrant-ubuntu-swift-dev
cd vagrant-ubuntu-swift-dev
git checkout swift-dev-branch
vagrant up && vagrant ssh

It will take a bit of time to build, downloading the base image and snapshot, but once built it is pretty fast to work with. If for some reason you want to destroy it all and start from scratch:

vagrant destroy -f && vagrant box update && vagrant up

should do the trick, wiping the local VM and starting afresh.

As a proof of concept, I used it to run the swiftpm build and test sequence:

vagrant ssh:

mkdir -p ~/src
cd ~/src/
git clone https://github.com/apple/swift-package-manager swiftpm
cd ~/src/swiftpm
Utilities/bootstrap test

While I was poking around at this, Ankit provided a number of tips through the Slack channel and shared his development setup/aliases, which utilize a Docker image. I have some of those files in my repository as well now and am trying them out – but I don’t have it all worked out into an “easy flow” as yet. Perhaps a topic for a future swift dev diary entry.

I also wanted to mention a blog post on Bhargav Gurlanka’s SwiftPM development workflow, which was likewise useful. It focused on how to use rsync to replicate data between MacOS and a Virtualbox instance to run testing on both MacOS and Ubuntu/Linux based swift.

One of the better hints I got was to use the build product from bootstrap directly for testing, which can operate in parallel and hence run faster:

.build/debug/swift-test --parallel

That speeds up a complete test run quite considerably: on my old laptop, it improved from 3 minutes 31 seconds (running via bootstrap) to 1 minute 51 seconds – with a lot less visual output, too.

I also found quite a bit of benefit in using swift-test to list the tests and to run a specific test while I was poking at things:

.build/debug/swift-test -l
.build/debug/swift-test -s FunctionalTests.MiscellaneousTestCase/testCanKillSubprocessOnSigInt

I was doing a fair bit of that while digging through the code.

I haven’t quite got “using Xcode with a swift development toolchain snapshot” nailed down as yet – I keep running into strange side issues that are hard to diagnose. It’s definitely possible; I’ve just not been able to make it repeatably consistent.

The notes at https://github.com/apple/swift-package-manager#development are a good starting point, and from there you can use the environment variable “TOOLCHAINS” to let Xcode know to use your snapshot. The general pattern that I’ve used to date:

  1. download and install a swift development snapshot from https://swift.org/download/#releases
  2. export TOOLCHAINS=swift
  3. Utilities/bootstrap --generate-xcodeproj
  4. open SwiftPM.xcodeproj

and go from there. Xcode uses its own locations for build products, so make sure you invoke the build from Xcode (“Build for Testing”) prior to invoking any tests.

If Xcode was open before you invoked “export TOOLCHAINS=swift” and then opened the project, you might want to close it and relaunch from the open command as I did above – that makes sure the environment variables are present and respected in Xcode while you’re working with the development toolchain.

Google Project Zero delivering benefits

Most people won’t know about Google Project Zero – but it’s worth knowing about. I learned about it a few months ago, although the effort was started back in 2014, after the now-infamous Heartbleed security vulnerability. It is an effort, funded and hosted by Google, to identify a particularly nasty set of bugs: low-level software exploits. The Wikipedia article on Project Zero is pretty good for the history.

This morning as I was applying a software update, I scanned through the release notes, and quite a number of the security patches I reviewed were informed by CVEs generated or found through Project Zero. As an effort to support and bolster the lowest level of software infrastructure, I’ve got to applaud it.

micro.blog: Looking forward for a new way to have a conversation

I’ve had my twitter account since 2007. When I joined twitter, it was a lovely way to step forward into conversations with a bunch of technologist friends who were mostly Mac (and soon iOS) developers. For me, it was a pipeline to the thoughts of friends and cohorts. It was really similar to the kind of experience I found stopping by the pub at a developer conference: lots of scattered conversations forming, happening, and breaking down again. I described it as “tea party conversations” – where you didn’t need, or even want, to hear everything, but you could get a bit of a feeling for what was being talked about, join in on the conversation, and step back out when you were done.

That lasted a good long while. During that time I used blogs and RSS readers (like NetNewsWire) to keep up with the longer form content. Tutorials, opinion pieces, demo links, whatever – stuff that didn’t fit in a small text message style conversation. I was using several devices at once, so when the landscape changed and most RSS reading consolidated onto Google Reader, I went with it. I still used NetNewsWire, but synced all the data with Google Reader and also used the Google Reader web application.

Longer form writing started to die down – Tumblr rolled through, lots of folks posted to Facebook where they might have written something longer on a blog, and Twitter kept being prolific at getting me pretty good conversation points. More and more, twitter became the place I learned about events, found links to longer form articles to read, and kept track of what was happening in quite a variety of technical ecosystems.

Then in 2013, Google kicked a whole bunch of folks (including me) in the nuts: it shut down Google Reader. They gave a lot of time to transition, and they did it well and communicated fairly, but in hindsight that shutdown really collapsed my reliance on RSS as a way of getting to longer form writing. I kept up with Twitter, LinkedIn was doing some technology news sharing, and I wandered and struggled with tracking the news and information I was interested in – but was mostly successful.

In the intervening years, Twitter has arguably become a cesspool (so, to be fair, has Facebook). Harassment, fake news, overt advertising – and in both cases the services’ algorithms showing me only portions of what I wanted: what they thought I wanted to hear. It became more of an echo chamber than I’m comfortable with. Where it was once the pub where I could step into a conversation, learn something interesting or hear about something I should check out later, and head on – it became incoherent shouting over crowds.

I intentionally followed a wide variety of people (and companies); I wanted the diversity of viewpoints. But with the evolving algorithms, the injection of advertising that looks like my friends’ messages, and generally just the sheer amount of noise, it became overwhelming. The signal to noise ratio is really quite terrible. I still get a few nuggets and have an interesting conversation with it, but it is a lot more rare. Friends have dropped from twitter due to their frustration with it – harassment issues, for some of them – and others have locked down their posts, understandably so.

So when I heard that Manton Reece was taking a stab at this game with micro.blog – getting back to conversations using open-web technologies – I was thrilled. I’ve been quietly following him for years; he’s a level headed guy with interesting opinions. A good person to listen to, in my “tea party conversations” metaphor. Manton put his project idea up on Kickstarter, and got the response I think it (and he) deserved: resounding. It is fully funded, a couple of times over as I write this. I backed it – as did over 1750 other folks.

Even though it’s funded, if you’re in the same situation I am with twitter and facebook, take a look at the micro.blog kickstarter project and consider chipping in.

I don’t know if it’ll be a path to solving some of the problems I’ve experienced with Twitter, and to a lesser extent with Facebook. I want a place where I can carry on conversations that isn’t tightly bound into one of those companies. I want tooling where there isn’t a really high barrier to getting different opinions and thoughts on topics I’m interested in. I want to “turn down the volume” quite a few notches. I hope that micro.blog does some of that.

I’m fairly confident that the people I started following on twitter back in 2007 will be in micro.blog, sharing their opinions (and blogs). I’ll be on it, and sharing my thoughts and opinions, and hopefully engaging in some interesting conversations.