I started my first job at the University from which I graduated. It was a strange time: a light economic depression, and “peace” had just broken out. The Cold War was effectively over, the Berlin Wall had come down, and it wouldn’t be long until the Soviet Union dissolved into its component states. I had excelled in my studies of CMOS chip design, but never used that skill in any job. What was available instead was software and system administration, and I started with an evening gig at a campus “help desk.” In that environment, I had access to clusters of NeXT computers, SGI Toasters running IRIX, some standalone IBM RS/6000 machines running AIX, a VM/CMS mainframe, and rooms and rooms of PCs and Macs. You couldn’t ask for more fertile ground for learning.
One of the CS professors doubled as an early job mentor (as well as being “the security guy” for the University): Greg Johnson. When I was first stepping into unix and trying to get a handle on it, he offered some sage advice that stuck with me:
Reading man pages is a skill. It’s something you can practice at and learn, and there is a treasure trove of information there.
For those who don’t know, man pages (short for manual pages) were, and still are, the built-in documentation for unix. Accessible through the man command, they cover everything from how to use commands on the command line to details of how various software libraries function. They’re easy to find: almost any Linux OS or Mac has them installed. Just open a terminal or SSH session, run a command like “man ls”, and you’ll get a gist of what’s there, as well as why reading them is a skill.
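If you want to try this yourself, a few common invocations sketch the territory (the specific pages and keywords here are just illustrative examples, not anything particular to this story):

```shell
# Show the manual page for the ls command
man ls

# Search all man page descriptions for a keyword
man -k copy

# Manual sections disambiguate names: section 1 is commands,
# section 2 is system calls, section 3 is library functions.
# This asks for the C library's printf, not the shell command:
man 3 printf
```

The `man -k` search (also available as the apropos command) is often the entry point: you start with a word, land on a page, and follow the “SEE ALSO” references from there.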
I took Greg’s advice to heart and focused on learning to read man pages. It may have been the moral equivalent of learning to read by going through a dictionary, but it worked. I wallowed in man pages, following down leads and terms referenced in that green-screen text output. It started as a confused mess of words that didn’t mean anything. After seeing the terms repeatedly, and wrapping some context around them, I was able to sort out what they meant.
Twenty-five years later, I find myself learning by the same process. A different set of confusing words, and a different topic to learn, but things are coming into focus. Over the past six months (well, since DeepMind won the Go matches against Lee Sedol) I’ve wanted to understand machine learning and the current state of research. The topic is so new, and unfolding so rapidly, that other forms of documentation are pretty limited. There are articles and how-tos out there for Keras, TensorFlow, and others, but understanding the topic is different from learning how to use the tools.
Wallowing in academic research papers is as messy as man pages were in the “unix wars” era. The source these days is arXiv, which hosts scientific papers in prepublication form as a common repository for academic research and citation. It has a handy web interface that lets you list the papers posted in the past week.
Every weekend, I refresh that page in a browser, browse through the titles, and grab and stash PDFs into Evernote for later reading. Every now and then, I find a “survey of…” paper, which is a gold mine of references. From there, either arXiv or Google Scholar can help speed up finding those additional papers or areas of interest. Wikipedia has been a godsend in turning jargon-laden terms into plain English. An AI issue such as the vanishing gradient problem gets a lot more tractable on Wikipedia, where volunteers explain it far more simply than the strict correctness of academic papers allows.
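The vanishing gradient problem is one of those terms that turns out to be simpler than the jargon suggests. A toy sketch (my own illustration, not from any particular paper): the chain rule multiplies per-layer derivatives together, and a sigmoid’s derivative is at most 0.25, so the gradient reaching the early layers of a deep sigmoid network shrinks toward zero.

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def sigmoid_grad(x):
    s = sigmoid(x)
    return s * (1.0 - s)  # never larger than 0.25

# Backpropagate through a 20-layer stack of sigmoids (weights fixed
# at 1.0 for simplicity). The gradient at the first layer is the
# product of all the per-layer derivatives along the way.
x = 0.5
grad = 1.0
for layer in range(20):
    grad *= sigmoid_grad(x)
    x = sigmoid(x)

print(grad)  # vanishingly small after 20 layers
```

Each factor is below 0.25, so twenty layers in, the product is smaller than 0.25 to the twentieth power, which is why early layers in deep sigmoid networks barely learn at all.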
I hit one of those “magic” papers just last weekend. Entitled “A growing long-term episodic & semantic memory,” it was one of the papers that stitched together a lot of concepts that had been floating loosely around in my head.
I don’t understand it sufficiently to explain it simply, but I’m getting there…