Deep learning and AI research

Several years ago the website OReilly.com was my go-to site for technical news, information, and useful links. Enter the era of Twitter, combined with the over-selling-of-stuff website design O’Reilly was experimenting with, and I fell away from the site. I was tired of being inundated with advertising to buy training, listen to XYZ podcast or video, etc. I wanted written, short-form articles. About the only thing I kept an eye out for was Nat Torkington’s “4 short links” curated bits of technical tastiness, which O’Reilly reorganized and tossed under the banner of Radar. I get it – sites change, adapt, and we fell apart; it’s all good. I used to read lots of sites, although the slow fall-away of RSS as a standard (with the shutdown of Google Reader as a banner moment) has really stalled my acquisition of interesting content from peers. Frankly, Twitter kind of sucks as a replacement in that regard, more so lately than ever…

So it was with some excitement that I ran across the articles under O’Reilly Radar: Artificial Intelligence, and most specifically one article written by Beau Cronin in July of last year: In search of a model for modeling intelligence. The article was excellent, but what got me really excited was the reference to additional writing and research, in particular How to Grow a Mind: Statistics, Structure, and Abstraction. That paper is a true gem to me – it’s an opinionated review of some of the specific problem spaces in AI research in which I’m deeply interested: representation and the process of “learning” knowledge.

This paper, along with another I found today (Representation Learning: A Review and New Perspectives, updated in April 2014), has provided me with some deep reading that will keep me busy through the weekend and, almost more interestingly, a plethora of citations to try to track down and understand. I love review papers, even (especially?) opinionated ones, in that they are the connectors to all this great potential information. Concepts deep, shallow, and sometimes just seriously screwed-up wrong, but pushing out on the borders of our knowledge.

So I think at this point, if you’re interested in AI research, or picking up on a little of the hype and rabid blather that is the media’s current font for AI, you could do a hell of a lot worse than the AI Topic at OReilly.com. Maybe I’ll fall away from the site again as it tries to sell me screencasts and tutorials on how to run Spark machine learning programs on a Raspberry Pi. In the meantime, there are some good links there.

It’s a creek worth panning.

