One of the first modern neural networks makes its way to GitHub.
AI is one of the biggest and most all-consuming zeitgeists I’ve ever seen in technology. I can’t even search the internet without being served several ads for potential AI products, including the one that’s still begging for permission to run on my devices. AI may be everywhere we look in 2025, but the kind of neural networks now associated with it are a bit older. Researchers were dabbling with this sort of AI as far back as the 1950s, though it wasn’t until 2012 that it kicked off the current generation of machine learning with AlexNet, an image recognition model whose code has just been released as open source by Google and the Computer History Museum.
We’ve seen many different ideas of AI over the years, but generally the term refers to computers or machines with self-learning capabilities. While the concept has been explored by science-fiction writers since the 1800s, it’s far from being fully realised. Today most of what we call AI refers to language models and machine learning, as opposed to unique individual thought or reasoning by a machine. These deep learning techniques essentially involve feeding computers large sets of data to train them on specific tasks.
The idea of deep learning also isn’t new. In the 1950s researchers like Frank Rosenblatt at Cornell had already created a simplified neural network, the perceptron, built on foundational ideas similar to those we use today. Unfortunately the technology hadn’t quite caught up to the idea, and the approach was largely rejected. It wasn’t until the 1980s that machine learning seriously resurfaced.
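To give a sense of how simple those early ideas were, here is a minimal sketch of a Rosenblatt-style perceptron in NumPy. The data, learning rate, and the AND task are purely illustrative, not drawn from the original 1950s work:

```python
import numpy as np

# Minimal perceptron sketch: a single neuron with a step activation,
# trained with Rosenblatt's error-correction rule. All numbers here
# are illustrative.

def train_perceptron(X, y, epochs=20, lr=0.1):
    """Learn weights w and bias b so that step(w.x + b) matches y (0/1)."""
    w = np.zeros(X.shape[1])
    b = 0.0
    for _ in range(epochs):
        for xi, target in zip(X, y):
            pred = 1 if np.dot(w, xi) + b > 0 else 0
            # Nudge the weights toward the correct answer when wrong.
            w += lr * (target - pred) * xi
            b += lr * (target - pred)
    return w, b

# Toy example: learn the logical AND of two inputs.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
y = np.array([0, 0, 0, 1])
w, b = train_perceptron(X, y)
print([1 if np.dot(w, xi) + b > 0 else 0 for xi in X])  # expected [0, 0, 0, 1]
```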
In 1986, Geoffrey Hinton, David Rumelhart and Ronald J. Williams published a paper on backpropagation, an algorithm that works out how much each weight in a neural network contributed to the error in its output, so those weights can be adjusted to reduce the overall cost.
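The rough idea can be sketched in a few lines of NumPy. The tiny network below, its layer sizes, learning rate, and the XOR task are all illustrative assumptions on my part; they have nothing to do with the 1986 paper’s experiments or with AlexNet itself, but they show the backward pass that made multi-layer networks trainable:

```python
import numpy as np

# Minimal backpropagation sketch: a two-layer network trained on XOR.
# The point is the backward pass, which assigns each weight its share
# of the error so gradient descent can reduce the cost.

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1, b1 = rng.normal(size=(2, 4)), np.zeros(4)   # input -> hidden
W2, b2 = rng.normal(size=(4, 1)), np.zeros(1)   # hidden -> output

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 0.5
for _ in range(10000):
    # Forward pass.
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)

    # Backward pass: the chain rule gives the gradient of the squared-error
    # cost with respect to each layer's weights and biases.
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)

    # Gradient-descent step on every parameter.
    W2 -= lr * h.T @ d_out
    b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * X.T @ d_h
    b1 -= lr * d_h.sum(axis=0)

print(np.round(out).ravel())  # should approach [0, 1, 1, 0] once trained
```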