by madisonmay on 9/22/16, 2:56 PM with 31 comments
by antirez on 9/22/16, 4:28 PM
You'll see it learning to recognize the digits. You can print the digits it misses and see that they are often genuinely hard even for humans, or you'll see why it can't understand a digit that's trivial for you (for instance, it's an 8 but the lower circle is very small).
Also, back propagation is an algorithm that's simple to develop an intuition about. Even if you forget the details N years later, the idea is one of those things you'll never forget.
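To make that intuition concrete, here's a minimal sketch of one backprop/gradient-descent step for a single sigmoid neuron with squared-error loss (all names, inputs, and the learning rate are illustrative):

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def train_step(w, b, x, target, lr=0.5):
    """One gradient-descent step for a single sigmoid neuron, squared-error loss."""
    z = sum(wi * xi for wi, xi in zip(w, x)) + b
    y = sigmoid(z)
    # Chain rule: dLoss/dz = (y - target) * sigmoid'(z), and sigmoid'(z) = y * (1 - y)
    delta = (y - target) * y * (1.0 - y)
    # Each weight moves opposite its gradient, which is delta times its input
    new_w = [wi - lr * delta * xi for wi, xi in zip(w, x)]
    new_b = b - lr * delta
    return new_w, new_b

# Repeatedly nudge the weights so the output for this input approaches the target 1.0
w, b = [0.1, -0.2], 0.0
for _ in range(1000):
    w, b = train_step(w, b, [1.0, 0.5], 1.0)
```

The whole "learning" step is just that chain-rule product pushed backwards; a multi-layer network repeats it layer by layer.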
by nkozyra on 9/22/16, 3:46 PM
Knowing you can pretty quickly whip up a KNN or ANN in a few hundred lines of code or fewer is one of the more eye-opening parts of delving in. For the most part, supervised learning follows a pretty reliable path; each algorithm obviously varies in approach, but I know I originally thought "deep learning? ugh, sounds abstract and complicated" before realizing it was all just a deep ANN.
Long story short: dig in. It's unlikely to be as complex as you think. And if you've ever had an algorithms class (or worked as a professional software dev) none of it should be too daunting. Your only problem will be keeping up the charade if people around you think ML/AI is some sort of magic.
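For instance, a usable k-nearest-neighbours classifier really does fit in a couple of dozen lines (the toy data below is made up):

```python
import math
from collections import Counter

def knn_predict(train, query, k=3):
    """Classify `query` by majority vote among the k nearest training points.
    `train` is a list of (features, label) pairs."""
    # Sort all training points by Euclidean distance to the query
    dists = sorted((math.dist(x, query), label) for x, label in train)
    # Majority vote among the k closest
    votes = Counter(label for _, label in dists[:k])
    return votes.most_common(1)[0][0]

train = [((0.0, 0.0), "a"), ((0.1, 0.2), "a"),
         ((1.0, 1.0), "b"), ((0.9, 1.1), "b")]
print(knn_predict(train, (0.2, 0.1)))  # -> a
```

No training phase at all: the "model" is just the stored data plus a distance function.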
by djkust on 9/22/16, 4:39 PM
This is actually part 3 in a series. Developers who are still getting oriented around machine learning might enjoy the first two articles, too. Part 1 shows how the machine learning process is fundamentally the same as the scientific thinking process. Part 2 explains why MNIST is a good benchmark task. Future parts will show how to extend the simple model into the more sophisticated stuff we see in research papers.
We intend to continue for as long as there are useful things to show & tell. If there are particular topics you'd like to see sooner rather than later, please leave a note!
by yodsanklai on 9/22/16, 6:56 PM
Are there a lot of problems that fall between the very hard and the very easy ones, and for which enough data can be found?
by throwaway13048u on 9/22/16, 5:03 PM
I'm working on understanding CNNs, and I can't seem to find the answer (read: don't know what terms to look for) that explains how you train the convolutional weights.
For instance, a blur might be
[[ 0 0.125 0 ] , [ 0.125 0.5 0.125 ] , [0 0.125 0]]
But in practice, I assume you would want to have these actual weights themselves trained, no?
But, in CNNs, the same convolutional step is executed on the entire input to the convolutional step, you just move around where you take your "inputs".
How do you do the training, then? Do you just do backprop on each weight of the kernel from its output, with a really small learning rate, then repeat after shifting over to the next output?
Sorry if this seems like a poorly thought out question, I'm definitely not phrasing this perfectly.
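(For reference, the answer hinges on weight sharing: because the same kernel is applied at every position, its gradient is the sum of the contributions from every output position it touched, and you update once with that sum rather than per-position with a tiny learning rate. A minimal NumPy sketch, with function names made up:)

```python
import numpy as np

def conv2d_valid(x, k):
    """'Valid' 2-D convolution (cross-correlation, as in most CNN libraries)."""
    kh, kw = k.shape
    oh, ow = x.shape[0] - kh + 1, x.shape[1] - kw + 1
    out = np.zeros((oh, ow))
    for i in range(oh):
        for j in range(ow):
            out[i, j] = np.sum(x[i:i+kh, j:j+kw] * k)
    return out

def conv2d_kernel_grad(x, k, grad_out):
    """Gradient of the loss w.r.t. the shared kernel: the same weights are
    reused at every output position, so their gradients are the SUM of the
    per-position contributions (each one is grad_out at that position times
    the input patch the kernel saw there)."""
    kh, kw = k.shape
    grad_k = np.zeros_like(k)
    for i in range(grad_out.shape[0]):
        for j in range(grad_out.shape[1]):
            grad_k += grad_out[i, j] * x[i:i+kh, j:j+kw]
    return grad_k
```

So the blur-like kernel above is indeed trained, just like any other weight; the accumulation over positions is the only twist weight sharing adds to plain backprop.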
by aantix on 9/22/16, 4:56 PM
I would start to get misclassified pages, and it was so difficult to diagnose why these misclassifications were occurring. Bad examples? Bad counterexamples? Wrong algorithm for the job? Ugh.
I ended up writing a set of rules. It wasn't fancy but at the end of the day, I understood the exact criteria for each classification and they were easily adjustable.
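(A minimal sketch of that kind of rule set; the labels and predicates here are invented for illustration:)

```python
# Hand-written classification rules: first match wins.
# Unlike learned weights, each rule is explicit and easy to adjust.
RULES = [
    ("product", lambda page: "add to cart" in page),
    ("article", lambda page: page.count("<p>") > 10),
]

def classify(page, default="other"):
    """Return the label of the first rule whose predicate matches the page."""
    for label, predicate in RULES:
        if predicate(page):
            return label
    return default
```

When a page is misclassified, you can point at the exact rule that fired (or failed to), which is the diagnosability the learned classifier lacked.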