from Hacker News

Machine Learning: Models with Learned Parameters

by madisonmay on 9/22/16, 2:56 PM with 31 comments

  • by antirez on 9/22/16, 4:28 PM

    I strongly advise everybody with one free day (and nothing much better to do) to implement a basic fully connected feedforward neural network (the classical stuff, basically) and try it against the MNIST handwritten digits database (a minimal sketch follows this comment). It's a relatively simple project that teaches you the basics, and once you have them, the more complex stuff becomes much more approachable. To me it's the parallel of implementing a basic interpreter in order to understand how higher-level languages and compilers work. You don't normally need to write compilers, just as you don't need to write your own AI stack, but it's the only path to fully understanding the basics.

    You'll see it learn to recognize the digits. You can print the digits it misses, and you'll see that sometimes they're genuinely hard even for humans, while other times you'll see why it can't recognize a digit that's trivial for you (for instance, it's an 8 but the lower circle is very small).

    Also, backpropagation is an algorithm that's simple to develop an intuition about. Even if you forget the details N years later, the idea is something you'll never forget.
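    For the curious, here is a minimal sketch of the kind of network described above: one hidden layer, sigmoid activations, trained with plain backpropagation and gradient descent. The shapes assume MNIST-style inputs (784 pixels, 10 classes); the layer size, learning rate, and omitted data loading are illustrative assumptions, not anything prescribed in the thread.

      import numpy as np

      rng = np.random.default_rng(0)

      def sigmoid(z):
          return 1.0 / (1.0 + np.exp(-z))

      # Weights for input->hidden and hidden->output layers (assumed sizes).
      W1 = rng.normal(0, 0.1, (784, 64)); b1 = np.zeros(64)
      W2 = rng.normal(0, 0.1, (64, 10));  b2 = np.zeros(10)

      def train_step(x, y, lr=0.1):
          """One backprop step on a single example.
          x: pixels, shape (784,); y: one-hot target, shape (10,)."""
          global W1, b1, W2, b2
          # Forward pass.
          h = sigmoid(x @ W1 + b1)             # hidden activations
          out = sigmoid(h @ W2 + b2)           # output activations
          # Backward pass: squared-error loss, chain rule layer by layer.
          d_out = (out - y) * out * (1 - out)  # gradient at output pre-activation
          d_h = (d_out @ W2.T) * h * (1 - h)   # gradient at hidden pre-activation
          # Gradient-descent updates.
          W2 -= lr * np.outer(h, d_out); b2 -= lr * d_out
          W1 -= lr * np.outer(x, d_h);   b1 -= lr * d_h
          return out

    Loop that over the MNIST training set a few times and accuracy on held-out digits climbs quickly, which is exactly the "see it learning" effect described above.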

  • by nkozyra on 9/22/16, 3:46 PM

    This is well written, and I applaud any step toward demystifying the sometimes scary-sounding concepts that drive many ML algorithms.

    Knowing you can pretty quickly whip up a KNN or ANN in a few hundred lines of code or fewer (see the sketch after this comment) is one of the more eye-opening parts of delving in. For the most part, supervised learning follows a pretty reliable path, and each algorithm obviously varies in approach, but I know I originally thought "deep learning? ugh, sounds abstract and complicated" before realizing it was all just a deep ANN.

    Long story short: dig in. It's unlikely to be as complex as you think, and if you've ever taken an algorithms class (or worked as a professional software dev), none of it should be too daunting. Your only problem will be keeping up the charade if the people around you think ML/AI is some sort of magic.
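    As a concrete instance of the parent's point, a working k-NN classifier fits in about a dozen lines. The data and names below are made up for illustration, not taken from the article:

      import numpy as np
      from collections import Counter

      def knn_predict(X_train, y_train, x, k=3):
          """Majority vote among the k training points nearest to x."""
          dists = np.linalg.norm(X_train - x, axis=1)  # Euclidean distances
          nearest = np.argsort(dists)[:k]              # indices of the k closest
          return Counter(y_train[nearest]).most_common(1)[0][0]

      # Toy usage with two obvious clusters.
      X_train = np.array([[0.0, 0.0], [0.1, 0.2], [5.0, 5.0], [5.2, 4.8]])
      y_train = np.array([0, 0, 1, 1])
      print(knn_predict(X_train, y_train, np.array([4.9, 5.1])))  # -> 1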

  • by djkust on 9/22/16, 4:39 PM

    Hi folks, authors here in case you have questions.

    This is actually part 3 in a series. Developers who are still getting oriented around machine learning might enjoy the first two articles, too. Part 1 shows how the machine learning process is fundamentally the same as the scientific thinking process. Part 2 explains why MNIST is a good benchmark task. Future parts will show how to extend the simple model into the more sophisticated stuff we see in research papers.

    We intend to continue as long as there are useful things to show & tell. If there are particular topics you'd like to see sooner rather than later, please leave a note!

  • by yodsanklai on 9/22/16, 6:56 PM

    I took Andrew Ng's ML class on Coursera. It was certainly interesting to see how ML works, but I'm not sure what to do with it. In particular, I'm still unsure how to tell beforehand whether a problem is too complex to be considered, how much data it will require, and what computing power is needed.

    Are there a lot of problems that fall between the very hard and the very easy ones, and for which enough data can be found?

  • by throwaway13048u on 9/22/16, 5:03 PM

    So this may be as good a place as any -- I've got a decent math background and am teaching myself ML while waiting for work to come in.

    I'm working on understanding CNNs, and I can't seem to find the answer (read: I don't know what terms to look for) to how the convolutional weights are trained.

    For instance, a blur might be

    [[ 0, 0.125, 0 ], [ 0.125, 0.5, 0.125 ], [ 0, 0.125, 0 ]]

    But in practice, I assume you would want these weights themselves to be trained, no?

    But in CNNs, the same kernel is applied across the entire input to the convolutional layer; you just move around where you take your "inputs".

    How do you do the training, then? Do you just do backprop on each variable of the convolution from its output, with a really small learning rate, then repeat after shifting over to the next output? (A sketch follows this comment.)

    Sorry if this seems like a poorly thought-out question; I'm definitely not phrasing it perfectly.
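    The standard answer to the question above: because the same kernel is applied at every position, each kernel weight's gradient is the sum of its gradients over all positions, computed in one backward pass. There is no per-position update or special learning rate. A sketch under those assumptions (the input shape and the loss are illustrative):

      import numpy as np

      def conv2d(x, k):
          """'Valid' 2-D convolution (really cross-correlation, as in most CNN libraries)."""
          kh, kw = k.shape
          out = np.zeros((x.shape[0] - kh + 1, x.shape[1] - kw + 1))
          for i in range(out.shape[0]):
              for j in range(out.shape[1]):
                  out[i, j] = np.sum(x[i:i+kh, j:j+kw] * k)
          return out

      def kernel_grad(x, grad_out, k_shape):
          """dLoss/dKernel: each shared weight touches every output position,
          so its gradient accumulates over all of them."""
          kh, kw = k_shape
          grad_k = np.zeros(k_shape)
          for i in range(grad_out.shape[0]):
              for j in range(grad_out.shape[1]):
                  grad_k += grad_out[i, j] * x[i:i+kh, j:j+kw]
          return grad_k

      # One gradient-descent step, starting from the blur kernel above.
      x = np.random.rand(8, 8)
      k = np.array([[0, 0.125, 0], [0.125, 0.5, 0.125], [0, 0.125, 0]])
      grad_out = conv2d(x, k) - np.zeros((6, 6))  # gradient of 0.5*||out - target||^2
      k -= 0.01 * kernel_grad(x, grad_out, k.shape)

    In a real network the loss gradient arrives from the layers above rather than from a fixed target, but the accumulation over positions is the same.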

  • by aantix on 9/22/16, 4:56 PM

    There have been a couple of times when I needed to classify a large set of web pages and used a Bayes classifier.

    I would start to get misclassified pages, and it was very difficult to diagnose why the misclassifications were occurring. Bad examples? Bad counterexamples? Wrong algorithm for the job? Ugh.

    I ended up writing a set of rules instead. It wasn't fancy, but at the end of the day I understood the exact criteria for each classification, and they were easily adjustable.