from Hacker News

Neural Complete – A neural network that autocompletes neural network code

by kootenpv on 4/15/17, 7:54 AM with 16 comments

  • by minimaxir on 4/15/17, 2:30 PM

    A note: looking at the code, this isn't a seq2seq Keras model. The core model code is a fork of the base Keras text generation example (https://github.com/fchollet/keras/blob/master/examples/lstm_...), which works like a char-rnn: the previous 80 characters predict the 81st, and each generated character is fed back into the model. In the server implementation, prediction continues until the model hits a break character, and the generated characters are then served to the user.
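
    A minimal sketch of that generation loop, assuming the usual pieces from the lstm_text_generation example (model, char_indices, indices_char); BREAK_CHARS is a hypothetical stand-in for the server's break characters:

      import numpy as np

      maxlen = 80          # window: the previous 80 characters predict the 81st
      BREAK_CHARS = {'\n'} # hypothetical: stop once a break character is emitted

      def complete(model, seed, char_indices, indices_char):
          generated = ''
          window = seed[-maxlen:]
          while True:
              # one-hot encode the current 80-character window
              x = np.zeros((1, maxlen, len(char_indices)))
              for t, ch in enumerate(window):
                  x[0, t, char_indices[ch]] = 1.0
              preds = model.predict(x, verbose=0)[0]
              next_char = indices_char[int(np.argmax(preds))]
              if next_char in BREAK_CHARS:
                  return generated             # serve the completion to the user
              generated += next_char
              window = window[1:] + next_char  # feed the prediction back in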

    In a seq2seq implementation, you predict all output characters simultaneously (i.e., the model input is all characters in a line and the model output is all characters in the next line), which in Keras involves a TimeDistributed(Dense()) layer (see the Keras seq2seq example: https://github.com/fchollet/keras/blob/master/examples/addit...). This also requires more sequence ETL and a lot more training time.
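
    A sketch of that seq2seq shape, loosely following the Keras addition example (INPUT_LEN, OUTPUT_LEN, and n_chars are placeholder values):

      from keras.models import Sequential
      from keras.layers import Activation, Dense, LSTM, RepeatVector, TimeDistributed

      INPUT_LEN, OUTPUT_LEN, n_chars = 80, 80, 100  # placeholder dimensions

      model = Sequential()
      model.add(LSTM(128, input_shape=(INPUT_LEN, n_chars)))  # encode the input line
      model.add(RepeatVector(OUTPUT_LEN))          # one copy of the encoding per output step
      model.add(LSTM(128, return_sequences=True))  # decode
      model.add(TimeDistributed(Dense(n_chars)))   # per-timestep character scores
      model.add(Activation('softmax'))             # every output character predicted at once
      model.compile(loss='categorical_crossentropy', optimizer='adam')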

  • by asrp on 4/15/17, 12:29 PM

    This reminds me a bit of auto-sklearn[1], which automatically selects the machine learning algorithm to use and its parameters (so it isn't quite working at the code level like this one).

    [1] https://github.com/automl/auto-sklearn
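
    For reference, a minimal auto-sklearn run looks roughly like this (the time budget and the X_train/y_train/X_test arrays are placeholders):

      import autosklearn.classification

      # auto-sklearn searches over algorithms and their hyperparameters itself
      automl = autosklearn.classification.AutoSklearnClassifier(
          time_left_for_this_task=120)  # placeholder budget in seconds
      automl.fit(X_train, y_train)
      predictions = automl.predict(X_test)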

  • by potomushto on 4/15/17, 11:30 AM

    > It would be very fun to experiment with a future model in which it will use the python AST and take variable naming out of the equation.

    So what if we used the AST as a source for the code structure? There is also other metadata to draw on, such as the filename (e.g. reducer.js), the path (./components), project dependencies (package.json for JavaScript projects), and the number of GitHub stars and forks.
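
    As a rough illustration of the AST idea, variable naming could be normalized with Python's ast module before training; NormalizeNames below is a hypothetical helper:

      import ast

      class NormalizeNames(ast.NodeTransformer):
          # replace every variable name with a positional placeholder
          def __init__(self):
              self.mapping = {}

          def visit_Name(self, node):
              if node.id not in self.mapping:
                  self.mapping[node.id] = 'var%d' % len(self.mapping)
              node.id = self.mapping[node.id]
              return node

      tree = NormalizeNames().visit(ast.parse("total = price * quantity"))
      print(ast.dump(tree))  # total, price, quantity become var0, var1, var2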

  • by teomoura on 4/15/17, 12:40 PM

    and so it begins

  • by z3t4 on 4/15/17, 11:46 AM

    I imagine a future where the programmer is just there in case the AI makes a mistake.

  • by m-j-fox on 4/15/17, 7:33 PM

    I will train on Tim Pope's dataset and see if the computer comes up with a useful plugin.