from Hacker News

My Most Important Project Was a Bytecode Interpreter

by 10098 on 9/22/16, 1:13 AM with 149 comments

  • by robertelder on 9/22/16, 2:46 AM

    One of the moments where I really started to feel like I was starting to 'see the matrix' was when I was working on a regex engine to try to make my compiler faster (it didn't, but that's another story). The asymptotically fast way to approach regex processing actually involves writing a parser to process the regex, so in order to write a fast compiler, you need to write another fast compiler to process the regexes that will process the actual programs that you write. But, if your regexes get complex, you should really write a parser to parse the regexes that will parse the actual program. This is where you realize that it's parsers all the way down.

    When you think more about regexes this way, you realize that a regex is just a tiny description of a virtual machine (or emulator) that can process the simplest of instructions (check for 'a', accept '0-9', etc.). Each step in the regex is just a piece of bytecode that can execute, and if you turn a regex on its side you can visualize it as just a simple assembly program.
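    That "regex turned on its side is a tiny assembly program" idea can be sketched in a few lines. This is a hypothetical backtracking bytecode engine (made-up opcode names, anchored full-match semantics), not any particular real implementation:

```python
# A minimal regex-as-bytecode sketch: each instruction is one step
# of a tiny virtual machine. Opcode names are made up.
CHAR, SPLIT, JMP, MATCH = range(4)

# Bytecode for the regex a*b: try 'a' in a loop, then require 'b'.
program = [
    (SPLIT, 1, 3),   # 0: either enter the 'a' loop or skip ahead
    (CHAR, 'a'),     # 1: consume one 'a'
    (JMP, 0),        # 2: loop back for more 'a's
    (CHAR, 'b'),     # 3: consume the final 'b'
    (MATCH,),        # 4: success
]

def run(prog, text, pc=0, i=0):
    """Backtracking interpreter: True if the whole text matches."""
    while True:
        op = prog[pc]
        if op[0] == CHAR:
            if i < len(text) and text[i] == op[1]:
                pc, i = pc + 1, i + 1
            else:
                return False
        elif op[0] == JMP:
            pc = op[1]
        elif op[0] == SPLIT:
            if run(prog, text, op[1], i):   # try the first branch
                return True
            pc = op[2]                       # else fall back to the second
        elif op[0] == MATCH:
            return i == len(text)

print(run(program, "aaab"))  # → True
print(run(program, "aac"))   # → False
```

    A production engine would use a Thompson/Pike-style construction to avoid backtracking blowup, but the shape is the same: the regex compiles to instructions, and a tiny machine executes them.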

  • by sillysaurus3 on 9/22/16, 2:14 AM

    Also: a software rasterizer.

    Most people refuse to write one because it's so easy not to. Why bother?

    It will make you a better coder for the rest of your life.

    Let's make a list of "power projects" like this. A bytecode interpreter, a software rasterizer... What else?

  • by tominous on 9/22/16, 2:32 AM

    I love the author's meta-idea of refusing to accept that unfamiliar things are black boxes full of magic that can't be touched.

    A great example of this mindset is the guy who bought a mainframe. [1]

    Refuse to be placed in a silo. Work your way up and down the stack and you'll be much better placed to solve problems and learn from the patterns that repeat at all levels.

    [1] https://news.ycombinator.com/item?id=11376711

  • by wahern on 9/22/16, 3:20 AM

    Two approaches are severely underused in the software world:

    1) Domain-specific languages (DSLs)

    2) Virtual machines (or just explicit state machines more generally)

    What I mean is, a lot of problems could be solved cleanly, elegantly, more safely, and more powerfully by using one (or both) of the above. The problem is that when people think DSL or VM, they think big (Scheme or JVM) instead of thinking small (printf). A DSL or VM doesn't need to be complex; it could be incredibly simple but still be immensely more powerful than coding a solution directly in an existing language using its constructs and APIs.

    Case in point: the BSD hexdump(1) utility. POSIX defines the od(1) utility for formatting binary data as text, and it takes a long list of complex command-line arguments. The hexdump utility, by contrast, uses a simple DSL to specify how to format output. hexdump can implement almost every conceivable output format of od and then some using its DSL. The DSL is basically printf format specifiers combined with looping declarations.

    I got bored one day and decided to implement hexdump as a library (i.e. "one hexdump to rule them all"), with a thin command-line wrapper that emulates the BSD utility version. Unlike BSD hexdump(1) or POSIX od(1), which implement everything in C in the typical manner, I decided to translate the hexdump DSL into bytecode for a simple virtual machine.

      http://25thandclement.com/~william/projects/hexdump.c.html
    
    The end result was that my implementation was about the same size as either of those, but 1) could be built as a shared library, command-line utility, or Lua module, 2) is more performant (formats almost 30% faster for the common outputs, thanks to a couple of obvious, easy, single-line optimizations the approach opened up) than either of the others, and 3) is arguably easier to read and hack on.
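    A toy illustration of that DSL-to-bytecode idea: a made-up mini-format (a repeat count plus a printf-style specifier, loosely inspired by hexdump(1) but not its actual grammar) compiled into a flat instruction list and executed by a trivial machine:

```python
# Hypothetical mini-DSL: 'N/spec' means "apply the printf-style
# specifier to each of the next N bytes". Opcodes are made up.
READ, FMT, LIT = range(3)

def compile_fmt(dsl):
    """Compile e.g. '4/%02x ' into a flat bytecode list."""
    count, spec = dsl.split('/', 1)
    # READ fetches the next input byte; FMT formats the byte just read.
    return [(READ,), (FMT, spec)] * int(count) + [(LIT, '\n')]

def execute(code, data):
    out, it = [], iter(data)
    for op in code:
        if op[0] == READ:
            byte = next(it)            # READ always precedes FMT
        elif op[0] == FMT:
            out.append(op[1] % byte)
        elif op[0] == LIT:
            out.append(op[1])
    return ''.join(out)

code = compile_fmt('4/%02x ')
print(execute(code, b'\xde\xad\xbe\xef'))  # prints "de ad be ef "
```

    The payoff of the real thing is the same as here: the format language is data, so one interpreter covers every output format, and optimizing the inner loop speeds up all of them at once.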

    Granted, my little hexdump utility doesn't have much value. I still tend to rewrite a simple dumper in a couple dozen lines of code for different projects (I'm big on avoiding dependencies), and not many other people use it. But I really liked the experience and the end result. I've used simple DSLs, VMs, and especially explicit state machines many times before and after, but this one was one of the largest and most satisfying.

    The only more complex VM I've written was for an asynchronous I/O SPF C library, but that one is more difficult to explain and justify, though I will if pressed.

  • by gopalv on 9/22/16, 2:17 AM

    The project that affected my thinking the most was a bytecode interpreter[1].

    I've had use for that knowledge nearly fifteen years later - most of the interesting lessons about building one have been about the inner loop.

    The way you build a good interpreter is upside-down compared to most of tech - the simpler system often runs faster than anything more complicated.

    Because of working on that, then writing my final paper about the JVM, contributing to Perl6/Parrot and then moving onto working on the PHP bytecode with APC, my career went down a particular funnel (still with the JVM now, but a logical level above it).

    Building interpreters makes you an under-techtitect, if that's a word. It creates systems from the inner loop outwards rather than leaving the innards of the system for someone else to build - it produces a sort of double-vision between the details and the actual goals of the user.

    [1] - "Design of the Portable.net interpreter"

  • by _RPM on 9/22/16, 3:57 AM

    I saw the matrix after I first implemented a virtual machine. I recommend everyone try it, because it teaches you a lot about how code is executed and transformed from syntax into the actual assembly/bytecode. A stack-based virtual machine is so simple, and yet it takes a lot of thinking to understand how one works (or maybe I'm just not that smart).

    It's interesting that he implemented function calls via a jump. In my VM a function is just mapped to a name (variable), so functions are first class. When the VM gets to a CALL instruction, it loads the bytecode from the hash table (via a lookup of the name).

    Since this is a procedural language where statements can be executed outside of a function, implementing the functions as a jump would be difficult because there would need to be multiple jumps between the function definition and statements that aren't in a function.
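    The functions-in-a-hash-table scheme described above might look roughly like this (hypothetical opcodes and a recursive dispatch, not the commenter's actual VM):

```python
# Functions live in a table keyed by name; CALL looks the bytecode up
# at run time, so a function body is just a value like any other.
PUSH, ADD, CALL, RET = range(4)

functions = {
    'add3': [(PUSH, 3), (ADD,), (RET,)],   # adds 3 to top of stack
}

def run(code, stack):
    for op in code:
        if op[0] == PUSH:
            stack.append(op[1])
        elif op[0] == ADD:
            b, a = stack.pop(), stack.pop()
            stack.append(a + b)
        elif op[0] == CALL:
            run(functions[op[1]], stack)    # hash lookup, then execute
        elif op[0] == RET:
            return

stack = []
run([(PUSH, 4), (CALL, 'add3')], stack)
print(stack)  # → [7]
```

    Because the callee is found by name at CALL time, redefining a function is just replacing a table entry - which is exactly what makes functions first-class in such a design.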

    I really wish my CS program had a compilers class, but unfortunately it doesn't, so I had to learn everything on my own.

  • by briansteffens on 9/22/16, 2:18 AM

    Nice post! I really enjoy playing around with things like this. It's amazing how little is needed to make a language/interpreter capable of doing virtually anything, even if not elegantly or safely. As long as you can perform calculations, jump around, and implement some kind of stack your language can do just about anything.
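    That minimal core - arithmetic, a stack, a conditional jump - really is enough for loops and therefore for almost anything. A sketch with made-up opcodes, counting to five:

```python
# A tiny stack machine: the counter lives on top of the stack.
PUSH, ADD, DUP, LT, JMP_IF, HALT = range(6)

program = [
    (PUSH, 0),       # 0: i = 0
    (PUSH, 1),       # 1: loop body starts here
    (ADD,),          # 2: i = i + 1
    (DUP,),          # 3: copy i for the comparison
    (PUSH, 5),       # 4: limit
    (LT,),           # 5: i < 5 ?
    (JMP_IF, 1),     # 6: if so, loop again
    (HALT,),         # 7: done
]

def run(prog):
    stack, pc = [], 0
    while prog[pc][0] != HALT:
        op = prog[pc]
        pc += 1
        if op[0] == PUSH:
            stack.append(op[1])
        elif op[0] == ADD:
            b, a = stack.pop(), stack.pop()
            stack.append(a + b)
        elif op[0] == DUP:
            stack.append(stack[-1])
        elif op[0] == LT:
            b, a = stack.pop(), stack.pop()
            stack.append(a < b)
        elif op[0] == JMP_IF and stack.pop():
            pc = op[1]
    return stack

print(run(program))  # → [5]
```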

    I recently threw something together sort of like this, just for fun (I like your interpreter's name better though): https://github.com/briansteffens/bemu

    It's crazy how much these little projects can clarify your understanding of concepts that seem more complicated or magical than they really are.

  • by anaccountwow on 9/22/16, 4:23 AM

    This is a required homework assignment for a freshman class at CMU. https://www.cs.cmu.edu/~fp/courses/15122-s11/lectures/23-c0v...

    Granted, it has some parts already written in the interest of time...

  • by oops on 9/22/16, 12:32 PM

    Nice read! Reminds me of nand2tetris that was posted not too long ago https://news.ycombinator.com/item?id=12333508

    (You basically implement every layer starting with the CPU and finishing with a working Tetris game)

  • by douche on 9/22/16, 3:07 PM

    This reminds me a little bit of my computer architecture class. We started at logic gates in a simulator[1], and worked our way up from there to flip-flops and adders, memory chips, a simple ALU, and eventually a whole 8-bit CPU in the simulator. I want to think that we were even writing assembly for it, loading the programs into the simulated memory, and executing them. It was a great way to get a sense of how everything works, and I think it's when C-style pointers really clicked for me.

    [1] this one, IIRC https://sourceforge.net/projects/circuit/

  • by foobarge on 9/22/16, 7:25 AM

    I've done something similar 21 years ago: a C interpreter targeting a virtual machine. The runtime had a dynamic equivalent of libffi to call into native code and use existing native libraries. I added extensions to run code blocks in threads, so the dining philosophers solution came out very elegant. Back in the day, not having libffi meant generating assembly on the fly for Sparc, MIPS, PA-RISC, i386. Fun times. That C interpreter was used to extend a CAD package.

  • by reidrac on 9/22/16, 7:47 AM

    I wrote a VM for the 6502 for fun and it was one of the most interesting and satisfying projects I've ever made in my free time.

    It is very close to a bytecode interpreter, except that it comes with a specification that is simply the opcode list for the MOS 6502 (plus a few details you need to take into account when implementing that CPU).

    Besides, there are cross-compilers that allow you to generate 6502 code from C for your specific VM (see cc65).
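    The fetch/decode/execute shape of such a CPU interpreter, shrunk down to two real 6502 opcodes (LDA immediate = 0xA9, ADC immediate = 0x69; flags, carry, addressing modes and the rest of the instruction set are omitted for brevity):

```python
# A 6502-flavored fetch/decode/execute loop, heavily simplified.
def run(memory):
    a, pc = 0, 0                      # accumulator and program counter
    while True:
        opcode = memory[pc]; pc += 1  # fetch
        if opcode == 0xA9:            # LDA #imm: load immediate into A
            a = memory[pc]; pc += 1
        elif opcode == 0x69:          # ADC #imm: add immediate to A (no carry)
            a = (a + memory[pc]) & 0xFF; pc += 1
        elif opcode == 0x00:          # BRK: stop (simplified)
            return a

program = bytes([0xA9, 0x30, 0x69, 0x12, 0x00])  # LDA #$30; ADC #$12; BRK
print(hex(run(program)))  # → 0x42
```

    A real implementation is the same loop with ~150 more branches (or a dispatch table) and flag bookkeeping, which is why the published opcode list works so well as a spec.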

  • by memsom on 9/22/16, 10:44 AM

    I did this in C#. It was a lunchtime project at work a couple of years ago. It was fun. I still want to do a V2 and remove all of the shortcuts I put in because I didn't want to write code for the stack and stuff like that. At the end of the day, my solution was spookily similar to this - the 32-bit instructions - well, yeah, mine were the same! Just simpler. I did have a few general-purpose registers (V1, V2 and V3, I think) and I did have routines to handle bytes, words and suchlike. So stuff like this (a random example pulled from the source):

      ORG START
      START: ST_B 10
      LOOP:  ST_B 10
             ADD_B         ;;value will go back on stack
             LD_B V1
             SM_B V1       ;;value we use next loop
             SM_B V1       ;;value we compare
             SM_B V1       ;;value we echo to console
             TRP 21        ;;writes to the console
             ST_S '',13,10,$
             TRP 21        ;;writes to the console
             CMP_B 50      ;;compares stack to the constant
             JNE LOOP
             ST_S 'The End',13,10,$
             TRP 21        ;;writes to the console
      END

  • by pka on 9/22/16, 11:58 AM

    I think a lot of the complexity of writing a compiler stems from the use of inappropriate tools. For example, I would rather kill myself than write a lexer in C (without yacc/bison), but using parser combinators it's a rather trivial task.

    Similarly, annotating, transforming, folding, pattern matching on, CPS transforming etc. the produced AST is pretty trivial in a language that supports these constructs. And again, a nightmare in C.

    That leaves codegen, but using the right abstractions it turns into a very manageable task as well.

    Here's a compiler written in Haskell for LLVM [0].

    [0] http://www.stephendiehl.com/llvm

  • by philippeback on 9/22/16, 6:48 AM

    Parsers made easy and pretty much interactive:

    http://www.lukas-renggli.ch/blog/petitparser-1

    http://www.themoosebook.org/book/internals/petit-parser

    This includes the dynamic generation of blocks and arrows-style things...

  • by philippeback on 9/22/16, 6:46 AM

    Soulmate of yours here: https://clementbera.wordpress.com

    Lots of optimizations going on for OpenVM.

    https://github.com/OpenSmalltalk/opensmalltalk-vm

    Interesting bit: VM is written in Slang and transformed into C then compiled.

    So you can livecode your VM. In the VM simulator.

  • by elcct on 9/22/16, 9:17 AM

    I did something similar in the distant past: I wrote a subset-of-C compiler (functions, standard types, pointers) targeting an imaginary assembler, plus a bytecode interpreter for it. It was awesome fun, but I also got so into it that my then-girlfriend started to question my commitment to the relationship. So be careful, this is a really interesting thing to do :)

  • by rosstex on 9/22/16, 2:52 AM

    In this same vein, I recommend coding an emulator! It can be an excellent experience.

    http://www.multigesture.net/articles/how-to-write-an-emulato...

  • by curtfoo on 9/22/16, 2:19 AM

    Yes I wrote a parser/compiler and interpreter for a custom domain specific language and it had a similar effect on my career. Lots of fun!

    Okay I guess technically I used a parser generator that I then modified to build an AST and convert it into assembly-like code that fed the interpreter.

  • by reacweb on 9/22/16, 7:40 AM

    Bill Gates also started with an interpreter (a BASIC interpreter). Many parts of early Windows applications were developed in p-code, and Visual Basic is an important part of Microsoft's success.

  • by loeg on 9/22/16, 3:42 AM

    I like implementing emulators, because the toolchain and architecture specification are all there already. You get to implement what is basically a little embedded CPU.

  • by dpratt on 9/22/16, 2:43 AM

    I'd add a driver for a non trivial binary protocol - I ended up implementing a JVM driver for Cassandra a few years ago, and it was a blast.