Content tagged programming

Clones FAQ

posted on 2022-06-21 09:30:00

An Update on Clones

Even though I've been finding less time to work on it, I've enjoyed the process of hacking on clones and things are going pretty well. Input handling and background rendering are finished and with any luck it will be another weekend until sprites are working. I do expect to need some tweaks for fine scrolling to be implemented correctly so it may be another week or two still until more advanced NROM titles like Super Mario Bros run. It should be a hop, skip, and a jump from there to MMC1 titles like Mega Man 2 though. 🙏


Commonly asked questions

There are three questions that have come up pretty regularly, either on stream chat or elsewhere, and I'd like to jot down some thoughts about them while they're fresh in my mind.

Why are you building clones?

Because it brings me joy. I feel compelled to work on it as a vehicle to try to create something that I find aesthetically appealing and that indulges parts of my curiosity.

Why are you building clones in lisp?

Because it's my favorite language to work in. I don't think the feature set of Common Lisp is critical. Macros, CLOS, and conditions and restarts are all great but the motivating factor for me remains the interactive development workflow with Emacs and SLIME/Sly. When I was young and not yet a programmer, I imagined that at some point working on software or digging into the internals would be more like having a conversation than working out a math problem. Our tooling remains far from those (naive) visions, but Common Lisp is closer to what I imagined and so it feels more comfortable to me. Mikel Evins has written some great posts about this.

What are your goals in building clones?

This is the toughest part to answer. Initially, I'd just like to be able to have a functioning emulator and play childhood games that I loved, like Mega Man 2, tolerably well. Once that piece is completed though, I'm very interested in trying to add tools for debugging and reverse engineering games.

This isn't so much about the games themselves as it is about having better tools for investigating software without access to source code. An emulator is a great place to experiment with approaches to that. Now I admit I have not spent a lot of time doing reverse engineering or security work and am not familiar with the state of the art around static analysis or disassembly tools like IDA Pro.

I'm limited in my ability to express what I imagine. So since I can't tell you exactly what it should be, here's a sketch:

I would love it if I could build a graph of the control flow of the game as I play it. I would love it if I could later annotate the graph, name segments of assembly, and receive hints around what specific parts might be interacting with graphics data, or the APU, or handling player inputs.
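To make that sketch slightly more concrete, here's a minimal illustration of the kind of thing I mean. Everything here is invented for this post (none of these names exist in clones); it's just the smallest possible shape of the idea:

```lisp
;; Hypothetical sketch: accumulate a control-flow graph while the
;; emulator runs by recording an edge whenever the program counter
;; moves. These names are invented; they only illustrate the idea.
(defvar *cfg* (make-hash-table)
  "Maps an address to the list of addresses observed to follow it.")

(defun record-edge (from to)
  "Note that execution flowed from address FROM to address TO."
  (pushnew to (gethash from *cfg* '())))

;; Somewhere inside the emulator's step loop one would call:
;; (record-edge previous-pc current-pc)
```

Annotation and naming could then hang off the keys of that table, with hints derived from which addresses touch PPU or APU registers.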

The code is an artifact, the leftover cocoon of the program being written. The interesting pieces are in the constraints of the level design, the physics, the musical score, the artwork. I would like, as much as possible, to have tools for exploring the shape of a process as it lives, exploring the data it operates on, and understanding the constraints of the problem, rather than relying on code to understand one specific approach to solving that problem.

In the abstract, this isn't a solvable problem and I will never have a proof of correctness or confidence in completion. But it's worth striving to see how software in general could leave breadcrumbs behind it, given how much of our ideas and culture are being poured into it and fossilized in amber.

Advent Reflections

posted on 2022-01-23 18:00:00

It's been a busy start to 2022. I'm working as an Engineering Manager for the first time and enjoying it but it's been easy for other things to slip through the cracks. For example, I told myself I would write a post on Advent of Code several weeks ago. So I'm sitting down to write about it now before I forget any more details.


This is the second time I've attempted Advent of Code. The first time was in 2020 and I enjoyed it a lot but ran out of gas around day 10. I was pretty distracted with a Flamingo Squad project I can't recall and probably a bit burned out. Both years I've written my solutions in Common Lisp.

Advent is interesting. I get enjoyment from different things on different days. On some problems, I just enjoy seeing how far I can optimize Common Lisp, or writing solutions in a few different styles when the problem is simple and comparing how they compile and allocate memory. On other problems, I'm more satisfied by trying to see how "pretty" a solution I can write, whether with constraint-solving tools like Screamer or with a pipeline of threading macros and so on.
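As a toy illustration of that kind of comparison (hypothetical code, not from my actual advent repo), SBCL makes it easy to contrast two styles of the same trivial function:

```lisp
;; Two styles of the same trivial problem. TIME reports run time and
;; consing; DISASSEMBLE shows what SBCL compiled each one to.
(defun sum-loop (xs)
  (loop for x in xs sum x))

(defun sum-reduce (xs)
  (reduce #'+ xs))

;; At the REPL:
;;   (time (sum-loop (make-list 1000000 :initial-element 1)))
;;   (disassemble 'sum-loop)
```

Comparing the two disassemblies and allocation reports is exactly the sort of idle curiosity some advent days are good for.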

I enjoy the social aspect of AoC and having a leaderboard with some mutual friends and coworkers. It's nice to chat about something besides production code with other talented programmers. That said, I have to be pretty careful to avoid judging myself. I have to consciously remind myself that my goal isn't to "win the race" and not worry too much if I struggle to solve a problem elegantly.

Advent 2021

There were two things I wanted to do differently this year from last year. The first was simply to go as far as I could and not worry about racing. The second was to experiment with literate programming tools and do a better job documenting my work.

I think the results were a mixed bag. I got through day 11, so I petered out at around the same point. I mostly worried less about the race, but I still cared a lot about finishing each problem on the day it became available, and I definitely got discouraged once or twice when I didn't like my approach. On the other hand, I had a good time and learned a few things, so it was a good investment overall.

Here's the current state of the generated site. You can see I didn't wind up embedding the source for the different functions, so it can't properly be called literate, but the [function] label (or equivalent) next to each exported symbol can be clicked to jump to the source on GitHub.



I have been meaning to play with mgl-pax for a long time. Like ... probably several years? There are blog posts about it as far back as 2014 and it's been on my radar a long time but I just never seemed to make time for it. Advent seemed like a good opportunity to dive in.

I like the idea of an environment where prose and code are intermingled, so I have a natural attraction to literate programming. This shouldn't surprise you if you've been here before. It also seems important to me that such an environment for authoring programs should be rooted in the development tools and support the prose as a secondary feature (like MGL-PAX), rather than rooted in the prose and supporting the code as a secondary feature (like org-babel). That is, tangling one or more files to produce my program seems like the wrong approach to me.

In terms of Advent of Code problems, I'd ideally be able to do the following:

  • Keep the prose and code for a given day/problem in a single file
  • Easily export the entire project with all days to an easily navigated, well designed web page
  • Make it easy to show different versions of the code as well as disassembly or evaluation examples

Editor's Note: The issues I bring up below were resolved before I could ever make a PR or ask the author about them. It seems making PAX more flexible about transcription was in the plans all along.

I think MGL-PAX excels on the first two points and struggles more on the third. It has a feature called transcripts that could plausibly support it, but they're an awkward fit. Transcripts allow including examples that are evaluated when the documentation is generated, but I have two issues with them:

  1. The results are embedded directly in the source code. The argument for this is that they are parsed and checked so the code and the docs can't drift out of date. But I'm interested in embedding details like disassembly, where things like memory addresses change from run to run, and the output could easily dwarf the code for the function itself, so it shouldn't be embedded directly. (PAX now supports skipping the consistency check, allowing me to simply dump the output of a form.)
  2. The second (much more minor) issue is that there isn't a straightforward way to ask for the source definition of a function to be embedded in the doc. MGL-PAX assumes that symbols listed in the doc are an exported part of the public API and links directly to the source on GitHub at the relevant commit. It's a neat bit of hackery, but it makes more sense for medium-to-large projects than for Advent of Code exercises. (Similarly, PAX now has a way of adding "includes" via transcripts.)
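For context, this is roughly what PAX's transcript notation looks like (example adapted from the MGL-PAX documentation). Lines starting with ".." record printed output and "=>" lines record return values; by default, PAX re-evaluates the form when docs are regenerated and checks that the recorded results still match:

```lisp
(values (princ :hello) (list 1 2))
.. HELLO
=> :HELLO
=> (1 2)
```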

My only remaining concern is that the navigation and transcription functionality is tied to SLIME and Swank. Hopefully I'll have an opportunity to try them with Sly soon and can dig in or report issues if I find any.

In the advent project, the site-building and deployment were trivial. Hacking together a way to generate an overview of all solved problems with performance measurements involved some fiddling, but I'm happy with the results.

Evolving my style

For a long time, I've written small projects in Common Lisp with a handful of tests, relying only sparingly on libraries. A little alexandria here, a little cl-ppcre there. There's a place for that, but I'm ready to try to cobble together the utilities and extensions to the language that I'm comfortable with. For now, that's alexandria, serapeum, iterate, mgl-pax, and try. Clingon, trivia, screamer, and fset wait in the wings for the right problem.

There are plenty of talented lispers around. Two people whose code I've enjoyed reading during advent are death and Steve Losh (aka sjl). They feel like opposites to me, if only because sjl has a project dedicated to advent with an assortment of dependencies, macros, and utilities to make advent hacking pleasant. death, by contrast, almost always relies on just the language standard and throws his code unceremoniously in a gist. His solutions are often faster than mine but don't sacrifice elegance.


It all boils down to this: I still like lisp, I miss hacking it, and I should read and write more code. My algorithms chops aren't as good as I'd like and I have to make an effort to not get discouraged by my limitations. All the more reason to keep doing advent, even out of season, and learn a few things.

Research Goals

posted on 2018-08-22 12:59:00

A Recent Dilemma

Lately, I've been thinking about what I want my next job to be and I've been strongly tempted to pursue grad school. I only fell further down the rabbit hole when I read this tweet by Tony Garnock-Jones:

This problem is very close to my heart. Yet I'm ill equipped at present to tackle it. That suggests I should pursue a PhD to get better tools but I have some serious concerns about that (above and beyond selling my house). Let me explain.

The State of CS Research

Thinking about CS academics, it seems that most research aims to support building larger systems by improving software:

  1. Correctness, or
  2. Performance

I feel really weird because neither of those things interests me. Sure, modern software has plenty of bugs, and it could always be faster. But software is eating the world anyway.

My Concern

Software is the fastest-growing store of "how to" knowledge on earth, and it is mostly inaccessible, not only to the general population but to programmers too.

What do I mean? Well...

There's no such thing as code being "readable": not only does readability imply assumptions about the context of the reader, but most programs cannot present a linear narrative at all! As a consequence, code cannot be self-documenting. We should strive to write clear code, but suggesting that code is its own documentation is untenable.

I started working on a Nintendo Emulator in Lisp to explore the idea of readable code and see if I could make the emulator source a good pedagogical example of what a computer does. I failed.

My Goals

Now my primary motivation working on the emulator is finding ways to generate an explanation of binaries without just recovering the disassembly. It's the intent and constraints that matter, the shape of the problem, not necessarily the solution the developers wound up with.

Indeed, even if we could recover disassembly or the original source from binaries (compilation being an inherently lossy process), that would only give us more code to comprehend. That is, more how-to knowledge that isn't readily accessible. Legacy code and software preservation only make this need more urgent.

I should note that I don't think all software needs to be preserved. I just think it's a travesty that we're 60 years into the project of programming and our tools for asking computers about how programs behave are so poor and so specialized.

Computers could be much greater tools for knowledge dissemination than they presently are because they can execute models. We currently use them to disseminate static knowledge (web pages, PDFs, videos) instead of executable knowledge.

I continue to want to find a way to explain code without relying on static methods like documentation, the code itself, or even types and tests. I dream of better tools than source code for communicating and reasoning about the work software systems do.

Putting it in the simplest terms possible:

When I was 8, I desperately wished I could ask my family computer, "Wow! How did you do that?" I still can't today and I spend a lot of time thinking about what a solution should look like.


see also: Peter Seibel and Akkartik, though Luke Gorrie may be a counterpoint

The Right Thing

posted on 2015-05-09 03:35:00

I've been programming for 8 years now. Only half that time professionally. It seems like a long time to me but is a drop in the bucket compared to many in this industry.

If I had to pick two big lessons from the past 8 years to tell someone getting into Software Development for the first time, I would probably say:

  1. Software is never finished.
  2. Software is never "right".

Software is Never Finished

Software is never finished because it exists in a social context. Software is an accessory to daily life and life changes. As our needs evolve, and the context around software shifts, so must the software itself.

Not to mention the fact that the teams working on software and the organizations around it shift. We cannot forget Conway's Law!

I often cannot imagine how the software industry will ever settle with regard to tools and practices, given our tumultuous past 50 years. But even if our techniques and tools settle, I don't imagine our codebases will.

Software is Never "Right"

You might imagine that this section is redundant. Certainly it sounds similar to "Software is Never Finished". But I mean something different. I mean that there isn't a single right way that all software should be written or made.

The internet would lead you to believe otherwise. Programmers are nothing if not evangelists, even zealots, about their tools and techniques, their pet styles and practices, their esoterica.

The most important thing you can do is ignore this. Ignore the hate, ignore the hype. Do not second-guess yourself.

Tools, practices, and domains of knowledge are just that. Even if Software wasn't a moving target (see: Never Finished), we still do not have a single methodology understood to always produce the ideal code. Indeed, there isn't some ideal code we're after. The code is not the most important part, it's just the part we're paid to obsess over.

So don't sweat the folks who insist everyone should follow their "better" way. At the end of the day, good documentation and happy customers are probably more important than most particulars of your codebase.

By all means, don't stop learning and write the best code you can. But chart your own path through our tangled maze of lore. And remind yourself that it's okay to be an average programmer. We've got to find time for families and lives, after all. Speaking of which ...

The Right Thing

It's tough to try to plan for retirement. I'm still too young to think confidently on decade plus time scales.

It's tough to decide how to teach my students and gauge assignments. It's tough to decide what to learn next to become better at programming. It's tough to decide what I want from my career, how to nourish it and have it nourish me. It's tough to decide what's worth doing in general, when our lives are so busy and full.

But there's one decision I make that's really easy, even when it makes my life harder.

It's coming up on six years since Dad died. I cannot measure the amount I've grown since then. I know he'd be proud. But I'm proud and that's even more important. The biggest lesson I learned from John Glenn is probably this:

Loving other people is so obviously the right thing.

I cannot think of a time when I am as confident in my decisions as when I am loving and supporting others.

I'm not proud of my programming skills or code, though cl-6502 is kind of neat. I'm not proud of my job or relationship, though I am thrilled with those aspects of my life.

I'm proud that I handle hard situations with all the grace I can muster. I'm proud that I treat others with respect and care, because I didn't always.

Most importantly, I'm proud that when I see someone struggling, I love them.

We do not get to have many easy decisions in our adult lives. But loving those around us, pleasant or unpleasant, in good times or bad is an enormous undertaking. I certainly fail at it, but I never regret it.

It's unfortunate that with our busy lives, in the sea of our alerts and notifications, it is so difficult to focus on the simple and important things. But I truly believe loving others is the most important thing that I do.

I have failed many times before and in many ways and modalities. I have character flaws. I have shortcomings. But if I have helped those around me through difficult periods in their lives and supported them when they were in need? Well, then it's all probably worth it.

Confronting Impostor Syndrome

posted on 2015-03-10 19:57:00

For the bulk of my professional career as a software developer, I've felt like a fraud. To some extent, I think various aspects of tech hiring practices and tool fads/fetishes in the software industry create or exacerbate this feeling in most of us.

I read Joel Spolsky's Java Schools article way back when, before I was really programming. I looked down on Web Dev for a long time. I played with lisp, played with emulation. ... But I've been a professional web dev. Why am I fighting it and being hard on myself for not being a systems programmer?

I flogged myself for some time, a little voice in my head saying that web development "isn't real programming". I would flog myself for not being good at web development when I hadn't embraced it, and for not knowing systems programming when I hadn't put any time into it.

Sure, there's plenty I still don't know. But "I don't know but I can figure it out" is the right instinct to have. Trying a bunch of stuff and not finishing is vastly better than paralysis. Exploring any ecosystem and building better apps is better than misguided elitism.

I'm a hacker, through and through. I want to learn, to improve, to synthesize new things from my understanding, to grow, share, and change. I looked up to the hackers of lisp lore and the AI labs. But that hero worship has turned negative and is distracting me from just building things.

Part of the reason that little voice is in my head (and I listened to it for so long) is because I thought I didn't have a chance in this industry.

I've been a successful developer for years, but I've often been unable to enjoy my jobs because I was too uncomfortable to embrace them, fearing I'd be found out as a fraud, not a "real programmer".

I've been telling my students that since they understand the major components of web development and have some sense of how they fit together, their real focus for growth should be practice: constantly building bigger things, trying to build each piece more cleanly than the last, and gradually learning how to solve harder and harder problems.

I stopped taking my own advice at some point. It's time to build new things again. Bigger things. Not the prettiest or the best, but real.

Goals for 2015: Technical

posted on 2015-01-01 18:15:00

Technical Goals

I've been in a technical rut for a while. Sure, I learned new things at my job, but outside of it I limited myself by sticking to projects I wasn't motivated about and only working with tools I was already familiar with.

As much as anything, 2015 is going to be about playing with new projects, experimenting with new tools, and focusing on fundamentals again. Seeing as I have 3 big goals here, I might just try to tackle one each semester. :)


To that end, my primary goal will be to learn Ocaml and build something cool with it. I'm leaving that just as open ended as it sounds.

I'm interested in Ocaml for a variety of reasons. Its origin dates from a time before Intel and Microsoft created a 20-year dominant platform. It was envisioned as a systems language, much like Lisp, to compete with C++.

That said, it employs pattern matching and a strong static type system, like Haskell. Unlike Haskell, it has a fairly simple runtime and compiler, built to give clear intuition about code performance. And while Ocaml is strongly functional, it allows imperative state without monad machinations (sorry @James).

There are other reasons but I think this is a good start. I'd be interested in everything from writing an IRC bot, to scripting tasks, to NES reverse engineering tools (i.e. lots of graph manipulation), to OpenMirage toys.

Javascript / Frontend

I've leaned towards backend work in my career where possible. After helping TA Tim's frontend engineering course last semester, I finally want to pick up some frontend skills. I'm not angling to get much better from a design or HTML/CSS perspective, but I have a definite interest in mithril and Clojurescript's om. Elm also seems really cool, but I'd prefer to stick to slightly less alien tech for the time being.

I'm considering porting my Nintendo emulator to Clojurescript and getting it to use canvas but I worry about getting bogged down as I did with the Lisp version. It was fairly painless getting a rough cl-6502 port up in Clojurescript in a few days though.


To be honest, my algorithms chops have never been terribly good. I think one thing that's intimidated me in working on trowel is a lack of confidence in my algorithmic reasoning. So I'll be working through the latest edition of Sedgewick's Algorithms with a particular focus on graphs. With any luck I'll actually make some progress on trowel. Maybe I'll even wire it to an om app with some logic programming or constraint propagation tricks.

Strangeloop Thoughts, 2014 Edition

posted on 2014-09-24 16:05:00

Why do Programming?

After going to Strangeloop for the first time in 2012, I was really fired up about programming. I'd only been out of school a year and was already a full-remote Clojure developer making a great salary. Even better, I had made good progress on my lisp emulation project and received some recognition from hackers I respected for it.

The last two years Strangeloop has been a lot more sobering. I told myself things in 2012 about how fast I'd get better and how good I'd be. Even though my expectations were unreasonable it's taken a long time to not feel bad about my (relative lack of) progress. I've been slow to accept the fact that I don't want to fight my way to being the best in my field.

Strangeloop, 2014

I got in Tuesday night, dropped my bags at the hotel, and immediately ran off to drinks and phenomenal conversation at the Schlafly Tap Room. Many of the best experiences I've had at Strangeloop have been at the tap room. Sure there is excellent food, beer, and technical conversation, but I've always gotten a sense of personal acceptance at gatherings there. Strangeloop is certainly a welcoming crowd.

Wednesday was a blast as well, as any day with a trip to the STL City Museum should be, but more and more I found I cared about personal interactions more than the content of the talks I attended.

By the end of Thursday, I was stressed out. I wasn't even excited about the talks, which isn't to say they weren't good. I hated the idea that I was just an average programmer, that I didn't aspire to more than hacking bog-standard Rails apps as a career, that I'd spent almost $2000 out of my own pocket to come to a conference that someone else might have gotten more out of. I went to bed early that night.

I kept my mindset in check much better on Friday. I reminded myself to be excited about others' discoveries and creations, and not to demand that I learn every tool or technique. The conference wrapped up well, and I particularly enjoyed some of the distributed systems talks, a subfield I still have no experience with. Not that I want to go fight distributed heisenbugs soon or design highly reliable systems; the talks were just entertaining and informative.

Back Home

The main takeaways I've had from Strangeloop have nothing to do with tech and everything to do with me. Strangeloop has, in many ways, been a time for me to reflect these last two years. I'm not sure that's a good way to use the conference but it seems to be what I've done.

The first takeaway that comes to mind is that I need to try and respect myself for just being a decent Rails developer. I'm not great at solving algorithmically tricky problems and my CS background is, frankly, pretty damn weak. But no matter what I shouldn't beat up on myself for being "just a programmer". I need to put in an honest day's work and actually pat myself on the back at the end of it.

The second takeaway is that I need to program for me again. I got into programming mostly because I wanted to know more about how computers work. The emulator and the talks surrounding it, some of my favorite work, are really more an investigation than an artifact. I still want to support coleslaw; it actually has a few satisfied users and I'm one of them. But the only real purpose of the emulator is to sate my curiosity, not to become a real thing or be special or groundbreaking.

There is plenty I'm still digesting and plenty I'm still not sure of, both technically and in terms of what I want for myself, my career, my life. But I'll leave that for another day. Cheers.

Dear Hackers

posted on 2014-07-16 12:33:00

(In which I talk about my feeeeeeels)

Disclaimer: This post comes in the middle of an existential crisis. I'm struggling a lot with programming as a career choice and feeling disconnected from a community of excited hackers. These feelings and opinions are my own and I think it's totally fine if you don't subscribe to them or want to write me off as an F-ing idiot.

Love First

A lot of the ideas in this post have been buzzing around in my head since I saw Jen Myers deliver her keynote at Strangeloop last year.

I've been keeping my thoughts in my head mostly because I'm already an established programmer. A lot of the motive for the talk was to be more welcoming to newcomers and minorities that struggled to be included in our communities. But I think this problem affects all of us, every last one, regardless of gender, race, or class.

The short version is that I think the tone of programming communities, especially online ones, is horrific. It's filled with religious debate over things less important than getting people excited about and interested in computing. For me, whether it's smart people posturing for social status or individuals genuinely trying to enlighten others is irrelevant.

Our first reaction to any comrade, any other person passionate about and interested in building things with computers, any human crazy and masochistic enough to try and expand the capabilities of these absurd machines, should be empathy and love.

This may seem ridiculous at first glance. It's harder than it sounds.

The Same Old Arguments

You already know the religious wars I'm talking about. They're silly little things. Are static or dynamic types better? (For some, is there even such a thing as being dynamically typed?) Is Vim or Emacs better? Should I learn programming with PHP or Haskell? Should my app use JSON, XML, or a self-describing binary format? Is programming math, art, or craft? Can code be literature?

For a host of reasons, these are questions we have a vested interest in. And I think, more often than not, our motive is to encourage more learning and exploration. But the conversation is almost always full of condescension and judgment, especially if the medium for response is limited. We simply cannot let supporting curiosity become secondary to proselytizing "the right thing".

Plain and simple, turning a prompt for exploration into a right-or-wrong religious debate is curiosity destroying. And that's precisely the opposite of our intent, the opposite of what we as a community should aspire to.

Our opinions are important, and I'm not precluding the existence of a right answer. But someone pondering a tricky subject isn't best met by bludgeoning them over the head with a conclusion. As long as the principal motive of those we interact with is the fractal question "Why?", we are together.

This connects to a lot of things. It connects to people wondering if they're good at programming, or how to know such a thing. It contributes to impostor syndrome. I've struggled to hack on hobby code for fun because I don't feel like I can be proud of it. Not smart enough, not groundbreaking enough, not important enough. And I know that's silly, because there are more important things to worry about.

So the more we can get away from emphasizing that the most important thing in programming is being right, the better that will be for newcomers, for hobbyists, and I believe, for all of us.


I'm reminded of an Alan Perlis quote in SICP:

"I think that it's extraordinarily important that we keep the fun in computing. ... We began to feel as if we really were responsible for the successful, error-free, perfect use of these machines. I don't think we are.

I think we're responsible for stretching them, setting them off in new directions, and keeping fun in the house."

I'm not perfect at this either. It is difficult to never be dismissive, let alone to always be gentle. But sometimes people are just trying to make it through the day. Not use the best tool, not come up with a groundbreaking solution, not fix the world. We need to try to meet other programmers where they are. Not move them to our habitat before empathizing, before loving.

Ironically, I know this has been a bit of a high-horse diatribe. At least let me give you a gift for coming so far and listening to me ramble so much. Here, have something I love, bits of Milosz:

To whom do we tell what happened on the earth,
for whom do we place everywhere huge mirrors
in the hope that they will be filled up and will stay so?

I think that I am here, on this earth,
To present a report on it, but to whom I don’t know.
As if I were sent so that whatever takes place
Has meaning because it changes into memory.

To find my home in one sentence, concise, as if hammered in metal.
Not to enchant anybody. Not to earn a lasting name in posterity.
An unnamed need for order, for rhythm, for form,
which three words are opposed to chaos and nothingness.

What did I really want to tell them? That I labored to transcend my place and time,
searching for the Real. And we could have been united only by what we have in common:
the same nakedness in a garden beyond time, but the moments are short
when it seems to me that, at odds with time, we hold each other's hands.
And I drink wine and I shake my head and say: "What man feels and thinks will never be expressed."

Strangeloop 2013 Schedule

posted on 2013-09-12 09:55:00

I can't believe Strangeloop is only a week away!


  • Machine Learning for Relevance and Serendipity - Jenny Finkel (keynote)
  • Fast and Dynamic - Maxime Chevalier-Boisvert
  • Graph Computing at Scale - Matthias Broecheler
  • The History of Women in Technology
  • Software for Programming Cells - Colin Gravill
  • Learnfun and Playfun - Tom Murphy VII
  • Linear Logic Programming - Chris Martens
  • Creative Machines - Joseph Wilk
  • Making Software Development Make Sense to Everyone - Jen Myers (keynote)


  • The Trouble with Types - Martin Odersky (keynote)
  • Abstract Algebra Meets Analytics - Avi Bryant
  • Programming a 144-computer chip to minimize power - Chuck Moore
  • Web Apps in Clojure and Clojurescript with Pedestal - Brenton Ashworth
  • Getting Pushy - David Pollak || Why Ruby Isn't Slow - Alex Gaynor
  • Thinking DSLs for Massive Visualization - Leo Meyerovich
  • Finding a Way Out - Chris Granger || Servo - Jack Moffitt
  • What is a Strange Loop? - Douglas Hofstadter (keynote)
  • Thrown for a Loop - David Stutz

Lessons from cl-6502

posted on 2013-07-05 11:44:00

This will be the last post about emulation that doesn't involve graphics or disassembly of old NES games, I promise. cl-6502 0.9.5 is out and, in my testing with SBCL, pretty snappy. The book has received updates and is also available on lulu. Below is the 'Lessons Learned - Common Lisp' chapter:

Structures can be preferable to classes

Structures are much more static than classes. They also enforce their slot types (in implementations like SBCL, at least). When you have a solid idea of the layout of your data and really need speed, they're ideal.
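
A minimal sketch of the idea (slot names are illustrative, not cl-6502's actual definitions): typed defstruct slots give the compiler a fixed layout, so accessors compile down to direct slot reads with no dispatch.

```lisp
;; A hypothetical CPU layout: slots are typed, so the compiler
;; can generate direct, unboxed slot access.
(defstruct cpu
  (pc 0   :type (unsigned-byte 16))  ; program counter
  (sp #xfd :type (unsigned-byte 8))  ; stack pointer
  (cc 0   :type fixnum))             ; cycle count

;; Accessors like CPU-PC are ordinary functions, easily inlined.
(defun advance (cpu bytes)
  (setf (cpu-pc cpu)
        (ldb (byte 16 0) (+ (cpu-pc cpu) bytes))))
```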

CLOS is fast enough

CLOS, for single dispatch at least, is really quite fast. When I redesigned the emulator to avoid a method call for every memory read/write, my benchmark only ran ~10% faster. I eventually chose to stick with the new scheme for several reasons; performance was only a minor factor.
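
For context, single dispatch means choosing a method by the class of one argument, as in this hypothetical memory interface (not cl-6502's real API):

```lisp
(defstruct ram
  (bytes (make-array #x800 :element-type '(unsigned-byte 8))
         :type (simple-array (unsigned-byte 8) (*))))

;; Single dispatch: the method is selected by the class of DEVICE
;; alone, a case CLOS implementations optimize heavily.
(defgeneric fetch (device address))

(defmethod fetch ((device ram) address)
  (aref (ram-bytes device) address))
```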

Destructuring is more expensive than you think

My second big speedup came, indirectly, from changing the arguments to the opcode lambdas. By having the opcode only take a single argument, the CPU, I avoided the need to destructure the opcode metadata in step-cpu. You don't want to destructure a list in your inner loop, no matter how readable it is!
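
A before-and-after sketch (function names invented for illustration): the fix is to bake the metadata into each opcode closure once, so the inner loop never has to take the list apart.

```lisp
;; Slow: re-parses the metadata list on every instruction step.
(defun step-once-slow (cpu opcode-data)
  (destructuring-bind (handler cycles) opcode-data
    (funcall handler cpu)
    cycles))

;; Fast: the closure already knows its cycle count, so the inner
;; loop is a bare FUNCALL taking only the CPU.
(defun make-opcode (handler cycles)
  (lambda (cpu)
    (funcall handler cpu)
    cycles))
```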

Eval-when is about data more than code

That is, the times I found myself using it always involved computing data at compile-time that would be stored or accessed in a later phase. E.g. I used it to ensure that the status-bit enum was created for use by set-flags-if and the *mode-bodies* variable was bound in time for defaddress. Regardless, try to go without it if possible.
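
The pattern, roughly (a sketch modeled on the status-bit example, not the actual cl-6502 source):

```lisp
;; Make the table and lookup function available at compile time,
;; so macros later in the same file can call STATUS-BIT during
;; macroexpansion.
(eval-when (:compile-toplevel :load-toplevel :execute)
  (defvar *status-bits*
    '(:carry :zero :interrupt :decimal
      :break :unused :overflow :negative))
  (defun status-bit (name)
    (position name *status-bits*)))

;; Without the EVAL-WHEN, compiling this macro's callers would
;; fail: STATUS-BIT wouldn't exist yet at compile time.
(defmacro set-flag (flags name)
  `(setf (ldb (byte 1 ,(status-bit name)) ,flags) 1))
```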

Use DECLAIM (and DECLARE) wisely

DECLAIM is for global declarations and DECLARE is for local ones. Once you've eked out as many algorithmic gains as possible and figured out your hotspots with the profiler, recompile your code with (declaim (optimize speed)) to see what is keeping the compiler from generating fast code. Letting the compiler know the FTYPE of your most called functions and inlining a few things can make a big difference.
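
A small example of the sort of declarations that pay off in a hot path (the function itself is illustrative):

```lisp
;; Ask for speed file-wide, then tell the compiler the exact
;; signature of a hot function and allow it to be inlined.
(declaim (optimize (speed 3) (safety 1))
         (ftype (function ((unsigned-byte 8) (unsigned-byte 8))
                          (unsigned-byte 8))
                wrap-byte-add)
         (inline wrap-byte-add))

(defun wrap-byte-add (a b)
  ;; LDB keeps the result in 8 bits, so no overflow checks or
  ;; bignum paths are needed in the generated code.
  (ldb (byte 8 0) (+ a b)))
```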

My Lisp Summer Project

posted on 2013-06-21 14:58:00

cl-6502 0.9.4

I haven't been doing any hacking on coleslaw or famiclom for the last month. I've been focused almost entirely on my 6502 CPU emulator. In particular, I've been optimizing it and turning it into a "readable program".

The optimizations have gone swimmingly, taking cl-6502 from 3.8 emulated MHz circa May 1st (commit eecfe7) to 29.3 emulated MHz today (commit b729e8).(0) A factor of 8 speedup feels pretty good, though it would be fun to coax more speed out later.

(0): All figures obtained with SBCL 1.1.4 on Debian 64-bit on an old Thinkpad X200. See this 6502 forum post.

I feel that the readability of the program has remained, maybe even improved, through all that optimization. The same overall design is in place; most refactorings were, approximately, tweaking macros and their callsites. The readability is especially improved when the code is broken into chapters, each with an introduction for context, and typeset with LaTeX. The latter is done thanks to a simple Makefile and some very nifty code swiped from Luke Gorrie's snabbswitch. If you've been curious about how cl-6502 is implemented or just wanted to dive in, there's never been a better time. Grab the book!

NES + Lisp + Static Analysis = ?

I'm still planning to make famiclom a full NES emulator. I won't consider it done until I can play Mega Man 2 with it. Hopefully using a USB controller. It doesn't make much sense for Lisp in Summer Projects though. I've already started the project, the scope is ill-defined, and I want to work on something fresh and new to me. So I've come up with a project that I can't possibly complete instead. It'll be great!

In short, I intend to rewrite Super Mario Bros 1. ... in a not-yet-existing lisp-like macro-assembler/incredibly simple compiler. I already have a 6502 assembler/disassembler in cl-6502 and a tool to parse NES roms in romreader. There's also a very thorough annotated disassembly of Super Mario Bros floating around. I've got a good start on a static analyzer that will take the SMB binary and an entry point and try to build a CFG of the game. The current scheme won't work with memory mapped titles but it's good enough for Mario.

Wild Speculation

Once I have a graph representation of Mario Bros, I'll try both manual analysis of the annotated disassembly with a pen and pad, and automated analysis on the graph using Lisp. I'll try to find as many idioms and patterns as possible to condense and improve readability of the code. Then, somewhere in early August, I'll start trying to rewrite the source and see how far I can get.

This approach is probably completely: insane, unworkable, unviable, inadvisable, and just all around wrong. But I think I'll have fun and learn something, so it's good enough for me. And hell, who knows, maybe I'll get lucky and be able to attend ECLM next year. :)

Towards Comprehensible Computing

posted on 2013-04-10 17:07:00

A Recent Obsession

"I pretended to work like others from morning to evening,
but I was absent, dedicated to invisible countries."
- Czeslaw Milosz, Nonadaptation

When I started programming, I quickly became fascinated by a question that many of us ponder at some point. Why isn't there a 'best' programming language that makes it simple to express our thoughts and intents to both each other and the machine? That transformed over time into a different question, "Why is programming so hard?", but that still wasn't quite right. I think I've finally settled on the real question which is, "Why is modern software so incomprehensible?"

On Modern Software Systems

"As far as his own weak head is concerned, the thought of what huge heads
everyone must have in order to have such huge thoughts is already enough."
- Soren Kierkegaard, Fear and Trembling

I recently took an informal look at the size of my software. The answer was predictably both bewildering and unsettling. The day-to-day software I use comprises about 35 million lines of source code.

All of that software is free and works well. By that measure alone, we might judge software engineering a success. It is popular in some circles to rail against software engineering methodologies and, often, modern software as well. We must acknowledge, however, that the demands placed on modern software far exceed the demands placed on the software of yesteryear.

This is also reflected in the way we teach Computer Science now. For example, MIT's new undergraduate curriculum shifts focus towards building systems with unreliable, incompletely understood components. The fact that we can build software this way points to a culture of library reuse as our chief abstraction rather than the humble function.

Defining Comprehensibility

"When you have a problem with X, have you pulled up the X server internals?
Have you dug into the problems with your drivers?
No, because the kernel is 10 million lines of code and the only way in is grep!
Fuck that, that's not helping me."
- Brit Butler, Wanting Types, Demanding Mirrors (video coming soon)

But I still yearn for the days when you could conceivably understand your computer soup-to-nuts. In October, I'll have been doing "real programming" for 4 years and I still accept a large portion of the day-to-day workings of my machine as magic. How many of us really have a complete picture of what is going on under the covers? We don't wrap our heads around the workings of a 1-billion transistor processor, much less the massive body of code atop it.

I'd wager 100 kloc is the upper bound on a well-organized system I think I could mostly keep in my head. That's 2,000 pages, or 5 400-page volumes, at 50 lines/page. While prose is very different from code, this would seem to be roughly the same order of magnitude as prominent fiction series like LOTR, Harry Potter, Game of Thrones, etc.

There is a torrent of OpenGenera floating around that contains about 870 thousand lines of Lisp. It should be about 20 years old. That means my current desktop is a 40-fold increase in size over Genera. Windows 3.11 is probably bigger than Genera and the size of a Desktop OS probably hasn't doubled every 4 years for the last two decades...but it is an interesting figure.

MenuetOS is a more provocative example. Menuet is a desktop OS written exclusively in x86 assembly. There is an open source 32-bit version and a closed source 64-bit one. I haven't read it so I can't say whether or not it is comprehensible. It is only 36 kloc for the kernel and 58 kloc for the apps though. Even with limited hardware support, 94k for a graphical OS is a noteworthy point in the design space.

What I Want

"So. I ask: how many people does it take, at a minimum, to maintain
our current level of technological civilization?"
- Charlie Stross, Insufficient Data

I am explicitly not asking for a world where desktop OSes are limited, comprehensible systems over full-featured ones that can't fit in one human's head. I'm not worried about bus factors. But Operating Systems and Hardware/Software Interface courses are not enough!

I want a completely open, comprehensible system that does something cool. It doesn't have to be self-hosting. It does have to be something I can study and change, observe and modify at every level of the system. It should be something that actually shipped to consumers at one point. The absolute lack of such a system for people trying to learn frustrates and disappoints me.

My Little Contribution

"We should burn all libraries and allow to remain only
that which everyone knows by heart."
- Hugo Ball

All this is exactly why I started working on a Nintendo emulator in lisp. It's still early days but it's my forever project until it's done. 6502 emulation is done, some basic Nintendo functionality is working. Once the NES is 90% complete, I plan to try writing a simple lisp-like language that targets the NES. I'll then use that to produce an annotated, high-level reconstruction of my favorite childhood game, Mega Man 2. It is a bit ridiculous and probably overreaching but it's worth a shot. And in some way, it's the computing toy I've always wanted.

On Inaction

posted on 2013-03-04 16:28:00

Seeing Value

I've been experiencing the Dunning-Kruger effect a lot lately. At least, I've been feeling like a fraud. And while I could list reasons I'm not a great programmer, asking why I felt like a fraud has led me to something more interesting. I don't think I've worked at a company where I knew "where the money is coming from". What does that mean exactly?

  • I haven't worked for a company whose revenue comes from a product I use or want to use.
  • I haven't worked for a company in an "obvious" growth market.
  • The software I write generates revenue indirectly through customers I never interact with.

All of this creates a surprising problem for me: The value I add is opaque from my perspective. I take it on the word of my superiors and peers that any value is present. This makes it essential that I trust and enjoy working with those people.

The Mythical Customer

It might not be immediately apparent why this is a problem. Find a company with decent people and culture and you don't need to be directly connected to the product or customers. Just churn out code and have fun. Advertisers will foot the bill. While it's true that you can sustain a business this way it certainly isn't ideal. The issue comes from just how decoupled the product becomes from the revenue. When it's time to grow revenue, you have to do it by attracting more eyeballs. Here's how that works:

  1. A Product Manager decides on new features or a UI overhaul to increase site traffic.
  2. Programmers implement those features with small tweaks and adjustments.
  3. The changes are released and traffic is measured for an increase. A good shop will use A/B testing to try and at least ground these decisions in data.
  4. Improved numbers are sent to advertisers to garner more customers and/or revenue.

But Product Managers are not users. Programmers are not users. Advertisers are not users. Sure, we use the product some to verify the code works during testing but we're not invested in it. A/B testing is not the same as user input. There is also no requirement that you correlate traffic with actual perceived value. Many companies just read traffic AS perceived value. Frankly, that's bullshit. Our loyalty is necessarily to the advertisers. They pay us...but the users are the real customers. The product just happens to be paid for by collecting data about how they use it.

So What?

Thus far, none of this should surprise anyone who has worked in the tech industry. Hell, this shouldn't surprise anyone with a Facebook account. It points however to a serious cultural problem in many tech companies: not letting (or demanding) your technical experts be, well, technical experts. A friend of mine calls this "{} for $". Many people rant about this as "Taylorism in software". It cannot be overstated that no programming paradigm nor software engineering methodology will eliminate the need to connect engineers to the product. Similarly, letting the engineers take the reins is not anathema to good product design or improved value to the business. And this is not new. Quoth Don Eastwood in a 1972 Status Report on MIT's Incompatible Timesharing System:

"In general, the ITS system can be said to have been designer implemented and user designed. The problem of unrealistic software design is greatly diminished when the designer is the implementor. The implementor's ease in programming and pride in the result is increased when he, in an essential sense, is the designer. Features are less likely to turn out to be of low utility if users are their designers and they are less likely to be difficult to use if their designers are their users."

Earlier in the report, Eastwood says, "The system has been incrementally developed almost continuously since its inception." Hello, agile kids. I'll say it again. Any company pretending that software engineering methodology or a given technology replaces the need to connect engineers with what they're building deserves to be skewered. The reason knowing "where the money is coming from" is so essential is that software is different than any other product in history. Because the design in a fundamental sense is the product. If you don't know what I'm talking about, I'd encourage you to watch Glenn Vanderburg's talk from RailsConf 2011. If your engineers don't understand the reason for what they're building, then the product can at best accidentally support the business. If you think you or your company can just scrape by for your whole career without getting eaten, I'd encourage you to reevaluate that assumption.

Finding Alternatives

One big reason most companies are hierarchies more concerned with maintaining market position than creating value is the inherent risk. Real growth comes from bets and empowering people to change what's needed to find "a better way" or "The Right Thing"...and that's terrifying. Few individuals or institutions have the guts, bravery, and stamina for continuous reinvention. It's exhausting. Not to mention that it puts the focus squarely on a company's employees. It seems our best chance at winning big comes from those kinds of risks though. GitHub and Valve's experiments in distributed management are a brilliant step in this direction.

In the Interim

A lot of what has had me feeling like a fraud is remembering how much I have to learn. Learning is a kind of reinvention itself though and one of the reasons I've loved computers since the beginning. So I'll keep learning and next time I'm in a programming interview, I look forward to asking the other hackers, Do you know where the money is coming from?

On Visible Programming

posted on 2012-09-26 20:13:00

I have a bad feeling that I'm about to piss off a lot of people. Oh, well.

Bret Victor gave a very interesting talk at Strange Loop called "Visible Programming". From what I can tell, Bret is a very smart guy and an accomplished UI designer. I was surprised to find that while I agreed with many (all?) of his premises I disagreed with most of his argumentation. As I've received a question or two about it, I'll try to clarify my thoughts here.

I have three core points:

  1. His vision requires reflection and an editor is the wrong place for it.
  2. Many examples were domain-specific and/or trivial but he didn't talk about general editor extensibility.
  3. Much he suggests is already done or can be done with some solid effort and available technology.

On Reflective Systems

Ironically, I gave a talk on a different but overlapping topic a few weeks back. As mentioned in Chris Granger's talk, examples like the ones he and Bret Victor gave are much easier when working with a dynamic runtime. Unfortunately, dynamic is conventionally interpreted as a language with dynamic types and, put simply, we should try to change that. I argue that a dynamic language is one that allows you to inspect values and update function and class definitions while the program runs. This is formally known as reflection. What I believe dynamic language advocates care about is the ability to work with a "live" system, not the presence or lack of static type checking.

Being able to visualize program execution means having a snapshot of the program state during each step of execution. Setting aside the impractical size of such a thing for programs with arbitrarily large working data sets, this requires either A) annotating every line or function call with logging statements/function tracing, or B) instrumenting the language runtime in some fashion to retrieve values during execution. That's why these examples are much easier with a reflective language like Javascript where you can hook arbitrary behavior into the Object prototype.

Trying to get an editor to do this for all languages is a fool's errand without a nice reflective runtime and API to retrieve data from. It's hard enough with that stuff! And that means we need reflective compilers and interpreters before we can have an ideal editor.

On Editor Extensibility

Many of Bret's examples were highly domain specific. While it makes sense to have a simple interface for toying with values in a visualization/drawing program, it's harder to see how to usefully apply that to something like protein folding software or even RSS feed parsers. Having editor extensibility in addition to reflective systems enables arbitrary widgets or modes for dealing with a given problem domain or data visualization need. Plain and simple, there's no way to build-in useful visualizations that are universally applicable. I admit I am not a designer or gifted at visualization so I may have simply struggled here.

What can be done today

Bret essentially wants to make the experience of programming more tractable through interactivity and tangibility. He suggested 5 key requirements:

  1. Enable the programmer to read the vocabulary.
    • Mouseovers/tool-tips on all source tokens is largely available today and seems to address this.
  2. Enable the programmer to follow the flow.
    • A visualization to step through execution could be written somewhat easily given something like Common Lisp's trace.
  3. Enable the programmer to see the state.
    • This is essentially a more elaborate visualization that requires a bit more trace data than the above.
  4. Enable the programmer to create by reacting.
    • Suggests that editor not only autocompletes function names but default values too so you can see their effect immediately.
    • Function name autocompletion exists in many IDEs. Pervasive default values are easier for typed than "untyped" languages. Still requires runtime support!
  5. Enable the programmer to create by abstracting.
    • Essentially demonstrated a refactoring system here. Keep feeling like I missed something key about his argument.
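
Point 2 in particular costs little to approximate today; Common Lisp's built-in TRACE already produces a call-flow log with no editor support at all:

```lisp
;; A function worth tracing: naive Fibonacci makes the nested
;; call structure obvious.
(defun fib (n)
  (if (< n 2)
      n
      (+ (fib (- n 1)) (fib (- n 2)))))

;; At the REPL:
;; (trace fib)   ; subsequent calls print each entry and return
;; (fib 3)       ; the nested calls appear, indented by depth
;; (untrace fib) ; turn the instrumentation back off
```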


While philosophically well-formed, Bret seems to miss the fact that runtime support is required for a Visible Editing Experience to emerge. If the industry still doesn't understand that dynamism is about the runtime rather than types, clamoring for a magic editor will get us nowhere. I want to see a more interactive, tangible environment in the future as well but we cannot get there by arguing that IDE/editor writers need to step up their game. We need to make a concerted argument for the resurrection of highly reflective systems in both research and industry. Once systems with robust reflective capabilities are widespread, realizing a vision such as that described will be a week-long hack rather than a decade-long million man-hour slog.

I'd also like to examine how far reflection can scale and a bit about making such a thing applicable to both novices and experts but I need more time to compose my thoughts so I'll save that for a future post. Any comments on this post or its mistakes, of course, are welcome.

Strange Loop Notes - Day 2

posted on 2012-09-25 10:17:00

Computing like the brain

  • Lots of detail on anatomy and physiology. Use that to derive theories and test with software.

  • Neocortex is a predictive modeling system.

  • Grok, a predictive modeling product.

  • Future of AI.

  • Neocortex builds online models from streaming data. Think about it in terms of the senses.

  • Beware, vision is more than one sense. Retina is array of a million sensors. Auditory nerve is 30,000.

  • Brain has several million sensors changing firing in the 10s of milliseconds.

  • The brain invokes model building in response to novel sensory input. It can: make predictions, detect violations of predictions, generate actions.

  • The brain is more of a memory system than a computing system.

  • Neocortex is a hierarchy of projected systems. retina->cochlea->somatic nerve. But all one memory algorithm.

  • Primary memory is sequence memory. Playback of stored patterns. Stream processing!

  • Sparse distributed representations of data.

  • Traditional computing uses dense representation. ASCII is a perfect example. Individual bits have no meaning, programmer assigns meaning.

  • Brain uses sparse representation. Mostly 0 bits, maybe 2% 1s. Sparse IntMap? Bits that represent specific things. Top bit that means X.

SDR Properties:

  1. Similarity: shared bits = semantic similarity. A similar bit vector has similar semantics.

  2. Store and Compare: Store indices of active bits, don't traverse the whole thing. Subsampling is OK though!

  3. Probability shows errors are very unusual even with subsampling. If you do make a mistake, it's a close/semantically similar one.

  4. Union membership: Sets! Is this SDR (10..001) a member of this union of SDRs (00..001)? Very high correctness.

  5. The key to machine intelligence is sparse distributed representation.
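
Properties 2 and 4 are cheap to sketch if an SDR is represented as just the list of its active-bit indices (a toy model for illustration, not Numenta's implementation):

```lisp
;; Overlap = number of shared active bits; in an SDR this serves
;; as a proxy for semantic similarity (properties 1 and 2).
(defun overlap (sdr-a sdr-b)
  (length (intersection sdr-a sdr-b)))

;; Union membership (property 4): OR several SDRs together by
;; taking the union of their indices, then test a candidate.
(defun union-member-p (candidate union-of-sdrs)
  (subsetp candidate union-of-sdrs))
```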

Sequence memory:

  • Digression on neuroscience.
    • Typical neuron has thousands of synapses.
    • Neural networks with only a few connections are nothing like real neurons.
    • Cell goes into a predictive state when it detects a coincidence.
    • Each cell is one bit in their SDR implementation.
    • When a cell is activated, it looks for cells that were active just prior to it. Recognition of former state. Pattern/prediction. Deeply probabilistic.
    • Multiple predictions can occur at once. Only single step memory. For larger sequences need Variable Order Sequence Memory.
    • 40 active columns, 10 cells per column: 10^40 ways to represent the same input in different contexts. Deeply distributed.
    • Different representation of sequences at columnar level and cellular. Architecture supports generalization.
  • If pattern does not repeat, forget it. If it does, reinforce it.

  • Synapses are either connected or not. Represented via scalar, if it's above a threshold, it is connected.

  • Typical system: 1 region, 2,000 columns, 30 cells per column, 128 dendrite segments per cell, 40 connections per segment. 300M connections. No SPOF.

Predictive Analytics Today:

  • The world stores tons of data in databases, builds models and batch processes.
  • Too slow! Soon will be massive stream processing with automated model creation, continuous learning, and temporal and spatial models.
  • Grok is their product. Get a stream of data, run through encoders into SDRs, feed it into Sequence Memory that makes predictions with probabilities.
  • User provides a data stream, what to predict, how often and how far in advance.
  • Energy pricing and demand/response is a common user. Also, server loads, ads, etc. All running on Amazon AWS with a REST API and dev dashboard.
  • Helped a server management group provision instances ahead of time for video transcoding. Predicted server demand enough to reduce cost by 15%.

Future of Machine Intelligence:

  • Supports alternate paths being pursued now: IBM Watson, Google Cars, etc. ...but strongly believes SDR and brain-derived solutions will eventually dominate.
  • Strong sensory-motor integration. New science coming out constantly. Need more answers/understanding.
  • How do we move this from the cloud to embedded/distributed sensor grids?
  • Lots of effort to make this fast in software. Doing it in hardware would be better! When/how? New memory architecture rather than CPU. VERY fault tolerant.
  • Most of your brain is connections, not processing mass. Chips aren't good at massive interconnects yet.
  • Cool applications will not be the classics (vision, language, speech).
    • Artificial brains? Maybe. Brain has ultimate latency for a result of 5ms. That's pushing it. Make 'em bigger. Faster!
    • Make a physicist brain that works round the clock really fast that never gets tired and never eats.
    • We're not designed to explore the universe. Make an artificial brain that is! Send it out, bring it back, ask it questions.

Behind the Mirror: The birth of Light Table

  • 1974: MIT AI lab was using TECO. Tape Editor and COrrector for the PDP-10.
  • Not an editor like today's. A language for text manipulation.
  • Code samples. God it hurts. Called YAFIYGI "you ask for it you get it" in the first paper.
  • Richard Stallman finds WYSIWYG at the Stanford AI lab. Decides to make TECO WYSIWYG by adding macros. Emacs! Usability += 1000.

35 years later...

  • Worked at Microsoft, got hired as a PM. PM on Visual Studio. Owned C#+VB in the IDE. Was asked "What is the future of editing?"

  • How do users really work with it? NO end-to-end usability studies were done on Visual Studio. Only on specific features.

  • Watch a guy hacking from a room with a one-way mirror. Ask the user to think out loud as they do it.
    • One interesting finding: You /will/ touch the mouse. Even if you swear you never do.
  • Richest feedback you'll ever get on a product is a usability study.

  • Expectation: Visual Studio is amazing tech! Actual: Too complicated, too noisy. Shock: No one actually vocalized problem.

  • If it's true that the key attribute of a good programmer is keeping the system in your head, it's very sobering. How to teach memory?

  • Everyone just uses: Code navigation, Debugger, Editor. Basic workflow has not changed since Emacs.

  • Many innovations but never went mainstream: Smalltalk, Lisp Machines, etc. Time to re-imagine.

  • Tried many things, nothing like Light Table. What changed?

  • Clojure taught @ibdknox that what makes great programmers is being able to abstract away parts of a system, not hold it all in their head at once.
    • Make black-box abstractions and traverse them. Don't keep it all in your head.
    • Perfect! How do we make an editor for that?
  • We're always dealing with abstraction. We are not recipe writers. We consume and subsequently create abstractions. We are synthesizing machines!

  • Need to ask questions about them. Poke at them. You don't understand code until you run it. Writing it is not enough.

  • Realization: Interactivity/REPLs were a big enabler at getting people into computing. Closer connection to what you're doing. YAFIYGI->WYSIWYG is similar.

  • What is an IDE for Abstractioners? Light Table. Initially a 6 day hack. Half a million people saw it in 2 days.

  • Raised 300k+ on kickstarter. Acknowledgement that we are in the dark ages of software dev. We're disconnected from our software!

  • Brief live demo...

  • Add a custom widget for git interaction.
    • Pull in lighttable stuff by creating namespace. Add an atom for git-state.
    • Add an update function. Ask the server to run git status and update the git-state with it.
    • Write an init function that runs update, and returns a div with a list of modified files.
    • Add a hook (on [] (update)). We're live!
  • Add a custom widget for HTML5 game dev.
    • Add a canvas widget in widget init.
    • Add buttons to start and stop the game loop. defui is a Light Table construct to make a DOM element and bind events to it.
    • Add some CSS to make it pretty.
  • Add a custom widget for live view of entities in the game. Player/enemies,
    • Add atom to track entities and accessors.
    • Add hooks to update entities in IDE when game functions are called.
    • Add CSS to improve visibility.
  • HA! We've been in a custom presentation mode.

Runaway Complexity in Big Data

Missed due to chatting with Chris Granger and other awesome folks.

Computer Architecture from the 1960's

  • Burroughs B5000. Sassy ads. ALGOL syntax chart was first CACM centerfold. COBOL 61 chart was 4 times the size for half the capability.
  • "MULTIPLY PAY-RATE TIMES HOURS-WORKED GIVING TOTAL-PAY." - Burroughs ad where this is the only language needed to program it.
  • B5500 had dynamically sized arrays in hardware! Call by name! "They believed in lambda calculus when they created ALGOL."
  • Thunk came from CACM in 1961. :D
  • Pitched Haskell. Interesting and unexpected. "Trying to learn haskell so I can regain the purity I had back in 1969."

Guess Lazily: Making a Program Guess and Guess Well

  • Guessing is a way to both set up and solve computationally hard problems.
  • Naive guessing doesn't get us far. How do we guess well? One simple principle.
  • Fail Early, Fail often. Be lazy. Constraint propagation!

Okay, I couldn't keep up and had you been there you wouldn't blame me.

But this was cool and analogous to the Byrd/Friedman running programs backwards and forwards stuff from yesterday.

Also, Buyer Beware: Oleg's papers may be more clear than his presentation style. :P

An Audubon Society for Partial Failures

  • In an age of shamanism when it comes to operating computers.

  • Talking about pressure as it relates to large scale distributed systems. Cliff's company, Boundary, does high-volume streaming.

  • One day they noticed IOPS starting to disappear. Kafka message queue, writes to disk. If IOPS start dropping...

  • Pressure rising on interior nodes, data kept coming in, not going out. Death spiral! Chained failures.

  • Like riding out a storm, just trying to keep systems up until traffic dies down.

  • Erlang memory explosion:
    • Process is over capacity and can't keep up. Queues grow unboundedly.
    • GC and heap relocation takes more and more time, allowing more messages to pile up.
    • Until the VM tries to allocate 16GB of memory or so... and the OS is all "GTFO"!
  • There is no explicit flow control. But we need it! (We already have it.)

  • Think you know TCP? You don't! It has flow control. Designed for exactly this problem.
    • But our software doesn't take advantage of it!
  • The only documentation of the queue explosion problem + flow control in Erlang is... an Ulf Wiger mailing list post.

  • Create an end-to-end linkage of back pressure so edge nodes can know when there's a problem and react.

  • In Erlang:
    • Active sockets produce Erlang messages internally.
    • {active, true} is an unfettered firehose.
    • {active, once} provides explicit control over when delivery occurs. Overload then backs up to TCP receive queue. Slows down sender!
  • In Netty/JVM:
    • MemoryAwareThreadPoolExecutor
    • Expresses back pressure by causing the producer to sleep when execution pools are above a threshold.
  • Is this familiar? "SEDA: An Architecture for Highly Concurrent Server Applications"

  • What does this buy you? Back pressure gives you time to recover. Like System Dynamics, about getting a grip on non-linear behavior.

  • Distributed Erlang Considered Harmful!
    • Erlang protocol has an embedded heartbeat.
    • Heavily loaded connections will time out the heartbeat - dropping a good connection.
    • Systems that move heavy throughput shouldn't use distributed Erlang protocol. AHEM, riak.
  • An aside: SCTP is a little better for this.

Expressing Abstraction, Abstracting Expression

  • Grew up making JRuby ready for production work, PLT-wise.
  • Started with Ruby as a more practical lisp but got into JRuby because job demanded Java.
  • Started working on Ioke (pronounced I-oh-kee). A vehicle for exploring expressiveness, informal as it is.
  • Why are new languages still being created? Is it worth choosing languages strategically/Does language actually matter?
  • Ex-pres'sive-ness: "Effectively conveying thought or feeling."
  • A construct is expressive if it enables you to Write/Use an API that can't be written/used without it.
  • Of course, Turing completeness means any language can do it. But you might write an interpreter to emulate it!
  • "Beware of the Turing tarpit where everything is possible but nothing of interest is easy."
  • The Blub Paradox is oft mentioned. Raymond advises learning many different languages.

Aspects of Expressiveness:

  • Regularity - whoops. missed this a bit.
  • Readability - A strange concept. Readable to who? Readable for what problems?
  • Learnability - whoops. missed this too.
  • Essence vs Ceremony - Anything not expressing the essence of your problem is ceremony. The weight of changes between brain and code.
  • Precision vs Conciseness - You will abstract away from the machine chasing conciseness. Reducers give up control of execution order.

Expressiveness over performance, every time! (in Ioke)

  • Language is super slow. Duh.
    • "I think I made a mistake." If you can't run useful programs quickly, there's no point expressing them.
  • Theoretical Expressiveness: Matthias Felleisen's The Expressive Power of Programming Languages.
    • An expressive feature is something you can't do without reorganizing the rest of the program.
    • The negative consequence of limited expressiveness is patterns. Not expressive enough on some dimension.
    • Argues design patterns reduce understanding and readability rather than enhancing it.
    • Let is macro-expressive as it can be transformed into lambda.
  • Seems there is folklore thinking that goes against these definitions. Still seems ad-hoc. Somewhat disappointing.
  • Spent most time thinking about practical expressiveness vs theoretical expressiveness.
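The "let is macro-expressible" point can be sketched directly in JavaScript: a `let`-binding can be locally rewritten into an immediately-applied lambda, which is exactly Felleisen's sense of a feature you can eliminate without restructuring the program.

```javascript
// A let-binding...
function withLet() {
  let x = 2;
  return x * 3;
}

// ...is the same program with the binding desugared into a lambda application:
const asLambda = ((x) => x * 3)(2);
```

Both compute the same value; the transformation is purely local, which is what makes `let` sugar rather than genuine new expressive power.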

Types of Abstraction:

  • Abstracting objects. Ioke was prototype-based.
  • Abstracting class of objects.
  • Abstracting functionality. (Beware crossover, most Scheme programmers write their own object system. JS too.)
    • Classic "Object is a poor man's Closure/Closure is a poor man's Object" debate.
  • Abstracting structure (of data).
  • Abstracting structure (of code).
  • Abstracting relationship. Erlang vs Mozart-Oz. Actors or Dataflow variables! Constraint propagation/logic programming?
  • Abstracting paradigm. D, Racket, Common Lisp.
  • Elaboration. You need to be able to change an abstraction. CLOS MOP?


  • Language is much more impoverished by not having macros.

Different kinds:

  1. C-style Preprocessor macros. No structure. Just string processing! AAAAAGHHHH.
  2. AST macros. Work on S-Expression, not AST.
  3. Same language available as transformation language and host language. Metalanguage, etc.
  4. Often use functions in a complex macro.
  5. Template macros. Type-directed expansion, turing complete.
  6. This is scary because they are TWO different languages.
  7. Similar situation to complex Type Systems in Haskell/Scala.


  • Should be able to change components separately without changing multiple pieces. The Expression Problem!
  • Will be first thing looked at in language design/Ioke next time around.

Static typing!

  • Way of abstracting that reduces the expressiveness of your language.
  • Makes illogical behavior illegal. Also restricts you from some valid/correct programs it can't typecheck.
  • Very powerful. Can make it impossible to compile a program with an invalid account number, for example.


  • Working with generic code over an unrestricted set of containers.

  • Haskell and Scala are the only two mainstream languages with Type Classes. Not really a static language feature.
    • Ad-hoc polymorphism/dispatch mechanism. Ultimately a dynamic feature.
    • Type Classes enable other abstractions. Most monads use Type Classes and are more powerful consequentially. Synergistic abstractions.
  • Most abstractions are leaky. See ORMs. Occasionally can't get underneath it to use raw SQL.
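The "type classes are ultimately a dynamic dispatch mechanism" point can be sketched in plain JavaScript: operationally, an instance is a per-type dictionary of functions, and dispatch just looks the dictionary up. (The `Show` name follows Haskell's type class of the same name; the Map-keyed-by-constructor encoding is an illustrative choice, not from the talk.)

```javascript
// A "Show" type class as an instance table: one dictionary entry per type.
const Show = new Map();
Show.set(Number, (n) => `Num(${n})`);
Show.set(Array, (xs) => `[${xs.map(show).join(", ")}]`);

// Dispatch picks the implementation from the value's type at runtime.
function show(x) {
  const impl = Show.get(x.constructor);
  if (!impl) throw new TypeError("no Show instance");
  return impl(x);
}
```

Note how the Array instance recursively reuses `show` on its elements - this is the "synergistic abstractions" effect: one instance composes with every other.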

  • "All non-trivial abstractions, to some degree, are leaky." - Joel Spolsky
    • Okay. Why?
    • Abstractions are leaky because they are not absolute.
    • They hide complexity in a single direction/dimension.
    • They expect a certain kind of use. ... What if you use the library from the side?
  • "Abstractions may be formed by reducing the information content of a concept, to retain information only relevant for a specific purpose." - What purpose?

  • Effective communication requires common experience/understanding. Referents in linguistics.


  • Some interesting features of natural languages...
    • Simile - A way to compare two things. (like, as) is to (subclassing, type classes)?
    • Metaphor - Doesn't seem to be a good PL analog.
    • Analogy - also missing somewhat.
    • Redundancy/repetition - Think about speechwriting. Java has this but we don't seem to like it.
  • Arguments/precision determined by caller rather than callee in Natural Langs.
  • Syntax, Semantics, Pragmatics. Linguistic pragmatics is the idea of how context contributes to meaning.
  • If you were talking to the King of Sweden, you'd speak differently to him. Not wrong but a faux pas.

Communicating indirectly with many audiences though. The machine, team members, ops, etc. Even business stakeholders! Totally unique to programming.


  • From a PLT perspective, doesn't matter. In day to day pragmatics, it does matter. Everyone knows this.

  • Can go too far in either direction, Salt or Saccharine.

  • Operator overloading. Probably not one right answer.

  • Don't support it.

  • Support overloading built-ins.

  • Don't have operators. Everything is a function.

  • Arbitrary operators to be defined or redefined. (haskell/ml)

  • Should built-ins look different than user defined stuff? (editor's note: Hell no, let the editor show you/syntax highlighting).

  • Optimizing:
    • For writing. A bit undefined. Optimizing for the shortest part of code's life. Perl? What about communication?
    • For reading. Reading by a novice or an expert? Again, who is your audience?
    • For visual distinctiveness of different semantics
  • Expressiveness is not always better. Many dimensions to optimize along. Language design is a balancing act. Find the right clustering of features for a given point in the design space.

  • Expressiveness and abstractions are relative.

Visible Programming

  • Designing a Programming Environment Around How Human Beings Do Things
  • What makes one programming environment better than another? Programmer productivity! ... huh? What's that?
  • Is the problem with programming we don't go fast enough? Huh. Nobody told me.
  • We're creating programs and we can't see what they're doing at runtime. If you can't see something, you can't understand it.
  • A programming environment should let us see and understand what the program is doing. A more directed goal than productivity.
  • And maybe productivity will fall out if we're lucky. ;)

5 Principles to aim for:

  1. Must enable the programmer to read the vocabulary (program).
  2. Must enable the programmer to follow the flow (Call-flow-graph).
  3. Must enable the programmer to see the state (as it executes).
  4. Must enable the programmer to create by reacting (iterative, interactive development).
  5. Must enable the programmer to create by abstracting (what sort of tools?).

Examples of these principles in a hypothetical environment: (JS & Processing for this talk)

  1. Normally we go to the docs or source. This sounds like using 4 sliders to choose arguments to something in Photoshop. UI garbage.
    • Mouseovers/tool-tips on each token in the code. Make meaning transparent!
  2. Great explanations show, not tell. Make it possible to see the output of a given form as inputs change. Explain in context!
    • Imagine a cooking show that introduces ingredients and then cuts to the result. You can do it yourself!
    • Step through execution via a slider. This helps but it is not enough. We can't see patterns in execution.
    • Plot it on a timeline. Which lines executed when? What patterns or alternate executions could there be?
  3. It is common to expect programmers to manipulate code in their heads. WHY?
    • It is the responsibility of the environment to show what changes a line causes in the running program.
    • Show what lines executed, when, and what effect they produced.
    • Larger programs just become a data visualization problem.
    • Show the data, show the comparisons, eliminate hidden state.
    • Making global variable change visible is an option for inherently stateful lines.
    • More programs running forward and back. (editor's note: you guys are jerks and you hurt my heart)
  4. Composing things in your head doesn't scale. Maybe this is why only small programs are beautiful! Use the environment as an external imagination.
    • Live coding is one way to approach this. Bit of a straw man argument here.
    • Core idea is you want to have program supply you ideas, not just keep you from hitting recompile.
    • Take completion further, what if autocompletion lists also included suggested values?
    • Useful but clearly limited to the domain of this specific problem: Processing/GUIs and visualization.
    • Dump all the parts on the floor so you see your raw materials. Functions are lego blocks. Have API browsing. Offers another problem-specific example.
  5. Encourage starting with constants, then adding variables and functions later once you know what behavior you want.
    • Editor can help by offering "factoring relations".
    • I know this wasn't intended to resemble a working production system but I was hoping for more than this.
    • I thoroughly agree with his premises but I do not think this was a convincing way to demonstrate their need.
  • How does this scale to real-world programming? I have answers but I think it's better to question the question.

  • Asking how to scale to everyday programming is like asking how engines benefit horses. We have to work like this.

  • Argues against static runtimes. I agree but this was a weak treatment. There is tooling to work around this.

  • "There is no future in destroy-the-world programming. It's got to go." - Agreed.

The State of Javascript

  • Ten days in May 1995: Shipped Mocha.

  • September 1995: Livescript

  • December 1995: Javascript.

  • "Hey maybe we should fix equality?" "Nope, too late."

  • JS: Wat. We've all seen it.
    • JS is based on C but wanted to use braces for dictionaries like python. Based on Java so objects convert to string. So int->string->0 in {} + [].
    • This is awesome. Going through how JS processes every WAT in JS. Much respect to Eich for this.
    • NaN is a number but that's the IEEE's fault.
    • "W3C is gonna fail, web won't be XML, let's make HTML5!" - in a pub one time
    • Get the band back together, heal the rifts, hug it out. ES6!
  • ES6 Goals:
    • Make it better for applications.
    • Make it better for libraries.
    • Make it better for code generation.
  • Adding very minimal classes.
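The `{} + []` wat mentioned above is a parsing artifact as much as a coercion one: at statement position, `{` opens an empty block, so what remains is the unary expression `+[]`. A quick sketch:

```javascript
// Forcing {} to parse as an expression gives string concatenation:
const asExpression = ({}) + [];   // object -> "[object Object]", array -> ""
// What `{} + []` actually evaluates at statement position: unary plus on []:
const unaryPlus = +[];            // [] -> "" -> 0
// Array + object is always the string case:
const arrayPlusObject = [] + {};  // "" + "[object Object]"
```

Same tokens, different parse - which is why stepping through how JS processes each wat is so instructive.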

  • Adding MODULES!!!! Fuck. Thank God. FINALLY. Not first-class but better than nothing.

  • Symbols (formerly names or private names). They're self-named objects that you can't forge from strings. Hygiene!

  • If you're using symbols, there ought to be a nice way to dereference/get at them. How about @symbol?

  • Utilities for symbols, many interesting applications.
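For reference, here's how Symbols ended up shipping in ES6 (the `@symbol` dereference syntax discussed in the talk was eventually dropped): self-named keys that can't be forged from a string, giving hygienic, private-ish properties.

```javascript
// A Symbol is only equal to itself - Symbol("secret") won't forge it.
const secret = Symbol("secret");
const obj = { [secret]: 42, visible: 1 };

const keys = Object.keys(obj);   // string keys only; the symbol is hidden
const val = obj[secret];         // reachable only if you hold `secret` itself
```

Anyone without a reference to `secret` simply cannot name that property, which is the hygiene guarantee the talk was after.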

  • Default parameters. arguments is a hideous hack. We will kill the arguments object but Brendan will pay to see that movie. :)

  • &rest parameters. Guess what, MO LISPY! Prototyped in Spidermonkey right now.

  • ... is spread, the inverse of rest, i.e. the *args/**kwargs split in Python terms. Splicing?

  • for-of, iterators and generators. Have to use of instead of in. Can finally iterate over values instead of keys.
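The three features above, as they eventually shipped in ES6:

```javascript
// Rest: gathers trailing arguments into a real array - no `arguments` hack.
function tag(name, ...args) {
  return `${name}:${args.length}`;
}

// Spread: the inverse of rest, splicing an array into a literal or call.
const nums = [1, 2, 3];
const more = [0, ...nums];

// for-of: iterates values, not keys (unlike for-in).
let sum = 0;
for (const v of more) sum += v;
```

Rest and spread together subsume most of what `arguments.slice` and `Function.prototype.apply` were contorted into doing.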

  • Callback hell. What can we do? More problems than aesthetics, could capture too many outer variables.

  • Promises/futures still not here...but we will have coroutines from generators and use combinators to make things suck much less!

  • Array comprehensions. Lazy generator comprehensions too! When did they get feature-itis?

  • Sets too! Proxies for metaprogramming API. Think they got feature-itis when they thought about this audience. ;)

  • Can emulate NoSuchMethod/DoesNotUnderstand with proxies. Couldn't agree on standardization. Could add a default one to Object.Prototype.
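The NoSuchMethod/doesNotUnderstand emulation works by putting a `get` trap on a Proxy: known properties pass through, unknown ones fall back to a handler. (The fallback behavior here is an illustrative choice.)

```javascript
const target = { greet: () => "hi" };

const obj = new Proxy(target, {
  get(t, prop) {
    if (prop in t) return t[prop];             // known: delegate normally
    // Unknown: return a callable, like Smalltalk's doesNotUnderstand.
    return (...args) => `no method ${String(prop)}(${args.join(", ")})`;
  }
});
```

Since this was never standardized as a default, you opt in per object - exactly the "could add a default one to Object.prototype" debate that stalled.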

Apparently all that was library and application improvements. What about code generators?

  • Compiling to JS is taken seriously by the ECMAScript standards body.
  • A portable bytecode feature may emerge one day. Probably not. UNTIL THEN...
  • The number one source of malware 0-days lately is Java on the client. Class loading still fucked. Gosling threw in the towel, hurt a lot.

So bytecode...

  • JS might actually be better than bytecode and COMPRESS better.

  • Bytecode standardization simply won't happen. Need to be versioned, deters competitive race, don't have the bandwidth in companies.

  • "Versioning is an anti-pattern on the web. You want to fail soft."

  • No call/cc for you! Also, many humans still like writing JS.

  • Proper tail calls have been standardized. Thanks Dave Herman!

  • What do you think of NativeClient?
    • Not portable.
    • Implementation-defined.
    • No view source.
    • Impressive engineering and good competition though. Not in the cards for standardization.
  • Just care about making the web better. Wound up adding Typed Arrays for WebGL. Big potential win for compilers there. See: emscripten.

  • "The reach of the web exceeds any other platform."

  • "To game developers, they're just excited about doing another port."

  • LLJS, low-level javascript is another experiment. C-like lang to typed arrays js to... Feel free to play with it.

  • ES4 collapsed when they tried to add something type-like. NOT adding types or contracts or whatever. Have typed arrays. Go away.

  • "People write Javascript in a latently well-typed way."

  • There is however a proposal for User-defined structs that are stored in typed arrays. Close enough, right?

  • The last missing JS feature is... MACROS. Not in ES6, though. It's on github, it works. It's a sound JS reader and hygienic macros.

  • "Worst thing I did out of Perl envy was to make regular expressions with slashes...means you have to parse (the whole thing) to lex!"

  • "Please help because if we can get this into ES7, we can put ourselves out of business, and I can do something else."

  • Other cute projects:
    • Cross-compile XNA (jsil)
    • Cross-compile Android (xmlvm)
    • Parallelism so that JS doesn't use your battery up really fast. Not adding the hazards of threads to the language, of course.
    • River Trail uses pure functional APIs and going well. Possibly in ES7. PJS has task parallelism similar to Rust.
  • Left with a general optimism on the continued future of Javascript.


I was going to have to miss the last keynote and two talks as well as all the camaraderie that would occur after. Delta had screwed up my flight and I didn't have a hotel room for the night. Thankfully, some Loopers came to my rescue though Delta still managed to extract $170 from me. Soulless corporations! It was totally worth it. I can't think of a more worthwhile or energizing way to spend 3 days. I'd like to thank everyone I was able to spend time drinking and chatting with, especially Scott Vokes, Paul Snively, Andreas Fuchs, Brian Rice, and Jose Valim. Obviously, I never want to go home.

Strange Loop Notes - Day 1

posted on 2012-09-24 10:46:00

ELC/Preconference talks

Though I didn't take notes on these, or today's keynotes, I have seen quite good coverage of ELC and Strange Loop talks here.

Pontificating Quantification

  • Generative testing with contracts used /in tests/ (to avoid runtime overhead) seems a good compromise.
  • Optional type systems that aren't part of the language are verification systems. Type systems must be part of the language by definition.
  • We are doomed by Gödel to inconsistency, never truly safe.
  • My thought: How can we apply Paraconsistent Logics?
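The "generative testing with contracts in tests" compromise can be sketched in a few lines: feed random inputs to a function and assert its contract, so production code pays no runtime checking cost. (The sort example and the 100-trial count are illustrative choices, not from the talk.)

```javascript
function sorted(xs) { return [...xs].sort((a, b) => a - b); }

// The contract: output is ordered and is the same length as the input.
function holds(xs) {
  const ys = sorted(xs);
  const ordered = ys.every((v, i) => i === 0 || ys[i - 1] <= v);
  return ordered && ys.length === xs.length;
}

// Generative part: hammer the contract with random inputs, in tests only.
for (let i = 0; i < 100; i++) {
  const xs = Array.from({ length: i % 10 }, () => Math.floor(Math.random() * 50));
  if (!holds(xs)) throw new Error("contract violated for " + xs);
}
```

Libraries like QuickCheck add shrinking and smarter generators, but this is the shape of the idea.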

Functional Design Patterns

  • Design Patterns: Are they a sign of weakness in your language?

  • Graph of known Monad tutorials. Skyrocketing. :P

  • Monads are a great abstraction to capture/describe patterns, NOT explain them.

  • A System of Patterns - book
    • Architecture Pattern > Design Pattern > Idiom (idioms are least general)
  • State/Event Pattern
    • Store all the inputs and initial state. Rederive any point in execution.
    • Can be troublesome to track tons of inputs/events.
  • Consequences Pattern
    • Input can trigger arbitrarily many events to fire/hooks.
    • Returns the events as a sequence (of thunks, presumably).
    • Don't let them recurse or you're turing complete!
  • Data Building Patterns
    • Accumulator Pattern - Reduce a sequence to a scalar value. Duh.
      • If order matters, have to do it sequentially. Otherwise, divide!
    • Digression:
      • MapReduce about wringing data locality out of slow disk clusters. Already changing with SSDs.
      • Reducers are nice when you have associativity. Easy to parallelize trees of operations.
    • Recursive Expansion Pattern - Macroexpansion, Datomic transactions. Are we done expanding? If not, keep going.
  • Flow Control Patterns
    • Pipeline pattern - Code may be longer but it's cleaner. Enforced composition through layering+encapsulation. No branching allowed!
    • Wrapper pattern - Foundation for Ring. One main path with potential branches at each step. Just decorators/:before+:after+:around methods?
    • Token pattern - May need to cancel an operation. Call Begin, it returns a destructor, basically. Don't need direct access to the resource to destroy it.
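The accumulator-pattern point about ordering can be made concrete: a left fold is inherently sequential, but when the operation is associative you can reduce over a tree whose halves could run in parallel - the "Otherwise, divide!" case.

```javascript
const xs = [1, 2, 3, 4, 5, 6, 7, 8];

// Sequential accumulation: each step depends on the previous one.
const seq = xs.reduce((acc, v) => acc + v, 0);

// Tree-shaped reduction: valid whenever `f` is associative.
function treeReduce(a, f) {
  if (a.length === 1) return a[0];
  const mid = a.length >> 1;
  return f(treeReduce(a.slice(0, mid), f), treeReduce(a.slice(mid), f));
}
const par = treeReduce(xs, (a, b) => a + b);  // the two halves are independent
```

Both give the same answer for `+` precisely because addition is associative; for a non-associative operation (subtraction, say) only the sequential form is meaningful.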

A Whole New World

Possibly the talk I'm most excited for today. DESTROY ALL SOFTWARE!

3 Confessions

  1. Wrote "An Editor"
    • Modal, Terminal Only, Neither VIM nor IDE
    • Layers! Annotations on source. Diffs. Tracebacks from prod logs. YES!
      • Crash - Have to parse logs+traceback, maybe use different checkout.
    • Interactions - On-demand Class Hierarchy/CFG. Code navigation methods.
      • No static analysis! Language-specific FIFO queues for program traces! Fork+render with graphviz. FUCK YEAHHHHH!
    • Answer questions like: What code does a web request hit? More importantly, what code might have reached this crash point in our traceback?
  2. Wrote "An Terminal"
    • DEC VT100 has determined terminal protocols for 30 YEARS. Powered by an 8080. @1978.
    • Add raster graphics, 24-bit color, momentary keypresses, font styles.
    • Use for more editor layers! Tag lines with profiling info. Bottom 95% grey, others yellow or red. Same thing for Type annotation. Record traces, remember?
    • Do you want it?
  3. Wrote "An Lies". HAS BEEN LYING.
    • All bullshit. C-c f t, flip all the tables.
    • Takes a long time to fake all that shit.
    • "Ship often. Ship lousy stuff, but ship. Ship constantly." -- bullshit
      • I KNOW that all software sucks.
    • Legacy & Paralysis? Legacy == Paralysis!

They will not merge our kernel patches. How do we move forward? Our "Shipping Culture" is poisonous to infrastructure. We just accrete low level infrastructure. Programmer Archaeologists are we. INCREMENTAL DEVELOPMENT WILL NOT WORK FOR THIS.

Type-Driven Functional Design

  • Basic overview of Haskell type syntax. Call to map. We all know this I hope.
  • Currying + Partial Application trivial in ML family. Duh. Syntactic support.
  • UML is garbage. Thinking of the /flow/ of types through your program gives insight.
  • Moved to miniKanren talk.


  • Growing benefit of compiling to JS. Lots of browser competition, obvio.
  • "Lisp programmers know the value of everything and the cost of nothing."
  • Hope to show that efficiency is important when getting richer semantics.
  • Expression-based and value based semantics, vs statements.
  • Go back to Robin Milner. Compiler figures out boolean inference.
  • Functions aren't primitive in Clojure, unlike Scheme+CL, like T.
  • Construct types that act like a fn, Collections are an instance of IFn.
  • Invocation always emitted as Expensive in JS engines though. Again helped by Compilation. Use Google Closure for DCE, aggressive inlining, etc.
    • Whole Program Compilation allows propagation of args and type info.
    • Bit of a dev/prod divide. Dynamic for devs, static compilation for perf to ship.
  • arguments.slice is a big performance hit. Return a closure with a dispatch function for multiple arity defns.
  • Performance competitive with hand-written JS. Based on Bagwell and Okasaki (since compare-and-swap isn't available, I presume).
  • Digression on how persistent data structures don't suck. V8 handles them really well.
    • Within 3x performance hit on Chrome 22. Opera and Safari a bit worse off. Firefox good at operations on data but slow creating it.
    • But it won't be the bottleneck in your application. DOM traversal is vastly slower, for example, and probably dominates.
  • Local type extension with protocols. \o/ Used internally for hashing.
  • What about when you drop to JS? REPL connected to browser. Compiles in namespaces for you! Source maps should help with debugging.
  • And it has macros, of course. Zebra problem demo. PAIP shoutout! 24 billion possible solutions. Runs in JS in 16ms, 1000x faster than Norvig's from 1993.
  • Caveats:
    • Same broken numeric tower.
    • Debugging much harder than CoffeeScript. Not necessarily readable.
    • Needs Clojure. Not self-hosting. Not something they really want to fix.
    • Multimethods and keywords still slow.
    • Sure, the "runtime" looks big in generated code at first. But Google Closure compiles it down to a nice, small gzipped thing in the end.
  • Clojurescript host compiler in clj is only 4000 lines of code! ClojureScript side is 7500.
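The multiple-arity trick mentioned above (avoiding the slow `arguments.slice`) can be sketched as one compiled function per arity plus a small dispatcher - roughly the shape ClojureScript emits, though the names here are illustrative.

```javascript
// Sketch: one specialized function per arity, selected by a cheap
// switch on argument count instead of slicing `arguments`.
function makeAdd() {
  const add1 = (a) => a;
  const add2 = (a, b) => a + b;
  return function (...args) {
    switch (args.length) {
      case 1: return add1(args[0]);
      case 2: return add2(args[0], args[1]);
      default: throw new Error("invalid arity: " + args.length);
    }
  };
}
const add = makeAdd();
```

When the compiler can see the call site's arity statically (whole-program compilation), it can skip the dispatcher entirely and call `add2` directly.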

Data Structures: The Code that isn't there

"A Data Structure is just a stupid programming language." - Bill Gosper

Perhaps instead...

"A data structure is just a tiny virtual machine." - Scott Vokes

  • Fundamentals: Lists, Arrays, Hash Tables, Trees
  • Ruby 1.9.2 briefly used list instead of a hash-table or set for require. In big-oh this is O(#fail).

"The cheapest, fastest, and most reliable components are those that aren't there." - Gordon Bell

  • Good choice of data structure /subtracts/ code.

  • Data structures set the path of least resistance for interacting with your data.

  • You probably won't know the ideal DS up front. Don't paint yourself in a corner.

  • Skiplists
    • Take an ordered, linked list. Add an express lane! Jump by 2. Add another! Jump by 4. More closely resembles a binary tree...
    • But how do we balance it?
    • Doesn't have to be perfect. Real trees aren't balanced! Just balanced enough.
    • Use random probability distribution. Actually winds up balanced enough.
    • Roundabouts, not traffic lights
      • Traffic lights are SPOFs, bottlenecks.
      • Roundabout decentralized, delegates to cars, smart at the node. Global order from local decisions.
    • Because only immediate neighbors are affected on insert, lock contention is low.
  • Difference Lists
    • Comes from Prolog. ?- uses(prolog, Person). no.
    • Digression on Unification.
    • Allows appending to immutable list.
    • Closer to future/promises for new list elements than lazy evaluation.
  • Rolling Hashes!
    • Find matching/overlapping sequences in binary data. Rsync does this.
    • Bioinformatics loves this stuff. Genome seqs.
    • Hashing everything vs everything with traditional hashes (md5, sha) too slow.
    • Drop a letter off the front, add another to the back, hash. Add to set/bloom filter or something.
    • Rsync:
      • Break file into fixed width blocks + remainder.
      • Send the hash for each block. If it's different on the server, update!
    • Insert/delete shifts each block though...
    • Use rolling hash for files that are already on remote. Otherwise, blocks.
    • Can also be used for chunking data.
    • A rolling hash: finds deterministic breaks, cheaply matches blocks
  • Something new: A Jumprope
    • Stores large binary strings (or files)
    • Content Addressable Storage (reference by hash for easy distribution)
    • Persistent and immutable so you can cache it anywhere
    • ... so kind of like a git repo but much better for big files
    • Three structural elements:
      1. Leaf - chunk of raw data
      2. Limb - Series of content hashes and their links, stored in an array
      3. Trunk - A limb with a big end node.
    • Then you just back it with a key-value store.
      • Obviously, choice implies things about performance.
    • Somewhat like a skiplist that uses a hash as its probability function.
    • Good for pipelining streaming content thanks to seeking properties.
    • 2kb for limb nodes and 64kb overhead for leaf nodes.
    • Trivial to fetch, stream, mirror.
    • Using it for a distributed FS, scatterbrain!!
      • Similar to Amazon Dynamo.
      • lack of pointers, use Compare and Swap!
      • Emergence from local behavior.
      • Can tune bad performance to arbitrary guarantee. 1%, .1%, etc.
      • Currently backed by C and Lua. Also a quick'n'dirty Erlang implementation.
  • All coming to a github near you soon.
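The rolling-hash step above ("drop a letter off the front, add another to the back, hash") is O(1) per position with a Rabin-Karp style polynomial hash. A hedged sketch, with illustrative constants:

```javascript
const BASE = 257, MOD = 1000000007;  // stays well under 2^53 in JS numbers

// Full polynomial hash of a string: h = c0*B^(n-1) + ... + c(n-1).
function hashOf(s) {
  let h = 0;
  for (const ch of s) h = (h * BASE + ch.charCodeAt(0)) % MOD;
  return h;
}

// BASE^(n) % MOD, precomputed once per window size.
function power(n) {
  let p = 1;
  for (let i = 0; i < n; i++) p = (p * BASE) % MOD;
  return p;
}

// Slide the window: drop the outgoing char, shift, add the incoming char.
function roll(h, outCh, inCh, pow) {
  h = (h - (outCh.charCodeAt(0) * pow) % MOD + MOD) % MOD;
  return (h * BASE + inCh.charCodeAt(0)) % MOD;
}
```

Rolling "abc" forward by dropping "a" and adding "d" lands on exactly `hashOf("bcd")`, so scanning a whole file for matching blocks costs one cheap update per byte instead of a full rehash per window.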

The Database as a Value & Making Javascript Fast

Elided due to battery life.

Ah, programming...

posted on 2012-08-28 20:14:15

I just saw this over on PLT Alain de Botton's twitter feed and couldn't resist collecting and reposting it here:

The Goldilocks Principle for Programming Languages:

  1. Everyone with less theoretical knowledge than me is an idiot noob whose code is gibberish.
  2. Everyone with more theoretical knowledge than me is a pointy-headed elitist whose code is gibberish.
  3. My level of theoretical knowledge is just right and my code is clear and deep.

Addendum: I was an idiot 5 minutes ago, and will be an elitist in 5 minutes. Where's my beautiful code gone?

What a charming, weird little enterprise hacking is.

PS: With any luck I've got a very cool announcement coming in the next few days. Also, blogging is a lot more fun using emacs+git. I may just start doing it more.

Unless otherwise credited all material Creative Commons License by Brit Butler