posted on 2022-06-21 09:30:00
Even though I've been finding less time to work on it, I've enjoyed the process of hacking on clones and things are going pretty well. Input handling and background rendering are finished and, with any luck, sprites will be working in another weekend. I do expect fine scrolling to need some tweaks before it's implemented correctly, so it may be another week or two until more advanced NROM titles like Super Mario Bros run. It should be a hop, skip, and a jump from there to MMC1 titles like Mega Man 2 though. 🙏
There are 3 questions that have come up pretty regularly though, either on stream chat or elsewhere, and I'd like to jot down some thoughts about them while they're fresh in my mind.
Because it brings me joy. I feel compelled to work on it as a vehicle to try to create something that I find aesthetically appealing and that indulges parts of my curiosity.
Because it's my favorite language to work in. I don't think the feature set of Common Lisp is critical. Macros, CLOS, and conditions and restarts are all great but the motivating factor for me remains the interactive development workflow with Emacs and SLIME/Sly. When I was young and not yet a programmer, I imagined that at some point working on software or digging into the internals would be more like having a conversation than working out a math problem. Our tooling remains far from those (naive) visions, but Common Lisp is closer to what I imagined and so it feels more comfortable to me. Mikel Evins has written some great posts about this.
This is the toughest part to answer. Initially, I'd just like to be able to have a functioning emulator and play childhood games that I loved, like Mega Man 2, tolerably well. Once that piece is completed though, I'm very interested in trying to add tools for debugging and reverse engineering games.
This isn't so much about the games themselves as it is about having better tools for investigating software without access to source code. An emulator is a great place to experiment with approaches to that. Now I admit I have not spent a lot of time doing reverse engineering or security work and am not familiar with the state of the art around static analysis or disassembly tools like IDA Pro.
I'm limited in my ability to express what I imagine. So since I can't tell you exactly what it should be, here's a sketch:
I would love it if I could build a graph of the control flow of the game as I play it. I would love it if I could later annotate the graph, name segments of assembly, and receive hints around what specific parts might be interacting with graphics data, or the APU, or handling player inputs.
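To make that concrete, here's a hypothetical sketch in Common Lisp. None of these names come from my emulator; cpu-pc and step-instruction are stand-ins for whatever the emulator actually exposes.

    ;; Hypothetical sketch: record control-flow edges while playing.
    ;; CPU-PC and STEP-INSTRUCTION are stand-ins, not real clones APIs.
    (defvar *control-flow* (make-hash-table)
      "Maps an instruction address to the addresses control moved to.")

    (defun traced-step (cpu)
      "Execute one instruction, remembering where control went."
      (let ((from (cpu-pc cpu)))
        (step-instruction cpu)
        (pushnew (cpu-pc cpu) (gethash from *control-flow* '()))))

Annotations, names, and hints could then hang off the keys of that table after the fact.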
The code is an artifact, the leftover cocoon of the program being written. The interesting pieces are in the constraints of the level design, the physics, the musical score, the artwork. I would like, as much as possible, to have tools for exploring the shape of a process as it lives, exploring the data it operates on, and understanding the constraints of the problem, rather than relying on code to understand one specific approach to solving that problem.
In the abstract, this isn't a solvable problem and I will never have a proof of correctness or confidence in completion. But it's worth striving to see how software in general could leave breadcrumbs behind it, given how much of our ideas and culture are being poured into it and fossilized in amber.
posted on 2022-01-23 18:00:00
It's been a busy start to 2022. I'm working as an Engineering Manager for the first time and enjoying it but it's been easy for other things to slip through the cracks. For example, I told myself I would write a post on Advent of Code several weeks ago. So I'm sitting down to write about it now before I forget any more details.
This is the second time I've attempted Advent of Code. The first time was in 2020 and I enjoyed it a lot but ran out of gas around day 10. I was pretty distracted with a Flamingo Squad project I can't recall and probably a bit burned out. Both years I've written my solutions in Common Lisp.
Advent is interesting. I get enjoyment from different things on different days. On some problems, I just enjoy seeing how much I can optimize my Common Lisp, or writing solutions in a few different styles if the problem is simple and comparing how they compile and allocate memory. On other problems, I'm much more satisfied by trying to see how "pretty" a solution I can write, either using constraint-solving tools like Screamer or a pipeline of threading macros and so on.
I enjoy the social aspect of AoC and having a leaderboard with some mutual friends and coworkers. It's nice to chat about something besides production code with other talented programmers. That said, I have to be pretty careful to avoid judging myself. I have to consciously remind myself that my goal isn't to "win the race" and not worry too much if I struggle to solve a problem elegantly.
There were two things I wanted to try and do differently this year from last year. The first was just to go as far as I could and not worry about racing. The second was to experiment with literate programming tools and try to do a better job documenting my work.
I think the results were a mixed bag. I got through day 11, so I petered out at around the same point. I mostly worried less about the race, but I still cared a lot about finishing each problem on the day it became available and definitely got discouraged once or twice when I didn't like my approach. On the other hand, I had a good time and learned a few things, so it was a good investment overall.
Here's the current state of the generated site. You can see I didn't wind up embedding the source for the different functions, so it can't properly be called literate, but the [function] label (or equivalent) next to all the exported symbols can be clicked to jump to the source on github.
I have been meaning to play with mgl-pax for a long time. Like ... probably several years? There are blog posts about it as far back as 2014 and it's been on my radar a long time but I just never seemed to make time for it. Advent seemed like a good opportunity to dive in.
I like the idea of an environment where prose and code are intermingled, so I have a natural attraction to literate programming. This shouldn't surprise you if you've been here before. It also seems important to me that such an environment for authoring programs should be rooted in the development tools and support the prose as a secondary feature (like MGL-PAX) rather than rooted in the prose and supporting the code as a secondary feature (like org-babel). I.e. tangling one or more files to produce my program seems like the wrong way to go to me.
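For flavor, a minimal PAX section looks something like this. The day-one function names here are hypothetical stand-ins, not my actual advent code:

    ;; A PAX section weaves prose together with references to live
    ;; definitions. The functions named below are hypothetical.
    (mgl-pax:defsection @day-1 (:title "Day 1: Sonar Sweep")
      "Count how often the depth measurement increases."
      (parse-report function)
      (count-increases function))

Generating documentation from a section like that links each reference back to the live definition, which is exactly the tooling-rooted property I want.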
In terms of Advent of Code problems, I'd ideally be able to do the following:
Editor's Note: The issues I bring up below were resolved before I could ever make a PR or ask the author about them. It seems making PAX more flexible about transcription was in the plans all along.
I think MGL-PAX excels on the first two points and struggles more on the third. It has a feature called transcripts that could plausibly support it, but they're an awkward fit. Transcripts allow for including examples that are evaluated when the documentation is generated, but there were two issues I had with it:
My only remaining concern is that the navigation and transcription functionality is tied to slime and swank. Hopefully I'll have an opportunity to try them with sly soon and can dig in or report it if there are issues.
In the advent project, the site-building and deployment was trivial. Hacking together a way to generate an overview of all solved problems with performance measurements involved some fiddling but I'm happy with the results.
For a long time, I've been writing small projects in Common Lisp, writing a handful of tests, and relying only sparingly on libraries. A little alexandria here, a little cl-ppcre there. There's a place for that but I'm ready to try and cobble together the utilities and extensions to the language that I'm comfortable with. For now, that's alexandria, serapeum, iterate, mgl-pax, and try. Clingon, trivia, screamer, and fset wait in the wings for the right problem.
There are plenty of talented lispers around. Two people whose code I've enjoyed reading during advent are death and Steve Losh (aka sjl). They feel like opposites to me, if only because sjl has a project dedicated to advent and an assortment of dependencies, macros, and utilities to make advent hacking pleasant. Death, by contrast, almost always relies on just the language standard and throws his code unceremoniously in a gist. His solutions are often faster than mine but don't sacrifice elegance.
It all boils down to this: I still like lisp, I miss hacking it, and I should read and write more code. My algorithms chops aren't as good as I'd like and I have to make an effort to not get discouraged by my limitations. All the more reason to keep doing advent, even out of season, and learn a few things.
posted on 2018-08-22 12:59:00
Lately, I've been thinking about what I want my next job to be and I've been strongly tempted to pursue grad school. I only fell further down the rabbit hole when I read this tweet by Tony Garnock-Jones:
Reading things teaches people how to write. Analogous, if we are to place programming at the same fundamental level, using a program should teach how it works. But we don't see this.
— Tony Garnock-Jones (@leastfixedpoint) August 6, 2018
This problem is very close to my heart. Yet I'm ill equipped at present to tackle it. That suggests I should pursue a PhD to get better tools but I have some serious concerns about that (above and beyond selling my house). Let me explain.
Thinking about CS academics, it seems that most research aims to support building larger systems by improving software, whether by making it more correct or by making it faster.
I feel really weird because neither of those things interests me. Sure, modern software has plenty of bugs and it could always be faster. But software is eating the world anyway.
Software is the fastest-growing store of "how to" knowledge on earth, and it is mostly inaccessible, not only to the general population but to programmers too.
What do I mean? Well...
There's no such thing as code being "readable", not only because being readable implies assumptions about the context of the reader but also because most programs cannot present a linear narrative! As a consequence code cannot be self-documenting. We should strive to write clear code but suggesting that code is its own documentation is untenable.
I started working on a Nintendo Emulator in Lisp to explore the idea of readable code and see if I could make the emulator source a good pedagogical example of what a computer does. I failed.
Now my primary motivation for working on the emulator is finding ways to generate an explanation of binaries without just recovering the disassembly. It's the intent and constraints that matter, the shape of the problem, not necessarily the solution the developers wound up with.
Indeed, if we could recover disassembly or even get the original source out of binaries (compilation being an inherently lossy process), that would only get us more code that we would need to comprehend. I.e. more how-to knowledge that isn't readily accessible. Legacy code and software preservation only make this need more urgent.
I should note that I don't think all software needs to be preserved. I just think it's a travesty that we're 60 years into the project of programming and our tools for asking computers about how programs behave are so poor and so specialized.
Computers could be much greater tools for knowledge dissemination than they presently are because they can execute models. We currently use them to disseminate static knowledge (web pages, PDFs, videos) instead of executable knowledge.
I continue to want to find a way to explain code without relying on static methods like documentation, the code itself, or even types and tests. I dream of better tools for communicating and reasoning about the work software systems do than source code.
Putting it in the simplest terms possible:
When I was 8, I desperately wished I could ask my family computer, "Wow! How did you do that?" I still can't today and I spend a lot of time thinking about what a solution should look like.
see also: Peter Seibel and Akkartik, though Luke Gorrie may be a counterpoint
posted on 2015-05-09 03:35:00
I've been programming for 8 years now. Only half that time professionally. It seems like a long time to me but is a drop in the bucket compared to many in this industry.
If I had to pick two big lessons from the past 8 years to tell someone getting into Software Development for the first time, I would probably say:
Software is never finished because it exists in a social context. Software is an accessory to daily life and life changes. As our needs evolve, and the context around software shifts, so must the software itself.
Not to mention the fact that the teams working on software and the organizations around it shift. We cannot forget Conway's Law!
I often cannot imagine how the Software industry will ever settle with regards to tools and practices, when I see our tumultuous past 50 years. But even if our techniques and tools settle, I don't imagine our codebases will.
You might imagine that this section is redundant. Certainly it sounds similar to "Software is Never Finished". But I mean something different. I mean that there isn't a single right way that all software should be written or made.
The internet would lead you to believe otherwise. Programmers are nothing if not evangelists, even zealots, about their tools and techniques, their pet styles and practices, their esoterica.
The most important thing you can do is ignore this. Ignore the hate, ignore the hype. Do not second-guess yourself.
Tools, practices, and domains of knowledge are just that. Even if Software wasn't a moving target (see: Never Finished), we still do not have a single methodology understood to always produce the ideal code. Indeed, there isn't some ideal code we're after. The code is not the most important part, it's just the part we're paid to obsess over.
So don't sweat the folks who insist everyone should follow their "better" way. At the end of the day, good documentation and happy customers are probably more important than most particulars of your codebase.
By all means, don't stop learning and write the best code you can. But chart your own path through our tangled maze of lore. And remind yourself that it's okay to be an average programmer. We've got to find time for families and lives, after all. Speaking of which ...
It's tough to try to plan for retirement. I'm still too young to think confidently on decade plus time scales.
It's tough to decide how to teach my students and gauge assignments. It's tough to decide what to learn next to become better at programming. It's tough to decide what I want from my career, how to nourish it and have it nourish me. It's tough to decide what's worth doing in general, when our lives are so busy and full.
But there's one decision I make that's really easy, even when it makes my life harder.
It's coming up on six years since Dad died. I cannot measure the amount I've grown since then. I know he'd be proud. But I'm proud and that's even more important. The biggest lesson I learned from John Glenn is probably this:
Loving other people is so obviously the right thing.
I cannot think of a time when I am as confident in my decisions as when I am loving and supporting others.
I'm not proud of my programming skills or code, though cl-6502 is kind of neat. I'm not proud of my job or relationship, though I am thrilled with those aspects of my life.
I'm proud that I handle hard situations with all the grace I am able. I'm proud that I treat others with respect and care because I didn't used to.
Most importantly, I'm proud that when I see someone struggling, I love them.
We do not get to have many easy decisions in our adult lives. But loving those around us, pleasant or unpleasant, in good times or bad is an enormous undertaking. I certainly fail at it, but I never regret it.
It's unfortunate that with our busy lives, in the sea of our alerts and notifications, it is so difficult to focus on the simple and important things. But I truly believe loving others is the most important thing that I do.
I have failed many times before and in many ways and modalities. I have character flaws. I have shortcomings. But if I have helped those around me through difficult periods in their lives and supported them when they were in need? Well, then it's all probably worth it.
posted on 2015-03-10 19:57:00
For the bulk of my professional career as a software developer, I've felt like a fraud. To some extent, I think various aspects of tech hiring practices and tool fads/fetishes in the software industry create or exacerbate this feeling in most of us.
I read Joel Spolsky's Java Schools article way back when, before I was really programming. I looked down on Web Dev for a long time. I played with lisp, played with emulation. ... But I've been a professional web dev. Why am I fighting it and being hard on myself for not being a systems programmer?
I flogged myself for some time, a little voice in my head saying that web development "isn't real programming". I would flog myself for not being good at web development when I hadn't embraced it, and for not knowing systems programming when I hadn't put any time into it.
Sure, there's plenty I still don't know. But "I don't know but I can figure it out" is the right instinct to have. Trying a bunch of stuff and not finishing is vastly better than paralysis. Exploring any ecosystem and building better apps is better than misguided elitism.
I'm a hacker, through and through. I want to learn, want to improve, want to synthesize new things from my understanding, grow, share, and change. I looked up to the hackers of lisp lore and AI Labs. But that hero worship has become negative and is distracting me from just building things.
Part of the reason that little voice is in my head (and I listened to it for so long) is because I thought I didn't have a chance in this industry.
I've been a successful developer for years but often unable to enjoy my jobs because I've been too uncomfortable to embrace them. Then I feared I'd be found out as a fraud, not a "real programmer".
I've been telling my students that since they understand the major components in web development and have some understanding of how they fit together, their real focus to grow should be practice. Constantly building bigger things, trying to make each piece more cleanly than in the past, gradually knowing how to solve harder and harder problems.
I stopped taking my own advice at some point. It's time to build new things again. Bigger things. Not the prettiest or the best, but real.
posted on 2015-01-01 18:15:00
I've been in a technical rut for a while. Sure, I learned new things at my job but limited myself outside of it by sticking to projects I wasn't motivated about and only working with tools I was familiar with.
As much as anything, 2015 is going to be about playing with new projects, experimenting with new tools, and focusing on fundamentals again. Seeing as I have 3 big goals here, I might just try to tackle one each semester. :)
To that end, my primary goal will be to learn Ocaml and build something cool with it. I'm leaving that just as open ended as it sounds.
I'm interested in Ocaml for a variety of reasons. Its origin dates from a time before Intel and Microsoft created a 20-year dominant platform. It was envisioned as a systems language, much like Lisp, to compete with C++.
That said, it employs pattern matching and a strong, static type system like Haskell. Unlike Haskell, it has a fairly simple runtime and compiler built to give very clear intuition about code performance. While Ocaml is strongly functional, it provides for use of imperative state without monad machinations (sorry @James).
There are other reasons but I think this is a good start. I'd be interested in everything from writing an IRC bot, to scripting tasks, to NES reverse engineering tools (i.e. lots of graph manipulation), to OpenMirage toys.
I've leaned towards backend work in my career where possible. After helping TA Tim's Frontend engineering course last semester I finally want to pick up some front end skills. I'm not angling to get much better from a design or HTML/CSS perspective, but have a definite interest in mithril and Clojurescript's om. Elm also seems really cool but I'd prefer to stick to slightly less alien tech for the time being.
I'm considering porting my Nintendo emulator to Clojurescript and getting it to use canvas but I worry about getting bogged down as I did with the Lisp version. It was fairly painless getting a rough cl-6502 port up in Clojurescript in a few days though.
To be honest, my algorithms chops have never been terribly good. I think one thing that's intimidated me working on trowel is a lack of confidence in my algorithmic reasoning. So I'll be working through the latest edition of Sedgewick's Algorithms with a particular focus on graphs. With any luck I'll actually make some progress on trowel. Maybe I'll even wire it to an om app with some logic programming or constraint propagation tricks.
posted on 2014-09-24 16:05:00
After going to Strangeloop for the first time in 2012, I was really fired up about programming. I'd only been out of school a year and was already a full-remote Clojure developer making a great salary. Even better, I had made good progress on my lisp emulation project and received some recognition from hackers I respected for it.
The last two years Strangeloop has been a lot more sobering. I told myself things in 2012 about how fast I'd get better and how good I'd be. Even though my expectations were unreasonable it's taken a long time to not feel bad about my (relative lack of) progress. I've been slow to accept the fact that I don't want to fight my way to being the best in my field.
I got in Tuesday night, dropped my bags at the hotel, and immediately ran off to drinks and phenomenal conversation at the Schlafly Tap Room. Many of the best experiences I've had at Strangeloop have been at the tap room. Sure there is excellent food, beer, and technical conversation, but I've always gotten a sense of personal acceptance at gatherings there. Strangeloop is certainly a welcoming crowd.
Wednesday was a blast as well, as any day with a trip to the STL City Museum should be, but more and more I found I cared about personal interactions more than the content of the talks I attended.
By the end of Thursday, I was stressed out. I wasn't even excited about the talks, which isn't to say they weren't good. I hated the idea that I was just an average programmer, that I didn't aspire to more than hacking bog-standard Rails apps as a career, that I'd spent almost $2000 out of my own pocket to come to a conference that someone else might have gotten more out of. I went to bed early that night.
I kept my mindset in check much better on Friday. I reminded myself to be excited about others' discoveries and creations, not to demand that I learn every tool or technique. The conference wrapped up well and I particularly enjoyed some of the Distributed Systems talks, a subfield I still have no experience with. Not to say I want to go fight distributed heisenbugs soon or design highly reliable systems. The talks were quite entertaining and informative is all.
The main takeaways I've had from Strangeloop have nothing to do with tech and everything to do with me. Strangeloop has, in many ways, been a time for me to reflect these last two years. I'm not sure that's a good way to use the conference but it seems to be what I've done.
The first takeaway that comes to mind is that I need to try and respect myself for just being a decent Rails developer. I'm not great at solving algorithmically tricky problems and my CS background is, frankly, pretty damn weak. But no matter what I shouldn't beat up on myself for being "just a programmer". I need to put in an honest day's work and actually pat myself on the back at the end of it.
The second takeaway is that I need to program for me again. I got into programming mostly because I wanted to know more about how computers work. The emulator and the talks surrounding it, some of my favorite work, are really more an investigation than an artifact. I still want to support coleslaw. It actually has a few satisfied users and I'm one of them. But the only real purpose of the emulator is to sate my curiosity. Not to become a real thing or be special or groundbreaking.
There is plenty I'm still digesting and plenty I'm still not sure of, both technically and in terms of what I want for myself, my career, my life. But I'll leave that for another day. Cheers.
posted on 2014-07-16 12:33:00
(In which I talk about my feeeeeeels)
Disclaimer: This post comes in the middle of an existential crisis. I'm struggling a lot with programming as a career choice and feeling disconnected from a community of excited hackers. These feelings and opinions are my own and I think it's totally fine if you don't subscribe to them or want to write me off as an F-ing idiot.
A lot of the ideas in this post have been buzzing around in my head since I saw Jen Myers deliver her keynote at Strangeloop last year.
I've been keeping my thoughts in my head mostly because I'm already an established programmer. A lot of the motive for the talk was to be more welcoming to newcomers and minorities that struggled to be included in our communities. But I think this problem affects all of us, every last one, regardless of gender, race, or class.
The short version is that I think the tone of programming communities, especially online ones, is horrific. It's filled with religious debate over things less important than getting people excited about and interested in computing. For me, whether it's smart people posturing for social status or individuals genuinely trying to enlighten others is irrelevant.
Our first reaction to any comrade, any other person passionate about and interested in building things with computers, any human crazy and masochistic enough to try and expand the capabilities of these absurd machines, should be empathy and love.
This may seem ridiculous at first glance. It's harder than it sounds.
You already know the religious wars I'm talking about. They're silly little things. Are static or dynamic types better? (For some, is there even such a thing as being dynamically typed?) Is Vim or Emacs better? Should I learn programming with PHP or Haskell? Should my app use JSON, XML, or a self-describing binary format? Is programming math, art, or craft? Can code be literature?
For a host of reasons, these are questions we have a vested interest in. And I think, more often than not, our motive is to encourage more learning and exploration. But the conversation is almost always full of condescension and judgment, especially if the medium for response is limited. We simply cannot let supporting curiosity become secondary to proselytizing "the right thing".
Plain and simple, turning a prompt for exploration into a right-or-wrong religious debate is curiosity destroying. And that's precisely the opposite of our intent, the opposite of what we as a community should aspire to.
Our opinions are important, and I'm not precluding the existence of a right answer. But someone pondering a tricky subject isn't best met by bludgeoning them over the head with a conclusion. As long as the principal motive of those we interact with is the fractal question "Why?", we are together.
This connects to a lot of things. It connects to people wondering if they're good at programming, or how to know such a thing. It contributes to impostor syndrome. I've struggled to hack on hobby code for fun because I don't feel like I can be proud of it. Not smart enough, not groundbreaking enough, not important enough. And I know that's silly, because there are more important things to worry about.
So the more we can get away from emphasizing that the most important thing in programming is being right, the better that will be for newcomers, for hobbyists, and I believe, for all of us.
I'm reminded of an Alan Perlis quote in SICP:
"I think that it's extraordinarily important that we keep the fun in computing. ... We began to feel as if we really were responsible for the successful, error-free, perfect use of these machines. I don't think we are.
I think we're responsible for stretching them, setting them off in new directions, and keeping fun in the house."
I'm not perfect at this either. It is difficult to never be dismissive, let alone to always be gentle. But sometimes people are just trying to make it through the day. Not use the best tool, not come up with a groundbreaking solution, not fix the world. We need to try to meet other programmers where they are. Not move them to our habitat before empathizing, before loving.
Ironically, I know this has been a bit of a high-horse diatribe. At least let me give you a gift for coming so far and listening to me ramble so much. Here, have something I love, bits of Milosz:
To whom do we tell what happened on the earth,
for whom do we place everywhere huge mirrors
in the hope that they will be filled up and will stay so?
I think that I am here, on this earth,
To present a report on it, but to whom I don’t know.
As if I were sent so that whatever takes place
Has meaning because it changes into memory.
To find my home in one sentence, concise, as if hammered in metal.
Not to enchant anybody. Not to earn a lasting name in posterity.
An unnamed need for order, for rhythm, for form,
which three words are opposed to chaos and nothingness.
What did I really want to tell them? That I labored to transcend my place and time,
searching for the Real. And we could have been united only by what we have in common:
the same nakedness in a garden beyond time, but the moments are short
when it seems to me that, at odds with time, we hold each other's hands.
And I drink wine and I shake my head and say: "What man feels and thinks will never be expressed."
posted on 2013-09-12 09:55:00
I can't believe Strangeloop is only a week away!
Thursday:
Friday:
posted on 2013-07-05 11:44:00
This will be the last post about emulation that doesn't involve graphics or disassembly of old NES games, I promise. cl-6502 0.9.5 is out and, in my testing with SBCL, pretty snappy. The book has received updates and is also available on lulu. Below is the 'Lessons Learned - Common Lisp' chapter:
Structures are much more static than classes. They also enforce their slot types. When you have a solid idea of the layout of your data and really need speed, they're ideal.
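A minimal sketch of what I mean, loosely modeled on an emulated CPU rather than cl-6502's actual definitions:

    ;; With slot types declared, SBCL can compile accessor calls
    ;; down to raw memory reads instead of generic checks.
    (defstruct cpu
      (pc  #x8000 :type (unsigned-byte 16))  ; program counter
      (sp  #xfd   :type (unsigned-byte 8))   ; stack pointer
      (ram (make-array #x800 :element-type '(unsigned-byte 8)
                       :initial-element 0)
           :type (simple-array (unsigned-byte 8) (*))))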
CLOS, for single-dispatch at least, is really quite fast. When I redesigned the emulator to avoid a method call for every memory read/write, my benchmark only ran ~10% faster. I eventually chose to stick with the new scheme for several reasons, performance was only a minor factor.
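The tradeoff in miniature, reusing the cpu struct from the sketch above; the generic function stays open for extension per device, while the plain function is the fastest possible path:

    ;; Extensible: new devices just add a method.
    (defgeneric get-byte (device address)
      (:documentation "Read a byte from DEVICE at ADDRESS."))

    (defmethod get-byte ((device cpu) address)
      (aref (cpu-ram device) address))

    ;; Direct: no dispatch, but closed to extension.
    (defun get-byte* (cpu address)
      (aref (cpu-ram cpu) address))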
My second big speedup came, indirectly, from changing the arguments to the opcode lambdas. By having the opcode only take a single argument, the CPU, I avoided the need to destructure the opcode metadata in step-cpu. You don't want to destructure a list in your inner loop, no matter how readable it is!
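A hypothetical before and after, not cl-6502's actual code:

    ;; Before: metadata destructured on every single dispatch.
    (defun step-cpu-destructuring (cpu opcode metadata)
      (destructuring-bind (bytes cycles mode) metadata
        (funcall opcode cpu bytes cycles mode)))

    ;; After: each opcode lambda closes over its metadata when it
    ;; is defined, so the inner loop is one bare funcall.
    (defun step-cpu-direct (cpu opcode)
      (funcall opcode cpu))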
Eval-when is subtle but occasionally necessary. The times I found myself using it always involved computing data at compile time that would be stored or accessed in a later phase. E.g. I used it to ensure that the status-bit enum was created for use by set-flags-if and that the *mode-bodies* variable was bound in time for defaddress. Regardless, try to go without it if possible.
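For reference, the shape of that pattern, sketched with simplified stand-ins rather than cl-6502's real definitions:

    ;; Bind the data at compile time so a macro expanding later in
    ;; the same file can consult it. Simplified, illustrative only.
    (eval-when (:compile-toplevel :load-toplevel :execute)
      (defvar *mode-bodies* '()
        "Addressing-mode info consulted during macroexpansion."))

    (defmacro defaddress (name &body body)
      ;; This PUSH runs when a DEFADDRESS form is expanded, so
      ;; *MODE-BODIES* must already be bound at compile time.
      (push (cons name body) *mode-bodies*)
      `',name)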
DECLAIM is for global declarations and DECLARE is for local ones. Once you've eked out as many algorithmic gains as possible and figured out your hotspots with the profiler, recompile your code with (declaim (optimize speed)) to see what is keeping the compiler from generating fast code. Letting the compiler know the FTYPE of your most called functions and inlining a few things can make a big difference.
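A small sketch of the split, assuming SBCL:

    ;; DECLAIM at toplevel for global declarations, DECLARE inside
    ;; the function it describes.
    (declaim (optimize speed)
             (ftype (function (fixnum fixnum) fixnum) wrap-value)
             (inline wrap-value))

    (defun wrap-value (value mask)
      (declare (type fixnum value mask)) ; local declaration
      (logand value mask))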
posted on 2013-06-21 14:58:00
I haven't been doing any hacking on coleslaw or famiclom for the last month. I've been focused almost entirely on my 6502 CPU emulator. In particular, I've been optimizing it and turning it into a "readable program".
The optimizations have gone swimmingly, taking cl-6502 from 3.8 emulated MHz circa May 1st (commit eecfe7) to 29.3 emulated MHz today (commit b729e8).(0) A factor of 8 speedup feels pretty good though it would be fun to coax more speed out later.
(0): All figures obtained with SBCL 1.1.4 on Debian 64-bit on an old Thinkpad X200. See this 6502 forum post.
I feel that the readability of the program has remained, maybe even improved, through all that optimization. The same overall design is in place; most refactorings were, approximately, tweaking macros and their callsites. The readability is especially improved when the code is broken into chapters, each with an introduction for context, and typeset with LaTeX. The latter is done thanks to a simple Makefile and some very nifty code swiped from Luke Gorrie's snabbswitch. If you've been curious about how cl-6502 is implemented or just wanted to dive in, there's never been a better time. Grab the book!
I'm still planning to make famiclom a full NES emulator. I won't consider it done until I can play Mega Man 2 with it. Hopefully using a USB controller. It doesn't make much sense for Lisp in Summer Projects though. I've already started the project, the scope is ill-defined, and I want to work on something fresh and new to me. So I've come up with a project that I can't possibly complete instead. It'll be great!
In short, I intend to rewrite Super Mario Bros 1. ... in a not-yet-existing lisp-like macro-assembler/incredibly simple compiler. I already have a 6502 assembler/disassembler in cl-6502 and a tool to parse NES roms in romreader. There's also a very thorough annotated disassembly of Super Mario Bros floating around. I've got a good start on a static analyzer that will take the SMB binary and an entry point and try to build a CFG of the game. The current scheme won't work with memory mapped titles but it's good enough for Mario.
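The analyzer's core is just a worklist walk. In the sketch below, successors is a stand-in for decoding one instruction and returning every address control can flow to (fall-through, branch target, jump target):

    ;; Hypothetical sketch of the static analyzer's worklist loop.
    (defun build-cfg (binary entry)
      (let ((graph (make-hash-table))
            (work (list entry)))
        (loop while work
              do (let ((pc (pop work)))
                   (unless (gethash pc graph)
                     (let ((targets (successors binary pc)))
                       (setf (gethash pc graph) targets)
                       (dolist (target targets)
                         (push target work))))))
        graph))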
Once I have a graph representation of Mario Bros, I'll try both manual analysis of the annotated disassembly with a pen and pad, and automated analysis on the graph using Lisp. I'll try to find as many idioms and patterns as possible to condense and improve readability of the code. Then, somewhere in early August, I'll start trying to rewrite the source and see how far I can get.
This approach is probably completely: insane, unworkable, unviable, inadvisable, and just all around wrong. But I think I'll have fun and learn something, so it's good enough for me. And hell, who knows, maybe I'll get lucky and be able to attend ECLM next year. :)
posted on 2013-04-10 17:07:00
"I pretended to work like others from morning to evening,
but I was absent, dedicated to invisible countries."
- Czeslaw Milosz, Nonadaptation
When I started programming, I quickly became fascinated by a question that many of us ponder at some point. Why isn't there a 'best' programming language that makes it simple to express our thoughts and intents to both each other and the machine? That transformed over time into a different question, "Why is programming so hard", but that still wasn't quite right. I think I've finally settled on the real question which is, "Why is modern software so incomprehensible?"
"As far as his own weak head is concerned, the thought of what huge heads
everyone must have in order to have such huge thoughts is already enough."
- Soren Kierkegaard, Fear and Trembling
I recently informally looked at the size of my software. The answer was predictably both bewildering and unsettling. The day-to-day software I use is comprised of about 35 million lines of source code.
All of that software is free and works well. By that measure alone, we might judge software engineering a success. It is popular in some circles to rail against software engineering methodologies and, often, modern software as well. We must acknowledge, however, that the demands placed on modern software far exceed the demands placed on the software of yesteryear.
This is also reflected in the way we teach Computer Science now. For example, MIT's new undergraduate curriculum shifts focus towards building systems with unreliable, incompletely understood components. The fact that we can build software this way points to a culture of library reuse as our chief abstraction rather than the humble function.
"When you have a problem with X, have you pulled up the X server internals?
Have you dug into the problems with your drivers?
No, because the kernel is 10 million lines of code and the only way in is grep!
Fuck that, that's not helping me."
- Brit Butler, Wanting Types, Demanding Mirrors (video coming soon)
But I still yearn for the days when you could conceivably understand your computer soup-to-nuts. In October, I'll have been doing "real programming" for 4 years and I still accept a large portion of the day-to-day workings of my machine as magic. How many of us really have a complete picture of what is going on under the covers? We don't wrap our heads around the workings of a 1-billion transistor processor, much less the massive body of code atop it.
I'd wager 100 kloc is the upper bound on a well-organized system I think I could mostly keep in my head. That's 2000 pages or 5 400-page volumes at 50 lines/page. While prose is very different than code this would seem to be roughly the same order of magnitude as prominent fiction series like LOTR, Harry Potter, Game of Thrones, etc.
There is a torrent of OpenGenera floating around that contains about 870 thousand lines of Lisp. It should be about 20 years old. That means my current desktop is a 40-fold increase in size over Genera. Windows 3.11 is probably bigger than Genera and the size of a Desktop OS probably hasn't doubled every 4 years for the last two decades...but it is an interesting figure.
MenuetOS is a more provocative example. Menuet is a desktop OS written exclusively in x86 assembly. There is an open source 32-bit version and a closed source 64-bit one. I haven't read it so I can't say whether or not it is comprehensible. It is only 36 kloc for the kernel and 58 kloc for the apps though. Even with limited hardware support, 94k for a graphical OS is a noteworthy point in the design space.
"So. I ask: how many people does it take, at a minimum, to maintain
our current level of technological civilization?"
- Charlie Stross, Insufficient Data
I am explicitly not asking for a world where desktop OSes are limited, comprehensible systems over full-featured ones that can't fit in one human's head. I'm not worried about bus factors. But an Operating Systems course and a Hardware/Software Interface course are not enough!
I want a completely open, comprehensible system that does something cool. It doesn't have to be self-hosting. It does have to be something I can study and change, observe and modify at every level of the system. It should be something that actually shipped to consumers at one point. The absolute lack of such a system for people trying to learn frustrates and disappoints me.
"We should burn all libraries and allow to remain only
that which everyone knows by heart."
- Hugo Ball
All this is exactly why I started working on a Nintendo emulator in lisp. It's still early days but it's my forever project until it's done. 6502 emulation is done, some basic Nintendo functionality is working. Once the NES is 90% complete, I plan to try writing a simple lisp-like language that targets the NES. I'll then use that to produce an annotated, high-level reconstruction of my favorite childhood game, Mega Man 2. It is a bit ridiculous and probably overreaching but it's worth a shot. And in some way, it's the computing toy I've always wanted.
posted on 2013-03-04 16:28:00
I've been experiencing the Dunning-Kruger effect a lot lately. At least, I've been feeling like a fraud. And while I could list reasons I'm not a great programmer, asking why I felt like a fraud has led me to something more interesting. I don't think I've worked at a company where I knew "where the money is coming from". What does that mean exactly?
All of this creates a surprising problem for me: The value I add is opaque from my perspective. I take it on the word of my superiors and peers that any value is present. This makes it essential that I trust and enjoy working with those people.
It might not be immediately apparent why this is a problem. Find a company with decent people and culture and you don't need to be directly connected to the product or customers. Just churn out code and have fun. Advertisers will foot the bill. While it's true that you can sustain a business this way it certainly isn't ideal. The issue comes from just how decoupled the product becomes from the revenue. When it's time to grow revenue, you have to do it by attracting more eyeballs. Here's how that works:
But Product Managers are not users. Programmers are not users. Advertisers are not users. Sure, we use the product some to verify the code works during testing but we're not invested in it. A/B testing is not the same as user input. There is also no requirement that you correlate traffic with actual perceived value. Many companies just read traffic AS perceived value. Frankly, that's bullshit. Our loyalty is necessarily to the advertisers. They pay us...but the users are the real customers. The product just happens to be paid for by collecting data about how they use it.
Thus far, none of this should surprise anyone who has worked in the tech industry. Hell, this shouldn't surprise anyone with a Facebook account. It points however to a serious cultural problem in many tech companies: not letting (or demanding) your technical experts be, well, technical experts. A friend of mine calls this "{} for $". Many people rant about this as "Taylorism in software". It cannot be overstated that no programming paradigm or software engineering methodology will eliminate the need to connect engineers to the product. Similarly, letting the engineers take the reins is not anathema to good product design or improved value to the business. And this is not new. Quoth Don Eastwood in a 1972 Status Report on MIT's Incompatible Timesharing System:
"In general, the ITS system can be said to have been designer implemented and user designed. The problem of unrealistic software design is greatly diminished when the designer is the implementor. The implementor's ease in programming and pride in the result is increased when he, in an essential sense, is the designer. Features are less likely to turn out to be of low utility if users are their designers and they are less likely to be difficult to use if their designers are their users."
Earlier in the report, Eastwood says, "The system has been incrementally developed almost continuously since its inception."
Hello, agile kids. I'll say it again. Any company pretending that software engineering methodology or a given technology replaces the need to connect engineers with what they're building deserves to be skewered. The reason knowing "where the money is coming from" is so essential is that software is different than any other product in history. Because the design in a fundamental sense is the product. If you don't know what I'm talking about, I'd encourage you to watch Glenn Vanderburg's talk from RailsConf 2011. If your engineers don't understand the reason for what they're building, then the product can at best accidentally support the business. If you think you or your company can just scrape by for your whole career without getting eaten, I'd encourage you to reevaluate that assumption.
One big reason most companies are hierarchies more concerned with maintaining market position than creating value is the inherent risk. Real growth comes from bets and empowering people to change what's needed to find "a better way" or "The Right Thing"...and that's terrifying. Few individuals or institutions have the guts, bravery, and stamina for continuous reinvention. It's exhausting. Not to mention that it puts the focus squarely on a company's employees. It seems our best chance at winning big comes from those kinds of risks though. Github and Valve's experiments in distributed management are a brilliant step in this direction.
A lot of what has had me feeling like a fraud is remembering how much I have to learn. Learning is a kind of reinvention itself though and one of the reasons I've loved computers since the beginning. So I'll keep learning, and next time I'm in a programming interview, I look forward to asking the other hackers, "Do you know where the money is coming from?"
posted on 2012-09-26 20:13:00
I have a bad feeling that I'm about to piss off a lot of people. Oh, well.
Bret Victor gave a very interesting talk at Strange Loop called "Visible Programming". From what I can tell, Bret is a very smart guy and an accomplished UI designer. I was surprised to find that while I agreed with many (all?) of his premises I disagreed with most of his argumentation. As I've received a question or two about it, I'll try to clarify my thoughts here.
I have three core points, which I'll unpack below.
Ironically, I gave a talk on a different but overlapping topic a few weeks back. As mentioned in Chris Granger's talk, examples like the ones he and Bret Victor gave are much easier when working with a dynamic runtime. Unfortunately, dynamic is conventionally interpreted as a language with dynamic types and, put simply, we should try to change that. I argue that a dynamic language is one that allows you to inspect values and update function and class definitions while the program runs. This is formally known as reflection. What I believe dynamic language advocates care about is the ability to work with a "live" system, not the presence or lack of static type checking.
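In Common Lisp terms, that liveness looks like this:

    ;; Inspect and redefine a running program without restarting it.
    (defun scale (x) (* x 10))

    (trace scale)               ; instrument the live definition
    (scale 4)                   ; prints the call and return => 40

    (defun scale (x) (* x 20))  ; redefine; callers see it immediately
    (untrace scale)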
Being able to visualize program execution means having a snapshot of the program state during each step of execution. Setting aside the impractical size of such a thing for programs with arbitrarily large working data sets, this requires either A) annotating every line or function call with logging statements/function tracing, or B) instrumenting the language runtime in some fashion to retrieve values during execution. That's why these examples are much easier with a reflective language like Javascript where you can hook arbitrary behavior into the Object prototype.
Trying to get an editor to do this for all languages is a fool's errand without a nice reflective runtime and API to retrieve data from. It's hard enough with that stuff! And that means we need reflective compilers and interpreters before we can have an ideal editor.
Many of Bret's examples were highly domain specific. While it makes sense to have a simple interface for toying with values in a visualization/drawing program, it's harder to see how to usefully apply that to something like protein folding software or even RSS feed parsers. Having editor extensibility in addition to reflective systems enables arbitrary widgets or modes for dealing with a given problem domain or data visualization need. Plain and simple, there's no way to build-in useful visualizations that are universally applicable. I admit I am not a designer or gifted at visualization so I may have simply struggled here.
Bret essentially wants to make the experience of programming more tractable through interactivity and tangibility. He suggested 5 key requirements:
While philosophically well-formed, Bret seems to miss the fact that runtime support is required for a Visible Editing Experience to emerge. If the industry still doesn't understand that dynamism is about the runtime rather than types, clamoring for a magic editor will get us nowhere. I want to see a more interactive, tangible environment in the future as well but we cannot get there by arguing that IDE/editor writers need to step up their game. We need to make a concerted argument for the resurrection of highly reflective systems in both research and industry. Once systems with robust reflective capabilities are widespread, realizing a vision such as that described will be a week long hack rather than a decade-long million man-hour slog.
I'd also like to examine how far reflection can scale and a bit about making such a thing applicable to both novices and experts but I need more time to compose my thoughts so I'll save that for a future post. Any comments on this post or its mistakes, of course, are welcome.
posted on 2012-09-25 10:17:00
Lots of detail on anatomy and physiology. Use that to derive theories and test with software.
Neocortex is a predictive modeling system.
Grok, a predictive modeling product.
Future of AI.
Neocortex builds online models from streaming data. Think about it in terms of the senses.
Beware, vision is more than one sense. Retina is array of a million sensors. Auditory nerve is 30,000.
Brain has several million sensors changing firing in the 10s of milliseconds.
The brain invokes model building in response to novel sensory input. It can: make predictions, detect violations of predictions, generate actions.
The brain is more of a memory system than a computing system.
Neocortex is a hierarchy of projected systems. retina->cochlea->somatic nerve. But all one memory algorithm.
Primary memory is sequence memory. Playback of stored patterns. Stream processing!
Sparse distributed representations of data.
Traditional computing uses dense representation. ASCII is a perfect example. Individual bits have no meaning, programmer assigns meaning.
Brain uses sparse representation. Mostly 0 bits, maybe 2% 1s. Sparse IntMap? Bits that represent specific things. Top bit that means X.
SDR Properties:
Similarity: shared bits = semantic similarity. A similar bit vector has similar semantics.
Store and Compare: Store indices of active bits, don't traverse the whole thing. Subsampling is OK though!
Probability shows errors are very unusual even with subsampling. If you do make a mistake, it's a close/semantically similar one.
Union membership: Sets! Is this SDR (10..001) a member of this union of SDRs (00..001)? Very high correctness.
The key to machine intelligence is sparse distributed representation.
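A toy version of those properties with CL bit-vectors, just to make the notes concrete:

    (defun overlap (a b)
      "Shared 1-bits, the talk's notion of semantic similarity."
      (count 1 (bit-and a b)))

    (defun union-member-p (union sdr &key (threshold 9/10))
      "Is SDR plausibly one of the SDRs OR'd together into UNION?"
      (>= (overlap union sdr)
          (* threshold (count 1 sdr))))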
Sequence memory:
If pattern does not repeat, forget it. If it does, reinforce it.
Synapses are either connected or not. Represented via scalar, if it's above a threshold, it is connected.
Typical system: 1 region, 2,000 columns, 30 cells per column, 128 dendrite segments per cell, 40 connections per segment. 300M connections. No SPOF.
Predictive Analytics Today:
Future of Machine Intelligence:
35 years later...
Worked at Microsoft, got hired as a PM. PM on Visual Studio. Owned C#+VB in the IDE. Was asked "What is the future of editing?"
Well...how do users really work with it? NO end-to-end usability studies on Visual Studio. Only on specific features.
Richest feedback you'll ever get on a product is a usability study.
Expectation: Visual Studio is amazing tech! Actual: Too complicated, too noisy. Shock: No one actually vocalized problem.
If it's true that the key attribute of a good programmer is keeping the system in your head, it's very sobering. How to teach memory?
Everyone just uses: Code navigation, Debugger, Editor. Basic workflow has not changed since Emacs.
Many innovations but never went mainstream: Smalltalk, Lisp Machines, etc. Time to re-imagine.
Tried many things, nothing like Light Table. What changed?
We're always dealing with abstraction. We are not recipe writers. We consume and subsequently create abstractions. We are synthesizing machines!
Need to ask questions about them. Poke at them. You don't understand code until you run it. Writing it is not enough.
Realization: Interactivity/REPLs were a big enabler at getting people into computing. Closer connection to what you're doing. YAFIYGI->WYSIWYG is similar.
What is an IDE for Abstractioners? Light Table. Initially a 6 day hack. Half a million people saw it in 2 days.
Raised 300k+ on kickstarter. Acknowledgement that we are in the dark ages of software dev. We're disconnected from our software!
Brief live demo...
Run git status and update the git-state with it: (on files.save [] (update)). We're live! defui is a Light Table construct to make a DOM element and bind events to it. HA! We've been in a custom presentation mode.
Missed due to chatting with Chris Granger and other awesome folks.
Okay, I couldn't keep up and had you been there you wouldn't blame me.
But this was cool and analogous to the Byrd/Friedman running-programs-backwards-and-forwards stuff from yesterday.
Also, Buyer Beware: Oleg's papers may be more clear than his presentation style. :P
We're in an age of shamanism when it comes to operating computers.
Talking about pressure as it relates to large scale distributed systems. Cliff's company, Boundary, does high-volume streaming.
One day they noticed IOPS starting to disappear. Kafka message queue, writes to disk. If IOPS start dropping...
Pressure rising on interior nodes, data kept coming in, not going out. Death spiral! Chained failures.
Like riding out a storm, just trying to keep systems up until traffic dies down.
There is no explicit flow control. But we need it! (We already have it.)
Only documentation of queue explosion problem + flow control in Erlang. ... Ulf Wiger mailing list post.
Create an end-to-end linkage of back pressure so edge nodes can know when there's a problem and react.
{active, true} is an unfettered firehose. {active, once} provides explicit control over when delivery occurs. Overload then backs up to the TCP receive queue and slows down the sender! MemoryAwareThreadPoolExecutor. Is this familiar? "SEDA: An Architecture for Highly Concurrent Server Applications"
What does this buy you? Back pressure gives you time to recover. Like System Dynamics, about getting a grip on non-linear behavior.
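The idea translated to a Common Lisp sketch, assuming the bordeaux-threads library: a bounded buffer where producers must wait for room instead of growing a queue without limit.

    (defstruct (buffer (:constructor make-buffer (capacity)))
      capacity
      (items '())
      (lock (bt:make-lock))
      (room (bt:make-condition-variable)))

    (defun produce (buffer item)
      "Block until the buffer has room; this is the back pressure."
      (bt:with-lock-held ((buffer-lock buffer))
        (loop while (>= (length (buffer-items buffer))
                        (buffer-capacity buffer))
              do (bt:condition-wait (buffer-room buffer)
                                    (buffer-lock buffer)))
        (push item (buffer-items buffer))))

    (defun consume (buffer)
      "Take an item (NIL when empty) and wake one waiting producer."
      (bt:with-lock-held ((buffer-lock buffer))
        (prog1 (pop (buffer-items buffer))
          (bt:condition-notify (buffer-room buffer)))))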
An aside: SCTP is a little better for this.
Aspects of Expressiveness:
Expressiveness over performance, every time! (in Ioke)
Types of Abstraction:
Macros!
Different kinds:
Modularity!
Static typing!
Generics!
Working with generic code over an unrestricted set of containers.
Most abstractions are leaky. See ORMs. Occasionally can't get underneath it to use raw SQL.
"Abstractions may be formed by reducing the information content of a concept, to retain information only relevant for a specific purpose." - What purpose?
Effective communication requires common experience/understanding. Referents in linguistics.
Lin-guis-tics:
Communicating indirectly with many audiences though. The machine, team members, ops, etc. Even business stakeholders! Totally unique to programming.
Syn-tax:
From a PLT perspective, doesn't matter. In day to day pragmatics, it does matter. Everyone knows this.
Can go too far in either direction, Salt or Saccharine.
Operator overloading. Probably not one right answer.
Don't support it.
Support overloading built-ins.
Don't have operators. Everything is a function.
Arbitrary operators to be defined or redefined. (haskell/ml)
Should built-ins look different than user-defined stuff? (Editor's note: Hell no, let the editor show you / syntax highlighting.)
Expressiveness is not always better. Many dimensions to optimize along. Language design is a balancing act. Find the right clustering of features for a given point in the design space.
Expressiveness and abstractions are relative.
5 Principles to aim for:
Examples of these principles in a hypothetical environment: (JS & Processing for this talk)
How does this scale to real-world programming? I have answers but I think it's better to question the question.
Asking how to scale to everyday programming is like asking how engines benefit horses. We have to work like this.
Argues against static runtimes. I agree but this was a weak treatment. There is tooling to work around this.
"There is no future in destroy-the-world programming. It's got to go." - Agreed.
Ten days in May 1995: Shipped Mocha.
September 1995: Livescript
December 1995: Javascript.
"Hey maybe we should fix equality?" "Nope, too late."
Adding very minimal classes.
Adding MODULES!!!! Fuck. Thank God. FINALLY. Not first-class but better than nothing.
Symbols (formerly names or private names). They're self-named objects that you can't forge from strings. Hygiene!
If you're using symbols, there ought to be a nice way to dereference/get at them. How about @symbol?
Utilities for symbols, many interesting applications.
Default parameters. arguments is a hideous hack. We will kill the arguments object but Brendan will pay to see that movie. :)
&rest parameters. Guess what, MO LISPY! Prototyped in Spidermonkey right now.
... is spread, inverse of rest. I.e. the **kwargs/kwargs split. Splicing?
for-of, iterators and generators. Have to use of instead of in. Can finally iterate over values instead of keys.
Callback hell. What can we do? More problems than aesthetics, could capture too many outer variables.
Promises/futures still not here...but we will have coroutines from generators and use combinators to make things suck much less!
Array comprehensions. Lazy generator comprehensions too! When did they get feature-itis?
Sets too! Proxies for metaprogramming API. Think they got feature-itis when they thought about this audience. ;)
Can emulate NoSuchMethod/DoesNotUnderstand with proxies. Couldn't agree on standardization. Could add a default one to Object.Prototype.
Apparently all that was library and application improvements. What about code generators?
So bytecode...
JS might actually be better than bytecode and COMPRESS better.
Bytecode standardization simply won't happen. Need to be versioned, deters competitive race, don't have the bandwidth in companies.
"Versioning is an anti-pattern on the web. You want to fail soft."
No call/cc for you! Also, many humans still like writing JS.
Proper tail calls have been standardized. Thanks Dave Herman!
Just care about making the web better. Wound up adding Typed Arrays for WebGL. Big potential win for compilers there. See: emscripten.
"The reach of the web exceeds any other platform."
"To game developers, they're just excited about doing another port."
LLJS, low-level javascript is another experiment. C-like lang to typed arrays js to... Feel free to play with it.
ES4 collapsed when they tried to add something type-like. NOT adding types or contracts or whatever. Have typed arrays. Go away.
"People write Javascript in a latently well-typed way."
There is however a proposal for User-defined structs that are stored in typed arrays. Close enough, right?
The last missing JS feature is... MACROS. Not in ES6. Sweetjs.org though. It's on github, it works. It's a sound JS reader and hygienic macros.
"Worst thing I did out of Perl envy was to make regular expressions with slashes...means you have to parse (the whole thing) to lex!"
"Please help because if we can get this into ES7, we can get put ourselves out of business, and I can do something else."
Left with a general optimism on the continued future of Javascript.
I was going to have to miss the last keynote and two talks as well as all the camaraderie that would occur after. Delta had screwed up my flight and I didn't have a hotel room for the night. Thankfully, some Loopers came to my rescue though Delta still managed to extract $170 from me. Soulless corporations! It was totally worth it. I can't think of a more worthwhile or energizing way to spend 3 days. I'd like to thank everyone I was able to spend time drinking and chatting with, especially Scott Vokes, Paul Snively, Andreas Fuchs, Brian Rice, and Jose Valim. Obviously, I never want to go home.
posted on 2012-09-24 10:46:00
Though I didn't take notes on these, or today's keynotes, I have seen quite good coverage of ELC and Strange Loop talks here.
Design Patterns: Are they a sign of weakness in your language?
Graph of known Monad tutorials. Skyrocketing. :P
Monads are a great abstraction to capture/describe patterns, NOT explain them.
Possibly the talk I'm most excited for today. DESTROY ALL SOFTWARE!
3 Confessions
They will not merge our kernel patches. How do we move forward? Our "Shipping Culture" is poisonous to infrastructure. We just accrete low level infrastructure. Programmer Archaeologists are we. INCREMENTAL DEVELOPMENT WILL NOT WORK FOR THIS.
"A Data Structure is just a stupid programming language." - Bill Gosper
Perhaps instead...
"A data structure is just a tiny virtual machine." - Scott Vokes
"The cheapest, fastest, and most reliable components are those that aren't there." - Gordon Bell
Good choice of data structure /subtracts/ code.
Data structures set the path of least resistance for interacting with your data.
You probably won't know the ideal DS up front. Don't paint yourself in a corner.
?- uses(prolog, Person). no.
All coming to a github near you soon.
Elided due to battery life.
posted on 2012-08-28 20:14:15
I just saw this over on PLT Alain de Botton's twitter feed and couldn't resist collecting and reposting it here:
The Goldilocks Principle for Programming Languages:
Addendum: I was an idiot 5 minutes ago, and will be an elitist in 5 minutes. Where's my beautiful code gone?
What a charming, weird little enterprise hacking is.
PS: With any luck I've got a very cool announcement coming in the next few days. Also, blogging is a lot more fun using emacs+git. I may just start doing it more.