posted on 2009-11-15 00:09:07
Today, I want to talk about something I've been meaning to get around to for a while. Specifically, I want to mention some realizations I've had about restrictions-as-strengths as it relates to programming languages. This blog post was so long in coming in part because I had conflated that issue with a desire to also discuss the source of brittleness (or non-modularity) in programs and how this all tied back to the foundations of programming and CS. Obviously, that's too much for one post. There are three questions here and each is important to me. Later, I'll blog about the other issues and hopefully write a summary to tie it all together.
Programming is really hard. We've known this for a while. As Hal Abelson said, "Anything with Science in the name, isn't." Software engineering is no better off. Certainly no one would argue that we know how to build software as well as structural engineers know how to build bridges. Difficulty in software doesn't stem from a single source, but as programmers we need to localize it as much as possible. Historically, one way we have done this is through tools: our programming languages with their compilers, profilers, and debuggers; our operating systems; and other bodies of code to reuse (libraries).
Software has many demands placed upon it. First and foremost it needs to be functional, by which I mean correct and relatively stable. Beyond that it needs to be reasonably fast, and there are often other concerns, from user friendliness to security. All of these concerns introduce complexity into our software. That complexity needs to be managed, and I think a central question in managing it is how to partition and quarantine it. I did a rather poor and embarrassing job of at least raising that question a while back. I was lucky enough to find an outstanding attempt at an answer on Daniel Lyons' blog. It was completely coincidental though; he hadn't read my article.
Daniel framed this problem as one of seeking minimalism. I see the same answer from a slightly different angle. To me, there seems to be a pattern of trying to handle complexity by restricting the actions of the programmer. For example, in Peter Seibel's Coders at Work there are numerous mentions of how different teams of programmers choose different subsets of C++ to reduce the complexity of overlapping or interrelated features. People will entirely abandon templates or operator overloading. Douglas Crockford mentions making sure to never use the continue statement in his code. These are examples of programmers simplifying their own mental model to make problem solving more tractable.
Languages do this too, of course; the most prominent examples from my very limited experience are Haskell forcing you to think in terms of its type system and Factor forcing you to think in terms of its stack. Adapting to the constraints may be awkward or difficult at first, but they do provide real benefits. The Glasgow Haskell Compiler is capable of remarkable optimizations because it can assume that you've adhered to the restrictions of immutability and laziness. Judicious use of the type system can eliminate entire classes of possible bugs, and restricting the use of mutable state simplifies parallel programming and concurrency. Through use of the stack, Factor strongly encourages thinking about the dataflow of your program. I've heard this sort of thing expressed before as a language being opinionated about how to solve a problem, and there are plenty of diatribes on the failure of one paradigm (opinion) or another, especially OO since it's been dominant for so long. But let's not mistake this issue for one about particular paradigms or programming languages.
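The restriction-as-strength benefit of immutability mentioned above can be sketched even in an unopinionated language. Here's a rough illustration in Python (my choice for this aside, not one of the languages discussed in the post): a frozen dataclass opts in to immutability, and any function handling it is thereby protected from a whole class of aliasing bugs.

```python
from dataclasses import dataclass, FrozenInstanceError

# A frozen dataclass opts in to a restriction: no mutation after creation.
@dataclass(frozen=True)
class Point:
    x: float
    y: float

def midpoint(a: Point, b: Point) -> Point:
    # Because Point is immutable, this function cannot accidentally
    # modify its arguments; a whole class of aliasing bugs is ruled out.
    return Point((a.x + b.x) / 2, (a.y + b.y) / 2)

p, q = Point(0.0, 0.0), Point(2.0, 4.0)
m = midpoint(p, q)

try:
    p.x = 99.0  # the restriction in action
    mutation_blocked = False
except FrozenInstanceError:
    mutation_blocked = True
```

This is of course a much weaker guarantee than what GHC enforces, but the shape of the trade is the same: give up a capability, gain the ability to reason (and optimize) as if nobody ever uses it.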
There are languages on the other end of the scale. I tend to think of Common Lisp as the primary one. (Disclaimer: I've written far more code in CL than anything else. My coding experience in every other language is positively trivial by comparison.) Common Lisp has been described by many others as agnostic, happy to let you express a problem any way you can think how. Then again, Common Lisp requires you to think in abstract syntax trees. It's the rift between opinionated and unopinionated languages that I'm curious about. Of course, Haskell and Lisp are (generally speaking) solving different problems, as lispm (Rainer Joswig) notes on Hacker News. Vladimir Sedach suggests that the rift is about metaprogramming. More specifically, he states that Lisp and Smalltalk are self-describing computational systems: Lisp is written in Lisp, and Smalltalk is written in Smalltalk. It's that old metacircularity chestnut. Furthermore, he mentions that Haskell can't be self-describing because it's built on two computational systems: the type system is one, and the language built atop it is another. Besides, as Vladimir says, "If type systems could be expressed in a metacircular way by the systems they were trying to type, they wouldn't be able to express anything about types (restrictions on computational power) of that system." Factor's FAQ even mentions that they avoided purity and static types to enable the benefits of metaprogramming and interactive development. Personally, what I miss most in Haskell is an environment like SLIME. Interactive development of that style has an entirely different feel to me.
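The metacircularity point is easiest to see with code as data. Here's a rough sketch in Python rather than Lisp, so take it as an analogy only: programs written as nested lists are plain data structures that the host language can build, rewrite, and then evaluate, which is the essence of what macros exploit.

```python
# A toy evaluator for Lisp-style expressions written as nested Python lists.
# Metacircularity in miniature: programs are ordinary data, so the host
# language can construct and transform them before evaluating them.

def evaluate(expr):
    if isinstance(expr, (int, float)):
        return expr
    op, *args = expr
    vals = [evaluate(a) for a in args]
    if op == "+":
        return sum(vals)
    if op == "*":
        out = 1
        for v in vals:
            out *= v
        return out
    raise ValueError(f"unknown operator: {op}")

# (* 2 (+ 1 3)) written as data:
program = ["*", 2, ["+", 1, 3]]
result = evaluate(program)  # 8

# Because the program is just a list, a "macro" is ordinary list surgery:
def double(expr):
    return ["*", 2, expr]

doubled = evaluate(double(program))  # 16
```

In real Lisp the evaluator, the data representation, and the surgery all live in the same language, which is exactly the self-description Sedach is pointing at.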
These observations about types and metaprogramming were a revelation to me, and clearly this is a limitation on the expressive power of something like Haskell. The question remains open, however, as to whether such restrictions are enough of a strength to offset their cost. It seems to me that it must be situational, but I'm interested in a more in-depth examination of the problem. Unfortunately, I can't offer such an examination myself today. Googling around a bit, I found that a discussion about Lisp started on Hacker News while I wrote this. In the discussion, jerf writes largely about this issue. Jerf suggests that the other end of the restricted/unrestricted scale is Java, which sounds about right to me. He also suggests that the problem with maximum-empowerment languages is that they don't scale to large team sizes and (correspondingly) large programs.
Of course, there are counterexamples like ITA Software, but macros were thoroughly debated even among friends at ILC 09 as harmful or helpful to team programming. Vladimir Sedach again has a good grasp on their utility. In my opinion, neither Factor nor Lisp has resolved this question yet, but more companies are getting involved with metaprogramming, whether in Ruby or something else. I hope methods of containing the destabilizing facets of metaprogramming will emerge. Where Common Lisp is concerned, I think Fare's XCVB is an interesting opportunity in this direction, and I'm watching it intently.
Enabling single-programmer productivity is important, but so is enabling teams. Java, as an extreme example of restriction, has caused at least as many problems as it has solved. Languages with powerful metaprogramming features often need to be restrained in some fashion when used by large teams. There must be a middle ground somewhere. I say this because eventually our systems (codebases) become huge and unwieldy, and we need our languages to support the difficult task of controlling that complexity and of keeping them modular and malleable. That problem of verbosity and inflexibility is precisely what metaprogramming tries to solve. I'll write more about the problems in achieving modularity in my next post.
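The claim that metaprogramming attacks verbosity deserves a small illustration. This is a hedged Python sketch (a decorator, a mild cousin of a macro), not anything from the languages discussed above: the timing pattern is written once, instead of being pasted by hand into every function that needs it.

```python
import functools
import time

# Metaprogramming as boilerplate removal: instead of hand-writing timing
# code around every function, write the pattern once and apply it by name.

def timed(fn):
    @functools.wraps(fn)
    def wrapper(*args, **kwargs):
        start = time.perf_counter()
        result = fn(*args, **kwargs)
        wrapper.last_elapsed = time.perf_counter() - start
        return result
    wrapper.last_elapsed = None
    return wrapper

@timed
def total(n):
    return sum(range(n))

value = total(1_000)  # total.last_elapsed now holds the wall-clock time
```

The destabilizing facet the post worries about is visible even here: every `@timed` function now has behavior its own body doesn't show, which is exactly the kind of thing large teams want contained.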
posted on 2008-05-21 17:33:06
It's time to post on something other than code for a change. The last time I posted something non-technical was a music-related post in mid-April. I've had a strong urge lately to say something about this blog's title, "Improved Means for Achieving Deteriorated Ends". I've never really explained what that means to me before. It relates somewhat to a fairly recent post about my "emerging philosophy", and I definitely have more to say but am still searching for the words to get it out. While searching for those words I've stumbled on an idea that I think is worth discussing and that perhaps serves as a more concrete example of the sorts of issues I'm thinking about.
So, what's the grand exciting topic in store for today? Hiring practices. I'm sure your first thought is that hiring issues are inherently boring. I'd agree if we're talking about staffing a company of 75,000 people. For example, I talked to a Chick-fil-A manager today and asked him for a ballpark estimate of how many people work nationwide at Chick-fil-A. He said that he has a small store with about 25 employees, but stores vary between 10 and 100 employees and average about 50. There are 1,300 stores nationwide, and then you've still got whoever works at corporate facilities rather than in retail.
The truth is, in this setting the product (and your company's success) is not a function of the quality of your individual employees. In fact, if you could sell people Chick-fil-A sandwiches without stores, it'd be a huge cost savings provided you could figure out the distribution. You really just need people to fill a non-contributory role. If there were robots that could do the job and customers didn't mind the difference, that would be just as good. The point being that the chief contribution of the person working the register is handing the product over once you ask for it. I'm not generalizing to all retail here, but you get the idea.
Now, the difference in the hiring problem occurs when you are looking for contributory employees: employees whose contributions fundamentally shape the final product that the success of your company rides upon. Think of professions like architecture, programming, teaching, or journalism. Since these people make or break the success of your firm, you can't hire based on basic arithmetic and serviceable grammar. Architects, for example, need to have a sense of design, an understanding of how buildings work, and working knowledge of the tools used to make designs become paper and then reality.
To be as successful as possible you don't just want to hire decent people. You want to hire talented people. To whatever extent possible, you want to hire the best people, and this is precisely where we start running into problems. Spotting the best people is very hard and in some ways analogous to the Blub Paradox. That is, I wouldn't trust a person who'd never programmed to hire programmers. Moreover, if an incompetent or merely decent programmer is hiring other programmers, it's not evident that he'd be able to spot programmers better than himself. He might not know how to recognize them.
Most companies have two chief hiring practices that I'm aware of: the resume and the interview. The resume process is a screening process. It exists so that you can weed out candidates based on a standard set of assumptions about what they need to be successful and whether their qualifications seem to match those assumptions. It's pattern matching. This is why it's more difficult to get jobs sans sheepskin. People who pass the initial screen are interviewed by a member of the department with some experience (generally a manager) and evaluated for competency based on the interviewer's (hopefully reasoned and up-to-date) understanding of the job requirements.
One problem with this strategy is that you wind up with misjudgments in both directions. Any hiring manager has a story about a sure-thing hire who turned out to be a nightmare for the team, or the long-shot employee who wound up turning around two lagging departments. The real problem, I'll argue, is that current hiring practice is not evolving. You don't learn from your mistakes. How would you do that, you might ask? I'll tell you how: treat employee hiring like investing. Companies, especially companies that sell products derived from their employees' creativity, like to talk about how their product is their people or how their people are the difference. I say put your money where your mouth is.
If you turn an employee down for a job, I want someone to be able to tell me in two years whether he's gone on to have a very successful career at another company or has wound up working in a different industry. If I hire somebody and they turn out to be a disaster, I want to dig up anything I've got on their initial interviews, write the eventual problems on the transcripts in red, and put it in a separate filing cabinet for future reference. Do you see anybody going that far to hire good people? No. I won't argue that there's no reason for this. One reason: it would be a full-time job.
Let's take this idea a little further though. Let's say we don't just track our mistakes. Let's keep in mind that there are different classes of applicants to begin with and, just to stick with our investment theme, let's call them low-risk, medium-risk, and high-risk. Why not? After all, that kid without the diploma is high-risk. At least, that's the reason you gave for not hiring him, wasn't it? Oh, that's right. You didn't have to give a reason for not hiring him. Nobody checks up on hiring decisions. There's something else worth thinking about: if you miss a candidate who could've netted your firm an extra 10% last quarter, should your ass be on the line or not?
Anyway, what was I saying? Oh, right. A standard reason most hiring managers avoid candidates without diplomas is the amount of risk. It ties back to a famous IT saying: "Nobody ever got fired for buying IBM." If it seems like a sensible decision to hire the diploma-holding guy, hire him. At least if he turns out to be a disaster, a bunch of people can say he didn't look like an obviously bad candidate and they might've done the same thing. If a high school dropout turns out to be a bad employee, people will say you should've known from the get-go. So, let's just call the kid high-risk. But keep in mind that high-risk goes in both directions, because even though he never got his GED he's been programming x86 assembly since he was 7.
Now, obviously this isn't practical all the time. It wouldn't work for Chick-fil-A, and I'm sure there's some point where it wouldn't work even for a company whose product depends on its creative, contributory employees. You could only scale a process like this so much, and whether that's a function of the number of employees or something else I couldn't say. In fact, doing this internally in a firm might be suicide. It might have to start as an HR firm exclusively, though what we're describing probably sounds more like a scouting agency. You'd also almost certainly have to stick to a certain industry since, as I mentioned above, you have to be a good programmer to recognize a good programmer. Additionally, you'd definitely want to spread your portfolio of hires across risk segments, but let's move on before we get bogged down.
About now you might be wondering how this all ties back to my "emerging philosophy". It ties back because of the chain of impact from the corporate world to the academic one and beyond. I think a lot of problems exist in the educational system because we think absolutely everyone has to go through it. Parents are convinced that without college diplomas their children will not find gainful employment or have successful lives. I don't claim that hiring practices alone account for this, but they do play a part. More importantly, the overall impact is one of waste in society: a waste of compulsory schooling on those who may not need or desire it, and a waste of those who excel through nontraditional means. It's time to find as many of these little bits of waste and inefficiency as we can and fix them. I think if we do that, we'll find that the problem isn't that we need to produce more to satisfy demand but that we need to figure out a more equitable and beneficial distribution of the copious resources we already have.
posted on 2008-02-27 05:27:22
Every now and then I see, hear, or read something that sets my brain on fire. I find some connection that I didn't notice before and go off exploring. It happened a week or two ago over lunch at work. Occasionally there are lunch meetings around work-related activities that you can voluntarily attend. This particular lunch had us watching Paul Hawken's speech at Greenbuild 2007. It set the old wheels turning and the dominos toppling over. This is what spilled out...
For me, I think this all started formally with Thoreau. He was the first person promoting an ideology that I both a) actually heard and b) identified with. This would've been in the 11th grade. Prior to this I was more or less an anarchist and held no ethos as my own. Long story short, I've moved on, but the transcendentalists hold a special place in my heart. And the thing is, while I've hung onto Walden like a treasure, I haven't read it in too long. So when Paul Hawken started mentioning transcendentalists in his speech, it was a moment of revelation for me as I came full circle.
There's a crystallization of my beliefs here that has taken years and is an ongoing journey so bear with me. The older I've grown the more interest I've taken in the notion of Historical Progress, or at the very least, of charting where the universe has been and where it's going. Particularly in the last two years or so, thinking about forms of social organization and things like Robert Wright's Nonzero and Ray Kurzweil's The Singularity is Near have been interesting brain food. Of course, I've also been following Open Source, flirting with the environmentalists, and trying to figure out the lispers. And I think I see what I identify with and agree with in it all.
You see, I have this 90% theory. The theory states that at least 90% of human labor is expended to maintain the status quo and that only a fraction of our energy can go to what we care about or to new and creative things. This has come about because of increasingly complex social strata emerging as political and economic forms cave under the growth of civilization. Not western civilization but all civilization. It's a growing interdependency as Robert Wright describes.
He's more or less nailed it. And this interdependency has occurred through large centralized institutions (particularly corporations and governments) that have driven forward economic progress at the price of independence. It's much harder in the 21st century to figure out how to completely sustain yourself in your basic survival needs without withdrawing from society altogether. The sort of experimentalism that was possible by providing for oneself outside of the traditional means is no longer a realistic goal. We have boxed ourselves in.
This seems to me an ethical issue. It is not an ethical issue because consumerism or capitalism is wrong, but because one of the lauded premises (whether actual or not) of our nation was some increased degree of influence over one's destiny. In the present day, this influence is lacking if one wants to live in a social setting while strongly providing for one's own needs. That frontier opportunity has gone missing, both on the frontier of new societal forms and in the independent pursuit of sustenance through life, liberty, and happiness.
This is not, it is important to note, the fault of capitalism or democracy. Rather, these are the limits of these systemic forms under burdens of complexity they are not equipped to handle. They have become our Golden Hammers. (A recent e-mail from a dear friend and former teacher of mine eloquently expresses distress over this fact.) In short, the promised revolutionary nature of America has been lost. The Internet is another thing that has been lauded as inherently free. And like the FSF and Open Source this meant free in the context of freedom, libre, not free as in beer.
Lawrence Lessig was the first to note that the internet is not inherently free: being a man-made thing, like a government, the internet is subject to our control and thus could have its freedom removed. Whether or not that is occurring right underneath us remains a subject of debate. I contend, however, that it has already occurred with regard to our nation and, thanks to globalization, much of the "westernized" world. We are buried under the mountains of our own bureaucracy; democracy and capitalism have become organizational forms which limit our innovation and our efficiency. Most importantly, this has impeded our capacity to do what we love.
And I feel like that is what the transcendentalists realized so long ago. They saw it coming and just maybe saw some of the things we stood to lose. Paul Hawken says Emerson realized that "It's all connected". That's why Thoreau reasoned his way into jail: because if he paid taxes while Texas Rangers raped Mexican women, he figured it made him a rapist. That's why my friend Alexa saw people in South America who lived in poverty and slept on the job and, instead of asking why they weren't like us, asked why we weren't like them. And that's why Richard Stallman found ethics in software (namely a printer driver). It's why a British journalist decries college and lispers wage a near-religious war on tedium and repetition.
These all seem like strange places to find ethics. And maybe they are. But this seems to be the strand that ties together article after article I read about why college is unnecessary or high school is drudgery or work is slavery. And I think if I had to give it a name, I could only call it Efficiency. Because we're just not as efficient as we'd like to be. We've finally lost the right to enough of our time that we're jealous about it. There's a recognition that there should be a way to keep society going without all of us dedicating all our time to make sure that it's still there tomorrow. And in come the environmentalists, shouting about the world we leave our children, talking about building to last.
And if there is one question I'm interested in answering it's this: Surely, there is a way for us to restructure society so that only 70% of human labor or 50% or less has to go to maintaining the status quo. Right? Then the rest could go to creativity or innovation or progress or us. I think there is and I'm not opposed to spending my life trying to answer the question. There do thankfully seem to be a number of revolutionary trends coming about, some of which are already well in swing. But it's late, I have work tomorrow, and much as I'd love to I have to call this a wrap for one night.
posted on 2007-05-04 10:08:00
I feel I should explain a bit about why the events of May 1st were so important, why it was, as I called it, "a watershed day". Since I wrote that piece this afternoon, the events covered have been very much in my thoughts, and I've discussed them with a number of friends of mine, some technically inclined, some not. There were two events. The first was Dell's decision to offer Ubuntu preinstalled on select computers. This is a huge victory for Open Source software generally and Linux particularly. It's a larger victory for the Open Source production model because it stands as evidence that such a production model can compete with that of proprietary vendors such as Microsoft, Apple, etc. That I consider to be (significantly) less important than the second event of the day: the HD-DVD scandal. Or the cyber riot. Whatever it should be called. Those of you who know how much I trumpet on about Open Source and Linux should understand what a large claim that is for me. It's more important that a ton of people revolted online against a standard than that Dell said they would sell (Ubuntu) Linux computers. And I've been predicting that Linux-on-the-desktop thing for a good year now. A year's expectations fulfilled but secondary to some arbitrary online screaming fit? Yes. Part of that is because I was expecting the Linux thing to happen sooner or later. I'm glad it was sooner but not shocked. I figured Linux would be about ready by now; it would just take a company with the guts to try it. I can say I'm half-surprised (but pleasantly) that the company was Dell. The revolt was much more important though, and showed us much, much more about the dynamics of online communities and power structures.
First, by way of introduction to the problem space, I'd like to clear up what could be an easy misconception. What essentially happened was that a 16-byte code (09-f9-11-02-9d-74-e3-5b-d8-41-56-c5-63-56-88-c1) that protects HD-DVDs from being pirated, played on unsupported platforms (such as Linux), ripped, and so on was leaked onto the internet. A piece of legislation protecting the code, the DMCA, was passed in 1998; it extends the protections of copyright and makes it illegal to produce or spread methods of circumventing or infringing copyright. So the code is not protected speech. Even my posting it here is illegal. (It should be noted that some feel the very existence of the code, and the resulting inability to play HD-DVDs on Linux or back them up, is a consumer rights violation. Legally, this assertion is not ungrounded, but until the DMCA is repealed it is irrelevant. The DMCA, for its part, has faced much derision and opposition since its inception for many reasons, vagueness high among them. If I informed you that you could circumvent copyright and reproduce a book with a copier, paper, and ink, I could be in violation of the DMCA, for example.)
The movie companies whose copyrights are protected by this code are of course upset that the safety of their product is now jeopardized by piracy. The code leaked out onto the web in February, and the movie companies began sending out cease-and-desist letters so that sites would take it down. Then, on May 1st, people started noticing. Three sites which all derive their content (information) from their users formed the center of it all: Slashdot, Digg, and Wikipedia. Slashdot and Digg are user-generated technology news sites, and Wikipedia is, of course, the online encyclopedia we all know and love. When Wikipedia and Digg started trying to censor the code (Slashdot didn't), people noticed and rebelled in extraordinary fashion. Within 48 hours, the number of hits when the code was searched for on Google went from under 1,000 to over a million.
Digg and Wikipedia were swarmed with people trying to propagate the code in dozens of forms (such as masquerading it as lottery numbers, an IP address, or even a picture of stripes where the colors' hex values spelled out the code). Digg was the center of the controversy and simply could not control the number of users forcing the information onto the site through stories, diggs (votes that increase a story's visibility on the main page), and comments. Wikipedia had more success by locking the entry for HD-DVD and did a number of other things to prevent the spread, but still had its forums inundated with the code.
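The disguise tricks are easy to reconstruct from the key quoted earlier in this post. This Python sketch follows the well-known "free speech flag" scheme (five RGB colors from the first fifteen bytes, with the final byte left over) and shows the IP-address disguise from the first four bytes:

```python
key = "09 f9 11 02 9d 74 e3 5b d8 41 56 c5 63 56 88 c1"
byte_list = key.split()

# Group the first 15 bytes into five hex color codes (RGB triples);
# the "free speech flag" did exactly this, appending the leftover byte.
colors = ["#" + "".join(byte_list[i:i + 3]) for i in range(0, 15, 3)]
leftover = byte_list[15]

# The IP-address disguise: read the first four bytes as decimal octets.
fake_ip = ".".join(str(int(b, 16)) for b in byte_list[:4])
```

The point of the exercise is the censor's dilemma: once the information is just sixteen bytes, any encoding of them (colors, numbers, addresses) carries it, so takedown-by-pattern-matching is hopeless.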
The important fact wasn't that people spread the code and fought back against what they perceive as draconian intellectual property regimes and corporations (that happened for regular old DVDs with DeCSS in 1999), but that these sites, the icons of the Social Web or Web 2.0, were at the mercy of their userbases.
The amazing promise of the Open Source revolution has been the efficiency and power of its production models. That's what enabled a few thousand volunteers and about a thousand dollars a month to compete with Encyclopedia Britannica through Wikipedia. That's what enabled a ragtag bunch of software developers from round the globe to compete with Microsoft and Apple through Linux. While it's clear that the Open Source model has definite advantages, its limitations and drawbacks are somewhat less studied and, perhaps due only to lack of experience and evidence, less clear. We remain uncertain what can benefit from "going open," we remain uncertain about exactly how the power structures work, and we remain uncertain about exactly who is in control. It's the difference between the interactions and activities of hierarchies and bureaucracies (which we understand so well) and those of networks. The importance of such knowledge and its relevance in the coming century has been demonstrated by our need to understand the dynamics of networked organizations like Al-Qaeda and the Taliban. For the most part, our bureaucracies have trouble stopping them even with considerably greater resources, due to networks' decentralized nature. It's as though there's no point to attack.
What is particularly significant about the events of May 1st is that Wikipedia and Digg are not equal in their openness. Specifically, Digg was unable to control its users and Wikipedia was. This seems to imply that Digg is more open than Wikipedia. Wikipedia, however, is promoted as more open than Digg and is designed with openness in mind. Digg's openness was, at least to some extent, accidental. Wikipedia has had to deal with more cyber-vandalism of this sort, so it was better equipped for the task. However, the same tools and methods of control that allowed it to prevent vandalism of entries enabled it to censor as well. Everything on Wikipedia is under an Open Source legal license (the GFDL). Digg's content is protected in no such way, but it has fewer restrictions and administrative tools to control submissions and content.
This is in part interesting because some people have suggested that Digg took advantage of its userbase to aggregate news content, but this implies a control that is completely lacking. The suggestion at face value does seem a bit ridiculous when you consider that Wikipedia is doing precisely the same thing, until you consider that Digg is a for-profit venture and grosses about $3 million annually. What's interesting about that suggestion is that it agrees with what we might imagine to be the case: Digg allows people to submit the news, which people do because they enjoy it, and Digg profits from it. But it's not that simple. Digg provides a platform on which people can author and vote on content, and it profits from being an attention center of the web. Attention is becoming economically valuable. When companies use Google's AdSense, they are essentially trying to buy attention. The web has made the reproduction of content an exercise in attention economics. All content, all video, audio, images, and text, can be reproduced and distributed (effectively) for free. The scarcity has become one of time, one of attention. Hence the attention is the valuable thing. The sites on the web which get the most traffic are directly linked to the highest advertising profits. Digg gets the importance of attention, and the users get the platform. That's a very important distinction, so I'm going to repeat it once more: the users don't get the content, they get the platform.
The essential defining element of any open source medium, maybe any open source thing (so far as I can puzzle out), is that the users get the platform. The product, whether it's software, media, or otherwise, is not what the users get. The users get the toolset that leads to the product, and they (as the community) control the resulting product, but that control is coincidental. As Jimmy Wales (founder of Wikipedia) said, "The big secret of course is that Wikipedia is not really about an encyclopedia, it's just a big game of nomic." The whole point is having control of the rules of the game. To this extent, I'm skeptical even of the claim that Wikipedia is more or less open than Digg. While Wikipedia locked entries from editing, the forums were still swamped with the code it was trying to censor. Moreover, as Jimmy Wales stated, the rules could be changed at any time. While the platform has more controls and different power structures than Digg's, it still belongs to the users.
There have been numerous responses to the Code Frenzy over the last few days. One interesting reaction dismissed the entire movement as dumb because this sort of mass civil disobedience wasn't legal and wouldn't change the law or the decisions of the Content Corporations to use it and to use encryption. While those are all valid points, I think they generally sidestep the interesting aspects of this event: hierarchies and networks clashing as social-organizational structures. Another reaction considered what incentive is necessary to get people to spread sensitive information or participate in this sort of viral protest movement. Another still criticized Wikipedia for even trying to censor the number, as its effort would obviously be futile. The most interesting part is still Digg folding to its user base. Businessweek ran a cover story on Digg in August of last year, and while Digg may make $3 million annually, its estimated value is closer to $200 million. Here's a $200 million icon of the web forced, more or less, to decide whether to work with or against its user base (which is the source of its power), and choosing to surrender to the whims of that user base even when that stance clearly flies in the face of the law and places it at odds with far more established and wealthy firms (the entire movie industry).
The Conclusion\Why it matters:
So, really, why such a big fuss about a little code and some cyber disobedience? Why the emphasis on new organizational structures? For me, it is largely personal. I wrote this because these moments remind me of the little subtleties I forget make Open Source special as an organizational form. I wrote this because I feel like I have a better understanding of what makes something, anything, not just software, open than I did before the events of May 1st. But I'm also writing it because there's a direct connection between Economic\Material Progress and Innovation. New goods produce new profits, and creativity is, I'm pretty sure, king. Google isn't open source, but they've done the next closest thing: they've tried to foster good relations with their user base, and they allow their employees twenty percent time to work on what they want. That twenty percent time motivates people to produce. We all want to do what we want, and I think that's a huge part of why Google's on top. They've found a way to make work not so worklike and in so doing increased the productivity of their workers. Eric Raymond once wrote that Enjoyment predicts Efficiency, and I think that's a much more profound statement than he may have realized when he wrote it. If that's true, and if, as I believe, Open Source fosters more enjoyment in its participants than other methods of organizing production, then it is a more efficient method of production than any other in existence. Open Production Models harness this enjoyment through voluntary selection of labor and many other motivating factors, which I believe make it potentially the most innovative organizational mode in existence. What's really fantastic is that I think it fixes a lot of the Spiritual Decay (which flies in the face of Material Progress) that Capitalism (depending on your view) has brought about.
Finally, I think it's self-empowering and educational, which ties into both the enjoyment and spiritual-repair points. It also seems to foster a sort of social capital at a time when many sociologists are concerned that our social capital is deteriorating, all the while providing public goods and services and reinvigorating the idea of a commons. Even if it only raises human efficiency in production and creativity, I'd say we can't ask for much better than that.
PS: I've underestimated the excellence of Radiohead's Kid A
PPS: Sorry this wasn't a short entry like I promised.