In my previous post in this series, I discussed the role of military funding in the formation of a ‘genealogy’ of university laboratories, their projects and staff, which produced the conditions for hacking during the 1960s and 70s. As I drafted that post, I found myself drifting into a discussion of the role of venture capital. I have split that discussion into this final post so as to highlight another important aspect of the role of the university in the development of hacker culture.
Levy (1985) points to the arrival in 1959 of the TX-0 computer as a seminal moment in the history of hacking. The computer had been donated by the Lincoln Laboratory to MIT’s Research Laboratory of Electronics (RLE), the original successor to the Rad Lab and today “MIT’s leading entrepreneurial interdisciplinary research organization.” Similarly, Eric Raymond points to the arrival at the RLE of the PDP-1 computer in 1961 as the moment that defined the beginning of ‘hackerdom’. Notably, at that time the RLE shared the same building as the Tech Model Railroad Club (TMRC), the legendary home of the first hackers. The history of hacking is understandably tied to the introduction of machines like the TX-0 and PDP-1, just as Richard Stallman refers to the demise of the PDP-10 as “the last nail in the coffin” for 15 years of work at MIT. Given the crucial significance of these machines, a history of hacking should include a history of the key technologies that excited and enabled students and researchers at MIT to hack in the early 1960s. To some extent, Levy’s book achieves this. However, in undertaking a history of machines we necessarily undertake a social history of technology and of the institutions and conditions that reproduced its development, and in doing so we reveal the social relations of the university, the state and industry (Noble, 1977, 1984).
The birth of Digital Equipment Corporation
In 1947, the US Navy funded MIT’s Servomechanisms Lab to run Project Whirlwind, which developed a computer that tracked live radar data. The Whirlwind project was led by Jay Forrester, a leading systems theorist and the principal inventor of magnetic core memory (the patenting of which was marked by a dispute between MIT and the Research Corporation, resulting in the cancellation of MIT’s contract with the Corporation).
MIT’s Lincoln Lab was set up in 1951 to develop the SAGE air defence system for the US Air Force, which expanded on the earlier research of Project Whirlwind. The TMRC hackers’ first computer was a TX-0 from the Lincoln Lab, its cathode-ray display borrowed from the SAGE project’s research into radar. Though large by today’s standards, the TX-0 was smaller than Whirlwind and was one of the first transistorised computers, designed and built at MIT’s Lincoln Lab between 1956 and 1957 (Ceruzzi, 2003, 127). Much of the innovation found in the TX-0 was soon copied in the design of the PDP-1, developed in 1959 by the Digital Equipment Corporation (DEC).
DEC was founded by Ken Olsen and Harlan Anderson, two engineers from the Lincoln Lab who had also worked on the earlier Whirlwind computer. Watching students at MIT, Olsen had noticed the appeal of the interactive, real-time nature of the TX-0 compared to the more powerful but batch-operated computers then available, and saw a commercial opportunity in a machine like the TX-0. Soon after they established their firm, they employed Ben Gurley, who had worked with them at the Lincoln Lab and had designed the interactive display of the TX-0, which used a cathode-ray tube and light pen. It was Gurley who was largely responsible for the design of the PDP-1. DEC is notable for many technical and organisational innovations, not least that it permitted and encouraged its clients to modify their computers, unlike its competitor IBM, which still operated on a locked-down leasing model. DEC’s approach was to encourage the use of its machines for innovation, providing “tutorial information on how to hook them up to each other and to external industrial or laboratory equipment.” (Ceruzzi, 2003, 129) This appealed not only to the original TMRC hackers but to many of DEC’s customers, and it led to DEC becoming one of the most successful companies funded by the venture capital firm American Research and Development Corporation (ARD).
The birth of venture capitalism in the university
ARD, established in 1947, is regarded as the first venture capital firm and was “formed out of a coalition between two academic institutions.” (Etzkowitz, 2002, 90). It was founded by the “father of venture capital”, Georges Doriot, then a professor at Harvard Business School; Ralph Flanders, an engineer and head of the Federal Reserve Bank of Boston; and Karl Compton, President of MIT. ARD employed administrators, teachers and graduate students from both MIT and Harvard. The motivation for setting up this new type of company was its founders’ belief that America’s future economic growth rested on the country’s ability to generate new ideas which could be developed into manufactured goods and thereby generate employment and prosperity. This echoed Vannevar Bush’s argument that, following the war, “basic research” should be the basis for the country’s economic growth, and both views confirm the idea/ideology that innovation follows a linear process, from basic research which is then applied, developed and later taken into production. However, whereas government was funding large amounts of R&D in universities, the founders of ARD complained of a lack of capital (or rather of a model for issuing capital) that could continue this linear process of transferring science to society.
ARD funded DEC after Olsen and Anderson were recommended by Jay Forrester. This led to an investment of $100,000 in equity, with a further $200,000 available in loans, and within just a few years DEC was worth $400m. This allowed ARD to take greater risks with its investments: “The huge value of the Digital Equipment stock in ARD’s portfolio meant that the relatively modest profits and losses on most new ventures would have virtually no effect on the venture capital firm’s worth.” (Etzkowitz, 2002, 98). ARD’s success marked the beginning of a venture capital industry with its origins in the post-war university and a mission to see federally funded research exploited in the ‘endless frontier’ of scientific progress. It led to a model that many other universities copied, providing “seed” capital investment to technology firms and establishing ‘startup’ funds within universities. Most recently, we can observe a variation on this method in the ‘angel investment’ firm Y-Combinator, which specifically sought to fund recent graduates and undergraduate students during their summer breaks.
Y-Combinator and the valorisation of student hackers
A proper analysis of Y-Combinator in the context of the history of hacking, the university and venture capital is something I hope to pursue at a later date. In this current series of posts discussing the role of the university in the ‘pre-history’ of hacker culture, I simply want to flag up that Y-Combinator can be understood within the context of the university’s role in the venture capital industry. Just as academic staff have been encouraged to commercialise their research through consultancy, patents and seed capital, Y-Combinator in its early stage sought to valorise the work of students by offering its ‘summer founders programme’. Similarly, its founder, Paul Graham, has often addressed students in his writing and discussed the role of the university experience in bootstrapping a successful start-up company. Graham’s ongoing articles provide a fascinating and revealing body of work for understanding the contemporary relationship between students, the university, hacking and venture capital. In this way, Y-Combinator represents a lineage of hacking and venture capital that grew out of the university but never truly left it. Despite recent claims that we are witnessing the demise of higher education as we know it, the university as a knowledge factory remains a fertile source of value through the investment of public money and the production of immaterial labour, something that Vannevar Bush would be proud of.
Series conclusion
This is the last of a series of six posts on the role of the university in the development of hacker culture. These posts are my notes for a journal article I hope to have published soon which will argue, as I have done here, that the pre-history of hacking (pre-1960) is poorly documented and that much of it can be found in an examination of the history of American higher education, especially MIT.
As an academic who works in a ‘Centre for Educational Research and Development’, who runs various technology projects and who works with young developers, I am interested in understanding this work in the context of the trend over the last decade or so towards ‘openness’ in higher education. Ideas and practices such as ‘open education’, ‘open access’, ‘open educational resources’ (OER) and, most recently, ‘Massive Open Online Courses’ (MOOCs) and ‘open data’ are already having a real impact on the form of higher education and its institutions, and will continue to do so. My work is part of that trajectory, and I recognise that the history of openness in higher education goes back further than the documented last 10-15 years. It is well known that the early efforts around OER and OpenCourseWare, and the concurrent development of Creative Commons licenses, owe a great deal to the ‘open source’ licensing model developed by early hackers such as Richard Stallman. I hope that in these posts I have shown that, in turn, the free and open source software movement(s) was, in its early formation, a product of the political, economic and ultimately institutional conditions of the university. Richard Stallman felt compelled to leave the academy in 1984 because he found that “communism”, a foundational ethos of science as famously described by Merton (1973), was by that time little more than an ideal that had barely existed at MIT since the Great Depression.
This points towards a history of openness in higher education that is rooted in hacker culture and therefore in the commercialisation of scientific research, military funding regimes and the academy’s efforts to promote a positive ideology of science to the public. Stallman’s genius was the development of ‘copyleft’, in the form of the GPL, which was very influential in the later development of the Creative Commons licenses used (and partially developed) in higher education. Through the growth of the free and open source software movements over the last 25 years, the academy has been reminded (and, as a participant, has reminded itself) that the ideal of communism in science forms the basis of a contract with society that can still be achieved through the promotion of openness in all its forms. However, in hindsight, we should be cautious and critical of efforts to yet again valorise this new agenda in science through calls to adopt permissive licenses (e.g. CC-BY, MIT, ODC-by) rather than Stallman’s weapon of scientific communism: Copyleft.
Comments

Fantastic series Joss. I have to go back and read through it a second time; there is so much of interest here. I suspect the reason may be that you were focusing specifically on the origins of hacking within the university, but I was surprised by the relative lack of mention of West Coast schools. Stanford and SRI in particular came to mind. Also, not sure if it fits the narrative, but the feds had a profound effect on the fabric of universities post-WWII with the GI Bill, which suddenly introduced hundreds of thousands of (largely) men with military service records to university campuses. I’m not sure I can draw a direct line to the history of hackers, but I suspect there may be one to be found.
Thanks, Scott. I’ll have a read around the GI Bill. As you say, it feels like it’s worth pursuing. The reason I’ve left out Stanford is just to rein things in a bit. Really, there’s a book here, but I’m trying to work up a journal article initially. Also, Stanford (and other universities) borrowed so heavily from MIT that examining MIT alone can sufficiently explain the role of ‘the university’ in quite generic terms. Of course, there’s a story to tell about Stanford, too. But that’s for another time.
I only mention SRI because (with its cousin XEROX PARC) they played such a formidable role in the invention of both the internet and the personal computer. It might be interesting to contrast the forms of hackerism coming out of communal mainframes specifically with the forms that focused more on creating machines for individuals and the networks that interconnect them. But I agree, a LARGE subject worthy of a book. Actually, I think you’re well on your way with this start.
Another piece that often gets left out in discussions of hackers is the role of the automobile in personal mobility and the rise of “gearheads” (who, in a higher ed context, can often be found in mechanical engineering, aerospace engineering and similar departments). It’s a different notion of hacking, but I find it interesting, as I don’t think any of these narratives (“hackers”, “university research funded by the military”, etc.) entirely accounts for the flavours we now find. This is not to dismiss them either; they are hugely important and largely left out of the “lone genius” and other such mythos used to obscure them.
Joss, this is such interesting stuff. Like Scott, I will have to read it through again in order to focus on one or another of the different elements, as all of it is new to me. My head spins through the connections that constitute the military-industrial-academic-technological and apparently hacker machineries…more so because you open up an entirely different perspective on the meaning of the hacker than I had previously been able to consider. Which is, I think, one of your arguments.
I was also making other links, as one does when trying to find a place for something new in existing understandings. One was to Merton, who has come screaming back to me from a decade ago, again from an unexpected direction. In the late 1990s, I spent a fair amount of time with Merton when sorting through the history of sociological knowledge in relation to the Cold War and then when developing critiques of ‘normal science’ in work on the sociology of scientific knowledge — though perhaps normatively separating out, less than consciously, ‘disinterestedness’ and ‘universalism’ from ‘organized skepticism’ and ‘communism’. As Andrea and I have been thinking about the implications of metricisation and marketisation on epistemology itself, it might be interesting to reread Merton in the context you set out.
The other connection was to an article I read recently by Rodrigo Nunes, ‘Nothing is what democracy looks like’, which puts forward a fairly devastating critique of horizontalism as an organisational form (or at least of fetishised versions of it). He argues that one of the fatal contradictions in the idea is that many ‘open’ spaces are internally open precisely because they have become externally closed. One thing I have not done yet is think seriously about whether discussions of ‘openness’ in these contexts are logically connected in any way to the way you are using it here, and further to the philosophical elaborations of openness in terms of receptivity, indeterminacy, possibility and so on. But as you say, that’s for another time…
Thanks for sharing these thoughts and ideas.