Commons-based peer production: One minute of Wikipedia edits

The technical conditions of communication and information processing are enabling the emergence of new social and economic practices of information and knowledge production. ((The Wealth of Networks: Direct link))

You may have read Yochai Benkler’s book, The Wealth of Networks, where he discusses Wikipedia as an example of commons-based peer production. Did you know that you can see this relatively new model of knowledge and economic production live, in real time? The video below is just one minute of Wikipedia edits, recorded from the live changes on the #en.wikipedia channel at irc.wikimedia.org. Using the IRC channel, you can watch Wikipedia being created as it happens: the incremental production of collective knowledge, edit by edit. I recommend full-screen HD to see the detail as it passes up your screen. There are different channels for the different language versions; I chose the English one.
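If you’d rather watch the raw feed than the video, a few lines of code are enough to follow the channel yourself. Here’s a rough sketch in PHP; the host and channel are the ones mentioned above, the nickname is arbitrary, and there’s no error handling:

<?php
// Rough sketch: follow the live edit feed for English Wikipedia on irc.wikimedia.org.
// Pick a reasonably unique nickname; there is no error handling here.
$sock = fsockopen('irc.wikimedia.org', 6667, $errno, $errstr, 30);
fwrite($sock, "NICK edit-watcher-12345\r\n");
fwrite($sock, "USER edit-watcher 0 * :edit watcher\r\n");
$joined = false;
while (!feof($sock)) {
    $line = fgets($sock, 4096);
    // Reply to server PINGs or we get disconnected.
    if (strpos($line, 'PING') === 0) {
        fwrite($sock, 'PONG ' . substr($line, 5));
        continue;
    }
    // Join the channel once the server has finished registering us (001 welcome).
    if (!$joined && strpos($line, ' 001 ') !== false) {
        fwrite($sock, "JOIN #en.wikipedia\r\n");
        $joined = true;
    }
    // Each message to the channel is one edit: page, diff URL, user and comment.
    if (strpos($line, 'PRIVMSG #en.wikipedia') !== false) {
        echo $line;
    }
}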

The Wikimedia site provides detailed statistics about the use of their sites, although the English Wikipedia statistics stop at October 2006 🙁 Perhaps there’s just too much activity on that site for them to collect and measure?

A lot of people still have an aversion to Wikipedia, but I don’t think they get it. Wikipedia is completely open to anyone to contribute. If you don’t think it’s good enough, ((See the famous Nature article which compared Wikipedia to Encyclopedia Britannica [PDF])) isn’t it your (moral?) responsibility to correct and improve it? Like it or not, as a single source it has by far the widest reach of any web-based learning resource and, although I don’t have the time to substantiate this, I bet that, after Google, it’s the second online resource students visit when beginning their research. ((Via Twitter, AJCann just pointed me to some research he’d done which shows that 100% of his student cohort use Wikipedia)) If you challenge what’s happening on Wikipedia, you’re fighting a losing battle. Stop complaining and start contributing!

Personally, as I watch the Wikipedia edits rolling up my screen, seeing contributions as they happen from individuals I’ll never know, I am filled with optimism. Each edit is underwritten by a Creative Commons license which protects and preserves this body of knowledge in perpetuity. If there were world heritage sites on the Internet, Wikipedia would surely be the first to be recognised as such.

Getting your Triples into Talis Connected Commons

A few days ago, I wrote about adding Triplify to your web application. Specifically, I wrote about adding it to WordPress, but the same information can be applied to most web publishing platforms. Earlier this month, Talis announced their Connected Commons platform and yesterday they announced a commercial version of the platform for the structured storage of Linked Data. Storage is all very well but, more importantly, they have an API for developers, so that the data can be queried and creatively re-used or mashed up.

So this got me thinking about JISCPress, our recent JISC Rapid Innovation Programme bid, which proposes a platform, based on WordPress Multi-User, for publishing JISC funding calls and the reports of funded projects. The proposal draws on my experience of running WriteToReply with Tony Hirst.

Although WriteToReply is a service for comment and discussion around documents, one of the things that interests me most about it and, consequently, the JISCPress proposal is the cumulative storage of data on the platform and how that data might be used. No surprise really, as my background is in archiving and collections management. As with the University of Lincoln blogs, WriteToReply and the proposed JISCPress platform aggregate published content into a site-wide ‘tags’ site that allows anyone to search and browse everything that has been published publicly. In the case of the university blogs, that’s a large percentage of the blogs; for WriteToReply and JISCPress, it would be pretty much every document hosted on the platform.

You can see from the WriteToReply tags site that, over time, a rich store of public documents could be created for querying and re-use. The site design is a bit clunky right now, but under the hood you’ll notice that you can search across the text of every document and browse by document type and by tag. The tags are created by publishing the content to OpenCalais, which returns a whole bunch of semantic keywords for each document section. You’ll also notice that an RSS feed is available for any search query, any category and any tag or combination of tags.
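To give a feel for that tagging step, here’s an illustrative sketch of sending a single document section to OpenCalais and printing the names it returns. The endpoint, header names and response shape are written from memory of the OpenCalais REST API and the key is a placeholder, so treat it as a sketch and check the current documentation before relying on it:

<?php
// Illustrative sketch only: send one document section to OpenCalais and list the
// names of the entities/topics it returns. Endpoint, headers and response shape
// are assumptions; the API key below is a placeholder.
$text = 'The text of one document section goes here.';
$ch = curl_init('http://api.opencalais.com/tag/rs/enrich');
curl_setopt_array($ch, array(
    CURLOPT_POST           => true,
    CURLOPT_POSTFIELDS     => $text,
    CURLOPT_RETURNTRANSFER => true,
    CURLOPT_HTTPHEADER     => array(
        'x-calais-licenseID: YOUR_API_KEY',
        'Content-Type: text/raw; charset=UTF-8',
        'Accept: application/json',
    ),
));
$response = json_decode(curl_exec($ch), true);
curl_close($ch);
// Each returned entity or topic becomes a candidate tag for the section.
foreach ((array) $response as $item) {
    if (is_array($item) && isset($item['name'])) {
        echo $item['name'] . "\n";
    }
}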

Last night, I was thinking about the WriteToReply site architecture (note that when I mention WriteToReply, it almost certainly applies to JISCPress, too – same technology, similar principles, different content). Currently, we categorise each document by document type, so you’ll see ‘Consultations’, ‘Action Plans’, ‘Discussion Papers’, etc. We also author all documents under the WriteToReply username and tag each document section both manually and via OpenCalais. However, there’s more that we could do, with little effort, to mark up the documents, and I’ve started sketching it out.

You’ll see from the diagram that I’m thinking we should introduce location and subject categories. There are formal classification schemes we could use. For example, I found a Local Government Classification Scheme, which provides some high-level subjects of the kind I’m thinking about. I’m not suggesting we start ‘cataloguing’ the documents, but simply that we borrow, at the top level, from recognised classification schemes that are used elsewhere. I’m also thinking that we should start creating a new author for each document; in the case of WriteToReply, the author would be the agency that issued the consultation, report, or whatever.

So following these changes, we would capture the following data (in bold), for example:

The Home Office created Protecting the public in a changing communications environment on April 27th which is a consultation document for England, Wales and Scotland, categorised under Information and communication technology with 18 sections.

Section one is tagged Governor, Home Department, Office of Public Sector Information, Secretary of State, Surrey.

Section two is tagged communications data, communications industry, emergency services, Home Secretary, Jacqui Smith MP, Rt Hon Jacqui Smith MP.

Section three is tagged Broadband, BT, communications, communications changes, communications data, communications data capability, communications data limits, communications environment, communications event, communications industry, communications networks, communications providers, communications service providers, communications services, emergency services, Her Majesty’s Revenue and Customs, Home Office, intelligence agencies, internet browsing, Internet Protocol, Internet Service, IP, mobile telephone system, physical networks, public telecommunications service, registered owner, Serious Organised Crime Agency, social networking, specified communications data, The communications industry, United Kingdom.

Section four is tagged …(you get the picture)

Section five, paragraph six, has the comment ‘“fully compatible with the ECHR” is, of course, an assertion made by the government, about its own legislation. Has that assertion ever been tested in a court?’, authored by Owen Blacker on April 28th at 11:32pm.

Selected text from Section five, paragraph eight, has the comment ‘Over my dead body!’, authored by Mr Angry on April 28th at 9:32pm.

Note that every author, document, section, paragraph, text selection, category, tag, comment and comment author has a URI and an Atom, RSS and RDF endpoint (actually, text selection and comment author feeds are forthcoming features).

Now, with this basic architecture mapped out, we might wonder what Triplify could add. I’ve already shown in my earlier post that, with little effort, it re-publishes data from a relational database as N-Triples semantic data, so everything you see above could be published as RDF data (and JSON, too).
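To make that concrete, the worked example above might come out as triples along these lines. The URIs and document slug are purely illustrative and the predicates are simply the Dublin Core and tag ontology terms that Triplify already uses; the actual vocabulary would need to be agreed:

<http://writetoreply.org/example-consultation/> <http://purl.org/dc/terms/creator> <http://writetoreply.org/author/home-office> .
<http://writetoreply.org/example-consultation/> <http://purl.org/dc/terms/created> "2009-04-27"^^<http://www.w3.org/2001/XMLSchema#date> .
<http://writetoreply.org/example-consultation/section-1> <http://www.holygoat.co.uk/owl/redwood/0.1/tags/taggedWithTag> <http://writetoreply.org/tag/surrey> .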

So, in my simple view of the world, we have a data source that requires very little effort to generate content for and manage (JISCPress/WriteToReply/WordPress), a method of automatically publishing the data for the semantic web (Triplify) and, with Talis, an API for data storage, data access, query and augmentation. As always, my mantra is ‘I am not a developer’, but from where I’m standing, this high-level ‘workflow’ seems reasonable.
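As a very rough sketch of the ‘query’ end of that workflow, here is how a third party might ask a SPARQL endpoint for every document created by a given agency. The store name and author URI are hypothetical and the endpoint URL pattern should be checked against the Talis Platform documentation; the sioc:has_creator predicate is the one Triplify already emits:

<?php
// Rough sketch only: query a hypothetical Talis Platform store over SPARQL.
// The store name ('jiscpress') and the author URI are illustrative assumptions.
$endpoint = 'http://api.talis.com/stores/jiscpress/services/sparql';
$query = '
    PREFIX sioc: <http://rdfs.org/sioc/ns#>
    SELECT ?doc WHERE {
        ?doc sioc:has_creator <http://writetoreply.org/author/home-office> .
    }';
// Ask for the standard SPARQL JSON results format.
$context = stream_context_create(array('http' => array(
    'header' => "Accept: application/sparql-results+json\r\n",
)));
$json = file_get_contents($endpoint . '?query=' . urlencode($query), false, $context);
print_r(json_decode($json, true));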

The benefits for the JISC community would primarily be felt by using the JISCPress website, in a similar way (albeit with better, more informed design) to the WriteToReply ‘tags’ site. We could search across the full text of funding calls, browse the reports by author, category and tag, and grab news feeds from favourite authors, searches, tags or categories. This is all in addition to the comment, feedback and discussion features we’ve proposed. Further benefits would come from ‘re-publishing’ the site content as semantic data to a platform such as Talis. Not only could there be further Rapid Innovation projects working on this data, but it would be available for any member of the public to query and re-use, too. No longer would our final project reports, often the distillation of our research, sit idle as PDF files on institutional websites and in institutional repositories. If the documentation we produce is worth anything, then it’s worth re-publishing openly as semantic data.

Finally, in order to benefit from the (free) use of Talis Connected Commons, the data being published needs to be licensed under a public domain dedication or a Creative Commons ‘zero’ licence. I suspect Crown Copyright is not compatible with either of these licences, although why the hell public consultation documents couldn’t be licensed this way, I don’t know. Do you? For JISCPress, this would be a choice JISC could make. The alternative is to use the commercial Talis platform or something similar.

As usual, tell me what you think… Thanks.

Triplify: Make your blog mashable

Last week, I wrote about how it is relatively simple to ‘pimp your ride on the semantic web’. Over the weekend, I stumbled upon Triplify, a small ‘plugin’ for pretty much any web publishing platform, which “reveals the semantic structures encoded in relational databases by making database content available as RDF, JSON or Linked Data.” What is so appealing about Triplify is how easy it is to implement, especially alongside a WordPress site.

I can confirm that the three-step installation process is all it takes, although I wouldn’t implement this blindly, as you are, literally, exposing a semantic representation of your database content. In other words, you should look at the configuration file you’re using and check that it’s going to expose the right data, and not clear-text passwords or unpublished posts and comments. Before I implemented it, I realised that it would expose comments on a bunch of posts that I have since made private (they were imported from an old, private blog), so I had to ‘unapprove’ those comments so the script didn’t expose them to the public. A five-minute job. Alternatively, the script could probably be modified to work around my problem, by only exposing comments after a certain date, for example.
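To illustrate, the configuration file is essentially a set of SQL queries, one per type of thing you expose, so controlling what gets published is a matter of writing the WHERE clauses carefully. The fragment below is an illustration rather than the configuration that ships with Triplify (the array keys and column aliases may differ from the real file), but it shows the idea, including the kind of date restriction I mention above:

$triplify['queries']=array(
    // Only published posts; never drafts, private posts or revisions.
    'post'=>'SELECT ID AS id, post_title AS title, post_content AS content,
             post_date AS created FROM wp_posts
             WHERE post_status = "publish" AND post_type = "post"',
    // Only approved comments, and only those made after a given date
    // (one way around the problem with old, imported comments described above).
    'comment'=>'SELECT comment_ID AS id, comment_content AS content,
                comment_date AS created FROM wp_comments
                WHERE comment_approved = "1" AND comment_date > "2008-01-01"',
);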

The end result is that, with a WordPress site, you expose a semantic representation of your users, posts, pages, tags, categories, comments and attachments as RDF (N-Triples) and JSON-formatted data (for JSON, just add ‘?t-output=json’ to the end of the URI). As I said, though, it could be used on any database-driven web application. Here’s what you get when you expose the high-level links to your content:


<http://blog.josswinn.org/triplify/> <http://www.w3.org/2000/01/rdf-schema#comment> "Generated by Triplify V0.5 (http://Triplify.org)" .
<http://blog.josswinn.org/triplify/> <http://creativecommons.org/ns#license> <http://creativecommons.org/licenses/by/2.0/uk/> .
<http://blog.josswinn.org/triplify/post> <http://www.w3.org/1999/02/22-rdf-syntax-ns#type> <http://www.w3.org/2002/07/owl#Class> .
<http://blog.josswinn.org/triplify/attachment> <http://www.w3.org/1999/02/22-rdf-syntax-ns#type> <http://www.w3.org/2002/07/owl#Class> .
<http://blog.josswinn.org/triplify/tag> <http://www.w3.org/1999/02/22-rdf-syntax-ns#type> <http://www.w3.org/2002/07/owl#Class> .
<http://blog.josswinn.org/triplify/category> <http://www.w3.org/1999/02/22-rdf-syntax-ns#type> <http://www.w3.org/2002/07/owl#Class> .
<http://blog.josswinn.org/triplify/user> <http://www.w3.org/1999/02/22-rdf-syntax-ns#type> <http://www.w3.org/2002/07/owl#Class> .
<http://blog.josswinn.org/triplify/comment> <http://www.w3.org/1999/02/22-rdf-syntax-ns#type> <http://www.w3.org/2002/07/owl#Class> .

Here’s an example of what you get when you expose the full content:


<http://blog.josswinn.org/triplify/post/154> <http://www.w3.org/1999/02/22-rdf-syntax-ns#type> <http://rdfs.org/sioc/ns#Post> .
<http://blog.josswinn.org/triplify/post/154> <http://rdfs.org/sioc/ns#has_creator> <http://blog.josswinn.org/triplify/user/1> .
<http://blog.josswinn.org/triplify/post/154> <http://purl.org/dc/terms/created> "2008-10-06T05:55:25"^^<http://www.w3.org/2001/XMLSchema#dateTime> .
<http://blog.josswinn.org/triplify/post/154> <http://rdfs.org/sioc/ns#content> "Up early to go to Sheffield for LPI exams. The last week has left me underprepared. Never mind." .
<http://blog.josswinn.org/triplify/post/154> <http://purl.org/dc/terms/modified> "2008-10-06T20:12:15"^^<http://www.w3.org/2001/XMLSchema#dateTime> .

...

<http://blog.josswinn.org/triplify/post/154> <http://www.holygoat.co.uk/owl/redwood/0.1/tags/taggedWithTag> <http://blog.josswinn.org/triplify/tag/27> .

...

<http://blog.josswinn.org/triplify/post/154> <http://www.holygoat.co.uk/owl/redwood/0.1/tags/taggedWithTag> <http://blog.josswinn.org/triplify/tag/41> .
<http://blog.josswinn.org/triplify/post/154> <http://www.holygoat.co.uk/owl/redwood/0.1/tags/taggedWithTag> <http://blog.josswinn.org/triplify/tag/42> .

...

<http://blog.josswinn.org/triplify/post/154> <http://sdp.iasi.rdsnet.ro/semantic-wordpress/vocabulary/belongsToCategory> <http://blog.josswinn.org/triplify/category/22> .

...

<http://blog.josswinn.org/triplify/tag/154> <http://www.w3.org/1999/02/22-rdf-syntax-ns#type> <http://www.holygoat.co.uk/owl/redwood/0.1/tags/Tag> .
<http://blog.josswinn.org/triplify/tag/154> <http://www.holygoat.co.uk/owl/redwood/0.1/tags/tagName> "valentine" .

You can choose to expose different levels of information in your HTML source. If you have more than a moderate amount of content, you’ll probably want to expose just the top-level links, as in the first example, and let the users of your data dig deeper. You’ll also note that you can (and should) attach a license to your data.
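Users of your data can also dig in programmatically. Here’s a minimal sketch of consuming the JSON view mentioned earlier, using one of the post URIs from my examples above; the structure of the decoded array is whatever Triplify returns, so inspect it before relying on it:

<?php
// Minimal sketch: fetch the JSON representation of a single post by appending
// '?t-output=json' to its Triplify URI, then decode it for re-use or mashing up.
$uri = 'http://blog.josswinn.org/triplify/post/154?t-output=json';
$data = json_decode(file_get_contents($uri), true);
print_r($data);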

A number of namespaces are recognised, as well as a WordPress-specific vocabulary:


$triplify['namespaces']=array(
'vocabulary'=>'http://sdp.iasi.rdsnet.ro/semantic-wordpress/vocabulary/',
'rdf'=>'http://www.w3.org/1999/02/22-rdf-syntax-ns#',
'rdfs'=>'http://www.w3.org/2000/01/rdf-schema#',
'owl'=>'http://www.w3.org/2002/07/owl#',
'foaf'=>'http://xmlns.com/foaf/0.1/',
'sioc'=>'http://rdfs.org/sioc/ns#',
'sioctypes'=>'http://rdfs.org/sioc/types#',
'dc'=>'http://purl.org/dc/elements/1.1/',
'dcterms'=>'http://purl.org/dc/terms/',
'skos'=>'http://www.w3.org/2004/02/skos/core#',
'tag'=>'http://www.holygoat.co.uk/owl/redwood/0.1/tags/',
'xsd'=>'http://www.w3.org/2001/XMLSchema#',
'update'=>'http://triplify.org/vocabulary/update#',
);

So, what’s the point in doing this? Well, it’s fairly trivial and, if you think that structured, linked, machine-readable, licensed data is a Good Thing, why not? The Triplify website lists a number of advantages:

Such a triplification of your Web application has tremendous advantages:

  • The installations of the Web application are better found and search engines can better evaluate the content.
  • Different installations of the Web application can easily syndicate arbitrary content without the need to adopt interfaces, content representations or protocols, even when the content structures change.
  • It is possible to create custom tailored search engines targeted at a certain niche. Imagine a search engine for products, which can be queried for digital cameras with high resolution and large zoom.

Ultimately, a triplification will counteract the centralization we faced through Google, YouTube and Facebook and lead to an increased democratization of the Web

The vision of the semantic web and semantic publishing is one of meaningfully identifying objects (and people) on the Internet and showing their relationships. This should improve searches for things on the web, but also improve how we exchange knowledge, re-use information and clarify our identity on the web. It’s an ambitious task, but one made easier with tools like Triplify. The semantic web also raises questions over individual privacy: if data is well formed and accessible, it may be easier to control and therefore censor. The creator of Triplify recently gave a technical presentation on Triplify and how it is being used to publish data collected by the OpenStreetMap project. It shows how geodata exposed in this way can result in mashup applications that directly benefit you and me.

Open Education Project Blueprint

Each participant on the Mozilla Open Education Course has been asked to develop a project blueprint. Here is the start of mine. It’s basically a ‘Personal Learning Environment’ (PLE) ((See Personal Learning Environments: Challenging the dominant design of educational systems)) and I’m going to try to show how WordPress MU is a good technology platform for an institution to easily and effectively support a PLE. I’m going to place an emphasis on ‘identity’ because it’s something I want to learn more about.

Short description

University students are at least 18 years old and have spent many years unconsciously accumulating or deliberately developing a digital identity. When people enter university they are expected to accept a new digital identity, one which may rarely acknowledge their preceding experience and productivity, or allow them to exploit it easily. Students are given a new email address and a university ID, are expected to submit coursework using new, institutionally unique tools, and develop a portfolio of work over three to four years which is set apart from their existing portfolio of work and often difficult to fully exploit after graduation.

I think this will be increasingly questioned and resisted by individuals paying to study at university. Both students and staff will suffer this disconnect caused by institutions not employing available online technologies and standards rapidly enough. There is a legacy of universities expecting and being expected to provide online tools to staff and students. This was useful and necessary several years ago, but it’s now quite possible for individuals in the UK to study, learn and work apart from any institutional technology provision. For example, Google provides many of these tools and will have a longer relationship with the individual than the university is likely to.

Many students and staff are relinquishing institutional technology ties, and an indicator of this is the massive percentage of students who do not use their university email address (96% in one case study). In the UK, universities are keen to accept mature, work-based and part-time students. For these students, university is just a single part of their lives and should not require the development of a digital identity that mainly serves the institution rather than the individual.

How would it work?

Students identify themselves with their OpenID, which authenticates against a Shibboleth Service Provider. ((See the JISC Review of OpenID.)) They create, publish and syndicate their coursework, privately or publicly, using the web services of their choice. Students don’t turn in work for assessment, but rather publish their work for assessment under a CC license of their choice.

It’s basically a PLE project blueprint with an emphasis on identity and data portability. I’m pretty sure I’m not going to get a fully working model to demonstrate by the end of the course, but I will try to show how existing technologies could be stitched together to achieve what I’m aiming for. Of course, the technologies are not really the issue here; the challenge is showing how this might work in an institutional context.

I think it will be possible to show how this could work technically using a single platform such as WordPress, which has Facebook Connect, OAuth, OpenID, Shibboleth and RPX plugins. WordPress is also microformat-friendly, and profile information can easily be exported in the hCard format. hResume would be ideal for developing an academic profile. The Diso project is leading the way in this area.
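As a hand-written illustration of what that means (this is not WordPress’s actual output, and the email address is a placeholder), an hCard profile is just ordinary HTML with agreed class names, so any tool that understands the microformat can re-use it:

<!-- Illustrative hCard only; the email address is a placeholder. -->
<div class="vcard">
  <a class="url fn" href="http://blog.josswinn.org">Joss Winn</a>,
  <span class="role">Technology Officer</span>,
  <span class="org">University of Lincoln</span>,
  <a class="email" href="mailto:name@example.org">name@example.org</a>
</div>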

Similar projects:

UMW Blogs?

Open Technology:

OpenID, OAuth, RPX, Shibboleth, RSS, Atom, Microformats, XMPP, OPML, AtomPub, XML-RPC + WordPress

Open Content / Licensing:

I’ll look at how Creative Commons licensing may be compatible with our staff and student IP policies.

Open Pedagogy:

No idea. This is a new area for me. I’m hoping that the Mozilla/CC Open Education course can point me in the right direction for this. Maybe you have some suggestions, too?

History of the Internet, PICOL and CC video

Just a couple of videos which I came across by accident. Both demonstrate how well information can be communicated through animated graphics and images. The first, History of the Internet, “is an animated documentary explaining the inventions from time-sharing to filesharing, from Arpanet to Internet.” I read Where Wizards Stay Up Late this year, which is a compelling read about the same subject. I can imagine the video being used as an effective teaching resource in class with the book included on a reading list.

[vimeo 2696386]

The video looks fantastic in HD on my 24″ iMac display 🙂 One of the reasons for this is the use of the PICOL icons, which are an impressive attempt to “find a standard and reduced sign system for electronic communication.” PICOL stands for Pictorial Communication Language and the icons are CC licensed. While reading about the PICOL project, I came across a decent video introducing Creative Commons, which I hadn’t seen before. I think I’ll use it for my Thinking Aloud seminar later this month.

These days are full

I am conscious that it’s been almost a month since I last wrote here but that is largely due to my work on other projects, websites and blogs.  Here’s an overview of some of the work I’m currently involved in. If you’re working on similar projects or want to discuss or collaborate on any of this, do get in touch.

The Learning Lab

I recently wrote a brief summary of the work I’ve been doing under the ‘Learning Lab’ banner since I started my work as Technology Officer in the Centre for Educational Research and Development. WordPressMU occupied a large chunk of my summer, though I feel I have a good understanding of it now and can relax a little while supporting staff and students who wish to use it. It will soon be moving to its new, permanent home at http://dev.lincoln.ac.uk

One of the unexpected outcomes of working on WordPressMU was the realisation that not only training but a different model of support is key to sustaining and improving the use of blogs and other Web 2.0 tools.  I’m keen to advocate and support the user-to-user support model that most open source and social web services develop rather than the traditional user-to-professional, ‘Help Desk’ model that exists for much of the software provided by the university. Models of user support are not something I’ve taken much of an interest in until recently, but the reality is that I alone am unable to support the growing adoption of WordPressMU at the university and I need to encourage staff and students to help themselves wherever possible.

Having said that, with colleagues in the Library and Research Office, I’m also planning to offer regular staff training sessions on the use of Web 2.0 tools in education and I’m visiting classes to give one hour introductions to WordPress, which is a good opportunity to work with and learn from both students and staff. In addition to this, I’m contributing towards the revision of policy documents which ensure that these new tools are used effectively and appropriately.

Lincoln Academic Commons

This is something I’m developing to promote and support the various initiatives at the university which provide Open Access to our research, teaching and learning. I started working for the university on a JISC-funded project to develop an institutional repository, having been working as an Archivist and Project Manager of a Digital Asset Management system in my previous job. Then, a few months ago, I heard about the difficulties people in the Lincoln Business School were having trying to establish a series of ‘Occasional Working Papers’ (OWPS) using existing portal software provided by the university. At the same time, I was looking at the Open Journal System for publishing Open Access journals, so I suggested that we set up the OWPS using OJS. Seeing what a great piece of software OJS is, I then suggested we use it for NEO, a planned journal of student research which we intend to launch in the Spring. Finally (and this is where it gets really interesting for me), Mike Neary, Dean of Teaching and Learning and Head of the Centre for Educational Research and Development, is advocating a more critical engagement with the debates about the marketisation of higher education through teaching practice. He’s calling this critical engagement, ‘Teaching in Public’, which encompasses the idea of an Academic Commons.

Professor Neary argues that the uncertainty over the university’s mission requires the notion of ‘the public’ to be reconceptualised, so as to remake the university as an academic project that confronts the negative consequences of academic capitalism and the commodification of everyday life. He will present Karl Marx’s concept of the ‘general intellect’ as an idea through which the university might be remade.

I contributed to a book chapter Mike has recently written which elaborates on this in more detail. You can read more about that in a previous blog post.

Access Grid

A project I’ve been leading for some months now is the installation of an Access Grid node at the university. We were fortunate to be approached several months ago by the Mental Health Research Network (MHRN), who offered to fund the installation of an AG node to support their staff based at the university and to provide a facility that is otherwise missing in Lincolnshire. It’s been a really interesting and useful project for me, as I’ve learned how the university undertakes a tendering exercise and I’ve been able to work with colleagues from across the university. The node should be available to use sometime in January. The Access Grid project is yet another technology-based initiative at the university which further improves our research infrastructure and supports collaboration and the wider exchange of ideas among colleagues worldwide.

Anytime, Anywhere Computing

This is a new project that brings together three originally separate proposals that the ICT department and CERD had been planning to take forward. It covers:

  1. ubiquitous wireless networking
  2. so-called ‘thin client’ technology as an alternative to desktop PCs and the management of software applications and resources
  3. access via user-owned devices, such as low-cost and increasingly popular ‘netbook’ hardware

We’re just starting to look at how we might offer the same user experience and services on our wireless network as we provide on our wired network; currently, the wireless network only offers Internet access. At the same time, we’re interested in evaluating new virtualisation technologies for the desktop. The ICT department is concluding a server consolidation project which is virtualising much of our server infrastructure. This brings many benefits and allows the ICT department to provide a more flexible service to users. Our new study will look at whether similar virtualisation technology can bring benefits to desktop users, too. The third part of this project is based on a proposal I made a few months ago to evaluate the user experience and support issues that the new generation of ‘netbooks‘ introduces. Smaller screens, Linux operating systems, an emphasis on web-based applications and the rapid adoption of these low-cost devices, often aimed at the education sector, all require a better understanding of the impact of this technology and of the influence it may have in driving students to use more and more web-based applications.

Are you working on similar initiatives? If so, please leave a comment and share your experiences.

Academic Commons: Learning from FLOSS

While preparing my ‘Thinking Aloud’ seminar on Academic Commons, I was pleased to see that the Wikipedia entry for Open Educational Resources notes:

What has still not become clear by now to most actors in the OER domain is that there are further links between the OER and the Free / Libre Open Source Software (FLOSS) movements, beyond the principles of “FREE” and “OPEN”. The FLOSS model stands for more than this and, like e.g. Wikipedia, shows how users can become active “resource” creators and how those resources can be re-used and freely maintained. In OER on the other hand a focus is still on the traditional way of resource creation and role distributions. [my emphasis]

As it happens, this is a significant interest of mine and one I touch upon in a forthcoming book chapter I contributed to.  We conclude:

The idea of student as producer encourages the development of collaborative relations between student and academic for the production of knowledge. However, if this idea is to connect to the project of refashioning in fundamental ways the nature of the university, then further attention needs to be paid to the framework by which the student as producer contributes towards mass intellectuality. This requires academics and students to do more than simply redesign their curricula, but go further and redesign the organizing principle, (i.e. private property and wage labour), through which academic knowledge is currently being produced. An exemplar alternative organizing principle is already proliferating in universities in the form of open, networked collaborative initiatives which are not intrinsically anti-capital but, fundamentally, ensure the free and creative use of research materials. Initiatives such as Science Commons, Open Knowledge and Open Access, are attempts by academics and others to lever the Internet to ensure that research output is free to use, re-use and distribute without legal, social or technological restriction (www.opendefinition.org). Through these efforts, the organizing principle is being redressed creating a teaching, learning and research environment which promotes the values of openness and creativity, engenders equity among academics and students and thereby offers an opportunity to reconstruct the student as producer and academic as collaborator. In an environment where knowledge is free, the roles of the educator and the institution necessarily change. The educator is no longer a delivery vehicle and the institution becomes a landscape for the production and construction of a mass intellect in commons. ((Neary, M. with Winn, J. (2009) ‘Student as Producer: Reinventing the Undergraduate Curriculum‘ in M. Neary, H. Stevenson, and L. Bell, (eds) (2009) The Future of Higher Education: Policy, Pedagogy and the Student Experience, Continuum, London))

I’d be interested in discussing these ideas with anyone who has similar interests. I don’t doubt I have a lot to learn from others.

Having been part of the open source community for the last eight years, I am utterly convinced there’s a lot to learn from the collaborative and highly productive social and creative processes that allow software developers, documentation writers and application end-users to work together so effectively. Over decades, they have developed tools to aid this process (mailing lists, IRC, revision control, the GPL and other licenses, etc.). Isn’t it time that more of us worked and learned in this way? Creative Commons is our license to do so; the Internet is our means of connecting with others. The tools are mostly available to us and, where they are inappropriate for our disciplines, we should modify them. In the FLOSS community, knowledge is free and the community is thriving.

As the Wikipedia OER article states, the transmission of knowledge from teacher to learner is slowly being freed, but the social processes of knowledge production remain largely the same. Students are still largely positioned as consumers of knowledge, and the hierarchical relationship between teacher and learner is increasingly contractual rather than personal. There are exceptions, I know, but there are few genuine opportunities for teacher and learner to work together productively, especially on a group scale. Institutions like my own are making some efforts to change the ‘Learning Landscape’, but I don’t think it is necessarily an institutional responsibility. It’s the responsibility of individuals (especially in an environment as relatively loosely controlled as a university) to work out the social relations and processes of peer production for themselves, looking for support from others when they need it.

How can the model by which the FLOSS community works so productively and openly be translated to every academic discipline and change the way that knowledge is both produced and transmitted? Technology is at the core of this enterprise and I suspect many teachers and learners feel alienated and divided by it.

By the way, here’s my Thinking Aloud presentation.