David Kernohan complained recently that I don’t blog any more. Well, it’s true that I don’t blog *here* as much as I used to because over the last year or so I’ve been running several projects and, when I have time, often blog on the respective project sites.
However, I am still blogging here, but there’s been a problem with the FeedBurner RSS feed. It seems to have been timing out and hasn’t picked up any posts since April 2012. I’ve been trying to fix it this morning, but FeedBurner is pretty strict about how long it will wait for a feed before timing out. I’m going to have to overhaul the blog’s server to really fix it. In the meantime, why not subscribe to the actual blog feed, where you’ll see that I’ve written a number of posts since last April and am currently exploring the role of the university in the history of hacking.
I repeat this to people all the time. If I write it down here, then I only have to share a link 😉
RSS feeds are a very popular way of syndicating content from one source website to another subscribing website.
Some university websites, such as the Institutional Repository or University blogs, produce RSS feeds, but not all university websites can easily subscribe to them. However, by using ‘feed2js’, any website can display a syndicated news feed in just a few steps. This way, you can embed your blog or publication list in Blackboard or on your personal web profile, for example.
Creating a publications list from the repository
We use EPrints as our Institutional Repository. EPrints provides news feeds (RSS, RSS2, Atom) for every search query. Therefore you can create a news feed of publications by Faculty, School, Department, Research Team or Staff member. Having created the news feed, you can then display that list of publications on any web page of your choice.
The advantage of this is that every time you deposit something new in the repository, the list will automatically update on your chosen web page. You never need to edit your publications list again.
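Before wiring a feed into a web page, it can help to check what it actually contains. Here’s a minimal sketch using the third-party feedparser library; the search URL is a made-up example, so substitute the RSS 2.0 link from your own repository search (see the steps below).

```python
# Quick look inside an EPrints search feed. Requires: pip install feedparser
# The URL below is hypothetical -- use the RSS 2.0 link copied from your
# own repository's search results page.
import feedparser

FEED_URL = "http://eprints.example.ac.uk/cgi/search?q=yourname&output=RSS2"

feed = feedparser.parse(FEED_URL)
print(feed.feed.get("title", "Publications"))
for entry in feed.entries:
    print("-", entry.get("title", "Untitled"), "=>", entry.get("link", ""))
```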
Steps to embedding your feed
Create your publications list. Use the Advanced Search page to construct your publications list. If you want a personal publications list, simply search for your name. If you have a common name, your search may return publications that belong to someone else. In that case, you should keyword all your repository items with a unique ‘key’, such as ‘q73g’. You can then search for that keyword and your name and only your items will be returned by the search.
Copy your feed URL. Typically, you need to right-click on the orange RSS 2.0 icon on the search results page and copy the link.
Go to http://feed2js.org/index.php?s=build and paste your link into the URL box. If you are a member of the University of Lincoln, contact me for a better link, hosted at the university.
From this point on, you can click the ‘Preview Feed’ button at any time to see what your feed will look like. Read the listed options carefully. They allow you to choose whether you wish to display the title of the feed; whether you wish to show the full content of the feed or just the titles; whether you wish to show images or video content in the feed (if there is any in the original source), etc. Experiment by previewing the feed to see what looks best for you.
When you are happy with your feed, click the ‘Generate JavaScript’ button. Copy everything inside the ‘Get Your Code Here’ box. Note how the box scrolls. Copy it all!
Paste the JavaScript into the appropriate place in your website’s HTML code. Save your web page and examine your work. The embedded feed should fit in with your existing website design and use the colour scheme you have chosen for your site. If you wish to make the publications list stand out from your web page, read the page about dressing up your output.
There is no more you need to do. The feed will automatically update every hour or so with any new content from the source website.
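Incidentally, if you would rather generate the HTML yourself on a server you control, instead of relying on feed2js’s hosted JavaScript, the same effect takes only a few lines. This is a rough sketch of what feed2js does for you, again using the feedparser library and a placeholder feed URL:

```python
# Rough server-side equivalent of the feed2js approach: fetch a feed and
# turn its items into an HTML list for pasting into a page.
# Requires: pip install feedparser. The feed URL is a placeholder.
import html
import feedparser

FEED_URL = "http://eprints.example.ac.uk/cgi/search?q=yourname&output=RSS2"

feed = feedparser.parse(FEED_URL)
items = []
for entry in feed.entries[:10]:  # the ten most recent items
    title = html.escape(entry.get("title", "Untitled"))
    link = html.escape(entry.get("link", "#"))
    items.append(f'  <li><a href="{link}">{title}</a></li>')

print('<ul class="publications">\n' + "\n".join(items) + "\n</ul>")
```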
This morning, I found myself on Baseline Scenario, a well-known site which discusses the economic crisis. I noticed that the authors had laboured over producing a PDF version of each month of their archive by copying and pasting into Word and exporting a PDF. There’s a nicer way of doing this, I think. When you’ve done it once, the whole process should take you no more than ten minutes on any subsequent occasion.
WordPress provides a way to filter content by date. In our example, we’ll grab the RSS feed from the first month of publications: http://baselinescenario.com/2008/09/feed The permalink structure is clear enough on WordPress. For Blogger, it’s nowhere near as intuitive.
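Since the pattern is just /YYYY/MM/feed, a short loop will generate a feed URL for every month of the archive. A quick Python sketch (adjust the years to suit the site):

```python
# Generate one feed URL per month, following WordPress's /YYYY/MM/feed
# convention. Baseline Scenario's archive starts in September 2008.
SITE = "http://baselinescenario.com"

for year in (2008, 2009):
    for month in range(1, 13):
        print(f"{SITE}/{year}/{month:02d}/feed")
```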
The feed will display the articles in descending date order. When you are reading the PDF or eBook version, you don’t want to read the last article first, as you would on the website. To reverse the order of the feed, use Yahoo Pipes (or for WordPress, see @mhawksey’s comment below). You can clone my example. If you’ve not used Yahoo Pipes before, don’t worry. You just need a Yahoo account. The example I give is as simple a pipe as you will see and should make sense as soon as you look at it.
Once you’ve created the pipe of the feed in ascending order, save and run the pipe. Look for the RSS icon and copy the pipe’s RSS link, which should look like this: http://pipes.yahoo.com/pipes/pipe.run?_id=cb438b51b2819eb1f4f5ec6f10daf09e&_render=rss
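(Incidentally, if you’d rather not rely on Pipes, the reversal itself is trivial to do locally; the sketch below sorts a feed’s entries into ascending date order with feedparser, assuming each entry carries a publication date, as WordPress feeds do. For the FeedBooks step that follows you’ll still need a hosted feed, though, so this is only useful if you’re assembling the document yourself.)

```python
# Reverse a feed into oldest-first order locally, without Yahoo Pipes.
# Requires: pip install feedparser. Assumes each entry has a pubDate,
# which WordPress feeds provide.
import feedparser

feed = feedparser.parse("http://baselinescenario.com/2008/09/feed")
oldest_first = sorted(feed.entries, key=lambda e: e.published_parsed)

for entry in oldest_first:
    print(entry.get("published", "?"), "-", entry.get("title", "Untitled"))
```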
Next, go to FeedBooks. Sign up for an account if you don’t already have one. Now, we create a Newspaper.
Click on ‘Add a RSS feed’. Give it a name (in our case ‘September 2008’) and paste your RSS feed into the box. Once it’s found and accepted your feed, click ‘Publish’.
You can now click on the name of the specific feed and you’ll be presented with a page offering ePub, Kindle and PDF versions of your feed. Here’s the Baseline Scenario September 2008 example.
That’s it. You can do it with whole sites, too, if you like. Here’s one I did earlier (Blogger). The only thing you need to remember is to ensure that the RSS feed contains all the items you’re looking for. For the Blogger site, the source feed looks like this: http://www.blogger.com/feeds/27481991/posts/default?max-results=1000 A thousand items is more than enough to capture this site for quite some time. For WordPress, the site owner has to change their Reading Settings to include sufficient items. For the Baseline Scenario, they need to set this at a number high enough to ensure that a month’s worth of posts is included. I would just set it at 3000 and then forget about it. That would mean the entire site could be captured this way for the next year or so.
In my previous job as Audiovisual Archivist, I spent a lot of time examining various metadata standards in detail; hours spent poring over PBCore, METS, MODS, MIX, EXIF and IPTC/XMP, because we were designing a content model for an in-house Digital Asset Management system. I thought I had put it all behind me yet here I am staring at Phil Barker’s informative post about ‘metadata and resource description’ and it’s all coming back to me… Arrghhh 🙂
Workpackage six of the Chemistry.fm project aims to:
Plan the storage, delivery and marketing of the course.
Choose a metadata standard
Evaluate third-party hosting such as Flickr, Slideshare and YouTube as well as JORUM and the IR.
Ah, if only life were as simple as a series of bullet points!
As I was creating the project poster yesterday, I was reminded of the various ways that our project OERs could be ‘broadcast’. Although collaboration with our community radio station, SirenFM, is core to the approach of our project, we all know that there are many ways for anyone to be a broadcaster on the web, and part of the fun of this project for me is being able to explore the different ways that educational content can be pulled and pushed between subscribing students and members of the public.
My plan at the moment is to use our Institutional Repository as the ‘canonical reference’ for the OERs. During our JISC-funded LIROLEM project, we developed EPrints to better accommodate multimedia resources and it makes sense to use a versioned digital archive that supports embedded media enriched by copious amounts of metadata. (I know it’s a requirement to use JORUM, too, but at the first Programme Meeting, it became clear that JORUM can be used simply as a directory where we can register URIs of existing OERs, so that’s what I’ll be doing).
Anyway, Archivists, have you ever feasted your eyes on the source code of an EPrint? Of course you have. Here’s a reminder.
title [name=”DC.title”]
author [name=”DC.creator”]
date [name=”DC.date”]
url [name=”DC.identifier”]
technical information [name=”DC.format”]
language [hmmm, nowhere to be seen. Can we add that?]
subject classification [name=”DC.subject”]
keywords/tags [there is no “DC.keyword” term, so EPrints uses name=”eprints.keywords”]
comments [We use the SNEEP plugins but the comments are not showing in the source code – do we need to make sure they are crawlable? Some people aren’t keen…]
description [name=”DC.description”]
I’ve highlighted the Dublin Core terms above but, happily, the data is also available in several other alternate formats.
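As a quick sanity check on those Dublin Core terms, a few lines of Python will list every DC.* meta tag that an EPrint’s abstract page exposes. Standard library only; the URL is a placeholder for any EPrint in your repository:

```python
# List the Dublin Core <meta> tags in an EPrints abstract page.
# Standard library only; the URL is a placeholder.
from html.parser import HTMLParser
from urllib.request import urlopen

class DCMetaParser(HTMLParser):
    def handle_starttag(self, tag, attrs):
        if tag == "meta":
            attrs = dict(attrs)
            name = attrs.get("name") or ""
            if name.startswith("DC."):
                print(f"{name}: {attrs.get('content', '')}")

page = urlopen("http://eprints.example.ac.uk/123/").read().decode("utf-8", "replace")
DCMetaParser().feed(page)
```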
Now, we could choose to lump all the OERs that we create into one single EPrint, but that doesn’t give us much flexibility. Remember that EPrints is serving as the canonical reference for the OERs, not necessarily the final presentation layer that people will actually use to browse, download and use the resources. So if we were to group the OERs into sets of items, each constituting an EPrint, and then relate those EPrints to each other using the “DC.isPartOf” property, we would, from the point of view of metadata, be creating a consistent whole while giving ourselves some flexibility in how we ‘broadcast’ the content of the course.
If we consider the course MindMap that we knocked up a while back, we might decide to create a single EPrint for each of the five major ‘nodes’ of the course. Doing this would then give us an RSS 1.0 (RDF), RSS 2.0 and Atom feed for the course, where each node was an item.
Before I move on with this, look at the export formats that EPrints offers for a query. Imagine that the course could be exported in each of these ways:
The zip export allows you to download the entire query and all its resources at once. The HTML citation format produces some HTML you could copy and paste into any web page; it could just as easily be dropped into Blackboard as onto any other (and anybody’s) web page. BibTeX would allow you to browse the course via your preferred reference management software, and JSON… I still don’t completely get it, but it’s pretty fancy, I know that much.
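(For what it’s worth, the JSON export is just the same metadata in a form that’s trivial to consume from code. Something like the sketch below, with a placeholder export URL, would load a whole query’s records as Python dictionaries.)

```python
# Load an EPrints JSON export; each record becomes a Python dictionary.
# Standard library only; the export URL is a placeholder, and the field
# names shown are illustrative.
import json
from urllib.request import urlopen

records = json.load(urlopen("http://eprints.example.ac.uk/cgi/search?q=chemistry&output=JSON"))
for record in records:
    print(record.get("title"), "-", record.get("date"))
```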
Anyway, if each of the mindmap nodes is an ‘item’ in the RSS feed, then perhaps we can use that to feed a WordPress site, using the FeedWordPress plugin? Nope. It doesn’t seem to work. FeedWordPress recognises the feed but doesn’t import anything. Testing it with another feed based on keywords does work, but the information included in the feed is sparse, so that’s no good. By the way, the EPrints RSS 2.0 feed does include the xmlns:media=”http://search.yahoo.com/mrss” namespace and marks up the preview thumbnails accordingly.
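To see that markup in action, here’s a small sketch that walks an RSS 2.0 feed and prints each item’s media:thumbnail URLs. Standard library only; the feed URL is a placeholder for any EPrints RSS 2.0 feed:

```python
# Print the Media RSS thumbnail URLs found in an RSS 2.0 feed.
# Standard library only; the feed URL is a placeholder.
import xml.etree.ElementTree as ET
from urllib.request import urlopen

MEDIA_NS = "{http://search.yahoo.com/mrss}"  # namespace as declared by EPrints

tree = ET.parse(urlopen("http://eprints.example.ac.uk/cgi/search?q=chemistry&output=RSS2"))
for item in tree.iter("item"):
    title = item.findtext("title", default="Untitled")
    for thumb in item.iter(MEDIA_NS + "thumbnail"):
        print(title, "->", thumb.get("url"))
```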
(Another way to tackle this might be using our newly developed ‘EPrints2Blog’ plugin, which allows a depositor to post information about their new EPrint to a blog of their choice (using XML-RPC). As we deposit the course EPrints, each could be posted to a WordPress site. The resulting feed from the WordPress site does include some embedded media, but it’s still a bit of a hack. No, scrap this idea).
Right, how about this…?
Using EPrints as the canonical source for each of the files for the course, we could create a WordPress site with the addition of the Dublin Core and OAI-ORE plugins for WordPress.
For each WordPress post, this gives us a set of Dublin Core metadata embedded in the page source.
This is more like it. Click on the oai-ore link and look at the source code. It’s too big to display here, but it does what you’d expect and produces an OAI-ORE 1.0 compliant Atom/XML file. Contained within the file is a ‘resource map’ of all the WordPress posts and pages, marked up with Dublin Core and FOAF terms. Thinking about how the course site might be represented in this way, it makes sense to atomise the course even further, so that each of the sub-nodes of the Mind Map is a WordPress post. Using the current course structure, that would result in about 20 separate posts to represent the course. Each post would contain one or more resources such as a PDF, video, audio, slides, etc. Is it worth atomising even further and creating a post for each of these resources, too, I wonder? Quite possibly.
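(A resource map is easy enough to read programmatically, too. The sketch below lists the aggregated resources in an ORE Atom file; the map URL is a placeholder, and the rel value is the one defined by the ORE Atom serialisation.)

```python
# List the aggregated resources in an OAI-ORE Atom resource map.
# Standard library only; the map URL is a placeholder.
import xml.etree.ElementTree as ET
from urllib.request import urlopen

ATOM_LINK = "{http://www.w3.org/2005/Atom}link"
AGGREGATES = "http://www.openarchives.org/ore/terms/aggregates"

tree = ET.parse(urlopen("http://example.com/coursesite/?rem"))  # placeholder
for link in tree.iter(ATOM_LINK):
    if link.get("rel") == AGGREGATES:
        print(link.get("href"), "-", link.get("title") or "")
```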
Unfortunately, the resource map does not include media that are included in each post or page – apparently it’s on the developer’s list of things to do. Maybe we could use some of the project budget to ask Alex, who’s working on the JISCPress project with me, to extend the plugin in this way…
Finally, there’s also a MediaRSS plugin for WordPress, which could enhance the RSS feeds to include all the media used in the course. Here’s an example that includes images by default. I’ve already written about the various feeds that are available for WordPress; with some careful categorisation and tagging, media-rich feeds would be available for different points (‘nodes’) of entry into the course.
Once we are at this point, I guess we’re ready to think about broadcasting the course via Boxee and DeliTV (no time to dig into that now. Sorry!)
Metadata… arrghhh!
p.s. you’ve probably noticed that I’m a bit weak on the EPrints and OAI-ORE stuff, to say the least. Please do pick me up on where I’m going wrong with this. Thanks 🙂
It’s made Dave Winer happy, which is no easy task, so I think PubSubHubbub is worth mentioning here. If it’s working as it should, this post should appear in my Google Reader almost immediately after I’ve published it. That’s because PubSubHubbub is “a simple, open, server-to-server web-hook-based pubsub (publish/subscribe) protocol as an extension to Atom [and RSS].” My blog feed is managed by FeedBurner, which has already implemented the new protocol, as have Google Reader and FriendFeed. They should therefore ‘talk’ to each other in realtime. Watch the video and you’ll see how it works. It’s pretty straightforward. It just takes a company the size of Google to push it through to adoption. The engineers say they were using it like Instant Messaging the night before the demo, which says something about how responsive it is. Technically, it should be another challenge to Twitter in that it allows for a distributed method of near-realtime communication. I’d like to see that. I feel like an idiot communicating within the confines of Twitter, sometimes.
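For anyone wondering what subscribing actually involves, it’s one form-encoded POST to the hub; the hub then calls back to verify, and pushes new entries to your callback as they’re published. A minimal sketch of the subscriber side; the feed and callback URLs are placeholders, and the callback must be a publicly reachable endpoint you control:

```python
# Minimal PubSubHubbub subscription request: one POST to the hub.
# The topic and callback URLs are placeholders; the callback must be a
# public endpoint that answers the hub's verification request.
from urllib.parse import urlencode
from urllib.request import urlopen

data = urlencode({
    "hub.mode": "subscribe",
    "hub.topic": "http://example.com/blog/feed",         # the feed to follow
    "hub.callback": "http://example.com/push-endpoint",  # your endpoint
    "hub.verify": "sync",
}).encode("ascii")

response = urlopen("http://pubsubhubbub.appspot.com/", data=data)  # Google's demo hub
print(response.status, response.reason)
```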
University students are at least 18 years old and have spent many years unconsciously accumulating or deliberately developing a digital identity. When people enter university, they are expected to accept a new digital identity, one which rarely acknowledges, let alone exploits, their preceding experience and productivity. Students are given a new email address and a university ID, are expected to submit course work using new, institutionally unique tools, and must develop a portfolio of work over three to four years which is set apart from their existing portfolio and is often difficult to fully exploit after graduation.
I think this will be increasingly questioned and resisted by individuals paying to study at university. Both students and staff will suffer this disconnect caused by institutions not employing available online technologies and standards rapidly enough. There is a legacy of universities expecting and being expected to provide online tools to staff and students. This was useful and necessary several years ago, but it’s now quite possible for individuals in the UK to study, learn and work apart from any institutional technology provision. For example, Google provides many of these tools and will have a longer relationship with the individual than the university is likely to.
Many students and staff are relinquishing institutional technology ties; an indicator of this is the massive percentage of students who do not use their university email address (96% in one case study). In the UK, universities are keen to accept mature, work-based and part-time students. For these students, university is just a single part of their lives and should not require the development of a digital identity that mainly serves the institution rather than the individual.
How would it work?
Students identify themselves with their OpenID, which authenticates against a Shibboleth Service Provider (see the JISC Review of OpenID). They create, publish and syndicate their course work, privately or publicly, using the web services of their choice. Students don’t turn in work for assessment; rather, they publish their work for assessment under a CC license of their choice.
It’s basically a PLE project blueprint with an emphasis on identity and data-portability. I’m pretty sure I’m not going to get a fully working model to demonstrate by the end of the course, but I will try to show how existing technologies could be stitched together to achieve what I’m aiming for. Of course, the technologies are not really the issue here; the challenge is showing how this might work in an institutional context.
I think it will be possible to show that it’s technically possible using a single platform such as WordPress, which has Facebook Connect, OAuth, OpenID, Shibboleth and RPX plugins. WordPress is also microformat-friendly, and profile information can be easily exported in the hCard format. hResume would be ideal for developing an academic profile. The DiSo project is leading the way in this area.
I’ll look at how Creative Commons licensing may be compatible with our staff and student IP policies.
Open Pedagogy
No idea. This is a new area for me. I’m hoping that the Mozilla/CC Open Education course can point me in the right direction for this. Maybe you have some suggestions, too?