A wealth of valuable data is collected and stored in the systems of complex organisations such as our universities, yet it frequently remains underutilised for reasons ranging from institutional inertia to technological complexity. As budgets contract and competitive pressures increase, the timely and effective exploitation of data is becoming an increasingly important characteristic of the successful organisation; and universities are no exception. From efficient, transparent reporting to data-driven internal decision making and the cost-effective nurturing of new avenues for growth, collaboration or differentiation, there is increasing value in effectively exploiting data to further the institutional mission.
World Wide Web inventor Sir Tim Berners-Lee and others talk compellingly of the value in moving from today’s ‘Web of Documents’ toward a ‘Web of Data’ in which much of the data we already hold is made available – via the architecture and technologies of the Web itself – for manipulation by computers. Pages on the web meant for reading by people would gain structure, so that while you or I might read a postal address off the screen as we do today, software would see the same page and offer to calculate a route to that address, add it to your address book, and more. The research paper stored in your institutional repository would be linked to related papers by the same authors, and placed within context to demonstrate institutional research prowess. The courses offered by your institution would be automatically aggregated with similar courses from elsewhere and made easily accessible to potential students who might never visit your web site or order your prospectus. Relevant data from your institution would be available alongside that from other bodies, powering a range of applications for staff, students, funders, industrial partners and more; the value locked up inside institutional systems would be made available to drive efficiency in today’s procedures whilst creating the opportunities for tomorrow’s.
This vision of a ‘Semantic Web’ has been discussed for years, but a combination of political and commercial will, community readiness, technological capability and openly available data has led to a recent leap forward in adoption of one particular aspect of that vision under the banner of Linked Data.
This concept of Linked Data is attracting attention in quarters unfamiliar with the Semantic Web community from which it emerged. Recent announcements from the UK’s Prime Minister see the Government join existing implementers as diverse as the BBC, Thomson Reuters, Tesco, Best Buy and Johnson & Johnson.
Four simple principles, or rules, laid down by Sir Tim Berners-Lee describe the practicalities of Linked Data, and implementers have been quick to apply them in exposing large collections of data for use and reuse, facilitated by the underlying structure of the web itself. In a world in which no single database is comprehensive, the ability to link related assertions easily across diverse data silos is proving compelling.
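The linking that these rules enable can be illustrated with a minimal sketch in plain Python (no RDF libraries): facts are expressed as subject–predicate–object triples, things are named with HTTP URIs, and because two datasets use the same URI for the same thing, their assertions can be merged directly. All URIs below are hypothetical examples, not real identifiers.

```python
# Triples held by a hypothetical institutional repository.
repository = {
    ("http://example.ac.uk/paper/42", "http://purl.org/dc/terms/title",
     "Linked Data in Universities"),
    ("http://example.ac.uk/paper/42", "http://purl.org/dc/terms/creator",
     "http://example.ac.uk/person/ada"),
}

# Triples held by a separate staff directory.
directory = {
    ("http://example.ac.uk/person/ada", "http://xmlns.com/foaf/0.1/name",
     "Ada Lovelace"),
}

# Because both silos use the same URI for the author, a plain set union
# links the paper to the author's name with no schema negotiation.
graph = repository | directory

def objects(graph, subject, predicate):
    """Return all objects matching a given subject and predicate."""
    return {o for s, p, o in graph if s == subject and p == predicate}

# Follow the link from the paper to its creator, then to the creator's name.
author = objects(graph, "http://example.ac.uk/paper/42",
                 "http://purl.org/dc/terms/creator").pop()
print(objects(graph, author, "http://xmlns.com/foaf/0.1/name"))
# prints {'Ada Lovelace'}
```

In a real deployment the same merge works across institutions, because each URI is globally unique and can be dereferenced over the web to fetch further data about the thing it names.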
This report describes Linked Data, and highlights a number of the sectors in which it is already being put to work.
A series of recommendations outlines ways in which JISC and the wider community might apply Linked Data’s rules to good effect.
The core recommendations are reproduced in the following sections.