


26TH - 27TH JUNE 2002

Summaries and Presentations

Introduction | Programme | Booking Form | Delegates

SPARC: Open Access to Scholarship - New Solutions

Chris Bailey, University of Glasgow: "Introduction"
Presentation: [Powerpoint version] | [HTML version]

Stephen Pinfield, University of Nottingham: "Creating Institutional Repositories"
Presentation: [Powerpoint version] | [HTML version]

Bas Savenije, University of Utrecht: "Creating Open-Access Journals"

Chris Bailey, University of Glasgow: "Creating Change in Scholarly Communication"
Presentation: [Powerpoint version] | [HTML version]

Gail McMillan, Director, Scholarly Communications Project, Virginia Tech
ETDs: Electronic Theses and Dissertations
Building an Institutional Asset in a Global Context

Introduction to ETDs: Electronic Theses and Dissertations
How to begin an ETD program, including people, technology requirements and software
Expectations: Library, students, faculty
Major issues, including copyright and archiving issues

Presentation: [Powerpoint version] | [HTML version]

Joan K. Lippincott, PhD, Associate Executive Director, Coalition for Networked Information:
New Directions for the Networked Digital Library of Theses and Dissertations

The Networked Digital Library of Theses and Dissertations (NDLTD) is an international project that promotes the creation of electronic theses and dissertations (ETDs) in order to widely share the products of research and to prepare graduate students to publish in the electronic environment. ETDs are also a potential building block of content for institutional repositories.

The NDLTD recently completed a strategic planning process to provide a new organizational framework and to confirm the direction of the program. The head of the strategic planning committee will provide a report of the outcomes, will solicit feedback from the attendees, and will discuss how UK universities can become more involved in this international effort.

Presentation: [Powerpoint version] | [HTML version]

John MacColl, Sub-Librarian, Edinburgh University Library & Director of the SELLIC Project:
Electronic Theses and Dissertations: A Strategy for the UK

Presentation: [Powerpoint version] | [HTML version]

Paper: ETDs: a Strategy for the UK [WORD]

Back to top

Lorcan Dempsey, VP of Research, OCLC
Divided By a Common Language: Digital Library Developments in the US and UK

The US and UK share a common language. They share common traditions in library and information management. Despite these continuities, there are also interesting discontinuities in approach and style. This presentation considered some of these continuities and discontinuities in the context of recent developments in digital library and related areas. It then considered some of the shared challenges faced by both communities as they consider how best to support research, learning and cultural engagement in a time of institutional change.

Presentation: [Powerpoint version] | [HTML version]

Liz Lyon, Director, UKOLN, University of Bath
Tony Hey, Director UK e-Science Core Programme

Shaping the e-future? Grids, Web Services and Digital Libraries

The presentation aims to identify and draw together the threads linking the three development areas of Grids, Web Services and Digital Libraries, and to highlight areas where the exchange of experience and re-application of concepts will lead to a broadening of existing ideas and help to shape our vision for the future.

The presentation gives an introduction to Grid computing activities in the UK, focusing in particular on the e-Science Programme. An overview of the Programme is given, showing the range of disciplines involved, with brief descriptions of some of the key projects and the major challenges faced by the Programme. The Open Grid Services Architecture is introduced and an update on progress in this area presented. The relationship between Grid computing and Web Services is described and the inter-relationships with key business players drawn out.

A progress update on work developing the JISC Information Environment architecture is presented, focusing on the impact of Web Services developments and the associated standards. Finally, the session explores some of the issues arising from the Web Services approach, including the functionality of infrastructure components such as registries, the application of semantic and knowledge technologies, the re-application of Grid concepts within Digital Libraries, and some new and emerging approaches to the management of services, including autonomic computing and the provision of e-Utilities.

Presentation: [Powerpoint version] | [HTML version]

Paper: The Data Deluge: An e-Science Perspective (Tony Hey) [WORD]

Back to top

Carl Lagoze, Digital Library Scientist, Cornell University
Andy Powell, Assistant Director, Distributed Systems and Services, UKOLN, University of Bath
Martin Halbert, Director, Library Systems, Emory University, Robert Woodruff Library.

The Open Archives Initiative: Progress and Practice

The three panelists described recent work on and with the Open Archives Initiative Protocol for Metadata Harvesting (OAI-PMH). Carl Lagoze, who, along with Herbert Van de Sompel, serves as the OAI executive, described work to develop Version 2 of the Protocol. Andy Powell and Martin Halbert described implementations and practical experiences.


Carl Lagoze: [Powerpoint version] | [HTML version]

Andy Powell: [Powerpoint version] | [HTML version]
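The harvesting model the panel described can be sketched minimally as below. This is an illustrative sketch, not any panelist's implementation: the base URL is a placeholder, and a real harvester would loop over resumption tokens and handle protocol errors.

```python
from urllib.parse import urlencode
import xml.etree.ElementTree as ET

# XML namespaces defined by OAI-PMH v2 and unqualified Dublin Core
OAI_NS = "{http://www.openarchives.org/OAI/2.0/}"
DC_NS = "{http://purl.org/dc/elements/1.1/}"

def list_records_url(base_url, metadata_prefix="oai_dc", resumption_token=None):
    """Build a ListRecords request URL (OAI-PMH requests are plain HTTP GETs)."""
    if resumption_token:
        # Continuation of an earlier, partial response
        args = {"verb": "ListRecords", "resumptionToken": resumption_token}
    else:
        args = {"verb": "ListRecords", "metadataPrefix": metadata_prefix}
    return base_url + "?" + urlencode(args)

def parse_records(xml_text):
    """Extract (identifier, title) pairs from a ListRecords response."""
    root = ET.fromstring(xml_text)
    out = []
    for rec in root.iter(OAI_NS + "record"):
        ident = rec.findtext(f"{OAI_NS}header/{OAI_NS}identifier")
        title = rec.findtext(f".//{DC_NS}title")
        out.append((ident, title))
    return out
```

A service provider would fetch `list_records_url("http://repository.example.org/oai")` over HTTP and feed the response body to `parse_records`, repeating with the returned resumption token until the repository signals completion.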

Debbie Campbell, Director, Co-ordination Support Branch, National Library of Australia
Engaging National Information Infrastructure With Learning Environments

A discussion of collaborative developments within the Australian higher education sector which seek to more effectively link the learning experience with existing information infrastructure.

Several Australian collaborative projects are testing various protocols to explore access to and use of quality information resources in learning, teaching and research. The COLIS (Collaborative Online Learning Information System), AARLIN (Australian Academic Research Libraries Information Network) and National Library Resource Discovery Service projects provide examples of future service directions.

Potential relationships between services arising from these projects were also discussed.

Presentation: [Powerpoint version] | [HTML version]

David Dawson, Senior ICT Adviser, Resource
Creating Content Together: practicalities and partnerships

Presentation: [Powerpoint version] | [HTML version]

Back to top

Dr. Joyce Ray, Associate Deputy Director for Library Services, Institute of Museum and Library Services
Help! How will we Digitize Cultural Heritage in the US?

In the U.S., digitization activities, research, and funding are decentralized among a variety of federal and state agencies, universities and other institutions, and private foundations. There is no national program to digitize cultural heritage materials. Yet this complex and chaotic situation is also exciting and rich in activity. It has necessitated collaboration among the many members of the digital library community, which may turn out to be the greatest asset to a national cultural heritage digitization program.

Presentation: [Powerpoint version] | [HTML version]

Back to top

Jerry Goldman, Professor, Department of Political Science, Northwestern University
Listen UP: Digitizing Oral Heritage

Our cultural heritage extends beyond text and images to audio and video. Storage requirements and lack of agreement on standards have stood in the way of a preservation strategy. The longer we wait, the lower the cost and the greater the likelihood that a standard will emerge. This logic may leave archivists and librarians immobilized, and when action comes, the deluge will also arrive. Moreover, the risks of waiting are real and substantial. The amount of audio material now in libraries and archives (whether on vinyl, open reel, cassette, or digital media) is staggering. My talk explored some ways to address digitizing spoken-word collections for preservation, delivery and access.

It may be necessary to convince text-driven skeptics of the value of spoken-word materials. Scholars and print journalists are complicit in the commitment to text-only versions of 'history' and 'news.' Historians' methodology relies on structured interviews, transcription, and the opportunity for subjects to revise their remarks. The result is a written transcript, the sine qua non of the oral historian. For journalists, the story comes to the reader in inverted-pyramid form, with a strong lead and the facts to support it buried in the paragraphs that follow. But I would maintain that the written transcript and the print news story, while valuable, are pretenders to history or news. The document is not the text but the spoken word, which contains emotive information to inform and enlighten the listener, whether student, scholar or journalist.

The tasks are substantial but the good news is that costs are declining rapidly. Moreover, the current decay of analog audio materials demands action now. The National Gallery of the Spoken Word, a DLI2-sponsored project at Michigan State University, has been addressing several issues, including preservation of analog materials and digitization of new content. As a co-principal investigator on this project, I reported on preservation, digitization, delivery, and use.

Spoken word materials can be deeply affecting. They bring to life, in ways that text cannot, a vital aspect of our heritage. Yet for all their power, these materials remain rigidly linear. One must listen in order to find the relevant or appropriate part. For example, President Richard Nixon employed voice-activated devices to capture every conversation and glass clink in the Oval Office, on the telephone, in the Executive Office Building, and at Camp David and other venues. In this remarkable 3000+ hour collection, how does one find the needle in the haystack? Metadata, such as conversation logs and dates, help. But that merely narrows the field. Until recently, one had to listen to hours and hours of information, but new search strategies will make access and use far easier.

The last issue I addressed was the manner of use. Once you find the critical evidence, how do you preserve it in a standard form that you can cite and others can check? Today's print citation practices do not come close to achieving the precision and efficient verification that a digitized world enables. But here too the future is bright. The ability to identify start- and stop-points in a digital stream will become standard and tools exist today to exploit this pinpoint accuracy.

The future may seem overwhelming when we contemplate the enormity of the task. The challenge we face is action versus inaction. And act we must, because the risk to our oral heritage in the face of inaction will prove unfortunate indeed.

Back to top

Maggie Jones, Researcher, JISC
Challenges of Digital Preservation: Do we have a roadmap?

The Cedars Project - what did it achieve and where to next?

The Cedars (CURL Exemplars in Digital Archives) Project was funded as part of eLib Phase 3, initially for three years, extended by a further year. The JISC funding of a project focussing on digital preservation recognised the strategic importance of addressing digital preservation challenges if the full benefits of digital technology are to be realised by the scholarly community.

The emphasis of JISC-funded projects is on practical outcomes rather than "blue sky" research, so it was in that context that Cedars was funded. Some broad conclusions the project reached were that digital preservation is technically soluble but that organisational, political and financial issues present greater barriers. The crucial importance of preservation metadata was realised very early in the project, and long-term preservation depends on the creation and maintenance of this metadata.

The Cedars Project delivered a range of outcomes, including five guides covering specific aspects of its work. All of these are available from the project website at http://www.leeds.ac.uk/cedars/. A final invitation-only workshop gave delegates the opportunity to learn what the Cedars Project has done, put that into a wider context and, most importantly, to consider what needs to happen next. It was concluded that much progress has been made at the global level on digital preservation but that more work was urgently needed. Additional research is still required but it was a strong feeling at the workshop that it is time to move to implementation.

In conclusion, the Cedars Project has made a significant contribution to progress in understanding specific preservation challenges and suggesting some ways of overcoming them. In the UK, the Digital Preservation Coalition and the Research Support Libraries Group will play key roles in ensuring this.

Presentation: [Powerpoint version] | [HTML version]

Dale Flecker, Associate Director for Planning and Systems, Harvard University Library
The Harvard E-Journal Archiving Study

With funding from the Andrew W. Mellon Foundation, Harvard digital library staff spent a year investigating the archiving of commercial electronic journals. Working with three major publishers (Blackwell, University of Chicago Press, and Wiley), the team analyzed the content (in both technical and functional terms) of current e-journals, the standardization of e-journal articles and their packaging for deposit in archives, issues of license and business terms, organizational and economic requirements, quality control issues, and preservation implications of archives. A model for a large-scale archive was developed. This presentation covers some of the more interesting issues encountered.

Presentation: [Powerpoint version] | [HTML version]

Meg Bellinger, Vice President, OCLC Digital & Preservation Resources
Building a Digital Archive through Collaboration: A Report from OCLC

OCLC's three-year strategy, published last year, outlines a plan for libraries and OCLC to transform WorldCat from a bibliographic database and online union catalog into a globally networked information resource of text, graphics, sound, and motion. The rebirth of WorldCat in Oracle will result ultimately in a global knowledgebase supported by a set of integrated, Web-based tools and services that facilitate contribution, description, discovery, access, exchange, delivery, and preservation of knowledge objects as well as of participating institutions' expertise. OCLC Digital & Preservation Resources (DPR) is currently building a large-scale digital archive with which to support such preservation requirements. This presentation offers a high-level description of the technical rationale behind OCLC's activities in building the first phase of its Digital Archive.

Part 1 of the presentation describes what prompted OCLC to build a digital archive. Part 2 explains how we scoped the first phase of the project, while Part 3 notes considerations in designing a useful, useable tool. Part 4 answers several questions we asked ourselves in operationalizing the OAIS Reference Model, and Part 5 poses questions we must answer in future phases.

Presentation: [Powerpoint version] | [HTML version]

Back to top

Susan Haigh, Manager, Program Development, National Library of Canada
Toward a Digital Library of Canada: Limping, Lurching and occasionally Leaping

The presentation provides an overview and assessment of the digital library environment in Canada - areas of progress, gaps, and what we might do to address those gaps. The presentation suggests factors that seem essential to achieving a coherent national digital information environment, including funding, coordination, rich content, aggregation services, a willingness to collaborate, and a commitment to long-term access. For each of these factors, Canadian approaches will be outlined to support comparison with the US and UK.

Presentation: [Powerpoint version] | [HTML version]

Phillip D. Long, PhD, Senior Strategist, Academic Computing Enterprise, MIT
The Open Knowledge Initiative - building a framework for educational applications

The Open Knowledge Initiative (OKI) is developing an open and extensible architecture to support a wide range of educational tools. OKI hopes to foster the development of a community of educational technologists, educators, and developers who together build open source learning applications. The OKI architecture is designed to support open source and commercial vendor implementations of learning management systems (LMS) or virtual learning environments (VLE).

The primary architectural characteristic of OKI is the definition of boundaries between different parts of a learning management system. These abstraction layers are defined as Application Program Interfaces (APIs), which serve to isolate important system functions. This provides a mechanism for managing change in the implementation of a function while providing a predictable boundary to the system.

OKI is a layered as well as modular architecture. Development is facilitated by providing a rich set of services, allowing programmer/educator teams to concentrate on the real pedagogical aspects of design and not concern themselves with basic functions like how to authenticate a user or where to store documents and metadata.
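The service-boundary idea can be sketched as follows. The interface and class names here are illustrative, not taken from the OKI specifications: the point is only that application code depends on an abstract API while institutions plug in their own implementation.

```python
from abc import ABC, abstractmethod

class AuthenticationService(ABC):
    """Illustrative service boundary in the spirit of OKI's Common Services:
    applications code against this interface; an institution supplies a
    concrete implementation (LDAP, Kerberos, campus single sign-on, ...)."""

    @abstractmethod
    def authenticate(self, user: str, credential: str) -> bool:
        ...

class DemoAuthentication(AuthenticationService):
    """Toy implementation backed by an in-memory table (for illustration only)."""

    def __init__(self, table):
        self._table = table

    def authenticate(self, user, credential):
        return self._table.get(user) == credential

def login(service: AuthenticationService, user, credential):
    """Application code sees only the API, never the implementation behind it."""
    return "welcome" if service.authenticate(user, credential) else "denied"
```

Swapping `DemoAuthentication` for a campus directory service changes nothing in `login`, which is the predictable boundary the architecture aims for.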

OKI is both implementation independent and web agnostic. The OKI framework supports implementations using a variety of object oriented languages. While descriptions of the binding of APIs to an implementation are given in Java, this is a convenient artifice. Similarly, while it is expected that the web will remain the primary online learning environment, OKI supports both web-based applications and applications intended to run natively on local machines.

The work of OKI initially focused on defining what are described as Common Services. These are services that allow for integration with enterprise infrastructure and help to assure adaptability to multiple and evolving standards and changing technology. Current work in OKI is shifting towards defining the next layer of Educational Services. These services will be built on the lower level APIs and initially concentrate on providing classroom management, content management, quizzing and testing services, and communications services to pedagogical applications, in addition to serving as a model for elaboration of other educational services.

While bottom-up design continues in constructing the OKI architecture, top-down attention to learning and pedagogy has driven the OKI process. OKI has sought to gain greater understanding of the need for learning tools that support different instructional approaches and settings. Through OKI-sponsored workshops, educational technologists, faculty and developers have articulated important learning principles that systems like OKI must support. Through the Educational Activities and Learning Practices work team in OKI, efforts to define the functional requirements of learning tools that actively support innovative pedagogy are driving the first wave of OKI applications. These will in turn help describe and develop the educational services needed by a robust and extensible learning system architecture.

OKI started with a simple goal to break the pattern of building stovepipe learning applications that are difficult to integrate, maintain, and extend across diverse learning settings. It has evolved into an international effort to build an open architecture learning framework fostering community engagement. For it to be successful, a diverse community must find local value in it to move OKI forward.

Back to top

Catherine Grout, Programme Director, JISC Development Group
A common Information Environment: New Challenges for the UK Education Community

Presentation: [Powerpoint version] | [HTML version]

Dr William Arms, Professor, Computer Science, Cornell University
The National Science Digital Library: the Challenge of Scale

The National Science Digital Library (NSDL) is a multi-year program of the National Science Foundation (NSF) to build a comprehensive library of all material in digital form that is relevant to education in science, mathematics, engineering and technology, very broadly defined. A first, small-scale release of the library is scheduled for December 2002.

The effort has been distributed among large numbers of separate grants that focus on various collections, services and research topics. A single Core Integration team has the task of integrating the projects into a single library.

Presentation: [Powerpoint version] | [HTML version]

Back to top


Alan Robiette, Authentication and Security, Programme Director, JISC
New developments in access management - setting the scene

This presentation reviewed emerging developments in managing access to electronic information, with particular reference to standards and architectures. Projects discussed included Shibboleth (Internet2), PAPI (the Spanish national academic network) and Athens (EduServ). Finally, the talk examined the possible cross-fertilisation between digital library activities and the Grid model of distributed systems.

Presentation: [Powerpoint version] | [HTML version]

Ed Zedlewski, Executive Director, Eduserv
Athens: Single Sign On and devolved authentication services.

Presentation: [Powerpoint version] | [HTML version]

Diego Lopez, Co-ordinator of Middleware Services, RedIRIS, the Spanish NREN
PAPI: simple and ubiquitous access to Internet information servers

PAPI is a system for providing access control to restricted information resources across the Internet.

The authentication mechanisms are designed to be as flexible as possible, allowing each organization to use its own authentication schema, preserving user privacy, and offering information providers enough data for statistics. Access control mechanisms are transparent to the user and compatible with the most commonly used Web browsers and any operating system. Since PAPI uses standard HTTP procedures, PAPI authentication and access control do not require any specific hardware or software, thus providing users with ubiquitous access to any resource they have rights to.
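PAPI's actual token and cookie formats are more elaborate than shown here; the following is only a generic sketch, assuming a shared secret, of the kind of signed, time-limited HTTP token on which such access control can be built. The resource server can verify the token without consulting a user database, which is what makes the scheme workable over plain HTTP.

```python
import hashlib
import hmac
import time

# Illustrative shared secret; a real deployment would manage keys properly.
SECRET = b"shared-secret"

def make_token(user_org, ttl=3600, now=None):
    """Issue a signed, time-limited access token (e.g. set as an HTTP cookie)."""
    expires = int(now if now is not None else time.time()) + ttl
    payload = f"{user_org}:{expires}"
    sig = hmac.new(SECRET, payload.encode(), hashlib.sha256).hexdigest()
    return f"{payload}:{sig}"

def check_token(token, now=None):
    """Verify signature and expiry; no per-user state is needed server-side."""
    try:
        org, expires, sig = token.rsplit(":", 2)
    except ValueError:
        return False
    payload = f"{org}:{expires}"
    good = hmac.new(SECRET, payload.encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(sig, good):
        return False  # tampered or forged token
    return int(expires) > (now if now is not None else time.time())
```

A token issued at login is presented with each subsequent request; the provider accepts it while the signature checks out and the expiry has not passed.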

The presentation starts by discussing the design principles on which PAPI is based. Afterwards, the architecture of the system is introduced, describing the functionality of its main components and providing an overview of the protocols employed. An overview of the main capabilities and open issues of the system closes the presentation.

Presentation: [Powerpoint version] | [HTML version]

Back to top

User Studies

David House, Deputy Vice Chancellor, University of Brighton, Chair of the JISC Committee on Awareness, Liaison and Training
Use of electronic information services: fact and fiction.

The presentation offers an overview of the way in which JISC monitors the use of the information services which it provides, and reflects on the lessons of this feedback for the future development of the information environment.

Presentation: [Powerpoint version] | [HTML version]

Leigh Watson Healy, Vice President & Chief Analyst, Outsell Inc
The Voice of the User: Where Students and Faculty Go For Information

Highlights of the Outsell/Digital Library Federation study profile the information content preferences and behaviors of 3,200 students and faculty across academic disciplines in liberal arts colleges and research/doctoral universities. Students and faculty rely on a widely distributed and diverse information environment, which makes understanding and reaching them a tremendous challenge. Libraries have needed better information for understanding changing patterns of library use as a key component of strategic planning. This presentation will provide insight into the overall academic information environment that engages users, how student and faculty behaviors and preferences are affecting library use and the demand for information resources, and implications for libraries and information technologists.

Presentation: [Powerpoint version] | [HTML version]

Back to top

Stephen Griffin, Program Director, National Science Foundation, JISC/NSF Initiative

Highlights of Digital Libraries Phase 2 Projects

Dr Fred M Heath, Dean of Libraries, Texas A & M University
Joseph F Boykin, Dean of Libraries, Clemson University
Duane Webster, Executive Director, Association of Research Libraries

Service Quality in the Information Environment: The LibQUAL+ Protocol

  1. What is LibQUAL+?
  2. What are the goals of LibQUAL+?
  3. Why was LibQUAL+ begun?
  4. How will LibQUAL+ benefit library users?
  5. What is the basis for the LibQUAL+ survey instrument?
  6. How is LibQUAL+ conducted?
  7. How is LibQUAL+ being paid for?
  8. How can I get more information about LibQUAL+?

1. What is LibQUAL+?

LibQUAL+ is a research and development project undertaken to define and measure library service quality across institutions and to create useful quality-assessment tools for local planning. Service quality has always been a value for libraries; LibQUAL+ provides a measure of that value. LibQUAL+ currently tests a tool for measuring library users' perceptions of service quality and identifies gaps between desired, perceived, and minimum expectations of service. The project will continue as an R&D endeavor based at the Association of Research Libraries (ARL) in collaboration with the Texas A&M University Libraries through 2003, by which time LibQUAL+ will evolve into an ongoing service quality assessment program at ARL.

2. What are the goals of LibQUAL+?

The goals of LibQUAL+ include: establishing a library service quality assessment program at ARL; developing web-based tools for assessing library service quality; developing mechanisms and protocols for evaluating libraries; and identifying best practices in providing library service.

3. Why was LibQUAL+ begun?

There is increasing pressure for libraries to move towards outcome-based assessment, instead of relying merely on input, output, or resource metrics. This pressure comes from funding authorities as well as users themselves. Outcome measures may show how well an organization serves its users; they demonstrate an institution's efficiency and effectiveness. LibQUAL+ is one of several outcome-based assessment efforts begun under the ARL New Measures Initiative.

4. How will LibQUAL+ benefit library users?

Individual libraries participating in LibQUAL+ can identify where their services need improvement, in the view of their users. They also can compare their service quality with that of peer institutions in an effort to develop benchmarks and understanding of best practices across institutions; for this reason, several library consortia and other peer groups have chosen to participate in LibQUAL+. By initiating action based on the information they receive from their library users and from other LibQUAL+ participants, libraries can provide services that are more closely aligned with user expectations. As library services are improved, the ultimate goal is to surpass user expectations in search of excellent library services that better help users to reach their learning and research objectives.

5. What is the basis for the LibQUAL+ survey instrument?

The LibQUAL+ survey instrument is adapted from an instrument called SERVQUAL, which is grounded in the "Gap Theory of Service Quality" and was developed by the marketing research team of A. Parasuraman, V.A. Zeithaml, and L.L. Berry. The Texas A&M University Libraries and other libraries have been using modified SERVQUAL instruments for several years. These applications showed the need for a newly adapted SERVQUAL protocol that serves the needs of libraries; thus LibQUAL+ was born. The original SERVQUAL instrument was regrounded based on a series of interviews with library users. The regrounded instrument, called LibQUAL+, is being refined with each iteration of the survey through the pilot phase (1999-2003).

6. How is LibQUAL+ conducted?

New technology and the use of the Internet make it possible for libraries to survey their users with minimal local effort: LibQUAL+ uses a scalable web interface and protocol to ask library users about their library service expectations. Each participating library gathers a random sample of email addresses representative of its user population and sends a message encouraging recipients to complete the survey on the Web. Survey data are transmitted directly from the central LibQUAL+ server to a database. The data are then analyzed, and reports that show how users perceive the quality of their library services are generated for the individual libraries. The reports present information on the gaps between users' desired, perceived, and minimally acceptable levels of service.
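The gap model behind those reports can be illustrated with a small calculation. The function names and the simple numeric ratings below are illustrative only, not part of the actual LibQUAL+ instrument or its scoring procedure.

```python
def gap_scores(minimum, desired, perceived):
    """SERVQUAL-style gap scores for one survey item:
    service adequacy   = perceived - minimum  (positive: above the floor)
    service superiority = perceived - desired (typically negative)."""
    return {
        "adequacy": perceived - minimum,
        "superiority": perceived - desired,
    }

def summarize(responses):
    """Average gap scores over a list of (minimum, desired, perceived) ratings,
    as a report might aggregate responses for one institution."""
    n = len(responses)
    adequacy = sum(p - m for m, d, p in responses) / n
    superiority = sum(p - d for m, d, p in responses) / n
    return adequacy, superiority
```

A perceived score below the minimum yields a negative adequacy gap, flagging a service that, in the users' view, needs improvement.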

7. How is LibQUAL+ being paid for?

LibQUAL+ is presently funded through a variety of means: external funding through September 2003 from the U.S. Department of Education's Fund for the Improvement of Postsecondary Education (FIPSE), contributed funding from ARL and Texas A&M University, and modest fees from participating libraries to underwrite production of deliverables. Participating libraries also contribute staff and organizational resources for preparation and administration of the survey instrument.

8. How can I get more information about LibQUAL+?

For more information, see the LibQUAL+ homepage at http://www.libqual.org/ or contact Consuella Askew Waller: consuella@arl.org.

Presentation: [Powerpoint version] | [HTML version]

Back to top

Jerome McDonough, Digital Library Development Team Leader, New York University
METS: Metadata Standards for Digital Library Objects

This presentation provided an overview of the Digital Library Federation's METS initiative, including:

  1. a brief background history of the METS initiative;
  2. a technical overview of the METS schema and extension schema;
  3. a status report on development of software tools for use with METS; and
  4. a discussion of immediate plans and goals for the METS initiative.
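As an illustration of the kind of structure the schema defines, the following sketch assembles a skeletal METS document: a file section plus the structural map that every METS document must contain. The object ID and file URL are placeholders, and many optional sections (descriptive and administrative metadata among them) are omitted.

```python
import xml.etree.ElementTree as ET

METS_NS = "http://www.loc.gov/METS/"
XLINK_NS = "http://www.w3.org/1999/xlink"
ET.register_namespace("mets", METS_NS)
ET.register_namespace("xlink", XLINK_NS)

def minimal_mets(obj_id, file_url):
    """Build a skeletal METS document with one file and one structMap entry."""
    mets = ET.Element(f"{{{METS_NS}}}mets", OBJID=obj_id)
    # fileSec: inventory of the content files making up the digital object
    file_sec = ET.SubElement(mets, f"{{{METS_NS}}}fileSec")
    grp = ET.SubElement(file_sec, f"{{{METS_NS}}}fileGrp", USE="master")
    f = ET.SubElement(grp, f"{{{METS_NS}}}file", ID="F1")
    ET.SubElement(f, f"{{{METS_NS}}}FLocat",
                  {f"{{{XLINK_NS}}}href": file_url, "LOCTYPE": "URL"})
    # structMap: the one mandatory section, describing the object's structure
    smap = ET.SubElement(mets, f"{{{METS_NS}}}structMap")
    div = ET.SubElement(smap, f"{{{METS_NS}}}div", TYPE="item")
    ET.SubElement(div, f"{{{METS_NS}}}fptr", FILEID="F1")
    return ET.tostring(mets, encoding="unicode")
```

The `fptr` element links the structural map back to the file inventory by ID, which is how METS ties an object's logical structure to its content files.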

Presentation: [Powerpoint version] | [HTML version]

Bruce Royan, CEO, SCRAN
Cultural Multimedia for the Support of Learning

This paper described the Scottish Cultural Resources Access Network (SCRAN), a networked multimedia resource base for the study and celebration of human history and material culture, largely from the digitised resources of Libraries, Archives, Museums and other Cultural Organisations in Scotland.

SCRAN is a not-for-profit educational charity jointly set up by Scotland's cultural institutions to create and deliver networked learning content and manage the resulting digital Intellectual Property Rights.

Digitised assets contributed to SCRAN are governed by a licence agreement protecting the contributors' commercialisation rights while ensuring unrestricted access, free at the point of use, for members of participating educational institutions.

Full text and thumbnail images are freely available for home learning, while SCRAN licensed members can download more extensive assets, copyright cleared for educational use, and protected by invisible watermarking and fingerprinting.

Subscriptions to SCRAN have now been purchased by several local education authorities in England and 20% of all Scottish public libraries. The Scottish Executive has recently purchased a global licence for every school in Scotland. This is matched by a deal with the Joint Information Systems Committee for all universities and colleges throughout the United Kingdom.

The SCRAN educational licence includes the right to use a number of software tools and templates, supplied by SCRAN, to ease the integration of SCRAN-licensed resources into local learning resources such as printed worksheets and electronic exhibitions. SCRAN already gives learners, teachers, and students access to over a million cultural records, including some 160,000 downloadable multimedia assets and 800 educational resources, written by practising teachers and tied into the curriculum.

Back to top


Supported by:
Eduserv

Email comments to s.hassen@ukoln.ac.uk
Page last revised on: 25-Sep-2002