Evaluation of the Electronic Libraries Programme

Prepared by
John Kelleher, Elizabeth Sommerlad and Elliot Stern

The Tavistock Institute
Evaluation Development And Review Unit
30 Tabernacle Street
London EC2A 4DD

3rd January 1996

Copyright (c) The Tavistock Institute
Please do not reproduce or relay without appropriate permissions. Where use is made, please acknowledge authorship.


1. Introduction

2. Scope And Outline Of Content

3. The Electronic Libraries Programme

4. General Principles Of Evaluation Design

5. Operational Approaches To Project Evaluation

6. Utilising Evaluation Outputs

7. Preparing An Evaluation Plan

8. Technical Assistance

9. Annual Reporting

10. Other eLib Evaluation Activities

11. Some Useful References

Annex B: Framework for Annual Reporting

Letter to Project Leaders

Feedback Questions


This document provides a guide to developing and implementing evaluation in individual eLib projects.


The Guidelines draw on the extensive consultations we have had with actors both internal and external to the eLib programme, as well as reflecting lessons from our experience of evaluating innovative programmes in the area of information and learning technologies. Whilst evaluation inevitably entails making choices - about which questions to ask, what criteria to adopt, which methods to use, and so on - we have taken a definite position in setting out what we consider to be the most effective way in which projects can undertake a relatively small-scale, highly focused and manageable evaluation activity that is relevant and useful to the projects and to the programme. We are mindful of logistics: the need to specify evaluation plans quickly, and what is reasonable to expect of projects by way of time and resource commitments.


Project level evaluations may variously be conducted by project staff, subcontracted experts and colleagues located elsewhere in the HE system. However, these Guidelines are oriented towards project managers/co-ordinators rather than to experienced evaluators. Very few projects have evaluators formally associated with them, and the likelihood is that evaluation will be integral to project management or subsumed within the development role of the different actors. Thus the Guidelines presume a relatively low level of evaluation expertise and are fairly modest in the expectations of evaluation at the project level.

The Guidelines offer general advice for evaluation managers, as well as more technical advice for evaluators.

FIGIT's Requirements

These guidelines do not constitute a set of requirements which projects must follow; rather, they are an aid to developing the evaluation strategy most appropriate for each project. FIGIT will require every project to prepare an evaluation plan and to deliver regular evaluation reports, but the content of these will depend on the nature of the project.

The guidelines as currently drafted reflect the following considerations:

Steering and Advisory Groups

A stress on the importance of broad-based steering or advisory groups which involve representatives of the most important potential users of project outputs from an early stage, both to provide formative feedback and as a mechanism to disseminate project learning.


Encouragement for each project to concentrate on a limited but clear set of priority issues in their evaluation.

Structured and Systematic Collection of Data

A need to go beyond collecting user responses in an ad hoc or voluntaristic fashion through, for instance, on-line questionnaires (useful though these are), to collecting and assessing formative user feedback in a structured and systematic fashion from test sites using various techniques (such as recording/logging behaviour, structured interviews, observation and participant observation, and surveys).

Focus Groups

A much wider use of different types of 'focus groups' (that is, the type of structured group discussion of designs or prototypes by representative groups of intermediate and end users familiar from market research) in order to (i) guide project direction/product development and (ii) better predict likely needs, desirable attributes, usage/demand, obstacles and resistance.

'Sustainability' and Forecasting

Recognition that addressing issues of 'sustainability' (as manifested, for instance, in the production of business plans) requires an assessment approach and associated data collection which identifies and models the relationship between various quantified variables (such as running costs, usage, value added to users' activities, willingness to pay) in order to make a 'case' for continuance based on realistic scenarios.


An acceptance that there are limits to the extent to which projects can be asked to attempt to assess their value and impact in terms of overall Follett objectives for reasons, among other things, of resources, competencies and motivation, and that much of this kind of work can be undertaken more effectively and efficiently (and, perhaps, more credibly) in each programme area by external evaluators.


These Guidelines have the following content:


First, consideration is given to the defining features of eLib as a strategic initiative that sets the parameters for evaluation and assessment activities.

General Principles

Second, some general principles of evaluation design are outlined relating to the different purposes of evaluation and assessment activities, the need to take account of the evaluation questions and interests of key stakeholders, the integration of evaluation with other data-generating activities, and the role of users in the evaluation.

Recommended Approach

Third, approaches are recommended to evaluation and assessment in projects, both generally and for each of the programme areas. Desirable methodological approaches are outlined. The main issues arising in each of the programme areas are discussed, with consideration given to what should be evaluated, the kind of evaluation questions that need to be addressed and relevant criteria for assessment.

Annual Reporting

Fourth, requirements for annual project reporting are presented and how this relates to evaluation activities is explained.

Evaluation Plan

Fifth, the necessary elements of an evaluation plan are presented and reviewed, including planning for utilising evaluation outputs. The type of technical assistance projects can expect from eLib in preparing plans is also outlined.

Other eLib Evaluation Activities

Sixth, other eLib evaluation activities which may interface with project evaluation activities are briefly described.

Reference Materials

Seventh and lastly, some useful bibliographic references are given to evaluation source material and to specific tools and methods for evaluating and assessing innovative IT tools, products and services which projects may find helpful.


What you are asked to evaluate and how you go about it is closely related to the nature of the programme and its objectives. Taking part in an innovative programme such as eLib sets a particular backcloth to the evaluation task.

The eLib programme is a strategic initiative that is seeking to shape and accelerate the development and uptake of electronic media and network services in UK HE libraries and HEIs. Its broad objectives include innovating in scholarly communication, enhancing the quality of teaching and research, improving access to information in a cost-effective way, and increasing library performance.

A Strategic Initiative

eLib is situated within a complex, dynamic environment. It is simultaneously trying to develop and demonstrate new services, systems and possibly infrastructures whilst also seeking to shape new user/supplier communities. It is trying to accelerate the speed of uptake of network based solutions in the face of urgent current problems whilst at the same time prefiguring new paradigms of research, scholarly communications and learning that electronic media enable. It is trying to work with existing professional networks and groups - the library and academic 'communities' - whilst these professional networks and groups are already finding the pace of change hard to manage. And it is seeking to encourage dialogue and new partnerships between strategic actors who will hopefully ensure the sustainability of the project innovations.

A Complex, Dynamic Environment

A distinctive feature of eLib is its experimental, innovative nature. This is not a programme in which the output is the answer, although projects will produce exploitable products and services. Its broader purpose is to explore a set of possible approaches that might take us closer to the future electronic library, and to develop a better understanding of the technical, economic and social dimensions involved in the process of innovation.

Moving towards the Electronic Library

In sum, the eLib programme has a particular set of characteristics: it is a strategic initiative, it operates within a complex and dynamic environment, and it is experimental and innovative in orientation.

Implications of the Innovative Orientation of eLib for Project Evaluation

The innovative orientation of eLib has several implications for project evaluation:

First, the primary purpose of evaluation is to contribute to the collective learning of all those involved in the programme or having a stake in it. Such is the experimental, open-ended nature of the programme that we do not know what is going to work and thus participants should be open to the idea of learning from failures and difficulties of implementation as much as from achievements and success. As a developmental programme, evaluation should contribute to the building of future scenarios and the gathering of information to inform future choices. Evaluation has thus to engage with the innovation process and to help shape its direction or trajectory. This does not preclude a role for a more detached evaluation in assessing the eventual results of a project - the usability of tools, the take-up by end-users, the transfer into other settings, the added value to the academic endeavour.

Second, the evaluation needs to take account of the interests and perspectives of the different stakeholders, e.g. by ensuring that the evaluation questions address their concerns and what it is they want to find out from the trials and demonstrators.

Third, the local projects need to collate results so they can be made available to the stakeholders in the projects as well as to ensure that the data they gather will inform key decisions regarding the programme.

Fourth, if evaluation is to be useful, it will have to follow the entire course of an innovation life-cycle and not simply focus on the effects or outcomes. Evaluation, in order to make positive operational inputs into projects, cannot wait until final results are known. Feedback and learning has to take place iteratively and continually throughout the project's lifecycle in order to enhance the prospects for ultimate success.

Fifth, conduct of the evaluation should contribute to networking, the transfer of knowledge and know-how and the take-up of project outputs in relevant markets or settings. This means projects should be conscious of the dynamics of the development process - the design of piloting, the way users and other stakeholders are involved, the decision-making process - since these will shape the end-result.


Building Blocks

Evaluation is usually designed around a set of general principles or building blocks that guide the essential decisions in putting together an evaluation plan and keeping it under review. The framework below identifies six design principles for eLib project evaluation and the main questions which need to be addressed. The activities they give rise to will in turn need to be coordinated and prioritised.

Figure 1: Evaluation Elements

- Purposes of Evaluation: What are the main purposes of the evaluation?

- Stakeholders: Who are the different actors who have a stake in the project and its evaluation?

- Lifecycle: What evaluation activities are appropriate at different stages of the project lifecycle?

- Utilisation: How will evaluation be integrated into the project?

- User Involvement: How will users be involved in the evaluation?

- Methods and Techniques: What kinds of evaluation questions will be asked and what assessment methods are appropriate?

Purposes of Evaluation

The purposes of an evaluation are related to how evaluations are to be used and who the intended users of the evaluation findings and results are. The eLib projects share a number of purposes for evaluation in common. These are:

- first, to improve performance by helping project partners manage the process of developing, piloting and implementing prototypes, demonstrators, and systems.

- second, to provide evidence as to the usability, cost-effectiveness and added value of the systems, services, products and configurations that are being developed.

- third, to contribute to the overall learning in the eLib programme as a whole that will be useful for future projects and programmes.


Stakeholders

In any evaluation, there are multiple stakeholders. These include the people whose support and cooperation are necessary for the project to succeed, as well as the people who are expected to use or to act on the evaluation findings.

Different stakeholders are interested in different questions relevant to the kind of decisions they have to make. Often, they will have different or competing views about what is important, what constitutes success and how success might be measured.

The eLib project evaluations will need to identify:

- who the different stakeholders are in their particular project,

- how they will be involved in the project,

- what kinds of questions they are asking, and

- what kind of data or information needs to be provided at different stages in the project life-cycle to inform their decision-making.

Evaluation and the Lifecycle

Evaluations usually mirror the life-cycle of the project. Each phase of the project life-cycle will require some evaluation attention. Thus at the needs analysis phase, evaluators will be concerned with feasibility questions raised by user needs and contributing to the design of the pilot, demonstrator or system model. When a demonstrator or prototype is being developed, evaluations can usefully monitor implementation and provide feedback on interim findings. In the testing phase, evaluation efforts will be devoted to assessing the prototype's usability, functionality, cost-effectiveness and other kinds of short-term effects.

eLib projects will need to give consideration both to formative evaluation (at the design, development and implementation phases) that is intended to improve performance, and to summative evaluation (at the end phase) that provides evidence of achievements and effects. Assessing the likely future outputs and effects of the projects is also important, however, and needs to be included within the evaluation plan.

Involving the Users

Involving the end users (librarians, academic teaching and research staff, students) as active participants in the development process increases the likelihood that the final products and services will meet the needs of users and achieve project goals. At the start of an innovation process, user needs and expectations generally cannot be well specified; they evolve as opportunities arise for trying out early versions of the product or system, and their understanding of what the technology can do and how it relates to their work practices changes.

Users are thus a valuable source of input and feedback that can enhance the quality of what is being developed:

- by informing choice between given design alternatives

- by generating and refining usability objectives

- by guiding revisions using experimental pieces of the system in relation to usability criteria

- by trying out early versions of product or system which can reveal major design problems

- by suggesting improvements

Users also have an obvious role to play in the testing and field trials.

Key questions which will need to be addressed when drawing up an evaluation plan are which users need to be involved, how they should be selected, when and how often data or feedback is to be gathered, and how their motivation will be sustained. The answers to these questions will depend on the programme area as well as on the particular objectives and circumstances of projects.

Methods and Techniques

The selection of methods and techniques in an evaluation is shaped by what the evaluation is for, what kinds of questions are being asked, who the users or audiences of the evaluation are and what views they have about what constitutes valid and reliable data. Other considerations to take into account include the available resources for evaluation, logistical aspects (location of field tests) as well as the expertise and methodological preferences of evaluators.

Most projects will need to use a range of evaluation methods and techniques suited to their particular circumstances. Both quantitative and qualitative methods are valued and legitimate: each has its strengths and weaknesses. The selection should be made on the grounds of flexibility and appropriateness, but is likely to include:

- user surveys and needs analysis

- steering and advisory groups which provide formative feedback

- structured feedback from test-sites using various techniques

- focus groups as a form of structured group discussion of designs or prototypes

- focus groups as a form of structured discussions by representative groups of the innovation in terms of potential uses, attractions, limitations, preparedness to buy, and so on

- modelling of functional, cost, organisational and technical variables


Utilisation

The tenet of utilisation is that evaluations should be useful. Utilisation is most effective when it is planned for from the outset. It is important, for example, to consider how evaluations are to be used, to involve stakeholders in ways that will increase their commitment to acting on the findings, and actively to involve users so as to increase the prospects for uptake and diffusion of exploitable outputs.

Utilisation is also about capturing the learning from the project in all its forms, and making it available for other and future projects and planning. This might be through regular reports, through Annual project reviews as well as through seminar/conference events and other kinds of cross-project exchange.


A Practical Approach

The three elements of the operational approach elaborated below provide the skeletal backbone for projects' self-evaluation activities. Taken together with the design principles for evaluation, projects should be in a position to plan and undertake a focused and manageable evaluation that is relevant and useful to the projects and to the programme. The approach implies convergence between management and evaluation activities: good evaluation practice is at the same time good management practice and need not require a burden of additional effort and resourcing. Projects may wish to give a higher profile to evaluation in their own projects, and several projects have elaborated plans for evaluation that address their own needs and interests more fully. The advice given here can be readily incorporated into any evaluation plan.

Key Elements

The three key elements of an operational approach to eLib project evaluations are:

- steering or advisory groups

- structured, systematic feedback

- forecasting and planning

The particular expression that each of these elements takes in a project evaluation will be the outcome of the characteristics of the domain area as well as of local project circumstances. What follows, then, is firstly a general elaboration of the three elements in terms of why do it this way, how to go about it and what the general advantages are; and secondly, a translation of the general into the specifics of the different programme areas.

Steering or Advisory Groups

A well constituted steering or advisory group brings together key players who will be influential in decision-making about future choices relating to short-term project outputs as well as being well-positioned to shape the conditions for the likely future uptake of products or services and their embedding in organisational or disciplinary contexts. Steering group members are potentially effective in networking with a broader set of actors, so facilitating transfer of knowledge, know-how and exploitable products into other settings.

Most projects have steering or advisory groups already in place although the way in which they relate to the project, the nature of their contribution and the appropriateness of their membership is unclear in some instances. Projects are therefore encouraged to take the following steps:

Broad Base

First, to ensure that their steering or advisory group membership is broadly based and involves representatives of the most important users of project outputs. They are likely to include senior management in their HEIs, senior academics in their discipline, representatives of the intermediate and end-users (library and teaching staff, who can also represent the interests of students as end-users), publishers, and others relevant to their programme area and local circumstances.


Second, to clarify the role of the steering/advisory group and what it needs from the project in turn to do its job effectively. In eLib projects, steering/advisory groups have a number of tasks. These include:

- providing general steerage for the project, including its redirection if necessary

- giving general advice on matters pertaining to the interface between the project and its institutional and wider environment

- providing ongoing formative feedback on the basis of project reports and evaluation data

- acting as a mechanism to disseminate project learning

Project managers may wish to engage with their steering group at an early stage to clarify what it sees as the key evaluation questions that should be addressed, what kind of information or data it needs to allow members to do their job and how such information is best presented, and what in their view constitutes success for the project.

Broadly based steering groups of the kind advocated here will be representative of the different stakeholder interests and perspectives on the electronic library, which may well be conflictual or in some degree of tension. These differences or tensions are generally reproduced within the innovation itself, unless addressed at a higher level. Steering groups are an appropriate forum for exploring real and imagined differences, for optimising potential outcomes for the different parties, and for engaging with the wider issues that are raised by developments in IT.

Structured Systematic Feedback

As part of their formative and summative evaluation activities, all projects will be gathering data at different stages over the lifecycle of the project. If evaluative data is usefully and reliably to inform the development of the evolving prototype or demonstrator and enhance its usability and functionality, then it is of great importance to collect feedback in a structured and systematic fashion. This implies going beyond collecting user-responses in an ad hoc or voluntaristic fashion.

User Populations

Projects will therefore need to consider:

- which user population(s) are important for the development and field testing of their (early) prototypes and demonstrators,

- how representative is the sample of users from whom data is gathered

- which kinds of methods and techniques are best suited to ensuring the systematic collection of data which permit reliable inferences to be made or generalisations drawn

Among the most relevant methods and techniques for field testing and validation of eLib prototypes, pilots and systems are:

Survey Methods

Survey methods, which are particularly appropriate for describing users' views and attitudes. Interviews and questionnaires are the most commonly used techniques. Interactive interviewing allows for a direct exchange and communication of perceptions, views and attitudes. On-line questionnaires can be a useful form of data capture although questions of representativeness arise where responding is voluntary and the profile of respondents cannot be matched against the total user population.
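The representativeness concern raised above can be checked quite simply once the profile of the user population is known. The sketch below (not part of the Guidelines, and with entirely hypothetical figures) compares the proportional make-up of voluntary questionnaire respondents against the total user population, flagging over- and under-represented groups:

```python
# Illustrative sketch: comparing respondent profile against the known user
# population. All category names and head-counts are hypothetical.

def profile_divergence(population, respondents):
    """For each category, return (respondent share - population share).

    Positive values mean the group is over-represented among respondents;
    negative values mean it is under-represented.
    """
    pop_total = sum(population.values())
    resp_total = sum(respondents.values())
    return {
        cat: respondents.get(cat, 0) / resp_total - count / pop_total
        for cat, count in population.items()
    }

# Hypothetical site: 2,000 registered users, 100 voluntary respondents.
population = {"academic staff": 400, "research students": 250, "undergraduates": 1350}
respondents = {"academic staff": 60, "research students": 25, "undergraduates": 15}

for category, gap in sorted(profile_divergence(population, respondents).items()):
    print(f"{category}: {gap:+.1%}")
```

A large divergence does not invalidate the data, but it does mean inferences about the whole user population should be drawn with caution, or the results reweighted.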

Observation/Field Studies

Observation/Field Studies, which generally involve some amount of genuine social interaction in the field with the users; what is more characteristic of this method is that it involves the direct observation of 'naturally' occurring events. The main techniques range from participant observation in natural contexts, to experimental or behavioural observation in which the users are placed in specially designed conditions (trials or test sites).

Focus Group Methodology

Focus group methodology, which draws people from the user population who meet at different times throughout the development process to have focused discussions about relevant experiences. This method is particularly relevant to the innovation design process and is discussed in more detail below.

Tracking and Monitoring

Tracking and monitoring techniques such as diaries and logs of users and developers, and electronic recording of use patterns.
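Electronic logs of the kind mentioned above lend themselves to straightforward automated summary. The sketch below (not part of the Guidelines) assumes a hypothetical log format of timestamp, user identifier and action per line; a real project's logs will differ, but the principle of reducing raw records to per-user and per-action counts carries over:

```python
# Illustrative sketch: summarising an electronic usage log.
# The log format (ISO timestamp, user id, action) is hypothetical.
from collections import Counter
from datetime import datetime

log_lines = [
    "1995-11-02T10:15 smithj search",
    "1995-11-02T10:17 smithj download",
    "1995-11-02T11:40 patelr search",
    "1995-11-03T09:05 smithj search",
]

def summarise(lines):
    """Count occurrences of each action type and activity per user."""
    actions, users = Counter(), Counter()
    for line in lines:
        timestamp, user, action = line.split()
        datetime.fromisoformat(timestamp)  # validate the timestamp field
        actions[action] += 1
        users[user] += 1
    return actions, users

actions, users = summarise(log_lines)
print(actions.most_common(), users.most_common())
```

Counts of this kind provide the objective usage baseline against which self-reported survey and interview data can be triangulated.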


Other techniques that are relevant to the development of IT products and services include 'think aloud' exercises, protocol analysis and critical incident analysis.

Choosing Methods

The particular methods and techniques used should be congruent with the kind of questions being asked and the underlying rationale for the project. For example, if the main rationale for trials is testing and validation of a system that meets a requirements specification, the questions addressed are likely to be confirmatory rather than open-ended: you will want to confirm whether the choices made in response to a requirements analysis are the right ones, and you will be looking for evidence of how the innovation works. In this case, survey methods involving face-to-face and group interviews with key stakeholders are likely to be appropriate, and field study methods (observation, protocol analysis, think-aloud exercises) could provide the in-depth and detailed data necessary to enable modifications or improvements to be identified.

If the rationale is to demonstrate workable systems in different environments, you will be looking to define and elaborate the main issues or factors which affect uptake and dissemination. You will want to obtain and collate the informed judgements/predictions of key participating actors as to the likely implications/outcomes for their organisations (through such methods as surveys and focus groups) and to use the 'real world' experience of validation exercise and organisational or market studies to model and predict the elements, nature and possible magnitudes of likely future costs, benefits and effects.

Forecasting and Planning

eLib projects are engaged in a process of innovation which by its nature is uncertain: evaluation can help manage that uncertainty by aiding forecasting and planning for the future. Evaluation therefore needs to concern itself not only with assessing the outputs of projects (what works, and what does not) but also with the gathering of information that will predict likely future needs and demands and inform future choices.

Focus Groups

A participative, interactive method that is appropriate for this purpose is the focus group. Focus groups are a type of structured group discussion, familiar from market research and now used more extensively in social science field research, in which people drawn from some user population, or a relevant group of stakeholders, meet to discuss different aspects of an innovation.

In eLib projects, focus groups can serve two distinct purposes:

- first, providing feedback on product and service development, and enhancing their usability, and

- second, providing the kind of information that will help projects to model future conditions under which their technology will be adopted or applied, and to guide project direction appropriately.

The membership of the focus group, the time at which they meet, and how they are set up and conducted will be different in each of the two instances.

Focus Groups for Product and Service Development

Projects will be using a variety of techniques to gather systematic, structured feedback on designs, prototypes and systems at key stages of the project lifecycle. The focus group method can complement other data gathering as part of an ongoing process of adjustment and product refinement. Setting up a focus group for this purpose needs careful consideration, particularly in relation to membership. In some projects, it will be appropriate to involve representative groups of intermediate and end-users; in other contexts it may be desirable to involve users who have sufficient technical knowledge and experience of the design issues to be able to communicate usefully with the project team.

Using Focus Groups to Predict Future Responses

The focus group approach is particularly suited to exploring likely reactions to an innovation when launched, and how an innovation could be tailored to the changing requirements of users when it is introduced into their working practices and wider working environment. Through a discussion that is structured by a focus group facilitator, the group can provide market relevant information about the likely usage and demand for a service or product, the desirable attributes of this or later models, the kinds of obstacles or resistances that might be anticipated, and how needs will change as the environment changes.

The advantage of focus groups for this kind of work is that they bring together different stakeholders in a dialogue and exchange of perspectives. In the dynamic, or even conflict, of a group discussion, new data will be uncovered or developed that would not be forthcoming in, say, individual interviews. Examples of such groups might be the drawing together of academics, librarians, student representatives, and those responsible for corporate planning from one or more HEIs to explore the possible implications of an innovative EDD or ODP system, or the drawing together of academics in a particular subject to discuss the uses of an innovation in ANR or new EJs.

Forecasting and Making 'Cases'

Only some eLib projects will be in a position at the end of their project funding to demonstrate 'sustainability' by reference to an established ('paying') user base or market. For the other projects, business plans or applications for continuing funding will require an assessment approach and associated data collection which identifies and models the relationship between various quantified variables (such as running costs, usage, value added to users' activities, willingness to pay) in order to make a 'case' for continuance based on realistic scenarios.

Assessment of Direct Effects

An assessment of direct economic and organisational effects should cover:

- for supplier organisations, any process restructuring or cost implications, as well as expected eventual revenues, return on investment and reputational effects, and

- for user organisations, contribution of the application to process re-engineering and impacts on the organisational value chain, as well as expected eventual cost implications, organisational benefits, opportunities created, and any ultimate effects on overall organisational performance (i.e. in library provision, in teaching, in research).

There are many techniques for valuing the benefit of process or product innovation (although their robustness is often disputed from different perspectives). However, it is not usually possible to assess costs, benefits and effects entirely on the basis of the inevitably partial and limited experience of a pilot or demonstration exercise. Evaluators should therefore, as we have said already, variously:

- collect as much data as possible from the application's pilot (this is critically dependent on correctly designing the validation exercise so as to generate relevant data);

- use the 'real world' experience of the validation exercise to model and predict the elements, nature and possible magnitudes of likely future costs, benefits and effects (these projections may be critical to any subsequent further exploitation of the application); and

- obtain and collate the informed judgements/predictions of key participating actors as to likely implications/outcomes for their organisations.

Assessment of Market Potential

An assessment of market formation/development must address intended or possible innovation dissemination or transfer patterns which channel project effects. These can differ widely in form and locus of activity, each with its own criteria and indicators, which will need to be monitored in quite different ways. For example:

- dissemination may occur through the extension of the demonstration network (enlarging the number of users of the application which has been built and validated), which may require the extension of JISC or individual HEI infrastructure,

- dissemination may occur through the spatial replication of the application, in different HEIs, or

- dissemination may occur, as in traditional product innovation, through the sale of an artefact (a product, a software system, a turnkey solution) again with wide variety in the situations in which it is adapted and used.

More generally, market formation/development will be critically determined by the effects projects have on the perceptions and actions of (existing and embryonic) networks of actors in the broader education and research domain, in other words, on how new communities of users and providers are formed. Such effects are best assessed not just by looking at intentions or even changes in strategy but by examining actual institutional behaviour and investment decisions inspired or influenced by the project. Undertaking such an assessment requires mapping the organisations touched by the project and examining any consequent actual or potential changes in behaviour.

Applying this Operational Approach to eLib Project Areas

The remainder of this section, in the form of Table 1: Examples of Operational Approaches to Project Evaluation by Area, indicates how the operational approach outlined above might be translated into evaluation agendas for projects in each eLib area.

Additional Assistance

A guide such as this cannot cover in detail every aspect of undertaking such evaluations, nor foresee every particular project contingency. There will inevitably be technical aspects of, say, fieldwork design or analysis with which projects may need further assistance. Section 8 describes how additional technical assistance on particular issues or techniques will be made available to projects that require it, and section 11 suggests some useful references.

Table 1: Examples of Operational Approaches to Project Evaluation by Area

Area: Electronic Journals

General

In EJs the main evaluation issues arise from the potential uses of multi-media journals by scholars in different disciplines and the attributes, facilities and systems these require.

Although copyright issues and document formats have been highlighted by many for investigation, these represent just aspects of broader questions (respectively, the composition of entire editorial, production, delivery, charging and payment systems, and the multi-media architecture of journals generally and in specific disciplines).

Consequently, the role of evaluation in these projects lies in learning how to produce multi-media journals, and what multi-media journals are required, through developing actual or prototype EJs.

Formative Role of Steering and Advisory Groups

- Receiving formative feedback on structure and (desirable) content.
- Raising expectations and gaining legitimacy among influential academics in the discipline(s).
- Involving and mobilising publishers.
- Developing models of negotiation and working between publishers and academics.

Structured and Systematic Feedback

- Close analysis of reader and writer behaviour, experiences and reactions through observation of trial users, recording of on-line sessions (eg key stroke analysis), questionnaire surveys and focus group discussions.
- Iterative translation of analyses of user behaviour/experiences/reactions into functional specifications.
- Identification of use patterns (who? when? how often? what for? why not?) by on-line logs, traffic analysis or surveys, for use in forecasting and planning.

Forecasting and Planning

Use of focus groups, modelling, scenario building and other analysis to establish requirements such as:

- functionality/usability: consistency in interfaces; mark-up languages/multi-media structures
- socio-technical: mechanisms for charging, authentication, IPR payment; attributes which 'trigger' market take-off
- added value and 'preparedness to pay'
- revenues and costs
- critical mass for take-off
- confidence to invest
- market creation

Common Elements in Annual Reporting

- General cognitive and communicative needs.
- Discipline-specific cognitive and communicative needs.
- Progress with multi-media/hyper-media structures and document formats, copyright arrangements and charging mechanisms, and standardisation of interfaces.

Area: Digitisation

General

In Digitisation the key evaluation issues are access and delivery.

Formative Role of Steering and Advisory Groups

- (as Electronic Journals)
- Generalisability of models.

Structured and Systematic Feedback

- Organisation of test sites and trials to examine aspects of usability through questionnaire surveys and focus group discussions.

Forecasting and Planning

- Use of focus groups and surveys to establish the quality of resolution and reproduction required, user infrastructure requirements (connections, workstations, printers), and performance/price trade-offs.

Common Elements in Annual Reporting

- Sourcing copyright.
- Techno-economic feasibility.

Table 1 (continued): Examples of Operational Approaches to Project Evaluation by Area

Area: Access to Network Resources

General

In ANR, where projects may represent ongoing services, an evaluation approach is needed which contributes to the continuous improvement of the functionality and content of the service.

There is also a need to look comparatively at the role a subject gateway is playing in the overall provision of information resources in a subject area.

Formative Role of Steering and Advisory Groups

- Receiving formative feedback on usability, functionality and content.
- Raising familiarity and expectations among influential academics in the discipline(s).
- Embedding within wider disciplinary information resource access system(s)/networks.

Structured and Systematic Feedback

- Support for continuous improvement through user surveys and focus groups, including the added value of ROADS technology.
- Identification of use patterns (who? when? how often? what for? why not?) by on-line logs, traffic analysis or surveys.
- Assessment of the impact of training on use and uptake of services through immediate and follow-up formative feedback.

Forecasting and Planning

- Use of interviews, observation (eg case studies) and focus groups to establish the contribution to research and student work in representative departments.
- Modelling of ongoing running costs.

Common Elements in Annual Reporting

- Usability issues in cognitive/interface terms.
- Quality of content.
- Development of revenue strategies such as subscription, central funding or dial-up charges.

Area: Electronic Document Delivery

General

In EDD there are two main distinct but connected evaluation questions. On the one hand, given both technical change and the likely emergence of national EDD services, the tangible value of specific EDD projects will derive in the short to medium term (but not in the longer term) from the beneficial operation of the actual systems developed. On the other hand, the longer term value of the EDD projects will be as a catalyst for changes in library work practices and organisational systems which will be required whatever eventual EDD technologies emerge.

Formative Role of Steering and Advisory Groups

- Raising expectations and developing future scenarios for library services and teaching.
- Mutual adjustments and planning by librarians and teachers on the basis of prototypes.
- Establishing likely functionality/operational contribution.
- Joint determination of techno-economic choices.

Structured and Systematic Feedback

- Analysis of trial site experiences through task and protocol analysis, observation/field studies and focus groups.
- Logging and analysis of transactions.
- Iterative translation of trial site experiences into improved functional specifications.

Forecasting and Planning

- Use of systems analysis, modelling and scenario building to establish infrastructural requirements, actual or potential improvements in library performance, and cost/benefit (including economic gains of moving from 'holdings to access').

Common Elements in Annual Reporting

- Socio-technical models of systems.
- Copyright arrangements.
- Organisational/cultural change.
- Contribution to teaching and research.

Table 1 (continued): Examples of Operational Approaches to Project Evaluation by Area

Area: On Demand Publishing

General

In ODP, as in EDD, the potential contribution to existing library services is well understood, including improving the flow/availability/cost of interlibrary loans, short loans, 'resources for courses' and so on. In ODP the practical task of identifying good techno-economic models of workable systems requires an evaluation approach which both supports learning from innovative activities and aids the development of future scenarios.

Formative Role of Steering and Advisory Groups

- (as Electronic Document Delivery)
- Involving and mobilising teachers and publishers.
- Developing models of negotiation and working between publishers and HEIs.

Structured and Systematic Feedback

- (as Electronic Document Delivery)
- Collection of end user (student, teacher, researcher) feedback on usefulness and performance through surveys and focus groups.

Forecasting and Planning

- Use of focus groups, modelling and scenario building to establish infrastructural requirements, actual or potential improvements in library performance, mechanisms for charging, authentication and IPR payment, and cost/benefit (including economic gains of moving from 'holdings to access').

Common Elements in Annual Reporting

- (as Electronic Document Delivery)

Area: Training & Awareness

General

In T&A the main evaluation requirement arises from the need for continuous formative feedback as training is delivered.

There is also a need to assess the extent to which the T&A projects are achieving eLib's cultural change objectives.

Formative Role of Steering and Advisory Groups

- FIGIT steering and advice.

Structured and Systematic Feedback

- Collection of basic statistics on trainees (numbers, origins, job descriptions, etc).
- Immediate feedback after training sessions, informally and through questionnaires.
- Follow-up feedback interviews (eg by telephone) after a few weeks.

Forecasting and Planning

- Measurement of the effect and potential of the training through case studies of changes in organisational practices consequent on the availability of new skills.

Common Elements in Annual Reporting

- Training needs of different populations.
- Barriers to change in work practices and cultures.
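Several entries in Table 1 call for the identification of use patterns (who? when? how often? what for?) from on-line logs. As a purely illustrative sketch, assuming a hypothetical tab-separated log of user, ISO timestamp and resource (the log format, field names and sample entries are assumptions, not any eLib system's actual format):

```python
# Tabulate who uses a service, at what hours, and which resources,
# from a hypothetical access log: user <TAB> ISO timestamp <TAB> resource.
from collections import Counter
from datetime import datetime

def use_patterns(log_lines):
    """Return (requests per user, requests per hour of day, requests per resource)."""
    per_user, per_hour, per_resource = Counter(), Counter(), Counter()
    for line in log_lines:
        user, stamp, resource = line.rstrip("\n").split("\t")
        per_user[user] += 1
        per_hour[datetime.fromisoformat(stamp).hour] += 1
        per_resource[resource] += 1
    return per_user, per_hour, per_resource

# Illustrative sample entries (invented):
sample = [
    "alice\t1995-11-01T09:15:00\t/journals/ej1",
    "bob\t1995-11-01T09:40:00\t/journals/ej1",
    "alice\t1995-11-02T14:05:00\t/gateway/sosig",
]
per_user, per_hour, per_resource = use_patterns(sample)
print(per_user.most_common())  # heaviest users first
```

Counts of this kind answer the 'who', 'when' and 'how often' questions directly; the 'what for' and 'why not' questions still require surveys or focus groups.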


6. Utilising Evaluation Outputs

There are three main uses of project evaluation outputs:

- in the ongoing formative development of the innovation during the lifecycle of the project,

- in preparing business plans or applications for funding services and other ongoing activities, and other future oriented final reporting, such as technical, functional and organisational specifications for further developments, new R&D agendas, and so on, and

- in informing programme learning/evaluation through formal (annual) reporting.

Beyond this there is the ongoing sharing of experience and learning with other eLib participants and wider communities through participating in workshops, seminars and conferences, through lis-eLib discussions, and through academic and 'trade' articles.

It is important to clarify how and when evaluation outputs will be utilised. For example, how will the timing of field trials contribute to the iterative development of prototypes? Or, at what point will evaluation results be required to inform funding or commercial decisions?


7. Preparing An Evaluation Plan

An evaluation plan should simply identify what the evaluation priorities (questions) are, how they will be investigated, and what kind of results can be expected. The plan should specify how evaluation activities will be arranged, when they will be done and by whom. Any resourcing implications or dependencies should be noted. Evaluation planning should also be closely co-ordinated with the planning of user involvement and of tests, trials and test sites.

The main activities needed for preparing a Plan are:

Main Activities

  1. Structured consultation with the users of the evaluation ('stakeholders') about the main evaluation questions they want answered;
  2. Review of the project plan, so as to phase evaluation deliverables in time with the project's own decision points and stages;
  3. Clarifying the 'purpose' of the evaluation in terms of the main characteristics of the innovation process with which the project is engaged;
  4. Reviewing existing and planned data sources and management information systems, to ensure that the evaluation does not duplicate other efforts;
  5. Reviewing the skills available for evaluation purposes within the project partner organisations.

Content of Plan

The results of these activities will be an Evaluation Plan that:

  1. Defines the priority areas.
  2. Specifies the key evaluation questions that the evaluation will endeavour to answer (as understood at the present time).
  3. Defines the methods and tools to be used to answer these questions.
  4. Shows how the timing of evaluation deliverables will inform the key decisions of the project.
  5. Indicates the mechanisms and procedures to ensure regular feedback to all partners and especially project managers.
  6. Breaks down year one evaluation activities into their component sub-activities, allocating resources and making clear who is responsible for what.
  7. Identifies additional design and planning tasks which will need to be undertaken for evaluation in later stages of the project.

It is expected that projects will already have begun some of their evaluation activities; these should proceed in parallel with the preparation of the Evaluation Plan.

Figure 2: Evaluation Plan Checklist

Evaluation Plan Checklist

  • Does the project have an evaluation plan which clarifies the main activities, purposes and users of the evaluation and assessment activities?
  • Has the project allocated sufficient resources and acquired the requisite human expertise to implement this plan?
  • Does the plan allow that the outputs of evaluation and assessment activities can be incorporated into project decision-making?
  • Will the evaluation and assessment activities identify cost-effective solutions to real user needs in terms of functional effectiveness, institutional or disciplinary relevance and potential for growth, transfer or dissemination?
  • Does the plan clarify the process and timing by which suitable evaluation arrangements and methods will be selected and implemented?


8. Technical Assistance

Technical assistance regarding evaluation will be provided to projects. It may take the form of initial workshops on evaluation; consultations, where needed, on devising individual project self-evaluation strategies; review of project evaluation plans; and detailed/specialist advice, for example on methodologies/techniques, on an 'on-demand'/'as and when' basis.


Workshops

Workshops would be one-day training events based on the Guidelines, organised on either a regional or domain-area basis, in the Spring of 1996.

Initial Consultations

Productive initial consultations on evaluation strategies have already taken place with several projects in the context of project visits. Further consultations could either be provided by

- extending the cycle of visits in the new year to cover additional eLib projects, or by

- providing project 'clinics' in the context of the evaluation workshops.

Review of Evaluation Plans

Review of project evaluation plans will ensure that an adequate minimum provision for evaluation has been made in each project, and will provide advice and suggest amendments where necessary.

Ongoing Assistance

Further ongoing technical assistance would be provided in the main by telephone and electronic mail. Where face to face meetings are required projects would be encouraged to visit the source of technical assistance or attend clinics arranged on the back of other meetings.


9. Annual Reporting

In part because of the strategic nature of the projects selected for funding, project learning will have relevance beyond the immediate project participants. With a consistent framework for project self-evaluation, a great deal of important learning will become available. If such knowledge is to be speedily accessible to the wider domain, rather than eventually seeping through as folk wisdom, it must be captured, systematised and disseminated. This requires an annual reporting structure to feed annual synthetic evaluation reports.

Format of Annual Reports

Projects will be asked to report, in a simple format, on:

- Progress against plan and changes to plan,

- Reasons for changes to plan, including changes to aims and objectives in the light of experience,

- Interim evaluation results, and

- Revised understandings/expectations about the evolution of the innovation.

Projects will be particularly requested to report, in keeping with the programme's overall evaluation preoccupations, on mobilisation/cultural change effects, learning, cost-effectiveness/value added, demand/usefulness/performance, sustainability, and future scenarios.

A framework will be established for the collection of comparable data across the projects in each domain area, to facilitate the aggregation of data and allow cumulative aggregation of results.


10. Other eLib Evaluation Activities

The overall eLib evaluation design is summarised in Figure 3.

Figure 3: eLib Evaluation Framework

Formative Evaluation:

- Guidelines for Project Self Evaluation
- Annual Project Reporting and Synthesis
- Management and Technical Assistance
- Communication and Dissemination

Summative Evaluation:

- Policy Mapping and Policy Outcomes
- Outcomes and Impact Assessment
Cross Project and Area Studies

Important outcomes of eLib are likely to be located at domain level, so joint activities such as thematic evaluation studies across projects within areas will be required. These will be funded from a designated fund, from which area actors and appropriate outside experts will be invited to bid for small evaluation contracts. Involving project-based actors and other members of the HE libraries and information management community will encourage direct involvement, diffuse experience and support the community's ownership of relevant issues.

These area studies will contribute cumulatively, with the other supporting studies and the projects' own evaluation reports, towards the summative evaluation of the programme.

Examples of such evaluation studies might be:

Training and Awareness

1. Mobilisation effects of eLib activities on cultural change in the HE sector. Centred on the Training and Awareness projects and the Concertation and Dissemination activities, but including the mobilisation effects of all the projects, such a study would go beyond simple linear models of cultural change in which the provision of information/training leads to changes in awareness and in subsequent individual behaviour. The study would adopt a more ecological or contextual framework and seek to assess links between T&A and C&D activities and actual changes in work practices, organisational systems and behaviours, the way in which new technologies are integrated into existing services and, more broadly, whole work contexts.

Access to Network Resources

2. A comparative evaluation of the value to researchers and students of the subject gateway approach to accessing network resources. A comparison of the value of the (largely) operational services BUBL and NISS, and SOSIG, OMNI, EEVL and ADAM, and the ROADS technology, with alternative traditional and innovative approaches.

Electronic Journals

3. An evaluation of the potential impact of multi-media journals and journal environments on changing scholarly practices and needs in various disciplines. This study would both extend the scope of some of the evaluation work underway in projects and draw together the diverse disciplinary learning taking place in the different EJ projects.

Electronic Document Delivery

4. A comparative, case study based evaluation of the socio-technical impact of EDD systems on library performance, complementing the Loughborough study on the feasibility and selection of document delivery systems.

On Demand Publishing

5. A case study based ex-ante evaluation of the potential contribution of ODP to teaching and learning. This study would use the ODP systems under development in eLib as case examples of its potential and limitations for improving teaching and learning.

Extra eLib

6. An assessment of the contribution of eLib activities to the overall evolution of UK library services in the context of lifelong learning.

Summative Assessment of eLib Outcomes and Impacts

The direct and indirect outcomes and impacts of a reasonably representative sample of projects will also be assessed, to demonstrate the range of programme benefits and to clarify the likely aggregate impact of the programme as a whole. Such an assessment will investigate, among other things:
- contributions of process innovations to library and HE organisational performance,

- transferability value of innovative services,

- alignment of eLib initiatives with, and influence on, international developments,

- actual or expected behavioural change, especially as regards investment/resourcing decisions,

- programme impact on changing the formative context in which librarians, scholars and students operate,

- beneficial changes in the nature of scholarly communication, and

- contribution to lifelong learning and professional/practitioner life, both formally (in HE contexts) and informally (in self study).


The methodology for this assessment should include:

- a mapping and tracing of network effects of individual projects and linked sets of projects,

- institutional studies of the effects of one or a set of projects on library performance, teaching and scholarly practice, and

- discipline based studies of effects on scholarly practice.

Summative evaluation studies will also address the eLib II integrated demonstrator projects.

For reasons of accountability and credibility external rather than internal summative evaluation will be used.


11. Some Useful References

Fink, A. (ed.) (1995), The Survey Kit. Sage Publications: London. A series of 9 booklets.

Fowler, F., Survey Research Methods (2nd edition), Applied Social Science Research Methods Series. Sage Publications: London.

Krueger, Richard A. (1994), Focus Groups, A Practical Guide for Applied Research. Sage Publications: London.

Patton, M.Q. (1986), Utilisation Focused Evaluation. Sage Publications: London.

Rossi, P.H., Wright J.D. and Anderson, A.B. (eds.) (1993), Handbook of Survey Research.

Rubin, H. and Rubin, J. (1995), Qualitative Interviewing. The Art of Hearing Data. Sage Publications: London.

Sommerlad, E., Danau, D. and Hendrikse, A. (1995), User Involvement in the Development and Application of Learning Technology. The Tavistock Institute: London.

Yin, R. (1984), Case Study Research: Design and Methods. Sage Publications: Beverly Hills, CA.


Feedback Questions

Relevance and presentation of the Guidelines
A1.    Overall, do the Guidelines appear relevant and applicable  
       to your project? 
A2.    Are there specific elements you would find useful to have  
       expanded/developed in more depth? Or, are there 'missing'  
       elements you would like to have included?   
A3.    Do you have any difficulties with the language or terminology  
       used in the Guidelines? 
Approach to evaluation in eLib projects 
B1.    In the Guidelines we suggest an operational approach to project
       evaluation in eLib based on three key components (steering or  
       advisory groups; structured, systematic feedback; and forecasting  
       and planning) and for each outline desirable methodological  
       approaches (surveys, focus groups, and so on).  Taking each of the  
       three components in turn, can you comment on what role, if any, you 
       think they might play in your project's evaluation and what form  
       each might take? 
       Are you contemplating any other evaluation activities (which may
       not be explicitly mentioned in the Guidelines) which you think may  
       be of wider relevance, eg to other projects, either in terms of  
       method/technique or results? 
B2.    In the Guidelines we describe a number of ways in which evaluation  
       outputs will be utilised (in formative development; in preparing  
       business plans and applications for further funding; and in  
       informing programme learning). Have you considered where,  when  
       and in what form evaluation outputs will be required in your project? 
       If so, can you describe this? 
B3.    Are there any resource implications for your project of  
       adhering to the approach laid out in these guidelines which  
       you feel FIGIT should be aware of? 
B4.    In the Guidelines we outline the type of annual reporting eLib  
       will require from projects.  The exact format for reporting has  
       not yet been finalised. Are you comfortable with this approach  
       to reporting or could it create difficulties for your project? 
       Are there specific issues or data items you would find useful  
       if reported on by all projects in your area? 
B5.    In the Guidelines we outline a number of ways in which technical  
       assistance on evaluation may be provided to projects. Do you  
       envisage that such assistance might be of use to your project? 
       If so, do you have a need or a preference for a particular  
       mode of support? 
Plans for your project evaluation 
C1.    Does your project have any agreed evaluation plans? 
       If so, can you briefly describe the main elements? (If you  
       have any documentation on your evaluation plans, besides  
       what is contained in your project proposal, you might wish  
       to attach copies.) 
C2.    Having read the Guidelines, and in so far as they have been  
       settled in your project, could you comment on,  
       - The main purposes of undertaking evaluation in your project?
       - What issues will you be looking at? 
       - Who the different actors are who have a stake in the  
         project and its evaluation? 
       - What evaluation activities will be needed at the different  
         stages of your project lifecycle? 
       - How will evaluation be used and integrated into the project?
       - How will users be involved in the evaluation? 
       - What kinds of evaluation questions should be asked and  
         what range of methods might be appropriate?   
Suggestions about the Guidelines and the programme evaluation 
D1.    Do you have any other suggestions about how the Guidelines  
       might be made more useful to you or to eLib as a whole?   
D2.    Are there any other comments you would like to make about the  
       Guidelines or about evaluation in eLib more generally?      


Letter to Project Leaders

Ref: zz029/jc 1st December, 1995

Dear Project Leaders

Re: Draft Guidelines for eLib Project Evaluation

Please find enclosed a draft (or prototype) of the Guidelines for eLib Project Evaluation. The circulation of this draft version for comment is part of the process of consulting projects on the development of a cross programme approach to project evaluation.

We would appreciate it if you, and perhaps some of your collaborators, could spend some time considering the content, scope and implications of these Guidelines in the context of your project. In particular, we would like you to consider (i) whether any elements of the proposed approach could create difficulties for your project or seem discordant with your project realities, and (ii) whether there are specific elements (perhaps on planning or methods) you would find useful to have expanded or developed in more depth.

Those of you who have already met with us are unlikely to find the proposed content surprising or controversial. However, we would appreciate your comments, general or detailed, preferably by eMail. Alternatively, you may wish to telephone us to discuss matters further.

We intend to contact by telephone all of you whom we have not yet formally met in order to get a detailed response/feedback. We will be contacting you in the next few days to 'book' a time for a telephone conversation or perhaps an audio conference. The conversations will be structured around a set of questions which will be sent to you in advance. In the meantime please do not hesitate to contact us with any immediate concerns or queries.

Any response which you feel may be of general interest to other eLib projects or FIGIT could be communicated through lis-elib.

We look forward to talking to you.

Yours sincerely

Elizabeth Sommerlad (l.sommerlad@tavinstitute.org)
John Kelleher (j.kelleher@tavinstitute.org)

Evaluation in eLib: Phase One - Self-Evaluation/Project Level Evaluation

The Tavistock Institute has been commissioned by the Electronic Libraries Programme to prepare an evaluation framework and project guidelines that will support the evaluation efforts of projects, programme areas and the programme as a whole. Within the limitations of a brief assignment, our task is to consult extensively with programme and project actors as well as other key stakeholders in shaping the eLib evaluation agenda and the way it is implemented. The overall purposes of evaluation in eLib are to ensure that lessons and emerging understandings from the programme are made available and can be utilised widely; to contribute to programme and project success in the course of their lifecycle; and to assess success in achieving programme objectives.

The operational approach to evaluation which has been agreed by eLib comprises a balance between:

  1. self-evaluation by projects of matters of immediate importance to themselves and the programme (for example, user responses, sustainability, contribution to academic research or teaching and so on);
  2. evaluation of area wide developments (for example, intellectual property issues in electronic journals or cost/benefit and pricing questions in electronic document delivery) whether by sets of projects or external evaluators; and
  3. evaluation of the overall contribution and impact of the programme. Thus one task of our current work is to establish which evaluation issues should be prioritised, which should be dealt with through projects' own activities, and which at programme level.

We have already met with a number of projects in each of the programme areas and reviewed documentation on all of your projects, so as to gain an overview of evaluation plans, activities and priorities across the programme. The discussions we have had with projects so far, together with our analysis of project and programme documentation and consultations with other programme stakeholders, have informed the development of our Draft Guidelines for eLib Project Evaluation.
