Collection Description Focus, Workshop 5

UKOLN

Collection-Level Description and the Information Landscape
Users evaluate strategies for resource discovery

Thursday 30th January 2003
Hughes Hall, University of Cambridge


Discussion Group 2

How to set up a user evaluation?

Purpose of a user evaluation:

  • To validate thoughts / ideas
  • To maintain emphasis on users
  • To keep track of changing requirements / impact of technological development
  • Funding considerations (e.g. might be required by funders)
  • Benchmarking (compare with best practice elsewhere and adapt accordingly)
  • To assess value of content

What to evaluate:

  • A plan needs to be drawn up first, setting out what to evaluate
  • The web interface, the relevance of information retrieved, and the way users search for information are all important
  • Depth of information
  • The use of terminology (varies by domain and by generation)
  • The user base (may differ from expectations; could be anyone if the system is web-based)
  • Difference between expert and novice users
  • Percentage of users who use advanced search features versus those who choose free-text search (see the log-analysis sketch after this list)
  • Whether funding is well spent: direct it towards what users want
  • What users do and why
  • Phase of evaluation: before, during or after development? (Users always need to be shown something tangible, though: a demonstrator or prototype)
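
On the advanced-versus-free-text point above, the short Python sketch below tallies search types from a query log and reports each as a percentage of all searches. The tab-separated log format (timestamp, search type, query) and the file name searches.log are assumptions for illustration, not the format of any particular system.

    # A minimal sketch: tally search types from a hypothetical query log.
    # Assumed format: timestamp<TAB>search_type<TAB>query, one search per line.
    from collections import Counter

    def search_type_breakdown(log_path):
        """Return each search type as a percentage of all logged searches."""
        counts = Counter()
        with open(log_path, encoding="utf-8") as log:
            for line in log:
                fields = line.rstrip("\n").split("\t")
                if len(fields) >= 2:
                    counts[fields[1]] += 1  # second field = search type
        total = sum(counts.values())
        return {kind: 100.0 * n / total for kind, n in counts.items()} if total else {}

    if __name__ == "__main__":
        for kind, pct in sorted(search_type_breakdown("searches.log").items()):
            print(f"{kind}: {pct:.1f}%")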

Comparative evaluations:

  • Internal: compare with systems in use, compare with non-web-based systems, compare different versions of a design
  • External: compare with similar systems
  • Set up a control group with specific tasks to gather data on other search sites

Who are the users:

  • Users differ per domain
  • On-line users differ from physical visitors (to museums, libraries, etc.)
  • There are differences between traditional and on-line users
  • What about non-users and socially excluded groups?

How can you get users to participate:

  • Bribe them! (at the very least cover travel expenses; small gifts help)
  • Organise an e-group
  • Set up an on-line forum
  • Run focus groups

Barriers to participation:

  • When using on-line questionnaires, it is difficult to get users to participate
  • It is difficult to keep track of visitors to a web site unless they sign in
  • Clear navigation is necessary, and the form needs to be in an obvious place on the web site
  • The raw number of hits does not provide useful information; more detailed web statistics are needed (see the sketch below)
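
On the last point, the sketch below illustrates the kind of detail a bare hit count lacks. It assumes an NCSA/Apache "combined" access log; the file name access.log, the list of asset extensions and the page-view heuristic are illustrative assumptions, and without sign-in or cookies the distinct-host count is only a rough proxy for the number of visitors.

    # A minimal sketch: distinguish raw hits, page views and distinct hosts
    # in an NCSA/Apache "combined" access log.
    import re

    LOG_LINE = re.compile(r'^(\S+) \S+ \S+ \[[^\]]+\] "(?:GET|POST) (\S+)')
    ASSETS = (".gif", ".jpg", ".png", ".css", ".js", ".ico")  # assumed asset types

    def summarise(log_path):
        hits, page_views, clients = 0, 0, set()
        with open(log_path, encoding="utf-8", errors="replace") as log:
            for line in log:
                match = LOG_LINE.match(line)
                if not match:
                    continue
                hits += 1
                address, path = match.groups()
                clients.add(address)
                if not path.lower().endswith(ASSETS):
                    page_views += 1  # requests for pages rather than assets
        print(f"raw hits:       {hits}")
        print(f"page views:     {page_views}")
        print(f"distinct hosts: {len(clients)}")

    if __name__ == "__main__":
        summarise("access.log")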

What kind of evaluation?

  • Focus groups (time-consuming; they need to be maintained and managed), questionnaires and interviews are the most common
  • Others include evaluation in a controlled laboratory, in a natural work setting, or as a field study
  • Evaluations can sometimes be intrusive, and users may not behave the way they normally do

Size of sample, stratified samples, quantitative and qualitative data:

  • Sample size is cost-related, but the sample needs to be representative
  • Samples need to represent the different groups of users (a stratified-sampling sketch follows this list)
  • Interpreting qualitative data is not a trivial task
  • A structured approach is necessary
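
The Python sketch below shows one way to draw a stratified sample of evaluation participants. The strata ("student", "academic", "librarian") and the participant pool are invented for illustration; a real project would derive its strata from its own user groups.

    # A minimal sketch of stratified sampling for recruiting participants.
    import random

    def stratified_sample(pool, stratum_key, per_stratum, seed=1):
        """Sample up to per_stratum people from each stratum in pool."""
        rng = random.Random(seed)  # fixed seed so the draw is repeatable
        strata = {}
        for person in pool:
            strata.setdefault(person[stratum_key], []).append(person)
        sample = []
        for members in strata.values():
            sample.extend(rng.sample(members, min(per_stratum, len(members))))
        return sample

    if __name__ == "__main__":
        # Invented pool: thirty people spread over three user groups.
        pool = [{"name": f"user{i}", "group": g}
                for i, g in enumerate(["student", "academic", "librarian"] * 10)]
        for person in stratified_sample(pool, "group", 3):
            print(person)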

What to do with the information collected?

  • Use the findings to inform and improve design!

To summarise:

  • Do not make assumptions about your users
  • User evaluation is not a one-off exercise but an iterative process

Some useful sources about Usability & User Evaluation:

  1. Top Ten Web-Design Mistakes of 2002 by Jakob Nielsen
    http://www.useit.com/alertbox/20021223.html
  2. Recruiting Test Participants for Usability Studies by Jakob Nielsen
    http://www.useit.com/alertbox/20030120.html
  3. Designing Web Usability: The Practice of Simplicity by Jakob Nielsen
    http://www.useit.com/jakob/webusability/
  4. Everything about usability:
    http://usability.gov/
  5. A reading list on user interface evaluation:
    http://www.hcibib.org/readings.html#toc-evaluation
  6. Report on the evaluation undertaken by the National Archives User Research Group (NANURG), March 2002
    http://www.resource.gov.uk/information/respubs2002.asp#nanurg
 
