Exploiting The Potential Of Wikis
Discussion group A5: Wiki Power Users


Discussion Group A5: "The Power User"

Discussion group A5 has the theme "The Power User". Note this session will be held only if there are sufficient numbers.

Suggested topics for discussion:

  • What are the main challenges related to the provision and use of Wikis from your perspective as a Wiki power user?
  • In what areas can Wiki power users support effective use of Wikis?
  • How can our user communities best benefit from the expertise of power users?

Issues you might like to consider include:

  • Getting data into and out of Wikis
  • Migration between Wikis (including issues such as markup standards, application logic, etc.)
  • Quality of HTML from Wikis
  • Wiki URIs (conventions, length, application dependencies, ...)
  • Inter-Wiki interoperability (transclusion, syndication, etc.)
  • Wiki access and authentication issues
  • Managing risks (e.g. open access Wikis)
  • ...

Report

Add a report on the discussion group below:

The group's discussions covered a number of the above points in general terms, looking specifically at the following:
  1. Quality of content
  2. Exporting the content of a Wiki
  3. Metadata issues
  4. Engagement and contribution (of and from students)
  5. Bots
1. Quality of content held in a Wiki: beyond the content itself, there is the way it will be displayed, which is controlled by how well the mark-up language is used. Templates were felt to be better for students to use than standard HTML created via some external editor (if only because there would be some consistency), although the user needs to be aware that, unlike traditional HTML, every character can count and spaces take on specific meanings. Indeed, students can and will add their own templates.
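
To illustrate why templates were felt to give more consistent output than hand-written HTML, here is a minimal Python sketch that expands a hypothetical {{cite|...}} template into a single fixed HTML rendering. The template name and its fields are invented for the example and do not come from any particular Wiki engine.

    import re

    # Hypothetical template: {{cite|author=...|title=...|year=...}}
    # Every page that uses it gets the same HTML, so presentation stays consistent.
    CITE_PATTERN = re.compile(r"\{\{cite\|([^}]*)\}\}")

    def expand_cite(wikitext: str) -> str:
        def render(match: re.Match) -> str:
            fields = dict(part.split("=", 1)
                          for part in match.group(1).split("|") if "=" in part)
            return '<cite>{author}, "{title}" ({year})</cite>'.format(
                author=fields.get("author", "Unknown"),
                title=fields.get("title", "Untitled"),
                year=fields.get("year", "n.d."))
        return CITE_PATTERN.sub(render, wikitext)

    print(expand_cite("See {{cite|author=Smith|title=Wikis in Teaching|year=2006}}."))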

There are of course other mark-up languages that should be considered, such as XML, SVG and CML, and there can be issues around the use of these.

There is also the trust factor, which can lead to issues around the engagement of staff and students (although, as the group later concluded, Wikis by their nature are somewhat self-repairing).

2. Exporting the contents of a Wiki - many Wikis include an export facility, usually as RSS 2.0, but this is often based on a weak XML schema, and so is only really suitable for feeding the information into another similar system.
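
As a rough sketch of what working with such an export involves, the Python fragment below reads a MediaWiki-style XML dump and pulls out page titles and raw text. The element names assume the MediaWiki export format and the file name is a placeholder; other Wiki engines use different schemas.

    import xml.etree.ElementTree as ET

    def pages_from_export(path):
        """Yield (title, wikitext) pairs from a MediaWiki-style XML export."""
        title, text = None, None
        for _, elem in ET.iterparse(path, events=("end",)):
            tag = elem.tag.rsplit("}", 1)[-1]   # drop any XML namespace prefix
            if tag == "title":
                title = elem.text
            elif tag == "text":
                text = elem.text or ""
            elif tag == "page":
                yield title, text
                elem.clear()                    # keep memory use low on large dumps

    for name, body in pages_from_export("export.xml"):   # placeholder file name
        print(name, len(body))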

3. Metadata - whilst its use is vital, it is often problematic for the user (they may view it as a chore and so avoid doing it accurately, if at all). One answer is automated production (possibly via a Bot - see part 5 below), but this is itself problematic. Semantic templates, whereby the Wiki learns, look very promising and can, for example, work via an automated approach in the background with no human intervention (and so no risk of human error!).

At present the results are generally 80% useful (maybe more in specific fields - see below); the remaining 20% (mainly based around logical inconsistencies) is still dependent on human intuition beyond the current capabilities of the systems that do exist.

Ideally what we need is a library of semantic templates that can themselves call on human help when human understanding is required, and this may well result in at least 95% accuracy (along the lines of the Mechanical Turk concept set out below).

The value of metadata is increasing, but as systems develop we run the risk of making them difficult to use, and a major problem will be getting the right mix of usability and resilience (i.e. we would not wish to see something so secure that it becomes difficult to use).
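
A minimal sketch of the kind of background metadata production discussed above, assuming nothing about any particular Wiki engine: a small routine derives a few simple Dublin Core-style fields from a page's raw text. The field names are illustrative only, and anything needing judgement is deliberately left out - this is the "easy 80%".

    import re
    from datetime import date

    def derive_metadata(title: str, wikitext: str) -> dict:
        """Derive a few simple metadata fields from raw wiki text."""
        links = re.findall(r"\[\[([^]|]+)", wikitext)     # internal [[links]]
        words = re.findall(r"\w+", wikitext)
        return {
            "dc:title": title,
            "dc:date": date.today().isoformat(),          # harvest date, not authorship
            "dc:relation": sorted(set(links)),
            "word_count": len(words),
        }

    print(derive_metadata("Example page", "Intro text linking to [[Another page]]."))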

4. Engagement and contribution are also factors to be considered - e.g. how can we encourage students to participate and use software like Wikis, especially in situations where they prefer to be spoon-fed? Indeed, will Web 2.0 tools, which by their nature are dependent on human-to-human interaction, be of use here? Ironically, the fact that Wikis are so free-form may help.

Widening this out, are Web 2.0 models really going to be that effective for use in educational institutions? Our group doesn't pretend to know the answer, but it is worthy of debate nonetheless.

Trustability is another issue - whilst trust in the content of Wikipedia, for example, is quite high, there are occasions when, through either misconception or a deliberate act, some of the content can be incorrect. The way forward might be along the lines of http://www.citizendium.org/, where a panel of trusted experts such as university academics monitor and create content.

5. Bots can be useful as they can, for example, work through pages correcting spelling errors. Indeed, Bots may be the answer for automatically creating metadata. They are, however, limited in areas such as dealing with logical inconsistencies, where human intervention will be helpful.
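
The spelling-correction idea is simple enough to sketch. The Python fragment below applies a dictionary of known misspellings to a set of pages, which is roughly how many typo-fixing Bots work; how pages are actually fetched and saved depends on the Wiki's own API, so that part is left as a placeholder.

    import re

    # A tiny dictionary of known misspellings; a real Bot would load a much
    # larger, community-maintained list. Capitalisation handling is omitted.
    TYPOS = {"cemical": "chemical", "recieve": "receive", "teh": "the"}
    PATTERN = re.compile(r"\b(" + "|".join(map(re.escape, TYPOS)) + r")\b",
                         re.IGNORECASE)

    def correct_typos(text: str) -> str:
        """Replace whole-word occurrences of known misspellings."""
        return PATTERN.sub(lambda m: TYPOS.get(m.group(0).lower(), m.group(0)), text)

    pages = {"Sample": "Teh article discusses cemical markup."}   # placeholder page store
    for name, body in pages.items():
        pages[name] = correct_typos(body)
    print(pages["Sample"])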

Dr Peter Murray-Rust has developed a GREP of some 3,000 lines which is said to be 90% accurate for checking the content of chemical articles. Although very much textual at present, a group is looking at expanding this to check diagrams.

Then there is Amazon's Mechanical Turk, which calls on a database of human experts when it meets a problem it finds to be unresolvable. This in turn will help us to appreciate the 5% - 10% that we need to move towards 100% accuracy (knowing what we don't know, to paraphrase Donald Rumsfeld).
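
As a very rough sketch of the human-in-the-loop idea behind this approach (it does not use Amazon's actual service or API), an automated checker can simply set aside anything it cannot resolve and queue it for a human expert; the checking rules here are purely illustrative.

    def check_page(title: str, text: str):
        """Return (verdict, reason); a verdict of None means the Bot cannot decide."""
        if not text.strip():
            return "reject", "empty page"
        if "TODO" in text:
            return None, "unfinished content needs human judgement"
        return "accept", "passed automated checks"

    def triage(pages: dict) -> list:
        """Run automated checks and queue undecidable pages for human review."""
        human_queue = []
        for title, text in pages.items():
            verdict, reason = check_page(title, text)
            if verdict is None:
                # In a Mechanical Turk-style set-up this is where the item would
                # be posted as a task for a pool of human experts.
                human_queue.append((title, reason))
        return human_queue

    print(triage({"Page A": "Finished text.", "Page B": "TODO: verify formula"}))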