National Centre for Software Technology, Mumbai, India
Phone: +91 (22) 6201606
Fax: +91 (22) 6201606
Abstract: One of the problems with using hypermedia for educational applications is that the learner needs a good conceptual map of the domain being taught in order to use the system effectively. This is problematic, since a learner may be exploring the domain for the first time, and in a pure hypermedia system such a user often gets lost.
In this paper, we present a framework for concept-level modelling of a domain being taught with hypermedia. Based on this modelling, the system determines which concepts the learner can go through. Consequently, at any given point in time, only a selected part of the hypermedia material is active for the user. The system therefore provides a guided learning environment within the hypermedia context.
Keywords: student modelling, concept based instruction, adaptive instruction
There has, understandably, been a great deal of interest in the WWW for educational applications. There are a number of advantages in using it for education, which have been enumerated elsewhere. However, one must realize that the WWW is just a tool which can be harnessed for education. The effectiveness of a system using it depends on the quality of the underlying material and the pedagogical framework used in developing the system.
This paper describes a concept level framework for creating hypermedia educational applications. The framework imposes a certain structure on the creator of the courseware. More importantly, it restricts the 'exploration' of the learner to the material he is most likely to learn successfully.
In the paper, we describe how courseware is created within the framework, how the user goes through the material, and an example of a prototype system which has been developed on the WWW using the framework. As one will see, the framework has the advantage of simplicity and domain independence.
In our framework the creator of the hypermedia courseware needs to carry out the following steps:
The way a system uses the framework is as follows:
The concept hierarchy is a prerequisite hierarchy in which the nodes represent the concepts in the domain and the links represent prerequisite relationships. The first step is to list the concepts in the domain; this involves analysing the domain to identify them.
To maintain the simplicity of the framework, we treat all nodes as identical and do not distinguish between them in terms of any categories. Further, the framework does not impose a constraint on how a particular concept is taught.
Once the concepts for a domain are enumerated, one needs to create a hierarchy to indicate the prerequisite relationships in the domain. For example, if in a domain the concepts are A, B, C, D and E, the following could be the prerequisite hierarchy for the domain:
        A
       / \
      B   C
         / \
        D   E
This hierarchy indicates that concepts B and C are prerequisites of concept A and concepts D and E are prerequisites of concept C.
In many domains one will not get such a clean hierarchy; there will normally be clusters of concepts within the domain. In such cases, however, the 'hierarchy' can be completed by adding virtual links between the clusters.
In many cases, one may get more of a network than a hierarchy. Even this can be handled within the framework as long as nodes of the network are connected via prerequisite links.
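For illustration, such a prerequisite network can be represented as a mapping from each concept to its prerequisites, with a check that the prerequisite links contain no cycles. This is only a sketch in Python (the actual system was implemented in Perl; all names here are illustrative):

```python
# Prerequisite network for the example domain: each concept maps to
# the set of concepts that must be learnt before it becomes active.
PREREQUISITES = {
    "A": {"B", "C"},
    "B": set(),
    "C": {"D", "E"},
    "D": set(),
    "E": set(),
}

def is_acyclic(prereqs):
    """Check that no concept is its own direct or indirect
    prerequisite, i.e. the prerequisite links form no cycles."""
    state = {}  # concept -> "visiting" (on current path) or "done"

    def visit(concept):
        if state.get(concept) == "visiting":
            return False          # cycle found on the current path
        if state.get(concept) == "done":
            return True
        state[concept] = "visiting"
        if not all(visit(p) for p in prereqs.get(concept, ())):
            return False
        state[concept] = "done"
        return True

    return all(visit(c) for c in prereqs)
```

A network with a cycle (A requires B and B requires A) would be rejected by this check, since no valid learning order exists for it.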
A number of authors have argued for the use of student modelling in hypermedia systems [for example, 2].
We use an overlay model to represent the student's knowledge in the system. The overlay model contains a list of all the concepts in the domain (the ones which were enumerated for the concept hierarchy). For each concept there is a confidence factor associated with it. We use a variant of the MYCIN model for uncertainty handling in order to represent and combine confidences for a concept. This model is widely used in the area of expert systems.
The confidence factor for a concept can range from -1.0 (person does not know the concept) to 1.0 (person knows the concept). All concepts in a student model have an initial confidence of 0.0 (before the student takes any of the tests).
Each question which tests a concept has two confidence factors associated with it. One factor indicates the confidence with which a person can be assumed to know the concept, if he gets that particular question correct. The other factor indicates the confidence with which a person can be assumed not to know the concept if he gets it wrong. Both these have the range 0.0 to 1.0. We refer to the two confidence factors associated with a question as CFr and CFw respectively. These confidence factors can be qualitatively given by the test designer and mapped to numerical values by the system.
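The choice of update value described above can be expressed directly. A minimal sketch in Python (the function name is ours, not from the original system):

```python
def signed_cfq(cfr, cfw, answered_correctly):
    """Return the CF contribution of a question: +CFr for a correct
    answer, -CFw for a wrong one. CFr and CFw both lie in [0.0, 1.0]."""
    return cfr if answered_correctly else -cfw
```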
The system uses one of the following formulae from the MYCIN model to compute the new CF for a concept (CFnew) based on the old CF (CFold) and the CF contribution of the question which has been presented (CFq). If the student gets the question right, CFq will be CFr; if he gets it wrong, it will be -CFw.
CFnew = CFold + CFq (1 - CFold)                      .. (1)  when CFold, CFq > 0
CFnew = CFold + CFq (1 + CFold)                      .. (2)  when CFold, CFq < 0
CFnew = (CFold + CFq) / (1 - min(|CFold|, |CFq|))    .. (3)  otherwise
For instance, consider the case where the candidate has been given three questions testing a concept 'A'. Assume that the CFr values for the concept for these three questions are 0.2, 0.6 and 0.3, and that the CFw values are also 0.2, 0.6 and 0.3.
If the person answers the first question correctly, the confidence in the concept becomes: 0.0 + (1 - 0.0) * 0.2 = 0.2 (using formula 1; CFold is initially 0.0). If the person answers the second question correctly, the confidence becomes: 0.2 + (1 - 0.2) * 0.6 = 0.68 and if the third question is correct it becomes: 0.68 + (1 - 0.68) * 0.3 = 0.776.
On the other hand, consider the case where the candidate gets the first two questions correct and the last question wrong. The confidence from the first two questions will be 0.68. The confidence after the last question, will become: (0.68 - 0.3)/(1 - min(0.68, 0.3)) = 0.38/0.7 = 0.54 (using formula 3).
So the model increases the confidence for a concept every time a person gets a question on that concept correct and reduces it for each question the candidate gets wrong. At the same time the model ensures that the confidence always lies between -1.0 and 1.0.
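The update rule above can be sketched in Python (a minimal illustration of the MYCIN combining formulae; the function name is ours):

```python
def combine_cf(cf_old, cf_q):
    """Combine an existing confidence with a question's contribution
    using the MYCIN formulae. Both arguments lie in [-1.0, 1.0] and
    so does the result."""
    if cf_old > 0 and cf_q > 0:
        return cf_old + cf_q * (1 - cf_old)                       # formula (1)
    if cf_old < 0 and cf_q < 0:
        return cf_old + cf_q * (1 + cf_old)                       # formula (2)
    return (cf_old + cf_q) / (1 - min(abs(cf_old), abs(cf_q)))    # formula (3)
```

Replaying the worked example: three correct answers give confidences 0.2, 0.68 and 0.776 in turn, while two correct answers followed by one wrong answer bring the confidence down to about 0.54, as computed above.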
We have used a similar form of modelling in a remedial mathematics system named Mathemagic. A preliminary experiment indicates that the model helps the system provide effective focussed remediation to students. Students were able to improve their performance in topics in mathematics after relatively short interaction.
At the top level the user is shown the list of concepts for which material is available in the system. These concepts are classified into three categories: Concepts Learnt, Active Concepts, and Inactive Concepts.
Initially, when the user starts interacting with the system, the Concepts Learnt list will be empty. Concepts which do not have prerequisites will be in the Active Concepts list, and the remaining concepts will be in the Inactive Concepts list. The user is allowed to go through any of the material connected with the Active Concepts. While the user goes through this material, the system builds a concept model for him. If the confidence factor for a concept goes beyond a certain threshold (say 0.6), the concept is moved from the Active Concepts list to the Concepts Learnt list. Concepts in the Inactive Concepts list which had that concept as a prerequisite then move to the Active Concepts list, once all their other prerequisites have also been learnt. This proceeds until all the concepts have moved into the Concepts Learnt list.
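The categorization described above can be sketched as follows. This is only an illustration under the assumption that prerequisites are stored as sets per concept (the actual system handled this with Perl and CGI scripts); the 0.6 threshold is the example value from the text:

```python
def categorize(prereqs, confidences, threshold=0.6):
    """Split concepts into the three top-level categories.

    prereqs: mapping concept -> set of prerequisite concepts.
    confidences: mapping concept -> current confidence factor.
    """
    # A concept is learnt once its confidence crosses the threshold.
    learnt = {c for c, cf in confidences.items() if cf >= threshold}
    # A concept is active when it is not yet learnt but all of its
    # prerequisites are.
    active = {c for c in prereqs
              if c not in learnt and prereqs[c] <= learnt}
    # Everything else is inactive.
    inactive = set(prereqs) - learnt - active
    return learnt, active, inactive
```

With the example hierarchy, an empty model leaves B, D and E active and A and C inactive; once D and E cross the threshold, C becomes active as well.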
If the user does not reach the required threshold for a concept, he will need to go through the material for that concept again, retake the questions, or both. This, in a sense, enforces mastery learning of the concept.
Some of the issues which we have encountered while using the framework are:
Ideally, the number of concepts should be limited to between 20 and 40 for a reasonably complex domain. If one has very fine-grained concepts, designing the questions and the material for each concept becomes more difficult. If, on the other hand, the number of concepts is very small (say 5), the concepts may be too abstract for effective modelling.
Designing the right types of questions is a non-trivial task. Obviously the questions should be related to the material covered for that concept in the system. They should also test the concept as closely as possible and be sharply focussed. The difficulty levels of the questions could however vary.
Internally the system uses confidence factors between -1.0 and 1.0. From the viewpoint of the designer, however, it is easier to select qualitative statements such as ``This question is a strong tester of the concept'', which in turn map to underlying values. This is the approach we have used.
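Such a mapping might look like the following. The specific labels and numeric values here are illustrative assumptions, not the ones used in the system:

```python
# Hypothetical mapping from a designer's qualitative judgement of a
# question to an underlying confidence value (used for both CFr and
# CFw); the labels and values are examples only.
QUALITATIVE_CF = {
    "weak tester": 0.2,
    "moderate tester": 0.4,
    "strong tester": 0.6,
    "very strong tester": 0.8,
}

def cf_for(judgement):
    """Look up the numeric confidence for a qualitative judgement."""
    return QUALITATIVE_CF[judgement]
```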
The number of questions depends on how well the questions test the concept. Obviously, it would be inadequate to have just one question on a concept; typically 3 or 4 would be needed, if not more. The important point is that the confidence factor should be capable of crossing the threshold even if the learner does not get every question right.
We have implemented this framework in a system which is being developed by the Open University (Milton Keynes) and NCST and runs on the WWW. The domain of the system is the human brain, and the material has been classified into 34 concepts. The system, however, is being used more as a remedial system. The only difference from what has been discussed earlier is that the questions on a concept, instead of being embedded in the material, are given as soon as the user selects the concept. If, after attempting the questions, the user exceeds the threshold, he is deemed to know the concept and is not given the material corresponding to it. If, on the other hand, he does not reach the threshold, he is allowed to browse through the material on that concept to try to gain a better understanding of it. Initial trials of the system with degree college students in India were positive.
The implementation of our framework on the WWW was relatively straightforward. The diagnostic and modelling components of the system and the dynamic generation of links to material were handled using Perl and CGI scripts. Each concept had a top-level HTML file which was dynamically linked when a person chose to browse that concept. In addition, the dynamic categorization of the concepts was handled by Perl programs.
Concepts can refer to higher-level concepts in the material, but there cannot be links to these concepts from within the material. To improve the responsiveness of the system while running the experiment, we kept the video and audio files on a local server and the HTML and graphics files on the HTTP server.
The framework described in this paper fits neatly into a hypermedia environment such as the WWW. It has the advantages of simplicity and domain independence. The hope is that by limiting the number of concepts a user can go through at any given time, the system makes the learning experience more effective and useful. The framework also enforces a discipline on the creator of the material which would be useful in the creation of learning systems using hypermedia in general or the WWW in particular.
[1] Kiyoshi Nakabayashi, Yoshimasa Koike, Mina Maruyama, Hirofumi Touhei, Satomi Ishiuchi and Yoshimi Fukuhara. A Distributed Intelligent-CAI System on the World-Wide Web. In David Jonassen and Gordon McCalla (eds), Proceedings of the ICCE-95 Conference, AACE, pp. 214-221.
[2] John Self. The Ebb and Flow of Student Modeling. In David Jonassen and Gordon McCalla (eds), Proceedings of the ICCE-95 Conference, AACE, pp. 40-
[3] KSR Anjaneyulu, RA Singer and R Harding. Usability Studies of a Remedial Multimedia System. To be sent for publication.
[4] Parvate Vishakha, Anjaneyulu KSR and Rajan P. Mathemagic: An Adaptive Remediation System for Mathematics. Accepted for publication in the Journal of Mathematics and Science Teaching, AACE, USA.
[5] Shortliffe EH and Buchanan BG. A Model of Inexact Reasoning in Medicine. Mathematical Biosciences, Vol. 23, pp. 351-379, 1975. Also in BG Buchanan and EH Shortliffe (eds), Rule-Based Expert Systems: The MYCIN Experiments of the Stanford Heuristic Programming Project, Addison-Wesley, pp. 233-262, 1984.
Concept Level Modelling on the WWW