
The Requisites of a Question-Management System

September 2, 2004




Contents
The Art of Making Questions
People are Different, Questions Are Too
Some Policies Used By Teachers While Assembling Tests
Authorship Control
Guidelines About the Creation of a Question-Management System
Summary

When we think about educational institutions, one of the most traditional ways of measuring understanding is the use of questions, organized in assignments and tests. Teachers around the world share the challenge of creating a good set of questions about their subjects, and also that of updating this database periodically to avoid repetitive or obsolete questionnaires--a demanding and time-consuming task. Students, in turn, are supposed to use a question collection to improve their skills or, at least, to prove they have learned the subject at hand. Observing the behavior of teachers facing this challenge suggests the need for a tool to facilitate their work.

At first sight, the design of such a tool may seem vague and complicated, but teachers with classroom experience share a strong sense of its requirements. Combining these viewpoints with some software-engineering concepts provides a good guide for the development of Quaestio, the question-management module of the Schoolbus Project.

The first section of this article presents an overview of the usage of questionnaires by teachers and students, as well as some ideas about the quality of questions. The following sections bring real experience to bear on question management, enumerating some strategies teachers use to select the questions that will comprise a test or an assignment. The final section suggests guidelines for creating a question-management system--a software design that will be discussed in detail in a future article.

The Art of Making Questions

There are several policies teachers create to select which questions to use on a test, an assignment, or even an oral exam. These different approaches reflect the teacher's culture, the subject matter, the students' profile, and several other minor criteria. Once we recognize that a question-selection policy is an individual choice, the best we can do is sketch an educational scenario in which to study the nature of the questions; i.e., their features and purposes.

Inspired by the book Tools for Teaching by Barbara Gross Davis, we will consider the following criteria for classifying questions (a sketch of a matching data model follows the list).

  1. Question type
    • Multiple choice: The question comes with a set of possible answers, from which the student should choose the correct one(s). This kind of question supports automatic correction and it is commonly used to produce assignments and tests that cover many topics at once, due to the short time the student has to answer each question.
    • True/false: The question suggests binary reasoning, where the student must agree with or reject the text of the question. This kind of question is very suitable to automatic correction, and is usually associated with teaching strategies based on knowledge recall.
    • Descriptive (essay): This most sophisticated class of question forces students to express themselves in their own words. This kind of question doesn't support automatic correction and it demands a lot of time to be created, answered by the students, and corrected by the teacher.
  2. Skill metrics
    • Knowledge: The question asks the students to reproduce the facts, principles, and procedures they have learned during the semester. This class of question is related to the student's capacity to memorize data, and is used by teachers in order to verify their own ability to transfer knowledge and guide their pupils' studies.
    • Comprehension: The question asks the student about the concepts underneath the facts; i.e., the ability of the student to recognize a concept in different situations.
    • Application: The question asks the student to solve problems, applying concepts and principles to new situations. It is commonly observed in courses related to logical and mathematics subjects, such as computing and engineering.
    • Analysis: The question verifies the student's ability to distinguish between facts and inferences.
    • Synthesis: The question integrates learning from different areas or requires students to solve problems through creative thinking.
    • Evaluation: The question asks the student to judge and assess.
  3. Level of difficulty: Instead of using a qualitative way to classify the difficulty of the questions, such as easy, hard, or intermediate, we prefer to assign grades to the questions. This approach provides a more precise way for the teacher to establish the differences between two or more questions; i.e., a question with a lower grade is easier than one with a higher grade. We set a difficulty scale from 0 to 10, where 0 is the grade of a very easy question and 10 is the grade of a challenging one.
  4. Time to respond: When a teacher creates a question, he estimates the time students will spend to answer it. The effort needed to answer a question is commonly related to its grade of difficulty (see above), but we prefer to treat this feature as a separate criterion, because sometimes an easy question may demand a large amount of writing or need an extensive explanation of the concepts involved. We will use minutes to measure the time a student needs to answer a question. The value 0 can be used to flag questions for which an estimated time is not applicable.
  5. Applicability: The situation in which a question can be applied may vary according to the time a student has to answer it, or the format of the questionnaire in which it will be used: oral assessment, remote tutorial, homework, final exam, etc. To classify the applicability of a question, we will adopt the following criteria:
    • Tests and Examinations: Questions with this designation are the most comprehensive form of testing; i.e., the common way the teacher verifies the student's knowledge: an enumeration of questions printed on a piece of paper.
    • Quizzes: Questions that can be answered in a short time, usually multiple choice or true/false questions. This kind of question can be widely used by online training systems, such as tutorials on the Web.
    • Assignments (homework): Material to be dealt with at home, requiring the student to spend a long time and do research to answer properly. An assignment question supposes a large effort by the student and it is not appropriate for tests.
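Taken together, the five criteria above map naturally onto a simple data structure. The Java sketch below is a hypothetical first pass under the assumptions stated in the list (grades in [0, 10], time in minutes with 0 meaning "not applicable"); names such as Question and Skill are illustrative only, not part of any finished Quaestio design:

import java.util.EnumMap;
import java.util.EnumSet;
import java.util.Map;
import java.util.Set;

// Hypothetical sketch of the classification criteria; not the final Quaestio model.
public class Question {

    public enum Type { MULTIPLE_CHOICE, TRUE_FALSE, ESSAY }

    public enum Skill { KNOWLEDGE, COMPREHENSION, APPLICATION,
                        ANALYSIS, SYNTHESIS, EVALUATION }

    public enum Applicability { TESTS, QUIZZES, HOMEWORK }

    private final Type type;
    private final Map<Skill, Double> skillGrades = new EnumMap<>(Skill.class); // each in [0, 10]
    private final double difficulty;    // grade in [0, 10]
    private final int minutesToRespond; // 0 means "estimated time not applicable"
    private final Set<Applicability> applicability = EnumSet.noneOf(Applicability.class);

    public Question(Type type, double difficulty, int minutesToRespond,
                    Set<Applicability> applicability) {
        if (difficulty < 0 || difficulty > 10) {
            throw new IllegalArgumentException("difficulty must be in [0, 10]");
        }
        this.type = type;
        this.difficulty = difficulty;
        this.minutesToRespond = minutesToRespond;
        this.applicability.addAll(applicability);
    }

    public void setSkillGrade(Skill skill, double grade) {
        skillGrades.put(skill, grade); // one grade per skill criterion
    }

    // getters omitted for brevity
}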

People are Different, Questions Are Too

The other feature used by teachers in question selection is the profile of the students. Each student has a different aim when taking a test--some are satisfied simply by passing, others need a bit more challenge. Choosing appropriate questions for each student is a result of the close contact between teacher and class throughout the semester. The students' profile is an important part of the question-selection process because it identifies the current skill levels and the areas of need within the class.

How does one precisely identify a person's profile? The answer seems to be "there is no way," even in a limited context like managing questions in a classroom environment. The analysis of human behavior is far beyond the scope of this article, but it is worth thinking about. Here, we will focus only on the criteria that are useful for selecting questions. Note: the criteria suggested below may vary with the learning policy, which suggests the tool must offer a friendly way of configuring them. Despite the complexity of dealing with human beings, the use of student profiles is an important feature of any question-management system, and some interesting thoughts about it were raised during our earlier discussions.

One of the most exciting suggestions came from a professor: the automated extraction of the students' profiles through the analysis of their test results. Imagine a test composed of a set of questions previously classified with the criteria presented in the "The Art of Making Questions" section above, and imagine a student answering these questions. A mapping between the skill metrics of the questions and the quality of the student's answers can be used to establish the student's ability on each criterion. Recognizing a student's skills is an easy task for a teacher who knows the student, but may be tricky for a machine. Despite the difficulty of simulating such reasoning in a computing system, we will try to implement a smart algorithm able to identify the student's profile and to use this knowledge during question selection. How? Some answers may be found in AI techniques such as neural networks or user modeling (you can participate in our brainstorm through the Schoolbus Project developer's mailing list).
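As a first, deliberately naive approximation of that idea (well short of neural networks), one could average, per skill criterion, the scores a student earned on the questions exercising that criterion, weighted by how strongly each question exercises it. The Java sketch below is purely illustrative; AnsweredQuestion and its fields are hypothetical names, not Schoolbus APIs:

import java.util.EnumMap;
import java.util.List;
import java.util.Map;

// A naive, hypothetical profile extractor: for each skill criterion, take the
// average of the student's scores on questions that exercise that criterion,
// weighted by how strongly the question exercises it.
public class ProfileExtractor {

    public enum Skill { KNOWLEDGE, COMPREHENSION, APPLICATION,
                        ANALYSIS, SYNTHESIS, EVALUATION }

    /** A question the student answered: per-skill grades and the score earned, all in [0, 10]. */
    public record AnsweredQuestion(Map<Skill, Double> skillGrades, double score) {}

    public static Map<Skill, Double> extractProfile(List<AnsweredQuestion> answers) {
        Map<Skill, Double> weightedScore = new EnumMap<>(Skill.class);
        Map<Skill, Double> totalWeight = new EnumMap<>(Skill.class);
        for (AnsweredQuestion a : answers) {
            for (Map.Entry<Skill, Double> e : a.skillGrades().entrySet()) {
                double weight = e.getValue(); // how strongly the question exercises this skill
                weightedScore.merge(e.getKey(), weight * a.score(), Double::sum);
                totalWeight.merge(e.getKey(), weight, Double::sum);
            }
        }
        Map<Skill, Double> profile = new EnumMap<>(Skill.class);
        totalWeight.forEach((skill, w) ->
                profile.put(skill, w == 0 ? 0.0 : weightedScore.get(skill) / w));
        return profile; // per-criterion grades in [0, 10]
    }
}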

It is not hard to imagine how complex the design of such an intelligent module would be, and to avoid spending an inordinate amount of time on it, a simpler solution seems more appropriate for now. For the first Quaestio prototype, we suggest a manual way of identifying the student profile; i.e., a grade established by the teacher for each criterion of the skill metrics. A teacher should have a friendly GUI in which to type grades between 0 and 10 for every criterion established in the skill metrics. Note that the idea is not to pass judgment on the students, but just to identify their profiles. At first, this trivial solution seems an unfair strategy, but the choice is based on two premises:

  • The teachers know their students.
  • The teachers set the student profile in an attempt to provide an accurate representation of their students. The teachers know that the way they establish the profile of the students will impact the selection of questions.

Underneath the GUI used to enter this student profile data, the system should use XML to describe the student profiles, including their skills. Although a schema for such a model has not yet been devised, the representation of a student should be something like the example below:

<?xml version="1.0" encoding="UTF-8" ?>
<student id="327">
    <name>Diane dos Santos</name>
    <!-- A grade in the range [0, 10] for each criterion -->
    <skill>
        <knowledge>8.5</knowledge>
        <comprehension>10</comprehension>
        <application>7.3</application>
        <analysis>8.3</analysis>
        <synthesis>6.1</synthesis>
        <evaluation>6</evaluation>
    </skill>
</student>
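Reading such a document back into the system is straightforward with the standard JAXP DOM API. The fragment below is a minimal sketch under that assumption; the file name is hypothetical and error handling is elided:

import java.io.File;
import javax.xml.parsers.DocumentBuilderFactory;
import org.w3c.dom.Document;
import org.w3c.dom.Element;

// Minimal sketch: read a student profile document like the example above
// using the standard JAXP DOM API.
public class StudentProfileReader {
    public static void main(String[] args) throws Exception {
        Document doc = DocumentBuilderFactory.newInstance()
                .newDocumentBuilder()
                .parse(new File("student-327.xml")); // hypothetical file name
        Element student = doc.getDocumentElement();
        String id = student.getAttribute("id");
        String name = student.getElementsByTagName("name").item(0).getTextContent();
        Element skill = (Element) student.getElementsByTagName("skill").item(0);
        double knowledge = Double.parseDouble(
                skill.getElementsByTagName("knowledge").item(0).getTextContent());
        System.out.println(name + " (id " + id + "): knowledge = " + knowledge);
    }
}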

One might ask about the tiring task of filling in the profile for each student, and we should provide a way to avoid such uncomfortable situations for the teacher--remember, the idea is to make the lives of teachers easier, not to add more tasks to their day-to-day work. Two tricks can help to reduce usability problems in our first prototype: the use of a default value for all criteria, and the option to disable profiles during question selection. The first strategy assumes that all students have the same profile; i.e., the question selection won't be influenced by the profiles. The second and more elegant strategy gives teachers a choice about the use of profiles, which can have a default value anyway.

Some Policies Used By Teachers While Assembling Tests

Every teacher creates his or her own policy for putting together tests, and teachers are usually comfortable with and proud of these strategies. Despite the individual nature of these policies, one can detect many similarities between them. In this section, we will enumerate some common strategies used by teachers to select questions. Note to curious students: there are no secrets in the strategies below, just common sense about how to organize questions into a useful test.

  1. Less-used versus already-used: Some questions are naturally required by a subject; i.e., a teacher can't teach the subject without showing these questions to the students. Other questions are optional, and usually used less frequently during the semester. Teachers like to use the well-known questions to verify the class's attention to the subject, and the less-used questions to evaluate the students' reasoning in a new scenario. The system must maintain a counter for each question in order to detect when a student has been exposed to it. The teacher must be able to modify the question counter as well--what if the teacher uses that question several times in the classroom, and the students never download it from the Web? A negative value can indicate a question for which the counter is not applicable, and 0 can be used as the default initial value (see the sketch after this list).
  2. Book-based questionnaires: Some teachers prefer to use a book to guide their teaching, including the usage of the questions printed in the book to compose the tests. This behavior suggests a flag in the question data structure, indicating what book the question came from: a bibliographic reference, or just a Boolean flag.
  3. Balanced questionnaire: There is an equilibrium between the above strategies (i.e., some new questions and some already used). This is the most common strategy, because teachers usually test several criteria during a test.
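Below is a minimal sketch of the counter semantics described in item 1, with the conventions proposed there: 0 as the default initial value, a negative value meaning "not applicable," and manual adjustment by the teacher. The class name is illustrative only:

// Hypothetical sketch of the per-question usage counter from item 1 above:
// 0 is the default initial value, a negative value means "not applicable",
// and the teacher may adjust the counter by hand.
public class UsageCounter {
    public static final int NOT_APPLICABLE = -1;

    private int count = 0; // default initial value

    /** Called whenever a student is exposed to the question (e.g., a download). */
    public void increment() {
        if (isApplicable()) {
            count++;
        }
    }

    /** Manual correction by the teacher, e.g., for classroom use the system never saw. */
    public void set(int value) {
        count = value;
    }

    public boolean isApplicable() {
        return count >= 0;
    }

    public int value() {
        return count;
    }
}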

As shown above, there are several policies teachers use to select questions, and our system must be generic enough to support these rules. A primary idea is to provide the teacher with a rule editor, including a set of pre-defined rules--the common ones--through which teachers can configure how they want the system to behave. The intelligence encapsulated in the rules can be very complex, and the language used to represent them must be generic enough to support a comfortable rule editor. This editor is an interesting sub-module that we first envisioned with SQL syntax as the underlying language--a user-friendly interface that generates SQL expressions in an XML format. See "SQL - Part 14: XML-Related Specifications (SQL/XML)" for more information on this specification. Examples of expressions we expect the system to support:

$counter < 20 AND NOT $bibliographic
$author=Felipe? AND $counter>15
$group=FA7 AND $counter>3 AND $counter<20
$subject=looping OR $subject=decision AND $counter<10

The set of terms for the desired system is under open discussion and any suggestions would be welcome. Even the usage of SQLX is a preliminary idea, chosen because it seems to have momentum in the market.
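To make the direction concrete, the fragment below shows one naive way such a rule expression might be translated into a SQL WHERE clause: each $name placeholder is mapped to a column of a hypothetical question table. This is a sketch of the idea, not the actual rule editor; the column names are assumptions, and value quoting and operator validation are left out:

import java.util.Map;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

// Naive sketch: map $name placeholders in a rule expression to column names
// of a hypothetical "question" table, yielding a SQL WHERE clause.
public class RuleTranslator {
    private static final Map<String, String> COLUMNS = Map.of(
            "counter", "download_count",        // assumed column names
            "bibliographic", "is_bibliographic",
            "author", "author_name",
            "group", "group_name",
            "subject", "subject_name");

    public static String toWhereClause(String rule) {
        Matcher m = Pattern.compile("\\$(\\w+)").matcher(rule);
        StringBuilder sql = new StringBuilder();
        while (m.find()) {
            String column = COLUMNS.get(m.group(1));
            if (column == null) {
                throw new IllegalArgumentException("unknown term: $" + m.group(1));
            }
            m.appendReplacement(sql, column);
        }
        m.appendTail(sql);
        return "WHERE " + sql;
    }

    public static void main(String[] args) {
        // Prints: WHERE download_count < 20 AND NOT is_bibliographic
        System.out.println(toWhereClause("$counter < 20 AND NOT $bibliographic"));
    }
}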

Authorship Control

A natural issue when we think about content publishing is authorship. Each question stored in our system must be associated with details such as author, date of creation, and school. These details will provide a way to control the accessibility of the questions: who can access a question, who can modify it, etc. Now you are probably thinking about JAAS, and so are we--but before starting a discussion of the several frameworks useful for our project, let us define what we want to control.

Imagine an educational institution with hundreds of teachers and thousands of students--a common scenario--and imagine also a set of professors joining their efforts to maintain the question database. Every week, new questions are added to the system by different teachers. Quickly, the database becomes full of questions that represent the shared knowledge of the group, but each teacher has a unique point of view on the educational process and needs a way of filtering the question-selection process. A teacher must have a way of restricting the set of questions used while assembling a test, something that can be done through the idea of groups of which the teacher is a member.

Another useful concept is accessibility: the author of a question should have a way to define who can access it. To handle question permissions, we defined the criteria below, inspired by the concepts of object-oriented programming and the accessibility of a class and its members:

  1. Private questions: Used only by their creators. An example would be a question created by a teacher, one that he wants to use exclusively in final exams or in a specific kind of homework.
  2. Protected questions: Questions shared by a restricted group of academic members. The group's maintenance should be a use case of the system.
  3. Public questions: Questions accessible by anyone.
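A minimal sketch of how these three levels might be enforced, again mirroring the accessibility idea from object-oriented programming; the parameter names (teacher, groups) are hypothetical:

import java.util.Set;

// Hypothetical sketch of the three accessibility levels above,
// mirroring the private/protected/public idea from OO programming.
public class AccessControl {

    public enum Accessibility { PRIVATE, PROTECTED, PUBLIC }

    /** True if the given teacher may use the question while assembling a test. */
    public static boolean canAccess(Accessibility level, String author,
                                    Set<String> questionGroups,
                                    String teacher, Set<String> teacherGroups) {
        switch (level) {
            case PUBLIC:
                return true;
            case PROTECTED:
                // Shared inside a restricted group of academic members.
                return author.equals(teacher)
                        || questionGroups.stream().anyMatch(teacherGroups::contains);
            case PRIVATE:
            default:
                return author.equals(teacher);
        }
    }
}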

Guidelines About the Creation of a Question-Management System

From the above discussion, we can identify the main features teachers expect from a question-management system, summarized in the following list:

  • Type of question: The kind of question--multiple choice, true/false, or essay.
  • Skill metrics: The skills a question relates to: Knowledge, Comprehension, Application, Analysis, Synthesis, or Evaluation.
  • Level of difficulty: A grade in the range [0, 10] used to evaluate the complexity of each question.
  • Time-to-respond estimation: The number of minutes the teacher estimates students need to answer each question.
  • Applicability: The conditions under which the student will answer the question; in other words, the question-submission style: tests, quizzes, or homework.
  • Student profile: The profile of the students, an important input for selecting questions during test assembly.
  • Question-selection policies: The system must obey the policies defined by the teachers during question selection. A rule editor is suggested as a way to maintain these policies.
  • Authorship control: The system must keep track of the authorship and ownership of questions and be able to control access.

All of these features must be grouped in a model described with XML. Though we have not yet discussed a schema for such a model, the representation of a question should look something like the example below:

<?xml version="1.0" encoding="UTF-8" ?>
<question id="123">
    <!-- 1:Essay, 2:True-false, ... -->
    <type>1</type>
    <discipline id="113">CS II: Intermediate Programming</discipline>
    <subject id="51">Sorting</subject>
    <author id="35">Gustavo Kuerten</author>
    <institution id="2">FA7</institution>
    <group id="10">Computing</group>
    <!-- This question came from a book -->
    <book id="239"/>
    <downloads>322</downloads>
    <creation>03/12/2003</creation>
    <!-- 1:Private, 2:Protected, 3:Public, ... -->
    <accessibility>2</accessibility>
    <difficulty>6</difficulty>
    <!-- The estimated time to respond, in minutes -->
    <time>20</time>
    <!-- A grade in the range [0, 10] for each criterion -->
    <skill>
        <knowledge>8.5</knowledge>
        <comprehension>10</comprehension>
        <application>5.3</application>
        <analysis>8.3</analysis>
        <synthesis>2.7</synthesis>
        <evaluation>6</evaluation>
    </skill>
    <!-- The set of applicable formats could be classified
         by precedence instead of one-by-one specification
         (under discussion) -->
    <applicability>
        <tests>yes</tests>
        <quizzes>no</quizzes>
        <homeworks>no</homeworks>
    </applicability>
    <description>
        <paragraph>
            Suppose we are comparing implementations of insertion sort
            and merge sort on the same machine. For inputs of size n,
            insertion sort runs in 8n² steps, while merge sort runs in
            64n lg n steps. Fill the table below with the values of n.
            Does insertion sort beat merge sort?
        </paragraph>
        <!-- A question can have figures -->
        <figure id="987"/>
        <paragraph>
            How might one rewrite the merge sort pseudocode to make it
            even faster on small inputs?
        </paragraph>
    </description>
    <!-- Answer not available -->
    <answer/>
</question>

This example must be considered a first sketch of our needs; i.e., the structure that naturally suggested itself when we started our discussions. In fact, this model was sketched on a napkin at the university cafeteria when some of the first Schoolbus members were brainstorming ideas for a question-selection tool. This model should evolve into a more complete one during the analysis phase, the next step of our project.
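Putting the pieces together, a first selection pass over such a model could filter by applicability and the usage counter, and then prefer questions whose difficulty is closest to a target derived from the student profile. The sketch below is a deliberately simple illustration under the assumptions of this article; every name in it is hypothetical:

import java.util.Comparator;
import java.util.List;

// Deliberately naive sketch of a first selection pass: filter by applicability
// and usage counter, then rank by how close the difficulty is to a target
// grade derived from the student profile. All names are hypothetical.
public class QuestionSelector {

    public record Candidate(int id, double difficulty, int counter, boolean forTests) {}

    public static List<Candidate> selectForTest(List<Candidate> pool,
                                                double targetDifficulty,
                                                int maxCounter,
                                                int howMany) {
        return pool.stream()
                .filter(Candidate::forTests)                                // applicability: tests
                .filter(q -> q.counter() >= 0 && q.counter() <= maxCounter) // less-used policy
                .sorted(Comparator.comparingDouble(
                        (Candidate q) -> Math.abs(q.difficulty() - targetDifficulty)))
                .limit(howMany)
                .toList();
    }
}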

Summary

This article proposes the development of a new tool for teachers: a question-management system named Quaestio. The system is based on the intuition teachers use to assemble tests and assignments, and it is supposed to be smart enough to minimize the effort teachers spend during examination periods. Some requirements were discussed and a minimal model of question representation was suggested. Other good ideas, such as the use of artificial intelligence for the automatic detection of student profiles, were also debated. The next step of our project is to produce the final draft of the question model and to define the set of use cases that will be implemented in the first version of Quaestio.

The technical discussions about the project and the information needed to trigger the development phase will be covered in the next installment. The discussion is underway, and you are invited to join Schoolbus' mission: to provide easy-to-use tools to help good teachers.

Felipe Gaucho works as senior software engineer at Netcetera AG in Switzerland. He is a well known Brazilian JUG leader and open-source evangelist.