Paper presented at IAEA Conference, Slovenia, May 1999.
Curriculum Demands and Question Difficulty
Ayesha Ahmed and Alastair Pollitt
In order to improve the quality of examination questions and ensure that they make the intended demands on students, we need to understand the psychological processes involved in answering questions. We have developed a psychological model of the question answering process and a scale on which demands can be measured. We have also carried out empirical work in five subjects in order to identify factors that affect question difficulty. This paper is an attempt to integrate the model, the scale of demands and the work on question difficulty, in order to give a fuller and more coherent picture of demands and difficulties in examination questions.
Model of Question Answering
The model of the psychological processes involved in answering examination questions consists of six stages. The first stage, Stage Zero, refers to the student's understanding of the subject knowledge, that is, what the student has learned before the examination: for example, the student's understanding of Geography, or of History, or of more specific topic areas within subjects.

Stage One is when the student forms a mental model of the question, by reading the question and forming a representation of it. This representation could be an understanding of aspects of the question, and it could also consist of a misunderstanding of aspects of it. Stage Two is the process of searching the subject knowledge for relevant concepts to use to answer the question. Stage Three consists of an attempt to match the representation of the question with the aspects of subject knowledge that have been selected during the search process. Stages Two and Three, searching and matching, may be simultaneous in some cases. In Stage Four the student generates an answer, and in Stage Five the student writes an answer. Again, in some cases these last two stages are simultaneous. All the stages happen in quick succession, often overlapping one another and sometimes subsuming one another.
Sources of Difficulty and Easiness
This model was developed after empirical work on the Sources of Difficulty and Sources of Easiness (SODs and SOEs) in examination questions, based originally on work by Pollitt et al. (1985). The evidence from these studies tells us which aspects of questions affect difficulty, and these SODs and SOEs can have an effect on difficulty at various stages in the process of question answering outlined above.
We identified Sources of Difficulty and Sources of Easiness through empirical work in five subjects. Scripts from examinations in Mathematics, Geography, Science, English Language and French taken at age 16 were analysed statistically in order to identify the most difficult questions. The students' answers to these questions were then analysed with the aim of discovering the most common errors made when answering these questions. The common errors were used to help us to identify what was causing students to lose marks on the questions, and from these errors we were able to hypothesise that there were certain Sources of Difficulty and Easiness in the questions.

In order to discover whether the SODs and SOEs did have an effect on performance, we manipulated various aspects of the questions and trialled these manipulations in schools. The manipulations were based on the hypothesised SODs and SOEs in the original questions. SODs and SOEs were systematically added to or removed from the new trial questions so that we could measure their effect on the performance of students. A list of SODs and SOEs that affected question difficulty was produced for each subject, and from this we put together a list of general SODs and SOEs that applied to all five subjects and that can be applied to almost all subjects.

SODs and SOEs can be either valid or invalid. A valid SOD or SOE is one that the question setter intended to put in the question. An invalid SOD or SOE is one not intended by the question setter. It is likely that the question setter will not be aware that this SOD or SOE exists in the question, and it will therefore pose a threat to construct validity. The students answering the question may answer in a way unanticipated by the setter and therefore unrepresented in the mark scheme.
Students may misunderstand the question and therefore not be able to show the marker what they know about the topic. The cognitive processes that the setter intends the question to provoke may never occur in students' minds.

The SODs and SOEs have an effect at different stages in the question answering process. Some can affect the cognitive processes in students' minds at more than one stage. The SODs and SOEs are listed below, grouped into the stages in which they might have an effect:
Stage 0 – Knowledge of the subject 
Concept difficulty - abstractness or unfamiliarity of concepts
Stage 1 – Understanding the question
Distractors - in wording of question
Context
Highlighting - of words or phrases
Density of presentation
Technical terms
Everyday language
Inferences necessary
Command words
Stage 2 – Searching subject knowledge
Sequence of questions
Combination of topics
Resources
Stage 3 – Matching representation of question and subject knowledge
Stage 4 – Generating an answer 
Density of presentation
Inferences
Mark allocation
Response prompts - for content of answer
Command words
Stage 5 – Writing an answer 
Response prompts - for organisation and structure of answer
Paper layout
Own words - transform text into your own words
Whole resource processing - writing a summary of a text
Scale of Cognitive Demands
In parallel to discovering how and where SODs and SOEs affect performance in examinations, we also investigated the issue of what makes questions demanding. We had identified various factors that caused difficulty, but found that questions that are difficult can make demands on students in different ways. In an attempt to describe the different ways that demands are made on students, we developed a scale of cognitive demands which had four dimensions: Complexity, Abstraction, Resources and Strategy. The SODs and SOEs affect students at various stages in the question answering process, and the level of demands on each of the four dimensions in the scale reflects how many SODs and SOEs occur in a question and how likely they are to affect the process of question answering.

The scale of cognitive demands was developed from a scale devised by Edwards and Dall'Alba (1981). The original scale used by Edwards and Dall'Alba was designed for rating the demands in science lessons. We adapted this scale for use in rating the demands of examination questions in all subjects (see Pollitt, 1998). The four dimensions in our final scale, Complexity, Abstraction, Resources and Strategy, were used successfully by examiners in Geography, History and Chemistry to rate the demands of questions. These ratings are described in detail in Hughes et al. (1998).
The complexity of a question is best described by the number of operations that have to be carried out, or the number of ideas that have to be brought together, and also the nature of the relationships or links between them. A question that has low complexity would involve simple operations or ideas and no need to link them together. There would also be no real demand for comprehension of concepts, just of the question itself. A question with a high level of complexity would involve combining operations and ideas, linking them and evaluating them, with a need for the comprehension of technical concepts. A complex question can be an easy question, but it cannot be a simple question.

The next dimension refers to the resources provided with the question, such as a text, diagram, table, picture, graph or photograph. It also refers to the internal resources that students bring to a question, that is, their mental representation of their subject knowledge. Questions with a low demand in terms of resources will direct students as to how to use a printed resource, and which parts of it to use. The resource will contain all the information students need to answer the question, so they will not have to make inferences from it. It will also contain only relevant information, so that nothing has to be filtered out or inhibited by the student. A question with a high demand in terms of resources will require students to generate the information needed to answer the question from their internal resources, or to select, interpret and make inferences from any resources provided.
Abstraction refers to the extent to which the student has to deal with ideas rather than with concrete objects or events to answer the question. Students' understanding of theoretical issues is tested, rather than their knowledge of specific examples. A question with low demands on this scale will involve working with concrete concepts. It may focus, for example, on events in a History question or in an English Literature question, or on specific places in a Geography question, or specific experiments in a Science question. A question with high demands on this scale will involve highly abstract concepts and will also require a grasp of the technical terms referring to these more abstract concepts.
The strategy dimension refers to how students go about answering the question. It refers to how students devise a method for answering, and how they maintain their answering strategy, monitoring it as they go along. They may have to devise a new strategy to answer the question or they may select a strategy they have used before. The strategy is applied to the process of tackling the question and to the process of communicating an answer. Some examiners found it useful to separate strategy into two dimensions: task strategy and response strategy. The task strategy is the one which the student applies to the process of tackling the question, and the response strategy is applied to organising and communicating an answer.

A question with low demands in terms of task strategy would provide a strategy so that students do not need to devise or monitor one themselves. High demands for a task strategy would arise when students are not given any support with how to tackle the question. This is common in unstructured questions, in which no prompts are given and students have to form their own strategy for how to approach an open-ended task. Low demands for a response strategy would occur when students do not have to organise their own response, but are given prompts on how they should structure an answer. High demands for a response strategy would occur when students have to select appropriate