Focusing on the increased use of computers, we designed a 2 (test format: paper vs. computer) x 2 (question type: multiple-choice vs. short answer) experiment to test whether test format affected reading comprehension, with format manipulated between subjects and question type within subjects. We hypothesized that participants would perform better on the multiple-choice questions and that those completing the task on paper would perform better. Sixty-one female Mount Holyoke College students were randomly selected as participants. Each participant was randomly assigned to a format condition (paper or computer) and was given a reading passage and a comprehension test consisting of multiple-choice and short-answer questions. The results supported our hypothesis that participants would perform better on multiple-choice than on short-answer questions. There was no difference between completing the task on paper and on the computer, and no interaction was present between test format and question type. Applications of these findings include the enhancement of learning and teaching methods.
 
 

Introduction

Computers have become a necessity and now surround us at work, at home, and at school. The overwhelming presence of computers in homes complements the influx of computer training and use in educational institutions. Educators must take advantage of computers and other new technology to engage students and enhance the learning experience; however, they need to know the most effective ways to use multimedia technology in the classroom. Some tests, such as the GRE, are no longer administered in the traditional paper-and-pencil format. Other widely used standardized tests are offered in both paper and computer formats, allowing test takers to choose either one.

The widespread use of computers in classrooms and learning sparked research comparing test performance across formats of test administration (Webster & Compeau, 1996; Lee, Vispoel, Boo, & Bleiler, 2001). Most recent research on the topic indicates that test format (paper or computer) has no effect on test performance, and this finding appears consistent across different types of tests and subjects. Vispoel et al. (2001) examined the effects of computerized and paper-and-pencil versions of the Rosenberg Self-Esteem Scale (SES) with student participants. Webster and Compeau (1996) used 95 company employees in a field experiment to assess differences between computerized and paper-and-pencil versions of a computer-training questionnaire. Both studies indicated that test format had little effect on test performance.

Another aspect that might affect test performance is question type, such as multiple-choice versus short-answer questions. Wolff and Wogalter (1998) examined how performance changes with the type of question asked by investigating the comprehension of pictorial symbols; participants showed greater comprehension when given multiple-choice questions than short-answer questions. This motivates the goal of our experiment: to explore these differences across test formats and question types. Using reading comprehension as the dependent measure, the present study combines test format (computer or paper-and-pencil) and question type (multiple-choice or short-answer questions). We hypothesized that participants completing the task on paper would perform better than those completing it on the computer, and that performance on multiple-choice questions would be better than on short-answer questions. For the interaction, we predicted that participants answering multiple-choice questions on paper would perform better than those answering short-answer questions on paper or either question type on the computer.
 

Method

Participants

Materials

Procedure
Results

To analyze our data, we ran a 2 x 2 ANOVA with repeated measures on question type. Question type had a significant effect on the ability to answer questions correctly: participants performed significantly better on multiple-choice questions than on short-answer questions. We found no significant main effect of test format (computer vs. paper), and there was no significant interaction between format and question type.
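As an illustration of this kind of analysis, the sketch below runs a 2 x 2 mixed-design ANOVA (question type within subjects, test format between subjects) in Python using the pingouin library. The data frame, column names (subject, format, question_type, score), and simulated scores are hypothetical placeholders, not our actual data or the software used in this study.

# Hypothetical sketch of a 2 x 2 mixed-design ANOVA (question type within
# subjects, test format between subjects) using simulated data.
import numpy as np
import pandas as pd
import pingouin as pg

rng = np.random.default_rng(0)
n_subjects = 61
# Long-format data: one row per subject per question type.
subject = np.repeat(np.arange(n_subjects), 2)
fmt = np.repeat(rng.choice(["paper", "computer"], size=n_subjects), 2)  # between-subjects factor
qtype = np.tile(["multiple_choice", "short_answer"], n_subjects)        # within-subjects factor
score = rng.normal(loc=70, scale=10, size=2 * n_subjects)               # simulated comprehension scores

df = pd.DataFrame({"subject": subject, "format": fmt,
                   "question_type": qtype, "score": score})

# Mixed ANOVA: tests both main effects and the format x question type interaction.
aov = pg.mixed_anova(data=df, dv="score", within="question_type",
                     subject="subject", between="format")
print(aov)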

Discussion

Our hypotheses were partially supported. The prediction that reading comprehension would be greater on paper-and-pencil tests was not supported. However, the prediction that performance on multiple-choice questions would be better than on short-answer questions was supported.

Our experiment has general implications within the educational realm. Teachers now have evidence that question type (multiple-choice vs. short answer) affects students' test scores, particularly when students are tested on short-term recall of unfamiliar subject matter. It is also useful for teachers and other educators to know that administering a test on computer rather than on paper does not significantly affect short-term comprehension of short passages.

The only problem we ran into was having to share our experimental space with other groups. At one point during the study, another group was very noisy, which most likely affected the comprehension of the subjects being tested at the time. Fortunately, subjects in both conditions experienced this disruption, so it did not act as a confound. For further research, we suggest also examining the variable of time, to see whether time to complete the task interacts with format (computer vs. paper). Another interesting addition would be a true/false level of the question-type variable, as well as an examination of the effect of including an "all of the above" or "none of the above" option in the multiple-choice questions.