Ian et al.:
This note is to second Ross Koning's thoughtful reply about assessment,
particularly his emphasis on using a variety of devices for assessing
student learning, and to add a few suggestions.
First, students respond very positively to having some role in determining
how the assessment takes place. What you can do may be limited by the size
of the class, etc., but students generally appreciate having a say in how
many tests, quizzes, lab reports, term papers, etc. they have; when they
are scheduled; how much each should count for; and so on. At the
beginning of each term, I ask students to think about these questions;
depending on the course and its size, I set some limits on the number
of tests, term papers, and the like.
They also take it well when you _seriously_ ask for and listen to their
feedback on how methods of assessment or instruction are working.
This has to be genuine-- they can spot a phony effort before it's
even finished, but a genuine request for their opinion (and action
on it, too) goes a long way. After all, don't we want to be
assessed, too?
Second, I'm going to make a plug for a different kind of
multiple-choice question, the kind that does not merely test recall (I
usually throw a few of those on a test, too, in part because students
usually are comfortable finding some there, and recall of content is part
of what we're aiming at) but tests the ability to analyze problems.
During the term, I have students work on multiple-choice problems (on
problem sets or in-class tests) that present an experimental design and
its results, and then ask the student to interpret the results and
select one or more answers from a list.
The _simplest_ example of one of these would be to give a picture (photo or
diagram) of a DNA agarose gel, and have the student select the correct
restriction map out of five possibilities. They can get a lot more
creative than that.
These problems tend to be quite hard. Their main pitfall is that a student
can understand, say, 85% of the material necessary to answer the question
but still get it wrong-- and do so on a bunch of questions, thereby getting
a low score that poorly reflects his or her understanding. Consequently, I
use these sparingly on tests (where there is time pressure) and more on
take-home quizzes/problem sets, where there is more time and each question
is worth fewer points.
Third, I explain to the students that the tests are designed to test both
recall and understanding, so they know there will be different kinds of
questions. Also, quite importantly, I give lots and lots of "practice"
problems of the latter type (e.g. predict the results of an experiment, or
interpret results) so that the general idea is familiar to them before
tests and they are comfortable with the mix of formats.
Fourth, my courses emphasize (more than most) the "how" of science; that is
the reason for all the work predicting and interpreting experimental
results. We also discuss experimental methods, design, and results in
class more than most courses. But I also emphasize that science gets done
by people working together, and have several methods that aim to build the
skills of teamwork. There's a whole bunch of literature on collaborative
learning that I won't summarize, but I would emphasize the importance of
(a) mutual interdependence, (b) individual accountability, and (c)
structured methods as keys to teaching this way. We start (the first
day) working on
this, and students learn some skills of working together. They use these
skills through the quarter (e.g. working on specific problems), and have to
apply them to lab reports, which are authored by the teams that conducted
the exercises in lab (each set of exercises takes a month or more to
conduct). The reports are graded by me but also include peer scores:
each team member has a fixed number of points to allocate among the
members of the team according to his or her assessment of each member's
contribution to the report.
Fifth, I frequently include options: "from the following, select two"
types of questions. I think it's more psychological than anything else,
but having a bit of choice takes some of the pressure off students.
Finally, I try to make each test include at least some new material,
so that the students learn something new by taking the test-- say, some
material that connects ideas from class. One example would be a
question presenting experimental results on the distribution and
function of C3 and C4 plants at different altitudes (etc.) and asking
the students to interpret the results-- a question that draws on the
information they already have about water-loss efficiency, etc.
Above all, students want to feel they are treated with respect and
fairness. If you operate on that as a gut instinct, you will certainly
make mistakes and certainly will learn from them. If not, you will just
make mistakes.
'Nuff said. Time to go give an exam. All of these methods have
drawbacks, and all are to some degree stolen from others.
Best wishes,
Chris Cole
Christopher T. Cole
Associate Professor of Biology
Division of Science and Mathematics
University of Minnesota-Morris
Morris, MN 56267
colect at caa.mrs.umn.edu
(320) 589-6319