Newsletter #37     Jerry Gilmore's Educational Assessment Brown Bag, plus a CCML tidbit

I have been trying for two days to come up with a succinct synopsis of Thursday's Brown Bag, and have come to the conclusion that there is no such animal. What it boiled down to was a most excellent conversation, whose focus shifted a bit from time to time, but whose content was at all times interesting and worthwhile. So I shall give up on doing it justice and just try to bring out a few of its particularly notable foci.

For a start, Jerry Gilmore brought in from his Office of Educational Assessment files a bunch of graphs of student course evaluations at the 100 level, with the student ratings charted against a variety of other factors, such as perceived workload and expected course grade. Predictably enough, mathematics shared the cellar consistently with physics and sometimes engineering. This led to a discussion of the reasons behind that, how much of it is inevitable, and to what extent we need nonetheless to take the information seriously. We spent a certain amount of time brooding over the fact that we are expected to hand students over to other departments with as much knowledge as students had a decade ago, while the students coming to us seem to many people to be appreciably less able than they were back then (though Judith entered a caveat by pointing out that at least once, when she compared old and new exams, her initial shock over the computations her previous students could have done and present ones couldn't gave way to the realization that in the later exam she was making far higher demands in terms of multi-step setting up). Basically, though, we are teaching tough courses to large numbers of people, many of whom are taking the course by constraint rather than choice, which makes the outlook for sterling teaching ratings a bit bleak.

We also touched on the "How much did you learn in this course?" item on the evaluation forms, which has the unfortunate combination of really important content and a built-in monkey wrench for 120 and 124, and even more so for the remedial courses, because it is asking a lot of a student who feels that s/he was spozed to know the content from high school to admit that s/he was doing something more than just brushing up on already existing skills in the university course.

So with all these strikes against the evaluation forms, and with teaching as important as it is, what in fact should we be doing about assessing it? Collegial evaluations have a definite function, but some equally clear drawbacks and weaknesses (such as, for instance, the impact of collegiality itself). There are places where designated committees turn up unannounced in people's classrooms and make decisions with major impact on the person's career on the basis of the ensuing hour's class. Nobody seemed overwhelmingly enthusiastic about that one. Jerry himself is a strong proponent of teaching portfolios, with a statement by the faculty member of his or her teaching philosophy and teaching plans. He commented that decisions about the research aspect of a promotion case are made not on the basis of a glance at certain specific publications, but rather of a study of the person's overall program--both its value and the degree to which it is being carried out. It would seem reasonable for teaching to have some equivalent process. A snag to that is that in the lower-level mathematics courses most people are equipped to teach a large percentage of the courses, and so tend to vary from year to year in what they teach. That one I wish we could have pursued a little further, because I am not convinced that that is in conflict with what Jerry meant by a teaching program. Or then again, perhaps it is.

One reason that we didn't pursue it is that another interesting issue arose, to wit, that ideally an evaluation should have some influence on how one teaches, and it is hard to glean much applicable information from the results of the bubble sheets. One solution to that is the SGID process done by the Office of Assessment (of which I remember that the SG stands for Small Group, and I know that someone from the office comes in and discusses the course with the students in mid-quarter). On a much smaller scale are the mini-essays that people sometimes use (Judith often does) in which students are given a minute or two at the end of class to respond to some question like "What do you think was the main mathematical point today?" or "What from today's class would you like to have clarified tomorrow?" It does give much more of an ongoing view of the mental state of the class, not to mention that by responding explicitly to the questions thereby raised the professor makes it clear that student questions and difficulties do have immediate importance.

I don't think I have done the conversation justice, but at least that will give you some impression of its range. I will therefore bounce swiftly and briefly to one other topic of interest: Creating a Community of Mathematics Learners. For one thing, we have had a set of workshops (three identical workshops, so as to offer three choices of date for the participants) on community building, probability misconceptions, and preparing to teach probability in the classroom. They were fun and well received. For another, Michael Keynes, who is a CCML RA, has produced for us a really excellent home page, highly user-friendly and with all sorts of nifty links outward. I recommend it highly as a browsing spot, especially if you want to keep up as the project keeps developing. Its address is http://www.math.washington.edu/~ccml/
