In carrying out action research to improve teaching and learning, an important role of the researcher/instructor is to collect data and evidence about the teaching process and student learning. What follows is an introduction to some of the techniques which can be used for this purpose.
- Student Assessment
- Closed-Ended Questionnaires
- Supporting Documents
- Interaction Schedules
- Learning Inventories
- Open-Ended Questionnaires
- Diagnosis of Student Conceptions
- Tape Recording
Tests, examinations and continuous assessment can provide valuable data for action research. For your teaching course, you have to set up a method of student assessment and your students have to be assessed, so you might as well make use of it in your project.
You should, however, be clear about the nature of the information you can obtain from examination results or assessment grades. Comparison of one set of results with another often has limited validity, as assignments, examinations, markers and marking schemes are rarely held constant. In addition, most assessment is norm-referenced rather than criterion-referenced. (Linked with permission from CRESST, UCLA, USA.)
You also need to be very clear as to what is being assessed. Examination grades may bear little relationship to specific qualities you could be investigating. For example, if the theme of an action research project is encouraging meaningful learning, then the examination results would only be of value if they truly reflect meaningful learning. They would be of little value if they consisted of problems which could be solved by substituting numbers into a remembered formula, or essays which required the reproduction of sections from lecture notes. So think carefully about the qualities which you wish to test and whether the assessment is a true test of those qualities.
One way in which answers to assessment questions can be analysed for project purposes is by dividing them into qualitative categories. A systematic procedure for establishing categories is the SOLO taxonomy (Biggs and Collis, 1982). The SOLO taxonomy divides answers to written assessment questions into five categories, judged according to the level of learning: prestructural, unistructural, multistructural, relational and extended abstract. The five levels correspond to answers ranging from the incorrect or irrelevant, through use of appropriate data, to integration of data in an appropriate way, and ending in innovative extensions.
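As a sketch of how such a categorisation might be summarised for a project report, the fragment below tallies SOLO levels assigned by a marker to a set of answers. The answer data are invented for illustration; in practice the levels come from marking actual scripts against the taxonomy.

```python
from collections import Counter

# The five SOLO levels in ascending order of learning quality
# (Biggs and Collis, 1982).
SOLO_LEVELS = ["prestructural", "unistructural", "multistructural",
               "relational", "extended abstract"]

# Hypothetical levels assigned by a marker to eight student answers.
marked_answers = ["unistructural", "multistructural", "multistructural",
                  "relational", "prestructural", "relational",
                  "extended abstract", "multistructural"]

counts = Counter(marked_answers)
total = len(marked_answers)
for level in SOLO_LEVELS:
    print(f"{level:>17}: {counts[level]} ({counts[level] / total:.0%})")
```

Comparing such distributions before and after an innovation, or between two groups, gives a qualitative picture that raw grades cannot.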
For more about student assessment, please visit Assessment for Learning.
Biggs, J.B. & Collis, K.F. (1982). Evaluating the quality of learning: The SOLO taxonomy (Structure of the Observed Learning Outcome). New York: Academic Press.
Closed questionnaires constrain responses to a limited number of options chosen by the researcher; essentially a multiple-choice format. Usually respondents are asked the extent to which they agree or disagree with a given statement. Responses are recorded on a Likert scale, such as the one in the example below, which ranges from 'definitely agree' to 'definitely disagree'.
Questions should be carefully constructed so the meaning is clear and unambiguous. It is a good idea to trial the questionnaire on a limited number of students before giving it to a whole group.
Closed questionnaires are easy to process and evaluate and can give clear answers to specific questions. However, the questions are defined by the researcher, so could completely miss the concerns of the respondents. You might therefore draw up the questions after a few exploratory interviews, or include some open-ended questions to give respondents a chance to raise other issues of concern.
A section of a typical closed questionnaire used for course evaluation is shown below.
Most institutions now have some form of standard teaching evaluation questionnaire available. These may be of some help in evaluating a project but in most cases the questions will not be sufficiently specific to the particular type of innovation which has been introduced. What might be more helpful are the data banks of optional or additional questions which are available. These can be used to pick or suggest questions which might be included in a more tailor-made questionnaire.
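Likert-scale responses of this kind are straightforward to summarise numerically. The sketch below uses an invented five-point coding and invented responses to one item; the cut-off for "agreeing" is likewise an illustrative choice.

```python
# Hypothetical responses to one closed item, coded on a five-point
# Likert scale: 5 = definitely agree ... 1 = definitely disagree.
responses = [5, 4, 4, 3, 5, 2, 4, 4, 3, 5, 1, 4]

mean = sum(responses) / len(responses)
agreeing = sum(1 for r in responses if r >= 4)   # agree or definitely agree
print(f"n = {len(responses)}, mean = {mean:.2f}, "
      f"agreeing = {agreeing / len(responses):.0%}")
```

Reporting the full distribution alongside the mean is usually more informative, since two very different response patterns can share the same average.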
Traditionally, questionnaire survey data are collected using paper questionnaires and answer sheets. With the availability of web technology, there is now the option of collecting survey data online.
To collect data using paper questionnaires, special answer sheets called OMR forms are often used. Respondents mark their answers to the questionnaire on the OMR forms, and an optical mark scanner then reads the marks, producing an electronic data file containing the responses. The data file can then be analysed using software such as MS Excel or SPSS. At HKUST, both the optical mark scanner and OMR forms are available from ITSC.
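The data file produced by the scanner is typically a simple table of one row per respondent. The exact format varies by scanner, so the layout below is a hypothetical example; the same tallying could equally be done in Excel or SPSS.

```python
import csv
import io
from collections import Counter

# A hypothetical data file of the kind produced by an optical mark
# scanner: one row per respondent, one column per question.
scanner_output = io.StringIO(
    "student,Q1,Q2\n"
    "s01,4,2\n"
    "s02,5,3\n"
    "s03,4,4\n"
)

rows = list(csv.DictReader(scanner_output))
for question in ("Q1", "Q2"):
    tally = Counter(row[question] for row in rows)
    print(question, dict(tally))
```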
At HKUST, instructors can use a specially designed web-based system called OSTEI to collect questionnaire survey data online. The system allows users to create online questionnaires through a graphical interface, without any programming. Once the questionnaire is created, the instructor can ask students to visit the OSTEI website to complete it online. The data are stored in the OSTEI database, and simple reports can be generated at the click of a button. To use OSTEI, please contact Mr. Tak S. Ha of CELT (firstname.lastname@example.org, Ext 6812).
Everyone involved in an action learning project should keep a diary or journal in which they record:
- their initial reflections on the topic of concern
- the plans that were made
- a record of actions which were taken
- observation of the effects of the actions
- impressions and personal opinions about the actions taken and reactions to them
- results obtained from other observation techniques
- references for, and notes on, any relevant literature or supporting documents which are discovered.
Research reports are often very impersonal documents but this should not be the case for an action learning journal - quite the contrary! It should contain a record of both what you did and what you thought. In it you should regularly and systematically reflect critically on the effects of your project and how it is progressing.
Journals act as the starting points for critical reflection at the regular meetings of the project team. By sharing observations and reflections it is possible to fine-tune the innovation. Sympathetic but critical discussion can also heighten awareness and contribute to changing perspectives.
Keep copies of any documents which are relevant to the course(s) you are examining. These can include:
- documents for the course development and accreditation process
- minutes of course committees
- the course syllabus
- memos between course team leaders and members
- handouts to students
- copies of tests and examinations
- lists of test results and student grades.
Interaction schedules are methods for analysing and recording what takes place during a class. A common approach is to note down at regular intervals (say every minute) who is talking, and to categorise what they are saying or doing. An alternative to time sampling is event sampling, in which behaviour is noted every time a particular event occurs. Examples of categories could be: tutor asking question, tutor giving explanation, tutor giving instruction, student answering question or student asking question. The analysis can be made by an observer at the class or subsequently from a tape or video recording.
Below are profiles which compare the interactions during two tutorials. An observer noted, at one minute intervals, who was talking and the type of communication. The plots can be used to compare the extent to which the tutor dominated the session and the students contributed. The example is adapted from Williams and Gillard (1986).
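A simple numerical summary of such a schedule is the proportion of sampled minutes dominated by the tutor. The sketch below uses an invented one-minute log coded with categories like those mentioned above.

```python
from collections import Counter

# Hypothetical one-minute time samples from a single tutorial,
# coded by an observer.
log = ["tutor explanation", "tutor question", "student answer",
       "tutor explanation", "student question", "tutor explanation",
       "student answer", "tutor instruction", "tutor explanation",
       "student answer"]

counts = Counter(log)
tutor_minutes = sum(n for code, n in counts.items()
                    if code.startswith("tutor"))
print(f"tutor talk: {tutor_minutes}/{len(log)} minutes "
      f"({tutor_minutes / len(log):.0%})")
```

Computing the same proportion for two tutorials gives a crude but direct comparison of how far each session was tutor-dominated.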
There are other approaches to recording and analysing happenings in a classroom situation. McKernan (1991) discusses an extensive range of techniques, gives examples of each and considers how the data gathered should be analysed.
Interviews can provide even more opportunity for respondents to raise their own issues and concerns, but are correspondingly more time-consuming and can raise difficulties in the collation and interpretation of information. The format can be on a spectrum from completely open discussion to tightly structured questions. Semi-structured interviews have a small schedule of questions to point the interviewee towards an area of interest to the researcher, but then allow interviewees to raise any items they like within the general topic area. Since interviews give an opportunity for students to raise their own agenda they are useful when issues are open, or at an exploratory stage. A small number of interviews can be useful to define issues for subsequent more tightly structured questionnaires.
Interviews are normally tape recorded. If analysis, rather than just impressions, is required, then transcripts have to be produced. The transcripts are normally analysed by searching for responses or themes which commonly occur. Quotations from the transcripts can be used to illuminate or illustrate findings in reports and papers.
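A first pass at theme-searching can be done mechanically, by counting how many transcripts mention each candidate theme. The transcripts, theme names and keywords below are all invented for illustration; real coding requires reading each transcript and refining the categories by hand.

```python
import re
from collections import Counter

# Hypothetical interview excerpts (in practice, full typed transcripts).
transcripts = [
    "The workload was heavy, but the group work helped me understand.",
    "I liked the group work; the workload got heavy near the exams.",
    "Feedback on assignments came too late to be useful.",
]

# Candidate themes and invented keywords signalling each of them.
themes = {
    "workload": ["workload", "heavy"],
    "group work": ["group work"],
    "feedback": ["feedback"],
}

theme_counts = Counter()
for text in transcripts:
    for theme, keywords in themes.items():
        if any(re.search(kw, text, re.IGNORECASE) for kw in keywords):
            theme_counts[theme] += 1
print(dict(theme_counts))
```

Keyword matching only locates candidate passages; deciding what a passage actually means remains the researcher's judgement.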
There are computer programmes available to assist with the analysis of qualitative data. One example is NUDIST, which has facilities for indexing, text-searching, using Boolean operations on defined index nodes, and combining data from several initially independent studies.
Links to interview techniques:
Interviewing and Data-Gathering Techniques by Prof Vicki Sauter, College of Business Administration, University of Missouri-St Louis, USA (http://www.umsl.edu/~sauter/analysis/interview/interview2.html)
Student learning inventories are examples of empirically derived measuring instruments. A large number of inventories exist which purport to measure a wide range of characteristics. Student learning inventories have been highlighted because they examine the quality of learning; in particular they look at the categories of deep and surface learning. The inventories can be used to compare groups of students, to examine approaches before and after changes to teaching methods, and to examine correlations with other variables.
The Study Process Questionnaire (SPQ) developed by John Biggs (1987) assesses students' approaches to learning. Scores are obtained for each student on deep, surface and achieving approach scales. The SPQ has been widely used in Hong Kong and its cultural applicability widely researched. A detailed account of usage of the SPQ, together with tables of norms for Hong Kong students for comparison purposes, is in Biggs (1992). The SPQ is available in English, Chinese or bilingual versions.
For action learning projects, a suitable way to use the SPQ is to apply it at the start and end of the innovation. Changes in SPQ scores can then be interpreted as a reflection of the teaching and learning context. The results will indicate whether the innovation has encouraged meaningful approaches to learning.
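The start-and-end design amounts to comparing paired scores per student. The sketch below uses invented deep-approach scores; the SPQ's actual scale scores should be computed as described in the manual (Biggs, 1987).

```python
# Hypothetical deep-approach SPQ scores for the same five students
# before and after the innovation (the scores are invented).
before = [28, 31, 25, 30, 27]
after = [32, 33, 27, 34, 29]

changes = [post - pre for pre, post in zip(before, after)]
mean_change = sum(changes) / len(changes)
print(f"mean change in deep-approach score: {mean_change:+.1f}")
```

With a realistic sample size, a paired significance test (for example `scipy.stats.ttest_rel`) would normally be applied before interpreting the change, and the Hong Kong norms in Biggs (1992) provide a further point of comparison.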
Biggs, J.B. (1987). The Study Process Questionnaire (SPQ): Manual. Hawthorn, Vic.: Australian Council for Educational Research.
Biggs, J.B. (1992). Why and how do Hong Kong students learn? Using the Learning and Study Process Questionnaire. Hong Kong: University of Hong Kong.
Open questionnaires have a series of specific questions but leave space for respondents to answer as they see fit. You are therefore more likely to find out the views of students but replies are more difficult to analyse and collate. The usual procedure is to search for categories of common responses.
An example of an open questionnaire is shown below.
It is not necessary to have separate questionnaires for open and closed items. The most successful questionnaires often have both open and closed items.
A good basis for improving your teaching is to diagnose your students' understanding of key concepts in a course. It is often surprising how students can pass university examinations yet still have fundamental misunderstandings of key concepts. The usual method of diagnosing student conceptions is to ask a question which applies the concept to an everyday situation: one which cannot be answered by reproduction or by substitution into formulae. Answers are drawn from the students in interviews or in written form.
The students' answers can usually be classified into a small number (usually two to five) of conceptions or misconceptions about the phenomenon. As with the analysis of interview data, care needs to be taken when deriving classifications. These do not automatically emerge from the transcript but are subject to the experiences and knowledge of the researcher.
An example of this type of question, and the categories of student conceptions which it uncovered, is given below (Dahlgren, 1984).
Making tape recordings is a way of collecting a complete, accurate and detailed record of discussions in class, conversations in interviews or arguments and decisions at meetings. It is easy to obtain the recording; you simply take along cassettes and a portable recorder, and switch it on. However, the presence of a tape recorder can inhibit discussion or influence people's behaviour.
There are a number of ethical issues which need to be addressed over the use of tape recordings. The group being taped should establish the purpose of making the recording and the way in which the tapes will be used. If any quotations are made in subsequent reports it is customary to maintain the anonymity of the source.
If you need to do a detailed analysis of the conversations then it will be necessary to produce a transcript. This is a time-consuming and painstaking process, so limit the use of tape recordings to situations where it is really necessary.
Triangulation is not a specific observation technique, but the process of comparing and corroborating data from one source with data from another. If you do just a handful of interviews, your conclusions may be viewed with skepticism. But if the interview results concur with findings from a questionnaire, trends in examination results and evidence from your journal, then the conclusions are much more convincing. The message is simple: use more than one observation technique in order to see whether your results are consistent.