its inputs. The usual strategy followed in such cases, to enable the systems to save energy, is to adapt their programmes and their requirements to the most probable input, thus sacrificing the remaining ones. This means that when the most probable input to secondary technical education (E) is the one coming from secondary general education, the programme of E is made less technical and more theoretical to facilitate the adaptation of the pupils coming from general education. The opposite will happen when the most probable input comes from lower technical education: then the programme of E is less theoretical and more technical. In either case, a substantial number of pupils will have to make an effort to adapt; otherwise they have to leave the system or accept low performance. The same happens to all components which receive inputs from more than one component. The distortion of the input-output correspondence increases with the number of components through which an input has to pass. This indicates that the quality of output is a function not only of the system's (component's) effectiveness, but also of the input's quality. These considerations, however, are not taken into account in system performance evaluation.

In order to dramatize the situation further we can add the quality of another important input: the teacher. The educational system is the only instance where one of its inputs (the teacher) is simultaneously one of its outputs, produced by one of its structural components (teacher-training colleges). Obviously, there is a continuous circular relationship between pupils-graduates-teachers-pupils, etc., which may become a vicious circle if input-output quality specifications are not appropriate.

To conclude this discussion, a word about the educational output qualifications (at various exit points) in relation to manpower requirements, as well as to the other functions of the individual in society. Despite manpower planning efforts in many countries there is a profound mismatch, qualitative and quantitative, between the educational system's output and the socio-economic system's input specifications. This mismatch widens in countries where the economic sector progresses faster and the adaptive capacity of the educational system is very small. The magnitude of this mismatch should certainly serve as a crucial evaluative criterion in educational system macro-performance evaluation.
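To make the use of such a criterion concrete, the sketch below computes a simple dissimilarity index between the distribution of graduates by qualification and the distribution of labour-market demand. It is a minimal sketch only: the category names, the figures and the choice of index are illustrative assumptions, not data or a method taken from this report.

```python
# A minimal sketch of a quantitative mismatch index between educational
# output and socio-economic demand. All figures are hypothetical.

def mismatch_index(graduates, demand):
    """Half the sum of absolute differences between the two share
    distributions: 0.0 = perfect correspondence, 1.0 = complete mismatch."""
    total_g = sum(graduates.values())
    total_d = sum(demand.values())
    categories = set(graduates) | set(demand)
    return 0.5 * sum(
        abs(graduates.get(c, 0) / total_g - demand.get(c, 0) / total_d)
        for c in categories
    )

# Hypothetical annual flows by qualification category.
graduates = {"general": 6000, "technical": 1500, "vocational": 500}
demand = {"general": 2000, "technical": 4000, "vocational": 2000}

print(f"Mismatch index: {mismatch_index(graduates, demand):.2f}")  # 0.50
```

Tracked over successive years, such an index would also show whether the mismatch is widening, as the text suggests it does where the economy outpaces the educational system's capacity to adapt.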
Micro-evaluation of the transformation-delivery sub-system

While macro-evaluation considers the overall performance of an educational system or any of its structural sub-systems, micro-evaluation looks mostly at what is happening in the classroom, and assesses the performance of teachers and pupils individually. This type of evaluation corresponds closely to the function of the "inspectorate", and also to what in educational circles is usually called "educational evaluation" in general, covering the evaluation of all types of educational programmes, and is usually based on the performance (achievement) of pupils.

Since evaluation can be performed on all elements and processes of the system and at all systemic levels, it would be necessary to link micro- to macro-evaluation, at least schematically. This is attempted in Diagram 7.

[Diagram 7. The teacher/pupil nucleus: the teacher as a transformation element and the student (or parents) as a control element, linked by outcome and feedback input.]

The nucleus component is the teacher/pupil(s) system. Teachers and pupils are elements of the teaching/learning (delivery/transformation) system, using several inputs and producing several outputs, to be discussed in detail below. The end result (outcome) of this teacher/pupil interaction is usually assessed in terms of pupil performance. This evaluation can be performed by several people: the teacher, responsible for the pupil's learning (transformation); the pupil (beyond a certain age), whose objective is to learn (to be transformed); the schoolmaster, responsible for the functioning of the system; and the school inspector, an outsider to the school system, whose role is to evaluate objectively the performance of the teacher and, as a consequence, of the school as a whole in meeting the educational system's objectives, i.e. transforming the pupil according to specifications. The evaluation can also be undertaken by such elements as parents, various interested social groups, etc. It is evident that each of these evaluation agents can use their own evaluative criteria, corresponding to their own objectives. This was already made evident when discussing possible means of macro-evaluation.

The discussion on micro-evaluation will also be based on systems analysis, since it is perhaps the only approach of general applicability which permits the analyst to enter a system gradually and to break it down systematically. In our particular case there is an initial effort to break down the nucleus component of the educational system's teaching/learning sub-system and, in turn, to discuss the evaluation of the system's operation in relation to the control sub-system of the educational system as referred to above.

It is hoped that the detailed analysis attempted here will help the educational programme evaluation designer in his work and, at the same time, demonstrate the complexity involved in the teaching/learning process, which renders a meaningful educational evaluation if not impossible at least hesitant and difficult.
The difficulty increases when the purpose of the evaluation is to assess the effectiveness of only one input, such as the course programme, the teaching method, etc., by looking at pupil performance (achievement testing), which is the end result of many factors. The consistent findings of many surveys, therefore, are not surprising: for example, Coleman's findings, according to which very little of the variation in school performance was accounted for by differences associated with the school.¹ Hayes and Grether also concluded that the difference in academic achievements across social class and race found at the sixth grade is not "attributable to what goes on in school, most of it comes from what goes on out of school".²

1. J. S. Coleman, Equality of educational opportunity, Washington, D.C., U.S. Office of Education, 1966.
2. Quoted from U. Bronfenbrenner, "Is early intervention effective?" in Handbook of evaluation research, op. cit., p. 546.

The purpose of the teaching/learning (or transformation) process is to bring about some deliberate changes in the "pupil", which can cover changes in cognitive knowledge, creative ability, behavioural norms, etc. It seems, however, that traditionally the greatest importance has been attached to cognitive knowledge, and pupil assessment is therefore usually done in relation to this goal. It is evident, however, that the identification of this type of goal is absolutely necessary when undertaking such an evaluation. A discussion of educational goals goes beyond the purpose of this report; educational evaluators and/or educational administrators should be familiar with this type of discussion and with the particularities of the educational system under evaluation. A word of warning is in order regarding the probable conflicting nature of some of these goals, which makes their identification necessary, and regarding a probable divergence between the goals of the educational system and the values (what is best) of the teacher responsible for the pupil's transformation. For example, many teachers may favour the development of pupil creativity over cognitive ability, the latter being the goal of the educational system through its textbooks, teaching practices, etc. In such a case pupils exposed to the influence of such a teacher will most probably do less well in school tests designed according to the goal of cognitive ability. Similarly, conflicting behaviour may be expected when a new practice is introduced into the school system. Many teachers, directly or indirectly, will refuse to follow the new practice, as has been the case with modern mathematics in primary schools. Here again, the assessment of pupil achievement based on tests designed in accordance with the new practice will be misleading with respect to the pupils' real achievement.

Both the teacher and the pupil are elements of the teaching/learning process, which cannot operate if either of the two elements is missing.¹ At the same time, however, the teacher and the pupil are systems in themselves, i.e. they have their own goals and use various inputs to attain them. One of the teacher's goals is to "teach" (transform) the pupil and one of the pupil's goals is to "learn" (be transformed).²

1. In the case of self-learning the "pupil" is also his own teacher.
2. It is evident that much depends on the correlation of these two goals and also on how much the teacher wants to teach and how much the pupil wants to learn. Such a discussion of teacher and pupil motivation goes beyond this book. It should, however, be taken seriously into consideration in educational evaluation.

THE TEACHER AS AN ELEMENT OF THE TRANSFORMATION SUB-SYSTEM
A. Inputs

In his effort to teach (transform) the pupil, the teacher uses the following inputs:

(a) His knowledge and experience,³ which have to do mostly with the following: (I) the subject; (II) teaching technique; (III) educational psychology; (IV) his pupils' cultural environment.

3. It is assumed that experience complements knowledge positively, which may not necessarily always be the case. In fact, very often experience has an overall negative effect because of acquired prejudices, habits, etc., which hinder the individual in acquiring new knowledge.

(b) Aids which will assist him to perform his role:
(I) books: textbooks; auxiliary books (for himself and the pupil);
(II) available educational technology:¹ audio-visual aids, including television; computers;
(III) laboratories, libraries, museums, etc.

(c) Curriculum
Curriculum is unquestionably the most important input. In its broader definition it encompasses "the total effort of the school to bring about desired outcomes in school and out-of-school situations".² Such a definition, however, is non-functional. Efforts to narrow the definition suggest the retention of those curriculum elements which have a direct impact on the in-school and, more precisely, the in-classroom activities which directly relate to the teaching and learning processes. Although favouring the broad definition, Marklund nevertheless distinguished three main levels:³
Level 1: the external structure of the school, above all in respect of the number of grades, stages and divisions into different courses of study.
Level 2: time-tables and syllabuses with aims and content of subjects or groups of subjects.
Level 3: the teachers' instructional methods, the pupils' way of working, educational materials, study materials, and forms of evaluation.
For the purposes of the present report it seems practical to limit curriculum to Level 2, since all the other elements are treated separately here, but all of them are considered as inputs to the transformation process.

For the effectiveness of the transformation process as a whole, which is usually the object of an outside system's performance evaluation, it will be necessary to distinguish curricula in relation to their degree of standardization. Curriculum standardization usually depends on the degree of the educational system's administrative centralization. In centralized systems curricula are centrally prepared and are standardized for broad use; in decentralized systems the curriculum is less standardized, in that the responsibility for its design lies with the responsible teacher, the school unit, the local educational authority, etc.

1. Educational technology could be seen and dealt with in a similar way to industrial technology. Technology constitutes the capital factor and the teacher the labour factor, which is strongly influenced both in quantity and quality by the capital factor.
The massive introduction, for example, of television and computers into the teaching/learning process is expected to have a great impact, both quantitatively and qualitatively, upon the teachers (the labour factor). The degree and the direction of this impact are still to be measured. The educational system, as a service-producing one, may never lose its relatively labour-intensive character.
2. J. G. Saylor and W. M. Alexander, Curriculum planning for better teaching and learning, Rinehart, 1954, p. 3 (quoted in H. Taba, Curriculum development: theory and practice, Harcourt, Brace and World, 1962 (paper), p. 9).
3. Sixten Marklund, "Frame factors and curriculum development" (working paper prepared for an international meeting held at Allerton Park, the University of Illinois, September 1971), quoted in S. Maclure, Styles of curriculum development, Paris, CERI, OECD, 1972, p. 12.

In the case of standardized curricula, there will be a need to evaluate the following:
- whether existing curricula adequately reflect prevailing educational goals; this relates to the situation whereby educational goals, reflecting broader educational values, norms and needs, change faster than standardized curricula;
- whether all teachers who have to accept the curriculum are in fact capable of pursuing it; this relates to the continuous retraining of teachers in centralized systems to meet the new demands of revised or new curricula. A good example is the introduction of modern mathematics: many teachers had to agree to teach the new subject although they themselves had never had any training in, or even knowledge of, the subject matter.

With non-standardized curricula, the opposite may happen. Here it will be necessary to assess how fast curricula are modified, since it may be that school curricula will change together with the teacher, because the teacher would prefer to teach according to his own knowledge and values. Curricula changes, however, will often create an unstable school situation which may not serve the more constant educational goals of the school community.

(d) Time
(I) time spent in actual teaching;
(II) time spent in the teacher's home preparation;
(III) time spent in pupil evaluation (correction of pupils' exercises, informal discussions, etc.).
It is apparent that the implicit assumption concerning the time input is that the more time the teacher spends on teaching preparation, actual teaching and pupil evaluation, the better the teaching result (learning).

(e) Teacher motivation
In addition to the "tangible" inputs, it is also necessary to consider some non-tangible inputs which nevertheless play an important role in the overall result. The degree of the teacher's motivation should be linked with the professional, social and psychological satisfaction he derives from his teaching assignment. Probable indicators could be career possibilities, social status, level of income, etc.

(f) Pupil response
Pupil response relates to the evaluation information on pupil achievement produced by the teacher. Such information usually acts as a stimulator (positive or negative) of teacher performance.

(g) Pressure on the teacher
This is another important non-tangible input, which is used to increase teacher motivation for better teaching. It is exercised formally by the teacher's superiors (schoolmaster, inspector, etc.) and informally by parents, pupils and the community at large.
The following may be used as indicators for assessing the degree and nature of such pressure:
- frequency of formal inspections;
- parental interest shown by personal visits and/or group discussions with the teacher;
- pupils' demands for more or less work;
- the community's view of teachers in relation to their competence and school performance.

(h) Classroom conditions
Classroom conditions affect both teachers and pupils. Teachers often complain of bad classroom conditions, i.e. size, light, temperature, etc., which hinder them in their teaching effort. Classroom conditions, therefore, should be evaluated carefully.

(i) Teachers' health conditions
The importance of physical and mental health as a factor in the teaching effort of teachers is beyond any discussion. It will, therefore, be absolutely necessary when evaluating the system's performance to assess carefully the health of the teaching personnel. The following may be used as indicators:
- medical services at the teachers' disposal;
- obligatory medical examinations;
- substitute teacher service;
- the system's general behaviour towards teachers registered as being ill.

B. Outcome
Although micro-educational outcome evaluation is usually performed on the teaching/learning system's "final outcome" (the pupils' achievement or final degree of transformation), it may be advisable to consider the outcome of the teaching process separately. This is necessary because the final outcome may be seen as the result of two intermediate efforts or outcomes: the effort (and hence outcome) of the teacher, and that of the pupil. This directly suggests that if the quality of the final outcome is "good", such a result may have been achieved through various combinations of the intermediate outcomes, e.g. "very good" teaching effort/"mediocre" pupil effort, or "mediocre" teaching effort/"very good" pupil effort (a toy model of such combinations is sketched at the end of this section). Using this as a guide one could evaluate teacher performance independently of pupil achievement (the final outcome of the system). Such an evaluation, for example, is usually done by school inspectors when they sit in on classroom sessions and observe and listen to the teacher. In so doing inspectors are using their own model of how one should teach as an evaluative criterion. This is inevitable because the teacher outcome is not measurable and hence not directly evaluable. Indirectly, one could assess the teacher's everyday efforts by separately assessing the various inputs he uses.

THE PUPIL AS AN ELEMENT OF THE TRANSFORMATION SUB-SYSTEM

What makes the educational system differ from other transformation systems (such as industry) is that the pupil, the "raw material" to be transformed, is also an important factor in the transformation. The extent to which the pupil contributes to his own transformation depends on his previous development. Primary-school pupils, for example, contribute less than secondary-school pupils, and these in turn less than university students. As the pupil's contribution to the transformation increases, the teacher's importance as a transformation factor unavoidably decreases accordingly. This way of thinking helps in considering the optimum utilization of these two factors (given a certain technology), and in fact this is often taken into account in teacher and/or pupil loading.
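The argument about combinations of intermediate outcomes, together with the point that the teacher's weight in the transformation declines as the pupil's own contribution grows, can be illustrated with a toy calculation. Everything in the sketch below (the 0-1 scales, the weighting rule, the figures) is a hypothetical illustration of the reasoning, not a model proposed in this report.

```python
# A toy model of the final outcome as a weighted combination of the two
# intermediate outcomes discussed above. All scales, weights and figures
# are hypothetical.

def final_outcome(teacher_effort, pupil_effort, pupil_autonomy):
    """Combine the two intermediate efforts (each on a 0-1 scale).

    pupil_autonomy rises with the pupil's previous development, shifting
    weight from the teacher to the pupil, as the text argues.
    """
    return (1.0 - pupil_autonomy) * teacher_effort + pupil_autonomy * pupil_effort

# Two different combinations yield the same "good" final outcome for a
# secondary-school pupil whose autonomy is assumed to be 0.5:
print(round(final_outcome(0.9, 0.5, 0.5), 2))  # very good teaching, mediocre pupil effort -> 0.7
print(round(final_outcome(0.5, 0.9, 0.5), 2))  # mediocre teaching, very good pupil effort -> 0.7
```

The model also makes the loading point visible: with a university student (high autonomy) the first argument matters little, while with a primary-school pupil (low autonomy) it dominates.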
The pupil, as a transformation element (though a system in himself), makes use in his attempt to learn, in a similar way to the teacher, of some inputs. His outcome is the additional learning acquired (or the additional transformation he has undergone).

A. Inputs
The following are some of the most important inputs a pupil uses in his learning effort.

(a) Teacher's transformation effort (teacher's outcome)
This is a continuous input, but not necessarily a homogeneous one. In other words, the teacher's effort may change, and such a change may have an additional positive or negative effect on the pupil.

(b) Aids
The pupil, like the teacher, has at his disposal such aids as:
(I) books: textbooks; auxiliary books (dictionaries, encyclopedias, etc.);
(II) various auxiliary instruments.

(c) Time
The pupil spends time studying. This can be broken down as follows:
(I) time spent in the classroom, during which the pupil is exposed to the direct teaching effort;
(II) time spent on homework;
(III) time spent on other competitive and/or complementary educational activities.

(d) Method of study
This has the same importance as the teaching technique has for the teacher, and is usually influenced by the teacher, both in the classroom effort and for homework.

(e) Background
The pupil's past degree of transformation. Some of the factors determining the pupil's background are the following:
(I) age;
(II) previous school attendance and performance;
(III) family environment;
(IV) the broader socio-cultural environment.

In the first year of entrance to a formal educational system "previous school attendance" is equal to zero and, in this case, age, family and the broader socio-cultural environment are the only factors determining the pupil's background. These factors are of great importance in setting first-year entrance requirements in schools. The usual practice in many educational systems is to let age alone be the decisive factor for entry. In the case of a very diversified societal environment, however, it is to be expected that the backgrounds of children of the same age will differ, sometimes substantially. In turn, this results in a highly heterogeneous class in terms of children's capabilities to absorb and learn. If such heterogeneity prevails in a classroom it forces the teacher to address himself (i.e., to adapt his teaching effort) to the average child, usually at the expense of both those above and those below average. In systems where a well-developed pre-primary education exists, pupils entering primary education have some previous school attendance, intended to increase the homogeneity of the classes. The usual phenomenon, however, seems to be the opposite, since entry to pre-primary education is also based solely on age. Some school systems group pupils according to their capabilities as well as by age. Such grouping, however, is mainly used to facilitate later screening rather than as a means of reinforcing pupils' backgrounds. This is evident because such groupings pass on to higher grades still bearing the same quality label.

(f) Inter-pupil relationships
It has been found that a positive relationship exists between pupils' performance (especially in the lower grades) and inter-pupil relationships in terms of games, exchanges of ideas, etc.
This is a factor to be taken seriously into account when determining class size, ways of teaching, etc., although it is apparently systematically ignored (as is shown by the trend to decrease class size continuously in order to increase the teacher/pupil ratio).

(g) Pupils' motivation
The pupil's motivation to learn should be seen in relation to (1) the possibilities offered by a certain degree of formal learning for meeting his professional and associated goals; and (2) the satisfaction the pupil obtains from school. In many instances it has been found that satisfaction or dissatisfaction with school affected pupils' long-term educational plans accordingly. For example, academically successful pupils in England left school at the age of 16 because, as they said, they were "fed up" with school.¹ This makes apparent the importance of motivation in the pupil's study effort, which must, therefore, be taken into account when evaluating the system's performance.

1. G. Williams, "Individual demand for education: case study: United Kingdom", Paris, OECD, SME/ET.76.21 (mimeo).

(h) Pupil's success
This is information on the pupil's own success in his study effort, and should be seen as an output of the pupil's self-evaluation or of evaluation by his teachers and/or parents. It can contribute either positively or negatively to the pupil's motivation, but is intended to make him correct his performance accordingly, when necessary.

(i) Curricula
The curricula directly determine the nature of the pupil's performance. Very often pupils complain of a curriculum's lack of relevance to their personal aims. This brings us back to the goal-setting question: whose goals are curricula supposed to attain? In most standardized educational systems this distinction is not obvious. Curricula may, therefore, serve the educational system's goals (very often already obsolete), the teacher's goals, society's goals, or the goals (economic, cultural, religious, etc.) of some of the social systems, but not necessarily those of the individual, because a standardized system always sees the individual from the educational system's viewpoint (i.e., that of society at large), considering only some general features, needs and/or obligations. This way of setting educational goals will inevitably create a conflict with the personalized goals of the pupils. Curricula, therefore, should be designed to take individual goals into account as well. Such goals should, in turn, be used for evaluation purposes.

(j) Classroom conditions
Classroom conditions play an important role in pupils' overall performance. Cold or overheated, dark or too sunny classrooms, uncomfortable desks, dismal colour schemes, etc. will certainly have an adverse effect upon desired pupil performance. The educational system's evaluator should therefore assess the prevailing classroom conditions in the schools. In addition, prevailing home conditions are also of great importance, especially for those curricula which demand a lot of homework. This directly suggests that in countries where home environments cannot be controlled, curricula should be designed to require the least homework, if any.

(k) Health conditions
Health (physical and mental) is a decisive factor directly affecting school attendance, ability to concentrate in the classroom, assimilation capacity, etc., thus influencing overall performance.
The educational system's evaluator must, therefore, investigate the health conditions of the pupils, especially in situations where endemic illnesses exist. Family income, social security schemes, nutrition habits, etc. are factors which should also be examined.

B. Outcome
The outcome of the pupil's system, corresponding to the "final outcome" of the teaching/learning process, is the additional knowledge and other characteristics and/or abilities acquired, which taken together constitute the pupil's additional "transformation". This outcome can be, and is, evaluated on a daily, weekly, monthly and/or annual basis. Depending on the curriculum prescriptions, the particular educational level and/or the preferences of the teacher or other evaluator, the pupils are regularly evaluated formally, i.e. the evaluation results are formally registered for promotion purposes. The usual evaluating technique is based on tests (oral and/or written, standardized or not), with the pupils receiving "marks" which indicate whether their outcome (i.e., degree of transformation) is as expected.

Two problems seem to arise: the first involves the purpose of such an evaluation and the second the effectiveness of the evaluation technique used. In theory, the purpose of any evaluation is to provide the opportunity to correct output. In educational systems where such an evaluation is performed by the teacher himself it serves two aims: to increase the pupil's study effort and, finally, to classify him according to performance, purely for selection purposes. In so doing, it is evident that the sole accountable factor in a pupil's performance is taken to be the pupil himself, insofar as his personal effort (i.e. time, motivation, etc.) is concerned; all other factors (the inputs, and the teacher's outcome) are considered satisfactory. If, however, it is true that a pupil's final outcome is a function of all the inputs discussed above, it is obvious that in correcting a supposedly bad result a search should be made for all responsible factors, including, of course, the performance of the teacher himself. In standardized situations, however, very little, if anything, can be done about the standardized inputs; and as for the teacher's evaluation, the evaluative criteria used are the successful pupils, not the unsuccessful ones.

The effectiveness of the evaluative technique (including the grading system) is also an extremely important issue, because of continual complaints by pupils that their failure in evaluation tests does not necessarily reflect their knowledge, since such factors as response speed, memorizing, etc., which may not be aims of the curricula, are incorporated in the evaluation tests and thus reduce their validity.

It is evident from the above, and this will be made clearer below when discussing the educational system's evaluation (or control) sub-system, that an educational system's micro-performance evaluation has to consider carefully the purpose and effectiveness of the forms of evaluation used by schools to screen out pupils. Many educational systems, all too often, are evaluated solely on the basis of the number of repeaters and drop-outs. These numbers, however, may relate to the evaluation techniques used by schools and to the purpose of the evaluation. A micro-educational evaluation, if it is to be comprehensive, has to take into account all the inputs, constraints and intermediate outcomes of the teaching and learning processes.
It was made clear above that educational evaluation which is limited only to pupil achievement does not reveal what is going on in the system, nor does it suggest where the faults for a bad outcome are located. In addition, the results may not reflect pupils' actual learning, and may thus be misleading. In order, therefore, to facilitate such a comprehensive evaluation, an effort was made to prepare a questionnaire which could be used as a guide by the evaluation designer. This questionnaire, which is given in Annex A, should be considered as a first approximation towards such an objective and could be complemented according to the specific evaluation goals.

The evaluation (or control) sub-system

As was mentioned earlier, the educational system, like all social human systems, is endowed with its own evaluation sub-system. The function of this sub-system, which at the level of the school is often called the "inspectorate", is to assess the system's performance (i.e. the teaching/learning sub-system) continuously, checking it against the system's goals. However, by limiting the educational system's evaluation or control sub-system to the inspectorate, one loses a great part of what really constitutes this sub-system. In fact, the evaluation process starts in the classroom with the two basic elements of the transformation sub-system, namely the teacher and the pupil, and ends out in society at large. Diagrammatically, it can be presented in terms of hierarchical levels of control, as shown in Diagram 8.

[Diagram 8. School evaluation hierarchy: the teacher/student system, with its inputs and outcome, is enclosed in the schoolmaster-teacher-student system, which is in turn enclosed in the community-school system; the schoolmaster, the inspector and the community (parents) act as control elements at successive levels.]

The basic control elements are as follows:
At the lower level (classroom): the teacher and the pupil, each evaluating the other, as was shown in Diagram 7.
At the school level: the headmaster evaluates the teacher and the pupil, both separately and together as one system.
At the inspectorate level: the inspector controls the school as one system (i.e. the headmaster), as well as the teacher/pupil system separately.
At the community level: parents and/or the community at large exercise control over the performance of all three systems, namely the classroom, the school, and the inspectorate and/or any additional formal educational authority which may be above the inspector. Parents' or community evaluation is usually not formal, in the sense that they are not formally assigned this function. The impact of their evaluation, nevertheless, is often very important for educational policy decisions pertaining to many aspects of the educational system, including the curricula.
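One way of seeing the hierarchy of Diagram 8 is as a set of nested systems, each level taking the system below it as its object of evaluation. The sketch below is merely one possible rendering of that nesting; the class design and the printed labels are illustrative assumptions, not part of the report.

```python
# A minimal sketch of Diagram 8's hierarchy of control: each level
# evaluates the system it encloses. Names and structure are illustrative.

from dataclasses import dataclass

@dataclass
class System:
    name: str                       # what is being evaluated at this level
    controller: str                 # who exercises control at this level
    inner: "System | None" = None   # the enclosed system, if any

    def control_chain(self):
        """List the control relationships from the classroom outwards."""
        chain = [] if self.inner is None else self.inner.control_chain()
        chain.append(f"{self.controller} -> {self.name}")
        return chain

classroom = System("teacher/pupil system", "teacher and pupil (mutually)")
school = System("teacher/pupil system as a whole", "headmaster", classroom)
district = System("school as one system", "inspector", school)
society = System("classroom, school and inspectorate", "community (parents)", district)

for link in society.control_chain():
    print(link)
```

Such a representation makes explicit the point made in the text: removing the inspectorate from the picture removes only one link in a longer chain of control.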
Having described the main components of the educational system's evaluation sub-system, we can now proceed to design its performance evaluation, following the same approach as before.

A. Macro-evaluation
Macro-evaluation of the performance of the educational system's evaluation sub-system should be designed along the lines followed previously for evaluating the teaching/learning sub-system. The basic characteristics of the evaluation sub-system, subject to assessment, are the following.

The goals of the sub-system. The general purpose of an evaluation sub-system, as was said above, is to assess continuously the performance of the various parts of the educational system against its policy objectives and goals. It is evident, therefore, that if the evaluation sub-system does not operate properly, the probability is high that the final outcome of the teaching/learning sub-system will deviate from the desired one.

In attaining its purpose, the evaluation sub-system has to achieve certain goals. Such goals may, for example, be to make regular school inspections; to give short-term courses and seminars to local teachers; to inform the policy authorities regularly on the performance of the schools and their needs for additional inputs (such as personnel, aids, etc.); to make recommendations for rewarding the good teachers and schoolmasters or sanctioning those responsible for the malfunctioning of the schools; to indicate problems related to the curricula; etc. It is evident, therefore, that at least in theory a well-functioning evaluation sub-system will bring about all the corrective measures necessary for attaining the objectives of the educational system. Evaluation sub-systems, however, do not always pursue all the above goals. Often they are preoccupied with the evaluation of the teacher's performance for the teacher's career purposes. In so doing, they neglect their real purpose, and thus provide a good example of the disorientation of an evaluation sub-system organized, in most cases, bureaucratically. An attempt to sensitize the evaluation sub-system is then undertaken by the community at large, which forwards its complaints directly to the policy authorities. It will be necessary, therefore, when considering an outside macro-evaluation of the performance of the evaluation sub-system, to include the community as well.

After having identified the goals of the entire evaluation sub-system, it will be necessary to identify its structural components. In centralized educational systems the evaluation function is performed by a separate service, the inspectorate, structured in a way that corresponds to the structure of the school (transformation) sub-system. In less centralized systems, however, these structures may not be evident and the functions of one not immediately identifiable.

A macro-evaluation would have to assess the following (the sketch after this list shows how it might be carried as a working checklist):
- the nature of the relationship between the school and the inspectorate service;
- the physical distance separating the inspectorate from the schools;
- the way the inspection service is staffed, in terms of the number of inspectors and their qualifications;
- the inspection methods used: school visits, annual overall performance of schools, investigation of the community's complaints, group discussions with teachers, inspector/parent interaction;
- the communication and reporting system between the inspectorate and the educational authority;
- the authority (autonomy) of the local inspectorate to take immediate corrective action;
- community complaints about not being heard on school matters;
- long-standing school problems;
- teachers' complaints about not being inspected;
- other types of complaints.
Information on the above can easily be collected by means of a direct study of the inspectorate and an opinion survey of (or interviews with) those involved in and affected by the inspectorate.
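An evaluation designer carrying this checklist into the field could hold the assessment areas in a simple data structure and fill it in as evidence is gathered. The sketch below is a hypothetical illustration; the 0-4 rating scale and the recorded finding are assumptions, not part of the report.

```python
# A minimal sketch of the macro-evaluation checklist for the inspectorate.
# Items follow the list in the text; the rating scale is an assumption.

inspectorate_checklist = {
    "relationship between school and inspectorate": None,
    "physical distance from the schools": None,
    "staffing (number and qualifications of inspectors)": None,
    "inspection methods used": None,
    "communication and reporting with the educational authority": None,
    "local autonomy for immediate corrective action": None,
    "community complaints about not being heard": None,
    "long-standing school problems": None,
    "teachers' complaints about not being inspected": None,
    "other types of complaints": None,
}

def record(checklist, item, rating, note=""):
    """Record a 0-4 rating (0 = severely deficient, 4 = satisfactory)."""
    assert item in checklist, f"unknown checklist item: {item}"
    assert 0 <= rating <= 4, "rating must be between 0 and 4"
    checklist[item] = (rating, note)

# A hypothetical finding from a direct study of the inspectorate:
record(inspectorate_checklist,
       "staffing (number and qualifications of inspectors)",
       1, "three inspectors responsible for 120 schools")

pending = [item for item, v in inspectorate_checklist.items() if v is None]
print(f"{len(pending)} checklist items still to be assessed")
```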
B. Micro-evaluation
Micro-evaluation has to look inside the classroom and the school, since it is at this level that educational evaluation has to be performed. In fact, in entirely autonomous school units (e.g. private schools), which are not administrative units of a formal educational system, the inspectorate as a formal authority does not exist. Here, the evaluation is performed internally by the teacher, the schoolmaster and, to some extent, the pupil, and externally by the pupil's parents (informed by the pupil) and the school's parents' committee (if one exists).

It seems, however, that the most effective evaluation is the one which takes place in the classroom, involving both the teacher and the pupil. The discussion below, therefore, will follow Diagram 7, where teacher and pupil are shown as elements both of the teaching/learning sub-system and of the evaluation (control) sub-system.

THE TEACHER AS AN ELEMENT OF THE EVALUATION SUB-SYSTEM

As an element of the evaluation sub-system, the teacher has to evaluate both himself and the pupil.

A. Inputs
In performing his evaluation function, the teacher uses the following inputs.

(a) Feedback information pertaining to his teaching effort
This information is difficult to identify, even by means of an opinion-searching effort, for it depends on the teacher's perception of himself, his frankness and his professional integrity. It could be said that this information relates to how the teacher feels at the end of his teaching effort. Is he satisfied with himself? Does he reproach himself for not having responded adequately to his pupils' questions? Was he too severe, or did he behave in general in an inappropriate manner? Does he plan to cover the subject better next time?

Answers to such questions enable the teacher to become aware of his own performance. In some cases, however, such as experimental schools, the teacher may be formally asked to explain at the end of a session, to an audience other than his pupils, why he taught in the way he did, and to present possible alternative approaches. In this way the teacher himself produces all the information necessary for undertaking a self-evaluation. The teacher faces a similar situation when he is evaluated by an inspector (or his schoolmaster), with the session ending in a discussion between the inspector and the teacher. It seems, therefore, that the most pertinent question regarding this input is whether the school makes provision for teachers to present their viewpoints on the teaching technique they use and/or are forced to use, and to have these discussed in a professional meeting.

(b) Feedback information regarding his pupils' learning results
The teacher continuously receives information on his pupils' learning results. Depending on the size of his class and the curriculum, he may not have many opportunities for evaluating his pupils. In such cases he relies upon information received through formal oral and/or written tests. This suggests that there will be cases when the teacher may not be as informed as he should be about his pupils' performance. Outside evaluation, therefore, should raise such questions as the following:
- How often do teachers evaluate their pupils?
- Has the school set specific evaluation rules? Are they considered satisfactory?
- Do teachers complain of having too many pupils in their classes and, therefore, of not being able to examine (evaluate) them properly?
- Do pupils complain of not being examined often enough by their teachers, so that their marks do not adequately reflect their knowledge?
- Do teachers grade their pupils without formal evaluation tests?
- Do parents complain about the evaluation system in use at the school?
- Do teachers, in evaluating their pupils, consider their relative rather than their absolute effort? In other words, do they demand more from pupils known to be good than from others?

(c) The evaluation form in use
Standardized educational systems use standardized evaluation methods which the teacher has to follow. This means that the information the teacher needs for performing his evaluation is constrained in quality and quantity by the evaluation methodology employed. In such cases the teacher cannot modify the evaluation results obtained through the formal evaluation by means of the overall personal impression he has of a pupil. In other words, in order to increase objectivity the system reduces the real value of the teacher as an evaluator.

It will, therefore, be necessary for the external evaluator to know the evaluation technique in use in a school or an educational system under external evaluation. It is usually claimed that the evaluation technique is related to the curriculum. This, however, seems to have only relative value and significance, since the evaluation technique should also be related to a child's psychology, age, family circumstances, etc. In any event, it is apparent that the teacher needs enough flexibility in the use of a particular evaluation technique.

(d) Other inputs
Additional inputs entering into the evaluation function of the teacher are those more related to the teacher's personality, such as:
- his professional integrity;
- his affection and expectations for his pupils;
- his formal knowledge of and experience in evaluating;
- his personal goals and interests (especially those which may hinder him in devoting the necessary time and effort to evaluating his pupils).
It is apparent, however, that an outside evaluation cannot do much about these. An outside evaluator could, however, enquire whether teachers formally learn to evaluate pupils in their respective training institutions, or whether this is considered as something which requires experience and common sense rather than formal training and special knowledge.

B. Outcome
The outcome of the teacher's evaluation effort takes the form of information regarding the adequacy of his teaching effort and of the pupil's learning result. It is fed back, therefore, to the teacher himself, in order to increase, decrease or modify his teaching effort and to motivate his pupils accordingly. The information directed towards the pupil should certainly indicate what corrective measures should be taken by the pupils themselves. Depending on the degree of autonomy he enjoys in the system, the teacher must either himself bring about the necessary changes in the inputs he uses to perform his teaching (i.e. teaching technique, textbooks, time load, etc.) or provide those responsible in the system for such changes with the appropriate information.

Very often, however, the teacher's evaluation results never reach those responsible.
This is an important issue to be investigated by outside evaluators, because of the inherent risk of using evaluation results solely for screening purposes and not for improving the pupil's performance. The evaluator, therefore, has to see whether the educational system under evaluation encourages teachers to make suggestions for curricula changes and whether teachers do make such suggestions.

THE PUPIL AS AN ELEMENT OF THE EVALUATION SUB-SYSTEM

The pupil is not a passive receptor of information. He has all the potentialities,¹ as an autonomous system, to evaluate all the information he receives, including that pertaining to his own performance.

1. It is evident that these potentialities are constrained only by his mental development and not by institutional factors. Institutional factors may have a constraining effect upon the communication of the pupil's evaluation results.

A. Inputs
The pupil's evaluation function is based on the following inputs.

(a) Feedback information on his own performance
The pupil receives daily information about his performance, mainly from his teacher, but also from his peers and his parents (to the extent that they are involved in his learning). This information adds to his self-awareness of whether he is performing the way he should. At some fixed interval the pupil also receives his grades, which are the formal measure of his performance. His grades, as measures of absolute and relative performance (in relation to his classmates), supplement the daily information. The process of pupil self-awareness is a complex one and, therefore, difficult to investigate. Very often the pupil has to reach a compromise between conflicting pieces of information. For example, the impression pupils have of themselves regarding daily performance may differ (negatively or positively) from the image their formal grades suggest. Such a feeling may be reinforced by parental behaviour. In such cases the pupil develops a pessimistic or a very optimistic view of himself which may not be consistent with his real capabilities. An additional, often serious, phenomenon arises when the educational system places great importance on formal grades used for promotion purposes. In such cases the pupil's effort to secure high grades may not correspond to the type of effort needed for real learning, and vice versa: pupils who have learned may not get high marks in evaluation tests. This is, of course, associated with the evaluation technique practised in schools but, nevertheless, it is a serious matter which needs to be investigated by the evaluator. Several educational systems have abolished or drastically modified the traditional grading system, especially at the lower educational levels. The following are some types of questions which could help the evaluator in his job:
- Do pupils think seriously of their marks?
- Do they think their marks express their achievement accurately?
- Do they complain of too severe grading?
- Do parents adhere to school evaluation marks or do they use their own criteria, which may indicate a different performance by their children?
- Do pupils take their peers' opinions seriously? Are these in accordance with the teacher's opinion (marks)?

(b) Feedback information on the teacher's teaching effort
This information enables the pupils to evaluate their teachers. It is a subjective evaluation which helps the pupil to create an image of the teacher's qualities and of his interest in his pupils.
The following are some of the factors contributing to this image:
- the teacher's overall reputation in the school (i.e. whether he is considered a good, just, objective teacher, etc.);
- the amount and type of homework he assigns;
- his knowledge of the subject (judged in relation to the answers he gives to pupils' questions, his lecturing ability, etc.);
- his frankness (whether he admits his errors or something he does not know);
- his interest in the pupil's progress;
- his relative objectivity vis-à-vis all pupils;
- the final marks he gives to a pupil.
The evaluator of the sub-system's performance should attempt to raise questions on the above, because of the importance most of these factors have in motivating the pupil to increase his effort. On the other hand, pupils seem to be a relatively reliable source of information pertaining to the teacher's teaching effort and ability, and they should, therefore, be taken seriously into account.

(c) Other inputs
The following are some additional inputs entering into the pupil's evaluation process:
- his age (as an indicator of his mental development);
- his interest in his studies;
- pressure to improve exercised on him by his family, peers, etc.;
- the teacher's support for the pupil's self-evaluation (e.g., many teachers ask their pupils to correct their own tests);
- class discussion of the performance of the class as a whole;
- eventual sanctions for not performing well.

B. Outcome
The outcome of the pupil's evaluation effort is in the form of information pertaining to the teacher's teaching effort and to his own personal achievement. Information on the pupil's achievement may include indications for specific corrective action, e.g. read more, redistribute time in favour of this or that subject, etc. If the pupil cannot develop a particular strategy for correcting himself, he may ask the teacher to suggest one.

Information regarding the performance of the teacher may never be directly communicated. Indirect communication may take place by providing the relevant information to the parents, who in turn pass it on to the headmaster and/or to other teachers, etc. The degree to which such information will reach the evaluated teacher, his colleagues and/or the headmaster will depend on the school's tolerance of such behaviour. The following are some pertinent questions for investigating this problem:
- Are pupils allowed to express views on the teacher's teaching effort, the evaluation technique he is using, etc.?
- Are there any examples of pupils being directly or indirectly punished for complaining about their teachers?
- Does the school encourage parents to express their feelings about the teacher's performance?
- Are there examples of teachers accused by pupils (or parents) of bad performance who have been disciplined by the school?
- Can the pupils strike?
- Do parents threaten to have their children transferred to another school if the school does not change a particular teacher (this evidently applies to private schools)?

THE HEADMASTER AS AN ELEMENT OF THE EVALUATION SUB-SYSTEM

To complete the micro-evaluation of the educational system's evaluation sub-system, it is necessary to include the headmaster in the discussion.
The most important part of the headmaster's role is performed not so much through direct evaluation of the two previous elements (the teacher and the pupil) as through the degree of democratic behaviour he tolerates, which pre-conditions the communication of evaluative information emanating from the pupils and their parents. In addition, even in centralized educational systems the school director has enough autonomy to alter some of the input conditions affecting the performance of both teachers and pupils. He also has increased sanctioning authority, which he can use for motivating teachers and pupils alike.

The system's evaluator, therefore, has to investigate carefully the headmaster's administrative authority and his attitude pertaining to the school's evaluation by parents or pupils.

A. Inputs
The following are some of the inputs the headmaster uses in performing his evaluation function:
- feedback information regarding the overall result of a class and of each pupil individually (individual records);
- feedback information as to the satisfaction of pupils regarding their teachers' performance (such information is usually in the form of parents' or pupils' complaints);
- other inputs: his formally prescribed authority; his personal attributes (values, attitudes, knowledge, experience, image, etc.); his personal goals and interests.

B. Outcome
The outcome of the headmaster's evaluation process takes the form of information pertaining to the overall performance of his school. This information, which is complemented when necessary with information regarding appropriate corrective action, is fed back to the teachers and pupils (parents) inside the school and to his superiors outside the school.

The information which is fed back to the teachers should, in theory, include questions as to the reasons why pupils performed less well than expected. This implies that for the headmaster a pupil's low performance is not independent of the teacher's teaching effort. In actuality, however, things seem to happen differently. The headmaster very seldom, if at all, holds his staff responsible for pupils' low performance. The entire blame is placed on the pupil himself, and it is from the pupil that corrective action is demanded; the teacher's behaviour thus remains always the same. It is evident that in such conditions the role of evaluation is completely distorted. The information fed back to the superior administrative echelons informs them of the school's shortcomings, requesting at the same time additional inputs for improving performance.

The outside evaluator has, therefore, to investigate the headmaster's evaluation effort and to whom he communicates the relevant information. The following are some questions to help the evaluator in his assessment effort:
- Does the headmaster discuss the performance of the pupils with the teachers?
- Are there formal procedures for doing so?
- Does the headmaster inform the parents and hold regular discussions with the parents' school committee?
- Does the headmaster listen to pupils' and parents' complaints?
- What is the headmaster's formal authority over the school's inputs and over sanctions on teachers and pupils?
- Do teachers complain of the headmaster being too severe with them and/or the pupils?
- What do parents think of the headmaster's democratic behaviour?
- Do parents/pupils think the headmaster tolerates criticism?
- Are there examples of pupils being punished for being too critical?
- What are the ways and means of reporting the school's shortcomings to the superior administrative echelons?
- On what aspects can he report?

The above information can easily be collected by studying the appropriate administrative rules and through interviews of the appropriate persons by community representatives.

Evaluation design of experimental programmes

The discussion in the preceding chapter was focused on a possible analysis of the educational system with a view to undertaking both a macro- and a micro-performance evaluation. It was argued that such an evaluation is necessary for identifying projects intended to ameliorate the system's performance.

To follow the logic of the conceptual framework presented earlier, it is assumed that the result of the performance evaluation indicated, among other things, that the middle vocational education system was defective. More precisely, it was found that graduates' qualifications did not meet the economic system's needs. To correct the situation, it was decided that the curriculum for middle-level vocational schools had to change. The World Bank was asked to provide the financial means for implementing a new curriculum and the World Bank, in turn, asked Unesco to administer the project.

The following were the basic terms of reference of the contract: design the new curriculum; implement it on an experimental basis; evaluate its results; bring about, if necessary, changes according to the evaluation results; and implement the curriculum in its final form on a national basis. It is evident that the project has all the prerequisites to be implemented at first on an experimental basis; i.e. it is innovative, its results are of a permanent nature, and the cost involved in its large-scale implementation is high.

In our particular hypothetical example the terms of reference include the evaluation of the experiment, which is not always the case. Such an evaluation can be performed in at least two ways: undertake the evaluation after the experiment is completed, or evaluate the experiment throughout the implementation phase. It was argued earlier that the most efficient way would be ongoing evaluation of the experiment, which would necessitate building specific evaluation guidelines into the project's experimental design. In practice, this means that project design components also have to be evaluated, together with the actual results at each of the project stages. In the case of the curriculum concerned, it is assumed that it extends over two years of schooling, i.e. there will be a need to evaluate the results of the project at the end of both years.

The curriculum design stage

A meaningful approach to designing the curriculum might involve the following steps:¹
Step 1: Diagnosis of needs
Step 2: Formulation of objectives
Step 3: Selection of content
Step 4: Organization of content
Step 5: Selection of learning experiences
Step 6: Organization of learning experiences
Step 7: Determination of what to evaluate and of the ways and means of doing it.

1. H. Taba, op. cit., p. 12.
Accepting the above sequence of steps as our basis for discussion, it will be necessary, for our purposes, to complement them with the following additional steps:
Step 8: Determination of teaching techniques
Step 9: Determination of teachers' qualifications
Step 10: Estimation of the number of teachers needed (for the experiment and for its implementation on a national scale)
Step 11: Estimation of new types of teachers, if any
Step 12: Provisions for securing the new type of teachers for the experiment
Step 13: Provisions for securing all new teachers for its broad implementation
Step 14: Determination of teaching aids (technical designs, tools, machinery, etc.)
Step 15: Provision for acquiring all necessary aids for the experiment and its broad implementation
Step 16: Estimation of the total cost of the experiment
Step 17: Estimation of the total cost of its broad implementation.

It is evident that the additional steps have to be included so that all provisions are taken to ensure the success of the experiment. As already argued, any mistake at the project design stage will inevitably affect the experiment's results. If, therefore, the project's evaluation does not cover all possible aspects of the project, the final outcome evaluation, which may indicate non-achievement of the project's goals, will fail to indicate the reasons for such failure.

The curriculum design will certainly have to provide further details on each of the above steps, which are omitted here. For illustrative purposes, however, Table 3 presents a first approximate breakdown of the above steps, in order to indicate areas where evaluation may be necessary. The table is structured in two parts: the first presents the curriculum design steps and the second the corresponding evaluation design of the project's experimentation.

1. H. Taba, op. cit., p. 12.

In the curriculum design, some of the steps refer to the final implementation of the project. It seems prudent to have, even on a preliminary basis, indications of the feasibility of implementing the curriculum on a broad scale. Evidently, this is necessary in order to avoid experimenting on something which, in all likelihood, will never be implemented on a national scale, unless one modifies the objectives of the project accordingly.

The curriculum experimentation stage

The curriculum experimentation stage will certainly include the preparation of all necessary materials and the actual teaching of the new curriculum. The preparation of the materials may involve an informal evaluation, in the sense that these materials should be given to several experts for preliminary comments. The materials will be corrected according to the experts' suggestions and put into final form for use in the classroom.

Two crucial problems arise in this type of formative evaluation, and they have to be resolved at an early stage. The first refers to who will be teaching the new curriculum and who will be doing the evaluation. The second relates to the ways and means of evaluation. There is no clear-cut answer to either of the two problems. As to the first, the following are possible:
(a) the teaching and the evaluation to be done by the same person;
(b) the teaching and the evaluation to be done by two different persons.
It is apparent that reference is made here to the evaluation for the purposes of the experiment and not to the evaluation foreseen in the curriculum design (Step 7), which may be entirely different. The one foreseen in the curriculum relates more to the pupil's performance. The other, for experimental purposes, relates to all aspects of the curriculum, including the teaching method and the teacher's behaviour itself.

For the experiment's purposes it might have been desirable to use experienced teachers who would themselves evaluate the curriculum. The risk involved in this, however, stems from the fact that in reality, i.e. in the project's general implementation, the curriculum has to be served by ordinary teachers, sometimes even entirely inexperienced ones. To counterbalance this effect, teaching could be done by ordinary teachers and be evaluated by "outsiders".

As to the second problem, the evaluation could be performed by means of pupils' regular achievement tests; by direct observation without the evaluator being physically present in the classroom (e.g. through the various audio-visual means usually used for evaluation purposes, which should not, however, be known to the pupils); and, finally, by means of continuous discussions with the teachers. At the end of each stage (i.e. each year) there will be an additional evaluation.

[Table 3. Curriculum design steps and the corresponding evaluation design of the project's experimentation. The printed table is not legibly recoverable from the source.]

Summative evaluation stage

Up to this point, evaluation was based on the performance of the pupils who attended the experimental classes, assessed against the norms set by the evaluators. Usually, however, at the end of the experiment, i.e. after two years (or even one year), there is a summative type of evaluation pertaining to the comparative assessment of pupils following the experimental course and of others who followed the old one. Such an evaluation implies that pupils from the two groups take common tests to determine the performance differences between those exposed to the new and the old curricula.
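To illustrate the kind of analysis such a comparison entails, and the distinction (taken up below) between a statistically significant difference and an educationally important one, the following is a minimal sketch in Python. The marks are hypothetical, the SciPy library is assumed to be available, and the sketch shows one possible treatment rather than a prescribed procedure.

```python
# Minimal sketch of the summative comparison: pupils taught under the
# experimental curriculum versus a control group taught under the old one.
# The marks below are hypothetical; SciPy is assumed to be installed.
from statistics import mean, stdev

from scipy import stats

new_curriculum = [72, 68, 75, 80, 66, 74, 71, 78]  # hypothetical test marks
old_curriculum = [70, 65, 69, 73, 64, 71, 68, 72]

# Statistical significance: Welch's two-sample t-test asks whether the
# observed difference in means could plausibly be due to chance alone.
result = stats.ttest_ind(new_curriculum, old_curriculum, equal_var=False)

# Educational significance: a standardized effect size (here Cohen's d)
# asks how large the difference is, which a p-value alone does not reveal.
pooled_sd = ((stdev(new_curriculum) ** 2 + stdev(old_curriculum) ** 2) / 2) ** 0.5
effect_size = (mean(new_curriculum) - mean(old_curriculum)) / pooled_sd

print(f"t = {result.statistic:.2f}, p = {result.pvalue:.3f}, d = {effect_size:.2f}")
```

Even a small p-value here would not by itself show that graduates of the experimental course perform the required job differently enough to justify the cost of the experiment; that is precisely the point pursued in the discussion of statistical versus educational significance below.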
In such an evaluation technical problems may arise, such as:
- the design of the tests;
- the sampling technique;
- the statistical technique for inferring the statistical significance of any difference.

All these are aspects dealt with in social research methodology, and they are therefore beyond the purpose and scope of the present work. On the other hand, the use of a particular research and/or statistical technique will certainly depend on the nature of the particular project.

What is of possible interest here is to warn the evaluator and the administrator to look beyond statistical significance in research evaluation. In other words, statistically significant or not, differences observed between two target groups may not necessarily reveal the nature and importance of the change a project intended to bring about. In our particular example, for instance, it may be more fruitful to have the correctness-of-output test performed on the job, in an industry, rather than on the basis of a control group. For even if pupils exposed to the new curriculum differ from those exposed to the old one (i.e. they know more and do things better), this difference may not make the graduates perform the required job so differently as to justify the energy and time involved in experimenting with and implementing the new curriculum. This is a common complaint of policy-makers, who favour educational (performance) significance and not just statistical significance.

Summary and conclusions

The present monograph on project evaluation was primarily concerned with the managerial function of evaluation. Although acknowledged, this function is very often neglected. Thus, project evaluation tends to become a means to an end and jeopardizes, in the long run, its potential for effective programme management and control. The blame for this frequent misuse of the project evaluation effort should be shared equally by policy-makers, programme managers and evaluators.

This monograph sees evaluation as the process through which a decision-maker (at the various hierarchical echelons of a social system) is informed of the development of a project's experimentation or implementation, of its final results (impact) and of its performance when operating, with a view to bringing about appropriate corrective measures. To operate successfully, such a process has to meet the following conditions: meaningful information should be collected with reference to the project's objectives and its final results; and this information should be fed back to the decision-maker and the project's manager early enough, and in a comprehensible form, to allow them the time, opportunity and power to carry out the appropriate corrective action.

Meaningful information has to be defined for each project separately by those responsible for project management. It was argued, however, that there may be several agencies responsible for and interested in a project's evaluation. To avoid wasting time and money, the project designer will have to question all those interested in the project's evaluation about their objectives; they, in turn, will indicate the type of information to be collected.

The evaluation function will not be effective if the recipient of the evaluation information does not act upon it. In such a case, the evaluation effort is entirely useless and should not be undertaken.
To increase the effectiveness of evaluation it was suggested that, based on a conceptual framework, it should be integrated into project planning, implementation and operation. Such a conceptual framework takes into consideration simultaneously the various levels of management within organizations such as Unesco and government departments.

In order to minimize the inherent risk involved in project evaluation, particularly of large and complex social action programmes, it was argued that it is not enough to incorporate some vague evaluation clauses into the project design. It would be necessary to design the project's evaluation at the early stage of project implementation design and to link the two together. Where this is not feasible, it was suggested that at least specific evaluation guidelines be incorporated into the project's implementation design.

The preparation of a project evaluation design or of evaluation guidelines presupposes, of course, that the project is evaluable. This, however, is not always the case. For this reason it was argued that the evaluability of a project, together with the implied cost, should be carefully examined in advance.

The cost of evaluation is certainly a serious constraint when deciding whether projects should be evaluated or not and what methodology and technique should be used. This aspect is not usually considered in advance, and very often the evaluation methodology and technique employed depend on the available money rather than on the evaluation purpose. This unfortunate situation, due mainly to the insertion of evaluation clauses in project designs without specifying the evaluation effort in advance and budgeting it accordingly, should never be permitted in serious programme management efforts.

The monograph also accepted the continuous nature of the evaluation function, although it admitted that for practical reasons project evaluation has to be performed in discrete but sequential stages. For this reason, the monograph covered all possible project evaluation stages. It started with system performance evaluation, undertaken to discover operational defects and to suggest corrective measures, usually in the form of new projects. It went on to project selection, assuming that, among possible alternative projects and within a set of constraints, there is one project which is the most efficient. It covered in turn the experimentation and implementation stages and explicitly distinguished between the formative type of evaluation, which occurs during the experimentation stage, and the monitoring type of evaluation usually employed for managing project implementation. Lastly, it dealt with the impact evaluation necessary for an overall assessment of a project's final results (outcome) or of the degree of success of project operation.

During the discussion an effort was made to point out the project management implications of each of the various types of evaluation. Here it was argued that the project selection stage is perhaps the most critical one. It was suggested that a thorough planning effort be undertaken at this stage to increase the probabilities of project success. Certain types of projects, those with at least one of the following characteristics: innovative nature, long-lasting results, or high implementation costs, should be tested on a pilot basis before being broadly implemented.
It is evident that errors committed during these phases will irrevocably affect the effectiveness of the project. The evaluation of the project's final results (i.e. after its final implementation) obviously has reduced importance for purely project management purposes. Information collected from such an evaluation effort is often used by policy-makers (a) for organizational managerial control, (b) for detecting unintended results, (c) for assessing the effectiveness of the project with a view to supplementing the effort should the results of the project be found inadequate, etc.

The breadth of the subject forced the author to keep the discussion at a rather general level in order to make it meaningful and, it is hoped, useful to a broad audience, and more particularly to project planners and managers. There seems, therefore, to be a need for an ongoing effort to reduce the level of abstraction of the present monograph by means of specific papers, either on types of evaluation or on types of project. Such works, however, have to be written in such a way as to help project planners design the evaluation of their own projects. Many handbooks fail in this respect simply because their detailed procedural descriptions fail to provide information as to how to perform a particular operation. The present work has attempted to provide general guidelines on some of the important aspects of project evaluation design, but in order to keep its size and complexity within manageable limits it has very often had to omit explanatory details.

Bibliography

AHMANN, J.S., GLOCK, M.D. (eds.), Measuring and evaluating educational achievement, Boston, Allyn and Bacon, Inc., 1971.
ABERT, J.G., KAMRASS, M. (eds.), Social experiments and social program evaluation, Mass., Ballinger Publishing Co., 1974.
BAINBRIDGE, J., SAPIRIE, S., Health project management: a manual of procedures for formulating and implementing health projects, Geneva, World Health Organization, 1974.
BAUER, R.A. (ed.), Social indicators, Cambridge, Mass., The M.I.T. Press, 1966 (paper).
COLEMAN, J.S., Equality of educational opportunity, Washington, D.C., U.S. Office of Education, 1966.
DEUTSCH, K., The nerves of government, N.Y., The Free Press, 1967 (paper).
FREEMAN, H.E., "The present status of evaluation research", Paris, Unesco, SS.76/WS/10, August 1976.
GOSTOWSKI, Z. (ed.), Toward a system of human resources indicators for less developed countries. A selection of papers prepared for a Unesco research project, Poland, Ossolineum.
GUTTENTAG, M., STRUENING, E.L. (eds.), Handbook of evaluation research, California, Sage Publications, Inc., 1975.
HOLDEN, I., McILROY, P.K., Network planning in management control systems, London, Hutchinson Educational Ltd., 1970.
HUSEN, T. (ed.), et al., International study of achievement in mathematics, a comparison of twelve countries, Volumes I and II, Uppsala, Sweden, Almqvist & Wiksells Boktryckeri AB, 1967.
INTERNATIONAL BANK FOR RECONSTRUCTION AND DEVELOPMENT, INTERNATIONAL DEVELOPMENT ASSOCIATION, Appraisal of an agricultural and rural training project in Bangladesh, Report No. 680b-BD, February 18, 1976.
INTERNATIONAL INSTITUTE FOR EDUCATIONAL PLANNING (UNESCO), INTERNATIONAL BANK FOR RECONSTRUCTION AND DEVELOPMENT, "Report of the African Regional Seminar on educational evaluation", Dar es Salaam, Tanzania, 7 April-2 May 1975.
INTERNATIONAL INSTITUTE FOR EDUCATIONAL PLANNING (UNESCO), "Methodology for the evaluation of educational attainments", a project of the IBRD and IIEP, Progress Report, IIEP/RP/15/1, 12th September 1973.
INTERNATIONAL INSTITUTE FOR EDUCATIONAL PLANNING (UNESCO), "Methodology for the evaluation of educational attainments", a project of the IBRD and IIEP, Phase I Report, IIEP/RP/15/2, 24th January 1974.
JANTSCH, E., Perspectives of planning, Proceedings of the OECD Working Symposium on Long-range Forecasting and Planning, Bellagio, Italy, 27th October-2nd November 1968, Paris, OECD, 1969.
LOCKHEED AIRCRAFT INTERNATIONAL INC., Systems analysis of Sudan transportation, progress products A1, A4, A5 and A6, June 1966.
LYONS, G.M., "Evaluation research in Unesco: political and cultural dimensions" (prepared for the Unesco Symposium on "Evaluation methodology for social action programs and projects", Washington, D.C., September 20-24, 1976), Paris, Unesco.
MACLURE, S., Styles of curriculum development, Centre for Educational Research and Innovation (CERI), Paris, OECD, 1972.
McINTOSH, N.E., "Evaluation and research: aids to decision-making and innovation", OECD Third General Conference, Institutional Management in Higher Education, Paris, 13th-16th September 1976.
McLAUGHLIN, M.W., Evaluation and reform: the Elementary and Secondary Education Act of 1965, Title I, The Rand Corporation, January 1974.
OECD, Handbook on curriculum development, Centre for Educational Research and Innovation (CERI), Paris, 1975.
OECD, "The measurement of learning", Social Indicators Development Programme, Common Development Effort No. 2, Issues Paper, SME/SI/CDE2/76.21, Paris, 9th August 1976.
ROEMER, M., STERN, J.J., The appraisal of development projects, a practical guide to project analysis with case studies and solutions, New York, Praeger Publishers Inc., 1975.
ROSSI, P.H., WRIGHT, S.R., "Evaluation research: an assessment of current theory, practices and politics", Paris, Unesco, SS.76/WS/15, September 1976.
RUTMAN, L., "Planning project evaluations: a case study of a bilingual education project", Paris, Unesco, SS.76/WS/11, September 1976.
SCHNEIDER, H., National objectives and project appraisal in developing countries, Development Centre Studies, Paris, OECD, 1975.
STAKE, R.E. (ed.), et al., Case studies in the evaluation of educational programmes, Centre for Educational Research and Innovation (CERI), Paris, OECD.
STAKE, R.E., Evaluating educational programmes. The need and the response, Centre for Educational Research and Innovation (CERI), Paris, OECD, 1976.
STRUENING, E.L., GUTTENTAG, M. (eds.), Handbook of evaluation research, Volume I, sponsored by the Society for the Psychological Study of Social Issues, California, Sage Publications Inc., 1975.
TABA, H., Curriculum development. Theory and practice, foundations, process, design, and strategy for planning both primary and secondary curricula, Harcourt, Brace & World, Inc., 1962 (International Edition).
TRAPPL, et al. (eds.), Progress in cybernetics and systems research, New York, John Wiley, 1975, Vol. II.
TYLER, R., GAGNE, R., SCRIVEN, M., Area monograph series on curriculum evaluation, 1: Perspectives of curriculum evaluation, Chicago, Rand McNally and Co., 1967.
Unesco, The use of socio-economic indicators in development planning, Paris, 1976.
Unesco, "Guideline for project preparation mission", E/WS/312, May 1972.
UNITED STATES AGENCY FOR INTERNATIONAL DEVELOPMENT, Project evaluation guidelines, third edition, M.O. 1026.1 Supplement I, Office of Development Program Review and Evaluation, Washington, D.C., August 1974.
UNITED STATES AGENCY FOR INTERNATIONAL DEVELOPMENT, Evaluation handbook, second edition, M.O. 1026.1 Supplement II, Office of Program Evaluation, Washington, D.C., May 1974.
WALLER, J.D., et al., Monitoring for government agencies, an Urban Institute Paper, Washington, D.C., February 1976.
WHOLEY, J.S., "A methodology for planning and conducting project impact evaluations in Unesco fields", SS.76/WS/12, Paris, Unesco, September 1976.
WHOLEY, J.S., "Designs for evaluating the impact of educational television projects", SS.76/WS/13, Paris, Unesco, September 1976.
WILLIAMS, G., "Individual demand for education: Case study: United Kingdom", Paris, OECD, SME/ET.76.21 (mimeo).

ANNEX A

Questionnaire for designing a micro-educational evaluation1

A. The teacher as a transformation element2

1. Inputs

(a) Knowledge and experience

(I) On the subject
- Is the teacher's knowledge and experience of the subject taught adequate?
- How is it checked (formally)?
- Are there any complaints from parents and/or pupils about the inadequacies of teachers?
- Is teacher retraining a prerequisite for promotion or for continuing in the job?
- If yes, what are the time intervals?
- Is the teacher asked to produce and publish some theoretical work and/or the results of his experience?
- Are the teachers asked to give public lectures on educational matters?

(II) Teaching techniques
- Are the teachers aware of possible teaching techniques? How is this checked formally?
- Do the teachers adapt their teaching techniques according to, say, age, subject, size of the class, etc.?
- Can they change the teaching technique if they so wish? (This is an aspect related to the teacher's autonomy to control his inputs. In less centralized systems teachers can freely change not only their inputs but also their goals. As the system becomes administratively centralized, the teacher's degree of autonomy decreases; it would therefore be desirable for the educational system's evaluator to keep this question continuously in mind, raise it whenever applicable, and verify the degree of system autonomy.)

1. The design of the questionnaire follows the same sequence as the corresponding discussion in the text.
2. When appropriate, the questions should be put by subject. If the educational system's evaluator does not wish to undertake a very deep evaluation, he can accordingly decide what type of questions to use. If the system is administratively centralized, he can choose those questions which have to do mostly with systems and not necessarily with the teacher and/or the pupil, thanks to the standardized nature of most of the inputs.
- Are the various teaching techniques formally taught in teachers' colleges or other institutions responsible for teacher education? (This question should also be raised when dealing with teachers' colleges, etc.)
- If not, how is the problem faced by the education system?

(III) Pupil psychology
- Are the teachers aware of child psychology for the age-group they teach?
- Is this a subject which they have studied at teachers' college, or have they learnt it through experience?
- Are the complaints of pupils and/or parents about the use of inappropriate means of reward and punishment by the teacher heeded?
- What is the attitude of society at large (the community) to the use of reward and punishment?
- Do teachers follow society's attitudes on that matter rather than the theory?
- Are teachers authoritarian, democratic, etc., in their behaviour towards the students?
- Has the educational system issued a directive on this matter, or is the teacher free to behave the way he wishes?
- If the behaviour of the teacher is controlled, how is this done?

(IV) His pupils
- Does the teacher follow his pupils as they pass from grade to grade?
- Does the teacher know both the first and family names of his pupils?
- Does the teacher know the parents of his pupils?
- Does the teacher know the expectations of his pupils?
- Does the teacher hold informal meetings with his pupils?
- If yes, how often? If not, why not?
- What is the attitude of the community and of the educational system towards such informal gatherings?

(b) Aids

(I) Books
Textbooks. If the textbooks are selected by the educational system, we would wish to know the following:
- How are the textbooks written?
- Who decides the type of books to be used?
- Who evaluates them?
- Who approves them?
- How are they replaced?
- Are these textbooks imposed by the educational system? If yes, does the teacher recommend the use of additional textbooks? If not, how does the teacher choose the textbooks to be assigned?
- Does the teacher ask the pupils to take notes during his lectures?
- Does the teacher ask the pupils to keep to the assigned textbooks?
- Are the questions for formal and/or informal exams taken directly from the assigned textbooks?
- Are the exams evaluated with reference to the assigned textbooks?
- Does the teacher have to suggest and use textbooks other than those assigned by the educational system?

Auxiliary books
- Does the educational system allow the teacher to use exercises taken from sources other than the assigned textbooks?
- Do the teachers usually consult books other than the textbooks in order to prepare themselves?
- Do they use other books in their teaching effort?
- Do the teachers recommend their auxiliary books to their pupils?

(II) Educational technology
Audio-visual, including television
- Has the educational system placed any audio-visual aids at the teacher's disposal?
- Can teachers use these aids freely?
- Is the use of audio-visual aids compulsory or not?
- If not, do the teachers use them?
- Is television used in teaching?
- If yes, how:
a) as a substitute for teachers? b) to supplement them?

(III) Other
- Has the teacher at his disposal: a) a library? b) a laboratory (where applicable)? c) museums (where applicable)?
- Can the teacher make use of them freely, for himself and for his pupils?
- Are the library, laboratory, etc. well equipped, qualitatively and quantitatively?
- Do teachers make use of them when they exist?
- Do the pupils make use of them?

(c) Curricula

(I) When the curricula are standardized
- Are the teachers happy with the curricula?
- If dissatisfied, can they change them?
- Is there any control for evaluating the teacher's capability when curricula are changed?
- Is there any mechanism whereby the teachers may make an appeal if they are unhappy with new and/or old curricula?
- Do new teachers campaign for curricula changes?
- If yes, is there any strong resistance to that change on the part of older teachers?

(II) When the curricula are not standardized
- Does the educational system (school) set criteria to be followed by the teachers when defining subject matter?
- Are they asked to use their own textbooks (own publication), or may they use any suitable textbook?
- Is there any mechanism through which the educational system (school) evaluates the teacher's selection of the subject matter?
- Do the pupils complain of lack of relevance in what they are taught?
- Do parents (or others in the society) complain either of lack of relevance or of unacceptable ideology in what teachers teach?

(d) Time

(I) For actual teaching
- How many hours a week does the teacher actually teach a particular subject?
- Are these hours considered adequate for meeting his aims regarding the desired degree of pupil transformation?
- How many hours a week does a teacher teach in total (total weekly time-load)?
- Is the teacher's total load considered too much, reasonable, low, or too low?
- Do teachers complain of overloading?
- Can a teacher change the hours of actual teaching if he so wishes?
- Who decides the hours necessary for actual teaching, by subject?
- Is the size of the class (or are other factors) considered?
- How are the decisions taken?

(II) For preparation1
- Do the teachers prepare themselves before lecturing?
- Is there any usual procedure?
- Has the educational system established a control device to check the adequate preparation of the teachers?
- What is it?
- Is it used frequently?
- Do the teachers complain that they are too overloaded and lack time to prepare themselves?
- Do pupils complain that teachers appear in class unprepared?

1. It is clear that this item, as well as the following "time for control", is highly subjective, and it is rather difficult for the system's evaluator to get any reliable answers. There are, however, ways of checking (a) whether teachers prepare themselves before lecturing or merely rely on their experience, (b) whether teachers give their pupils enough exercises which require correction, and (c) whether the corrections made by the teachers are sufficiently thorough.

(III) For evaluation
- Does the teacher give pupils classwork and/or homework?
- Does he correct it?

(e) Teacher's motivation
No questions will be asked here because of the problematic nature of this item. Probable questions may relate to income, career opportunities, autonomy, etc.

(f) Pupil response
Relevant questions will be raised in the discussion of the evaluation sub-system below.
This is feedback information on the teacher's evaluation of his students. It has to do with the response the pupils show to the teacher's teaching effort, as seen by the teacher himself. It is the outcome of the teacher as an evaluation element, directed at himself in his function as a transformation element. See Diagram 7.

(g) Pressure on the teacher
Appropriate questions are raised in the discussion of the evaluation sub-system.

(h) Classroom conditions
- Are there any standards for school buildings pertaining to heating (air conditioning), lighting, ventilation, size, etc.?
- If so, is there any operative control system for enforcing them?
- How many desks are there in each classroom?
- How are they placed?
- Do pupils complain about classroom conditions in general?
- Do parents complain?
- Do teachers complain?

(i) Teacher's health conditions
- Are there frequent absences due to health reasons?
- Are there adequate medical services at the teachers' disposal?
- Are teachers covered by social security?
- Is there any obligatory (annual or otherwise) medical examination?
- Do teachers complain of being compelled to return to school prematurely after an illness?
- Is there any teacher substitute service in operation?

2. Outcome

As discussed earlier, the outcome of the teaching process taken independently is in the form of the teacher's teaching effort, which is usually evaluated by the school inspectors through direct observation of the way teachers teach, by means of the inspectors' perception of what good teaching implies. It is, therefore, extremely difficult to propose any appropriate questions unless the evaluation is done indirectly by considering the inputs the teacher uses. In that case, the above questions on inputs also apply here.

B. The pupil as a transformation element

1. Inputs

(a) Teacher's transformation effort
As was said when discussing the teacher's outcome, the nature of the teacher's teaching effort is almost unknown to us, although it is a very important factor in the pupil's transformation. The effectiveness of the teacher's effort can only be evaluated directly. We will deal with this below when considering both the student and the teacher (plus others) as control elements.

(b) Aids

(I) Books
Textbooks
- Have all pupils the required textbooks?
- Do pupils use textbooks other than those required by the educational system?
- Do pupils prefer their official textbooks to other textbooks?
- If not, why not?

Additional books
- Do pupils use additional books (such as dictionaries, encyclopaedias, etc.) in their studies?
- Is the use of such books recommended by their teachers?
- If so, do they ask the pupils to present views from such books?
- Do the school libraries make such books available to the pupils?
- What is the attitude of the educational system?
- Do the pupils complain of not having such books?

(II) Various additional aids
- Do pupils use additional aids (such as instruments for geometry and drawing, various samples of stones and metals, maps, etc.) in their studies?
- If yes, do they belong to the pupil?
- Do the teachers ask the pupils to use such aids?
- Do the schools provide the teacher and the pupils with such aids?
- What is the attitude of the educational system regarding their use?
- Do pupils complain of not having such aids?
- Do teachers complain that pupils do not have such aids?
(c) Time

(I) Time in the classroom
- Do pupils complain that the time spent in the classroom is excessive? insufficient?
- Do teachers complain that pupils do not pay attention in the classroom?
- Do pupils attend classroom lectures frequently?
- Is school attendance enforced by the school? by the teacher?
- If yes, by what methods?
- If not, why not?

(II) Time spent on homework
- Do pupils have to do much homework?
- Do they complain of having too much? not enough?
- Is homework enforced by the educational system or solely by the teacher?
- Do the teachers complain that pupils do not do their homework?
- What importance does the teacher (school) attach to homework as against classroom work?

(III) Time spent (or not) on competitive or complementary educational activities
- Do pupils participate in extra-curricular activities?
- Is this permitted by the school?
- Does the school encourage such activities?
- If yes, how? If not, why not?
- Do pupils have to go outside the school for foreign languages, music, dancing, etc.?
- Do pupils complain of too much work?
- Do parents encourage their children to participate in educational activities outside the school?
- What is the educational system's attitude?
- Do pupils take private lessons (tutoring) to meet school requirements?1
- What is the attitude of the educational system towards tutoring (i.e. are the teachers allowed to tutor their students for money)?

1. This is a question which should be raised when evaluating the teacher's effort.

(d) Method of study
Because this is a very subjective matter, usually influenced by the teacher, there will not be any specific questions. If interested, the educational evaluator should make a survey of the teaching techniques employed by the teachers.

(e) Background

(I) Age1
- Is age a factor in entering a particular transformation sub-system?
- If yes, is it linked to other factors such as family environment, etc.?
- If not, is the pupils' social environment homogeneous?
- Are there any discussions about changing (lowering or raising) the age factor?
- Are there any complaints from parents that the age limit is low or high?
- Are there any complaints from parents that the school requirements are above or below their children's capabilities?
- Is there any policy which allows the acceptance and/or promotion of a student irrespective of his age?

(II) Previous school attendance2
- Is the development of a curriculum based on the knowledge pupils have acquired in previous years?
- If yes, what degree of pupil excellence is required (excellent, very good, good, fair)?
- How is this knowledge checked?
- Is this also the case from one educational level to the next?
- Is there any entrance examination necessary for entering a new educational level?
- If so, what are the requirements for determining the previously acquired knowledge?
- Within a particular educational level, do pupils have to repeat a certain grade if they fail, or can they proceed to the next?
- If they fail, is there any time-limit for remaining in the same grade?
- If they cannot repeat, is there any selection mechanism when they enter the next grade?
- For how long can a student be absent from the educational system, after graduating from a certain educational level, and still be eligible to return to the educational system at the following level or grade of the same level?
(III) & (IV) Family and socio-cultural environment* - W h a t is the father's occupation?4 - W h a t is the father's education (measured usually in terms of total years of formal edu- cation)? - W h a t is the mother's education? - W h a t is the population of the town (village, etc.)?5 - Is there any radio and television system in operation in the town? 1. Age is an important factor for entrance into pre-primary and primary stages. A s the child grows, the age factor determining his ability to absorb and assimilate new knowledge, decreases in importance. 2. These questions have mostly to do with detecting the existing coupling between the various transformation sub-systems in terms of curricula, entrance requirements, etc. referred to above when discussing the "correctness of output" test. 3. These two factors are of great importance during the first years of schooling. A s the individual develops, the importance of these factors is relatively decreasing. 4. Father's occupation is usually employed as one of the main indicators for showing family social status. 5. From the point of view of the entire educational system, we can reverse the question and ask for the regional distribution of students. 122Annex A - Are there other cultural institutions? - D o pupils participate in cultural events, listen to radio, watch television, read news- papers, etc? (0 Inter-pupil relationships - H o w is the size of a class determined? - W h a t is the average size of a class? - D o teachers encourage free discussions in class? - W h a t is the educational system's attitude towards free discussion in class? - Are pupils encouraged to ask questions? - Are there group activities, group work, pupil associations, etc? (g) Pupil motivation - D o pupils participate actively in class work? - D o pupils do their homework? - Are they frequently absent without being ill? - D o they complain about school life in general? - D o they complain about a teacher being too severe? - D o they participate in school activities? - D o they have and sing a particular school song? - D o they wear a uniform, school caps or suits? - If so, do they like them? (h) Pupil's success Since this input is an output of the student's evaluation, it will be dealt with when discuss- ing the evaluation sub-system. (i) Curriculum The intention of the educational system's evaluator here will be to detect any objection that pupils (and/or their parents) m a y have to the aims of the curriculum they follow: - D o pupils complain about the lack of relevance of the curriculum? - D o parents complain about the curriculum? - If yes, w h y do they do so? - D o pupils complain about spending too m u c h time on a particular subject and less on others? - D o pupils complain about a subject being very difficult (or at the same level as in previous grades)?1 1. This question will help in detecting the consistency and appropriateness of a curriculum within the same educational level and/or from one level to another. In many educational systems, as for example in the French primary level, curricula are designed in such a way as to allow much repetition of the previous year's subjects evidently to strengthen the knowledge acquired in the previous year. Sometimes, however, unnecessary repetition may have the opposite results, affecting negatively a pupil's motivation. 123Project evaluation methodologies and techniques (j) Classroom conditions (the same as for the teacher) (k) Health conditions - H o w often are pupils absent from school for health reasons? 
- Is there any regular pupil medical inspection service?
- If yes, how frequently are pupils inspected?
- Are pupils covered by social security?
- Can pupils have meals at school?
- If not, are they fed adequately at home?
- Do parents complain that their children feel tired when they are back home?

2. Outcome

As stated earlier, questions related to the outcome of the learning process, which corresponds to the final outcome of the teaching and learning processes, have already been raised in discussing the macro-educational evaluation. They coincide especially at some important exit points of the educational system, e.g. the end of the compulsory level, the end of secondary education and the end of higher education studies. The tests already proposed provide the basis for a more meaningful assessment of the true value of the educational result at the various stages, one which goes, and should go, beyond assessing whether or not a pupil learnt all that the educational system had to teach him. It is necessary to go beyond this finding for two main reasons: firstly, because what the educational system had to teach the pupil may not have been of much importance for the pupil's further development and career (relevance of education) and, secondly, because there are so many factors involved in the highly complex teaching and learning processes that it does not make sense simply to rely on a pupil's achievement test for evaluating the educational system's performance.

As to the yearly outcomes, one could use a similar test and, more specifically, one could also consider the following points:
- the number of pupils repeating, dropping out and succeeding (with all due reservation regarding their interpretation);
- the number of promoted pupils by marks received (if there is any precise grading system, again with all due reservation as to interpretation).

ANNEX B

Three conversations and a commentary

1. A conversation between a person who will commission an evaluation study and an evaluation specialist favouring a consequence orientation1

Commissioner: Thanks for taking the time to see me today. I suspect that your teaching schedule at the University keeps you hopping, but I've been told that you occasionally carry out educational evaluations.
Evaluator: That's true, my normal teaching load here at the University is pretty heavy, but this quarter is about over. Besides, I am working now with a small group of graduate students in an evaluation seminar, and when I mentioned the possibility of evaluating your district's project in Reality-Rooted Reading they became really interested.
C: You mean you might use students in carrying out an evaluation?
E: It's really good experience for them, and they often can make excellent contributions to the evaluation itself. Of course, one must be careful not to exploit students in such situations. Too many of my colleagues view graduate students as a somewhat advanced form of migrant workers.
C: Well, did you have a chance to read the write-up I sent you of our new Reality-Rooted Reading programme? We think it holds great promise as a way to get poor readers more involved in developing their reading skills.
E: I did read the document, and you may be correct. There are certainly a number of positive features in the programme. I must confess, though, that I was disturbed by the apparent lack of replicability in the programme itself.
It sounds more like a six-ring circus than anything which, if it does work, could be used again in the future. If you're going to the trouble of evaluating this intervention, I assume that you're contemplating its use in the future. Interventions that are not at least somewhat replicable can't really be employed very well in the future. Is your Reality-Rooted Reading programme going to be essentially reproducible?

1. We acknowledge with thanks permission granted by the OECD to reproduce pp. 64-75 and 79-84 of R.E. Stake, Evaluating educational programmes. The need and the response, Paris, CERI, OECD, 1976.

C: I'm glad you brought that up. The planning committee which has been working out the programme's details became aware of that problem a few weeks ago. They're in the process of devising instructional guides which will substantially increase the replicability of the programme.
E: I just hope the planning committee itself is rooted in reality.
C: Well, what about the evaluation? Will you take it on? Our district school board is demanding formal evaluations of all new programmes such as this one, so we can't really get under way until the responsibility for evaluation has been assigned.
E: I'll need to get some questions answered first.
C: Fire away.
E: What's the purpose of the evaluation? In other words, what's going to happen as a consequence of the evaluation? Unless the evaluation is going to make a genuine difference in the nature of the instructional programme, we wouldn't want to muck with it. Too many of us here at the University have experienced the frustrations of carrying out research studies whose only purpose seemed to be that of widening the bindings of research journals. Unless an evaluation satisfies the "so what?" criterion, I'm sure we wouldn't be interested.
C: Well, the district superintendent has indicated that the continuation of the new programme will be totally dependent upon the results of its evaluation. That satisfy you?
E: Sure does. Now, there was a bit of rhetoric in your programme description about appraising the programme in terms of the "uniqueness of its innovative features". Does that imply you're more concerned with evaluating the procedural aspects of the programme than with evaluating the results yielded by those procedures? This is a particularly important issue for me.
C: Well, we are very proud of the programme's new features. What are you getting at?
E: There are too many educators who are so caught up with the raptures of an instructional innovation that they are almost oblivious of its effects on learners. And that, after all, is why we're in the game. Our instructional interventions should help learners. I want to be sure that, although we will consider the procedures employed during the programme, the main emphasis of the evaluation will focus on the consequences of that programme's use.
C: Oh, we'd be perfectly agreeable to that. After all, you people are the experts. Besides, I guess I share your point of view.
E: I also noted an almost exclusive preoccupation with cognitive, that is, intellectual outcomes of the programme. Your people seemed to be concerned only about the skills of reading. Aren't you also worried about pupils' attitudes toward reading?
C: Of course, but you can't assess that kind of stuff, can you? I thought the affective domain was off-limits for the kinds of evaluators who, as you apparently are, are concerned with evidence.
E: It's tough to do, but there are some reasonably good ways of getting evidence regarding learners' affect toward an instructional programme. We'd want to use them.
C: How about tests? Will you have to build lots of new ones?
E: My guess is that we will have to devise some new measures. The standardized reading tests your district now uses will be worthless for this kind of an evaluation. We'll need to see if there are any available criterion-referenced tests which we can use or adapt.
C: Do you people always have to use tests?
E: No, but it is important to get sufficient evidence regarding a programme's effects so that we are in a better position to appraise its consequences than merely by intuiting those consequences.
C: You'll still have to make judgments, won't you?
E: Certainly, but judgments based on evidence tend to be better than judgments made without it. Properly devised measuring devices can often be helpful in detecting a programme's effects, both those that were intended as well as any unanticipated effects.
C: How come I haven't heard you say "instructional objectives" once during our conversation? I thought you folks were all strung out on behavioural objectives.
E: Well, clearly stated instructional objectives represent a useful way of describing a programme's intended effects. But the effects of the programme are what we want to attend to, not just the educator's utterances about what was supposed to happen. Consequence-oriented educational evaluators can function effectively even without behavioural objectives.
C: Amazing!
E: There are a couple of other areas we have to get into. I hope you're sincere in wanting to contrast the new programme with alternative ways that the money it's costing might be spent.
C: Absolutely.
E: And, finally, the matter of evaluator independence. Will we have the right to release the results of our evaluation to all relevant decision-makers involved in this project, including the public?
C: You think that's important to get clarified now?
E: It might head off some sticky problems later. We'd like that kind of independence.
C: I think it can be assured. I'll want to check it out with my division chief, however.
E: There's also a related kind of independence I want to discuss. Unlike some of the independent evaluation firms that have sprung up in the past few years, we really aren't in the evaluation business on a full-time basis; hence, in a sense, we don't need your district's repeat business. Therefore, we'll be inclined to call our shots openly, even if it means that the programme is evaluated adversely.
C: That's related to your earlier point about independence in reporting the evaluation's results.
E: You bet.
C: Okay, we're willing to play by the rules. I hope it turns out positively, though.
E: So do I. Our kids could surely do with a bit of help in their reading programme.
C: Well, what next?
E: Why don't I and some of my students whip up a detailed plan of how we want to do the evaluation and fire it off to you by mail, say, in two weeks.
C: Fine. If we have any problems with it, we can get back to you. All right?
E: Sure.
C: We haven't talked about money yet. How much will this thing cost?
E: We'll include a budget with our evaluation plan. But, because university professors are so handsomely rewarded by their own institutions, I'm sure the amount will be a pittance, perhaps a used chalkboard eraser or two.
C: You guys do live in an ivory tower, don't you?
E: Didn't you take the elevator on the way up?

2. A conversation between a person who will commission an evaluation study and an evaluation specialist favouring a responsive approach

Commissioner: As I said in my letter, I have asked you to stop by because we need an evaluator for our National Experimental Teaching Programme. You have been recommended very highly. But I know you are very busy.
Evaluator: I was pleased to come in. The new Programme is based on some interesting ideas and I hope that many teachers will benefit from your work. Whether or not I personally can and should be involved remains to be seen. Let's not rule out the possibility. There might be reasons for me to set aside other obligations to be of help here.
C: Excellent. Did you have a chance to look over the programme materials I sent you?
E: Yes, and by coincidence I talked with one of your field supervisors, Mrs. Bates. We met at a party last week. She is quite enthusiastic about the plans for group problem-solving activities.
C: That is one thing we need evaluation help with. What kind of instruments are available to assess problem-solving? Given the budget we have, should we try to develop our own tests?
E: Perhaps so. It is too early for me to tell. I do not know enough about the situation. One thing I like to do is to get quite familiar with the teaching and learning situations, and with what other people want to know, before choosing tests or developing new ones. Sometimes it turns out that we cannot afford, or cannot expect to get, useful information from student performance measures.
C: But surely we shall need to provide some kind of proof that the students are learning more, or are understanding better, than they did before! Otherwise how can we prove the change is worthwhile? We do have obligations to evaluate this programme.
E: Perhaps you should tell me a little about those obligations.
C: Yes. Well, as you know, we are under some pressure from the Secretary (of Health, Education and Welfare), from Members of Congress, and the newspapers. They have been calling for a documentation of "results". But just as important, we in this office want to know what our programme is accomplishing. We feel we cannot make the best decisions on the amount of feedback we have been getting.
E: Are there other audiences for information about the National Experimental Teaching Programme?
C: We expect others to be interested.
E: Is it reasonable to conclude that these different "audiences" will differ in what they consider important questions, and perhaps even in what they would consider credible evidence?
C: Yes, the researchers will want rigor, the politicians will want evidence that the costs can be reduced, and the parents of students will want to know it helps their children on the College Board Examinations. I think they would agree that it takes a person of your expertise to do the evaluation.
E: And I will look to them, and to other important constituencies, teachers and taxpayers, for example, to help identify pressing concerns and to choose kinds of evidence to gather.
C: Do you anticipate we are going to have trouble?
E: Of course, I anticipate some problems in the programme. I think the evaluator should check out the concerns that key people have.
C: I think we must try to avoid personalities and stick to objective data.
E: Yes, I agree. And shouldn't we find out which data will be considered relevant by people who care about this programme? And some of the most important facts may be facts about the problems people are having with the programme. Sometimes it does get personal.
C: The personal problems are not our business. It is important to stick to the impersonal, the "hard-headed" questions, like "How much is it costing?" and "How much are the students learning?"
E: To answer those questions effectively I believe we must study the programme, and the communities, and the decision-makers who will get our information. I want any evaluation study I work on to be useful. And I do not know ahead of time that the cost and achievement information I could gather would be useful.
C: I think we know what the funding agencies want: information on cost and effect.
E: We could give them simple statements of cost, and ignore such costs as extra work, lower morale, and opportunity costs. We could give them gain scores on tests, and ignore what the tests do not measure. We know that cost and effect information is often superficial, sometimes even misleading. I think we have an obligation to describe the complexities of the programme, including what it is costing and what its results appear to be. And I think we have an obligation to say that we cannot measure these important things as well as people think we can.
C: Well, surely you can be a little less vague as to what you would do. We have been asked to present an evaluation design by a week from next Wednesday. And if we are going to have any pretesting this year we need to get at it next month.
E: I am not trying to be evasive. I prefer gradually developed plans: "progressive focusing", Parlett and Hamilton call it. I would not feel pressed by the deadline. I would perhaps present a sketch like this one (drawing some papers from a folder), one which Les McLean used in the evaluation of an instant-access film facility. His early emphasis was on finding out what issues most concern the people in and around the project.
C: I think of that as the Programme Director's job.
E: Yes, and the evaluation study might be thought of, in part, as helping the Programme Director with his job.
C: Hmm. It is the Secretary I was thinking we would be helping. You made the point that different people need different information, but it seems to me that you are avoiding the information that the Secretary and many other people want.
E: Let's talk a bit about what the Secretary, or any responsible official, wants. I am not going to presume that a cost-effectiveness ratio is what he wants, or what he would find useful. We may decide later that it is.
First of all, I think that what a responsible official wants in this situation is evidence that the National Programme people are carrying out their contract, that the responsibility for developing new teaching techniques continues to be well placed, and that objectionable departures from the norms of professional work are not occurring.
Second, I think a responsible official wants information that can be used in discussions about policy and tactics.
The conditionality of our ratios and our projections is formidable. What we can do is acquaint decision-makers with this particular programme, with its activities and its statistics, in a way that permits them to relate it to their experiences with other programmes. We do not have the competence to manage educational programmes by ratios and projections; management is still an art. Maybe it should remain an art, but for the time being we must accept it as a highly particularized and judgmental art.
C: I agree, in part. Many evaluation studies are too enormously detailed for effective use by decision-makers. Many of the variables they use are simplistic, even though they show us how their variables correlate with less simplistic measures. Some studies ignore the unrealistic arrangements that are made as experimental controls. But those objectionable features do not make it right to de-emphasize measurement. The fact that management is an art does not mean that managers should avoid good technical information. What I want from an evaluation is a good reading, using the best techniques available, a good reading of the principal costs and of the principal benefits. I have no doubt that the evaluation methodology we have now is sufficient for us to show people in government, in the schools, and among the general public what the programme has accomplished.
E: If I were to be your evaluator I would get you that reading. I would use the best measures of resource allocation, and of teaching effort, and of student problem-solving we can find. But I would be honest in reporting the limitations of those measures. And I would find other ways also of observing and reporting the accomplishments and the problems of the National Programme.
C: That of course is fair. I do not want to avoid whatever real problems there may be. I do want to avoid collecting opinions as to what problems (and accomplishments) there might be. I want good data. I want neither balderdash nor gossip. I want my questions answered and I want the Secretary's questions answered. And those questions might change as we go along. You would call that "formative evaluation"?
E: Sometimes. I would also call it "responsive".
C: What kind of final report would you prepare for us?
E: I brought along a couple of examples of previous reports. I can leave them with you. I can provide other examples if you would like. Whether there is a comprehensive or brief final report, whether there is one or several, those decisions can be made later.
C: No, I'm afraid that simply won't do. If we are to commit funds to an evaluation study, we must have a clear idea in advance of how long it is going to take, what it will cost, and what kind of product to expect. That does not mean that we could not change our agreement later.
E: If you need a promise at the outset, we can make it. Believe me, I do not believe it is in your best interests to put a lot of specifications into the "contract". I would urge you to choose your evaluator in terms of how well he has satisfied his previous clients more than on the promises he would make so early.
C: It would be irresponsible of me not to have a commitment from him.
E: Of course. And your evaluator should take some of the initiative in proposing what should be specified and what options should be left open.
C: Let me be frank about one worry I have.
I am afraid I may get an evaluator who is going to use our funding to "piggy-back" a research project he has been wanting to do. He might agree to do "our" evaluation study but it might have very little to do with the key decisions of the Experimental Teaching Programme.
E: It is reasonable to expect any investigator to continue old interests in new surroundings. When you buy him you buy his curiosities. He may develop hypotheses, for example, about problem-solving and teaching style, hypotheses that sound most relevant to the programme, but the test of these hypotheses may be of little use to those who sponsor, operate, or benefit from the programme. His favourite tactics, a carefully controlled comparative research effort or a historical longitudinal research study, for example, might be attractive to your staff. But he is not inclined to talk about how unnecessary this approach may be. The inertia in his past work may be too strong. You are right, there is a danger. I think it can best be handled by looking at the assignments the evaluator has had before, by getting him to say carefully what he is doing and why, by the sponsor saying very carefully what he wants and does not want, and by everybody being sceptical as to the value of each undertaking, and suggesting alternatives.
C: Would you anticipate publishing the evaluation study in a professional journal?
E: Even when an article or book is desired it is rare for an evaluation study to be suitable for the professional market. Evaluation studies are too long, too multi-purposive, too non-generalizable and too dull for most editors. Research activities within the evaluation project sometimes are suitable for an audience of researchers. I usually suppose that my evaluation work is not done for that purpose. If something worth publishing became apparent I would talk over the possibilities with you.
C: I think something like that should be in writing. What other assurances can you give me that you would not take advantage of us? Do you operate with some specific "rules of confidentiality"?
E: I would have no objection to a contract saying that I would not release findings about the project without your authorization. I consider that the teachers, administrators, parents and children also have rights here. Sometimes I will want to get a formal release from them. Sometimes I will rely on my judgment as to what should and should not be made public, or even passed along to you. In most regards I would follow your wishes. If I should find that you are a scoundrel, and it is relevant to my evaluation studies, I will break my contract and pass the word along to those whom I believe should know.
C: I have nothing to lose, but others involved may have. I do not want to sanction scurrilous muck-raking in the name of independent evaluation. I wonder if you are too ready to depend on your own judgment. What if it is you who are the scoundrel?
E: I would expect you to expose me.
C: By exposing you I would be exposing my bad judgment in selecting you. The line of thought I would return to is the safeguard you would offer us against mismanagement of the evaluation study.
E: The main safeguard, I think, is what I was offering at the beginning: communication and negotiation. In day-to-day matters I make many decisions, but not alone. My colleagues, my sponsors, my information sources help make those decisions.
A good contract helps, but it should leave room for new responsibilities to be exercised. It should help assure us that we will get together frequently and talk about what the evaluation study is doing and what it should be doing.
C: What about your quickness to look for problems in the programme? Perhaps you consider your own judgment a bit too precious.
E: I do not think so. Perhaps. I try to get confirmation from those I work with and from those who see things very differently than I do. I deliberately look for disconfirmation of the judgments I make and the judgments I gather from others. If you are thinking about the judgments of what is bad teaching and learning, I try to gather the judgments both of people who are more expert than I and of those who have a greater stake in it than I. I cannot help but show some of my judgments, but I will look for hard data that support my judgment and I will look just as hard for evidence that runs counter to my opinion.
C: That was nicely said. I did not mean to be rude.
E: You speak of a problem that cuts deeply. There are few dependable checks on an evaluator's judgment. I recognise that.
C: You would use consultation with the project staff and with me, as a form of check and balance.
E: Yes. And I think that you would feel assured by the demands I place upon myself for corroboration and cross-examination of findings.
C: Well, there seems to me to be a gap in the middle. You have talked about how we would look for problems and how you would treat findings, but will there be any findings? What will the study yield?
E: If I were to be your evaluator we might start by identifying some of the key aims, issues, arrangements, activities, people, etc. We would ask ourselves what decisions are forthcoming, what information we would like to have. I would check these ideas with the programme staff. I would ask them to look over some things I and other evaluators have done in the past, and say what looks worth doing. The problem would soon be too big a muddle, and we would have to start our diet.
C: I don't care much for the metaphor.
E: That may be as good a basis as any for rejecting an evaluator: his bad choice of metaphors.
C: I've just realized how late it is. I am hoping not to be rejecting any evaluators today. Perhaps you would be willing to continue this later.
E: Let me make a proposal. I appreciate the immediacy of the situation. I know a young woman with a doctorate and research experience who might be available to co-ordinate the evaluation work. If so, I could probably be persuaded to be the director, on a quarter-time basis. Let me go over your materials with her. We would prepare a sketch of an evaluation plan, and show it to you along with some examples of her previous work.
C: That is a nice offer. Let me look at your examples and think about it before you go ahead. Would it be all right if I called you first thing tomorrow morning? Good. Thanks very much for coming by.

3. A conversation between a person who is commissioning an independent evaluation study and the evaluator who favours a "goal-free" approach

Commissioner: Well, we're very glad you were able to take this on for us. We consider this programme in reading for the disadvantaged to be one of the most important we have ever funded.
I expect you'd like to get together with the project staff as soon as possible (the director is here now) and of course, there's quite a collection of documents covering the background of the project that you'll need. We've assembled a set of these for you to take back with you tonight.
Evaluator: Thanks, but I think I'll pass on meeting the staff and on the material. I will have my secretary get in touch with the director soon, though, if you can give me the phone numbers.
C: You mean you're planning to see them later? But you've got so little time; we thought that bringing the director in would really speed things up. Maybe you'd better see him. I'm afraid he'll be pretty upset about making the trip for nothing. Besides, he's understandably nervous about the whole evaluation. I think his team is worried that you won't really appreciate their approach unless you spend a good deal of time with them.
E: Unfortunately, I can't both evaluate achievements with reasonable objectivity and also go through a lengthy indoctrination session with them.
C: Well, surely you want to know what they are trying to do, what's distinctive about their approach?
E: I already know more than I need to know about their goals: teaching reading to disadvantaged youngsters, right?
C: But that's so vague. Why, they developed their own instruments, and a very detailed curriculum. You can't cut yourself off from that! Otherwise, you'll finish up criticizing them for failing to do what they never tried to do. I can't let you do that. In fact, I'm getting a little nervous about letting you go any further with the whole thing. Aren't you going to see them at all? You're proposing to evaluate a three-million-dollar project without even looking at it?
E: As far as possible, yes. Of course, I'm handicapped by being brought in so late and under a tight deadline, so I may have to make some compromises. On the general issue, I think you're suffering from some misconception about evaluation. You're used to the rather cosy relationship which often, in my view, contaminates the objectivity of the evaluator. You should think about the evaluation of drugs by the double-blind approach...
C: But even there, the evaluator has to know the intended effect of the drug in order to set up the tests. In the educational field, it's much harder to pin down goals and that's where you'll have to get together with the developers.
E: The drug evaluator and the educational evaluator do not even have to know the direction of the intended effect, stated in very general terms, let alone the intended extent of success. It's the evaluator's job to find out what effects the drug has, and to assess them. If (s)he is told in which direction to look, that's a handy hint but it's potentially prejudicial. One of the evaluator's most useful contributions may be to reconceptualize the effects, rather than regurgitating the experimenter's conception of them.
C: This is too far-out altogether. What are you suggesting the evaluator do: test for effects on every possible variable? He can't do that.
E: Oh, but he has to do that anyway. I'm not adding to his burden. How do you suppose he picks up side effects? Asks the experimenter for a list? That would be cosy. It's the evaluator's job to look out for effects the experimenter (or producer, etc.) did not expect or notice.
The so-called "side effects", whether good or bad, often wholly determine the outcome of the evaluation. It's absolutely irrelevant to the evaluator whether these are "side" or "main" effects; that language refers to the intentions of the producer, and the evaluator isn't evaluating intentions but achievements. In fact, it's risky to hear even general descriptions of the intentions, because it focuses your attention away from the "side effects" and tends to make you overlook or downweight them.
C: You still haven't answered the practical question. You can't test for all possible effects. So this posture is absurd. It's much more useful to tell the producer how well he's achieved what he set out to achieve.
E: The producer undoubtedly set out to do something really worthwhile in education. That's the really significant formulation of his goals and it's to that formulation the evaluator must address himself. There's also a highly particularized description of the goals (or there should be) and the producer may need some technical help in deciding whether he got there, but that certainly isn't what you, as the dispenser of taxpayers' funds, need to know. You need to know if the money was wasted or well spent, etc.
C: Look, I already had advice on the goals. That's what my advisory panel tells me when it recommends which proposal to fund. What I'm paying you for is to judge success, not the legitimacy of the direction of effort.
E: Unfortunately for that way of dividing the pie, your panel can't tell what configuration of actual effects would result, and that's what I'm here to assess. Moreover, your panel is just part of the whole process that led to this product. They're not immune to criticism, nor are you, and nor is the producer. (And nor am I.) Right now, you have, with assistance, produced something, and I am going to try to determine whether it has any merit. When I've produced my evaluation, you can switch roles and evaluate it, or get someone else to do so. But it's neither possible nor proper for an evaluator to get by without assessing the merits of what has been done, not just its consonance with what someone else thought was meritorious. It isn't proper because it's passing the buck, dodging the issue (or one of the issues). It isn't possible because it's almost certain that no one else has laid down the merits of what has actually happened. It's very unlikely, you'll agree, that the producer has achieved exactly the original goals, without shortfall, overrun or side effects. So, unless you want to abrogate the contract we just signed, you really have to face the fact that I shall be passing on the merits of whatever has been done, as well as determining exactly what that is.
C: I'm thinking of at least getting someone else in to do it too, someone with a less peculiar notion of evaluation.
E: I certainly hope you do. There's very little evidence about the interjudge reliability of evaluators. I would of course cooperate fully in any such arrangement by refraining from any communication whatsoever with the other evaluator.
C: I'm beginning to get the feeling you get paid rather well for speaking to no one. Will you kindly explain how you're going to check on all variables? Or are you going to take advantage of the fact that I have told you it's a reading programme? I'm beginning to feel that I let slip some classified information.
What's your idea of an ideal evaluation situation: one where you don't know what you're evaluating?
E: In evaluation, blind is beautiful. Remember that justice herself is blind, and good medical research is double-blind. The educational evaluator is severely handicapped by the impossibility of double-blind conditions in most educational contexts. But (s)he must still work very hard at keeping out prejudicial information. You can't do an evaluation without knowing what it is you're supposed to evaluate (the treatment) but you do not need or want to know what it's supposed to do. You've already told me too much in that direction. I still need to know some things about the nature of the treatment itself, and I'll find those out from the director, via my secretary, who can filter out surplus data on intentions, etc., before relaying it to me. That data on the treatment is what cuts the problem down to size; I have the knowledge about probable or possible effects of treatments like that, from the research literature, that enables me to avoid the necessity of examining all possible variants.
C: Given the weakness of research in this area, aren't you still pretty vulnerable to missing an unprecedented effect?
E: Somewhat, but I have a series of procedures for picking these up, from participant observation to teacher interviews to sampling from a list of educational variables. I don't doubt I slip up, too; but I'm willing to bet I miss less than anyone sloshing through the swamp towards goal-achievement. I really think you should hire someone else to do it independently.
C: We really don't have the budget for it... maybe you can do something your way. But I don't know how I'm going to reassure the project staff. This is going to seem a very alien, threatening kind of approach to them, I'm afraid.
E: People that feel threatened by referees who won't accept their hospitality don't understand about impartiality. This isn't support for the enemy, it's neutrality. I don't want to penalize them for failing to reach over-ambitious goals. I want to give them credit for doing something worthwhile in getting halfway to those goals. I don't want to restrict them to credit for their announced contracts. Educators often do more good in unexpected directions than the intended ones. My approach preserves their chances in those directions. In my experience, interviews with project staff are excessively concerned with explanations of shortfall. But shortfall has no significance for me at all. It has some for you, because it's a measure of the reliability of the projections they make in the future. If I were evaluating them as a production team, I'd look at that as part of the track record. But right now I'm evaluating their product: a reading programme. And it may be the best in the world even if it's only half as good as they intended. No, I'm not working in a way that's prejudiced against them.
C: I'm still haunted by a feeling this is an unrealistic approach. For example, how the devil would I ever know who to get as an evaluator except in terms of goal-loaded descriptions? I got you (in fact, I invited you on the phone) to handle a "reading programme for disadvantaged kids", which is goal-loaded. I couldn't even have worked out whether you'd had any experience in this area except by using that description. Do you think evaluators should be universal geniuses?
How can they avoid goal-laden language in describing themselves?
E: There's nothing wrong with classifying evaluators by their past performance. You only risk contamination when you tell them what you want them to do this time, using the goals of this project as you do so. There's nothing unrealistic about the alternative, any more than there is about cutting names off scientific papers when you, as an editor, send them out to be refereed. You could perfectly well have asked me if I was free to take on an evaluation task in an area of previous experience (a particularly important one, you could have added) requiring, as it seemed to you, about so much time and involving so much in fees. I could have made a tentative acceptance and then come in to look into details, as I did today.
C: What details can you look at?
E: Sample materials, or descriptions by an observer of the process, availability of controls, time constraints, etc. What I found today made it clear you simply wanted the best that could be done in a very limited time, and I took it on that basis, details later. Of course, it probably won't answer some of the crucial evaluation questions, but to do that you should have brought someone in at the beginning. Your best plan would have been to send me reasonably typical materials and tell me how long the treatment runs. That would have let me form my own tentative framework. But no evaluator gets perfect conditions. The trouble is that the loss is not his, it's the consumer's. And that means he's usually not very motivated to preserve his objectivity. It's more fun to be on friendly terms with the project people. By the way, the project I'm on for you is hard to describe concisely in goal-free language, but that's not true in all cases. I often do CAI evaluations, for example, and other educational technology cases, where the description of the project isn't goal-loaded.
C: Look, how long after you've looked at materials before you form a pretty good idea about the goals of the project? Isn't it a bit absurd to fight over hearing it a little earlier?
E: The important question is not whether I do infer the goals but whether I may infer some other possible effects before I am locked into a "set" towards the project's own goals. For example, I've looked at elementary school materials and thought to myself: vocabulary, spelling, general knowledge, two-dimensional representation conventions, book-orientation, reading skills, independent study capacity, and so on. It isn't important which of these is the main goal; if the authors have made any significant headway on it, it will show up; I'm not likely to miss it altogether. And the other dimensions are not masked by your set if you don't have one. Remember that even if a single side effect doesn't swamp the intended effect, the totality of them may make a very real plus for this programme by comparison with others which do about as well on the intended effect and on cost. After I've looked at materials (not including teachers' handbooks, etc.), I look at their tests. Of course, looking at materials is a little corrupting, too, if you want to talk about pure approaches. What I should really be looking at is students, especially changes in students, and even more especially, changes due to these materials. (I'm quite happy to be looking at their test results, for example.)
But the evaluator usually has to work pretty hard before he can establish cause. It's worth realizing, however, that even if he had all that, his job would not yet be half done. But I guess the most important practical argument for goal-free evaluation is one we haven't touched yet.
C: Namely?
E: I'm afraid there isn't time to go into that now.

The foregoing dialogues illustrate the difficulties a commissioner and a prospective evaluator have in getting acquainted with what the other person needs and expects. Three were written rather than one to show how evaluators of different persuasions respond. There is obviously common concern among these three evaluators, but clear differences as well.
The first evaluator stresses the need for maximum attention to results that are directly related to the instruction. The second evaluator stresses finding out the problems that most concern the people involved in this particular programme. The third evaluator stresses the need to remain independent of sponsors and programme personnel. These three evaluators represent the approaches in the grid in Table 2 (page 54) that were called the Student Gain by Testing, Transaction-Observation, and Goal-Free Evaluation approaches.
It is reasonable to expect that the three contracts they would write would be quite different, both in terms of what they would promise and in terms of the safeguards they would set forth.
Project evaluation methods
Project evaluation involves a comprehensive assessment of a given project, policy, programme or investment, taking into account all of its stages: planning, implementation, and monitoring of results. It provides information that is used in the decision-making process.
Evaluations can be divided according to their focus: evaluation of the project's goals (assessing actions in relation to objectives, whether national or community-level) and evaluation of operational aspects (monitoring of project activities).
Evaluations can also be distinguished by when they are performed: ex ante evaluation (before implementation), ongoing or current evaluation (during implementation), and ex post evaluation (after implementation).
To achieve the goal of cost-effective allocation of capital, investors use different methods to assess the rationality of an investment. With respect to the treatment of time, techniques for appraising the profitability of investment projects are divided into static methods (also known as simple methods) and dynamic methods (also called discount methods).
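A minimal sketch in Python may make this division concrete. The cash-flow figures, the 10% discount rate, and the helper names (payback_period, npv) are illustrative assumptions rather than anything prescribed here: the payback period stands in for the static (simple) family, which ignores when money arrives, while net present value stands in for the dynamic (discount) family, which weights each year's cash flow by a discount factor.

```python
# Illustrative comparison of a static (simple) method and a dynamic
# (discount) method of investment appraisal. All figures are hypothetical.

def payback_period(initial_outlay, cash_flows):
    """Static method: number of years until cumulative (undiscounted)
    cash flows recover the initial outlay. Returns None if never recovered."""
    cumulative = 0.0
    for year, flow in enumerate(cash_flows, start=1):
        cumulative += flow
        if cumulative >= initial_outlay:
            return year
    return None

def npv(initial_outlay, cash_flows, rate):
    """Dynamic method: net present value, discounting each year's
    cash flow back to the present at the given rate."""
    discounted = sum(flow / (1 + rate) ** year
                     for year, flow in enumerate(cash_flows, start=1))
    return discounted - initial_outlay

if __name__ == "__main__":
    outlay = 1000.0
    flows = [300.0] * 5                       # five years of equal returns
    print(payback_period(outlay, flows))      # 4 (years; timing within a year ignored)
    print(round(npv(outlay, flows, 0.10), 2)) # 137.24 at a 10% discount rate
```

The same project thus gets two very different readings: the static method reports only how quickly the outlay is recovered, while the dynamic method reports how much value remains once the cost of waiting for the money is charged.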
Project evaluation methods are essential for judging the potential success of projects. By carefully assessing the expected costs and benefits of a project, decision-makers can make better-informed choices about whether or not to pursue it, so that projects are launched with greater confidence and a higher likelihood of a successful outcome.
It is proposed that simple methods be used only in the preliminary stages of project appraisal, or for small projects with a short economic life, where a rough estimate of profitability is sufficient.
The most frequently mentioned and described static methods of investment project evaluation include the payback period, the accounting (average) rate of return, and break-even analysis.
These methods do not take into account the effect of time: values arising in different years are not differentiated, and the calculation uses either the sum of expected costs and benefits or average values drawn from a specified period. As a result, static methods capture the project life cycle and the level of committed capital expenditure only approximately.
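To see why ignoring time matters, consider two hypothetical projects with identical total cash flows but different timing. A static sum treats them as equivalent; a discounted view does not. All figures below are invented for illustration.

```python
# Two hypothetical projects: same undiscounted total (1500), different timing.
early = [900.0, 400.0, 200.0]   # most cash arrives in year 1
late  = [200.0, 400.0, 900.0]   # most cash arrives in year 3

def simple_total(flows):
    """Static view: a plain sum, indifferent to when cash arrives."""
    return sum(flows)

def present_value(flows, rate=0.10):
    """Dynamic view: each year's flow is discounted back to today."""
    return sum(f / (1 + rate) ** t for t, f in enumerate(flows, start=1))

print(simple_total(early), simple_total(late))              # 1500.0 1500.0
print(round(present_value(early), 2),
      round(present_value(late), 2))                        # 1299.02 1188.58
# The earlier cash flows are worth more today, a difference the
# static methods cannot register.
```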
Project evaluation methods are used to determine the potential success of any project. They are applied before a project is launched, to assess the expected costs and benefits and to determine whether the project is worth pursuing, and again after completion, to measure actual performance and establish whether the desired outcomes were achieved. In both cases they provide decision-makers with the information needed to make informed decisions about launching and managing projects.
Project evaluation methods can provide many benefits for decision-makers, such as better-informed funding decisions, more efficient allocation of resources, and clearer accountability for results.
Project evaluation methods also have certain limitations that should be considered when assessing a project. First, the estimated costs and benefits of a project can be highly uncertain and difficult to predict accurately, which means that the results of an evaluation may not be reliable. Second, the methods are generally designed to evaluate projects in terms of financial outcomes, so they do not take into account non-financial factors such as customer satisfaction or employee morale. Finally, the results of a project evaluation may be subject to bias, depending on who performs the evaluation and how they interpret the data.
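One common response to the first limitation, not prescribed by the text but widely used in practice, is a simple sensitivity analysis: recompute the appraisal under a range of assumptions and see how fragile the conclusion is. The discount rates and benefit scalings below are arbitrary assumptions chosen for illustration.

```python
# A minimal sensitivity sweep: how does NPV respond if the benefit
# estimates are optimistic or pessimistic, or if the discount rate moves?
# All numbers are hypothetical.

def npv(initial_outlay, cash_flows, rate):
    return sum(f / (1 + rate) ** t
               for t, f in enumerate(cash_flows, start=1)) - initial_outlay

outlay = 1000.0
base_flows = [300.0] * 5

for scale in (0.8, 1.0, 1.2):          # benefits 20% low, as estimated, 20% high
    for rate in (0.05, 0.10, 0.15):    # cheap, moderate, expensive capital
        flows = [f * scale for f in base_flows]
        print(f"scale={scale:.1f} rate={rate:.2f} "
              f"NPV={npv(outlay, flows, rate):8.2f}")
# The sign of the NPV flips across these plausible assumptions, which is
# the practical meaning of "the results of an evaluation may not be reliable".
```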
There are other approaches related to project evaluation methods, such as risk analysis, stakeholder analysis, cost-effectiveness analysis, and environmental impact assessment.
These additional approaches are used to ensure that all aspects of a project are taken into consideration, and that any potential risks or negative impacts are identified and addressed before the project is launched. Together, these methods provide a comprehensive approach to project evaluation and help decision-makers determine the potential success of a project.
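As a sketch of one such complementary approach, a cost-effectiveness comparison ranks alternatives by cost per unit of outcome rather than by monetary return. The programmes and figures below are hypothetical.

```python
# Hypothetical cost-effectiveness comparison of two programmes that
# produce the same kind of outcome (e.g. pupils reaching a reading norm).
programmes = {
    "Programme A": {"cost": 50_000.0, "pupils_helped": 400},
    "Programme B": {"cost": 80_000.0, "pupils_helped": 500},
}

for name, p in programmes.items():
    ratio = p["cost"] / p["pupils_helped"]
    print(f"{name}: {ratio:.2f} per pupil helped")
# Programme A: 125.00 per pupil helped; Programme B: 160.00 per pupil helped.
# The cheaper-per-unit option is not automatically preferable if scale,
# equity, or unmeasured effects differ, which is why these analyses
# complement rather than replace financial appraisal.
```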