
The Role of Project Management Software in Agile Methodologies

Agile methodologies have gained significant popularity in the project management world due to their flexibility and ability to adapt to changing requirements. These methodologies emphasize collaboration, continuous improvement, and iterative development. One of the key factors that contribute to the success of agile projects is the use of project management software. In this article, we will explore the role of project management software in agile methodologies.

Streamlining Communication and Collaboration

Effective communication and collaboration are essential for any agile team. Project management software plays a crucial role in streamlining these aspects by providing a centralized platform for all team members to communicate, share information, and collaborate on tasks. With features like real-time messaging, file sharing, and task assignment, project management software ensures that everyone is on the same page.

In addition to facilitating communication within the team, project management software also enables collaboration with stakeholders and clients. It allows them to have visibility into the progress of the project, provide feedback, and contribute to decision-making processes. This level of transparency fosters trust and strengthens relationships between all parties involved.

Agile Planning and Tracking

Agile methodologies rely heavily on iterative planning and tracking. Project management software provides tools that aid in this process by allowing teams to create user stories, prioritize tasks, estimate effort required for each task, and track progress.

Through features such as Kanban boards or sprint planning boards, teams can visualize their workflow and allocate resources accordingly. This helps them stay organized while ensuring that work is distributed evenly among team members. Additionally, project management software often includes burndown charts or velocity tracking capabilities that provide valuable insights into a team’s progress over time.
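The velocity and burndown metrics mentioned above can be computed from plain task data. A minimal sketch, with purely illustrative numbers (the sprint scope and daily totals are made up):

```python
# Velocity: average story points completed per sprint.
completed_per_sprint = [21, 18, 24]               # points finished in sprints 1-3
velocity = sum(completed_per_sprint) / len(completed_per_sprint)

# Burndown: points remaining after each day of the current sprint.
sprint_scope = 30                                  # points committed this sprint
done_by_day = [0, 4, 9, 13, 18, 22, 26, 30]        # cumulative points completed
burndown = [sprint_scope - done for done in done_by_day]

print(velocity)   # 21.0
print(burndown)   # [30, 26, 21, 17, 12, 8, 4, 0]
```

A burndown that reaches zero by the last day, as here, indicates the team delivered its full sprint commitment.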

Facilitating Continuous Integration

Continuous integration is central to agile methodologies as it promotes regular testing and integration of code changes throughout development cycles. Project management software integrates with version control systems like Git or Subversion to facilitate this process.

By integrating with version control systems, project management software enables developers to link code changes directly to specific tasks or user stories. This linkage provides a clear audit trail and ensures that all changes are properly documented. It also allows team members to easily review code changes, provide feedback, and resolve any conflicts that may arise.
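The task linkage described above is often implemented by scanning commit messages for work-item IDs. A minimal sketch, assuming a `PROJ-123`-style ID format for illustration (not any specific vendor's convention):

```python
import re

# Matches IDs like "PROJ-42" or "API2-7"; an assumed format for this example.
TASK_ID = re.compile(r"\b[A-Z][A-Z0-9]+-\d+\b")

def linked_tasks(commit_message: str) -> list[str]:
    """Return all task/user-story IDs referenced in a commit message."""
    return TASK_ID.findall(commit_message)

print(linked_tasks("PROJ-42: fix login redirect (closes PROJ-7)"))
# ['PROJ-42', 'PROJ-7']
```

A tool built on this idea would attach each commit to the matching tasks, giving the audit trail the text describes.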

Reporting and Analytics

Effective project management requires the ability to measure progress, identify bottlenecks, and make data-driven decisions. Project management software offers robust reporting and analytics capabilities that allow teams to gain insights into their projects’ performance.

With customizable dashboards and reports, teams can track key performance indicators (KPIs) such as velocity, sprint burndown, or cycle time. These metrics help identify areas of improvement and enable teams to make data-backed adjustments to their processes. Moreover, project management software often integrates with other tools like time tracking or bug tracking systems to provide a comprehensive view of project health.
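As a minimal sketch of one such KPI, cycle time can be derived from each task's start and finish timestamps (the dates below are hypothetical):

```python
from datetime import datetime

# Hypothetical (started, finished) dates for three completed tasks.
tasks = [
    ("2024-05-01", "2024-05-04"),
    ("2024-05-02", "2024-05-08"),
    ("2024-05-03", "2024-05-05"),
]

def cycle_days(start: str, end: str) -> int:
    """Days elapsed between a task entering 'in progress' and 'done'."""
    fmt = "%Y-%m-%d"
    return (datetime.strptime(end, fmt) - datetime.strptime(start, fmt)).days

times = [cycle_days(s, e) for s, e in tasks]       # [3, 6, 2]
avg_cycle_time = sum(times) / len(times)           # average cycle time in days
```

A dashboard would typically plot this average over time, so a rising trend flags an emerging bottleneck.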

In conclusion, project management software plays a vital role in supporting agile methodologies. By streamlining communication and collaboration, aiding in planning and tracking, facilitating continuous integration, and providing powerful reporting capabilities, it empowers agile teams to work efficiently and deliver high-quality results. Investing in the right project management software is crucial for organizations looking to adopt or improve their agile practices.




Project Evaluation Process: Definition, Methods & Steps


Managing a project with copious moving parts can be challenging to say the least, but project evaluation is designed to make the process that much easier. Every project starts with careful planning — this sets the stage for the execution phase of the project, while estimations, plans and schedules guide the project team as they complete tasks and deliverables.

But even with the project evaluation process in place, managing a project successfully is not as simple as it sounds. Project managers need to keep track of costs, tasks and time during the entire project life cycle to make sure everything goes as planned. To do so, they utilize the project evaluation process and make use of project management software to help manage their team’s work in addition to planning and evaluating project performance.

What Is Project Evaluation?

Project evaluation is the process of measuring the success of a project, program or portfolio. This is done by gathering data about the project and using an evaluation method that allows evaluators to find performance improvement opportunities. Project evaluation is also critical to keep stakeholders updated on the project status and any changes that might be required to the budget or schedule.

Every aspect of the project such as costs, scope, risks or return on investment (ROI) is measured to determine if it’s proceeding as planned. If there are road bumps, this data can inform how projects can improve. Basically, you’re asking the project a series of questions designed to discover what is working, what can be improved and whether the project is useful. Tools such as project dashboards and trackers help in the evaluation process by making key data readily available.


The project evaluation process has been around as long as projects themselves. But when it comes to the science of project management, project evaluation can be broken down into three main types or methods: pre-project evaluation, ongoing evaluation and post-project evaluation. Let’s look at the project evaluation process, what it entails and how you can improve your technique.

Project Evaluation Criteria

The specific details of the project evaluation criteria vary from one project or one organization to another. In general terms, a project evaluation process goes over the project constraints including time, cost, scope, resources, risk and quality. In addition, organizations may add their own business goals, strategic objectives and other project metrics.

Project Evaluation Methods

There are three points in a project where evaluation is most needed. While you can evaluate your project at any time, these are points where you should have the process officially scheduled.

1. Pre-Project Evaluation

In a sense, you’re pre-evaluating your project when you write your project charter to pitch to the stakeholders. You cannot effectively plan, staff and control a new project if you haven’t first evaluated it. Pre-project evaluation is the only sure way to determine the effectiveness of a project before executing it.

2. Ongoing Project Evaluation

To make sure your project is proceeding as planned and hitting all of the scheduling and budget milestones you’ve set, it’s crucial that you constantly monitor and report on your work in real-time. Only by using project metrics can you measure the success of your project and whether or not you’re meeting the project’s goals and objectives. It’s strongly recommended that you use project management dashboards and tracking tools for ongoing evaluation.


3. Post-Project Evaluation

Think of this as a postmortem. Post-project evaluation is when you go through the project’s paperwork, interview the project team and principals, and analyze all relevant data so you can understand what worked and what went wrong. Only by developing this clear picture can you resolve issues in upcoming projects.

Project Evaluation Steps

Regardless of when you choose to run a project evaluation, the process always has four phases: planning, implementation, completion and dissemination of reports.

1. Planning

The ultimate goal of this step is to create a project evaluation plan, a document that explains all details of your organization’s project evaluation process. When planning for a project evaluation, it’s important to identify the stakeholders and what their short- and long-term goals are. You must make sure that your goals and objectives for the project are clear, and it’s critical to have settled on criteria that will tell you whether these goals and objectives are being met.

So, you’ll want to write a series of questions to pose to the stakeholders. These queries should include subjects such as the project framework, best practices and metrics that determine success.

By including the stakeholders in your project evaluation plan, you’ll receive direction during the course of the project while simultaneously developing a relationship with the stakeholders. They will get progress reports from you throughout the project life cycle, and by building this initial relationship, you’ll likely earn their belief that you can manage the project to their satisfaction.


2. Implementation

While the project is running, you must monitor all aspects to make sure you’re meeting the schedule and budget. One of the things you should monitor during the project is the percentage completed. This is something you should do when creating status reports and meeting with your team. To make sure you’re on track, hold the team accountable for delivering timely tasks and maintain baseline dates to know when tasks are due.

Don’t forget to keep an eye on quality. It doesn’t matter if you deliver the project within the allotted time frame if the product is poor. Maintain quality reviews, and don’t delegate that responsibility. Instead, take it on yourself.

Maintaining a close relationship with the project budget is just as important as tracking the schedule and quality. Keep an eye on costs. They will fluctuate throughout the project, so don’t panic. However, be transparent if you notice a need growing for more funds. Let your steering committee know as soon as possible, so there are no surprises.


3. Completion

When you’re done with your project, you still have work to do. You’ll want to take the data you gathered in the evaluation and learn from it so you can fix problems that you discovered in the process. Figure out the short- and long-term impacts of what you learned in the evaluation.

4. Reporting and Disseminating

Once the evaluation is complete, you need to record the results. To do so, you’ll create a project evaluation report, a document that provides lessons for the future. Deliver your report to your stakeholders to keep them updated on the project’s progress.

How are you going to disseminate the report? There might be a protocol for this already established in your organization. Perhaps the stakeholders prefer a meeting to get the results face-to-face. Or maybe they prefer PDFs with easy-to-read charts and graphs. Make sure that you know your audience and tailor your report to them.

Benefits of Project Evaluation

Project evaluation is always advisable and it can bring a wide array of benefits to your organization. As noted above, there are many aspects that can be measured through the project evaluation process. It’s up to you and your stakeholders to decide the most critical factors to consider. Here are some of the main benefits of implementing a project evaluation process.

  • Better Project Management: Project evaluation helps you easily find areas of improvement when it comes to managing your costs, tasks, resources and time.
  • Improves Team Performance: Project evaluation allows you to keep track of your team’s performance and increases accountability.
  • Better Project Planning: Helps you compare your project baseline against actual project performance for better planning and estimating.
  • Helps with Stakeholder Management: Having a good relationship with stakeholders is key to success as a project manager. Creating a project evaluation report is very important to keep them updated.

How ProjectManager Improves the Project Evaluation Process

To take your project evaluation to the next level, you’ll want ProjectManager, an online work management tool with live dashboards that deliver real-time data so you can monitor what’s happening now as opposed to what happened yesterday.

With ProjectManager’s real-time dashboard, project evaluation is measured in real-time to keep you updated. The numbers are then displayed in colorful graphs and charts. Filter the data to show the data you want or to drill down to get a deeper picture. These graphs and charts can also be shared with a keystroke. You can track workload and tasks, because your team is updating their status in real-time, wherever they are and at whatever time they complete their work.

ProjectManager’s dashboard view, which shows six key metrics on a project

Project evaluation with ProjectManager’s real-time dashboard makes it simple to go through the evaluation process during the evolution of the project. It also provides valuable data afterward. The project evaluation process can even be fun, given the right tools. Feel free to use our automated reporting tools to quickly build traditional project reports, allowing you to improve both the accuracy and efficiency of your evaluation process.

ProjectManager's status report filter

ProjectManager is a cloud-based project management software that has a suite of powerful tools for every phase of your project, including live dashboards and reporting tools. Our software collects project data in real-time and is constantly being fed information by your team as they progress through their tasks. See how monitoring, evaluation and reporting can be streamlined by taking a free 30-day trial today!




Project Evaluation 101: Benefits, Methods, & Steps


Whether you’re a startup owner or a seasoned entrepreneur, keeping track of your project’s real-time progress and performance is crucial for consistent success.

This is where project evaluation comes in. It assesses how well your project meets its objectives and delivers value to your stakeholders.

Project evaluation not only helps identify potential roadblocks but also enables you to optimize workflows promptly. By leveraging evaluation insights, you can make informed decisions that significantly enhance your business outcomes.

Curious about the various types of project evaluation methods and how each can benefit your business? And how project management software can assist in conducting evaluations effectively?

In this blog, we’ll address these and many more questions.  We’ll explore the different evaluation types, delve into their benefits, and highlight how project management software can help you successfully deliver projects.

What Is Project Evaluation?

Project evaluation refers to assessing an ongoing or completed project based on the inputs gathered at each stage. The assessment is carried out to track the progress of a project and identify opportunities for improvement.

Throughout the evaluation, you address some key questions like:

  • Is the project on track to achieve its defined aims and objectives?
  • How many goals have been achieved?
  • What are the challenges being faced by the team?
  • How is each team member contributing to the project’s overall performance?

Addressing these questions offers a comprehensive picture of the status of a project. This helps in identifying roadblocks, if any, and taking timely steps to address them.

Unlocking the 5 Key Benefits of Comprehensive Project Evaluation

Here are some proven benefits of project evaluation. Take a look.

1. Identify Strengths & Weaknesses of Team Members

While going through the different stages of project evaluation, you understand the potential of each team member.

For example, some team members might have strong logical skills and be better suited to coding, while others might possess strong creative skills and be better suited to the design stage of the project.

So, you can evaluate an individual’s key skills and assign them to the most relevant task or project.

Project evaluation helps you allocate the right job for the right person based on their skills and knowledge level. With this manpower optimization, you can prevent redundancies and cost overruns in your projects.

2. Understand Budget Utilization Better

Project evaluation gives you a first-hand analysis of your project’s budget utilization.

Imagine this. You plan and allocate a specific budget for your project. But, on project completion, you realize that the budget was overutilized.

On the other hand, had you analyzed each stage for budget utilization, you could have gained a better understanding of your project’s real costs and steered the project in the right direction to keep costs under control.

This evaluation also helps you improve cost distribution for your future projects. 

For example, the evaluation process will enable you to identify which project stage is more expensive and which stage can be managed with a minimal budget.

Also, you can effortlessly extract financial summaries with a simple tool like ProProfs Project. Most tools offer project profitability reports that let you track project expenses versus budget and adjust resources or timelines accordingly. This way, you can always stay in control of costs and deliver projects within budget.


3. Identify Additional Training Requirements

Project evaluation will help you spot loopholes in project execution. This, in turn, will help you identify where the team lacks and arrange for their training needs.

On-the-job learning and development opportunities not only enhance the capabilities of your human resources but also improve project deliverability.

For example, suppose a team member faces minor challenges while coding in a particular software language. In that case, you can arrange for a 2-3 day workshop or online training course to enhance their coding skills.

4. Understand the Real Requirements of Your Clients

Evaluation throughout the entire project lifecycle allows you to prioritize even the smallest requirements of your clients.

Ignoring these seemingly insignificant details can adversely affect the project outcome.

Suppose you are developing a website for a client who wants a simple and elegant design. You may think that adding some animations and graphics will make the website more attractive and engaging. 

But if you don’t evaluate your project regularly and communicate with your clients, you may end up with a website that doesn’t match their expectations and preferences.

This can lead to wasted time, money, and a dissatisfied client.

Thus, by identifying your clients’ key requirements, you can ensure that no aspect is overlooked, leading to the successful delivery of the project as expected.

5. Enhances Stakeholder Relationship

Project evaluation goes beyond assessing project progress. It also helps foster collaboration and communication among stakeholders.

By being transparent about project progress and requirements, you enhance the potential for establishing trust and credibility. This not only helps strengthen stakeholder relationships but also ensures smoother coordination of project activities.

Remember, effective project evaluation is not just about metrics and numbers; it’s about building connections and developing a shared understanding of project goals.

Now, let’s dive into the various project evaluation methods and see which one can be the right fit for you.

Elevate Your Evaluation Game With These Project Assessment Techniques

Here are the top project evaluation techniques that you can deploy to gain optimum results: 

Return on Investment (ROI)

Return on Investment measures the actual profitability of an investment by calculating the ratio of net profit to the initial investment. It helps assess the efficiency of a project and its potential for generating financial gains.

A higher ROI indicates a better investment opportunity, while a lower ROI may warrant closer scrutiny or alternative options.

Tip: Consider both short-term and long-term ROI to gain a comprehensive understanding of the project’s potential.
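The ROI calculation described above is simply net profit divided by the initial investment. A minimal sketch, with made-up figures:

```python
def roi(net_profit: float, initial_investment: float) -> float:
    """Return on investment as a fraction; multiply by 100 for a percentage."""
    return net_profit / initial_investment

# Hypothetical example: $2,500 net profit on a $10,000 investment
print(roi(2_500, 10_000))  # 0.25, i.e. 25%
```

Comparing this fraction across candidate projects gives the "better investment opportunity" ranking the text refers to.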

Cost-Benefit Analysis (CBA)

This technique compares the total costs with the expected benefits of your project over its life cycle. It helps you decide whether your project is worth undertaking and how to allocate your resources efficiently.

To conduct a CBA, you need to identify and quantify all the costs and benefits of your project and discount them to their present values. 

Then you can compare the total discounted costs with benefits and choose the option with the highest net benefit or benefit-cost ratio.

Tip: Take into account both tangible and intangible costs and benefits to ensure a comprehensive evaluation.
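The discounting step described above can be sketched in a few lines. The cash flows and 10% discount rate below are illustrative assumptions:

```python
def present_value(cash_flows, rate):
    """Discount yearly cash flows (year 1, 2, ...) back to today's value."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows, start=1))

# Hypothetical project: benefits and costs over two years, 10% discount rate
pv_benefits = present_value([5_000, 5_000], 0.10)
pv_costs = present_value([3_000, 3_000], 0.10)
net_benefit = pv_benefits - pv_costs  # positive here, so benefits exceed costs
```

As the text says, you would compare options and pick the one with the highest net benefit (or benefit-cost ratio).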

Net Present Value (NPV)

Net Present Value calculates the difference between the present value of all the cash inflows and outflows of your project. A positive NPV suggests that the project will generate more value than the initial investment, making it a potentially attractive opportunity.

To calculate the NPV, adjust project cash flows using a discount rate to account for the time value of money.

Then you subtract the present value of cash outflows from the present value of cash inflows to get the NPV.

This helps you decide if the project is worth doing or not because it shows you how much money you will gain or lose over time.

Tip: Use a suitable discount rate that aligns with the project’s risk and opportunity cost of capital for accurate NPV calculations.
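The NPV calculation above translates directly into code. The cash flows and 10% rate here are illustrative:

```python
def npv(rate: float, cash_flows: list[float]) -> float:
    """cash_flows[0] is the upfront investment at t=0 (usually negative)."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows))

# Hypothetical: pay $10,000 today, receive $4,000/year for three years
value = npv(0.10, [-10_000, 4_000, 4_000, 4_000])
print(round(value, 2))  # slightly negative, so the project loses value at a 10% rate
```

Note how the verdict depends on the discount rate: the same cash flows at a lower rate would yield a positive NPV, which is why the tip above matters.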

The Payback Period

The Payback period estimates the time required to recover the initial investment through cash inflows. It helps you assess the liquidity and risk of your project and prioritize projects with shorter payback periods. 

A shorter payback period indicates a quicker recovery of investment.

To calculate the payback period, you need to divide the initial investment by the annual cash inflow of your project. For example, if your project has an initial investment of $10,000 and an annual cash inflow of $2,000, then your payback period is $10,000 / $2,000 = 5 years.

Tip: Consider the project’s lifespan and potential cash flow variability to accurately determine the payback period.
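The worked example above translates directly into code, assuming (as the text does) an even annual inflow:

```python
def payback_period(initial_investment: float, annual_cash_inflow: float) -> float:
    """Years needed to recover the investment, assuming even yearly inflows."""
    return initial_investment / annual_cash_inflow

print(payback_period(10_000, 2_000))  # 5.0 years, matching the example above
```

For uneven cash flows, you would instead accumulate the inflows year by year until the running total covers the investment.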

Benefit-Cost Ratio

Benefit-Cost Ratio compares the total expected benefits of a project to its total costs. This ratio helps gauge the economic feasibility of investment by determining whether the benefits outweigh the costs. 

A ratio greater than 1 signifies a potentially worthwhile investment.

To calculate BCR, you need to divide the total discounted benefits by the total discounted costs of your project.

Tip: Include both direct and indirect benefits when calculating the benefit-cost ratio for a comprehensive assessment.
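The BCR calculation above is a one-line ratio once the discounted totals are in hand. A minimal sketch with hypothetical figures:

```python
def benefit_cost_ratio(pv_benefits: float, pv_costs: float) -> float:
    """Ratio > 1 means discounted benefits outweigh discounted costs."""
    return pv_benefits / pv_costs

# Hypothetical discounted totals
print(benefit_cost_ratio(8_500, 6_800))  # 1.25 -> potentially worthwhile
```

BCR and net benefit usually agree on whether a project clears the bar, but BCR is handier when comparing projects of different sizes.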

Evaluation Through Surveys

This method is used to gather data from a vast group of individuals. The data is then analyzed to uncover hidden strengths, pinpoint weaknesses, and discover crucial areas for improvement. This helps find out what works well, what needs improvement, and what opportunities you have to meet customer and market expectations.

Surveys provide a cost-effective means to gather valuable information, offering a window into customer satisfaction and market needs. This can help you gain invaluable insights that drive growth and enhance decision-making.

Tip: When designing surveys, ensure clarity and simplicity to maximize response rates and collect accurate data.

Interview Evaluation

This method is a personal approach that delves into individuals’ perspectives, unearthing profound insights. By asking targeted questions and gathering qualitative data, you gain a rich understanding of your team’s progress, enabling you to guide them along the right path.

Interviews provide a unique opportunity to connect, probe, and explore beyond surface-level information. Leverage this method to gain invaluable insights, fuel growth, and foster meaningful development.

Tip: Create a comfortable and open environment during interviews to encourage honest and detailed responses, facilitating a deeper understanding of individuals’ viewpoints.

Focus Group Evaluation

If you want to assess how a specific group of people reacts to your project, you can use focus groups to collect and analyze their feedback.

Focus groups can help you gather feedback from a group of individuals who share common characteristics or interests. This gives you qualitative data, helping you understand group needs, opinions, and behaviors.

Tip: Foster an inclusive environment during focus group sessions, encouraging active participation and honest sharing of opinions to maximize the richness of qualitative data.

Incorporate these powerful project assessment techniques into your evaluation process to enhance your decision-making and increase the chances of project success. 

You can also tailor the evaluation techniques to the specific project and consider combining multiple techniques for a comprehensive analysis.

Project Evaluation Stages: From Vision to Victory


Project evaluation is carried out at different stages of a project’s life cycle, right from the commencement of the project to its completion.

Here are the different project evaluation stages that you should be aware of: 

1. Pre-Project Evaluation

Pre-project evaluation happens before you start working on a project.

This stage of evaluation constitutes the planning part of your project. Here, you brainstorm and put forth your project’s main requirements in collaboration with your clients. 

It’s a good idea to create a project charter, defining all the essential aspects of your project, such as resources, milestones, and potential risks.

Once the first draft is ready, your project’s basic framework is all set.

You then gather valuable feedback and inputs to further fine-tune the project. This pre-evaluation process ensures that all stakeholders gain a comprehensive understanding of the project roadmap, which helps foster clarity and alignment between teams.

2. Ongoing Project Evaluation

The next evaluation stage takes place while the project is in progress.

It involves closely monitoring the implementation of changes suggested in the previous stage, ensuring they are reflected in project charters and briefs.

Also, keep an eye on key project metrics such as project budget, team productivity, and performance analysis among others. This helps keep the project on track, ensuring it progresses in the intended direction.

3. Post-Project Evaluation

Once your team is done with all the project stages, you must do a complete assessment of the project.

This can be accomplished through a team meeting, which provides a valuable opportunity to identify and evaluate your team’s strengths and weaknesses.

By directly engaging with your team members, you can gather insights and formulate strategies to address any shortcomings observed, ensuring enhanced performance in future projects.

Such assessments facilitate learning, growth, and the continuous improvement of your team’s capabilities, enabling you to tackle future projects more effectively.

Now, with a thorough understanding of project evaluation stages, let’s decode the project evaluation process.

Step-by-Step Guide to Effective Project Evaluation: Pathway to Project Success

Project evaluation consists of a series of steps that can be performed independently. Let’s understand the steps one by one.

1. Planning

The initial project evaluation step involves detailed planning regarding the questions to be presented to all stakeholders.

It is important to seek opinions and insights from team members and other involved parties to gather a comprehensive understanding of the project experience. 

When you take inputs from your team, a holistic picture of the project’s intricacies emerges. Each individual has a different perspective and goal, which helps in figuring out the right approach toward project completion.

To facilitate this process, maintain a checklist of interview and survey questions. Additionally, conduct group discussions to identify common issues and challenges encountered throughout the project’s duration.

2. Outcome Analysis

This project evaluation step focuses on assessing the outcomes resulting from project implementation.

These outcomes are measured using metrics, such as the ease of project completion, the skill enhancement of team members, and the time taken to finish the project.

Evaluating these outcomes provides a clear understanding of how well the project has achieved its smaller goals and objectives. It helps determine the efficiency of the project, identifying whether it was completed successfully or if it experienced issues related to time and cost overruns. This helps facilitate improved decision-making for future projects.

3. Impact Analysis

Impact analysis takes into account the long-term impact of the project on business prospects.

This analysis considers the project’s contribution to the overall growth of the business, customer retention, customer acquisition, and other relevant factors.

By conducting a business impact analysis, you adopt a forward-thinking approach that aligns with the company’s vision and objectives. 

This enables you to plan strategically, taking into account the potential impact of the project on your company’s future prospects and ensuring that the project’s outcomes are in line with your goals.

4. Benchmarking

It is also crucial to consider the industry’s accepted benchmarks as the next step in project evaluation.

Analyze the project evaluation processes deployed by various companies, particularly your competitors. Assess how your performance compares to theirs and identify their key areas of success. 

This way, you can draw inspiration and apply similar ideas to benefit your own business. Learning from successful competitors is crucial for continued growth and improvement.

5. Course Correction

Once you have identified your project’s strong and weak areas, it is time to develop a corrective course strategy.

Start by prioritizing the weak spots and devising solutions to address them effectively. 

For instance, if a shortage of manpower significantly impacts the project execution process, explore techniques to resolve this issue. Consider sourcing additional manpower both from within and outside the organization, ensuring sufficient resources are available to meet project demands.

This strategic approach enables you to adapt and overcome obstacles, ensuring successful project outcomes.

Now that you are aware of the project evaluation process, it is important to understand how project evaluation tools can help you plan and evaluate better.

Let’s see how it works!

To start with, you can leverage the tool’s custom templates to get started easily. All you have to do is select a template, tweak its settings according to your needs, and kickstart your project immediately.


However, if you don’t wish to use a template, you can also build your project dashboard from scratch.

To learn how to build a dashboard by adding tasks and other project details, watch this quick video.

Once you have created your project dashboard, you can start project execution and track progress effectively via Gantt charts, Kanban boards, and Calendar views.

You can also monitor your progress and keep an eye on team performance via data-driven project reports.

Some of the reports that you can create include Summary reports, Project profitability reports, Timeline reports, etc.

Here is an example of what a Summary report looks like.

Apart from these, project management software also enables you to share files and discuss work via task comments. With these features, you can collaborate with your team and clients, evaluate progress, and give feedback effortlessly.

Overall, the best project management tools offer you all the essential features you need to keep work on track right from the start.

Maximize Project Performance Through Effective Evaluation 

Project evaluation is an indispensable part of the project management process, essential to conduct at each stage.

A thorough evaluation enhances understanding of project requirements and minimizes the risk of errors.

It is crucial to evaluate the project during the pre-project, ongoing, and post-project stages to identify errors and ensure alignment with requirements. Additionally, after each project, develop a course-correction strategy to establish a benchmark for future endeavors.

Develop a robust project evaluation strategy and pave your way to project consistency and success!

Q. Why is project evaluation important?

Project evaluation is the analysis of the different stages of project planning and implementation. It is important because it helps you identify errors early, keep the project aligned with requirements, and establish benchmarks for future projects.

Q. What information does a project evaluation plan have? 

The project evaluation plan scrutinizes the project's outcomes and impacts to create a benchmark and a robust course-correction action plan for your business.

Q. In general, what is the purpose of a project evaluation?

Project evaluation is the means to analyze the project’s efficacy: has the project met its objectives? What are the short-term and long-term impacts of the project?


About the author

David Miller

David is a project management expert. He has been published on elearningindustry.com and simpleprogrammer.com. As a project planning and execution expert at ProProfs, he has offered a unique outlook on improving workflows and team efficiency. Connect with David for more engaging conversations on Twitter, LinkedIn, and Facebook.



Designing an Evaluation Methodology for Your Project

By Eva Wieners


Monitoring and evaluation of project implementation are of key importance if your organization is working with donations from a third party. They create transparency and trust, and they also help your organization carry out good projects and learn from past experience. But how exactly do you evaluate your projects?

Without proper planning and design of an evaluation strategy and methodology, it will be very difficult to present good results. Even though evaluation is normally the last step in the project cycle, you should design your strategy in the very first step so that you can collect the appropriate data throughout the project or program.

In the following paragraphs, we will describe in detail what you need to keep in mind while designing your evaluation methodology and how you can actually use it to your advantage when fundraising for your NGO.

What is evaluation?

To be able to design an evaluation methodology, you must clearly understand what the term evaluation means and how you can use it to your advantage.

Evaluation basically describes the analysis of the project’s success after the project cycle is finished. Based on the data collected in a baseline study, you describe and analyze the achievements of your project activities. At the same time, you also identify and look in detail at problems and mistakes that occurred during that time, so that you can learn from these experiences in the future. Basically, you compare the planned results with the actual results and analyze possible disparities.

Figure 1: The role of evaluation in the project cycle

As you can observe in Figure 1, evaluation is an important step in the project cycle and ensures that lessons are incorporated into the planning of future projects.

Why is it important?


Even if your organization is small and an evaluation might not feel necessary at first, it actually has many benefits. Besides the above-described effect for the donor, you also collect a lot of data that can be used in the future for applications, information brochures, or similar purposes. If you can clearly name the effect that your past projects had on the communities you work in, it will be much easier to write new applications on that basis and establish new relationships with other donors.

Of course, your evaluation will look different depending on whether you carry out a million-dollar project across several countries or a small program in one village. That is why, in the first step of project planning, you should take the time to design an appropriate and practicable evaluation methodology for your project.

What does “designing an evaluation methodology” mean?

As stated above, you will have very different expectations for your evaluation methodology if you have a project across several countries with a huge budget than if you have a small project with very limited resources. Your donors, of course, will also have very different expectations.

Big organizations often outsource the evaluation to specialized organizations that have their own framework. As every project is different, there is no real blueprint for how to evaluate. While you don’t have to reinvent the wheel every time you start a new project, you should make sure that your evaluation methodology is adjusted and appropriate for the purpose it is meant to achieve.

To design your evaluation methodology basically means to assign certain resources to it, to determine the expected outcomes, and to accommodate it in the project planning. You also determine the methods to be used to achieve results and the timeframe for them. We will describe the details of this process in the following paragraphs.

Once you have designed your evaluation methodology, you can also share it with your donor. This way, you let your donor know clearly what they can expect from the final evaluation and what will not be included. It is a very good way to manage expectations and make sure from the start that you are on the same page. By sharing your methodology at an early stage, you give your donor the chance to make remarks and request the inclusion of certain measures if needed, and you avoid misunderstandings at the end. Sharing a well-designed evaluation methodology with your donor is one more step towards transparency and good practice.


Designing an evaluation methodology – Important steps

There are several important steps to take into consideration while designing an evaluation methodology appropriate for your project or program. If possible, these steps should be taken jointly by the people responsible for the evaluation and those responsible for the project, to result in a well-informed and realistic strategy.

It is of key importance that the evaluation methodology is designed during the planning stage of the project, so that sufficient resources can be assigned to it and the necessary data can be collected throughout the project cycle. With a strategy in place and roles assigned, the evaluation and the connected data collection can take place on an ongoing basis and will not be an overwhelming task at the end.

Figure 2: Necessary steps for the design of the evaluation methodology

As can be seen in Figure 2, the steps for defining an evaluation methodology are the following: Defining the purpose, defining the scope, describing the intervention logic, formulating evaluation questions, defining methods and data, and assigning necessary resources to the evaluation. In the following paragraphs, we will describe these tasks in detail.

Defining the purpose

To be able to design an appropriate evaluation methodology, you must be very clear about its purpose. Why is the evaluation carried out, and who is it for? Does your donor require you to do the evaluation? Do you want to evaluate your projects internally to identify potential for improvement? Is it both?

The purpose of the evaluation mostly sets the bar for its scope and form. Many times, the donor already has specific expectations that need to be met and specific regulations that need to be fulfilled. Sometimes even legal requirements come into play. The clearer you are about the purpose of your evaluation, the easier it is to define its form and the appropriate way to go about it.

Defining the scope

The second thing you should take into consideration while designing your evaluation methodology is the scope of the evaluation. Deciding on the scope means deciding which interventions, which geographical area, and which timeframe the evaluation will cover.

If you are working on a very small project, these questions are normally easy to answer. If your project comprises just a few interventions, a defined geographical area, and a limited timeframe, your evaluation should cover the entire project. If you already know, though, that evaluating certain aspects will be particularly challenging, it might be a good idea to exclude them and adjust the donor’s expectations for the final evaluation accordingly. This might apply if your project only aims to kick-start a process that will show its impact in the long term (after your evaluation would take place), or if you already know that several factors outside your control will probably make it difficult to evaluate your own project activities. Be clear about it, though, so that the donors know what to expect or have the opportunity to object if they do not agree with your approach.

In bigger projects with a range of measures and geographical focal areas, it might be a good idea to focus on some. If the project is embedded in a bigger program, it might make sense to focus on areas that have not been evaluated lately or that have reported problems and challenges in the past. Again though, make sure that you follow your legal obligations and that your donor agrees with your approach.

The intervention logic

In this step, you should describe the planned interventions, their potential impact, and their interactions during your project phase. You should also take into consideration external factors that might influence the implementation of your project, whether positively or negatively.

Writing down or making a diagram of the intervention logic makes sure that you clearly understand how the project is supposed to work and what was expected in the beginning. The intervention logic is dynamic and might change during the course of the project, but these changes can be documented and give a good insight into areas where plans and expectations needed to be adjusted to reality.

Evaluation Questions 

Once you have spelled out in detail how your project is planned to work and what the expected impact is, you are in a position to formulate good evaluation questions. Evaluation questions are the questions that your evaluation is supposed to answer. They give you the opportunity to specify what you actually want to analyze in your evaluation.

If you word your questions carefully, you can make sure that a critical analysis can take place. Be careful not to end up with simple yes/no questions, which seem easy to answer but give almost no insight in the end and thus have very little additional value for the donor or your organization.

At the same time, you should make sure that the questions you choose can actually be answered. While project applications are full of promised impact, in reality it is quite difficult to actually measure impact. To assess the impact of your project, you would also need a lot of data from outside your project to be sure that no external events influenced the outcome. Even with a huge dataset, it is almost impossible to be 100% sure of the impact your interventions had, because factors like policy changes, general opinion, or other events might play a role that you are not even aware of.

Sometimes that means breaking one issue down into several questions. These questions can be quantitative (answered by hard data, numbers, etc.) or qualitative (opinions, perceptions, etc.). As shown in Figure 3 below, you have to find the middle ground between being too broad and too narrow.

The question on the left (impact on education) is too broad; it would not be possible to answer it in a project evaluation. An impact evaluation on education would have to take into consideration many other factors, such as general shifts in attitudes, all other initiatives in the sector, policy changes, etc. Even if all this data were available, it would be very difficult to quantify the impact in comparison to other interventions. If you try to answer this question at this scope, the donors will know that your evaluation must be flawed. Be very careful with the use of the word “impact” in your evaluation!


Figure 3: Comparison between different evaluation questions. (own representation)

The question on the right, in comparison, is too narrow (number of schools). It could be answered with a simple number and would give no further information about the quality of education or the actual use of these schools. It leaves no room for critical analysis and thus would not be a good evaluation question.

The questions in the middle show one way to combine quantitative and qualitative questions that can lead to a more critical assessment of the project activities while still giving a good picture of what the project has actually achieved.

Of course, the depth of these questions will vary according to the scope of your evaluation.

Methods and data

Once you have defined the appropriate evaluation questions, the next step is to think about the necessary data and the methods to analyze that data. There are plenty of tools and instruments available for conducting an evaluation, but to decide which ones are appropriate you have to take into consideration the availability of data, the quality of your data, and the resources available for the evaluation. Some tools require very detailed input data, so if that data is not available, you cannot use them. Some instruments are very time-intensive, so if you did not allocate sufficient time and manpower for the evaluation, those instruments are not a good fit either.

Allocating resources to your evaluation strategy

It is also important not to forget to allocate resources to your evaluation methodology so that you are able to carry it out. Many times, the evaluation does not get enough attention in terms of resources, and people do not have enough designated time to carry it out. Particularly in smaller projects, the project manager sometimes has to do the evaluation “on the side” of his or her normal tasks. This poses several risks, as not enough time is designated to this important task and the project manager might be biased.

Setting aside manpower and resources for the evaluation from the first project phase shows responsible behavior on the organization’s side and helps guarantee that the evaluation will be carried out professionally.

Designing an evaluation methodology

Once you have carried out the above-mentioned steps (ideally in a team), you will have gathered enough information to design your evaluation methodology. You will have decided which methods you need and which data you have to collect, and ideally you will already have allotted the corresponding responsibilities to the assigned staff so that everybody knows what his or her role is in the process.

If you put this information together in a document, it is also a good opportunity to share it with your donors or potential donors. A thought-through evaluation methodology shows that you and your organization are very familiar with the working area of your project, have put a lot of thought into the design, and are able and willing to critically analyze your project interventions. It creates transparency and thus more reason for the donors to trust you and your organization. It also establishes common ground with respect to expectations for the final evaluation report and gives all stakeholders the opportunity to add input if needed and desired.

Of course, designing the methodology is only the first step. Throughout the project, you have to be careful that it also gets implemented according to the plan and that no big problems arise. You can adjust your strategy if need be, but you should always be able to plausibly explain the reasons for the necessary adjustments to your donors and stakeholders.

About the author


Eva is based in Germany and has worked for nearly a decade with NGOs on the grassroots level in Nepal in the field of capacity development and promotion of sustainable agricultural practices. Before that, she worked in South America and Europe with different organizations. She holds a Ph.D. in geography and her field of research was sustainability and inclusion in development projects.




Understanding Evaluation Methodologies: Methods and Techniques for Assessing Performance and Impact


This article provides an overview and comparison of the different types of evaluation methodologies used to assess the performance, effectiveness, quality, or impact of services, programs, and policies. There are several methodologies, both qualitative and quantitative, including surveys, interviews, observations, case studies, focus groups, and more. In this article, we will discuss the most commonly used qualitative and quantitative evaluation methodologies in the M&E field.

Table of Contents

  • Introduction to Evaluation Methodologies: Definition and Importance
  • Types of Evaluation Methodologies: Overview and Comparison
  • Qualitative Methodologies in Monitoring and Evaluation (M&E)
  • Quantitative Methodologies in Monitoring and Evaluation (M&E)
  • Choosing the Right Evaluation Methodology: Factors and Criteria
  • Our Conclusion on Evaluation Methodologies

1. Introduction to Evaluation Methodologies: Definition and Importance

Evaluation methodologies are the methods and techniques used to measure the performance, effectiveness, quality, or impact of various interventions, services, programs, and policies. Evaluation is essential for decision-making, improvement, and innovation, as it helps stakeholders identify strengths, weaknesses, opportunities, and threats and make informed decisions to improve the effectiveness and efficiency of their operations.

Evaluation methodologies can be used in various fields and industries, such as healthcare, education, business, social services, and public policy. The choice of evaluation methodology depends on the specific goals of the evaluation, the type and level of data required, and the resources available for conducting the evaluation.

The importance of evaluation methodologies lies in their ability to provide evidence-based insights into the performance and impact of the subject being evaluated. This information can be used to guide decision-making, policy development, program improvement, and innovation. By using evaluation methodologies, stakeholders can assess the effectiveness of their operations and make data-driven decisions to improve their outcomes.

Overall, understanding evaluation methodologies is crucial for individuals and organizations seeking to enhance their performance, effectiveness, and impact. By selecting the appropriate evaluation methodology and conducting a thorough evaluation, stakeholders can gain valuable insights and make informed decisions to improve their operations and achieve their goals.

2. Types of Evaluation Methodologies: Overview and Comparison

Evaluation methodologies can be categorized into two main types based on the type of data they collect: qualitative and quantitative. Qualitative methodologies collect non-numerical data, such as words, images, or observations, while quantitative methodologies collect numerical data that can be analyzed statistically. Here is an overview and comparison of the main differences between qualitative and quantitative evaluation methodologies:

Qualitative Evaluation Methodologies:

  • Collect non-numerical data, such as words, images, or observations.
  • Focus on exploring complex phenomena, such as attitudes, perceptions, and behaviors, and understanding the meaning and context behind them.
  • Use techniques such as interviews, observations, case studies, and focus groups to collect data.
  • Emphasize the subjective nature of the data and the importance of the researcher’s interpretation and analysis.
  • Provide rich and detailed insights into people’s experiences and perspectives.
  • Limitations include potential bias from the researcher, limited generalizability of findings, and challenges in analyzing and synthesizing the data.

Quantitative Evaluation Methodologies:

  • Collect numerical data that can be analyzed statistically.
  • Focus on measuring specific variables and relationships between them, such as the effectiveness of an intervention or the correlation between two factors.
  • Use techniques such as surveys and experimental designs to collect data.
  • Emphasize the objectivity of the data and the importance of minimizing bias and variability.
  • Provide precise and measurable data that can be compared and analyzed statistically.
  • Limitations include potential oversimplification of complex phenomena, limited contextual information, and challenges in collecting and analyzing data.

Choosing between qualitative and quantitative evaluation methodologies depends on the specific goals of the evaluation, the type and level of data required, and the resources available for conducting the evaluation. Some evaluations may use a mixed-methods approach that combines both qualitative and quantitative data collection and analysis techniques to provide a more comprehensive understanding of the subject being evaluated.

3. Qualitative Methodologies in Monitoring and Evaluation (M&E)

Qualitative methodologies are increasingly being used in monitoring and evaluation (M&E) to provide a more comprehensive understanding of the impact and effectiveness of programs and interventions. Qualitative methodologies can help to explore the underlying reasons and contexts that contribute to program outcomes and identify areas for improvement. Here are some common qualitative methodologies used in M&E:

Interviews

Interviews involve one-on-one or group discussions with stakeholders to collect data on their experiences, perspectives, and perceptions. Interviews can provide rich and detailed data on the effectiveness of a program, the factors that contribute to its success or failure, and the ways in which it can be improved.


Observations

Observations involve the systematic and objective recording of behaviors and interactions of stakeholders in a natural setting. Observations can help to identify patterns of behavior, the effectiveness of program interventions, and the ways in which they can be improved.

Document review

Document review involves the analysis of program documents, such as reports, policies, and procedures, to understand the program context, design, and implementation. Document review can help to identify gaps in program design or implementation and suggest ways in which they can be improved.

Participatory Rural Appraisal (PRA)

PRA is a participatory approach that involves working with communities to identify and analyze their own problems and challenges. It involves using participatory techniques such as mapping, focus group discussions, and transect walks to collect data on community perspectives, experiences, and priorities. PRA can help ensure that the evaluation is community-driven and culturally appropriate, and can provide valuable insights into the social and cultural factors that influence program outcomes.

Key Informant Interviews

Key informant interviews are in-depth, open-ended interviews with individuals who have expert knowledge or experience related to the program or issue being evaluated. Key informants can include program staff, community leaders, or other stakeholders. These interviews can provide valuable insights into program implementation and effectiveness, and can help identify areas for improvement.


Ethnography

Ethnography is a qualitative method that involves observing and immersing oneself in a community or culture to understand their perspectives, values, and behaviors. Ethnographic methods can include participant observation, interviews, and document analysis, among others. Ethnography can provide a more holistic understanding of program outcomes and impacts, as well as the broader social context in which the program operates.

Focus Group Discussions

Focus group discussions involve bringing together a small group of individuals to discuss a specific topic or issue related to the program. Focus group discussions can be used to gather qualitative data on program implementation, participant experiences, and program outcomes. They can also provide insights into the diversity of perspectives within a community or stakeholder group.

Photovoice

Photovoice is a qualitative method that involves using photography as a tool for community empowerment and self-expression. Participants are given cameras and asked to take photos that represent their experiences or perspectives on a program or issue. These photos can then be used to facilitate group discussions and generate qualitative data on program outcomes and impacts.

Case Studies

Case studies involve gathering detailed qualitative data through interviews, document analysis, and observation, and can provide a more in-depth understanding of a specific program component. They can be used to explore the experiences and perspectives of program participants or stakeholders and can provide insights into program outcomes and impacts.

Qualitative methodologies in M&E are useful for identifying complex and context-dependent factors that contribute to program outcomes, and for exploring stakeholder perspectives and experiences. Qualitative methodologies can provide valuable insights into the ways in which programs can be improved and can complement quantitative methodologies in providing a comprehensive understanding of program impact and effectiveness.

4. Quantitative Methodologies in Monitoring and Evaluation (M&E)

Quantitative methodologies are commonly used in monitoring and evaluation (M&E) to measure program outcomes and impact in a systematic and objective manner. Quantitative methodologies involve collecting numerical data that can be analyzed statistically to provide insights into program effectiveness, efficiency, and impact. Here are some common quantitative methodologies used in M&E:

Surveys

Surveys involve collecting data from a large number of individuals using standardized questionnaires. Surveys can provide quantitative data on people’s attitudes, opinions, behaviors, and experiences, and can help to measure program outcomes and impact.

Baseline and Endline Surveys

Baseline and endline surveys are quantitative surveys conducted at the beginning and end of a program to measure changes in knowledge, attitudes, behaviors, or other outcomes. These surveys can provide a snapshot of program impact and allow for comparisons between pre- and post-program data.
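As a minimal sketch of the pre- and post-program comparison described above, the change between baseline and endline can be computed directly from survey scores. All numbers below are hypothetical:

```python
from statistics import mean

# Hypothetical baseline and endline scores (e.g. knowledge-test results,
# 0-100) collected from the same community before and after a program.
baseline = [42, 55, 38, 61, 47, 50, 39, 58]
endline = [58, 63, 52, 70, 55, 66, 49, 64]

baseline_avg = mean(baseline)
endline_avg = mean(endline)
absolute_change = endline_avg - baseline_avg
relative_change = absolute_change / baseline_avg * 100

print(f"Baseline mean: {baseline_avg:.1f}")
print(f"Endline mean:  {endline_avg:.1f}")
print(f"Change:        {absolute_change:+.1f} points ({relative_change:+.1f}%)")
```

Note that a raw before/after difference like this shows change, not attribution; ruling out outside influences requires a comparison group, as in the RCT design below.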

Randomized Controlled Trials (RCTs)

RCTs are a rigorous quantitative evaluation method that involves randomly assigning participants to a treatment group (which receives the program) and a control group (which does not), and comparing outcomes between the two groups. RCTs are often used to assess the impact of a program.
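The treatment-versus-control comparison at the heart of an RCT can be sketched with Python's standard library. The scores below are made up, and Welch's t statistic is shown as one common way to scale the mean difference; a full analysis would also compute degrees of freedom and a p-value:

```python
from math import sqrt
from statistics import mean, variance

# Hypothetical outcome scores: one group received the program (treatment),
# one did not (control), with participants assigned at random.
treatment = [72, 68, 75, 80, 66, 74, 71, 78]
control = [65, 60, 70, 62, 68, 64, 61, 66]

diff = mean(treatment) - mean(control)

# Welch's t statistic: the mean difference divided by its standard error,
# using each group's own sample variance.
se = sqrt(variance(treatment) / len(treatment) + variance(control) / len(control))
t_stat = diff / se

print(f"Treatment mean: {mean(treatment):.1f}")
print(f"Control mean:   {mean(control):.1f}")
print(f"Difference:     {diff:+.1f}  (t = {t_stat:.2f})")
```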

Cost-Benefit Analysis

Cost-benefit analysis is a quantitative method used to assess the economic efficiency of a program or intervention. It involves comparing the costs of the program with the benefits or outcomes generated, and can help determine whether a program is cost-effective or not.
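The cost-benefit comparison can be sketched as discounted cash flows; the yearly figures and the 5% discount rate below are purely illustrative:

```python
# Hypothetical cost-benefit sketch: compare a program's discounted costs
# and benefits over a three-year horizon.
costs = [50_000, 20_000, 20_000]      # spending per year
benefits = [10_000, 60_000, 80_000]   # benefits realized per year
discount_rate = 0.05

def present_value(cash_flows, rate):
    """Discount a list of yearly cash flows back to year 0."""
    return sum(cf / (1 + rate) ** year for year, cf in enumerate(cash_flows))

pv_costs = present_value(costs, discount_rate)
pv_benefits = present_value(benefits, discount_rate)

bcr = pv_benefits / pv_costs   # benefit-cost ratio
npv = pv_benefits - pv_costs   # net present value

print(f"Benefit-cost ratio: {bcr:.2f}")
print(f"Net present value:  {npv:,.0f}")
```

A benefit-cost ratio above 1 (equivalently, a positive net present value) suggests the program is cost-effective under the assumed discount rate.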

Performance Indicators

Performance indicators are quantitative measures used to track progress toward program goals and objectives. These indicators can be used to assess program effectiveness, efficiency, and impact, and can provide regular feedback on program performance.
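A performance-indicator readout can be as simple as comparing actuals against targets. The indicator names, targets, and the 80% "on track" threshold below are all hypothetical choices for illustration:

```python
# Hypothetical indicator tracker: progress toward targets per indicator.
indicators = {
    "teachers trained": {"target": 120, "actual": 90},
    "schools reached": {"target": 15, "actual": 15},
    "learning kits delivered": {"target": 500, "actual": 310},
}

for name, values in indicators.items():
    progress = values["actual"] / values["target"] * 100
    status = "on track" if progress >= 80 else "behind"
    print(f"{name:<24} {progress:5.1f}%  {status}")
```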

Statistical Analysis

Statistical analysis involves using quantitative data and statistical methods to analyze data gathered from various evaluation methods, such as surveys or observations. Statistical analysis can provide a more rigorous assessment of program outcomes and impacts and help identify patterns or relationships between variables.

Experimental designs

Experimental designs involve manipulating one or more variables and measuring the effects of the manipulation on the outcome of interest. Experimental designs are useful for establishing cause-and-effect relationships between variables, and can help to measure the effectiveness of program interventions.

Quantitative methodologies in M&E are useful for providing objective and measurable data on program outcomes and impact, and for identifying patterns and trends in program performance. Quantitative methodologies can provide valuable insights into the effectiveness, efficiency, and impact of programs, and can complement qualitative methodologies in providing a comprehensive understanding of program performance.

5. Choosing the Right Evaluation Methodology: Factors and Criteria

Choosing the right evaluation methodology is essential for conducting an effective and meaningful evaluation. Here are some factors and criteria to consider when selecting an appropriate evaluation methodology:

  • Evaluation goals and objectives: The evaluation goals and objectives should guide the selection of an appropriate methodology. For example, if the goal is to explore stakeholders’ perspectives and experiences, qualitative methodologies such as interviews or focus groups may be more appropriate. If the goal is to measure program outcomes and impact, quantitative methodologies such as surveys or experimental designs may be more appropriate.
  • Type of data required: Qualitative methodologies collect non-numerical data, such as words, images, or observations, while quantitative methodologies collect numerical data that can be analyzed statistically. The type of data required will depend on the evaluation goals and objectives.
  • Resources available: Resources such as time, budget, and expertise can also influence the selection. Some methodologies require more resources, such as specialized expertise or equipment, while others are more cost-effective and easier to implement.
  • Accessibility of the subject being evaluated: The accessibility of the subject being evaluated, such as the availability of stakeholders or data, can also influence the selection. For example, if stakeholders are geographically dispersed, remote data collection methods such as online surveys or video conferencing may be more appropriate.
  • Ethical considerations: Ethical considerations, such as ensuring the privacy and confidentiality of stakeholders, should also be taken into account. Some methodologies, such as interviews or focus groups, may require more attention to ethical considerations than others.
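One informal way to apply criteria like these is a weighted scoring exercise. The sketch below is purely illustrative: the criterion weights, candidate methodologies, and 1–5 suitability scores are all hypothetical and would come from the evaluation team in practice:

```python
# Hypothetical criterion weights (summing to 1.0) and 1-5 suitability
# scores for each candidate methodology; all values are illustrative.
weights = {"fits goals": 0.4, "data type": 0.2, "resources": 0.2,
           "access": 0.1, "ethics": 0.1}

candidates = {
    "interviews":   {"fits goals": 5, "data type": 4, "resources": 2, "access": 3, "ethics": 3},
    "survey":       {"fits goals": 3, "data type": 5, "resources": 4, "access": 5, "ethics": 4},
    "experimental": {"fits goals": 4, "data type": 5, "resources": 1, "access": 2, "ethics": 2},
}

def score(method_scores):
    """Weighted sum of a methodology's criterion scores."""
    return sum(weights[c] * s for c, s in method_scores.items())

# Rank methodologies from best to worst weighted score.
ranked = sorted(candidates, key=lambda m: score(candidates[m]), reverse=True)
print(ranked)
```

A scoring table like this does not replace judgment, but it makes the trade-offs between criteria explicit and discussable.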

Overall, choosing the right evaluation methodology depends on a variety of factors and criteria, including the evaluation goals and objectives, the type of data required, the resources available, the accessibility of the subject being evaluated, and ethical considerations. Selecting an appropriate methodology can ensure that the evaluation is effective, meaningful, and provides valuable insights into program performance and impact.

6. Our Conclusion on Evaluation Methodologies

It’s worth noting that many evaluation methodologies use a combination of quantitative and qualitative methods to provide a more comprehensive understanding of program outcomes and impacts. Both qualitative and quantitative methodologies are essential in providing insights into program performance and effectiveness.

Qualitative methodologies focus on gathering data on the experiences, perspectives, and attitudes of individuals or communities involved in a program, providing a deeper understanding of the social and cultural factors that influence program outcomes. In contrast, quantitative methodologies focus on collecting numerical data on program performance and impact, providing more rigorous evidence of program effectiveness and efficiency.

Each methodology has its strengths and limitations, and a combination of both qualitative and quantitative approaches is often the most effective in providing a comprehensive understanding of program outcomes and impact. When designing an M&E plan, it is crucial to consider the program’s objectives, context, and stakeholders to select the most appropriate methodologies.

Overall, effective M&E practices require a systematic and continuous approach to data collection, analysis, and reporting. With the right combination of qualitative and quantitative methodologies, M&E can provide valuable insights into program performance, progress, and impact, enabling informed decision-making and resource allocation, ultimately leading to more successful and impactful programs.



Project evaluation methodologies and techniques

Person as author: Soumelis, Constantin G. Document code: SS.76/D97/A. ISBN: 92-3-101456-0. Collation: 137 p., illus. Language: English. Year of publication: 1977.


Project evaluation methodologies and techniques. Constantin G. Soumelis. Published in 1977 by the United Nations Educational, Scientific and Cultural Organization, 7 Place de Fontenoy, 75700 Paris. Printed by Imprimerie Maury, 12102 Millau. ISBN 92-3-101456-0 (French edition: 92-3-201456-4). © Unesco 1977. Printed in France.

Preface

Despite the terms of reference for the preparation of this report which had been initially envisaged, it was only after long discussions both within the Unesco Secretariat and during a symposium on evaluation methods held in Washington D.C. in September 1976 that the general objectives of this monograph were established. The difficulties encountered when attempting to specify the objectives were due to the following factors:

1. the very broad meaning of "evaluation" and/or the "evaluation process";
2. the various, often conflicting, functions of evaluation;
3. the multiplicity and diversity of direct and indirect programme objectives, usually unspecified or extremely vague, against which the degree of success of a programme has to be assessed;
4. the diversity of those interested in the evaluation results, being the programme sponsors, managers, evaluators, or clients, with varying and very often conflicting objectives;
5. the vast literature which exists on the subject matter covering all fields of social action programmes to which one could easily refer to answer specific questions; and
6. the multi-dimensionality of the "real world" within which Unesco operates and its projects are implemented.

All the above realities had to be taken into account in order that this report could contribute to the general effort on project evaluation without running the risk of becoming too repetitive. In the first instance an effort was made to identify the audience of this monograph according to the views expressed by the Unesco Secretariat, which mainly reflect organizational needs.
Among possible audiences shown in Table 1, it becomes evident that first priority was given to those responsible for project design and project management (including those managerially responsible for project evaluation) in both the project contractor (Unesco) and the host country, i.e. the country in which the project was intended to be implemented. Second priority was given to sector planners. Lowest priority was given, however, to those undertaking the evaluation, i.e. the "evaluators". This neglect of project evaluators was justified, firstly, because of the existing need to link evaluation to project design and, secondly, because it was assumed that the "evaluator" would be an expert in his field, with a knowledge of possible evaluation research methodologies, and capable of carrying out an evaluation research study. The monograph, however, had to provide the project manager with adequate understanding of the problems involved in evaluation studies, thus enabling him to collaborate positively with the evaluator.

TABLE 1. The degree of importance of the monograph to possible audiences

                                      Project sponsor   Project contractor (Unesco)   Host country
Programme planner                            3                     3                       2
Programme manager                            3                     3                       2
Sector planner                               3                     2                       2
Sector specialist                            3                     3                       3
Project designer                             2                     1                       1
Project manager (responsible also
  for project evaluation)                    2                     1                       1
Project evaluator (the researcher)                                 4                       4

The above considerations prescribed the context within which this monograph was written and suggested that emphasis be placed on the "managerial" function of evaluation. The "managerial" function of evaluation was given a broad meaning which went beyond the monitoring of a specific project; it was expanded to cover all organizational management levels by allowing evaluation results to contribute to the entire range of policy-decision-making at both Unesco and the national level. The monograph is structured into three main parts.
In Part I, after a general discussion on the function of evaluation, the various types of evaluation are discussed with reference to the various stages of the life cycle of a project: from its conception and selection to its implementation and operation. Specific evaluation tests have also been suggested. The second part describes in a general way various problems involved in evaluation design and suggests a certain sequence of steps to be followed. The third part is divided into two chapters. In the first chapter an attempt is made to suggest a general way of breaking down a social system, meaningful for preparing an evaluation design for both macro and micro systems' performance evaluation. The second chapter is devoted to a discussion pertaining to the evaluation design of an experimental programme. In both chapters it was the educational system which was chosen for demonstration purposes. This, of course, was not accidental. It was done deliberately for three reasons: the importance of educational projects in Unesco's repertory of social action programmes, the existing rich literature in educational evaluation, and the author's relatively greater familiarity with the educational sector. In both chapters an effort was made to alert the project evaluation designer to possible pitfalls due mainly to the traditions of educational research evaluation. In general, the monograph has been designed in such a way as to lay the groundwork for more detailed sectoral effort in the future. The continuation of the work on evaluation toward this direction was felt absolutely necessary as a response to the present and most likely future demand for project evaluation at all organizational and/or governmental departments with operational activities.
It is hoped to contribute to the removal of complications and obstacles, in both the carrying on of evaluation research and in the utilization of evaluation results, which exist despite the effort and money which is put into evaluation studies. If one had to search for any messages that the report attempts to convey to those responsible for social action policies it seems that they would be the following two. The first reiterates, of course, the value of project evaluation for project management in its broader sense. In this respect the report considers evaluation, together with planning, as an indispensable function of project management. The second refers to the need to build into the project design, not just general "evaluation clauses", which place all the responsibility for the design of the evaluation studies on the "evaluator", but well-prepared evaluation principles and guidelines. In other words, the report suggests that evaluation designs, whether for project monitoring purposes and/or impact assessment, be prepared simultaneously with the overall project design. Only in this way will the objectives of the evaluation be clarified and the cost of the evaluation effort be considered explicitly in relation to the evaluation's objectives. This, of course, does not suggest that social action programmes should not be evaluated in any other manner. The method only considers the viewpoint of the agency(ies) responsible for the implementation of the project.

Contents

Part One. General considerations

Introduction: Conceptual considerations
  Evaluation and the evaluation process
  Project evaluation
  Why project evaluation?
  Evaluation and planning
  The deciding module
  Purpose—objectives—goals
  Decision-making levels
Project evaluation stages
  Introduction
  Evaluation of system performance
    Outside tests
    Inside tests
  Project evaluation for selection purposes
    The planning selection stage
    The experimental stage
  Evaluation during the implementation stage
  Operation stage and/or outcome evaluation

Part Two. Evaluation design considerations

Introduction: who designs the evaluation of a project?
General aspects in project evaluation design
  Formal vs. informal evaluation
  Inside vs. outside evaluators
  The roles of project administrator and project evaluator
  Problems of communication between project administrator and project evaluator
  The role of evaluation: evaluation for whom?
  Project objectives vs. evaluation objectives
  Identification of project goals: specificity, clarity and relevance
  Setting of evaluation criteria: the role of social indicators
  Selecting the most appropriate methodology and technique
  The cost of evaluation as a criterion for assessing the worth of performing or not the evaluation
  Evaluation information systems
  Project evaluability
Steps in the design of project evaluation
  Set the boundaries of the "system"
  Identify the project's objectives
  Identify the evaluation objectives
  Break down the project (or system) into meaningful components, activities, etc.
  Identify the goals of each of the identified components
  Identify the relationships existing among the various components
  Identify the inputs and outputs of the system and its components
  Identify the most important processes to be considered
  Identify the appropriate evaluation tests
  Set the appropriate evaluative criteria
  Describe the reporting system
  Estimate the project's evaluability and the cost of evaluation

Part Three. Examples of project evaluation design

Introduction
Evaluation design of educational project operation or educational system performance
  The educational system and its sub-system
  Evaluation approaches
  Macro-evaluation of the transformation-delivery sub-system
  Micro-evaluation of the transformation-delivery sub-system
  The teacher as an element of the transformation sub-system
  The pupil as an element of the transformation sub-system
  The evaluation (or control) sub-system
    Macro-evaluation
    Micro-evaluation
    The teacher as an element of the evaluation sub-system
    The pupil as an element of the evaluation sub-system
    The headmaster as an element of the evaluation sub-system
Evaluation design of experimental programmes
  The curriculum design stage
  The curriculum experimentation stage
  Summative evaluation stage
Summary and conclusions
Bibliography
Annexes
  Annex A. Questionnaire for designing a micro-educational evaluation
  Annex B. Three conversations and a commentary

Part One. General considerations

Introduction: Conceptual considerations

Evaluation and the evaluation process

"Evaluation" is defined in Webster's Third New International Dictionary as "the act or result of evaluating", i.e. of "examining and judging the worth, quality, significance, amount, degree or condition of something". It is apparent that evaluation is a very common act which takes place continuously in everyday life.
Everything is subject to evaluation and, in fact, even the most ordinary of our deeds are constantly evaluated formally and/or informally by ourselves and/or by others. Our personal behaviour, whether within our own family or in our work or elsewhere, is evaluated regularly if not continuously. The most obvious reason for evaluating something or someone is to estimate worth, quality, importance, relevance, performance, etc., with a view to pricing, rating, correcting, improving or changing. Because of the relative nature of the above concepts, evaluation as a process implies a comparison of the object under evaluation to another similar object used as a standard of comparison whose qualities are well known to the evaluator. Such standards could be either quantitative (size, weight, etc.) or qualitative (good, bad, beautiful, ugly, moral, immoral, etc.). In both cases standards are man-made and, therefore, as evaluation criteria they do not have universal value. This is particularly true for the qualitative criteria which are entirely subjective but which, within a particular cultural environment and within a specific time period, may acquire increased acceptance. It is clear, therefore, and there does not seem to be any disagreement on this, that because evaluation implies a comparison of what is to be evaluated with something which may be considered as a criterion, i.e. an ideal state, an acceptable behaviour, an anticipated behaviour, an intended result or goal, etc., there will be a need for collecting all relevant information on both the exact state of the object for evaluation and the criterion to be used for comparison.
The difficulties involved, in both the selection of all the necessary information and the comparison itself, will vary according to the nature of the object to be evaluated, ranging from something as simple as a fruit or a piece of furniture; something more complex, such as an automobile or an aircraft engine; the behaviour of individuals (a pupil in a classroom or an employee in "his" work); small human systems such as a family, to larger complex social systems, such as organizations, large sectors of society or society at large.

Project evaluation

In our particular case we shall restrict ourselves to "projects" or specific operational activities purposefully undertaken by organizations such as Unesco. Projects would also vary in terms of complexity. The building of a school unit is a project, as is the development of a curriculum or the development of radio education systems. It is apparent, however, that a school building is much less complex and much more "tangible" than a radio education system or even a curriculum: it will therefore be necessary in each case to be in a position to identify the project and all its aspects, subject to eventual evaluation.

Why project evaluation?

According to the general definition of evaluation, it is evident that there may be several reasons for project evaluation. For example, there may be need to judge the importance of a foreseen project to those for whom the project is intended. There may also be need for estimating the cost and/or eventual success of a project in relation to the total amount of money available for a particular task and/or in relation to the cost of alternative projects. Even when the project is under implementation or experimentation there may be a need for assessing the successful implementation of the various components of the project.
After the project is put into operation there may be a need to appraise the degree of success, viewing it in relation to the initial goals of the project. In addition to all these factors, there may also be a need to find out the relevance of the project as well as any side-effects (good or bad) which the project might have caused. Whether a project will be subjected to all the above types of evaluation is, of course, a matter to be decided by those responsible for the project and/or by those affected by it. Project evaluation, therefore, should be seen both from the point of view of the organizations responsible for the project and from that of the recipient or client. The need for such a double, and sometimes triple, viewpoint arises mainly from an eventual difference in the specific interest placed on the project by the various parties involved and/or the different evaluation criteria that these parties may wish to employ for evaluation. In our particular case, there will be a need to explicitly consider at least three possible agents with specific interests, perhaps different from each other, in the type and results of the evaluation. These are: Unesco, seen as the project contractor; a possible financing agency outside Unesco's general budget, such as the World Bank or the UNDP (United Nations Development Programme), etc.; and the host country, seen as the client. Although it is assumed here that during the project design stage the objectives and goals of all three agents are simultaneously taken into account, it could happen, however, that the type of evaluation demanded by each of the three parties, as well as the criteria employed for the evaluation, might differ. The contracting agency, for example, might wish the evaluation to assess the success of a project vis-à-vis both its specific goals and the general policies of the agency.
On the other hand, the financing agency might prefer to check the final cost of the project and see whether it exceeded the initial cost targets. In its turn, the host country might prefer to assess the impact which the project had in promoting its development objectives, irrespective, perhaps, of whether the project did meet cost targets and/or the contractor's policies. Any compromise in favour of only one of the above agents was felt to violate the purpose of this report. For this reason a comprehensive view of evaluation was adopted, but at the same time it was felt necessary, especially for the management of the monograph, to restrict the discussion to what will be referred to here as the "managerial" function of evaluation. In other words, relative emphasis was placed on evaluation methods and techniques used for project management purposes. At the same time an effort was made to link the evaluation function to the planning function.

Evaluation and planning

In fact, evaluation[1] and planning functions are inseparable and indispensable functions of management. Evaluation starts operating early, at the stage of project conception, and continues throughout the life of the project, i.e. its experimentation, implementation and operational stages. It is through evaluation that information about the eventual or actual results of a decision and/or action, implied in a project, is fed back to the project planning system in order that the necessary corrective measures may be undertaken. Since we are dealing with human decision-making systems (such as an organization), which belong to the class of goal-seeking or purposive systems, it will be necessary to aid the discussion by means of a generalized block-and-flow module shown in Diagram 1.

The deciding module

The module is made up of four basic components: Objectives, Evaluative criteria, Analysis/synthesis and Evaluation.
Objectives guide the behaviour of the system which strives to attain them through its output (decisions/acts). Depending on the degree of the system's autonomy, the objectives are either set by the system itself or prescribed by its supra-system.

[1] In modern management terminology the evaluation function is called the "control" function.

Diagram 1. The deciding module: supra-systemic and environmental inputs and supra-system values feed the objectives and evaluative criteria; analysis/synthesis generates alternatives, which evaluation accepts or feeds back.

Analysis/synthesis (or planning) is the function whereby alternative courses of action, through which the system is supposed to attain its objectives, are generated. Evaluation is the function whereby the eventual and/or actual results of a specific course of action are assessed. The assessment is done against the evaluative criteria. Evaluative criteria are set independently of the specific alternatives and are stated in parameters which furnish direct measurements on the results of an alternative course of action vis-à-vis the objectives of the system, which may not allow direct measurement. The output of the system, which is actually the output of the evaluation sub-system, could be seen as a "yes-go" or "no-stop" signal followed by all necessary explanatory information as to the success or failure of the anticipated or actual act. In the case of a "yes-go" signal, the decision is taken or the next act is being considered for execution. In the case of a "no-stop" signal, the information is fed back to both the synthesis/analysis sub-system and the objective sub-system. In this way the evaluation process contributes to the "learning" of the system.
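The deciding module described here is essentially a feedback control loop, and it can be sketched in code. The sketch below is a minimal illustration under invented assumptions: the "plan" is just a number that improves with each attempt, and the evaluative criterion is a fixed threshold; function names are hypothetical, not from the monograph:

```python
def analysis_synthesis(attempt):
    """Generate a candidate course of action.
    Here, simply a numeric result that improves with each planning attempt."""
    return 50 + attempt * 10

def evaluate(result, criterion=70):
    """Assess the anticipated result against the evaluative criterion,
    emitting the monograph's 'yes-go' or 'no-stop' signal."""
    return "yes-go" if result >= criterion else "no-stop"

attempt, signal = 0, "no-stop"
while signal == "no-stop":  # a "no-stop" feeds back into planning
    candidate = analysis_synthesis(attempt)
    signal = evaluate(candidate)
    attempt += 1

print(f"accepted plan {candidate} after {attempt} attempt(s)")
```

The loop's structure mirrors the module: planning proposes, evaluation judges against criteria set independently of the alternatives, and a rejection re-enters the planning cycle.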
Such learning, depending upon the autonomy of the system, can either be simple learning, which is a goal-seeking feedback consisting of adjusting responses, or a more complex type of learning, which is a goal-changing feedback, allowing for readjustments of the system's internal arrangements implied in its original goals so that the system could change goals or set new ones.[1]

Purpose—objectives—goals

"Objective" was used in its very general form to mean the desired state of a system. It is necessary, however (as is also done in the relevant literature[2]), to distinguish objective from purpose and goal in order to associate each one to a hierarchy of objectives corresponding to the organization's decision-making (or management) levels. Since there is no consensus in the terminology the following hierarchy in the order of objectives will be used here:

Purpose will be the highest order goal justifying the existence of an organization (or a social system).
Objectives will imply preferred overall organizational states which, if achieved, will attain the purpose of the organization.
Goals will be less general than objectives and will suggest a particular pattern of action leading to the achievement of objectives. Usually, they will refer to parts of the organization.
Targets will be the least general concept, implying a much greater specificity than goal in the existing causal relationship between envisaged results and action. They are usually stated in quantitative terms and can be directly used as evaluative criteria.

The above hierarchy of objectives is directly related to organizational decision-making levels. The function of the overall management of an organization is to see to it that overall organizational behaviour is consistent with the purpose of the organization and leads to its attainment.
Decision-making levels

The above directly suggests the existence of three conceptually well differentiated decision-making levels, namely the normative or policy, the strategic or programme and the tactical or project levels (shown in Diagram 2).

[1] Deutsch, K., The Nerves of Government, New York, N.Y., The Free Press, 1967 (paper), p. 92.
[2] See, for example: Ozbekhan, H., "Toward a general theory of planning" in E. Jantsch (ed.), Perspective of planning, Paris, OECD, 1969, p. 125.

Diagram 2. Planning—implementation—evaluation—operation stages: inputs (desired future, actual system performance, available technology, system capacity, constraints) feed analysis-synthesis, which generates alternative strategies; evaluation judges them against criteria such as feasibility, consistency and efficiency, with feedback loops down to operational planning.

The output of the policy-making level is in the form of policy-objectives which describe preferred general overall organizational states, providing simultaneously the guidelines for the preparation of programmes. Programmes are, in turn, prepared in terms of more specific operational goals which take the form of specific projects. Projects are, in their turn, translated into much more concrete steps of action, the project components, amenable directly to implementation. It is, of course, understood that there exists a fourth level responsible for project implementation, namely the administrative level.
All the above four decision-making levels are endowed with both planning and evaluation functions (sub-systems) helping them to foresee (i.e. correct ex-ante anticipated negative results) and learn (correct ex-post actual behaviour or results). It has been said earlier that the type of learning, i.e. whether it will be a simple goal-seeking feedback or a goal-changing feedback, will depend on the degree of autonomy enjoyed by each level. Since a system's autonomy is related to its ability to exercise self-control over its own decisions, it is apparent that the degree of autonomy is reduced as we move from the policy level to the administrative level. This then will mean that in project implementation, where only the tactical and administrative levels are involved, the possibility of a goal-changing feedback is reduced unless there are ways to feed the relative information back to the strategic or policy levels. This observation is of great importance for both overall organizational effectiveness as well as project implementation success. In the real organizational world, however, this is usually very difficult to achieve because of existing structural compartmentalization and ineffective communication. The above discussion was felt necessary in order to provide a conceptual frame of reference for the overall discussion of project evaluation. The need for such a framework arises, on the one hand, from the natural interdependence which exists between the various decision-making levels and, on the other hand, from the organizational realities which lead to a disintegration rather than integration of these levels. This situation becomes more acute when project implementation takes place at a distance from the policy-making structures.[1]

[1] Increased concern about integrating these levels seems to be reflected in recent project design and project evaluation manuals or guidelines. See, for example, WHO, Health project management: a manual of procedures for formulating and implementing health projects, Geneva, 1974; AID, Evaluation handbook, Washington D.C., 1974 (second edition), and AID, Project evaluation guidelines, Washington D.C., 1974 (third edition).

Project evaluation stages

Introduction

The preceding discussion provided adequate evidence to support two major premises on which the following discussion will be based: firstly, the high degree of interrelation existing among the various decision-making levels and, secondly, the continuous nature of the evaluation process. This suggests the need to commence the discussion on project evaluation at the early stage of sectoral evaluation (or analysis) leading to the formulation of sectoral needs which, in turn, suggest possible projects. There will then be need to discuss project selection through experimentation or without it, proceeding to project implementation and finally to impact or outcome evaluation. This sequence in project evaluation is shown in Diagram 3. Past experience shows, however, that project evaluation was restricted to the experimentation phase to assess the impact of new projects which were implemented on an experimental basis (formative evaluation) and/or the final outcome of a completed project (summative or impact evaluation). No effort was made, however, to link these types of evaluation to either the preceding project stages or to each other. By this means, project evaluation became almost an end in itself and the information generated was seldom used for project management or policy-making purposes. In the few cases where this had happened, evaluation was used rather as a political tool for backing or discouraging social action programmes than as a real management device.
For this monograph the primary purpose of project evaluation is to enable the agency (the organization) responsible for the project to increase its effectiveness in relation to the following:
(a) the selection of a project which could best serve the objectives of the agencies involved;
(b) the monitoring of activities during the implementation phase; and
(c) the correction, if possible, of implementation errors, the introduction of modifications necessary for the project's better adaptation and operation, and/or the complete discontinuation of the project should it have been found to have undesirable side-effects.

Project evaluation methodologies and techniques

[Diagram: a figure on evaluation design considerations (community health services, economic viability, HMO targets, research and monitoring activities) appears here in the source but is not recoverable from the scan.]

[...] its inputs.
The usual strategy followed in such cases, to enable the system to save energy, is to adapt its programmes and requirements to the most probable input, thus sacrificing the remaining ones. This means that when the most probable input to secondary technical education (E) is the one coming from secondary general, the programme of E is made less technical and more theoretical to facilitate the adaptation of the pupils coming from general education. The opposite will happen when the most probable input comes from lower technical: then the programme of E is less theoretical and more technical. In either case, a substantial number of pupils will have to make an effort to adapt; otherwise they have to leave the system or accept low performance. The same happens to all components which receive inputs from more than one component. The distortion of the input-output correspondence increases with the number of components through which an input has to pass. This indicates that the quality of output is a function not only of the system's (component's) effectiveness, but also of the input's quality. These considerations, however, are not taken into account in system performance evaluation.

In order to further dramatize the situation we can add the quality of another important input: the teacher. The educational system is the only instance where one of its inputs (the teacher) is simultaneously one of its outputs, produced by one of its structural components (teacher-training colleges). Obviously, there is a continuous circular relationship between pupils, graduates, teachers, pupils, etc., which may become a vicious circle if input-output quality specifications are not appropriate.

To conclude this discussion, a word about educational output qualifications (at various exit points) in relation to manpower requirements, as well as to the other functions of the individual in society.
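The claim that output quality depends jointly on a component's effectiveness and on the quality of its input, and that distortion grows with the number of components an input passes through, can be sketched as a toy calculation (illustrative only; the multiplicative model and all numbers are assumptions, not taken from the source):

```python
# Toy illustration: output quality as a product of input quality and each
# component's effectiveness, chained through successive components.
# Values lie in [0, 1]; the model and the figures are hypothetical.

def output_quality(input_quality, effectiveness):
    """Quality delivered by a single component."""
    return input_quality * effectiveness

def through_chain(input_quality, effectivenesses):
    """Pass an input through several components in sequence."""
    q = input_quality
    for e in effectivenesses:
        q = output_quality(q, e)
    return q

# Even with identical component effectiveness (0.9), quality degrades as
# the number of components an input must pass through increases:
one_stage = through_chain(0.8, [0.9])         # 0.72
three_stages = through_chain(0.8, [0.9] * 3)  # ~0.58
```

The sketch also shows why judging a component by its output alone is unfair: a low `output_quality` may reflect a poor `input_quality` rather than a poor `effectiveness`.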
Despite manpower planning efforts in many countries there is a profound mismatch, qualitative and quantitative, between the educational system's output and the socio-economic system's input specifications. This mismatch widens in countries where the economic sector progresses faster and the adaptive capacity of the educational system is very small. The magnitude of this mismatch should certainly serve as a crucial evaluative criterion in educational system macro-performance evaluation.

Micro-evaluation of the transformation-delivery sub-system

While macro-evaluation considers the overall performance of an educational system or any of its structural sub-systems, micro-evaluation looks mostly at what is happening in the classroom, and assesses the performance of teachers and pupils individually. This type of evaluation corresponds closely to the function of the "inspectorate", and also to what in educational circles is usually called "educational evaluation" in general, covering the evaluation of all types of educational programmes; it is usually based on the performance (achievement) of pupils.

Since evaluation can be performed on all elements and processes of the system and at all systemic levels, it is necessary to link micro- to macro-evaluation, at least schematically. This is attempted in Diagram 7.

[Diagram 7: the teacher as a transformation element and the student (or parents) as a control element, linked by outcome and feedback input.]

The nucleus component is the teacher/pupil(s) system. Teachers and pupils are elements of the teaching/learning (delivery/transformation) system, using several inputs and producing several outputs, to be discussed in detail below. The end result (outcome) of this teacher/pupil interaction is usually assessed in terms of pupil performance.
This evaluation can be performed by several people: the teacher, responsible for the pupil's learning (transformation); the pupil (beyond a certain age), whose objective is to learn (to be transformed); the schoolmaster, responsible for the functioning of the system; and the school inspector, an outsider to the school system, whose role is to evaluate objectively the performance of the teacher and, as a consequence, of the school as a whole in meeting the educational system's objectives, i.e. transforming the pupil according to specifications. The evaluation can also be undertaken by such elements as parents, various interested social groups, etc. It is evident that each of these evaluation agents can use their own evaluative criteria, corresponding to their own objectives. This was already made evident when discussing possible means of macro-evaluation.

The discussion on micro-evaluation will also be based on systems analysis, since it is perhaps the only approach with a general applicability which permits the analyst to enter a system gradually and to break it down systematically. In our particular case there is an initial effort to break down the nucleus component of the educational system's teaching/learning sub-system and, in turn, to discuss the evaluation of the system's operation in relation to the control sub-system of the educational system as referred to above.

It is hoped that the detailed analysis attempted here will help the educational programme evaluation designer in his work and, at the same time, demonstrate the complexity involved in the teaching/learning process, which renders a meaningful educational evaluation, if not impossible, at least hesitant and difficult.
The difficulty increases when the purpose of the evaluation is to assess the effectiveness of only one input, such as the course programme, the teaching method, etc., by looking at pupil performance (achievement testing), which is the end result of many factors. The consistent findings of many surveys are therefore not surprising; for example, Coleman's finding that very little of the variation in school performance was accounted for by differences associated with the school.1 Hayes and Grether also concluded that the difference in academic achievement across social class and race found at the sixth grade is not "attributable to what goes on in school, most of it comes from what goes on out of school".2

1. J. S. Coleman, Equality of educational opportunity, Washington D.C., U.S. Office of Education, 1966.
2. Quoted from U. Bronfenbrenner, "Is early intervention effective?" in Handbook of evaluation research, op. cit., p. 546.

The purpose of the teaching/learning (or transformation) process is to bring about some deliberate changes in the "pupil", which can cover changes in cognitive knowledge, creative ability, behavioural norms, etc. It seems, however, that traditionally the greatest importance was attached to cognitive knowledge. Pupil assessment is, therefore, usually done in relation to this goal. It is evident, however, that the identification of this type of goal is absolutely necessary when undertaking such an evaluation. A discussion on educational goals goes beyond the purpose of this report; educational evaluators and/or educational administrators should be familiar with this type of discussion and with the particularities of the educational system under evaluation. A word of warning is in order regarding the probably conflicting nature of some of these goals, necessitating their identification, and regarding a probable divergence between the goals of the educational system and the values (what is best) of the teacher responsible for the pupil's transformation. For example, many teachers may favour the development of pupil creativity over cognitive ability, the latter being the goal of the educational system through its textbooks, teaching practices, etc. In such a case pupils exposed to the influence of such a teacher will most probably do less well in school tests designed according to the goal of cognitive ability. Similarly, conflicting behaviour may be expected when a new practice is being introduced into the school system. Many teachers, directly or indirectly, will refuse to follow the new practice, as was the case with modern mathematics in primary schools. Here again, the assessment of pupil achievement based on tests designed in accordance with the new practice will be misleading with respect to the pupils' real achievement.

Both the teacher and the pupil are elements of the teaching/learning process, which cannot operate if either of the two elements is missing.1 At the same time, however, the teacher and the pupil are systems in themselves, i.e. they have their own goals and use various inputs to attain them. One of the teacher's goals is to "teach" (transform) the pupil and one of the pupil's goals is to "learn" (be transformed).2

THE TEACHER AS AN ELEMENT OF THE TRANSFORMATION SUB-SYSTEM

A. Inputs

In his effort to teach (transform) the pupil, the teacher uses the following inputs:
(a) His knowledge and experience,3 which have to do mostly with the following:
(I) the subject;
(II) teaching technique;
(III) educational psychology;
(IV) his pupils' cultural environment.
(b) Aids which will assist him to perform his role:
(I) books:
- textbooks;
- auxiliary books (for himself and the pupil).

1. In the case of self-learning the "pupil" is also his own teacher.
2.
It is evident that much depends on the correlation of these two goals and also on how much the teacher wants to teach and how much the pupil wants to learn. Such a discussion on teacher and pupil motivation goes beyond this book. It should, however, be taken seriously into consideration in educational evaluation.
3. It is assumed that experience complements knowledge positively, which may not necessarily always be the case. In fact, very often experience has an overall negative effect because of acquired prejudices, habits, etc. which hinder the individual in acquiring new knowledge.

(II) available educational technology:1
- audio-visual, including television;
- computers.
(III) laboratories, libraries, museums, etc.

(c) Curriculum
The curriculum is unquestionably the most important input. In its broader definition it encompasses "the total effort of the school to bring about desired outcomes in school and out-of-school situations".2 Such a definition, however, is non-functional. Efforts to narrow the definition suggest the retention of those curriculum elements which have a direct impact on in-school and, more precisely, in-classroom activities which directly relate to the teaching and learning processes. Although favouring the broad definition, Marklund nevertheless distinguished three main levels:3
Level 1: the external structure of the school, above all in respect of the number of grades, stages and divisions into different courses of study.
Level 2: time-tables and syllabuses with aims and content of subjects or groups of subjects.
Level 3: the teachers' instructional methods, the pupils' way of working, educational materials, study materials, and forms of evaluation.
For the purposes of the present report it seems practical to limit the curriculum to Level 2, since all the other elements are treated separately here, though all of them are considered as inputs to the transformation process.

For the effectiveness of the transformation process as a whole, which is usually the object of an outside system's performance evaluation, it will be necessary to distinguish curricula in relation to their degree of standardization. Curriculum standardization usually depends on the degree of the educational system's administrative centralization. In centralized systems curricula are centrally prepared and are standardized for broad use; in decentralized systems, the curriculum is less standardized in that the responsibility for its design lies with the responsible teacher, the school unit, the local educational authority, etc. In the case of standardized curricula, there will be a need to evaluate the following:

1. Educational technology could be seen and dealt with in a similar way to industrial technology. Technology constitutes the capital factor and the teacher the labour factor, which is strongly influenced both in quantity and quality by the capital factor. The massive introduction, for example, of television and computers into the teaching/learning process is expected to have a great impact, both quantitatively and qualitatively, upon the teachers (labour factor). The degree and the direction of this impact is still to be measured. The education system, as a service-producing one, may never lose its relatively labour-intensive character.
2. J. G. Saylor and W. M. Alexander, Curriculum planning for better teaching and learning, Rinehart, 1954, p. 3 (quoted in H. Taba, Curriculum development, theory and practice, Harcourt, Brace and World, 1962 (paper), p. 9).
3.
Sixten Marklund, "Frame factors and curriculum development" (working paper prepared for an international meeting held at Allerton Park, the University of Illinois, September 1971), quoted in S. Maclure, Styles of curriculum development, Paris, CERI, OECD, 1972, p. 12.

- whether existing curricula adequately reflect prevailing educational goals; this relates to the situation whereby educational goals, reflecting broader educational values, norms and needs, change faster than standardized curricula;
- whether all teachers who have to accept the curriculum are in fact capable of pursuing it; this relates to the continuous retraining of teachers in centralized systems to meet the new demands of revised or new curricula. A good example of this is the introduction of modern mathematics: many teachers had to accept to teach the new subject although they themselves had never had any training in, or even knowledge of, the subject matter.

With non-standardized curricula, the opposite may happen. Here it will be necessary to assess how fast curricula are modified, since it may be that school curricula will change together with the teacher, because the teacher would prefer to teach according to his own knowledge and values. Curricula changes, however, will often create an unstable school situation which may not serve the more constant educational goals of the school community.

(d) Time
(I) time spent in actual teaching;
(II) time spent in the teacher's home preparation;
(III) time spent in pupil evaluation (correction of pupils' exercises, informal discussions, etc.).
It is apparent that the implicit assumption concerning the time input is that the more time the teacher spends on teaching preparation, actual teaching and pupil evaluation, the better the teaching result (learning).
(e) Teacher motivation
In addition to the "tangible" inputs, it is also necessary to consider some non-tangible inputs which nevertheless play an important role in the overall result. The degree of a teacher's motivation should be linked with the professional, social and psychological satisfaction derived from his teaching assignment. Probable indicators for the above could be career possibilities, social status, level of income, etc.

(f) Pupil response
Pupil response relates to evaluation information on pupil achievement obtained by the teacher. Such information usually acts as a stimulus (positive or negative) on teacher performance.

(g) Pressure on the teacher
This is another important non-tangible input which is used to increase teacher motivation for better teaching. It is exercised formally by the teacher's superiors (schoolmaster, inspector, etc.) and informally by parents, pupils and the community at large. The following may be used as indicators for assessing the degree and nature of such pressure:
- frequency of formal inspections;
- parental interest shown by personal visits and/or group discussions with the teacher;
- pupils' demand for more or less work;
- the community's view of teachers in relation to their competence and school performance.

(h) Classroom conditions
Classroom conditions affect both teachers and pupils. Teachers often complain of bad classroom conditions, i.e. size, light, temperature, etc., which hinder them in their teaching effort. Classroom conditions, therefore, should be evaluated carefully.

(i) Teachers' health conditions
The importance of physical and mental health as a factor in the teaching effort is beyond any discussion. It will, therefore, be absolutely necessary when evaluating the system's performance to assess carefully the health of the teaching personnel.
The following may be used as indicators:
- medical services at the teachers' disposal;
- obligatory medical examinations;
- a substitute teacher service;
- the system's general behaviour towards teachers registered as being ill.

B. Outcome
Although micro-educational outcome evaluation is usually performed on the teaching/learning system's "final outcome" (the pupils' achievement or final degree of transformation), it may be advisable to consider the outcome of the teaching process separately. This is necessary because the final outcome may be seen as the result of two intermediate efforts or outcomes: the effort (and hence outcome) of the teacher, and that of the pupil. This directly suggests that if the quality of the final outcome is "good", such a result may have been achieved through various combinations of the intermediate outcomes, e.g. "very good" teaching effort/"mediocre" pupil effort, or "mediocre" teaching effort/"very good" pupil effort. Using this as a guide one could evaluate teacher performance independently of pupil achievement (the final outcome of the system). Such an evaluation, for example, is usually done by school inspectors when they sit in on classroom sessions and observe and listen to the teacher. In so doing inspectors are using their own model of how one should teach as an evaluative criterion. This is inevitable because the teacher's outcome is not measurable and hence not directly evaluable. Indirectly, one could assess the teacher's everyday efforts by separately assessing the various inputs used by him.

THE PUPIL AS AN ELEMENT OF THE TRANSFORMATION SUB-SYSTEM

What makes the educational system differ from other transformation systems (such as industry) is that the pupil, the "raw material" to be transformed, is also an important factor in the transformation. The extent to which the pupil contributes to his transformation depends on his previous development. Primary-school pupils, for example, contribute less than secondary-school pupils, and these in turn less than university students. As the pupil's transformation contribution increases, the teacher's importance as a transformation factor unavoidably decreases accordingly. This way of thinking helps in considering the optimum utilization of these two factors (given a certain technology), and in fact this is often taken into account in teacher and/or pupil loading.

The pupil, as a transformation element (though a system in himself), makes use of some inputs in his attempt to learn, in a similar way to the teacher. His outcome is the additional learning acquired (or the additional transformation he has undergone).

A. Inputs
The following are some of the most important inputs a pupil uses in his learning effort.

(a) Teacher's transformation effort (teacher's outcome)
This is a continuous but not necessarily homogeneous input. In other words, the teacher's effort may change. Such a change may have an additional positive or negative effect on the pupil.

(b) Aids
The pupil, like the teacher, has at his disposal such aids as:
(I) books:
- textbooks;
- auxiliary books (dictionaries, encyclopedias, etc.);
(II) various auxiliary instruments.

(c) Time
The pupil spends time studying. This can be broken down as follows:
(I) time spent in the classroom, during which the pupil is exposed to the direct teaching effort;
(II) time spent on homework;
(III) time spent on other competitive and/or complementary educational activities.

(d) Method of study
This has the same importance as teaching technique has for the teacher, and is usually influenced by the teacher, both for the classroom effort and for homework.

(e) Background
The pupil's past degree of transformation.
Some of the factors determining the pupil's background are the following:
(I) age;
(II) previous school attendance and performance;
(III) family environment;
(IV) the broader socio-cultural environment.

In the first year of entrance to a formal educational system "previous school attendance" is equal to zero and, in this case, age, family and the broader socio-cultural environment are the only factors determining the pupil's background. These factors are of great importance in setting first-year entrance requirements in schools. The usual practice in many educational systems is to let age alone be the decisive factor for entry. In a very diversified societal environment, however, it is to be expected that the backgrounds of children of the same age will differ, sometimes substantially. In turn, this results in a highly heterogeneous class in terms of children's capabilities to absorb and learn. If such heterogeneity prevails in a classroom it forces the teacher to address himself (i.e., to adapt his teaching effort) to the average child, usually at the expense of both those above and those below average. In systems where a well-developed pre-primary education exists, pupils entering primary education have some previous school attendance, intended to increase the homogeneity of the classes. The usual phenomenon, however, seems to be the opposite, since entry to pre-primary education is also based solely on age. Some school systems group pupils according to their capabilities as well as by age. Such a grouping, however, is mainly used to facilitate later screening rather than as a means to reinforce pupils' backgrounds. This is evident because such groupings pass on to higher grades, still bearing the same quality label.
(f) Inter-pupil relationships
It has been found that a positive relationship exists between pupils' performance (especially in the lower grades) and inter-pupil relationships in terms of games, exchanges of ideas, etc. This is a factor to be taken seriously into account when determining class size, ways of teaching, etc., although apparently it is systematically ignored (as is shown by the trend to continuously decrease class size in order to increase the teacher/pupil ratio).

(g) Pupils' motivation
The pupil's motivation to learn should be seen in relation to (1) the possibilities offered by a certain degree of formal learning in meeting his professional and associated goals, and (2) the satisfaction the pupil obtains from school. In many instances it was found that satisfaction or dissatisfaction with school affected pupils' long-term educational plans accordingly. For example, academically successful pupils in England left school at the age of 16 because, as they said, they were "fed up" with school.1 This makes apparent the importance of motivation in the pupil's study effort, which must therefore be taken into account when evaluating the system's performance.

1. G. Williams, "Individual demand for education: case study: United Kingdom", Paris, OECD, SME/ET.76.21 (mimeo).

(h) Pupil's success
This is information on the pupil's own success in his study effort, and should be seen as an output of the pupil's self-evaluation or of evaluation by his teachers and/or parents. It can contribute either positively or negatively to the pupil's motivation, but is intended to make him correct his performance accordingly, when necessary.

(i) Curricula
The curricula directly determine the nature of the pupil's performance. Very often pupils complain of a curriculum's lack of relevance to their personal aims. This brings us back to the goal-setting question.
Whose goals are curricula supposed to attain? In most standardized educational systems this distinction is not obvious. Curricula may, therefore, serve the educational system's goals (very often already obsolete), the teacher's goals, society's goals, or the goals (economic, cultural, religious, etc.) of some social systems, but not necessarily those of the individual, because a standardized system always sees the individual from the educational system's viewpoint (i.e., that of society at large), considering only some general features, needs, and/or obligations. This way of setting educational goals will inevitably create a conflict with the personalized goals of the pupils. Curricula, therefore, should be designed to take individual goals into account as well. Such goals should, in turn, be used for evaluation purposes.

(j) Classroom conditions
Classroom conditions play an important role in pupils' overall performance. Cold or overheated, dark or too sunny classrooms, uncomfortable desks, dismal colour schemes, etc. will certainly have an adverse effect upon desired pupil performance. The educational system's evaluator should therefore assess the prevailing classroom conditions in the schools. In addition, prevailing home conditions are also of great importance, especially for those curricula which demand a lot of homework. This directly suggests that in countries where home environments cannot be controlled, curricula should be so designed as to require the least homework, if any.

(k) Health conditions
Health (physical and mental) is a decisive factor directly affecting school attendance, the ability to concentrate in the classroom, assimilation capacity, etc., thus influencing overall performance. The educational system's evaluator must, therefore, investigate the health conditions of the pupils, especially in situations where endemic illnesses exist. Family income, social security schemes, nutrition habits, etc.
are factors which should also be examined.

B. Outcome
The outcome of the pupil's system, corresponding to the "final outcome" of the teaching/learning process, is the additional knowledge and other characteristics and/or abilities acquired, which taken together constitute the pupil's additional "transformation". This outcome can be, and is, evaluated on a daily, weekly, monthly and/or annual basis. Depending on the curriculum prescriptions, the particular educational level and/or the preferences of teachers or other evaluators, the pupils are regularly evaluated formally, i.e. the evaluation results are formally registered for promotion purposes. The usual evaluating technique is based on tests (oral and/or written, standardized or not), with the pupils receiving "marks" which indicate whether their outcome (i.e., degree of transformation) is as expected.

Two problems seem to arise: the first involves the purpose of such an evaluation and the second the effectiveness of the evaluation technique used. In theory, the purpose of any evaluation is to provide the opportunity to correct output. In educational systems where such an evaluation is performed by the teacher himself it serves two aims: to increase the pupil's study effort and finally to classify him according to performance, purely for selection purposes. In so doing it is evident that the sole accountable factor in a pupil's performance is the pupil himself, insofar as his personal effort (i.e. time, motivation, etc.) is concerned. All other factors (inputs, and the teacher's outcome) are considered satisfactory. If, however, it is true that a pupil's final outcome is a function of all the inputs discussed above, it is obvious that in correcting a supposedly bad result a search should be made for all responsible factors, including, of course, the performance of the teacher himself.
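The point that one observed final outcome can arise from different combinations of teacher and pupil effort, so that grading the pupil alone cannot locate the responsible factor, can be sketched as follows (an illustrative toy model; the equal weighting and the 0-10 scale are assumptions, not from the source):

```python
# Toy model (hypothetical): the final outcome as a joint product of the two
# intermediate outcomes, teacher effort and pupil effort, each on a 0-10
# scale. The equal weighting is an arbitrary illustration.

def final_outcome(teacher_effort, pupil_effort):
    """Equal-weight average of the two intermediate outcomes."""
    return (teacher_effort + pupil_effort) / 2

# "Very good" teaching with "mediocre" pupil effort ...
a = final_outcome(teacher_effort=9, pupil_effort=5)
# ... and "mediocre" teaching with "very good" pupil effort
b = final_outcome(teacher_effort=5, pupil_effort=9)
# ... yield the same observed final outcome: the mark alone cannot say
# which factor was responsible.
assert a == b == 7.0
```

This is why the text argues that correcting a bad result requires searching over all inputs, including the teacher's performance, rather than holding the pupil solely accountable.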
In standardized situations, however, very little, if anything, can be done about the standardized inputs; and as for the teacher's evaluation, the evaluative criteria used are not the unsuccessful pupils but the successful ones. The effectiveness of the evaluative technique (including the grading system) is also an extremely important issue, because of continual complaints by pupils that their failure in evaluation tests does not necessarily reflect their knowledge, since such factors as response speed, memorizing, etc., which may not be aims of the curricula, are incorporated in the evaluation tests and thus reduce their validity.

It is evident from the above, and this will be made clear below when discussing the educational system's evaluation (or control) sub-system, that a micro-evaluation of an educational system's performance has to consider carefully the purpose and effectiveness of the forms of evaluation used by schools to screen out pupils. All too often, educational systems are evaluated solely on the basis of the number of repeaters and drop-outs. These numbers, however, may relate to the evaluation techniques used by schools, and to the purpose of the evaluation.

A micro-educational evaluation, if it is to be comprehensive, has to take into account all the inputs, constraints and intermediate outcomes of the teaching and learning processes. It was made clear above that educational evaluation which is limited only to pupil achievement does not reveal what is going on in the system, nor does it suggest where the faults for a bad outcome are located. In addition, the results may not reflect pupils' actual learning, and may thus be misleading. In order, therefore, to facilitate such a comprehensive evaluation, an effort was made to prepare a questionnaire which could be used as a guide by the evaluation designer.
This questionnaire, which is given in Annex A, should be considered as a first approximation towards such an objective and could be complemented according to the specific evaluation goals.

The evaluation (or control) sub-system

As was mentioned earlier, the educational system, like all social human systems, is endowed with its own evaluation sub-system. The function of this sub-system, which at the level of the school is often called the "inspectorate", is to continuously assess the system's performance (i.e. the teaching/learning sub-system), checking it against the system's goals. However, by limiting the educational system's evaluation or control sub-system to the inspectorate one loses a great part of what really constitutes this sub-system. In fact, the evaluation process starts in the classroom with the two basic elements of the transformation sub-system, namely the teacher and the pupil, and ends out in society at large. Diagrammatically, it can be presented in terms of hierarchical levels of control as shown in Diagram 8.

[Diagram 8. School evaluation hierarchy: the teacher-student system, with its inputs and outcome, is controlled by the schoolmaster (the schoolmaster-teacher-student system); that system in turn is controlled by the inspector; and the community (parents) acts as the outermost control element (the community-school system).]

Basic control elements are as follows:
At the lower level (classroom): teacher and pupil, the one evaluating the other, as was shown in Diagram 7.
At the school level: the headmaster evaluates the teacher and pupil both separately and together as one system.
At the inspectorate level: the inspector controls the school as one system (i.e. the headmaster), as well as the teacher/pupil system separately.
At the community level: parents and/or the community at large exercise control over the performance of all three systems, namely the classroom, the school and the inspectorate, and/or any additional formal educational authority which may be above the inspector. Parents' or community evaluation is usually not formal, in the sense that they are not formally assigned this function. The impact of their evaluation, nevertheless, is often very important for educational policy decisions pertaining to many aspects of the educational system, including the curricula.

Having described the main components of the educational system's evaluation sub-system, we can now proceed to design its performance evaluation, following the same approach as before.

A. Macro-evaluation

Macro-evaluation of the performance of the educational system's evaluation sub-system should be designed along the lines followed previously for evaluating the teaching/learning sub-system. The basic characteristics of the evaluation sub-system, subject to assessment, are the following:

The goals of the sub-system: the general purpose of an evaluation sub-system, as was said above, is to continuously assess the performance of the various parts of the educational system against its policy objectives and goals. It is evident, therefore, that if the evaluation sub-system does not operate properly, the probability is high that the final outcome of the teaching/learning sub-system will deviate from the desired one. In attaining its purpose, the evaluation sub-system has to achieve certain goals.
Such goals may, for example, be to make regular school inspections, to give short-term courses and seminars to local teachers, to inform the policy authorities regularly on the performance of the schools and their needs for additional inputs (such as personnel, aids, etc.), to make recommendations for rewarding the good teachers and schoolmasters or sanctioning those responsible for the malfunctioning of the schools, to indicate problems related to the curricula, etc.

It is evident, therefore, that at least in theory a well-functioning evaluation sub-system will bring about all necessary corrective measures for attaining the objectives of the educational system. Evaluation sub-systems, however, do not always pursue all the above goals. Often they are preoccupied with the evaluation of the teacher's performance for the teacher's career purposes. In so doing, they neglect their real purpose. They thus provide a good example of the disorientation of an evaluation sub-system organized, in most cases, bureaucratically. An attempt to sensitize the evaluation sub-system is undertaken by the community at large, which forwards its complaints directly to the policy authorities. It will be necessary, therefore, when considering an outside macro-evaluation of the performance of the evaluation sub-system, to include the community as well.

After having identified the goals of the entire evaluation sub-system, it will be necessary to identify its structural components. In centralized educational systems the evaluation function is performed by a separate service, the inspectorate, structured in a way that corresponds to the structure of the school (transformation) sub-system. In less centralized systems, however, these structures may not be evident and the functions of one not immediately identifiable.
A macro-evaluation would have to assess the following:
- the nature of the relationship between the school and the inspectorate service;
- the physical distance separating it from the schools;
- the way the inspection service is staffed in terms of number of inspectors and their qualifications;
- the inspection methods used: school visits, annual overall performance of schools, investigation of community complaints, group discussions by teachers, inspector/parent interaction;
- the communication and reporting system between the inspectorate and the educational authority;
- the authority (autonomy) of the local inspectorate for immediate corrective action;
- community complaints about not being heard on school matters;
- long-standing school problems;
- teachers' complaints about not being inspected;
- other types of complaints.

Information on the above can easily be collected by means of a direct study of the inspectorate and an opinion survey (or interviews) of those involved in and affected by the inspectorate.

B. Micro-evaluation

Micro-evaluation has to look inside the classroom and the school, since it is at this level that educational evaluation has to be performed. In fact, in entirely autonomous school units (e.g. private schools), which are not administrative units of a formal educational system, the inspectorate as a formal authority does not exist. Here, the evaluation is performed internally by the teacher, the schoolmaster and, to some extent, the pupil, and externally by the pupil's parents (informed by the pupil) and the school's parents' committee (if it exists).

It seems, however, that the most effective evaluation is the one which takes place in the classroom, involving both the teacher and the pupil. The discussion below, therefore, will follow Diagram 7, where teacher and pupil are shown as elements both of the teaching/learning sub-system and of the evaluation (control) sub-system.
THE TEACHER AS AN ELEMENT OF THE EVALUATION SUB-SYSTEM

As an element of the evaluation sub-system, the teacher has to evaluate both himself and the pupil.

A. Inputs

In performing his evaluation function, the teacher uses the following inputs.

(a) Feedback information pertaining to his teaching effort
This information is difficult to identify, even by means of an opinion-searching effort, for it depends on the teacher's perception of himself, his frankness and his professional integrity. It could be said that this information relates to how the teacher feels after the end of his teaching effort. Is he satisfied with himself? Does he reproach himself for not having adequately responded to his pupils' questions? Was he too severe, or did he behave in general in an inappropriate manner? Does he plan to cover the subject better next time?

Answers to the above questions enable the teacher to become aware of his own performance. In some cases, however, such as experimental schools, the teacher may be formally asked to explain at the end of a session, to an audience other than his pupils, why he taught in the way he did, and to present possible alternative approaches. In this way the teacher himself produces all the necessary information for undertaking a self-evaluation. The teacher faces a similar situation when he is evaluated by an inspector (or his schoolmaster), with the session ending in a discussion between the inspector and the teacher. It seems, therefore, that the most pertinent question regarding this input has to do with whether the school makes provision for teachers to present their viewpoints as to the teaching technique they use and/or are forced to use, and to have them discussed in a professional meeting.

(b) Feedback information regarding his pupils' learning results
The teacher is continuously receiving information on his pupils' learning results.
Depending on the size of his class and the curriculum, he may not have many opportunities for evaluating his pupils. In such cases he relies upon information received through formal oral and/or written tests. This suggests that there will be cases when the teacher may not be as informed as he should be about his pupils' performance. Outside evaluation, therefore, should raise such questions as the following:
- How often do teachers evaluate their pupils?
- Has the school set specific evaluation rules? Are they considered satisfactory?
- Do teachers complain of having too many pupils in their classes and, therefore, that they cannot properly examine (evaluate) them?
- Do pupils complain of not being examined often enough by their teachers and, therefore, that their marks do not adequately reflect their knowledge?
- Do teachers grade their pupils without formal evaluation tests?
- Do parents complain about the evaluation system in use at the school?
- Do teachers, in evaluating their pupils, consider their relative rather than their absolute effort? In other words, do they demand more from pupils known to be good than from others?

(c) The evaluation form in use
Standardized educational systems use standardized evaluation methods which the teacher has to follow. This means that the information the teacher needs for performing his evaluation is constrained in quality and quantity by the evaluation methodology employed. In such cases the teacher cannot modify the evaluation results obtained through the formal evaluation by means of the overall personal impression he has of a pupil. In other words, the system, in order to increase objectivity, reduces the real value of the teacher as an evaluator.

It will, therefore, be necessary for the external evaluator to know the evaluation technique in practice in a school or an educational system under external evaluation.
It is usually claimed that the evaluation technique is related to the curriculum. This, however, seems to have only relative value and significance, since the evaluation technique should also be related to the child's psychology, age, family circumstances, etc. In any event, it is apparent that the teacher needs enough flexibility in the use of a particular evaluation technique.

(d) Other inputs
Additional inputs entering into the evaluation function of the teacher are those more related to the teacher's personality, such as:
- his professional integrity;
- his affection and expectations for his pupils;
- his formal knowledge and experience in evaluating;
- his personal goals and interests (especially those which may hinder him in devoting the necessary time and effort to evaluating his pupils).

It is apparent, however, that an outside evaluation cannot do much about them. An outside evaluator could, however, enquire as to whether teachers formally learn to evaluate pupils in their respective training institutions or whether it is considered as something which requires experience and common sense rather than formal training and special knowledge.

B. Outcome

The outcome of the teacher's evaluation effort takes the form of information regarding the adequacy of his teaching effort and the pupil's learning result. It is fed back, therefore, to himself in order to increase, decrease, or modify his teaching effort and to motivate his pupils accordingly. The information directed towards the pupil should certainly indicate what corrective measures should be taken by the pupils themselves. Depending now on the degree of autonomy he enjoys in the system, the teacher must either himself bring about the necessary changes in the inputs he uses to perform his teaching, i.e.
teaching technique, textbooks, time load, etc., or provide those responsible in the system for such changes with the appropriate information.

Very often, however, the teacher's evaluation results never reach those responsible. This is an important issue to be investigated by outside evaluators, because of the inherent risk of using evaluation results solely for screening purposes and not for improving the pupil's performance. The evaluator, therefore, has to see whether the educational system under evaluation encourages teachers to make suggestions for curricula changes and whether teachers do make such suggestions.

THE PUPIL AS AN ELEMENT OF THE EVALUATION SUB-SYSTEM

The pupil is not a passive receptor of information. He has all the potentialities¹ as an autonomous system to evaluate all the information he is receiving, including that pertaining to his own performance.

A. Inputs

The pupil's evaluation function is based on the following inputs:

(a) Feedback information on his own performance
The pupil receives daily information about his performance, mainly from his teacher, but also from his peers and his parents (to the extent they are involved in his learning). This information adds to his self-awareness of whether he is performing the way he should. At some fixed interval the pupil also receives his grades, which are the formal measure of his performance. His grades, as measures of absolute and relative performance (in relation to his classmates), supplement the daily information. The process of pupil self-awareness is a complex one and, therefore, difficult to investigate. Very often the pupil has to reach a compromise between conflicting pieces of information. For example, the impression a pupil has of himself regarding daily performance may differ (negatively or positively) from the image his formal grades suggest. Such a feeling may be reinforced by parental behaviour.
In such cases the pupil develops a pessimistic or very optimistic view of himself which may not be consistent with his real capabilities.

An additional, often serious, phenomenon arises when the educational system places great importance on formal grades used for promotion purposes. In such cases the pupil's effort to secure high grades may not correspond to the type of effort needed for real learning, and vice versa: pupils who have learned may not get high marks in evaluation tests. This is, of course, associated with the evaluation technique practised in schools, but, nevertheless, it is a serious matter which needs to be investigated by the evaluator. Several educational systems have abolished or drastically modified the traditional grading system, especially at the lower educational levels. The following are some types of questions which could help the evaluator in his job:
- Do pupils think seriously of their marks?
- Do they think their marks express their achievement accurately?
- Do they complain of too severe grading?
- Do parents adhere to school evaluation marks or do they use their own criteria, which may indicate a different performance of their children?
- Do pupils take their peers' opinion seriously? Is it in accordance with the teacher's opinion (marks)?

1. It is evident that these potentialities are constrained only by his mental development and not by institutional factors. Institutional factors may have a constraining effect upon the communication of the pupil's evaluation results.

(b) Feedback information on the teacher's teaching effort
This information enables the pupils to evaluate their teachers. It is a subjective evaluation which helps the pupil to create an image of the teacher's qualities and interest in his pupils. The following are some of the factors contributing to this image:
- the teacher's overall reputation in school (i.e.
whether he is considered a good, just, objective teacher, etc.);
- the amount and type of homework he assigns;
- his knowledge of the subject (judged in relation to the answers he gives to pupils' questions, his lecturing ability, etc.);
- his frankness (whether he admits his errors or something he does not know);
- his interest in the pupils' progress;
- his relative objectivity vis-à-vis all pupils;
- the final marks he gives to a pupil.

The evaluator of the sub-system's performance should attempt to raise questions as to the above because of the importance most of these factors have in motivating the pupil to increase his effort. On the other hand, pupils seem to be a relatively reliable source of information pertaining to the teacher's teaching effort and ability, and they should, therefore, be taken seriously into account.

(c) Other inputs
The following are some additional inputs entering into the pupil's evaluation process:
- his age (as an indicator of his mental development);
- his interest in his studies;
- pressure for improving himself exercised on him by his family, peers, etc.;
- the teacher's support for pupil self-evaluation (e.g., many teachers ask their pupils to correct their own tests);
- class discussion on the performance of the class as a whole;
- eventual sanctions for not performing well.

B. Outcome

The outcome of the pupil's evaluation effort is in the form of information pertaining to the teacher's teaching effort and his own personal achievement. Information on the pupil's achievement may include indications for specific corrective action; e.g. read more, redistribute time in favour of this or that subject, etc. If the pupil cannot develop a particular strategy for correcting himself, he may ask the teacher to suggest one. Information regarding the performance of the teacher may never be directly communicated.
Indirect communication may take place by providing the relevant information to the parents, who in turn pass it on to the headmaster and/or to other teachers, etc. The degree to which such information will reach either the evaluated teacher, his colleagues and/or the headmaster will depend on the school's tolerance of such behaviour. The following are some pertinent questions for investigating this problem:
- Are pupils allowed to express views as to the teacher's teaching effort, the evaluation technique he is using, etc.?
- Are there any examples of pupils being directly or indirectly punished for complaining against their teachers?
- Does the school encourage parents to express their feelings as to the teacher's performance?
- Are there examples of teachers accused by pupils (or parents) of bad performance who have been disciplined by the school?
- Can the pupils strike?
- Do parents threaten to have their children transferred to another school if the school does not change a particular teacher (this evidently applies to private schools)?

THE HEADMASTER AS AN ELEMENT OF THE EVALUATION SUB-SYSTEM

To complete the micro-evaluation of the educational system's evaluation sub-system, it would be necessary to include the headmaster in the discussion. The most important role of the headmaster is performed not so much through direct evaluation of the two previous elements (the teacher and the pupil), but through the degree of democratic behaviour he tolerates, which pre-conditions the communication of evaluative information emanating from the pupils and their parents. In addition, even in centralized educational systems the school director has enough autonomy to alter some of the input conditions affecting the performance of both teachers and pupils. He also has increased sanctioning authority, which he can use for motivating teachers and pupils alike.
The system's evaluator, therefore, has to investigate carefully the headmaster's administrative authority and his attitude pertaining to the school's evaluation by parents or pupils.

A. Inputs

The following are some of the inputs the headmaster uses in performing his evaluation function:
- feedback information regarding the overall result of a class and of each pupil individually (individual records);
- feedback information as to the satisfaction of pupils regarding their teacher's performance (such information is usually in the form of parents' or pupils' complaints);
- other inputs: his formally prescribed authority; his personal attributes (values, attitudes, knowledge, experience, image, etc.); his personal goals and interests.

B. Outcome

The outcome of the headmaster's evaluation process takes the form of information pertaining to the overall performance of his school. This information, which is complemented when necessary with information regarding appropriate corrective action, is fed back to the teachers and pupils (parents) inside the school and to his superiors outside the school.

The information which is fed back to the teachers should, in theory, include questions as to the reasons why pupils performed less well than expected. This implies that for the headmaster a pupil's low performance is not independent of the teacher's teaching effort. In actuality, however, things seem to happen differently. The headmaster very seldom, if at all, holds his staff responsible for the pupils' low performance. The entire blame is placed on the pupil himself, and it is from the pupil that corrective action is demanded; the teacher's behaviour thus remains always the same. It is evident that in such conditions the role of evaluation is completely distorted.
The information fed back to the superior administrative echelons informs them of the school's shortcomings, requesting at the same time additional inputs for improving performance.

The outside evaluator has, therefore, to investigate the headmaster's evaluation effort and to whom he communicates the relevant information. The following are some questions to help the evaluator in his assessment effort:
- Does the headmaster discuss the performance of the pupils with the teachers?
- Are there formal procedures for doing so?
- Does the headmaster inform the parents and discuss matters regularly with the parents' school committee?
- Does the headmaster listen to pupils' and parents' complaints?
- What is the headmaster's formal authority over the school's inputs and over sanctions to teachers and pupils?
- Do teachers complain of the headmaster being too severe with them and/or the pupils?
- What do parents think of the headmaster's democratic behaviour?
- Do parents/pupils think the headmaster tolerates criticism?
- Are there examples of pupils being punished for being too critical?
- What are the ways and means for reporting the school's shortcomings to the superior administrative echelons?
- On what aspects can he report?

The above information can easily be collected by studying the appropriate administrative rules and by the interviewing of appropriate persons by community representatives.

Evaluation design of experimental programmes

The discussion in the preceding chapter was focused on a possible analysis of the educational system with a view to undertaking both a macro and a micro performance evaluation. It was argued that such an evaluation is necessary for identifying projects which intend to ameliorate the system's performance.
To follow the logic of the conceptual framework presented earlier, it is assumed that the result of the performance evaluation indicated, among other things, that the middle vocational education system was defective. More precisely, it was found that graduates' qualifications did not meet the economic system's needs. To correct the situation, it was decided that the curriculum for middle-level vocational schools had to change. The World Bank was asked to provide the financial means for implementing a new curriculum, and the World Bank, in turn, asked Unesco to administer the project. The following were the basic terms of reference of the contract: design the new curriculum; implement it on an experimental basis; evaluate its results; bring about, if necessary, changes according to the evaluation results; and implement the curriculum in its final form on a national basis.

It is evident that the project has all the prerequisites to be implemented at first on an experimental basis; i.e. it is innovative, its results are of a permanent nature, and the cost involved in its large-scale implementation is high.

In our particular hypothetical example the terms of reference include the evaluation of the experiment, which is not always the case. Such an evaluation can be performed in at least two ways: undertake the evaluation after the experiment is completed, or evaluate the experiment throughout the implementation phase. It was argued earlier that the most efficient way would be ongoing evaluation of the experiment, which would necessitate building specific evaluation guidelines into the project's experimental design. In practice, this means that project design components also have to be evaluated, together with the actual results at each of the project stages. In the case of the curriculum concerned, it is assumed that it extends over two years of schooling, i.e. there will be a need to evaluate the results of the project at the end of both years.
The curriculum design stage

A meaningful approach to designing the curriculum might involve the following steps:¹
Step 1: Diagnosis of needs
Step 2: Formulation of objectives
Step 3: Selection of content
Step 4: Organization of content
Step 5: Selection of learning experiences
Step 6: Organization of learning experiences
Step 7: Determination of what to evaluate and of the ways and means of doing it.

Accepting the above sequence of steps as our basis for discussion, it will be necessary, for our purposes, to complement them with the following additional steps:
Step 8: Determination of teaching technique
Step 9: Determination of teachers' qualifications
Step 10: Estimation of the number of teachers needed (for the experiment and for its implementation on a national scale)
Step 11: Estimation of new types of teachers, if any
Step 12: Provisions for securing the new type of teachers for the experiment
Step 13: Provisions for securing all new teachers for its broad implementation
Step 14: Determination of teaching aids (technical designs, tools, machinery, etc.)
Step 15: Provision for acquiring all necessary aids for the experiment and its broad implementation
Step 16: Estimation of the total cost of the experiment
Step 17: Estimation of the total cost of its broad implementation.

It is evident that the additional steps have to be included in order that all provisions are taken to ensure the success of the experiment. As already argued, any mistake at the project design stage will inevitably affect the experiment's results. If, therefore, the project's evaluation does not cover all possible aspects of the project, the final outcome evaluation, which may indicate non-achievement of the project's goals, will fail to indicate the reasons for such failures. The curriculum design will certainly have to provide further details on each of the above steps, which will be omitted here.
For illustrative purposes, however, Table 3 presents a first approximate breakdown of the above steps, in order to indicate areas where possible evaluation will be necessary. The table is structured in two parts. The first part presents the curriculum design steps and the second part the corresponding evaluation design of the project's experimentation.

1. H. Taba, op. cit., p. 12.

In the curriculum design, some of the steps refer to the final implementation of the project. It seems prudent to have, even on a preliminary basis, indications of the feasibility of implementing the curriculum on a broad scale. Evidently, this is necessary in order to avoid experimenting on something which, in all likelihood, will never be implemented on a national scale, unless one modifies accordingly the objectives of the project.

The curriculum experimentation stage

The curriculum experimentation stage will certainly include the preparation of all necessary materials and the actual teaching of the new curriculum. The preparation of the material may involve an informal evaluation, in the sense that these materials should be given to several experts for preliminary comments. The materials will be corrected according to the experts' suggestions and will be put into final form for their use in the classroom.

Two crucial problems arise in this type of formative evaluation, which have to be resolved at an early stage. The first refers to who will be teaching the new curriculum and who will be doing the evaluation. The second relates to the means and ways of evaluation. There is no clear-cut answer to either of the two problems. As to the first, the following are possible: (a) the teaching and the evaluation to be done by the same person; (b) the teaching and the evaluation to be done by two different persons.
It is apparent that reference is made here to evaluation for the purposes of the experiment and not to the evaluation foreseen in the curriculum design (Step 7), which may be entirely different. The one foreseen in the curriculum relates more to the pupil's performance. The other, for experimental purposes, relates to all aspects of the curriculum, including the teaching method and the teacher's behaviour itself.

For the experiment's purposes it might have been desirable to use experienced teachers who would themselves evaluate the curriculum. The risk involved in this, however, stems from the fact that in reality, i.e. in the project's general implementation, the curriculum has to be served by ordinary teachers, sometimes even entirely inexperienced ones. To counterbalance this effect, teaching could be done by ordinary teachers and be evaluated by "outsiders".

As to the second problem, the evaluation could be performed by means of: pupils' regular achievement tests; direct observation without the evaluator being physically present in the classroom (e.g. the various audiovisual means usually used for evaluation purposes), which should not, however, be known to the pupil; and, finally, continuous discussions with the teachers. At the end of each stage (i.e. each year) there will be an additional evaluation.

Summative evaluation stage

Up to this point, evaluation was based on the performance of the pupil who
a c C /3 C C J oth erts 3 W ck them o X I U 3 Mar surv Ask peo] ssi- mpor- fied ck po y of i quali hers O S M » S l-S s e . o a -9 T H W > 5 o> Exp sugg a - o m -9 M o G O a. H t/3 o C i o inati iterm 5- • « .S g tec ichin lH 8 "3 M O U . iîiï! ¡ill! •s c S o S -a 'S § S X cd E U 0 3 « Í « X ! — u - ? x fc .S O & o c 5> (u 'S o fix a s c 5 S g 3 Ë c • .22 « •S a £ ^ o Q > » Ç J .M Ü ^ J H a e S *c /ï CD Î_ > » + J ; sure ol 'ailabilit id incen 3? CD fectiven +3 O ta ^ D • * - • D a x OJ CD > « 5 Q . CD C D " c » 3 •2 3 83 o > ,^ 3 2 1.1" a 4> ¿ i ¡S G 53 M G .O *0 O ü w e k tional dget 3 cd 3 o ary L ia cd ti 'S -O estim ital co )mpar¡ i thbu o* ç O "53 1 C XI •o scuss thbu as c*--ble it feasi u ë e 8'3 .S 8 tl ii .2 i o u C , ri w o E -a ''S o c c ü E .-I •o o o „ a . *J C D .» s -a ¡s 3 oo 3 O .S cd •S? 2 II B u 3 g u « • a M u *2 bove cd on i-J O atesf g sti i> -D num o cd O O S -M £ Jä g S S 5? fill E t! * J 60 e •p c .CCOI onte; b c f • S ' B S » ! O cru n a * S 2 S E C L, cd C c S .Í3 3 D . .S •I! J-1 a i . U s u i 1 °s I il w lit 8 V I 106Examples of project evaluation design attended the experimental classes, assessed against the norms set by the evalua- tors. Usually, however, at the end of the experiment, i.e. after two years (or even a year), there is a summative type of evaluation pertaining to the c o m p a - rative assessment of pupils following the experimental course and others w h o followed the old one. Such an evaluation implies that pupils from the two groups take c o m m o n tests to determine the performance differences between those exposed to the new and the old curricula. In such an evaluation technical problems m a y arise, such as: - the design of the tests; - the sampling technique; - the statistical technique for inferring the statistical significance of any difference. All these are aspects dealt with in social research methodology, and are there- fore beyond the purpose and scope of the present work. 
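Though the monograph leaves the statistical machinery to social research methodology, the comparative assessment just described can be sketched concretely. The following illustrative sketch is not part of the original text: it computes Welch's t statistic (one standard technique for testing the significance of a mean difference between two independent groups) together with Cohen's d, a standardized effect size; the test scores are invented.

```python
import math

def welch_t(sample_a, sample_b):
    """Welch's t statistic for two independent samples (unequal variances)."""
    na, nb = len(sample_a), len(sample_b)
    ma, mb = sum(sample_a) / na, sum(sample_b) / nb
    va = sum((x - ma) ** 2 for x in sample_a) / (na - 1)
    vb = sum((x - mb) ** 2 for x in sample_b) / (nb - 1)
    return (ma - mb) / math.sqrt(va / na + vb / nb)

def cohens_d(sample_a, sample_b):
    """Cohen's d: the standardized mean difference, a common effect-size measure."""
    na, nb = len(sample_a), len(sample_b)
    ma, mb = sum(sample_a) / na, sum(sample_b) / nb
    va = sum((x - ma) ** 2 for x in sample_a) / (na - 1)
    vb = sum((x - mb) ** 2 for x in sample_b) / (nb - 1)
    pooled = math.sqrt(((na - 1) * va + (nb - 1) * vb) / (na + nb - 2))
    return (ma - mb) / pooled

# Hypothetical common-test scores for pupils on the new and old curricula.
new_curriculum = [72, 75, 78, 80, 74, 77, 79, 76]
old_curriculum = [70, 73, 76, 78, 72, 75, 77, 74]
t = welch_t(new_curriculum, old_curriculum)
d = cohens_d(new_curriculum, old_curriculum)
```

The t statistic answers only whether a difference is statistically detectable; the effect size speaks to whether the difference is large enough to matter educationally, which is the distinction the monograph insists on.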
On the other hand, the use of a particular research and/or statistical technique will certainly depend on the nature of the particular project. What is of interest here is to warn the evaluator and the administrator to look beyond statistical significance in evaluation research. In other words, statistically significant or not, differences observed between two target groups may not necessarily reveal the nature and importance of the change a project intended to bring about. In our particular example, for instance, it may be more fruitful to have the correctness-of-output test performed on the job, in an industry, rather than against a control group. For even if pupils exposed to the new curriculum differ from those exposed to the old one (i.e. they know more and do things better), this difference may still not make the graduates perform the required job so differently as to justify the energy and time involved in experimenting with and implementing the new curriculum. This is a common complaint of policy-makers, who favour educational (performance) significance and not just statistical significance.

Summary and conclusions

The present monograph on project evaluation was primarily concerned with the managerial function of evaluation. Although acknowledged, this function is very often neglected. Thus, project evaluation tends to become a means to an end and jeopardizes, in the long run, its potential for effective programme management and control. The blame for this frequent misuse of the project evaluation effort should be shared equally by policy-makers, programme managers and evaluators.
This monograph sees evaluation as the process through which a decision-maker (at the various hierarchical echelons of a social system) is informed of the development of a project's experimentation or implementation, of its final results (impact) and of its performance when operating, with a view to bringing about appropriate corrective measures. To operate successfully, such a process has to meet the following conditions: meaningful information should be collected with reference to the project's objectives and its final results; and this information should be fed back to the decision-maker and the project manager early enough, and in a comprehensible form, to allow them the time, opportunity and power to carry out the appropriate corrective action.

Meaningful information has to be defined for each project separately by those responsible for project management. It was argued, however, that there may be several agencies responsible for and interested in a project's evaluation. To avoid wasting time and money, the project designer will have to question all those interested in the project's objectives and evaluation, who in turn will indicate the type of information to be collected. The evaluation function will not be effective if the recipient of the evaluation information does not act upon it; in such a case the evaluation effort is entirely useless and should not be undertaken.

To increase the effectiveness of evaluation it was suggested that, based on a conceptual framework, it should be integrated into project planning, implementation and operation. Such a conceptual framework takes into consideration simultaneously the various levels of management within organizations such as Unesco and government departments.
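As an illustration (not from the monograph itself; all indicator names, figures and the tolerance threshold are hypothetical), the feedback conditions above reduce to a simple comparison of the information collected against the project's objectives:

```python
def evaluation_feedback(objectives, observed, tolerance=0.10):
    """Compare observed indicator values against a project's objectives and
    report the indicators needing corrective action.

    objectives: dict mapping indicator name -> target value
    observed:   dict mapping indicator name -> measured value
    tolerance:  allowed relative shortfall before action is triggered
    """
    actions = []
    for indicator, target in objectives.items():
        value = observed.get(indicator)
        if value is None:
            # Meaningful information was never collected: the loop is broken.
            actions.append((indicator, "no data collected"))
        elif value < target * (1 - tolerance):
            actions.append((indicator, f"shortfall: {value} vs target {target}"))
    return actions

# Hypothetical objectives and mid-project measurements.
targets  = {"pupils trained": 500, "teachers retrained": 40}
measured = {"pupils trained": 430}
for indicator, note in evaluation_feedback(targets, measured):
    print(indicator, "->", note)
```

The sketch makes the monograph's point mechanical: feedback is only useful if the information is collected against the stated objectives and reaches someone early enough, and in a form plain enough, to act on it.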
In order to minimize the inherent risk involved in project evaluation, particularly of large and complex social action programmes, it was argued that it is not enough to incorporate some vague evaluation clauses into the project design. It would be necessary to design the project's evaluation at the early stage of project implementation design and to link the two together. Where this is not feasible, it was suggested that at least specific evaluation guidelines be incorporated into the project's implementation design.

The preparation of a project evaluation design or evaluation guidelines presupposes, of course, that the project is evaluable. This, however, is not always the case. For this reason it was argued that the evaluability of a project, together with the implied cost, be carefully examined in advance.

The cost of evaluation is certainly a serious constraint when deciding whether projects should be evaluated or not and what methodology and techniques should be used. This aspect is not usually considered in advance, and very often the evaluation methodology and technique employed depend on the available money rather than on the evaluation purpose. This unfortunate situation, due mainly to the insertion of evaluation clauses in project designs without specifying the evaluation effort in advance and budgeting it accordingly, should never be permitted in serious programme management efforts.

The monograph also accepted the continuous nature of the evaluation function, although it admitted that for practical reasons project evaluation has to be performed in discrete but sequential stages. For this reason, the monograph covered all possible project evaluation stages.
It started with system performance evaluation, meant to discover operational defects and to suggest corrective measures, usually in the form of new projects. It went on to project selection, assuming that, among possible alternative projects and within a set of constraints, there is one project which is the most efficient. It covered in turn the experimentation and implementation stages, explicitly distinguishing between the formative type of evaluation, which occurs during the experimentation stage, and the monitoring type of evaluation usually employed for managing project implementation. Lastly, it dealt with the impact evaluation necessary for an overall assessment of a project's final results (outcome) or of the degree of success of project operation.

During the discussion an effort was made to point out the project management implications of each of the various types of evaluation. Here it was argued that the project selection stage is perhaps the most critical one, and it was suggested that a thorough planning effort be undertaken at this stage to increase the probability of project success. Certain types of projects, with at least one of the following characteristics (innovative nature, long-lasting results, or high implementation costs), should be tried out on a pilot basis before being broadly implemented. It is evident that errors committed during these phases will irrevocably affect the effectiveness of the project. The evaluation of the project's final results (i.e. after its final implementation) obviously has reduced importance for purely project management purposes. Information collected from such an evaluation effort is often used by policy-makers (a) for organizational managerial control, (b) for detecting unintended results, and (c) for assessing the effectiveness of the project with a view to supplementing the effort should the results of the project be found inadequate.
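The project-selection reasoning above can be illustrated with a toy sketch. All projects, costs and benefit figures here are invented, and the greedy benefit/cost ranking is only a first approximation to choosing "the most efficient project within a set of constraints"; an exact choice would require integer programming.

```python
def select_projects(candidates, budget):
    """Greedy selection of projects by benefit/cost ratio under a budget cap.

    candidates: list of dicts with "name", "cost" and "benefit" keys
    budget:     total funds available
    """
    # Rank alternatives by efficiency (benefit per unit of cost).
    ranked = sorted(candidates, key=lambda p: p["benefit"] / p["cost"], reverse=True)
    chosen, remaining = [], budget
    for project in ranked:
        if project["cost"] <= remaining:
            chosen.append(project["name"])
            remaining -= project["cost"]
    return chosen

# Hypothetical alternatives competing for the same funds.
candidates = [
    {"name": "curriculum reform",   "cost": 60, "benefit": 150},
    {"name": "teacher retraining",  "cost": 30, "benefit": 90},
    {"name": "new textbooks",       "cost": 50, "benefit": 100},
]
print(select_projects(candidates, budget=100))
```

Even this crude sketch shows why the monograph calls project selection the most critical stage: an error here (a mis-estimated cost or benefit) propagates into everything that follows.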
The breadth of the subject forced the author to keep the discussion on a rather general level in order to make it meaningful and, it is hoped, useful to a broad audience, and more particularly to project planners and managers. There seems, therefore, to be a need for an ongoing effort to reduce the level of abstraction of the present monograph by means of specific papers, either on types of evaluation or on types of project. Such works, however, have to be written in such a way as to help project planners design the evaluation of their own projects. Many handbooks fail in this respect simply because their detailed procedural descriptions fail to provide information as to how to perform a particular operation. The present work has attempted to provide general guidelines on some of the important aspects of project evaluation design, but in order to keep its size and complexity within manageable limits it has very often had to omit explanatory details.

Bibliography

AHMANN, J.S.; GLOCK, M.D. (eds.). Measuring and evaluating educational achievement. Boston, Allyn and Bacon, Inc., 1971.
ABERT, J.G.; KAMRASS, M. (eds.). Social experiments and social program evaluation. Mass., Ballinger Publishing Co., 1974.
BAINBRIDGE, J.; SAPIRIE, S. Health project management: a manual of procedures for formulating and implementing health projects. Geneva, World Health Organization, 1974.
BAUER, R.A. (ed.). Social indicators. Cambridge, Mass., The M.I.T. Press, 1966 (paper).
COLEMAN, J.S. Equality of educational opportunity. Washington, D.C., U.S. Office of Education, 1966.
DEUTSCH, K. The nerves of government. N.Y., The Free Press, 1967 (paper).
FREEMAN, H.E. "The present status of evaluation research". Paris, Unesco, SS.76/WS/10, August 1976.
GOSTOWSKI, Z. (ed.). Toward a system of human resources indicators for less developed countries. A selection of papers prepared for a Unesco research project. Ossolineum, Poland.
GUTTENTAG, M.; STRUENING, E.L. (eds.). Handbook of evaluation research. California, Sage Publications, Inc., 1975.
HOLDEN, I.; McILROY, P.K. Network planning in management control systems. London, Hutchinson Educational Ltd., 1970.
HUSEN, T. (ed.), et al. International study of achievement in mathematics: a comparison of twelve countries. Volumes I and II. Uppsala, Sweden, Almqvist & Wiksells Boktryckeri AB, 1967.
INTERNATIONAL BANK FOR RECONSTRUCTION AND DEVELOPMENT; INTERNATIONAL DEVELOPMENT ASSOCIATION. Appraisal of an agricultural and rural training project in Bangladesh. Report No. 680b-BD, February 18, 1976.
INTERNATIONAL INSTITUTE FOR EDUCATIONAL PLANNING (UNESCO); INTERNATIONAL BANK FOR RECONSTRUCTION AND DEVELOPMENT. "Report of the African Regional Seminar on educational evaluation". Dar es Salaam, Tanzania, 7 April-2 May 1975.
INTERNATIONAL INSTITUTE FOR EDUCATIONAL PLANNING (UNESCO). "Methodology for the evaluation of educational attainments", a project of the IBRD and IIEP. Progress Report, IIEP/RP/15/1, 12th September 1973.
INTERNATIONAL INSTITUTE FOR EDUCATIONAL PLANNING (UNESCO). "Methodology for the evaluation of educational attainments", a project of the IBRD and IIEP. Phase I Report, IIEP/RP/15/2, 24th January 1974.
JANTSCH, E. Perspectives of planning. Proceedings of the OECD Working Symposium on Long-range Forecasting and Planning, Bellagio, Italy, 27th October-2nd November 1968. Paris, OECD, 1969.
LOCKHEED AIRCRAFT INTERNATIONAL INC. Systems analysis of Sudan transportation. Progress products A1, A4, A5 and A6, June 1966.
LYONS, G.M. "Evaluation research in Unesco: political and cultural dimensions" (prepared for the Unesco Symposium on "Evaluation methodology for social action programs and projects", Washington, D.C., September 20-24, 1976). Paris, Unesco.
MACLURE, S. Styles of curriculum development. Centre for Educational Research and Innovation (CERI). Paris, OECD, 1972.
McINTOSH, N.E. "Evaluation and research: aids to decision-making and innovation". OECD Third General Conference, Institutional Management in Higher Education, Paris, 13th-16th September 1976.
McLAUGHLIN, M.W. Evaluation and reform: the Elementary and Secondary Education Act of 1965, Title I. The Rand Corporation, January 1974.
OECD. Handbook on curriculum development. Centre for Educational Research and Innovation (CERI). Paris, 1975.
OECD. "The measurement of learning". Social Indicators Development Programme, Common Development Effort No. 2, Issues Paper, SME/SI/CDE2/76.21. Paris, 9th August 1976.
ROEMER, M.; STERN, J.J. The appraisal of development projects: a practical guide to project analysis with case studies and solutions. New York, Praeger Publishers Inc., 1975.
ROSSI, P.H.; WRIGHT, S.R. "Evaluation research: an assessment of current theory, practices and politics". Paris, Unesco, SS.76/WS/15, September 1976.
RUTMAN, L. "Planning project evaluations: a case study of a bilingual education project". Paris, Unesco, SS.76/WS/11, September 1976.
SCHNEIDER, H. National objectives and project appraisal in developing countries. Development Centre Studies. Paris, OECD, 1975.
STAKE, R.E. (ed.), et al. Case studies in the evaluation of educational programmes. Centre for Educational Research and Innovation (CERI). Paris, OECD.
STAKE, R.E. Evaluating educational programmes: the need and the response. Centre for Educational Research and Innovation (CERI). Paris, OECD, 1976.
STRUENING, E.L.; GUTTENTAG, M. (eds.). Handbook of evaluation research, Volume I. Sponsored by the Society for the Psychological Study of Social Issues. California, Sage Publications Inc., 1975.
TABA, H. Curriculum development: theory and practice; foundations, process, design, and strategy for planning both primary and secondary curricula. Harcourt, Brace & World, Inc., 1962 (International Edition).
TRAPPL, et al. (eds.). Progress in cybernetics and systems research, Vol. II. New York, John Wiley, 1975.
TYLER, R.; GAGNE, R.; SCRIVEN, M. AERA monograph series on curriculum evaluation, 1: Perspectives of curriculum evaluation. Chicago, Rand McNally and Co., 1967.
Unesco. The use of socio-economic indicators in development planning. Paris, 1976.
Unesco. "Guideline for project preparation mission". E/WS/312, May 1972.
UNITED STATES AGENCY FOR INTERNATIONAL DEVELOPMENT. Project evaluation guidelines, third edition. M.O. 1026.1 Supplement I, Office of Development Program Review and Evaluation, Washington, D.C., August 1974.
UNITED STATES AGENCY FOR INTERNATIONAL DEVELOPMENT. Evaluation handbook, second edition. M.O. 1026.1 Supplement II, Office of Program Evaluation, Washington, D.C., May 1974.
WALLER, J.D., et al. Monitoring for government agencies. An Urban Institute Paper, Washington, D.C., February 1976.
WHOLEY, J.S. "A methodology for planning and conducting project impact evaluations in Unesco fields". Paris, Unesco, SS.76/WS/12, September 1976.
WHOLEY, J.S. "Designs for evaluating the impact of educational television projects". Paris, Unesco, SS.76/WS/13, September 1976.
WILLIAMS, G. "Individual demand for education: case study: United Kingdom". Paris, OECD, SME/ET.76.21 (mimeo).

Annex A

Questionnaire for designing a micro-educational evaluation1

1. The design of the questionnaire follows the same sequence as the corresponding discussion in the text.
2. When appropriate, the questions should be put by subject. If the educational system's evaluator does not wish to undertake a very deep evaluation, he can decide accordingly what type of questions to use. If the system is administratively centralized, he can choose those questions which have to do mostly with systems and not necessarily with the teacher and/or the pupil, thanks to the standardized nature of most of the inputs.

A. The teacher as a transformation element2

1. Inputs

(a) Knowledge and experience
(I) On the subject:
- Is the teacher's knowledge and experience of the subject taught adequate?
- How is this checked (formally)?
- Are there any complaints from parents and/or pupils about the inadequacies of teachers?
- Is teacher retraining a prerequisite for promotion or for continuing in the job?
- If yes, what are the time intervals?
- Is the teacher asked to produce and publish theoretical work and/or the results of his experience?
- Are teachers asked to give public lectures on educational matters?

(II) Teaching techniques
- Are the teachers aware of possible teaching techniques? How is this checked formally?
- Do the teachers adapt their teaching techniques according to, say, age, subject, size of the class, etc.?
- Can they change the teaching technique if they so wish? (This is an aspect related to the teacher's autonomy to control his inputs. In less centralized systems teachers can freely change not only their inputs but also their goals. As the system becomes administratively centralized, the teacher's degree of autonomy decreases; it would, therefore, be desirable that the educational system's evaluator keep this question continuously in mind, raise it whenever applicable, and verify the degree of system autonomy.)
- Are the various teaching techniques formally taught in teachers' colleges or other institutions responsible for teacher education? (This question should also be raised when dealing with teachers' colleges, etc.)
- If not, how is the problem faced by the education system?

(III) Pupil psychology
- Are the teachers aware of the child psychology of the age-group they teach?
- Is this a subject which they have studied at teachers' college, or have they learnt it through experience?
- Are the complaints of pupils and/or parents about the use of inappropriate means of reward and punishment by the teacher heeded?
- What is the attitude of society at large (the community) to the use of reward and punishment?
- Do teachers follow society's attitudes on that matter rather than the theory?
- Are teachers authoritarian, democratic, etc., in their behaviour towards the students?
- Has the educational system issued a directive on this matter, or is the teacher free to behave the way he wishes?
- If the behaviour of the teacher is controlled, how is this done?

(IV) His pupils
- Does the teacher follow his pupils as they pass from grade to grade?
- Does the teacher know both the first and family names of his pupils?
- Does the teacher know the parents of his pupils?
- Does the teacher know the expectations of his pupils?
- Does the teacher hold informal meetings with his pupils?
- If yes, how often? If not, why not?
- What is the attitude of the community and of the educational system towards such informal gatherings?

(b) Aids
(I) Books
Textbooks. If the textbooks are selected by the educational system, we would wish to know the following:
- How are the textbooks written?
- Who decides the type of books to be used?
- Who evaluates them?
- Who approves them?
- How are they replaced?
- Are these textbooks imposed by the educational system? If yes, does the teacher recommend the use of additional textbooks? If not, how does the teacher choose the textbooks to be assigned?
- Does the teacher ask the pupils to take notes during his lectures?
- Does the teacher ask the pupils to keep to the assigned textbooks?
- Are the questions for formal and/or informal exams taken directly from the assigned textbooks?
- Are the exams evaluated with reference to the assigned textbooks?
- Does the teacher have to suggest and use textbooks other than those assigned by the educational system?

Auxiliary books
- Does the educational system allow the teacher to use exercises taken from sources other than the assigned textbooks?
- Do the teachers usually consult books other than the textbooks in order to prepare themselves?
- Do they use other books in their teaching effort?
- Do the teachers recommend their auxiliary books to their pupils?

(II) Educational technology
Audio-visual aids, including television
- Has the educational system placed any audio-visual aids at the teacher's disposal?
- Can teachers use these aids freely?
- Is the use of audio-visual aids compulsory or not?
- If not, do the teachers use them?
- Is television used in teaching?
- If yes, how? (a) as a substitute for teachers? (b) to supplement them?

(III) Other
- Has the teacher at his disposal: (a) a library? (b) a laboratory (where applicable)? (c) museums (where applicable)?
- Can the teacher make use of them freely: for himself? for his pupils?
- Are the library, laboratory, etc. well equipped qualitatively and quantitatively?
- Do teachers make use of them when they exist?
- Do the pupils make use of them?

(c) Curricula
(I) When the curricula are standardized:
- Are the teachers happy with the curricula?
- If dissatisfied, can they change them?
- Is there any control for evaluating the teacher's capability when curricula are changed?
- Is there any mechanism whereby the teachers may make an appeal if they are unhappy with new and/or old curricula?
- Do new teachers campaign for curriculum changes?
- If yes, is there any strong resistance to that change on the part of older teachers?

(II) When the curricula are not standardized:
- Does the educational system (school) set criteria to be followed by the teachers when defining subject matter?
- Are they asked to use their own textbooks (their own publications), or may they use any suitable textbook?
- Is there any mechanism through which the educational system (school) evaluates the teacher's selection of the subject matter?
- Do the pupils complain of a lack of relevance in what they are taught?
- Do parents (or others in society) complain either of a lack of relevance or of unacceptable ideology in what teachers teach?

(d) Time
(I) For actual teaching
- How many hours a week does the teacher actually teach a particular subject?
- Are these hours considered adequate for meeting his aims regarding the desired degree of pupil transformation?
- How many hours a week does a teacher teach in total (total weekly time-load)?
- Is the teacher's total load considered too much, reasonable, low, or too low?
- Do teachers complain of overloading?
- Can a teacher change the hours of actual teaching if he so wishes?
- Who decides the hours necessary for actual teaching, by subject?
- Is the size of the class (or other factors) considered?
- How are the decisions taken?

(II) For preparation1
- Do the teachers prepare themselves before lecturing?
- Is there any usual procedure?
- Has the educational system established a control device to check the adequate preparation of the teachers?
- What is it?
- Is it used frequently?
- Do the teachers complain that they are too overloaded and lack time to prepare themselves?
- Do pupils complain that teachers appear in class unprepared?

1. It is clear that this item, as well as the following "time for control", is highly subjective, and it is rather difficult for the system's evaluator to get any reliable answers. There are, however, ways of checking (a) whether teachers prepare themselves before lecturing or merely rely on their experience, (b) whether teachers give their pupils enough exercises which require correction, and (c) whether the corrections made by the teachers are sufficiently thorough.

(III) For evaluation
- Does the teacher give pupils classwork and/or homework?
- Does he correct it?

(e) Teacher's motivation
No questions will be asked here because of the problematic nature of this item. Probable questions may relate to income, career opportunities, autonomy, etc.

(f) Pupil response
Relevant questions will be raised in the discussion of the evaluation sub-system below. This is feedback information from the teacher's evaluation of his students. It has to do with the response the pupils show to the teacher's teaching effort as seen by the teacher himself. It is the outcome of the teacher as an evaluation element directed at himself in his function as a transformation element. See Diagram 7.

(g) Pressure on the teacher
Appropriate questions are raised in the discussion of the evaluation sub-system.

(h) Classroom conditions
- Are there any standards for school buildings pertaining to heating (air-conditioning), lighting, ventilation, size, etc.?
- If so, is there any operative control system for enforcing them?
- How many desks are there in each classroom?
- How are they placed?
- Do pupils complain about classroom conditions in general?
- Do parents complain?
- Do teachers complain?

(i) Teacher's health conditions
- Are there frequent absences due to health reasons?
- Are there adequate medical services at the teachers' disposal?
- Are teachers covered by social security?
- Is there any obligatory (annual or other) medical examination?
- Do teachers complain of being compelled to return to school prematurely after an illness?
- Is there any teacher-substitute service in operation?

2. Outcome

As discussed earlier, the outcome of the teaching process considered independently takes the form of the teacher's teaching effort, which is usually evaluated by the school inspectors through direct observation of the way teachers teach, by means of the inspectors' perception of what good teaching implies. It is, therefore, extremely difficult to propose any appropriate questions unless the evaluation is done indirectly by considering the inputs the teacher uses. In this case, the above questions on inputs also apply here.

B. The pupil as a transformation element

1. Inputs

(a) Teacher's transformation effort
As was said when discussing the teacher's outcome, the nature of the teacher's teaching effort is almost unknown to us, although it is a very important factor in the pupil's transformation. The effectiveness of the teacher's effort can only be evaluated directly. We will deal with this below when considering both the student and the teacher (plus others) as control elements.

(b) Aids
(I) Books
Textbooks
- Have all pupils the required textbooks?
- Do pupils use textbooks other than those required by the educational system?
- Do pupils prefer their official textbooks to other textbooks?
- If not, why not?

Additional books
- Do pupils use additional books (such as dictionaries, encyclopaedias, etc.) in their studies?
- Is the use of such books recommended by their teachers?
- If so, do the teachers ask the pupils to present views from such books?
- Do the school libraries make such books available to the pupils?
- What is the attitude of the educational system?
- Do the pupils complain of not having such books?

(II) Various additional aids
- Do pupils use additional aids (such as those for geometry and drawing, various samples of stones and metals, maps, etc.) in their studies?
- If yes, do they belong to the pupil?
- Do the teachers ask the pupils to use such aids?
- Do the schools provide the teacher and the pupils with such aids?
- What is the attitude of the educational system regarding their use?
- Do pupils complain of not having such aids?
- Do teachers complain that pupils do not have such aids?

(c) Time
(I) Time in the classroom
- Do pupils complain that the time spent in the classroom is excessive? insufficient?
- Do teachers complain that pupils do not pay attention in the classroom?
- Do pupils attend classroom lectures frequently?
- Is school attendance enforced by the school? by the teacher?
- If yes, by what methods? If not, why not?

(II) Time spent on homework
- Do pupils have to do much homework?
- Do they complain of having too much? not enough?
- Is homework enforced by the educational system or solely by the teacher?
- Do the teachers complain that pupils do not do their homework?
- What importance does the teacher (school) attach to homework as against classroom work?

(III) Time spent (or not) on competitive or complementary educational activities
- Do pupils participate in extra-curricular activities?
- Is this permitted by the school?
- Does the school encourage such activities?
- If yes, how? If not, why not?
- Do pupils have to go outside the school for foreign languages, music, dancing, etc.?
- Do pupils complain of too much work?
- Do parents encourage their children to participate in educational activities outside the school?
- What is the educational system's attitude?
- Do pupils take private lessons (tutoring) to meet school requirements?1
- What is the attitude of the educational system towards tutoring (i.e., are the teachers allowed to tutor their own students for money)?

1. This is a question which should be raised when evaluating the teacher's effort.

(d) Method of study
Because this is a very subjective matter, usually influenced by the teacher, there will not be any specific questions. If interested, the educational evaluator should make a survey of the teaching techniques employed by the teachers.

(e) Background
(I) Age1
- Is age a factor in entering a particular transformation sub-system?
- If yes, is it linked to other factors, such as family environment?
- If not, is the pupils' social environment homogeneous?
- Are there any discussions about changing (lowering or raising) the age factor?
- Are there any complaints from parents that the age limit is too low or too high?
- Are there any complaints from parents that the school requirements are above or below their children's capabilities?
- Is there any policy which allows the acceptance and/or promotion of a student irrespective of his age?

(II) Previous school attendance2
- Is the development of a curriculum based on the knowledge pupils have acquired in previous years?
- If yes, what degree of pupil excellence is required (excellent, very good, good, fair)?
- How is this knowledge checked?
- Is this also the case from one educational level to the next?
- Is there any entrance examination necessary for entering a new educational level?
- If so, what are the requirements for determining the previously acquired knowledge?
- Within a particular educational level, do pupils have to repeat a certain grade if they fail, or can they proceed to the next?
- If they fail, is there any time-limit for remaining in the same grade?
- If they cannot repeat, is there any selection mechanism when they enter the next grade?
- For how long can a student be absent from the educational system, after graduating from a certain educational level, and still be eligible to return to the educational system in the following level or grade of the same level?

(III) & (IV) Family and socio-cultural environment³
- What is the father's occupation?⁴
- What is the father's education (measured usually in terms of total years of formal education)?
- What is the mother's education?
- What is the population of the town (village, etc.)?⁵
- Is there any radio and television system in operation in the town?
- Are there other cultural institutions?
- Do pupils participate in cultural events, listen to radio, watch television, read newspapers, etc.?

1. Age is an important factor for entrance into pre-primary and primary stages. As the child grows, the age factor determining his ability to absorb and assimilate new knowledge decreases in importance.
2. These questions have mostly to do with detecting the existing coupling between the various transformation sub-systems in terms of curricula, entrance requirements, etc., referred to above when discussing the "correctness of output" test.
3. These two factors are of great importance during the first years of schooling. As the individual develops, the importance of these factors is relatively decreasing.
4. Father's occupation is usually employed as one of the main indicators for showing family social status.
5. From the point of view of the entire educational system, we can reverse the question and ask for the regional distribution of students.

(f) Inter-pupil relationships
- How is the size of a class determined?
- What is the average size of a class?
- Do teachers encourage free discussions in class?
- What is the educational system's attitude towards free discussion in class?
- Are pupils encouraged to ask questions?
- Are there group activities, group work, pupil associations, etc.?
(g) Pupil motivation
- Do pupils participate actively in class work?
- Do pupils do their homework?
- Are they frequently absent without being ill?
- Do they complain about school life in general?
- Do they complain about a teacher being too severe?
- Do they participate in school activities?
- Do they have and sing a particular school song?
- Do they wear a uniform, school caps or suits?
- If so, do they like them?

(h) Pupil's success
Since this input is an output of the student's evaluation, it will be dealt with when discussing the evaluation sub-system.

(i) Curriculum
The intention of the educational system's evaluator here will be to detect any objection that pupils (and/or their parents) may have to the aims of the curriculum they follow:
- Do pupils complain about the lack of relevance of the curriculum?
- Do parents complain about the curriculum?
- If yes, why do they do so?
- Do pupils complain about spending too much time on a particular subject and less on others?
- Do pupils complain about a subject being very difficult (or at the same level as in previous grades)?¹

1. This question will help in detecting the consistency and appropriateness of a curriculum within the same educational level and/or from one level to another. In many educational systems, as for example in the French primary level, curricula are designed in such a way as to allow much repetition of the previous year's subjects, evidently to strengthen the knowledge acquired in the previous year. Sometimes, however, unnecessary repetition may have the opposite results, affecting negatively a pupil's motivation.

(j) Classroom conditions
(The same as for the teacher.)

(k) Health conditions
- How often are pupils absent from school for health reasons?
- Is there any regular pupil medical inspection service?
- If yes, how frequently are pupils inspected?
- Are pupils covered by social security?
- Can pupils have meals at school?
- If not, are they fed adequately at home?
- Do parents complain that their children feel tired when they are back home?

2. Outcome
As stated earlier, questions related to the outcome of the learning process, which corresponds to the final outcome of the teaching and learning processes, have already been raised in discussing the macro-educational evaluation. They coincide especially at some important exit points of the educational system, e.g., the end of the compulsory level, the end of secondary education and the end of higher education studies.

The tests proposed already provide the basis for a more meaningful assessment as to the true value of the educational result at the various stages, one which goes, and should go, beyond the assessment as to whether or not a pupil learnt all that the educational system had to teach him. It is necessary to go beyond this finding for two main reasons: firstly, because what the educational system had to teach the pupil may not have been of much importance for the pupil's further development and career (relevance of education) and, secondly, because there are so many factors involved in the highly complex teaching and learning processes that it does not make sense simply to rely on a pupil's achievement test for evaluating the educational system's performance.

As to the yearly outcomes, one could use a similar test and, more specifically, one could also consider the following points:
- the number of pupils repeating, dropping out and succeeding (with all due reservation regarding their interpretation);
- the number of promoted pupils by marks received (if there is any precise grading system, again with all due reservation as to their interpretation).

ANNEX B

Three conversations and a commentary

1.
A conversation between a person who will commission an evaluation study and an evaluation specialist favouring a consequence orientation¹

Commissioner: Thanks for taking the time to see me today. I suspect that your teaching schedule at the University keeps you hopping, but I've been told that you occasionally carry out educational evaluations.

Evaluator: That's true, my normal teaching load here at the University is pretty heavy, but this quarter is about over. Besides, I am working now with a small group of graduate students in an evaluation seminar, and when I mentioned the possibility of evaluating your district's project in Reality-Rooted Reading they became really interested.

C: You mean you might use students in carrying out an evaluation?

E: It's really good experience for them, and they often can make excellent contributions to the evaluation itself. Of course, one must be careful not to exploit students in such situations. Too many of my colleagues view graduate students as a somewhat advanced form of migrant workers.

C: Well, did you have a chance to read the write-up I sent you of our new Reality-Rooted Reading programme? We think it holds great promise as a way to get poor readers more involved in developing their reading skills.

E: I did read the document, and you may be correct. There are certainly a number of positive features in the programme. I must confess, though, that I was disturbed by the apparent lack of replicability in the programme itself. It sounds more like a six-ring circus than anything which, if it does work, could be used again in the future. If you're going to the trouble of evaluating this intervention, I assume that you're contemplating its use in the future. Interventions that are not at least somewhat replicable can't really be employed very well in the future. Is your Reality-Rooted Reading programme going to be essentially reproducible?

1.
We acknowledge with thanks permission granted by the OECD to reproduce pp. 64-75 and 79-84 of R.E. Stake, Evaluating educational programmes - the need and the response, Paris, CERI, OECD, 1976.

C: I'm glad you brought that up. The planning committee which has been working out the programme's details became aware of that problem a few weeks ago. They're in the process of devising instructional guides which will substantially increase the replicability of the programme.

E: I just hope the planning committee itself is rooted in reality.

C: Well, what about the evaluation? Will you take it on? Our district school board is demanding formal evaluations of all new programmes such as this one, so we can't really get under way until the responsibility for evaluation has been assigned.

E: I'll need to get some questions answered first.

C: Fire away.

E: What's the purpose of the evaluation? In other words, what's going to happen as a consequence of the evaluation? Unless the evaluation is going to make a genuine difference in the nature of the instructional programme, we wouldn't want to muck with it. Too many of us here at the University have experienced the frustrations of carrying out research studies whose only purpose seemed to be that of widening the bindings of research journals. Unless an evaluation satisfies the "so what?" criterion, I'm sure we wouldn't be interested.

C: Well, the district superintendent has indicated that the continuation of the new programme will be totally dependent upon the results of its evaluation. That satisfy you?

E: Sure does. Now, there was a bit of rhetoric in your programme description about appraising the programme in terms of the "uniqueness of its innovative features". Does that imply you're more concerned with evaluating the procedural aspects of the programme than with evaluating the results yielded by those procedures?
This is a particularly important issue for me.

C: Well, we are very proud of the programme's new features. What are you getting at?

E: There are too many educators who are so caught up with the raptures of an instructional innovation that they are almost oblivious of its effects on learners. And that, after all, is why we're in the game. Our instructional interventions should help learners. I want to be sure that, although we will consider the procedures employed during the programme, the main emphasis of the evaluation will focus on the consequences of that programme's use.

C: Oh, we'd be perfectly agreeable to that. After all, you people are the experts. Besides, I guess I share your point of view.

E: I also noted an almost exclusive preoccupation with cognitive, that is, intellectual outcomes of the programme. Your people seemed to be concerned only about the skills of reading. Aren't you also worried about pupils' attitudes toward reading?

C: Of course, but you can't assess that kind of stuff, can you? I thought the affective domain was off-limits for the kinds of evaluators who, as you apparently are, are concerned with evidence.

E: It's tough to do, but there are some reasonably good ways of getting evidence regarding learners' affect toward an instructional programme. We'd want to use them.

C: How about tests? Will you have to build lots of new ones?

E: My guess is that we will have to devise some new measures. The standardized teaching tests your district now uses will be worthless for this kind of an evaluation. We'll need to see if there are any available criterion-referenced tests which we can use or adapt.

C: Do you people always have to use tests?

E: No, but it is important to get sufficient evidence regarding a programme's effects so that we are in a better position to appraise its consequences than merely by intuiting those consequences.
C: You'll still have to make judgments, won't you?

E: Certainly, but judgments based on evidence tend to be better than judgments made without it. Properly devised measuring devices can often be helpful in detecting a programme's effects, both those that were intended as well as any unanticipated effects.

C: How come I haven't heard you say "instructional objectives" once during our conversation? I thought you folks were all strung out on behavioural objectives.

E: Well, clearly stated instructional objectives represent a useful way of describing a programme's intended effects. But the effects of the programme are what we want to attend to, not just the educator's utterances about what was supposed to happen. Consequence-oriented educational evaluators can function effectively even without behavioural objectives.

C: Amazing!

E: There are a couple of other areas we have to get into. I hope you're sincere in wanting to contrast the new programme with alternative ways that the money it's costing might be spent.

C: Absolutely.

E: And, finally, the matter of evaluator independence. Will we have the right to release the results of our evaluation to all relevant decision-makers involved in this project, including the public?

C: You think that's important to get clarified now?

E: It might head off some sticky problems later. We'd like that kind of independence.

C: I think it can be assured. I'll want to check it out with my division chief, however.

E: There's also a related kind of independence I want to discuss. Unlike some of the independent evaluation firms that have sprung up in the past few years, we really aren't in the evaluation business on a full-time basis, hence in a sense we don't need your district's repeat business. Therefore, we'll be inclined to call our shots openly, even if it means that the programme is evaluated adversely.
C: That's related to your earlier point about independence in reporting the evaluation's results.

E: You bet.

C: Okay, we're willing to play by the rules. I hope it turns out positively though.

E: So do I. Our kids could surely do with a bit of help in their reading programme.

C: Well, what next?

E: Why don't I and some of my students whip up a detailed plan of how we want to do the evaluation and fire it off to you by mail, say, in two weeks.

C: Fine. If we have any problems with it, we can get back to you. All right?

E: Sure.

C: We haven't talked about money yet. How much will this thing cost?

E: We'll include a budget with our evaluation plan. But, because university professors are so handsomely rewarded by their own institutions, I'm sure the amount will be a pittance, perhaps a used chalkboard eraser or two.

C: You guys do live in an ivory tower, don't you?

E: Didn't you take the elevator on the way up?

2. A conversation between a person who will commission an evaluation study and an evaluation specialist favouring a responsive approach

Commissioner: As I said in my letter, I have asked you to stop by because we need an evaluator for our National Experimental Teaching Programme. You have been recommended very highly. But I know you are very busy.

Evaluator: I was pleased to come in. The new Programme is based on some interesting ideas and I hope that many teachers will benefit from your work. Whether or not I personally can and should be involved remains to be seen. Let's not rule out the possibility. There might be reasons for me to set aside other obligations to be of help here.

C: Excellent. Did you have a chance to look over the programme materials I sent you?

E: Yes, and by coincidence, I talked with one of your field supervisors, Mrs. Bates. We met at a party last week. She is quite enthusiastic about the plans for group problem-solving activities.

C:
That is one thing we need evaluation help with. What kind of instruments are available to assess problem-solving? Given the budget we have, should we try to develop our own tests?

E: Perhaps so. It is too early for me to tell. I do not know enough about the situation. One thing I like to do is to get quite familiar with the teaching and learning situations, and with what other people want to know, before choosing tests, or developing new ones. Sometimes it turns out that we cannot afford or cannot expect to get useful information from student performance measures.

C: But surely we shall need to provide some kind of proof that the students are learning more, or are understanding better, than they did before! Otherwise how can we prove the change is worthwhile? We do have obligations to evaluate this programme.

E: Perhaps you should tell me a little about those obligations.

C: Yes. Well, as you know, we are under some pressure from the Secretary (of Health, Education and Welfare), from Members of Congress, and the newspapers. They have been calling for a documentation of "results". But just as important, we in this office want to know what our programme is accomplishing. We feel we cannot make the best decisions on the amount of feedback we have been getting.

E: Are there other audiences for information about the National Experimental Teaching Programme?

C: We expect others to be interested.

E: Is it reasonable to conclude that these different "audiences" will differ in what they consider important questions, and perhaps even what they would consider credible evidence?

C: Yes, the researchers will want rigor, the politicians will want evidence that the costs can be reduced, and the parents of students will want to know it helps their children on the College Board Examinations. I think they would agree that it takes a person of your expertise to do the evaluation.
E: And I will look to them, and other important constituencies, teachers and taxpayers, for example, to help identify pressing concerns and to choose kinds of evidence to gather.

C: Do you anticipate we are going to have trouble?

E: Of course, I anticipate some problems in the programme. I think the evaluator should check out the concerns that key people have.

C: I think we must try to avoid personalities and stick to objective data.

E: Yes, I agree. And shouldn't we find out which data will be considered relevant to people who care about this programme? And some of the most important facts may be facts about the problems people are having with the programme. Sometimes it does get personal.

C: The personal problems are not our business. It is important to stick to the impersonal, the "hard-headed" questions, like "How much is it costing?" and "How much are the students learning?"

E: To answer those questions effectively I believe we must study the programme, and the communities, and the decision-makers who will get our information. I want any evaluation study I work on to be useful. And I do not know ahead of time that the cost and achievement information I could gather would be useful.

C: I think we know what the funding agencies want: information on cost and effect.

E: We could give them simple statements of cost, and ignore such costs as extra work, lower morale, and opportunity costs. We could give them gain scores on tests, and ignore what the tests do not measure. We know that cost and effect information is often superficial, sometimes even misleading. I think we have an obligation to describe the complexities of the programme, including what it is costing and what its results appear to be. And I think we have an obligation to say that we cannot measure these important things as well as people think we can.

C: Well, surely you can be a little less vague as to what you would do.
We have been asked to present an evaluation design by a week from next Wednesday. And if we are going to have any pretesting this year we need to get at it next month.

E: I am not trying to be evasive. I prefer gradually developed plans, "progressive focusing" Parlett and Hamilton call it. I would not feel pressed by the deadline. I would perhaps present a sketch like this one (drawing some papers from a folder); one which Les McLean used in the evaluation of an instant-access film facility. His early emphasis was on finding out what issues most concern the people in and around the project.

C: I think of that as the Programme Director's job.

E: Yes, and the evaluation study might be thought of, in part, as helping the Programme Director with his job.

C: Hmm. It is the Secretary I was thinking we would be helping. You made the point that different people need different information, but it seems to me that you are avoiding the information that the Secretary and many other people want.

E: Let's talk a bit about what the Secretary, or any responsible official, wants. I am not going to presume that a cost-effectiveness ratio is what he wants, or what he would find useful. We may decide later that it is.

First of all, I think that what a responsible official wants in this situation is evidence that the National Programme people are carrying out their contract, that the responsibility for developing new teaching techniques continues to be well placed, and that objectionable departures from the norms of professional work are not occurring.

Second, I think a responsible official wants information that can be used in discussions about policy and tactics. Our evaluation methodology is not refined enough to give cost-effectiveness statements that policy-setters or managers can use. The conditionality of our ratios and our projections is formidable.
What we can do is acquaint decision-makers with this particular programme, with its activities and its statistics, in a way that permits them to relate it to their experiences with other programmes. We do not have the competence to manage educational programmes by ratios and projections; management is still an art. Maybe it should remain an art, but for the time being we must accept it as a highly particularized and judgmental art.

C: I agree, in part. Many evaluation studies are too enormously detailed for effective use by decision-makers. Many of the variables they use are simplistic, even though they show us how their variables correlate with less simplistic measures. Some studies ignore the unrealistic arrangements that are made as experimental controls. But those objectionable features do not make it right to de-emphasize measurement. The fact that management is an art does not mean that managers should avoid good technical information.

What I want from an evaluation is a good reading, using the best techniques available, a good reading of the principal costs and of the principal benefits. I have no doubt that the evaluation methodology we have now is sufficient for us to show people in government, in the schools, and in the general public what the programme has accomplished.

E: If I were to be your evaluator I would get you that reading. I would use the best measures of resource allocation, and of teaching effort, and of student problem-solving we can find. But I would be honest in reporting the limitations of those measures. And I would find other ways also of observing and reporting the accomplishments and the problems of the National Programme.

C: That of course is fair. I do not want to avoid whatever real problems there may be. I do want to avoid collecting opinions as to what problems (and accomplishments) there might be. I want good data.
I want neither balderdash nor gossip. I want my questions answered and I want the Secretary's questions answered. And those questions might change as we go along. You would call that "formative evaluation"?

E: Sometimes. I would also call it "responsive".

C: What kind of final report would you prepare for us?

E: I brought along a couple of examples of previous reports. I can leave them with you. I can provide other examples if you would like. Whether there is a comprehensive or brief final report, whether there is one or several, those decisions can be made later.

C: No, I'm afraid that simply won't do. If we are to commit funds to an evaluation study, we must have a clear idea in advance of how long it is going to take, what it will cost, and what kind of product to expect. That does not mean that we could not change our agreement later.

E: If you need a promise at the outset, we can make it. Believe me, I do not believe it is in your best interests to put a lot of specifications into the "contract". I would urge you to choose your evaluator in terms of how well he has satisfied his previous clients more than on the promises he would make so early.

C: It would be irresponsible of me not to have a commitment from him.

E: Of course. And your evaluator should take some of the initiative in proposing what should be specified and what options should be left open.

C: Let me be frank about one worry I have. I am afraid I may get an evaluator who is going to use our funding to "piggy-back" a research project he has been wanting to do. He might agree to do "our" evaluation study but it might have very little to do with the key decisions of the Experimental Teaching Programme.

E: It is reasonable to expect any investigator to continue old interests in new surroundings. When you buy him you buy his curiosities.
He may develop hypotheses, for example, about problem solving and teaching style, hypotheses that sound most relevant to the programme, but the test of these hypotheses may be of little use to those who sponsor, operate, or benefit from the programme. His favourite tactics, a carefully controlled comparative research effort or a historical longitudinal research study, for example, might be attractive to your staff. But he is not inclined to talk about how unnecessary this approach may be. The inertia in his past work may be too strong. You are right, there is a danger. I think it can best be handled by looking at the assignments the evaluator has had before, and by getting him to say carefully what he is doing and why, and by the sponsor saying very carefully what he wants and does not want, and by everybody being sceptical as to the value of each undertaking, and suggesting alternatives.

C: Would you anticipate publishing the evaluation study in a professional journal?

E: Even when an article or book is desired it is rare for an evaluation study to be suitable for the professional market. Evaluation studies are too long, too multi-purposive, too non-generalizable and too dull for most editors. Research activities within the evaluation project sometimes are suitable for an audience of researchers. I usually suppose that my evaluation work is not done for that purpose. If something worth publishing became apparent I would talk over the possibilities with you.

C: I think something like that should be in writing. What other assurances can you give me that you would not take advantage of us? Do you operate with some specific "rules of confidentiality"?

E: I would have no objection to a contract saying that I would not release findings about the project without your authorization. I consider that the teachers, administrators, parents and children also have rights here. Sometimes I will want to get a formal release from them.
Sometimes I will rely on my judgment as to what should and should not be made public, or even passed along to you. In most regards I would follow your wishes. If I should find that you are a scoundrel, and it is relevant to my evaluation studies, I will break my contract and pass the word along to those whom I believe should know.

C: I have nothing to lose, but others involved may have. I do not want to sanction scurrilous muck-raking in the name of independent evaluation. I wonder if you are too ready to depend on your own judgment. What if it is you who are the scoundrel?

E: I would expect you to expose me.

C: By exposing you I would be exposing my bad judgment in selecting you. The line of thought I would return to is the safeguard you would offer us against mismanagement of the evaluation study.

E: The main safeguard, I think, is what I was offering at the beginning: communication and negotiation. In day-to-day matters I make many decisions, but not alone. My colleagues, my sponsors, my information sources help make those decisions. A good contract helps, but it should leave room for new responsibilities to be exercised. It should help assure us that we will get together frequently and talk about what the evaluation study is doing and what it should be doing.

C: What about your quickness to look for problems in the programme? Perhaps you consider your own judgment a bit too precious.

E: I do not think so. Perhaps. I try to get confirmation from those I work with and from those who see things very differently than I do. I deliberately look for disconfirmation of the judgments I make and the judgments I gather from others. If you are thinking about the judgments of what is bad teaching and learning, I try to gather the judgments of people both who are more expert than I and those who have a greater stake in it than I.
I cannot help but show some of my judgments, but I will look for hard data that support my judgment and I will look just as hard for evidence that runs counter to my opinion.

C: That was nicely said. I did not mean to be rude.

E: You speak of a problem that cuts deeply. There are few dependable checks on an evaluator's judgment. I recognise that.

C: You would use consultation with the project staff and with me, as a form of check and balance.

E: Yes. And I think that you would feel assured by the demands I place upon myself for corroboration and cross-examination of findings.

C: Well, there seems to me to be a gap in the middle. You have talked about how we would look for problems and how you would treat findings, but will there be any findings? What will the study yield?

E: If I were to be your evaluator we might start by identifying some of the key aims, issues, arrangements, activities, people, etc. We would ask ourselves what decisions are forthcoming, what information we would like to have. I would check these ideas with the programme staff. I would ask them to look over some things I and other evaluators have done in the past, and say what looks worth doing. The problem would soon be too big a muddle, and we would have to start our diet.

C: I don't care much for the metaphor.

E: That may be as good a basis as any for rejecting an evaluator: his bad choice of metaphors.

C: I've just realized how late it is. I am hoping not to be rejecting any evaluators today. Perhaps you would be willing to continue this later.

E: Let me make a proposal. I appreciate the immediacy of the situation. I know a young woman with a doctorate and research experience, who might be available to co-ordinate the evaluation work. If so, I could probably be persuaded to be the director, on a quarter-time basis. Let me go over your materials with her.
We would prepare a sketch of an evaluation plan, and show it to you along with some examples of her previous work.

C: That is a nice offer. Let me look at your examples and think about it before you go ahead. Would it be all right if I called you first thing tomorrow morning? Good. Thanks very much for coming by.

3. A conversation between a person who is commissioning an independent evaluation study and the evaluator who favours a "goal-free" approach

Commissioner: Well, we're very glad you were able to take this on for us. We consider this programme in reading for the disadvantaged to be one of the most important we have ever funded. I expect you'd like to get together with the project staff as soon as possible (the director is here now) and, of course, there's quite a collection of documents covering the background of the project that you'll need. We've assembled a set of these for you to take back with you tonight.

Evaluator: Thanks, but I think I'll pass on meeting the staff and on the material. I will have my secretary get in touch with the director soon, though, if you can give me the phone numbers.

C: You mean you're planning to see them later? But you've got so little time; we thought that bringing the director in would really speed things up. Maybe you'd better see him. I'm afraid he'll be pretty upset about making the trip for nothing. Besides, he's understandably nervous about the whole evaluation. I think his team is worried that you won't really appreciate their approach unless you spend a good deal of time with them.

E: Unfortunately, I can't both evaluate achievements with reasonable objectivity and also go through a lengthy indoctrination session with them.

C: Well, surely you want to know what they are trying to do, what's distinctive about their approach?

E: I already know more than I need to know about their goals: teaching reading to disadvantaged youngsters, right?
C: But that's so vague—why, they developed their own instruments, and a very detailed curriculum. You can't cut yourself off from them. Otherwise, you'll finish up criticizing them for failing to do what they never tried to do. I can't let you do that. In fact, I'm getting a little nervous about letting you go any further with the whole thing. Aren't you going to see them at all? You're proposing to evaluate a three million dollar project without even looking at it?

E: As far as possible, yes. Of course, I'm handicapped by being brought in so late and under a tight deadline, so I may have to make some compromises. On the general issue, I think you're suffering from some misconception about evaluation. You're used to the rather cosy relationship which often—in my view—contaminates the objectivity of the evaluator. You should think about the evaluation of drugs by the double-blind approach...

C: But even there, the evaluator has to know the intended effect of the drug in order to set up the tests. In the educational field, it's much harder to pin down goals and that's where you'll have to get together with the developers.

E: The drug evaluator and the educational evaluator do not even have to know the direction of the intended effect, stated in very general terms, let alone the intended extent of success. It's the evaluator's job to find out what effects the drug has, and to assess them. If (s)he is told in which direction to look, that's a handy hint but it's potentially prejudicial. One of the evaluator's most useful contributions may be to reconceptualize the effects, rather than regurgitating the experimenter's conception of them.

C: This is too far-out altogether. What are you suggesting the evaluator do—test for effects on every possible variable? He can't do that.

E: Oh, but he has to do that anyway. I'm not adding to his burden.
How do you suppose he picks up side effects? Asks the experimenter for a list? That would be cosy. It's the evaluator's job to look out for effects the experimenter (or producer etc.) did not expect or notice. The so-called "side effects", whether good or bad, often wholly determine the outcome of the evaluation. It's absolutely irrelevant to the evaluator whether these are "side" or "main" effects; that language refers to the intentions of the producer and the evaluator isn't evaluating intentions but achievements. In fact, it's risky to hear even general descriptions of the intentions, because it focuses your attention away from the "side-effects" and tends to make you overlook or downweight them.

C: You still haven't answered the practical question. You can't test for all possible effects. So this posture is absurd. It's much more useful to tell the producer how well he's achieved what he set out to achieve.

E: The producer undoubtedly set out to do something really worthwhile in education. That's the really significant formulation of his goals and it's to that formulation the evaluator must address himself. There's also a highly particularized description of the goals—or there should be—and the producer may need some technical help in deciding whether he got there, but that certainly isn't what you, as the dispenser of taxpayers' funds, need to know. You need to know if the money was wasted or well-spent etc.

C: Look, I already had advice on the goals. That's what my advisory panel tells me when it recommends which proposal to fund. What I'm paying you for is to judge success, not legitimacy of the direction of effort.

E: Unfortunately for that way of dividing the pie, your panel can't tell what configuration of actual effects would result, and that's what I'm here to assess. Moreover, your panel is just part of the whole process that led to this product.
They're not immune to criticism, nor are you, and nor is the producer. (And nor am I.) Right now, you have—with assistance—produced something, and I am going to try to determine whether it has any merit. When I've produced my evaluation, you can switch roles and evaluate it—or get someone else to do so. But it's neither possible nor proper for an evaluator to get by without assessing the merits of what has been done, not just its consonance with what someone else thought was meritorious. It isn't proper because it's passing the buck, dodging the—or one of the—issue(s). It isn't possible because (it's almost certain that) no one else has laid down the merits of what has actually happened. It's very unlikely, you'll agree, that the producer has achieved exactly the original goals, without shortfall, overrun or side-effects. So—unless you want to abrogate the contract we just signed—you really have to face the fact that I shall be passing on the merits of whatever has been done—as well as determining exactly what that is.

C: I'm thinking of at least getting someone else in to do it too—someone with a less peculiar notion of evaluation.

E: I certainly hope you do. There's very little evidence about the interjudge reliability of evaluators. I would of course cooperate fully in any such arrangement by refraining from any communication whatsoever with the other evaluator.

C: I'm beginning to get the feeling you get paid rather well for speaking to no one. Will you kindly explain how you're going to check on all variables? Or are you going to take advantage of the fact that I have told you it's a reading programme—I'm beginning to feel that I let slip some classified information. What's your idea of an ideal evaluation situation—one where you don't know what you're evaluating?

E: In evaluation, blind is beautiful. Remember that justice herself is blind, and good medical research is double blind.
The educational evaluator is severely handicapped by the impossibility of double-blind conditions in most educational contexts. But (s)he must still work very hard at keeping out prejudicial information. You can't do an evaluation without knowing what it is you're supposed to evaluate—the treatment—but you do not need or want to know what it's supposed to do. You've already told me too much in that direction. I still need to know some things about the nature of the treatment itself, and I'll find those out from the director, via my secretary, who can filter out surplus data on intentions etc. before relaying it to me. That data on the treatment is what cuts the problem down to size; I have the knowledge about probable or possible effects of treatments like that, from the research literature, that enables me to avoid the necessity for examining all possible variants.

C: Given the weakness of research in this area, aren't you still pretty vulnerable to missing an unprecedented effect?

E: Somewhat, but I have a series of procedures for picking these up, from participant observation to teacher interview to sampling from a list of educational variables. I don't doubt I slip up, too; but I'm willing to bet I miss less than anyone sloshing through the swamp towards goal-achievement. I really think you should hire someone else to do it independently.

C: We really don't have the budget for it... maybe you can do something your way. But I don't know how I'm going to reassure the project staff. This is going to seem a very alien, threatening kind of approach to them, I'm afraid.

E: People that feel threatened by referees who won't accept their hospitality don't understand about impartiality. This isn't support for the enemy, it's neutrality. I don't want to penalize them for failing to reach over-ambitious goals. I want to give them credit for doing something worthwhile in getting halfway to those goals.
I don't want to restrict them to credit for their announced contracts. Educators often do more good in unexpected directions than the intended ones. My approach preserves their chance in those directions. In my experience, interviews with project staff are excessively concerned with explanations of shortfall. But shortfall has no significance for me at all. It has some for you, because it's a measure of the reliability of the projections they make in the future. If I were evaluating them as a production team, I'd look at that as part of the track record. But right now I'm evaluating their product—a reading programme. And it may be the best in the world even if it's only half as good as they intended. No, I'm not working in a way that's prejudiced against them.

C: I'm still haunted by a feeling this is an unrealistic approach. For example, how the devil would I ever know who to get as an evaluator except in terms of goal-loaded descriptions? I got you—in fact, I invited you on the phone—to handle a "reading programme for disadvantaged kids", which is goal-loaded. I couldn't even have worked out whether you'd had any experience in this area except by using that description. Do you think evaluators should be universal geniuses? How can they avoid goal-laden language in describing themselves?

E: There's nothing wrong with classifying evaluators by their past performance. You only risk contamination when you tell them what you want them to do this time, using the goals of this project as you do so. There's nothing unrealistic about the alternative, any more than there is about cutting names off scientific papers when you, as an editor, send them out to be refereed.
You could perfectly well have asked me if I was free to take on an evaluation task in an area of previous experience—a particularly important one, you could have added—requiring, as it seemed to you, about so much time and with so much in fees involved. I could have made a tentative acceptance and then come in to look into details, as I did today.

C: What details can you look at?

E: Sample materials, or descriptions by an observer of the process, availability of controls, time constraints etc. What I found today made it clear you simply wanted the best that could be done in a very limited time, and I took it on that basis—details later. Of course, it probably won't answer some of the crucial evaluation questions, but to do that you should have brought someone in at the beginning. Your best plan would have been to send me reasonably typical materials and tell me how long the treatment runs. That would have let me form my own tentative framework. But no evaluator gets perfect conditions. The trouble is that the loss is not his, it's the consumer's. And that means he's usually not very motivated to preserve his objectivity. It's more fun to be on friendly terms with the project people. By the way, the project I'm on for you is hard to describe concisely in goal-free language, but that's not true in all cases. I often do CAI evaluations, for example, and other educational technology cases, where the description of the project isn't goal-loaded.

C: Look, how long after you've looked at materials before you form a pretty good idea about the goals of the project? Isn't it a bit absurd to fight over hearing it a little earlier?

E: The important question is not whether I do infer the goals but whether I may infer some other possible effects before I am locked in to a 'set' towards the project's own goals.
For example, I've looked at elementary school materials and thought to myself—vocabulary, spelling, general knowledge, two-dimensional representation conventions, book-orientation, reading skills, independent study capacity, and so on. It isn't important which of these is the main goal—if the authors have made any significant headway on it, it will show up; I'm not likely to miss it altogether. And the other dimensions are not masked by your set if you don't have one. Remember that even if a single side-effect doesn't swamp the intended effect, the totality of them may make a very real plus for this programme by comparison with others which do about as well on the intended effect and on cost. After I've looked at materials (not including teachers' handbooks, etc.), I look at their tests. Of course, looking at materials is a little corrupting, too, if you want to talk about pure approaches. What I should really be looking at is students—especially changes in students, and even more especially, changes due to these materials. (I'm quite happy to be looking at their test results, for example.) But the evaluator usually has to work pretty hard before he can establish cause. It's worth realizing, however, that if he had all that, his job is not yet half done. But I guess the most important practical argument for goal-free evaluation is one we haven't touched yet.

C: Namely?

E: I'm afraid there isn't time to go into that now.

The foregoing dialogues illustrate the difficulties a commissioner and a prospective evaluator have in getting acquainted with what the other person needs and expects. Three were written rather than one to show how evaluators of different persuasions respond. There is obviously common concern among these three evaluators, but clear differences as well.

The first evaluator stresses the need for maximum attention to results that are directly related to the instruction.
The second evaluator stresses finding out the problems that most concern the people involved in this particular programme. The third evaluator stresses the need to remain independent of sponsors and programme personnel. These three evaluators represent the approaches in the grid in Table 2 (page 54) that were called Student Gain by Testing, Transaction-Observation, and Goal-Free Evaluation Approaches. It is reasonable to expect that the three contracts they would write would be quite different, both in terms of what they would promise and in terms of the safeguards they would set forth.

Project evaluation methods

Project evaluation is a comprehensive assessment of a given project, policy, programme, or investment, taking into account all of its stages: planning, implementation, and monitoring of results. It provides information used in the decision-making process.

Evaluations can be divided by focus: evaluation against goals (assessing action in relation to objectives, whether national or community-level) and evaluation of operational aspects (monitoring of project activities).

Evaluations are also distinguished by when they are performed: ex ante evaluation (before implementation), ongoing evaluation (during implementation), and ex post evaluation (after implementation).

To allocate capital cost-effectively, investors use various methods to assess the rationality of an investment. With respect to the time factor, techniques for evaluating the profitability of investment projects are divided into static methods (also known as simple methods) and dynamic methods (also called discounted methods).

  • 1 Example methods and formulas
  • 2 When to use Project evaluation methods
  • 3 Advantages of Project evaluation methods
  • 4 Limitations of Project evaluation methods
  • 5 Other approaches related to Project evaluation methods
  • 6 References

Example methods and formulas


  • Cost-benefit analysis: Cost-benefit analysis compares the estimated costs and benefits of a project to determine whether it is worth pursuing. It helps decision-makers identify the potential risks and rewards associated with a project before investing in it.
  • Return on Investment (ROI): Return on Investment is a measure of a project's profitability, expressed as a percentage. It is calculated by dividing the net benefit of a project by its total cost. This method is useful for a quick check of whether a project is expected to generate a positive return.
  • Net Present Value (NPV): Net Present Value is the sum of all expected future cash flows of a project discounted to today's value, less the initial outlay. A positive NPV indicates that the project is expected to create value at the chosen discount rate.
  • Internal Rate of Return (IRR): The Internal Rate of Return is the discount rate at which a project's net present value equals zero, i.e. the rate the project's cash flows effectively earn. It is expressed as a percentage and is compared against the investor's required rate of return to judge whether the project is a sound investment.

{\displaystyle NPV=\sum _{t=1}^{n}{\frac {CF_{t}}{(1+IRR)^{t}}}-CF_{0}=0}
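As an illustration of the measures above, the short Python sketch below computes NPV directly from its definition and finds the IRR by bisection on the NPV function. The helper names and the sample cash flows are invented for this example; real analyses would typically use a financial library.

```python
def npv(rate, cash_flows):
    """Net present value: cash_flows[0] is the (negative) initial
    outlay at t = 0; later entries are yearly cash flows."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows))

def irr(cash_flows, lo=-0.99, hi=10.0, tol=1e-7):
    """IRR = discount rate at which NPV is zero, found by bisection.
    Assumes NPV changes sign exactly once between lo and hi."""
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if npv(lo, cash_flows) * npv(mid, cash_flows) <= 0:
            hi = mid  # root lies in [lo, mid]
        else:
            lo = mid  # root lies in [mid, hi]
    return (lo + hi) / 2

flows = [-1000, 400, 400, 400]     # outlay, then three annual inflows
print(round(npv(0.08, flows), 2))  # NPV at an 8% discount rate: 30.84
print(round(irr(flows), 3))        # IRR of about 9.7%
```

Under the decision rules above, this project would be accepted at an 8% required rate (positive NPV) and rejected at any hurdle rate above its IRR of roughly 9.7%.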

Project evaluation methods are essential for judging the potential success of projects. By carefully assessing the expected costs and benefits of a project, decision-makers can make better-informed choices about whether to pursue it, launch it with greater confidence, and improve its chances of a successful outcome.

Simple methods of project evaluation

It is proposed that simple methods should be used only:

  • In the initial stages of project preparation, when sufficiently detailed information about the investment project is not yet available;
  • For projects with a relatively short economic life cycle, where the differing timing of inputs and effects does not decisively affect the calculation of the project's profitability;
  • For small-scale projects, where both the inputs and the effects are minor and do not affect the market position or the financial situation of the company implementing the investment.

The most frequently mentioned and described static methods of investment project evaluation include:

  • The payback period
  • Comparative cost analysis
  • Comparative profit analysis
  • Comparative profitability analysis
  • The average rate of return on investment
  • The first-year test

These methods do not account for the time value of money: individual values are not differentiated across years, and the calculation uses the sum of expected costs and benefits, or average values over a specified period. As a result, they capture the project life cycle and the level of committed capital expenditure only approximately.
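Because static methods ignore discounting, they reduce to simple arithmetic on the raw cash flows. As a sketch, the payback period (the first method listed above) can be computed as follows; the function name and sample figures are illustrative:

```python
def payback_period(initial_outlay, annual_inflows):
    """Years until cumulative (undiscounted) inflows repay the outlay,
    with linear interpolation inside the recovery year."""
    cumulative = 0.0
    for year, inflow in enumerate(annual_inflows, start=1):
        if cumulative + inflow >= initial_outlay:
            # Fraction of this year needed to cover the remainder
            return year - 1 + (initial_outlay - cumulative) / inflow
        cumulative += inflow
    return None  # the outlay is never recovered

print(payback_period(1000, [300, 400, 500]))  # 2.6 years
```

A shorter payback is preferred, but the measure says nothing about cash flows arriving after the recovery point, which is one reason its use is confined to the screening situations listed above.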

When to use Project evaluation methods

Project evaluation methods are used to determine the potential success of any project. These techniques are used before a project is launched to assess the expected costs and benefits, and to determine whether the project is worth pursuing. They are also used after a project is completed to measure its actual performance and determine if the project achieved its desired outcomes. Project evaluation methods provide decision-makers with the information they need to make informed decisions about launching and managing projects.

Advantages of Project evaluation methods

Project evaluation methods can provide many benefits for decision-makers.

  • Improved decision-making: Evaluating a project's expected costs and benefits can help decision-makers make more informed decisions about whether to pursue a project or not.
  • Reduced risk: By carefully assessing the potential risks and rewards associated with a project, decision-makers can reduce the chances of failure and increase the potential for success.
  • Increased confidence: Knowing that a project has been thoroughly evaluated can help decision-makers feel more confident in their decision and be more successful in executing the project.

Limitations of Project evaluation methods

Project evaluation methods have certain limitations that should be considered when assessing a project. First, the estimated costs and benefits of a project can be highly uncertain and difficult to accurately predict. This means that the results of an evaluation may not be reliable. Second, the methods are generally designed to evaluate projects in terms of financial outcomes, meaning that they do not take into account non-financial factors such as customer satisfaction or employee morale. Finally, the results of a project evaluation may be subject to bias, depending on who is performing the evaluation and how they interpret the data.

Other approaches related to Project evaluation methods

There are other approaches related to project evaluation methods, such as risk analysis, stakeholder analysis, cost-effectiveness analysis, and environmental impact assessment.

  • Risk Analysis: This method is used to identify potential risks associated with a project, such as cost overruns, schedule delays, and health and safety issues. It is used to assess the potential for negative outcomes and to come up with strategies for mitigating them.
  • Stakeholder Analysis: This method evaluates the interests of the various stakeholders associated with a project, such as the sponsor, project team, and customers. It is used to ensure that all stakeholders are taken into consideration when making decisions about the project.
  • Cost-Effectiveness Analysis: This method compares the costs and effectiveness of different approaches to a project, such as different technologies or processes. It is used to evaluate the cost-effectiveness of various approaches and to determine which one will be the most cost-efficient.
  • Environmental Impact Assessment: This method evaluates the potential environmental impacts of a project, such as air and water pollution, land use, and wildlife impacts. It is used to ensure that any negative impacts are identified and mitigated before the project is launched.

These additional approaches are used to ensure that all aspects of a project are taken into consideration, and that any potential risks or negative impacts are identified and addressed before the project is launched. Together, these methods provide a comprehensive approach to project evaluation and help decision-makers determine the potential success of a project.
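As a toy illustration of cost-effectiveness analysis, the comparison below ranks alternatives by their cost per unit of effect; all names and figures are hypothetical, invented for this example:

```python
# Hypothetical alternatives: (name, total cost, units of effect achieved)
alternatives = [
    ("Technology A", 50_000, 400),  # e.g. 400 pupils reaching target level
    ("Technology B", 80_000, 700),
    ("Technology C", 30_000, 200),
]

# Cost-effectiveness ratio: money spent per unit of effect (lower is better)
ranked = sorted(alternatives, key=lambda a: a[1] / a[2])
for name, cost, effect in ranked:
    print(f"{name}: {cost / effect:.2f} per unit of effect")
```

On this ratio, Technology B comes out as the most cost-efficient despite having the highest total cost, which is exactly the kind of trade-off the method is meant to expose.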



  • This page was last edited on 18 November 2023, at 02:54.
  • Content is available under CC BY-SA Attribution-ShareAlike 4.0 International unless otherwise noted.
  • About CEOpedia | Management online

