SACE Research Project | 2014 | Support materials | Assessment type exemplars | RPB Evaluation (Evaluation – Research Project B)
The following exemplars include graded student work. Documents will continue to be uploaded as they become available.
- RPB A+ Evaluation: Empathy [DOC 79KB]
- RPB A Evaluation: Fairytales [PDF 1MB]
- RPB B Evaluation: Hairy-nosed Wombat [DOC 57KB]
- RPB C+ Evaluation: A car and its owner [PDF 1.3MB]
- RPB C Evaluation: Defending a property from bushfire [DOC 78KB]
- RPB D+ Evaluation: Roller coaster design [DOC 49KB]
- RPB D Evaluation: Fruitarian diet [DOC 44KB]
Evaluating research projects
These Guidelines are neither a handbook on evaluation nor a manual on how to evaluate, but a guide for the development, adaptation, or assessment of evaluation methods. They are a reference and a compendium of good practice for building a specific guide for evaluating a given situation.
This page's content is aimed at the authors of a specific guide: in the present case, a guide for evaluating research projects. The specific guide's authors will pick from this material whatever is relevant to their needs and situation.
Objectives of evaluating research projects
The two most common situations faced by evaluators of development research projects are ex ante evaluations and ex post evaluations. In a few cases an intermediate evaluation may be performed, sometimes called a "mid-term" evaluation. The formulation of the objectives in the specific guide will obviously depend on the situation and on the needs of the stakeholders, but also on the researcher's environment and on ethical considerations.
Ex ante evaluation refers to the evaluation of a project proposal, for example for deciding whether or not to finance it, or to provide scientific support.
Ex post evaluation is conducted after a research project is completed, again for a variety of reasons, such as deciding whether to publish or apply the results, to grant an award or a fellowship to the author(s), or to build new research along a similar line.
An intermediate evaluation is aimed basically at helping to decide whether to go on, or to reorient the course of the research.
Such objectives are examined in detail below, in the pages on evaluation of research projects ex ante and on evaluation of projects ex post. A final section deals briefly with intermediate evaluation.
Importance of project evaluation
Evaluating research projects is a fundamental dimension in the evaluation of development research, for basically two reasons:
- many of our evaluation concepts and practices are derived from our experience with research projects,
- evaluation of projects is essential for achieving our long-term goal of maintaining and improving the quality of development research, and particularly of strengthening research capacity.
Dimensions of the evaluation of development research projects
Scientific quality is a basic requirement for all scientific research projects, and publications play a decisive role here. This is obviously the case for ex post evaluation, but publications are also necessary in ex ante situations, where the evaluator needs to trust the proposal's authors to a certain extent and will largely take their past publications into account.
For more details see the page on evaluation of scientific publications and the annexes on scientific quality and on valorisation.
While scientific quality is a necessary dimension in each evaluation of a development research project, it is not sufficient. An equally indispensable dimension is relevance to development.
Other dimensions will be justified by the context, the evaluation's objectives, the evaluation sponsor's requirements, etc.
Research Project Evaluation—Learnings from the PATHWAYS Project Experience
Aleksander Galas
1 Epidemiology and Preventive Medicine, Jagiellonian University Medical College, 31-034 Krakow, Poland; [email protected] (A.G.); [email protected] (A.P.)
Aleksandra Pilat
Matilde Leonardi
2 Fondazione IRCCS, Neurological Institute Carlo Besta, 20-133 Milano, Italy; [email protected]
Beata Tobiasz-Adamczyk
Background: Every research project faces challenges regarding how to achieve its goals in a timely and effective manner. The purpose of this paper is to present a project evaluation methodology gathered during the implementation of the Participation to Healthy Workplaces and Inclusive Strategies in the Work Sector (the EU PATHWAYS Project). The PATHWAYS project involved multiple countries and multi-cultural aspects of re/integrating chronically ill patients into the labor markets of different countries. This paper describes key project evaluation issues, including: (1) purposes, (2) advisability, (3) tools, (4) implementation, and (5) possible benefits, and presents the advantages of continuous monitoring. Methods: A project evaluation tool was developed to assess structure and resources; process, management, and communication; and achievements and outcomes. The project used a mixed evaluation approach that included Strengths (S), Weaknesses (W), Opportunities (O), and Threats (T) (SWOT) analysis. Results: A methodology for the evaluation of longitudinal EU projects is described. The evaluation process made it possible to highlight strengths and weaknesses; it showed good coordination and communication between project partners as well as some key issues, such as the need for a shared glossary covering areas investigated by the project, problematic issues related to the involvement of stakeholders from outside the project, and issues with timing. Numerical SWOT analysis showed improvement in project performance over time. The proportion of project partners participating in the evaluation varied from 100% to 83.3%. Conclusions: There is a need for the implementation of a structured evaluation process in multidisciplinary projects involving different stakeholders in diverse socio-environmental and political conditions. Based on the PATHWAYS experience, a clear monitoring methodology is suggested as essential in every multidisciplinary research project.
1. Introduction
Over the last few decades, a strong discussion on the role of the evaluation process in research has developed, especially in interdisciplinary or multidimensional research [1,2,3,4,5]. Despite existing concepts and definitions, the importance of the role of evaluation is often underestimated. These dismissive attitudes towards the evaluation process, along with a lack of real knowledge in this area, demonstrate why we need research evaluation and how research evaluation can improve the quality of research. Having firm definitions of ‘evaluation’ can link the purpose of research, general questions associated with methodological issues, expected results, and the implementation of results to specific strategies or practices.
Attention paid to project evaluation reveals two concurrent lines of thought in this area. The first is strongly associated with total quality management practices and operational performance; the second focuses on the evaluation processes needed for public health research and interventions [6,7].
The design and implementation of process evaluations in fields other than public health have been described as multidimensional. According to Baranowski and Stables, process evaluation consists of eleven components: recruitment (of potential participants for corresponding parts of the program); maintenance (keeping participants involved in the program and data collection); context (aspects of the environment of the intervention); resources (the materials necessary to attain project goals); implementation (the extent to which the program is implemented as designed); reach (the extent to which contacts are received by the targeted group); barriers (problems encountered in reaching participants); exposure (the extent to which participants view or read the material); initial use (the extent to which a participant conducts the activities specified in the materials); continued use (the extent to which a participant continues to do any of the activities); and contamination (the extent to which participants receive interventions from outside the program and the extent to which the control group receives the treatment) [8].
There are two main factors shaping the evaluation process: (1) what is evaluated (whether the evaluation process revolves around the project itself or around outcomes external to the project), and (2) who the evaluator is (whether the evaluator is internal or external to the project team and program). Although there are several gaps in current knowledge about the evaluation of external outcomes, the use of a formal evaluation process for a research project itself is very rare.
To define a clear evaluation and monitoring methodology, we performed several steps. The purpose of this article is to present experiences from the project evaluation process implemented in the Participation to Healthy Workplaces and Inclusive Strategies in the Work Sector (the EU PATHWAYS) project. The manuscript describes key project evaluation issues such as: (1) purposes, (2) advisability, (3) tools, (4) implementation, and (5) possible benefits. The PATHWAYS project can be understood as a specific case study, presented through a multidimensional approach; based on the experience associated with general evaluation, we can develop patterns of good practice which can be used in other projects.
1.1. Theoretical Framework
The first step was a clear definition of what an evaluation strategy or methodology is. The term evaluation is defined by the Cambridge Dictionary as the process of judging something’s quality, importance, or value, or a report that includes this information [9] and, in a similar way, by the Oxford Dictionary as the making of a judgment about the amount, number, or value of something [10]. In practice, evaluation is frequently understood as associated with the end of an activity rather than with the process. Stufflebeam, in his monograph, defines evaluation as a study designed and conducted to assist some audience to assess an object’s merit and worth. Considering this definition, there are four categories of evaluation approaches: (1) pseudo-evaluation; (2) questions- and/or methods-oriented evaluation; (3) improvement/accountability evaluation; (4) social agenda/advocacy evaluation [11].
In brief, considering Stufflebeam’s classification, pseudo-evaluations promote invalid or incomplete findings. This happens when findings are selectively released or falsified. There are two pseudo-evaluation types proposed by Stufflebeam: (1) public relations-inspired studies (studies which do not seek truth but gather information to solicit positive impressions of a program), and (2) politically controlled studies (studies which seek the truth but inappropriately control the release of findings to right-to-know audiences).
The questions- and/or methods-oriented approach uses rather narrow questions, which are oriented towards the operational objectives of the project. Question-oriented evaluations use specific questions, which are of interest under accountability requirements or reflect an expert’s opinion of what is important, while method-oriented evaluations favor the technical qualities of a program/process. The general concept of these two is that it is better to ask a few pointed questions well to get information on program merit and worth [11]. In this group, one may find the following evaluation types: (a) objectives-based studies: typically focus on whether the program objectives have been achieved, through an internal perspective (by project executors); (b) accountability, particularly payment-by-results studies: stress the importance of obtaining an external, impartial perspective; (c) objective testing programs: use standardized, multiple-choice, norm-referenced tests; (d) outcome evaluation as value-added assessment: a recurrent evaluation linked with hierarchical gain score analysis; (e) performance testing: incorporates the assessment of performance (by written or spoken answers, or psychomotor presentations) and skills; (f) experimental studies: program evaluators perform a controlled experiment and contrast the outcomes observed; (g) management information systems: provide the information managers need to conduct their programs; (h) benefit-cost analysis approach: mainly sets of quantitative procedures to assess the full cost of a program and its returns; (i) clarification hearing: a trial-like evaluation in which role-playing evaluators competitively implement both a prosecution of a program (arguing that it failed) and a defense of the program (arguing that it succeeded); a judge then hears arguments within the framework of a jury trial and controls the proceedings according to advance agreements on rules of evidence and trial procedures; (j) case study evaluation: a focused, in-depth description, analysis, and synthesis of a particular program; (k) criticism and connoisseurship: experts in a given area carry out an in-depth analysis and evaluation that could not be done in any other way; (l) program theory-based evaluation: begins with a validated theory of how programs of a certain type within similar settings operate to produce outcomes (e.g., the Health Belief Model, the PRECEDE-PROCEED model proposed by L. W. Green (Predisposing, Reinforcing and Enabling Constructs in Educational Diagnosis and Evaluation; Policy, Regulatory, and Organizational Constructs in Educational and Environmental Development), or the Stages of Change theory by Prochaska); (m) mixed-method studies: include different qualitative and quantitative methods.
The third group of methods considered in evaluation theory comprises improvement/accountability-oriented evaluation approaches. Among these are the following: (a) decision/accountability-oriented studies: emphasize that evaluation should be used proactively to help improve a program and retroactively to assess its merit and worth; (b) consumer-oriented studies: the evaluator acts as a surrogate consumer who draws direct conclusions about the evaluated program; (c) accreditation/certification approach: an accreditation study to verify whether certification requirements have been/are fulfilled.
Finally, a social agenda/advocacy evaluation approach focuses on the assessment of difference, which is/was intended to be the effect of the program evaluation. The evaluation process in this type of approach works in a loop, starting with an independent evaluator who provides counsel and advice towards understanding, judging and improving programs as evaluations to serve the client’s needs. In this group, there are: (a) client-centered studies (or responsive evaluation): evaluators work with, and for, the support of diverse client groups; (b) constructivist evaluation: evaluators are authorized and expected to maneuver the evaluation to emancipate and empower involved and affected disenfranchised people; (c) deliberative democratic evaluation: evaluators work within an explicit democratic framework and uphold democratic principles in reaching defensible conclusions; (d) utilization-focused evaluation: explicitly geared to ensure that program evaluations make an impact.
1.2. Implementation of the Evaluation Process in the EU PATHWAYS Project
The idea of involving the evaluation process as an integrated goal of the PATHWAYS project was determined by several factors relating to the main goal of the project, defined as a targeted intervention into existing attitudes towards occupational mobility and the work reintegration of people of working age suffering from specific chronic conditions into the labor market in 12 European countries. Participating countries had different cultural and social backgrounds and different pervasive attitudes towards people suffering from chronic conditions.
The components of evaluation processes previously discussed proved helpful when planning the PATHWAYS evaluation, especially in relation to different aspects of environmental contexts. The PATHWAYS project focused on chronic conditions including: mental health issues, neurological diseases, metabolic disorders, musculoskeletal disorders, respiratory diseases, cardiovascular diseases, and persons with cancer. Within this group, the project found a hierarchy of patients and social and medical statuses defined by the nature of their health conditions.
According to the project’s monitoring and evaluation plan, the evaluation process followed specific challenges defined by the project’s broad and specific goals and monitored the progress of implementing key components by assessing the effectiveness of consecutive steps and identifying conditions supporting contextual effectiveness. Another significant aim of the evaluation component of the PATHWAYS project was to recognize the value and effectiveness of using a purposely developed methodology consisting of a wide set of quantitative and qualitative methods. The triangulation of methods was very useful and provided the opportunity to develop a multidimensional approach to the project [12].
From the theoretical framework, special attention was paid to the explanation of medical, cultural, social and institutional barriers influencing the chance of employment of chronically ill persons in relation to the characteristics of the participating countries.
Levels of satisfaction with project participation, as well as with expected or achieved results and coping with challenges on local–community levels and macro-social levels, were another source of evaluation.
In the PATHWAYS project, the evaluation was implemented for an unusual purpose. This quasi-experimental design was developed to assess different aspects of the multidimensional project, which used a variety of methods (systematic literature review, content analysis of existing documents, acts, data, and reports, surveys at different country levels, in-depth interviews) in the different phases of its 3 years. The evaluation monitored each stage of the project and focused on process implementation, with the goal of improving every step of the project. The evaluation process made it possible to perform critical assessments and deep analysis of the benefits and shortcomings of each specific phase of the project.
The purpose of the evaluation was to monitor the main steps of the project, including the expectations associated with the multidimensional, methodological approach used by PATHWAYS partners, and to improve communication between partners from different professional and methodological backgrounds involved in all phases of the project, so as to avoid errors in understanding the specific steps as well as the main goals.
2. Materials and Methods
The paper describes the methodology and results gathered during the implementation of Work Package 3, ‘Evaluation’, of the Participation to Healthy Workplaces and Inclusive Strategies in the Work Sector (PATHWAYS) project. The work package was intended to keep internal control over the course of the project so that tasks, milestones, and purposes were fulfilled on time by all project partners.
2.1. Participants
The project consortium involved 12 partners from 10 different European countries. These included academics (representing cross-disciplinary research covering socio-environmental determinants of health, including clinicians), institutions actively working for the integration of people with chronic and mental health problems and disability, educational bodies (working in the area of disability and focusing on inclusive education), national health institutes (for the rehabilitation of patients with functional and workplace impairments), an institution for inter-professional rehabilitation at a country level (coordinating medical, social, educational, pre-vocational and vocational rehabilitation), and a company providing patient-centered services (in neurorehabilitation). All the partners had vast knowledge and high-level expertise in the area of interest, and all endorsed the World Health Organization’s (WHO) International Classification of Functioning, Disability and Health (ICF) and the biopsychosocial model of health and functioning. The consortium was created based on the following criteria:
- vision, mission, and activities in the area of project purposes,
- high level of experience in the area (supported by publications) and in doing research (being involved in international projects, collaboration with the coordinator and/or other partners in the past),
- being able to get broad geographical, cultural and socio-political representation from EU countries,
- representation of different stakeholder types in the area.
2.2. Project Evaluation Tool
The tool development process involved the following steps:
- (1) Review definitions of ‘evaluation’ and adopt the one which fits best with the reality of the public health research area;
- (2) Review evaluation approaches and decide on the content which should be applicable in the public health research;
- (3) Create items to be used in the evaluation tool;
- (4) Decide on implementation timing.
According to the PATHWAYS project protocol, an evaluation tool for the internal project evaluation was required to collect information about: (1) structure and resources; (2) process, management and communication; (3) achievements and/or outcomes; and (4) SWOT analysis. A mixed methods approach was chosen. The specific evaluation process purposes and approaches are presented in Table 1.
Table 1. Evaluation purposes and approaches adopted in the PATHWAYS project.
* Open-ended questions are not counted here.
The tool was prepared in several steps. In the section assessing structure and resources, there were questions about the number of partners, professional competences, assigned roles, human, financial, and time resources, defined activities and tasks, and the communication plan. The second section, on process, management, and communication, collected information about the coordination process, consensus level, quality of communication among coordinators, work package leaders, and partners, whether the project was carried out according to the plan, the involvement of target groups, the usefulness of developed materials, and any difficulties in the project’s realization. Finally, the section on achievements and outcomes gathered information about project-specific activities such as public-awareness raising, stakeholder participation and involvement, whether planned outcomes (e.g., milestones) were achieved, dissemination activities, and opinions on whether project outcomes met the needs of the target groups. Additionally, it was decided to implement SWOT analysis as a part of the evaluation process. SWOT analysis derives its name from the evaluation of Strengths (S), Weaknesses (W), Opportunities (O), and Threats (T) faced by a company, industry or, in this case, project consortium. SWOT analysis comes from the business world and was developed in the 1960s at Harvard Business School as a tool for improving management strategies among companies, institutions, or organizations [13,14]. In recent years, however, SWOT analysis has been adapted in the context of research to improve programs or projects.
For a better understanding of SWOT analysis, it is important to highlight that Strengths and Weaknesses are internal features and are considered controllable. Strengths refer to assets inside the project, such as the capabilities and competences of partners, whereas weaknesses refer to aspects that need improvement, such as resources. Conversely, Opportunities and Threats are considered outside factors and uncontrollable [15]. Opportunities are maximized to fit the organization’s values and resources, while threats are the factors that the organization is not well equipped to deal with [9].
The PATHWAYS project members participated in SWOT analyses every three months. They answered four open questions about the strengths, weaknesses, opportunities, and threats identified in the evaluated period (the last three months). They were then asked to rate those items on a 10-point scale. The sample included results from nine evaluated periods from partners from ten different countries.
The tool for the internal evaluation of the PATHWAYS project is presented in Appendix A.
2.3. Tool Implementation and Data Collection
The PATHWAYS on-going evaluation took place at three-month intervals. It consisted of on-line surveys, and every partner assigned a representative who was expected to have good knowledge of the project’s progress. Structure and resources were assessed only twice, at the beginning (3rd month) and at the end (36th month) of the project. The process, management, and communication questions, as well as the SWOT analysis questions, were asked every three months. The achievements and outcomes questions started after the first year of implementation (i.e., from the 15th month), and some items in this section (results achieved, whether project outcomes met the needs of the target groups, and regular publications) were only administered at the end of the project (36th month).
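As a rough sketch of this cadence, the snippet below encodes the administration schedule just described; the module names and the modules_due helper are illustrative assumptions, not part of the PATHWAYS tooling.

```python
# A minimal sketch of the evaluation cadence described above; module names and
# this helper are illustrative assumptions, not part of the PATHWAYS tooling.

STRUCTURE = "structure_and_resources"           # months 3 and 36 only
PROCESS   = "process_management_communication"  # every 3 months
SWOT      = "swot_analysis"                     # every 3 months
OUTCOMES  = "achievements_and_outcomes"         # from month 15 onward
FINAL     = "final_outcome_items"               # month 36 only (results, needs met, publications)

def modules_due(month: int) -> list[str]:
    """Return the survey modules administered in a given project month (1-36)."""
    if month % 3 != 0:
        return []  # surveys ran at three-month intervals only
    due = [PROCESS, SWOT]
    if month in (3, 36):
        due.append(STRUCTURE)
    if month >= 15:
        due.append(OUTCOMES)
    if month == 36:
        due.append(FINAL)
    return due

# Example: the month-15 wave adds the achievements/outcomes questions.
print(modules_due(15))
# ['process_management_communication', 'swot_analysis', 'achievements_and_outcomes']
```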
2.4. Evaluation Team
The evaluation team was created from professionals with different backgrounds and extensive experience in research methodology, sociology, social research methods and public health.
The project started in 2015 and was carried out for 36 months. There were 12 partners in the PATHWAYS project, representing Austria, Belgium, Czech Republic, Germany, Greece, Italy, Norway, Poland, Slovenia and Spain and a European Organization. The on-line questionnaire was sent to all partners one week after the specified period ended and project partners had at least 2 weeks to fill in/answer the survey. Eleven rounds of the survey were performed.
The participation rate in the consecutive evaluation surveys was 11 (91.7%), 12 (100%), 12 (100%), 11 (91.7%), 10 (83.3%), 11 (91.7%), 11 (91.7%), 10 (83.3%), and 11 (91.7%) until the project’s end. Overall, the surveys rarely covered the whole group, which may have resulted from the lack of coercive mechanisms at the project level to compel answers to the evaluation questions.
3. Results

3.1. Evaluation Results Considering Structure and Resources (3rd Month Only)
A total of 11 out of 12 project partners participated in the first evaluation survey. The structure and resources of the project were not assessed by the project coordinator, so the results represent the opinions of the other 10 participating partners. The majority of respondents rated the project consortium as having at least adequate professional competencies. In total, eight to nine project partners found human, financial, and time resources ‘just right’ and the communication plan ‘clear’. More concerns were observed regarding the clarity of tasks, what was expected from each partner, and how specific project activities should be or were assigned.
3.2. Evaluation Results Considering Process, Management and Communication
The opinions about project coordination and communication processes (with the coordinator, between WP leaders, and between individual partners/researchers) were assessed as ‘good’ and ‘very good’ throughout the whole period. There were some issues, however, when it came to the realization of specific goals, deliverables, or milestones of the project.
Given the broad scope of the project and the participating partner countries, we created a glossary to unify the common terms used in the project. It was a challenge, as during the project implementation there were several discussions and inconsistencies in the concepts provided (Figure 1).

Figure 1. Partners’ opinions about the consensus around terms (shared glossary) in the project consortium across evaluation waves (W1: after the first 3-month realization period, and at 3-month intervals thereafter).
Other issues which appeared during project implementation were the recruitment of, involvement with, and cooperation with stakeholders. There was a range of groups to be contacted and investigated during the project, including individual patients suffering from chronic conditions, patients’ advocacy groups and national governmental organizations, policy makers, employers, and international organizations. During the project, the interest and involvement of the aforementioned groups proved quite low and difficult to achieve, which led to some delays in project implementation (Figure 2). This was the main cause of the smaller percentages of “what was expected to be done in designated periods of project realization time”. The issue was monitored and eliminated by intensifying activities in this area (Figure 3).

Figure 2. Partners’ reports on whether the project had been carried out according to the plan (a) and the experience of any problems in the process of project realization (b) (W1: after the first 3-month realization period, and at 3-month intervals thereafter).

Figure 3. Partners’ approximate estimation (in percent) of project plan implementation (what has been done according to the plan) (a) and the involvement of target groups (b) (W1: after the first 3-month realization period, and at 3-month intervals thereafter).
3.3. Evaluation Results Considering Achievements and Outcomes
The evaluation process was prepared to monitor project milestones and deliverables. One of the PATHWAYS project goals was to raise public awareness surrounding the reintegration of chronically ill people into the labor market. This was assessed subjectively by cooperating partners, and only half (six) felt they achieved complete success on that measure. The evaluation process monitored planned outcomes according to: (1) the determination of strategies for awareness-raising activities, (2) the assessment of employment-related needs, and (3) the development of guidelines (which were planned by the project). The majority of partners completely fulfilled this task. Furthermore, the dissemination process was also carried out according to the plan.
3.4. Evaluation Results from SWOT
3.4.1. Strengths
Amongst the key issues identified across all nine evaluated periods (Figure 4), the “strong consortium” was highlighted as the most important strength of the PATHWAYS project. The most common arguments for this assessment were the coordinator’s experience in international projects, involvement of interdisciplinary experts who could guarantee a holistic approach to the subject, and a highly motivated team. This was followed by the uniqueness of the topic. Project implementers pointed to the relevance of the analyzed issues, which are consistent with social needs. They also highlighted that this topic concerned an unexplored area in employment policy. The interdisciplinary and international approach was also emphasized. According to the project implementers, the international approach allowed mapping of vocational and prevocational processes among patients with chronic conditions and disability throughout Europe. The interdisciplinary approach, on the other hand, enabled researchers to create a holistic framework that stimulates innovation by thinking across boundaries of particular disciplines, especially as the PATHWAYS project brings together health scientists from diverse fields (physicians, psychologists, medical sociologists, etc.) from ten European countries. This interdisciplinary approach is also supported by the methodology, which is based on a mixed-method approach (qualitative and quantitative data). The involvement of an advocacy group was another strength identified by the project implementers. It was stressed that the involvement of different types of stakeholders increased validity and social triangulation. It was also assumed that it would allow for the integration of relevant stakeholders. The last strength, the usefulness of results, was identified only in the last two evaluation waves, when the first results had been measured.

Figure 4. SWOT analysis: a summary of the main issues reported by PATHWAYS project partners.
3.4.2. Weaknesses
The survey respondents agreed that the main weaknesses of the project were time and human resources. The subject of the PATHWAYS project turned out to be very broad, and therefore the implementers pointed to insufficient human resources and inadequate time for the implementation of individual tasks, as well as the project overall. This was related to the broad categories of chronic diseases chosen for analysis in the project. On one hand, the implementers complained about the insufficient number of chronic diseases taken into account in the project. On the other hand, they admitted that it was not possible to cover all chronic diseases in detail. The scope of the project was reported as another weakness. In the successive waves of evaluation, the implementers more often pointed out that it was hard to cover all relevant topics.
Nevertheless, some of the major weaknesses reported during the project evaluation were methodological problems. Respondents pointed to problems with the implementation of tasks on a regular basis. For example, survey respondents highlighted the need for more open questions in the survey, noted that the questionnaire was too long or too complicated, that the tools were not adjusted for relevance in the national context, etc. Another issue was that the working language was English, but all tools and survey questionnaires needed to be translated into different languages, and this was not always considered by the Commission in terms of timing and resources. This lesson could prove useful for further projects, as well as for future collaborations.
Difficulties in involving stakeholders were reported, especially during tasks which required their active commitment, like participation in in-depth interviews or online questionnaires. Interestingly, the international approach was considered both a strength and a weakness of the project. The implementers highlighted the complexity of making comparisons between health care and/or social care in different countries. The budget was also identified as a weakness by the project implementers. More funds obtained from the partners could have helped PATHWAYS enhance dissemination and stakeholders’ participation.
3.4.3. Opportunities
A list of seven issues within the opportunities category reflects the positive outlook of survey respondents from the beginning of the project to its final stage. Social utility was ranked as the top opportunity. The implementers emphasized that the project could fill a gap between the existing solutions and the real needs of people with chronic diseases and mental disorders. The implementers also highlighted the role of future recommendations, which would consist of proposed solutions for professionals, employees, employers, and politicians. These advantages are strongly associated with increasing awareness of employment situations of people with chronic diseases in Europe and the relevance of the problem. Alignment with policies, strategies, and stakeholders’ interests were also identified as opportunities. The topic is actively discussed on the European and national level, and labor market and employment issues are increasingly emphasized in the public discourse. What is more relevant is that the European Commission considers the issue crucial, and the results of the project are in line with its requests for the future. The implementers also observed increasing interest from the stakeholders, which is very important for the future of the project. Without doubt, the social network of project implementers provides a huge opportunity for the sustainability of results and the implementation of recommendations.
3.4.4. Threats
Insufficient response from stakeholders was the top perceived threat selected by survey respondents. The implementers indicated that insufficient involvement of stakeholders resulted in low response rates in the research phase, which posed a huge threat for the project. The interdisciplinary nature of the PATHWAYS project was highlighted as a potential threat due to differences in technical terminology and different systems of regulating the employment of persons with reduced work capacity in each country, as well as many differences in the legislation process. Insufficient funding and lack of existing data were identified as the last two threats.
One novel aspect of the evaluation process in the PATHWAYS project was a numerical SWOT analysis. Participants were asked to score strengths, weaknesses, opportunities, and threats from 0 (meaning no strengths, weaknesses, etc.) to 10 (meaning a great many). This concept enabled us to obtain a subjective score of how partners perceived the PATHWAYS project itself and the performance of the project, as well as how that perception changed over time. Data showed an increase in both strengths and opportunities and a decrease in weaknesses and threats over the course of project implementation (Figure 5).

Figure 5. Numerical SWOT, combined, over a period of 36 months of project realization (W1: after the first 3-month realization period, and at 3-month intervals thereafter).
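To make the aggregation behind Figure 5 concrete, here is a minimal sketch of how per-partner 0-10 SWOT scores could be summarized per wave; the sample data and function name are invented, and only the 0-10 scale and the four categories come from the paper.

```python
# Illustrative aggregation of numerical SWOT scores (0-10) across evaluation waves.
# The partner scores below are made up; only the scale and categories come from the paper.
from statistics import mean

# wave -> category -> scores given by individual partners (hypothetical)
waves = [
    {"S": [6, 7, 5], "W": [5, 4, 6], "O": [6, 5, 6], "T": [4, 5, 5]},
    {"S": [7, 8, 7], "W": [4, 3, 4], "O": [7, 7, 6], "T": [3, 4, 3]},
    {"S": [8, 8, 9], "W": [3, 3, 2], "O": [8, 7, 8], "T": [2, 3, 2]},
]

def wave_means(waves):
    """Mean score per SWOT category for each wave."""
    return [{cat: round(mean(scores), 2) for cat, scores in wave.items()} for wave in waves]

for i, summary in enumerate(wave_means(waves), start=1):
    print(f"W{i}: {summary}")
# A favorable trend, as reported for PATHWAYS, shows S and O rising while W and T fall.
```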
4. Discussion
The need for project evaluation was born in industry, which faced challenges regarding how to achieve market goals in a more efficient way. Nowadays, every process, including research project implementation, faces questions regarding its effectiveness and efficiency.
The challenge of a research project evaluation is that the majority of research projects are described as unique, although we believe several projects face similar issues and challenges as those observed in the PATHWAYS project.
The main objectives of the PATHWAYS Project were (a) to identify integration and re-integration strategies that are available in Europe and beyond for individuals with chronic diseases and mental disorders experiencing work-related problems (such as unemployment, absenteeism, reduced productivity, stigmatization), (b) to determine their effectiveness, (c) to assess the specific employment-related needs of those people, and (d) to develop guidelines supporting the implementation of effective strategies of professional integration and reintegration. The broad area of investigation, partial knowledge in the field, diversity of determinants across European Union countries, and involvement with stakeholders representing different groups caused several challenges in the project, including:
- problem: uncovered, challenging, demanding (how to encourage stakeholders to participate and share experiences),
- diversity: different European regions; different political, social, and cultural determinants; different public health and welfare systems; differences in law regulations; different employment policies and issues in the system,
- multidimensionality of research: quantitative and qualitative studies, including focus groups, opinions from professionals, and small surveys in target groups (workers with chronic conditions).
The challenges to the project consequently led to several key issues which should be taken into account during project realization:
- partners: each with their own expertise and interests, different expectations, and different views on what is more important to focus on and highlight;
- issues associated with unification: between different countries with different systems (law, work-related and welfare definitions, disability classification, others);
- coordination: as the multidimensionality of the project may have caused some research activities by partners to move in a wrong direction (data or knowledge not needed for the project purposes), and a lack of project vision in (some) partners might postpone activities through misunderstanding;
- exchange of information: multidimensionality, the fact that different tasks were accomplished by different centers, and obstacles to data collection required good communication methods and a smooth exchange of information.
Identified Issues and Implemented Solutions
There were several issues identified through the semi-internal evaluation process performed during the project. Those which might be most relevant for project realization are mentioned in Table 2.
Table 2. Issues identified by the evaluation process and solutions implemented.
The PATHWAYS project included diverse partners representing different areas of expertise and activity (considering broad aspects of chronic disease, decline in functioning, and disability, and their role in the labor market) in different countries and social security systems, which made it a challenge to develop a common language for effective communication and a better understanding of facts and circumstances in different countries. The implementation of continuous project process monitoring, and proper adjustment, enabled the team to overcome these challenges.
The evaluation tool has several benefits. First, it covers all key areas of a research project, including structure and available resources, the course of the process, the quality and timing of management and communication, and project achievements and outcomes. Continuous evaluation of all of these areas provides in-depth knowledge about project performance. Second, the implementation of the SWOT tool provided opportunities for all project partners to share good and bad experiences, and the use of a numerical version of SWOT gave a good picture of the interrelations between strengths and weaknesses and between opportunities and threats in the project and showed the changes in their intensity over time. Additionally, numerical SWOT may verify whether the perception of a project improves over time (as was observed in the PATHWAYS project), showing an increase in strengths and opportunities and a decrease in weaknesses and threats. Third, the intervals at which partners were ‘screened’ by the evaluation questionnaire seem appropriate, as the process was not very demanding but was frequent enough to diagnose issues in the project process on time.
The experiences with the evaluation also revealed some limitations. There were no coercive mechanisms for participation in the evaluation questionnaires, which may have caused a less than 100% response rate in some screening surveys. Practically, that was not a problem in the PATHWAYS project. Theoretically, however, this might leave problems unrevealed, as partners experiencing troubles might not report them. Another point is asking the project coordinator about the quality of the consortium, which has little value (the consortium is created by the coordinator in the best achievable way, and it is hard to expect other comments, especially at the beginning of the project). Regarding the tool itself, the question Could you give us an approximate estimation (in percent) of the project plan realization (what has been done according to the plan)? was expected to collect information about what had been done out of what should have been done during each evaluation period, meaning that 100% was what should have been done in a 3-month period of our project. This question, however, was slightly confusing at the beginning, as it was interpreted as the percentage of all tasks and activities planned for the whole duration of the project. Additionally, this question only works provided that precise, clear plans on the type and timing of tasks were allocated to the project partners. Lastly, there were some questions with very low variability in answer types across evaluation surveys (mainly about coordination and communication). Our opinion is that if the project runs smoothly, one may think such questions useless, but in more complicated projects, these questions may reveal potential causes of trouble.
5. Conclusions
The PATHWAYS project experience shows a need for the implementation of structured evaluation processes in multidisciplinary projects involving different stakeholders in diverse socio-environmental and political conditions. Based on the PATHWAYS experience, a clear monitoring methodology is essential in every project, and we suggest the following steps when doing multidisciplinary research:
- Define area/s of interest (decision maker level/s; providers; beneficiaries: direct, indirect),
- Identify 2–3 possible partners for each area (chain sampling makes this easier and gives more knowledge about candidates; check for publications),
- Prepare a research plan (propose, ask for supportive information, clarify, negotiate),
- Create cross-partner groups of experts,
- Prepare a communication strategy (communication channels, responsible individuals, timing),
- Prepare a glossary covering all the important issues covered by the research project,
- Monitor the project process and timing, identify concerns, troubles, causes of delays,
- Prepare for the next steps in advance, inform project partners about the upcoming activities,
- Summarize, show good practices, successful strategies (during project realization, to achieve better project performance).
Acknowledgments
The current study was part of the PATHWAYS project, which has received funding from the European Union’s Health Program (2014–2020), Grant agreement no. 663474.
Appendix A. The evaluation questionnaire developed for the PATHWAYS Project.
SWOT analysis:
What are strengths and weaknesses of the project? (list, please)
What are threats and opportunities? (list, please)
Visual SWOT:
Please, rate the project on the following continua:
How would you rate:
(no strengths) 0 1 2 3 4 5 6 7 8 9 10 (a lot of strengths, very strong)
(no weaknesses) 0 1 2 3 4 5 6 7 8 9 10 (a lot of weaknesses, very weak)
(no risks) 0 1 2 3 4 5 6 7 8 9 10 (several risks, inability to accomplish the task(s))
(no opportunities) 0 1 2 3 4 5 6 7 8 9 10 (project has a lot of opportunities)
Author Contributions
A.G., A.P., B.T.-A. and M.L. conceived and designed the concept; A.G., A.P., B.T.-A. finalized evaluation questionnaire and participated in data collection; A.G. analyzed the data; all authors contributed to writing the manuscript. All authors agreed on the content of the manuscript.
Conflicts of Interest
The authors declare no conflict of interest.
- INTRODUCTORY DAY
- Step 1 - Brainstorming
- Step 2 - Planning your Sources
- Step 3 - Capabilities
- Step 4 - Question Refinement
- Step 5 - Research Processes Chart
- Step 6 - Ethical Research
- Step 7 - Research and Findings Charts
- Step 8 - Source Analysis
- Step 9 - Key Findings and Cross Referencing
- Step 10 - Revisit Capabilities
- Step 11 - Bibliography
- Step 12 - 10 Pages
- Step 13 - Substantiating and Writing the Outcome
- Step 14 - SACE Cover Page
- Step 17 - Planning the Evaluation
- Step 18 - Substantiating and Writing the Evaluation
- Step 19 - SACE Cover Page
- Step 20 - Electronic Submission
RPB - EVALUATION
- What is the Evaluation?
- Doing RPA? Click on the button below
- Important information about the Evaluation
- Evaluation performance standards

Summary of tasks to be completed:
- Read the information 'What is the Evaluation?'
- Read the important information on the Evaluation and external assessment
- Download and save the Evaluation performance standards

Evaluation Research: Definition, Methods and Examples

Content Index
- What is evaluation research?
- Why do evaluation research?
- Quantitative methods
- Qualitative methods
- Process evaluation research question examples
- Outcome evaluation research question examples
What is evaluation research?
Evaluation research, also known as program evaluation, refers to a research purpose rather than a specific method. Evaluation research is the systematic assessment of the worth or merit of the time, money, effort, and resources spent in order to achieve a goal.
Evaluation research is closely related to, but slightly different from, more conventional social research. It uses many of the same methods used in traditional social research, but because it takes place within an organizational context, it requires team skills, interpersonal skills, management skills, and political savvy that social research does not demand as much. Evaluation research also requires one to keep in mind the interests of the stakeholders.
Evaluation research is a type of applied research, and so it is intended to have some real-world effect. Many methods, such as surveys and experiments, can be used for evaluation research. Evaluation research is a rigorous, systematic process that involves collecting data about organizations, processes, projects, services, and/or resources, analyzing it, and reporting the results. Evaluation research enhances knowledge and decision-making, and leads to practical applications.
Why do evaluation research?
The common goal of most evaluations is to extract meaningful information from the audience and provide valuable insights to evaluators such as sponsors, donors, client groups, administrators, staff, and other relevant constituencies. Most often, feedback is perceived as valuable if it helps in decision-making. However, evaluation research does not always create an impact that can be applied elsewhere; sometimes it fails to influence short-term decisions. It is equally true that, while it might initially seem to have no influence, it can have a delayed impact when the situation is more favorable. In spite of this, there is general agreement that the major goal of evaluation research should be to improve decision-making through the systematic utilization of measurable feedback.
Below are some of the benefits of evaluation research
- Gain insights about a project or program and its operations
Evaluation research lets you understand what works and what doesn’t, where we were, where we are, and where we are headed. You can find areas of improvement and identify strengths, so it will help you figure out what you need to focus on and whether there are any threats to your business. You can also find out if there are hidden sectors in the market that are as yet untapped.
- Improve practice
It is essential to gauge your past performance and understand what went wrong in order to deliver better services to your customers. Unless it is a two-way communication, there is no way to improve on what you have to offer. Evaluation research gives an opportunity to your employees and customers to express how they feel and if there’s anything they would like to change. It also lets you modify or adopt a practice such that it increases the chances of success.
- Assess the effects
After evaluating the efforts, you can see how well you are meeting objectives and targets. Evaluations let you measure if the intended benefits are really reaching the targeted audience and if yes, then how effectively.
- Build capacity
Evaluations help you to analyze the demand pattern and predict if you will need more funds, upgrade skills and improve the efficiency of operations. It lets you find the gaps in the production to delivery chain and possible ways to fill them.
Methods of evaluation research
All market research methods involve collecting and analyzing data, making decisions about the validity of the information, and deriving relevant inferences from it. Evaluation research comprises planning, conducting, and analyzing the results, which includes the use of data collection techniques and the application of statistical methods.
Some popular evaluation methods are input measurement, output or performance measurement, impact or outcomes assessment, quality assessment, process evaluation, benchmarking, standards, cost analysis, organizational effectiveness, program evaluation methods, and LIS-centered methods. There are also a few types of evaluations that do not always result in a meaningful assessment, such as descriptive studies, formative evaluations, and implementation analysis. Evaluation research is concerned more with the information-processing and feedback functions of evaluation.
These methods can be broadly classified as quantitative and qualitative methods.
The outcome of quantitative research methods is an answer to questions such as those below; they are used to measure anything tangible.
- Who was involved?
- What were the outcomes?
- What was the price?
The best way to collect quantitative data is through surveys , questionnaires , and polls . You can also create pre-tests and post-tests, review existing documents and databases or gather clinical data.
Surveys are used to gather the opinions, feedback, or ideas of your employees or customers and consist of various question types. They can be conducted by a person face-to-face, by telephone, by mail, or online. Online surveys do not require human intervention and are far more efficient and practical. You can see the survey results on the dashboard of the research tool and dig deeper using filter criteria based on factors such as age, gender, and location. You can also build survey logic such as branching, quotas, chained surveys, and looping into the survey questions and reduce the time needed both to create and to respond to the survey. You can also generate a number of reports that involve statistical formulae and present data that can be readily absorbed in meetings.
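To illustrate the kind of filtering and summarizing a survey dashboard performs, here is a small stdlib-only sketch; the response records and field names are invented and do not reflect any particular survey tool's API.

```python
# Sketch: slicing survey responses by a demographic field, as a dashboard filter would.
# Records and field names are invented for illustration.
responses = [
    {"age_group": "18-29", "location": "AU", "satisfaction": 4},
    {"age_group": "30-44", "location": "AU", "satisfaction": 5},
    {"age_group": "18-29", "location": "US", "satisfaction": 3},
    {"age_group": "30-44", "location": "US", "satisfaction": 4},
]

def summarize(responses, by):
    """Count responses and average the satisfaction score within each value of `by`."""
    groups = {}
    for r in responses:
        groups.setdefault(r[by], []).append(r["satisfaction"])
    return {key: (len(v), sum(v) / len(v)) for key, v in groups.items()}

print(summarize(responses, "age_group"))  # {'18-29': (2, 3.5), '30-44': (2, 4.5)}
print(summarize(responses, "location"))   # {'AU': (2, 4.5), 'US': (2, 3.5)}
```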
Quantitative data measure the depth and breadth of an initiative, for instance, the number of people who participated in the non-profit event, the number of people who enrolled for a new course at the university. Quantitative data collected before and after a program can show its results and impact.
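As a worked example of that before-and-after comparison, the sketch below computes the mean change on a metric measured pre- and post-program; all numbers are hypothetical.

```python
# Hypothetical pre/post scores for the same participants (e.g., a knowledge test).
pre  = [52, 61, 47, 58, 64]
post = [66, 70, 55, 63, 71]

changes = [after - before for before, after in zip(pre, post)]
mean_change = sum(changes) / len(changes)
print(f"mean change: {mean_change:+.1f} points")  # mean change: +8.6 points
```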
The accuracy of quantitative data to be used for evaluation research depends on how well the sample represents the population, the ease of analysis, and their consistency. Quantitative methods can fail if the questions are not framed correctly and not distributed to the right audience. Also, quantitative data do not provide an understanding of the context and may not be apt for complex issues.
Qualitative research methods are used where quantitative methods cannot solve the research problem, i.e., they are used to measure intangible values. They answer questions such as
- What is the value added?
- How satisfied are you with our service?
- How likely are you to recommend us to your friends?
- What will improve your experience?
Qualitative data are collected through observation, interviews, case studies, and focus groups. The steps for creating a qualitative study involve examining, comparing and contrasting, and understanding patterns. Analysts draw conclusions after identifying themes, clustering similar data, and finally reducing them to points that make sense.
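A minimal sketch of the theme-clustering and reduction step, assuming free-text answers have already been hand-coded with theme labels; the snippets and codes below are invented.

```python
# Sketch: reducing hand-coded qualitative data to theme frequencies.
# The coded snippets below are invented for illustration.
from collections import Counter

coded_snippets = [
    ("The signup flow confused me", "usability"),
    ("Support replied within an hour", "support"),
    ("I couldn't find the export button", "usability"),
    ("The pricing page was unclear", "pricing"),
    ("Great onboarding call", "support"),
]

theme_counts = Counter(theme for _, theme in coded_snippets)
for theme, n in theme_counts.most_common():
    print(f"{theme}: {n}")  # frequent themes become the report's key points
```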
Observations may help explain behaviors, as well as the social context, that quantitative methods generally do not uncover. Behavior and body language can be observed by watching a participant or by recording audio or video. Structured interviews can be conducted with people alone or in a group under controlled conditions, or participants may be asked open-ended qualitative research questions . Qualitative research methods are also used to understand a person’s perceptions and motivations.
The strength of this method is that group discussion can generate ideas and stimulate memories, with topics cascading as the discussion unfolds. The accuracy of qualitative data depends on how well the contextual data explain complex issues and complement quantitative data. They help answer “why” and “how” after “what” has been answered. The limitations of qualitative data for evaluation research are that they are subjective, time-consuming, costly, and difficult to analyze and interpret.
Learn more: Qualitative Market Research: The Complete Guide
Survey software can be used for both evaluation research methods. You can use the sample questions above and send a survey in minutes using research software. A dedicated tool simplifies the whole process, from creating the survey and importing contacts to distributing it and generating the reports that feed the research.
Examples of evaluation research
Evaluation research questions lay the foundation of a successful evaluation. They define the topics that will be evaluated. Keeping evaluation questions ready not only saves time and money but also makes it easier to decide what data to collect, how to analyze them, and how to report them.
Evaluation research questions should be developed and agreed on in the planning stage; ready-made research templates can also be used.
Process evaluation research question examples:
- How often do you use our product in a day?
- Were approvals taken from all stakeholders?
- Can you report the issue from the system?
- Can you submit the feedback from the system?
- Was each task done as per the standard operating procedure?
- What were the barriers to the implementation of each task?
- Were any improvement areas discovered?
Outcome evaluation research question examples:
- How satisfied are you with our product?
- Did the program produce intended outcomes?
- What were the unintended outcomes?
- Has the program increased the knowledge of participants?
- Were the participants of the program employable before the course started?
- Do participants of the program have the skills to find a job after the course ended?
- Is the knowledge of participants better compared to those who did not participate in the program?
Supporting community-led action research
Evaluating Research
Introduction
We’ve all been there. We know evaluating your project is important and that you should be doing it, but sometimes it’s tempting to leave it as an afterthought.
If, on the other hand, we get some of the simple stuff right at the start of our project, it will make things a whole lot easier.
This short guide introduces what evaluation means and helps you to begin thinking about how you will evaluate your community-led action research.
What is evaluation?
When you evaluate something, you are measuring how well it has worked. This will help you to show others what is good about it, and also to learn how to do it better next time. You can even start to think about evaluation as part of an action research cycle, as shown below.
[Diagram: the action research cycle, including a reflecting/evaluating stage]
The “reflecting/evaluating” stage of the cycle is when you measure what you have done and you can then move on to thinking about conducting more research.
In reality, things are a bit more complex and it’s good to think of this cycle as an ongoing process. It’s also good to think about your evaluation right from the start, when you plan what you’re going to do.
Why evaluate community-led action research?
Community-led action research is no different to other projects your group carries out, in that it is important to be able to show your success and learn what works and what doesn’t. For instance, if you were trying out a research method you haven’t used before, such as video diaries, it will be useful to know how community researchers found this. Do they think it made doing the research more fun and engaging? Did it lead to useful findings that helped change something for the community?
How to evaluate community-led action research
You can evaluate your research in a number of ways, including using more fun and engaging methods such as research diaries .
But one of the most straightforward ways is to use a participant questionnaire, in which your group scores itself against a set of questions. The questions can be whatever you want, but they should relate to your research plan ( see our guide to planning your research ) and be things you can realistically achieve.
The Knowledge is Power evaluation exercise is an example of a simple monitoring questionnaire. You can either answer the questions as a group or as individuals. If you answer it as individuals, you could then work out the average scores to get your group response.
If you answer the questions at the start, middle and end of your research project you will be able to see progress over time.
Your progress can then be illustrated by using your results to complete the Knowledge is Power radar chart generator , which is really just a basic Excel spreadsheet. This will automatically create a chart like the one below.
[Example radar chart produced by the generator]
This ‘radar chart’ shows change over time for each statement, and also gives an overall indication of progress.
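The actual generator is the Excel spreadsheet linked above, but if you prefer code, here is a minimal sketch of an equivalent radar chart in Python with matplotlib. The statements and scores are invented; in practice each score would be the group average of individual answers, as described earlier.

```python
# A minimal radar chart, similar in spirit to the Knowledge is Power generator.
# Statements and scores are invented for illustration.
import numpy as np
import matplotlib.pyplot as plt

statements = ["Clear aims", "Community involved", "Skills gained",
              "Findings shared", "Change achieved"]
start = [3, 4, 2, 1, 1]   # average group scores at the start of the project
end   = [7, 8, 6, 5, 4]   # average group scores at the end

angles = np.linspace(0, 2 * np.pi, len(statements), endpoint=False).tolist()
angles += angles[:1]  # repeat the first angle to close the polygon

ax = plt.subplot(polar=True)
for scores, label in [(start, "Start"), (end, "End")]:
    values = scores + scores[:1]
    ax.plot(angles, values, label=label)
    ax.fill(angles, values, alpha=0.1)
ax.set_xticks(angles[:-1])
ax.set_xticklabels(statements)
ax.legend()
plt.show()
```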
Find out more
Hopefully, this has given you a helpful introduction to evaluation, as well as some practical tips for doing it. Here are some links to further information, including other ways of doing evaluation.
Our Knowledge is Power evaluation instructions give more detail on the above example.
The Knowledge is Power guide to planning your research (You’ll need to plan your research so you know what it is you want to evaluate!)
Evaluation Support Scotland’s website is the best place to learn all about evaluation. They have handy and accessible guides, videos and blogs to all aspects of evaluation, and it’s worth keeping an eye out for workshops they are running as well.
What is Evaluation Research? + [Methods & Examples]
by Emily Taylor
Posted at: 2/20/2023 1:30 PM
Every business and organization has goals.
But, how do you know if the time, money, and resources spent on strategies to achieve these goals are working?
Or, if they’re even worth it?
Evaluation research is a great way to answer these common questions as it measures how effective a specific program or strategy is.
In this post, we’ll cover what evaluation research is, how to conduct it, the benefits of doing so, and more.
Article Contents
- Definition of evaluation research
- The purpose of program evaluation
- Evaluation research advantages and disadvantages
- Research evaluation methods
- Examples and types of evaluation research
- Evaluation research questions
Evaluation Research: Definition
Evaluation research, also known as program evaluation, is a systematic analysis that evaluates whether a program or strategy is worth the effort, time, money, and resources spent to achieve a goal.
Based on the project’s objectives, the study may target different audiences such as:
- Stakeholders
- Prospective customers
- Board members
The feedback gathered from program evaluation research is used to validate whether something should continue or be changed in any way to better meet organizational goals.
The Purpose of Program Evaluation
The main purpose of evaluation research is to understand whether or not a process or strategy has delivered the desired results.
It is especially helpful when launching new products, services, or concepts.
That’s because program evaluation research allows you to gather feedback from target audiences to learn what is working and what needs improvement.
It is a vehicle for hearing people’s experiences with your new concept to gauge whether it is the right fit for the intended audience.
And with data-driven companies being 23 times more likely to acquire customers, it seems like a no-brainer.
As a result of evaluation research, organizations can better build a program or solution that provides audiences with exactly what they need.
Better yet, it’s done without wasting time and money figuring out new iterations before landing on the final product.
Evaluation Research Advantages & Disadvantages
In this section, our market research company dives deeper into the benefits and drawbacks of evaluation research methods.
Understanding these pros and cons will help determine if it’s right for your business.
Advantages of Evaluation Research
In many instances, the pros of program evaluation outweigh the cons.
It is an effective tool for data-driven decision-making and sets organizations on a clear path to success.
Here are just a few of the many benefits of conducting evaluation research.
Justifies the time, money, and resources spent
First, evaluation research helps justify all of the resources spent on a program or strategy.
Without evaluation research, it can be difficult to promote the continuation of a costly or time-intensive activity with no evidence it’s working.
Rather than relying on opinions and gut reactions about the effectiveness of a program or strategy, evaluation research measures levels of effectiveness through data collected.
Identifies unknown negative or positive impacts of a strategy
Second, program evaluation research helps you better understand how projects are carried out, who helps bring them to fruition, who is affected, and more.
These finer details shed light on how a program or strategy affects all facets of an organization.
As a result, you may learn there are unrealized effects that surprise you and your decision-makers.
Helps organizations improve
The research can highlight areas of strength (i.e., factors of the program or strategy that should not be changed) and areas of weakness (i.e., factors that could be improved).
Disadvantages of Evaluation Research
Despite its many advantages, there are still limitations and drawbacks to evaluation research.
Here are a few challenges to keep in mind before moving forward.
It can be costly
The cost of market research varies based on methodology, audience type, incentives, and more.
For instance, a focus group will be more expensive than an online survey.
Though, I’ll also make the argument that conducting evaluation research can save brands money down the line by preventing investment in something that turns out to be a dud.
Poor memory recall
Many research evaluation methods are dependent on feedback from customers, employees, and other audiences.
If the study is not conducted right after a process or strategy is implemented, it can be harder for these audiences to remember their true opinions and feelings on the matter.
Therefore, the data might be less accurate because of the lapse in time and memory.
Research Evaluation Methods
Evaluation research can include a mix of qualitative and quantitative methods depending on your objectives.
A market research company , like Drive Research , can design an approach to best meet your goals, objectives, and budget for a successful study.
Below we share different approaches to evaluation research.
But, here is a quick graphic that explains the main differences between qualitative and quantitative research methodologies .
[Graphic: the main differences between qualitative and quantitative research methodologies]
Quantitative Research Methods
Quantitative evaluation research aims to measure audience feedback.
Metrics quantitative market research often measures include:
- Level of impact
- Level of awareness
- Level of satisfaction
- Level of perception
- Expected usage
- Usage of competitors
These sit alongside other metrics that gauge the success of a program or strategy.
This type of evaluation research can be done through online surveys or phone surveys.
Online surveys
Perhaps the most common form of quantitative research , online surveys are extremely effective for gathering feedback.
They are commonly used for evaluation research because they offer quick, cost-effective, and actionable insights.
Typically, the survey is conducted by a third-party online survey company to ensure anonymity and limit bias from the respondents.
The market research firm develops the survey, conducts fieldwork, and creates a report based on the results.
For instance, here is the online survey process followed by Drive Research when conducting program evaluations for our clients.
[Graphic: the online survey process Drive Research follows]
Phone surveys
Another way to conduct evaluation research is with phone surveys .
This type of market research allows trained interviewers to have one-on-one conversations with your target audience.
Oftentimes they range from 15- to 30-minute discussions to gather enough information and feedback.
The benefit of phone surveys for program evaluation research is that interviewers can ask respondents to explain their answers in more detail.
An online survey, by contrast, is limited to multiple-choice questions with predetermined answer options (plus a few open ends).
Though, online surveys are much faster and more cost-effective to complete.
Recommended Reading: What is the Most Cost-Effective Market Research Methodology?
Qualitative Research Methods
Qualitative evaluation research aims to explore audience feedback.
Factors qualitative market research often evaluates include:
- Areas of satisfaction
- Areas of weakness
- Recommendations
This type of exploratory evaluation research can be completed through in-depth interviews or focus groups.
It involves working with a qualitative recruiting company to recruit specific types of people for the research, developing a specific line of questioning, and then summarizing the results to ensure anonymity.
For instance, here is the process Drive Research follows when recruiting people to participate in evaluation research.
[Graphic: the participant recruiting process Drive Research follows]
Focus groups
If you are considering conducting qualitative evaluation research, it’s likely that focus groups are your top methodology of choice.
Focus groups are a great way to collect feedback from targeted audiences all at once.
It is also a helpful methodology for showing product mockups, logo designs, commercials, and more.
Though, a great alternative to traditional focus groups is online focus groups .
Remote focus groups can reduce the costs of evaluation research because they eliminate many of the fees associated with in-person groups.
For instance, there are no facility rental fees.
Plus, recruiting participants is cheaper because you can cast a wider net: participants can join an online forum from anywhere in the country.

In-depth interviews (IDIs)
Similar to focus groups, in-depth interviews gather tremendous amounts of information and feedback from target consumers.
In this setting though, interviewers speak with participants one-on-one, rather than in a group.
This level of attention allows interviewers to expand on more areas of what satisfies and dissatisfies someone about a product, service, or program.
Additionally, it eliminates group bias in evaluation research.
This is because participants are more comfortable providing honest opinions without being intimidated by others in a focus group.
Examples and Types of Evaluation Research
There are different types of evaluation research based on the business and audience type.
Most commonly it is carried out for product concepts, marketing strategies, and programs.
We share a few examples of each below.
Product Evaluation Research Example
Each year, 95 percent of new products introduced to the market fail.
Therefore, market research for new product development is critical in determining what could derail the success of a concept before it reaches shelves.
Lego is a great example of a brand using evaluation research for new product concepts.
In 2011 they learned 90% of their buyers were boys.
Although boys were not their sole target demographic, the brand had more products that were appealing to this audience such as Star Wars and superheroes.
To grow its audience, Lego conducted evaluation research to determine what topics and themes would entice female buyers.
With this insight, Lego launched Lego Friends. It included more details and features girls were looking for.
Marketing Evaluation Research Example
Marketing evaluation research, also known as campaign evaluation surveying, is a technique used to measure the effectiveness of advertising and marketing strategies.
An example of this would be surveying a target audience before and after launching a paid social media campaign.
Brands can determine if factors such as awareness, perception, and likelihood to purchase have changed due to the advertisements.
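As a simple sketch of that pre/post measurement, here is how awareness lift might be calculated from two survey waves; the counts are invented for illustration.

```python
# Awareness lift from pre- and post-campaign survey waves (invented counts).
pre_aware,  pre_n  = 120, 500   # respondents aware of the brand before the campaign
post_aware, post_n = 185, 500   # respondents aware after the campaign

pre_rate, post_rate = pre_aware / pre_n, post_aware / post_n
print(f"Awareness before: {pre_rate:.0%}, after: {post_rate:.0%}, "
      f"lift: {post_rate - pre_rate:+.0%}")
```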
Recommended Reading: Advertising Testing with Market Research
Process Evaluation Research Example
Process evaluations are commonly used to understand the implementation of a new program.
It helps decision-makers evaluate how a program’s goal or outcome was achieved.
Additionally, process evaluation research quantifies how often the program was used, who benefited from the program, the resources used to implement the new process, any problems encountered, and more.
Examples of programs and processes where evaluation research is beneficial are:
- Customer loyalty programs
- Client referral programs
- Customer retention programs
- Workplace wellness programs
- Orientation of new employees
- Employee buddy programs
Evaluation Research Questions
Evaluation research design sets the tone for a successful study.
It is important to ask the right questions in order to achieve the intended results.
Product evaluation research questions include:
- How appealing is the following product concept?
- If available in a store near you, how likely are you to purchase [product]?
- Which of the following packaging types do you prefer?
- Which of the following [colors, flavors, sizes, etc.] would you be most interested in purchasing?
Marketing evaluation research questions include:
- Please rate your level of awareness for [Brand].
- What is your perception of [Brand]?
- Do you remember seeing advertisements for [Brand] in the past 3 months?
- Where did you see or hear the advertising for [Brand]? (e.g., Facebook, TV, radio)
- How likely are you to make a purchase from [Brand]?
Process evaluation research questions include:
- Please rate your level of satisfaction with [Process].
- Please explain why you provided [Rating].
- What barriers existed to implementing [Process]?
- How likely are you to use [Process] moving forward?
- Please rate your level of agreement with the following statement: I find a lot of value in [Process].
While these are great examples of what evaluation research questions to ask, keep in mind they should be reflective of your unique goals and objectives.
Our evaluation research company can help design, program, field, and analyze your survey to ensure you are using quality data to drive decision-making.
Contact Our Evaluation Research Company
Wondering if continuing an employee or customer program is still offering value to your organization? Or, perhaps you need to determine if a new product concept is working as effectively as it should be. Evaluation research can help achieve these objectives and plenty of others.
Drive Research is a full-service market research company specializing in evaluation research through surveys, focus groups, and IDIs. Contact our team by emailing [email protected] .
What is evaluation research: Methods & examples
Defne Çobanoğlu
You have created a program or product that has been running for some time, and you want to check how efficient it is. You can conduct evaluation research to get the insight you want about the project, and there is more than one method for obtaining this information.
Afterward, when you collect the appropriate data about the program's effectiveness, budget-friendliness, and customer opinions, you can go one step further. The valuable information you collect from the research gives you a clear idea of what to do next: you can discard the project, upgrade it, make changes, or replace it. Now, let us go into detail about evaluation research and its methods.
First things first: Definition of evaluation research
Basically, evaluation research is a research process in which you measure the effectiveness and success of a particular program, policy, intervention, or project. This type of research lets you know whether the goal of that product was met and shows you any areas that need improvement . The data gathered from evaluation research give good insight into whether or not the time, money, and energy put into the project are worth it.
The findings from evaluation research can be used to decide whether to continue, modify, or discontinue a program or intervention, and to improve future ones . In other words, it means doing research to evaluate the quality and effectiveness of the overall project.
Why conduct evaluation research & when?
Conducting evaluation research is an effective way to test the usability and cost-effectiveness of a current project or product. Findings gathered from evaluative research play a key role in assessing what works and what doesn't, and in identifying areas of improvement for sponsors and administrators. This type of evaluation is a good means of data collection, and it provides concrete results for decision-making.
There are different methods to collect feedback ranging from online surveys to focus groups. Evaluation research is best used when:
- You are planning a different approach
- You want to make sure everything is going as you want it to
- You want to prove the effectiveness of an activity to stakeholders and administrators
- You want to set realistic goals for the future
Methods to conduct evaluation research
When you want to conduct evaluation research, there are several methods to choose from. You can review the possible methods and pick the one(s) most suitable for your target audience, workforce, and budget. Let us look at the qualitative and quantitative research methodologies.
Quantitative methods
These methods ask questions that yield tangible answers, relying on numerical data and statistical analysis to draw conclusions . Such questions include “ How many people? ”, “ What is the price? ”, and “ What is the profit rate? ”. They therefore provide researchers with quantitative data from which to draw concrete conclusions. Now, let us look at the quantitative research methods.
1 - Online surveys
Surveys involve collecting data from a large number of people using appropriate evaluation questions to gather accurate feedback . This method allows you to reach a wide audience quickly and cost-effectively. You can ask about various topics, from user satisfaction to market research, and a free survey maker such as forms.app can help with your next research project!
2 - Phone surveys
Phone surveys are a type of survey that involves conducting interviews with participants over the phone . They are a form of quantitative research and are commonly used by organizations and researchers to collect data from people in a short time. During a phone survey, a trained interviewer will call the participant and ask them a series of questions.
Qualitative methods
These methods aim to explore audience feedback. They are used to study phenomena that cannot easily be measured with statistical techniques, such as opinions, attitudes, and behaviors . Techniques such as observation, interviews, and case studies are used in this kind of evaluation.
1 - Case studies
Case studies involve the in-depth analysis of a single case or a small number of cases. In a case study, the researcher collects data from a variety of sources, such as interviews, observations, and documents. The data collected from case studies are often analyzed to identify patterns and themes .
2 - Focus groups
Using focus groups means presenting a small group of people, usually 6-10, with a certain topic. The group is introduced to a topic, product, or concept and shares its reactions . Focus groups are a good way to obtain data because the responses are immediate. This method is commonly used by businesses to gain insight into their customers.
Evaluation research examples
Conducting evaluation research has helped many businesses advance in the market, because a big part of success comes from listening to your audience. For example, in 2011 Lego found out that only around 10% of their customers were girls. They wanted to expand their audience, so Lego conducted evaluation research to find and launch products that would appeal to girls.
Survey questions to use in your own evaluation research
No matter which method you choose, there are some essential questions you should include in your research process. If you prepare your questions beforehand and ask all participants/customers the same questions, you will end up with a uniform set of answers, which allows you to form a better judgment. Here are some good questions to include, with a quick sketch after the list showing how the answers might be tallied:
1 - How often do you use the product?
2 - How satisfied are you with the features of the product?
3 - How would you rate the product on a scale of 1-5?
4 - How easy is it to use our product/service?
5 - How was your experience completing tasks using the product?
6 - Will you recommend this product to others?
7 - Are you excited about using the product in the future?
8 - What would you like to change in the product/project?
9 - Did the program produce the intended outcomes?
10 - What were the unintended outcomes?
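Here is the promised sketch of tallying the answers, assuming a 1-5 rating question and a yes/no recommendation question; the responses are invented.

```python
# Summarizing invented answers to a 1-5 rating and a yes/no recommendation.
from statistics import mean

ratings   = [4, 5, 3, 4, 5, 2, 4]                 # "Rate the product on a scale of 1-5"
recommend = ["yes", "yes", "no", "yes", "yes"]    # "Will you recommend this product?"

print(f"Average rating: {mean(ratings):.1f} / 5")
print(f"Would recommend: {recommend.count('yes') / len(recommend):.0%}")
```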
What’s the difference between generative vs. evaluative research?
Generative research is conducted to generate new ideas or hypotheses by understanding your users' motivations, pain points, and behaviors. The goal of generative research is to define possible research questions, develop new theories, and plan the best possible solutions to those problems . Generative research is often used at the beginning of a research project or product.
Evaluative research, on the other hand, is conducted to measure the effectiveness of a project or program. The goal of evaluative research is to measure whether the existing project, program, or product has achieved its intended objectives . This method is used to assess the project at hand to ensure it is usable, works as intended, and meets users' demands and expectations. This type of research plays a role in deciding whether to continue, modify, or put an end to the project.
You can decide between generative and evaluative research by figuring out what you need to find out. Of course, both methods can be useful at different points of the research process, as they yield different types of evidence. So first determine your goal in conducting the research, and then decide on the method to go with.
Conducting evaluation research means making sure everything in your project is going as you want it to, or finding areas of improvement for your next steps. There is more than one method to choose from: you can run focus groups or case studies to collect opinions, or online surveys to get tangible answers.
If you choose to do online surveys, you can try forms.app, one of the best survey makers out there, with more than 1000 ready-to-go templates. If you wish to know more about forms.app, you can check out our article on user experience questions !