Grad Coach

Research Design 101

Everything You Need To Get Started (With Examples)

By: Derek Jansen (MBA) | Reviewers: Eunice Rautenbach (DTech) & Kerryn Warren (PhD) | April 2023

Research design for qualitative and quantitative studies

Navigating the world of research can be daunting, especially if you’re a first-time researcher. One concept you’re bound to run into fairly early in your research journey is that of “research design”. Here, we’ll guide you through the basics using practical examples, so that you can approach your research with confidence.

Overview: Research Design 101

  • What is research design?

  • Research design types for quantitative studies
  • Video explainer : quantitative research design
  • Research design types for qualitative studies
  • Video explainer : qualitative research design
  • How to choose a research design
  • Key takeaways

Research design refers to the overall plan, structure or strategy that guides a research project , from its conception to the final data analysis. A good research design serves as the blueprint for how you, as the researcher, will collect and analyse data while ensuring consistency, reliability and validity throughout your study.

Understanding the different types of research design is essential, as it helps ensure that your approach is suitable given your research aims, objectives and questions, as well as the resources you have available to you. Without a clear big-picture view of how you’ll design your research, you run the risk of making misaligned choices in terms of your methodology – especially your sampling, data collection and data analysis decisions.

The problem with defining research design…

One of the reasons students struggle with a clear definition of research design is because the term is used very loosely across the internet, and even within academia.

Some sources claim that the three research design types are qualitative, quantitative and mixed methods , which isn’t quite accurate (these just refer to the type of data that you’ll collect and analyse). Other sources state that research design refers to the sum of all your design choices, suggesting it’s more like a research methodology . Others run off on other less common tangents. No wonder there’s confusion!

In this article, we’ll clear up the confusion. We’ll explain the most common research design types for both qualitative and quantitative research projects, whether that is for a full dissertation or thesis, or a smaller research paper or article.

Research Design: Quantitative Studies

Quantitative research involves collecting and analysing data in a numerical form. Broadly speaking, there are four types of quantitative research designs: descriptive , correlational , experimental , and quasi-experimental . 

Descriptive Research Design

As the name suggests, descriptive research design focuses on describing existing conditions, behaviours, or characteristics by systematically gathering information without manipulating any variables. In other words, there is no intervention on the researcher’s part – only data collection.

For example, if you’re studying smartphone addiction among adolescents in your community, you could deploy a survey to a sample of teens asking them to rate their agreement with certain statements that relate to smartphone addiction. The collected data would then provide insight regarding how widespread the issue may be – in other words, it would describe the situation.
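
To make this concrete, here is a minimal sketch (in Python, with entirely made-up ratings) of how such survey responses could be summarised descriptively; the statement wording and the cut-off used for “agreement” are illustrative assumptions, not part of the original example.

```python
# Minimal sketch of summarising a descriptive survey, assuming hypothetical 1-5
# agreement ratings with a statement such as "I feel anxious without my phone".
ratings = [5, 4, 2, 5, 3, 4, 4, 1, 5, 3, 4, 5]

agree = sum(1 for r in ratings if r >= 4)   # ratings of 4 or 5 count as agreement
print(f"Mean rating: {sum(ratings) / len(ratings):.2f}")
print(f"Proportion agreeing: {agree / len(ratings):.0%}")
# The output simply describes the sample; it says nothing about causes or relationships.
```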

The key defining attribute of this type of research design is that it purely describes the situation . In other words, descriptive research design does not explore potential relationships between different variables or the causes that may underlie those relationships. Therefore, descriptive research is useful for generating insight into a research problem by describing its characteristics . By doing so, it can provide valuable insights and is often used as a precursor to other research design types.

Correlational Research Design

Correlational design is a popular choice for researchers aiming to identify and measure the relationship between two or more variables without manipulating them . In other words, this type of research design is useful when you want to know whether a change in one thing tends to be accompanied by a change in another thing.

For example, if you wanted to explore the relationship between exercise frequency and overall health, you could use a correlational design to help you achieve this. In this case, you might gather data on participants’ exercise habits, as well as records of their health indicators like blood pressure, heart rate, or body mass index. Thereafter, you’d use a statistical test to assess whether there’s a relationship between the two variables (exercise frequency and health).
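
As an illustration, the sketch below (Python, using SciPy's `pearsonr` and entirely hypothetical values for exercise frequency and resting heart rate) shows what such a correlational test might look like in practice.

```python
# Minimal sketch of a correlational analysis, assuming illustrative data:
# exercise frequency (sessions per week) and resting heart rate for ten participants.
from scipy.stats import pearsonr

exercise_per_week = [0, 1, 2, 2, 3, 3, 4, 5, 5, 6]
resting_heart_rate = [78, 76, 74, 75, 71, 70, 68, 66, 65, 63]

r, p_value = pearsonr(exercise_per_week, resting_heart_rate)
print(f"Pearson r = {r:.2f}, p = {p_value:.4f}")
# A strong negative r would suggest that more exercise tends to accompany a lower
# resting heart rate - an association only, not evidence of causation.
```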

As you can see, correlational research design is useful when you want to explore potential relationships between variables that cannot be manipulated or controlled for ethical, practical, or logistical reasons. It is particularly helpful in terms of developing predictions, and given that it doesn’t involve the manipulation of variables, it can be implemented at a large scale more easily than experimental designs (which we’ll look at next).

That said, it’s important to keep in mind that correlational research design has limitations – most notably that it cannot be used to establish causality . In other words, correlation does not equal causation . To establish causality, you’ll need to move into the realm of experimental design, coming up next…


Experimental Research Design

Experimental research design is used to determine whether there is a causal relationship between two or more variables. With this type of research design, you, as the researcher, manipulate one variable (the independent variable) and measure its effect on another (the dependent variable), while controlling for other variables that could influence the outcome. Doing so allows you to observe the effect of the former on the latter and draw conclusions about potential causality.

For example, if you wanted to measure if/how different types of fertiliser affect plant growth, you could set up several groups of plants, with each group receiving a different type of fertiliser, as well as one with no fertiliser at all. You could then measure how much each plant group grew (on average) over time and compare the results from the different groups to see which fertiliser was most effective.
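
A minimal sketch of how that group comparison might be analysed, assuming hypothetical growth measurements and a one-way ANOVA via SciPy (one common choice for comparing several group means):

```python
# Minimal sketch of the fertiliser experiment's analysis, assuming made-up growth
# measurements (cm) for two fertiliser groups and one no-fertiliser control.
from scipy.stats import f_oneway

control      = [10.1, 9.8, 10.4, 10.0, 9.9]
fertiliser_a = [12.3, 12.9, 13.1, 12.5, 12.7]
fertiliser_b = [11.0, 11.4, 10.9, 11.2, 11.5]

f_stat, p_value = f_oneway(control, fertiliser_a, fertiliser_b)
print(f"F = {f_stat:.2f}, p = {p_value:.4f}")
# A small p-value suggests at least one group mean differs; post-hoc tests
# (e.g. Tukey's HSD) would then identify which fertiliser drove the difference.
```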

Overall, experimental research design provides researchers with a powerful way to identify and measure causal relationships (and the direction of causality) between variables. However, developing a rigorous experimental design can be challenging as it’s not always easy to control all the variables in a study. This often results in smaller sample sizes , which can reduce the statistical power and generalisability of the results.

Moreover, experimental research design requires random assignment . This means that the researcher needs to assign participants to different groups or conditions in a way that each participant has an equal chance of being assigned to any group (note that this is not the same as random sampling ). Doing so helps reduce the potential for bias and confounding variables . This need for random assignment can lead to ethics-related issues . For example, withholding a potentially beneficial medical treatment from a control group may be considered unethical in certain situations.
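
The sketch below illustrates random assignment (as opposed to random sampling) with hypothetical participant IDs; the fixed seed is only there to make the illustration reproducible.

```python
# Minimal sketch of random assignment, assuming a list of already-recruited
# participant IDs to be split into treatment and control groups.
import random

participants = [f"P{i:02d}" for i in range(1, 21)]  # 20 recruited participants
random.seed(42)          # fixed seed only so the illustration is reproducible
random.shuffle(participants)

half = len(participants) // 2
treatment_group = participants[:half]
control_group = participants[half:]
print("Treatment:", treatment_group)
print("Control:  ", control_group)
# Each participant had an equal chance of landing in either group, which is what
# distinguishes a true experiment from a quasi-experiment.
```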

Quasi-Experimental Research Design

Quasi-experimental research design is used when the research aims involve identifying causal relations , but one cannot (or doesn’t want to) randomly assign participants to different groups (for practical or ethical reasons). Instead, with a quasi-experimental research design, the researcher relies on existing groups or pre-existing conditions to form groups for comparison.

For example, if you were studying the effects of a new teaching method on student achievement in a particular school district, you may be unable to randomly assign students to either group and instead have to choose classes or schools that already use different teaching methods. This way, you still achieve separate groups, without having to assign participants to specific groups yourself.

Naturally, quasi-experimental research designs have limitations when compared to experimental designs. Given that participant assignment is not random, it’s more difficult to confidently establish causality between variables, and, as a researcher, you have less control over other variables that may impact findings.

All that said, quasi-experimental designs can still be valuable in research contexts where random assignment is not possible and can often be undertaken on a much larger scale than experimental research, thus increasing the statistical power of the results. What’s important is that you, as the researcher, understand the limitations of the design and conduct your quasi-experiment as rigorously as possible, paying careful attention to any potential confounding variables .

The four most common quantitative research design types are descriptive, correlational, experimental and quasi-experimental.

Research Design: Qualitative Studies

There are many different research design types when it comes to qualitative studies, but here we’ll narrow our focus to explore the “Big 4”. Specifically, we’ll look at phenomenological design, grounded theory design, ethnographic design, and case study design.

Phenomenological Research Design

Phenomenological design involves exploring the meaning of lived experiences and how they are perceived by individuals. This type of research design seeks to understand people’s perspectives , emotions, and behaviours in specific situations. Here, the aim for researchers is to uncover the essence of human experience without making any assumptions or imposing preconceived ideas on their subjects.

For example, you could adopt a phenomenological design to study why cancer survivors have such varied perceptions of their lives after overcoming their disease. This could be achieved by interviewing survivors and then analysing the data using a qualitative analysis method such as thematic analysis to identify commonalities and differences.

Phenomenological research design typically involves in-depth interviews or open-ended questionnaires to collect rich, detailed data about participants’ subjective experiences. This richness is one of the key strengths of phenomenological research design but, naturally, it also has limitations. These include potential biases in data collection and interpretation and the lack of generalisability of findings to broader populations.

Grounded Theory Research Design

Grounded theory (also referred to as “GT”) aims to develop theories by continuously and iteratively analysing and comparing data collected from a relatively large number of participants in a study. It takes an inductive (bottom-up) approach, with a focus on letting the data “speak for itself”, without being influenced by preexisting theories or the researcher’s preconceptions.

As an example, let’s assume your research aims involved understanding how people cope with chronic pain from a specific medical condition, with a view to developing a theory around this. In this case, grounded theory design would allow you to explore this concept thoroughly without preconceptions about what coping mechanisms might exist. You may find that some patients prefer cognitive-behavioural therapy (CBT) while others prefer to rely on herbal remedies. Based on multiple, iterative rounds of analysis, you could then develop a theory in this regard, derived directly from the data (as opposed to other preexisting theories and models).

Grounded theory typically involves collecting data through interviews or observations and then analysing it to identify patterns and themes that emerge from the data. These emerging ideas are then validated by collecting more data until a saturation point is reached (i.e., no new information can be squeezed from the data). From that base, a theory can then be developed .
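
As a toy illustration of the saturation idea, the sketch below assumes each interview round has already been coded by the researcher into concept labels, and simply checks whether a round contributes any codes not seen before; real saturation judgements are of course far more nuanced.

```python
# Toy sketch of checking for saturation, assuming each round of interviews has
# already been coded into a set of concept labels (hypothetical codes shown).
rounds_of_codes = [
    {"denial", "peer support", "routine"},           # round 1
    {"peer support", "herbal remedies", "routine"},  # round 2
    {"CBT", "routine", "denial"},                     # round 3
    {"routine", "peer support"},                      # round 4 - nothing new
]

seen = set()
for i, codes in enumerate(rounds_of_codes, start=1):
    new_codes = codes - seen
    seen |= codes
    print(f"Round {i}: {len(new_codes)} new code(s): {sorted(new_codes)}")
    if not new_codes:
        print("No new codes emerged - a rough signal that saturation may be near.")
        break
```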

As you can see, grounded theory is ideally suited to studies where the research aims involve theory generation , especially in under-researched areas. Keep in mind though that this type of research design can be quite time-intensive , given the need for multiple rounds of data collection and analysis.

Ethnographic Research Design

Ethnographic design involves observing and studying a culture-sharing group of people in their natural setting to gain insight into their behaviours, beliefs, and values. The focus here is on observing participants in their natural environment (as opposed to a controlled environment). This typically involves the researcher spending an extended period of time with the participants in their environment, carefully observing and taking field notes .

All of this is not to say that ethnographic research design relies purely on observation. On the contrary, this design typically also involves in-depth interviews to explore participants’ views, beliefs, etc. However, unobtrusive observation is a core component of the ethnographic approach.

As an example, an ethnographer may study how different communities celebrate traditional festivals or how individuals from different generations interact with technology differently. This may involve a lengthy period of observation, combined with in-depth interviews to further explore specific areas of interest that emerge as a result of the observations that the researcher has made.

As you can probably imagine, ethnographic research design has the ability to provide rich, contextually embedded insights into the socio-cultural dynamics of human behaviour within a natural, uncontrived setting. Naturally, however, it does come with its own set of challenges, including researcher bias (since the researcher can become quite immersed in the group), participant confidentiality and, predictably, ethical complexities . All of these need to be carefully managed if you choose to adopt this type of research design.

Case Study Design

With case study research design, you, as the researcher, investigate a single individual (or a single group of individuals) to gain an in-depth understanding of their experiences, behaviours or outcomes. Unlike other research designs that are aimed at larger sample sizes, case studies offer a deep dive into the specific circumstances surrounding a person, group of people, event or phenomenon, generally within a bounded setting or context .

As an example, a case study design could be used to explore the factors influencing the success of a specific small business. This would involve diving deeply into the organisation to explore and understand what makes it tick – from marketing to HR to finance. In terms of data collection, this could include interviews with staff and management, review of policy documents and financial statements, surveying customers, etc.

While the above example is focused squarely on one organisation, it’s worth noting that case study research designs can have different variations, including single-case, multiple-case and longitudinal designs. As you can see in the example, a single-case design involves intensely examining a single entity to understand its unique characteristics and complexities. Conversely, in a multiple-case design, multiple cases are compared and contrasted to identify patterns and commonalities. Lastly, in a longitudinal case design, a single case or multiple cases are studied over an extended period of time to understand how factors develop over time.

As you can see, a case study research design is particularly useful where a deep and contextualised understanding of a specific phenomenon or issue is desired. However, this strength is also its weakness. In other words, you can’t generalise the findings from a case study to the broader population. So, keep this in mind if you’re considering going the case study route.

Case study design often involves investigating an individual to gain an in-depth understanding of their experiences, behaviours or outcomes.

How To Choose A Research Design

Having worked through all of these potential research designs, you’d be forgiven for feeling a little overwhelmed and wondering, “ But how do I decide which research design to use? ”. While we could write an entire post covering that alone, here are a few factors to consider that will help you choose a suitable research design for your study.

Data type: The first determining factor is naturally the type of data you plan to be collecting – i.e., qualitative or quantitative. This may sound obvious, but we have to be clear about this – don’t try to use a quantitative research design on qualitative data (or vice versa)!

Research aim(s) and question(s): As with all methodological decisions, your research aim and research questions will heavily influence your research design. For example, if your research aims involve developing a theory from qualitative data, grounded theory would be a strong option. Similarly, if your research aims involve identifying and measuring relationships between variables, one of the experimental designs would likely be a better option.

Time: It’s essential that you consider any time constraints you have, as this will impact the type of research design you can choose. For example, if you’ve only got a month to complete your project, a lengthy design such as ethnography wouldn’t be a good fit.

Resources: Take into account the resources realistically available to you, as these need to factor into your research design choice. For example, if you require highly specialised lab equipment to execute an experimental design, you need to be sure that you’ll have access to that before you make a decision.

Keep in mind that when it comes to research, it’s important to manage your risks and play as conservatively as possible. If your entire project relies on you achieving a huge sample, having access to niche equipment or holding interviews with very difficult-to-reach participants, you’re creating risks that could kill your project. So, be sure to think through your choices carefully and make sure that you have backup plans for any existential risks. Remember that a relatively simple methodology executed well will typically earn better marks than a highly complex methodology executed poorly.

Recap: Key Takeaways

We’ve covered a lot of ground here. Let’s recap by looking at the key takeaways:

  • Research design refers to the overall plan, structure or strategy that guides a research project, from its conception to the final analysis of data.
  • Research designs for quantitative studies include descriptive , correlational , experimental and quasi-experimental designs.
  • Research designs for qualitative studies include phenomenological , grounded theory , ethnographic and case study designs.
  • When choosing a research design, you need to consider a variety of factors, including the type of data you’ll be working with, your research aims and questions, your time and the resources available to you.

If you need a helping hand with your research design (or any other aspect of your research), check out our private coaching services .


What is a Research Design? Definition, Types, Methods and Examples

By Nick Jain

Published on: September 8, 2023

Table of Contents

  • What is a Research Design?
  • Types of Research Design
  • Research Design Methods
  • Research Design Examples

What is a Research Design?

A research design is defined as the overall plan or structure that guides the process of conducting research. It is a critical component of the research process and serves as a blueprint for how a study will be carried out, including the methods and techniques that will be used to collect and analyze data. A well-designed research study is essential for ensuring that the research objectives are met and that the results are valid and reliable.

Key elements of research design include:

  • Research Objectives: Clearly define the goals and objectives of the research study. What is the research trying to achieve or investigate?
  • Research Questions or Hypotheses: Formulating specific research questions or hypotheses that address the objectives of the study. These questions guide the research process.
  • Data Collection Methods: Determining how data will be collected, whether through surveys, experiments, observations, interviews, archival research, or a combination of these methods.
  • Sampling: Deciding on the target population and selecting a sample that represents that population. Sampling methods can vary, such as random sampling, stratified sampling, or convenience sampling.
  • Data Collection Instruments: Developing or selecting the tools and instruments needed to collect data, such as questionnaires, surveys, or experimental equipment.
  • Data Analysis: Defining the statistical or analytical techniques that will be used to analyze the collected data. This may involve qualitative or quantitative methods , depending on the research goals.
  • Time Frame: Establishing a timeline for the research project, including when data will be collected, analyzed, and reported.
  • Ethical Considerations: Addressing ethical issues, including obtaining informed consent from participants, ensuring the privacy and confidentiality of data, and adhering to ethical guidelines.
  • Resources: Identifying the resources needed for the research , including funding, personnel, equipment, and access to data sources.
  • Data Presentation and Reporting: Planning how the research findings will be presented and reported, whether through written reports, presentations, or other formats.

There are various research designs, such as experimental, observational, survey, case study, and longitudinal designs, each suited to different research questions and objectives. The choice of research design depends on the nature of the research and the goals of the study.

A well-constructed research design is crucial because it helps ensure the validity, reliability, and generalizability of research findings, allowing researchers to draw meaningful conclusions and contribute to the body of knowledge in their field.

Types of Research Design

There are several types of research designs, each tailored to answer specific research questions and achieve particular objectives. The choice of research design depends on the nature of the research problem and the goals of the study. Here are several typical types of research designs:

1. Experimental Research Design

Randomized Controlled Trial (RCT): In a randomized controlled trial (RCT), individuals are assigned randomly to either an experimental group or a control group. This design is often used to assess the impact of a treatment or intervention.

2. Quasi-Experimental Research Design

Non-equivalent Group Design: In this design, two or more groups are compared, but participants are not randomly assigned. This is common when random assignment is not feasible or ethical.

3. Observational Research Design

Cross-Sectional Study: In a cross-sectional study, data is collected from a single point in time to examine relationships or differences between variables. It does not involve follow-up over time.

Longitudinal Study: This design involves collecting data from the same group of participants over an extended period to study changes and trends over time.

4. Descriptive Research Design

Survey Research: Surveys involve collecting data from a sample of individuals through questionnaires or interviews to describe characteristics, attitudes, or opinions.

Case Study: Case studies involve an in-depth examination of a single individual, group, or phenomenon. They are often used to gain a deep understanding of a unique case.

5. Correlational Research Design

Correlational Study: This design examines the relationships between two or more variables to determine if they are associated. However, it does not establish causation.

6. Ex Post Facto Research Design

In this design, researchers examine existing conditions or behaviors and look for potential causes retrospectively. It’s useful when it’s not feasible to manipulate variables.

7. Exploratory Research Design

Pilot Study: A pilot study is a small-scale preliminary investigation conducted before a full-scale research project to test research procedures and gather initial data.

8. Cohort Study

Cohort studies follow a group of individuals (cohort) over a period of time to assess the development of specific outcomes or conditions. They are common in epidemiology.

9. Action Research

Action research is often used in educational or organizational settings. Researchers work collaboratively with practitioners to address practical problems and make improvements.

10. Meta-Analysis

A meta-analysis involves the statistical synthesis of data from multiple studies on the same topic to provide a more comprehensive overview of research findings.
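
For a sense of what the statistical synthesis involves, here is a minimal fixed-effect (inverse-variance) pooling sketch with hypothetical effect sizes and standard errors; real meta-analyses would also assess heterogeneity and often use random-effects models.

```python
# Minimal sketch of a fixed-effect meta-analysis, assuming hypothetical effect
# sizes (standardised mean differences) and standard errors from three studies.
effects = [0.30, 0.45, 0.25]
std_errors = [0.10, 0.15, 0.08]

weights = [1 / se**2 for se in std_errors]               # inverse-variance weights
pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
pooled_se = (1 / sum(weights)) ** 0.5

print(f"Pooled effect = {pooled:.3f} (SE = {pooled_se:.3f})")
# Larger, more precise studies (smaller SE) contribute more to the pooled estimate.
```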

11. Cross-Sequential Design

This design combines elements of both cross-sectional and longitudinal research to examine age-related changes while comparing different cohorts.

12. Grounded Theory

Grounded theory is a qualitative research approach that focuses on developing theories or explanations grounded in the data collected during the research process.

Each of these research designs has its strengths and weaknesses, and the choice of design depends on the research question, available resources, ethical considerations, and the nature of the data needed to address the research objectives. Researchers often select the design that best aligns with their specific research goals and constraints.

Learn more: What is Research?

Research Design Methods

Research design methods refer to the systematic approaches and techniques used to plan, structure, and conduct a research study. The choice of research design method depends on the research questions, objectives, and the nature of the study. Here are some key research design methods commonly used in various fields:

1. Experimental Method

Controlled Experiments: In controlled experiments, researchers manipulate one or more independent variables and measure their effects on dependent variables while controlling for confounding factors.

2. Observational Method

Naturalistic Observation: Researchers observe and record behavior in its natural setting without intervening. This method is often used in psychology and anthropology.

Structured Observation: Observations are made using a predetermined set of criteria or a structured observation schedule.

3. Survey Method

Questionnaires: Researchers collect data by administering structured questionnaires to participants. This method is widely used for collecting quantitative research data.

Interviews: In interviews, researchers ask questions directly to participants, allowing for more in-depth responses. Interviews can take on structured, semi-structured, or unstructured formats.

4. Case Study Method

Single-Case Study: Focuses on a single individual or entity, providing an in-depth analysis of that case.

Multiple-Case Study: Involves the examination of multiple cases to identify patterns, commonalities, or differences.

5. Content Analysis

Researchers analyze textual, visual, or audio data to identify patterns, themes, and trends. This method is commonly used in media studies and social sciences.

6. Historical Research

Researchers examine historical documents, records, and artifacts to understand past events, trends, and contexts.

7. Action Research

Researchers work collaboratively with practitioners to address practical problems or implement interventions in real-world settings.

8. Ethnographic Research

Researchers immerse themselves in a particular cultural or social group to gain a deep understanding of their behaviors, beliefs, and practices.

9. Cross-sectional and Longitudinal Surveys

Cross-sectional surveys collect data from a sample of participants at a single point in time.

Longitudinal surveys collect data from the same participants over an extended period, allowing for the study of changes over time.

10. Meta-Analysis

Researchers conduct a quantitative synthesis of data from multiple studies to provide a comprehensive overview of research findings on a particular topic.

11. Mixed-Methods Research

Combines qualitative and quantitative research methods to provide a more holistic understanding of a research problem.

12. Grounded Theory

A qualitative research method that aims to develop theories or explanations grounded in the data collected during the research process.

13. Simulation and Modeling

Researchers use mathematical or computational models to simulate real-world phenomena and explore various scenarios.
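
A minimal sketch of the simulation idea, using an assumed scenario: estimating how often a 30-person survey would show 60%+ agreement purely by chance if the true population agreement rate were 50%.

```python
# Minimal sketch of a simulation study with a hypothetical scenario.
import random

random.seed(1)
trials = 10_000
hits = 0
for _ in range(trials):
    sample = [random.random() < 0.5 for _ in range(30)]  # simulate 30 respondents
    if sum(sample) / 30 >= 0.6:
        hits += 1

print(f"Estimated probability: {hits / trials:.3f}")
# Running many simulated samples lets researchers explore scenarios that would be
# costly or impossible to observe directly.
```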

14. Survey Experiments

Combines elements of surveys and experiments, allowing researchers to manipulate variables within a survey context.

15. Case-Control Studies and Cohort Studies

These epidemiological research methods are used to study the causes and risk factors associated with diseases and health outcomes.

16. Cross-Sequential Design

Combines elements of cross-sectional and longitudinal research to examine both age-related changes and cohort differences.

The selection of a specific research design method should align with the research objectives, the type of data needed, available resources, ethical considerations, and the overall research approach. Researchers often choose methods that best suit the nature of their study and research questions to ensure that they collect relevant and valid data.

Learn more: What is Research Objective?

Research Design Examples

Research designs can vary significantly depending on the research questions and objectives. Here are some examples of research designs across different disciplines:

  • Experimental Design: A pharmaceutical company conducts a randomized controlled trial (RCT) to test the efficacy of a new drug. Participants are randomly assigned to two groups: one receiving the new drug and the other a placebo. The company measures the health outcomes of both groups over a specific period.
  • Observational Design: An ecologist observes the behavior of a particular bird species in its natural habitat to understand its feeding patterns, mating rituals, and migration habits.
  • Survey Design: A market research firm conducts a survey to gather data on consumer preferences for a new product. They distribute a questionnaire to a representative sample of the target population and analyze the responses.
  • Case Study Design: A psychologist conducts a case study on an individual with a rare psychological disorder to gain insights into the causes, symptoms, and potential treatments of the condition.
  • Content Analysis: Researchers analyze a large dataset of social media posts to identify trends in public opinion and sentiment during a political election campaign.
  • Historical Research: A historian examines primary sources such as letters, diaries, and official documents to reconstruct the events and circumstances leading up to a significant historical event.
  • Action Research: A school teacher collaborates with colleagues to implement a new teaching method in their classrooms and assess its impact on student learning outcomes through continuous reflection and adjustment.
  • Ethnographic Research: An anthropologist lives with and observes an indigenous community for an extended period to understand their culture, social structures, and daily lives.
  • Cross-Sectional Survey: A public health agency conducts a cross-sectional survey to assess the prevalence of smoking among different age groups in a specific region during a particular year.
  • Longitudinal Study: A developmental psychologist follows a group of children from infancy through adolescence to study their cognitive, emotional, and social development over time.
  • Meta-Analysis: Researchers aggregate and analyze the results of multiple studies on the effectiveness of a specific type of therapy to provide a comprehensive overview of its outcomes.
  • Mixed-Methods Research: A sociologist combines surveys and in-depth interviews to study the impact of a community development program on residents’ quality of life.
  • Grounded Theory: A sociologist conducts interviews with homeless individuals to develop a theory explaining the factors that contribute to homelessness and the strategies they use to cope.
  • Simulation and Modeling: Climate scientists use computer models to simulate the effects of various greenhouse gas emission scenarios on global temperatures and sea levels.
  • Case-Control Study: Epidemiologists investigate a disease outbreak by comparing a group of individuals who contracted the disease (cases) with a group of individuals who did not (controls) to identify potential risk factors.

These examples demonstrate the diversity of research designs used in different fields to address a wide range of research questions and objectives. Researchers select the most appropriate design based on the specific context and goals of their study.

Learn more: What is Competitive Research?


Research Design | Step-by-Step Guide with Examples

Published on 5 May 2022 by Shona McCombes . Revised on 20 March 2023.

A research design is a strategy for answering your research question  using empirical data. Creating a research design means making decisions about:

  • Your overall aims and approach
  • The type of research design you’ll use
  • Your sampling methods or criteria for selecting subjects
  • Your data collection methods
  • The procedures you’ll follow to collect data
  • Your data analysis methods

A well-planned research design helps ensure that your methods match your research aims and that you use the right kind of analysis for your data.

Table of contents

  • Step 1: Consider your aims and approach
  • Step 2: Choose a type of research design
  • Step 3: Identify your population and sampling method
  • Step 4: Choose your data collection methods
  • Step 5: Plan your data collection procedures
  • Step 6: Decide on your data analysis strategies
  • Frequently asked questions

Step 1: Consider your aims and approach

Before you can start designing your research, you should already have a clear idea of the research question you want to investigate.

There are many different ways you could go about answering this question. Your research design choices should be driven by your aims and priorities – start by thinking carefully about what you want to achieve.

The first choice you need to make is whether you’ll take a qualitative or quantitative approach.

Qualitative research designs tend to be more flexible and inductive , allowing you to adjust your approach based on what you find throughout the research process.

Quantitative research designs tend to be more fixed and deductive , with variables and hypotheses clearly defined in advance of data collection.

It’s also possible to use a mixed methods design that integrates aspects of both approaches. By combining qualitative and quantitative insights, you can gain a more complete picture of the problem you’re studying and strengthen the credibility of your conclusions.

Practical and ethical considerations when designing research

As well as scientific considerations, you need to think practically when designing your research. If your research involves people or animals, you also need to consider research ethics .

  • How much time do you have to collect data and write up the research?
  • Will you be able to gain access to the data you need (e.g., by travelling to a specific location or contacting specific people)?
  • Do you have the necessary research skills (e.g., statistical analysis or interview techniques)?
  • Will you need ethical approval ?

At each stage of the research design process, make sure that your choices are practically feasible.

Step 2: Choose a type of research design

Within both qualitative and quantitative approaches, there are several types of research design to choose from. Each type provides a framework for the overall shape of your research.

Types of quantitative research designs

Quantitative designs can be split into four main types. Experimental and quasi-experimental designs allow you to test cause-and-effect relationships, while descriptive and correlational designs allow you to measure variables and describe relationships between them.

With descriptive and correlational designs, you can get a clear picture of characteristics, trends, and relationships as they exist in the real world. However, you can’t draw conclusions about cause and effect (because correlation doesn’t imply causation ).

Experiments are the strongest way to test cause-and-effect relationships without the risk of other variables influencing the results. However, their controlled conditions may not always reflect how things work in the real world. They’re often also more difficult and expensive to implement.

Types of qualitative research designs

Qualitative designs are less strictly defined. This approach is about gaining a rich, detailed understanding of a specific context or phenomenon, and you can often be more creative and flexible in designing your research.

Common types of qualitative design include case studies, ethnography, grounded theory, and phenomenological research. They often have similar approaches in terms of data collection, but focus on different aspects when analysing the data.

Step 3: Identify your population and sampling method

Your research design should clearly define who or what your research will focus on, and how you’ll go about choosing your participants or subjects.

In research, a population is the entire group that you want to draw conclusions about, while a sample is the smaller group of individuals you’ll actually collect data from.

Defining the population

A population can be made up of anything you want to study – plants, animals, organisations, texts, countries, etc. In the social sciences, it most often refers to a group of people.

For example, will you focus on people from a specific demographic, region, or background? Are you interested in people with a certain job or medical condition, or users of a particular product?

The more precisely you define your population, the easier it will be to gather a representative sample.

Sampling methods

Even with a narrowly defined population, it’s rarely possible to collect data from every individual. Instead, you’ll collect data from a sample.

To select a sample, there are two main approaches: probability sampling and non-probability sampling . The sampling method you use affects how confidently you can generalise your results to the population as a whole.

Probability sampling is the most statistically valid option, but it’s often difficult to achieve unless you’re dealing with a very small and accessible population.

For practical reasons, many studies use non-probability sampling, but it’s important to be aware of the limitations and carefully consider potential biases. You should always make an effort to gather a sample that’s as representative as possible of the population.
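
The sketch below contrasts a simple random sample with a proportionate stratified sample, assuming a hypothetical sampling frame of 300 students with a known field of study for each person; it is only meant to show the mechanics of the two approaches.

```python
# Minimal sketch of two probability sampling approaches on a hypothetical frame.
import random

random.seed(7)
frame = [("student_%03d" % i, "science" if i % 3 else "arts") for i in range(1, 301)]

# Simple random sample: every individual has an equal chance of selection.
simple_sample = random.sample(frame, 30)

# Stratified sample: sample within each stratum in proportion to its size.
strata = {}
for person, field in frame:
    strata.setdefault(field, []).append(person)

stratified_sample = []
for field, members in strata.items():
    n = round(30 * len(members) / len(frame))
    stratified_sample += random.sample(members, n)

print(len(simple_sample), len(stratified_sample))  # both yield 30 participants
```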

Case selection in qualitative research

In some types of qualitative designs, sampling may not be relevant.

For example, in an ethnography or a case study, your aim is to deeply understand a specific context, not to generalise to a population. Instead of sampling, you may simply aim to collect as much data as possible about the context you are studying.

In these types of design, you still have to carefully consider your choice of case or community. You should have a clear rationale for why this particular case is suitable for answering your research question.

For example, you might choose a case study that reveals an unusual or neglected aspect of your research problem, or you might choose several very similar or very different cases in order to compare them.

Step 4: Choose your data collection methods

Data collection methods are ways of directly measuring variables and gathering information. They allow you to gain first-hand knowledge and original insights into your research problem.

You can choose just one data collection method, or use several methods in the same study.

Survey methods

Surveys allow you to collect data about opinions, behaviours, experiences, and characteristics by asking people directly. There are two main survey methods to choose from: questionnaires and interviews.

Observation methods

Observations allow you to collect data unobtrusively, observing characteristics, behaviours, or social interactions without relying on self-reporting.

Observations may be conducted in real time, taking notes as you observe, or you might make audiovisual recordings for later analysis. They can be qualitative or quantitative.

Other methods of data collection

There are many other ways you might collect data depending on your field and topic.

If you’re not sure which methods will work best for your research design, try reading some papers in your field to see what data collection methods they used.

Secondary data

If you don’t have the time or resources to collect data from the population you’re interested in, you can also choose to use secondary data that other researchers already collected – for example, datasets from government surveys or previous studies on your topic.

With this raw data, you can do your own analysis to answer new research questions that weren’t addressed by the original study.

Using secondary data can expand the scope of your research, as you may be able to access much larger and more varied samples than you could collect yourself.

However, it also means you don’t have any control over which variables to measure or how to measure them, so the conclusions you can draw may be limited.

Step 5: Plan your data collection procedures

As well as deciding on your methods, you need to plan exactly how you’ll use these methods to collect data that’s consistent, accurate, and unbiased.

Planning systematic procedures is especially important in quantitative research, where you need to precisely define your variables and ensure your measurements are reliable and valid.

Operationalisation

Some variables, like height or age, are easily measured. But often you’ll be dealing with more abstract concepts, like satisfaction, anxiety, or competence. Operationalisation means turning these fuzzy ideas into measurable indicators.

If you’re using observations , which events or actions will you count?

If you’re using surveys , which questions will you ask and what range of responses will be offered?

You may also choose to use or adapt existing materials designed to measure the concept you’re interested in – for example, questionnaires or inventories whose reliability and validity has already been established.

Reliability and validity

Reliability means your results can be consistently reproduced , while validity means that you’re actually measuring the concept you’re interested in.

For valid and reliable results, your measurement materials should be thoroughly researched and carefully designed. Plan your procedures to make sure you carry out the same steps in the same way for each participant.

If you’re developing a new questionnaire or other instrument to measure a specific concept, running a pilot study allows you to check its validity and reliability in advance.
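
One common reliability check on pilot data is Cronbach's alpha (internal consistency). The sketch below computes it for an assumed four-item instrument answered by five pilot respondents; the data and the 0.7 rule of thumb are illustrative, not prescriptive.

```python
# Minimal sketch of Cronbach's alpha on hypothetical pilot data.
import numpy as np

responses = np.array([
    [4, 5, 4, 4],
    [2, 2, 3, 2],
    [5, 4, 5, 5],
    [3, 3, 3, 4],
    [4, 4, 5, 4],
])  # rows = respondents, columns = questionnaire items

k = responses.shape[1]
item_variances = responses.var(axis=0, ddof=1)
total_variance = responses.sum(axis=1).var(ddof=1)
alpha = (k / (k - 1)) * (1 - item_variances.sum() / total_variance)
print(f"Cronbach's alpha = {alpha:.2f}")  # values near or above 0.7 are often
                                          # treated as acceptable reliability
```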

Sampling procedures

As well as choosing an appropriate sampling method, you need a concrete plan for how you’ll actually contact and recruit your selected sample.

That means making decisions about things like:

  • How many participants do you need for an adequate sample size?
  • What inclusion and exclusion criteria will you use to identify eligible participants?
  • How will you contact your sample – by mail, online, by phone, or in person?

If you’re using a probability sampling method, it’s important that everyone who is randomly selected actually participates in the study. How will you ensure a high response rate?

If you’re using a non-probability method, how will you avoid bias and ensure a representative sample?

Data management

It’s also important to create a data management plan for organising and storing your data.

Will you need to transcribe interviews or perform data entry for observations? You should anonymise and safeguard any sensitive data, and make sure it’s backed up regularly.

Keeping your data well organised will save time when it comes to analysing them. It can also help other researchers validate and add to your findings.

Step 6: Decide on your data analysis strategies

On their own, raw data can’t answer your research question. The last step of designing your research is planning how you’ll analyse the data.

Quantitative data analysis

In quantitative research, you’ll most likely use some form of statistical analysis . With statistics, you can summarise your sample data, make estimates, and test hypotheses.

Using descriptive statistics , you can summarise your sample data in terms of:

  • The distribution of the data (e.g., the frequency of each score on a test)
  • The central tendency of the data (e.g., the mean to describe the average score)
  • The variability of the data (e.g., the standard deviation to describe how spread out the scores are)

The specific calculations you can do depend on the level of measurement of your variables.
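
As a minimal illustration, the sketch below computes those descriptive summaries for a small set of hypothetical test scores using Python's standard library.

```python
# Minimal sketch of descriptive statistics for hypothetical test scores.
import statistics
from collections import Counter

scores = [55, 60, 60, 65, 70, 70, 70, 75, 80, 95]

print("Distribution:", Counter(scores))              # frequency of each score
print("Mean:", statistics.mean(scores))              # central tendency
print("Median:", statistics.median(scores))
print("Std dev:", round(statistics.stdev(scores), 2))  # variability (spread)
```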

Using inferential statistics , you can:

  • Make estimates about the population based on your sample data.
  • Test hypotheses about a relationship between variables.

Regression and correlation tests look for associations between two or more variables, while comparison tests (such as t tests and ANOVAs ) look for differences in the outcomes of different groups.

Your choice of statistical test depends on various aspects of your research design, including the types of variables you’re dealing with and the distribution of your data.
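
For example, a comparison between two independent groups is often run as a t test. The sketch below uses SciPy's `ttest_ind` on hypothetical scores; which test is appropriate in a real study depends on your design and the assumptions your data meet.

```python
# Minimal sketch of an inferential comparison test, assuming hypothetical scores
# from two independent groups (e.g. two teaching methods).
from scipy.stats import ttest_ind

group_a = [72, 75, 78, 71, 69, 74, 77]
group_b = [65, 70, 68, 66, 71, 64, 69]

t_stat, p_value = ttest_ind(group_a, group_b)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
# A p-value below the chosen significance level would suggest a difference in group
# means beyond what sampling variation alone would readily explain.
```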

Qualitative data analysis

In qualitative research, your data will usually be very dense with information and ideas. Instead of summing it up in numbers, you’ll need to comb through the data in detail, interpret its meanings, identify patterns, and extract the parts that are most relevant to your research question.

Two of the most common approaches to doing this are thematic analysis and discourse analysis .

There are many other ways of analysing qualitative data depending on the aims of your research. To get a sense of potential approaches, try reading some qualitative research papers in your field.

Frequently asked questions

A sample is a subset of individuals from a larger population. Sampling means selecting the group that you will actually collect data from in your research.

For example, if you are researching the opinions of students in your university, you could survey a sample of 100 students.

Statistical sampling allows you to test a hypothesis about the characteristics of a population. There are various sampling methods you can use to ensure that your sample is representative of the population as a whole.

Operationalisation means turning abstract conceptual ideas into measurable observations.

For example, the concept of social anxiety isn’t directly observable, but it can be operationally defined in terms of self-rating scores, behavioural avoidance of crowded places, or physical anxiety symptoms in social situations.

Before collecting data , it’s important to consider how you will operationalise the variables that you want to measure.

The research methods you use depend on the type of data you need to answer your research question .

  • If you want to measure something or test a hypothesis , use quantitative methods . If you want to explore ideas, thoughts, and meanings, use qualitative methods .
  • If you want to analyse a large amount of readily available data, use secondary data. If you want data specific to your purposes with control over how they are generated, collect primary data.
  • If you want to establish cause-and-effect relationships between variables , use experimental methods. If you want to understand the characteristics of a research subject, use descriptive methods.


Research Design: What it is, Elements & Types

Can you imagine doing research without a plan? Probably not. When we discuss a strategy to collect, study, and evaluate data, we talk about research design. This design addresses problems and creates a consistent and logical model for data analysis. Let’s learn more about it.

What is Research Design?

Research design is the framework of research methods and techniques chosen by a researcher to conduct a study. The design allows researchers to sharpen the research methods suitable for the subject matter and set up their studies for success.

The design specifies the type of research (experimental, survey, correlational, semi-experimental, review) and its sub-type (for example, an experimental design or a descriptive case study).

A research design covers three main aspects:

  • Data collection
  • Measurement
  • Data Analysis

The research problem an organization faces will determine the design, not vice-versa. The design phase of a study determines which tools to use and how they are used.

The Process of Research Design

The research design process is a systematic and structured approach to conducting research. The process is essential to ensure that the study is valid, reliable, and produces meaningful results.

  • Consider your aims and approaches: Determine the research questions and objectives, and identify the theoretical framework and methodology for the study.
  • Choose a type of Research Design: Select the appropriate research design, such as experimental, correlational, survey, case study, or ethnographic, based on the research questions and objectives.
  • Identify your population and sampling method: Determine the target population and sample size, and choose the sampling method, such as random sampling, stratified sampling, or convenience sampling.
  • Choose your data collection methods: Decide on the methods, such as surveys, interviews, observations, or experiments, and select the appropriate instruments or tools for collecting data.
  • Plan your data collection procedures: Develop a plan for data collection, including the timeframe, location, and personnel involved, and ensure ethical considerations.
  • Decide on your data analysis strategies: Select the appropriate data analysis techniques, such as statistical analysis , content analysis, or discourse analysis, and plan how to interpret the results.

The process of research design is a critical step in conducting research. By following the steps of research design, researchers can ensure that their study is well-planned, ethical, and rigorous.

Research Design Elements

Impactful research usually minimises bias in the data and increases trust in the accuracy of collected data. A design that produces the smallest margin of error in experimental research is generally considered the desired outcome. The essential elements are:

  • Accurate purpose statement
  • Techniques to be implemented for collecting and analyzing research
  • The method applied for analyzing collected details
  • Type of research methodology
  • Probable objections to research
  • Settings for the research study
  • Measurement of analysis

Characteristics of Research Design

A proper design sets your study up for success. Successful research studies provide insights that are accurate and unbiased. You’ll need to create a survey that meets all of the main characteristics of a design. There are four key characteristics:

  • Neutrality: When you set up your study, you may have to make assumptions about the data you expect to collect. The results projected in the research should be free from research bias and neutral. Understand opinions about the final evaluated scores and conclusions from multiple individuals and consider those who agree with the results.
  • Reliability: With regularly conducted research, the researcher expects similar results every time. You’ll only be able to reach the desired results if your design is reliable. Your plan should indicate how to form research questions to ensure the standard of results.
  • Validity: There are multiple measuring tools available. However, the only correct measuring tools are those which help a researcher in gauging results according to the objective of the research. The  questionnaire  developed from this design will then be valid.
  • Generalization:  The outcome of your design should apply to a population and not just a restricted sample . A generalized method implies that your survey can be conducted on any part of a population with similar accuracy.

The above factors affect how respondents answer the research questions, so they should balance all the above characteristics in a good design. If you want, you can also learn about Selection Bias through our blog.

Research Design Types

A researcher must clearly understand the various types to select which model to implement for a study. Like the research itself, the design of your analysis can be broadly classified into quantitative and qualitative.

Qualitative research

Qualitative research explores relationships between collected data and observations without relying on numerical measurement or statistical calculation. It is suited to questions that statistics alone cannot answer about a naturally existing phenomenon. Researchers rely on qualitative methods, such as observation and interviews, to understand “why” a particular theory exists and “what” respondents have to say about it.

Quantitative research

Quantitative research is appropriate when statistical conclusions are needed to generate actionable insights. Numbers provide a clearer perspective for making critical business decisions, and insights drawn from numerical data and analysis are highly effective when making decisions about a business’s future.

Qualitative Research vs Quantitative Research

In summary, qualitative research is more exploratory and focuses on understanding the subjective experiences of individuals, while quantitative research focuses on objective data and statistical analysis.

You can further break down the types of research design into five categories:


1. Descriptive: In a descriptive design, a researcher is solely interested in describing the situation or case under study. It is a theory-based design created by gathering, analyzing, and presenting the collected data, which helps others better understand the what, when, and where of the research problem and the need for the research. If the problem statement is not yet clear, you can conduct exploratory research first.

2. Experimental: Experimental research establishes a cause-and-effect relationship. It is a causal research design in which one observes the impact of an independent variable on a dependent variable, for example, the influence of price on customer satisfaction or brand loyalty. It is an efficient research method because it directly tests what drives a problem.

The independent variable is manipulated to monitor the change it produces in the dependent variable. The social sciences often use this design to observe human behavior by comparing two groups. Researchers can have participants change their actions and study how the people around them react to better understand social psychology.

3. Correlational research: Correlational research is a non-experimental research technique that helps researchers establish a relationship between two closely connected variables. No variables are manipulated and no assumption of causality is made; statistical analysis techniques are used to calculate the strength of the relationship between them. This type of research requires measurements of at least two variables from the same group of participants.

A correlation coefficient, whose value ranges between -1 and +1, quantifies the relationship between two variables. A coefficient towards +1 indicates a positive relationship between the variables, while a coefficient towards -1 indicates a negative relationship.
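As a hedged illustration of this interpretation, the short Python sketch below computes a Pearson correlation coefficient for two invented variables (the names and values are made up for demonstration only):

    # Requires Python 3.10+ for statistics.correlation
    from statistics import correlation

    hours_of_exercise = [1, 2, 3, 4, 5, 6]   # hypothetical data
    stress_score = [9, 8, 6, 5, 3, 2]        # hypothetical data

    r = correlation(hours_of_exercise, stress_score)
    print(f"Pearson r = {r:.2f}")  # a value near -1 indicates a strong negative relationship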

4. Diagnostic research: In diagnostic design, the researcher is looking to evaluate the underlying cause of a specific topic or phenomenon. This method helps one learn more about the factors that create troublesome situations. 

This design has three parts of the research:

  • Inception of the issue
  • Diagnosis of the issue
  • Solution for the issue

5. Explanatory research : Explanatory design uses a researcher’s ideas and thoughts on a subject to further explore their theories. The study explains unexplored aspects of a subject and details the research questions’ what, how, and why.

Benefits of Research Design

There are several benefits to having a well-designed research plan, including:

  • Clarity of research objectives: Research design provides a clear understanding of the research objectives and the desired outcomes.
  • Increased validity and reliability: A sound research design helps to minimize the risk of bias and to control extraneous variables, which increases the validity and reliability of the results.
  • Improved data collection: Research design helps to ensure that the right data is collected, and that it is collected systematically and consistently.
  • Better data analysis: Research design helps ensure that the collected data can be analyzed effectively, providing meaningful insights and conclusions.
  • Improved communication: A well-designed study helps ensure that results are communicated clearly and convincingly to the research team and external stakeholders.
  • Efficient use of resources: By reducing the risk of waste and maximizing the impact of the research, a good design helps to ensure that resources are used efficiently.

A well-designed research plan is essential for successful research: it provides clear and meaningful insights and ensures that resources are used effectively.

QuestionPro offers a comprehensive solution for researchers looking to conduct research. With its user-friendly interface, robust data collection and analysis tools, and the ability to integrate results from multiple sources, QuestionPro provides a versatile platform for designing and executing research projects.



How to Write a Research Design – Guide with Examples

Published by Alaxendra Bets on August 14th, 2021; revised on October 3, 2023

A research design is a structure that combines different components of research. It involves the use of different data collection and data analysis techniques logically to answer the  research questions .

It would be best to make some decisions about addressing the research questions adequately before starting the research process, which is achieved with the help of the research design.

Below are the key aspects of the decision-making process:

  • Data type required for research
  • Research resources
  • Participants required for research
  • Hypothesis based upon research question(s)
  • Data analysis  methodologies
  • Variables (Independent, dependent, and confounding)
  • The location and timescale for conducting the research
  • The time period required for research

The research design provides the strategy of investigation for your project. Furthermore, it defines the parameters and criteria to compile the data to evaluate results and conclude.

Your project’s validity depends on the data collection and  interpretation techniques.  A strong research design reflects a strong  dissertation , scientific paper, or research proposal .

Steps of research design

Step 1: Establish Priorities for Research Design

Before conducting any research study, you must address an important question: how will you create your research design?

The research design depends on the researcher’s priorities and choices because every research has different priorities. For a complex research study involving multiple methods, you may choose to have more than one research design.

Multimethodology or multimethod research includes using more than one data collection method or research in a research study or set of related studies.

If one research design is weak in one area, then another research design can cover that weakness. For instance, a  dissertation analyzing different situations or cases will have more than one research design.

For example:

  • Experimental research involves experimental investigation and laboratory experience, but it does not accurately investigate the real world.
  • Quantitative research is good for the  statistical part of the project, but it may not provide an in-depth understanding of the  topic .
  • Also, correlational research will not provide experimental results because it is a technique that assesses the statistical relationship between two variables.

While scientific considerations are a fundamental aspect of the research design, it is equally important that the researcher thinks practically before deciding on its structure. Here are some questions that you should think about:

  • Do you have enough time to gather data and complete the write-up?
  • Will you be able to collect the necessary data by interviewing a specific person or visiting a specific location?
  • Do you have in-depth knowledge about the  different statistical analysis and data collection techniques to address the research questions  or test the  hypothesis ?

If you think that the chosen research design cannot answer the research questions properly, you can refine your research questions to gain better insight.

Step 2: Data Type you Need for Research

Decide on the type of data you need for your research. The type of data you need to collect depends on your research questions or research hypothesis. Two types of research data can be used to answer the research questions:

  • Primary data vs. secondary data
  • Qualitative data vs. quantitative data

Also see: Research methods, design, and analysis.


Step 3: Data Collection Techniques

Once you have selected the type of research to answer your research question, you need to decide where and how to collect the data.

It is time to determine your research method to address the  research problem . Research methods involve procedures, techniques, materials, and tools used for the study.

For instance, a dissertation research design includes the different resources and data collection techniques and helps establish your  dissertation’s structure .

The following table summarises the characteristics of the most popularly employed research methods.

[Table: Research Methods]

Step 4: Procedure of Data Analysis

Use of the correct data and statistical analysis technique is necessary for the validity of your research. Therefore, you need to be certain about the data type that would best address the research problem. Choosing an appropriate analysis method is the final step of the research design. It can be split into two main categories:

Quantitative Data Analysis

The quantitative data analysis technique involves analyzing the numerical data with the help of applications such as SPSS, Stata, Excel, or OriginLab.

This data analysis strategy examines the data’s spread, frequencies, averages, and more. The research question and the hypothesis must be established to identify the variables for testing.
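For instance, the following minimal Python sketch (standard library only, with invented satisfaction ratings) shows the kind of frequencies and averages such a strategy produces:

    from statistics import mean, variance
    from collections import Counter

    satisfaction_scores = [4, 5, 3, 4, 5, 2, 4, 5, 3, 4]  # made-up 1-5 ratings

    print("mean:", mean(satisfaction_scores))
    print("variance:", variance(satisfaction_scores))
    print("frequencies:", Counter(satisfaction_scores))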

Qualitative Data Analysis

Qualitative data analysis of figures, themes, and words allows for flexibility and draws on the researcher’s informed interpretation. This means that the researcher’s primary focus will be interpreting patterns, tendencies, and accounts, and understanding their implications and social framework.

You should be clear about your research objectives before starting to analyze the data. For example, ask yourself whether you need to explain respondents’ experiences and insights, or whether you also need to evaluate their responses with reference to a particular social framework.

Step 5: Write your Research Proposal

The research design is an important component of a research proposal because it plans the project’s execution. You can share it with your supervisor, who can evaluate the feasibility of the project and the likely strength of the results and conclusion.

Read our guidelines to write a research proposal  if you have already formulated your research design. The research proposal is written in the future tense because you are writing your proposal before conducting research.

The  research methodology  or research design, on the other hand, is generally written in the past tense.

How to Write a Research Design – Conclusion

A research design is the plan, structure, and strategy of investigation conceived to answer the research question and test the hypothesis. The dissertation research design can be classified based on the type of data and the type of analysis.

The five steps outlined above are the answer to how to write a research design. Follow them to formulate a sound research design for your dissertation.

ResearchProspect writers have years of experience creating research designs that align with the dissertation’s aim and objectives. If you are struggling with your dissertation methodology chapter, you might want to look at our dissertation part-writing service.

Our dissertation writers can also help you with the full dissertation paper . No matter how urgent or complex your need may be, ResearchProspect can help. We also offer PhD level research paper writing services.

Frequently Asked Questions

What is research design?

Research design is a systematic plan that guides the research process, outlining the methodology and procedures for collecting and analysing data. It determines the structure of the study, ensuring the research question is answered effectively, reliably, and validly. It serves as the blueprint for the entire research project.

How to write a research design?

To write a research design, define your research question, identify the research method (qualitative, quantitative, or mixed), choose data collection techniques (e.g., surveys, interviews), determine the sample size and sampling method, outline data analysis procedures, and highlight potential limitations and ethical considerations for the study.

How to write the design section of a research paper?

In the design section of a research paper, describe the research methodology chosen and justify its selection. Outline the data collection methods, participants or samples, instruments used, and procedures followed. Detail any experimental controls, if applicable. Ensure clarity and precision to enable replication of the study by other researchers.

How to write a research design in methodology?

To write a research design in methodology, clearly outline the research strategy (e.g., experimental, survey, case study). Describe the sampling technique, participants, and data collection methods. Detail the procedures for data collection and analysis. Justify choices by linking them to research objectives, addressing reliability and validity.


The Four Types of Research Design — Everything You Need to Know

Jenny Romanchuk

Updated: December 11, 2023

Published: January 18, 2023

When you conduct research, you need to have a clear idea of what you want to achieve and how to accomplish it. A good research design enables you to collect accurate and reliable data to draw valid conclusions.


In this blog post, we'll outline the key features of the four common types of research design with real-life examples from Under Armour, Carmex, and more. Then, you can easily choose the right approach for your project.


What is research design?


Research design is the process of planning and executing a study to answer specific questions. This process allows you to test hypotheses in the business or scientific fields.

Research design involves choosing the right methodology, selecting the most appropriate data collection methods, and devising a plan (or framework) for analyzing the data. In short, a good research design helps us to structure our research.

Marketers use different types of research design when conducting research .

There are four common types of research design — descriptive, correlational, experimental, and diagnostic designs. Let’s take a look at each in more detail.

Researchers use different designs to accomplish different research objectives. Here, we'll discuss how to choose the right type, the benefits of each, and use cases.

Research can also be classified as quantitative or qualitative at a higher level. Some experiments exhibit both qualitative and quantitative characteristics.


Experimental

An experimental design is used when the researcher wants to examine how variables interact with each other. The researcher manipulates one variable (the independent variable) and observes the effect on another variable (the dependent variable).

In other words, the researcher wants to test a causal relationship between two or more variables.

In marketing, an example of experimental research would be comparing the effects of a television commercial versus an online advertisement conducted in a controlled environment (e.g. a lab). The objective of the research is to test which advertisement gets more attention among people of different age groups, gender, etc.

Another example is a study of the effect of music on productivity. A researcher assigns participants to one of two groups, those who listen to music while working and those who don't, and measures their productivity.
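As a rough sketch of how such a two-group comparison might be analyzed, the Python example below applies an independent-samples t-test to invented productivity scores (the data and group sizes are illustrative, not drawn from any real study):

    from scipy import stats

    productivity_music = [42, 45, 39, 47, 44, 41, 46]     # tasks completed, hypothetical
    productivity_no_music = [38, 40, 36, 41, 39, 37, 42]  # tasks completed, hypothetical

    t_stat, p_value = stats.ttest_ind(productivity_music, productivity_no_music)
    print(f"t = {t_stat:.2f}, p = {p_value:.3f}")  # a small p-value suggests a group difference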

The main benefit of an experimental design is that it allows the researcher to draw causal relationships between variables.

One limitation: This research requires a great deal of control over the environment and participants, making it difficult to replicate in the real world. In addition, it’s quite costly.

Best for: Testing a cause-and-effect relationship (i.e., the effect of an independent variable on a dependent variable).

Correlational

A correlational design examines the relationship between two or more variables without intervening in the process.

Correlational design allows the analyst to observe natural relationships between variables. This results in data being more reflective of real-world situations.

For example, marketers can use correlational design to examine the relationship between brand loyalty and customer satisfaction. In particular, the researcher would look for patterns or trends in the data to see if there is a relationship between these two entities.

Similarly, you can study the relationship between physical activity and mental health. The analyst here would ask participants to complete surveys about their physical activity levels and mental health status. Data would show how the two variables are related.

Best for: Understanding the extent to which two or more variables are associated with each other in the real world.

Descriptive

Descriptive research refers to a systematic process of observing and describing what a subject does without influencing them.

Methods include surveys, interviews, case studies, and observations. Descriptive research aims to gather an in-depth understanding of a phenomenon and answers when/what/where.

SaaS companies use descriptive design to understand how customers interact with specific features. Findings can be used to spot patterns and roadblocks.

For instance, product managers can use screen recordings by Hotjar to observe in-app user behavior. This way, the team can precisely understand what is happening at a certain stage of the user journey and act accordingly.

Brand24, a social listening tool, tripled its sign-up conversion rate from 2.56% to 7.42%, thanks to locating friction points in the sign-up form through screen recordings.


Carma Laboratories worked with research company MMR to measure customers’ reactions to the lip-care company’s packaging and product . The goal was to find the cause of low sales for a recently launched line extension in Europe.

The team moderated a live, online focus group. Participants were shown product samples, while AI and natural language processing (NLP) identified key themes in customer feedback.

This helped uncover key reasons for poor performance and guided changes in packaging.


Research Design in Business and Management, pp. 1–17

Introducing Research Designs

Stefan Hunziker & Michael Blankenagel

First Online: 10 November 2021


We define research design as a combination of decisions within a research process. These decisions enable us to make a specific type of argument by answering the research question. It is the implementation plan for the research study that allows reaching the desired (type of) conclusion. Different research designs make it possible to draw different conclusions. These conclusions produce various kinds of intellectual contributions. As all kinds of intellectual contributions are necessary to increase the body of knowledge, no research design is inherently better than another, only more appropriate to answer a specific question.


Alvesson, M., & Sköldberg, K. (2000). Reflexive methodology. SAGE.


Alvesson, M. (2004). Reflexive methodology: New vistas for qualitative research. SAGE.

Attia, M., & Edge, J. (2017). Be(com)ing a reflexive researcher: A developmental approach to research methodology. Open Review of Educational Research, 4 (1), 33–45.


Brahler, C. (2018). Chapter 9 “Validity in Experimental Design”. University of Dayton. Retrieved May 27, 2021, from https://www.coursehero.com/file/30778216/CHAPTER-9-VALIDITY-IN-EXPERIMENTAL-DESIGN-KEYdocx/ .

Brown, J. D. (1996). Testing in language programs. Prentice Hall Regents.

Cambridge University Press. (n.d.a). Design. In  Cambridge dictionary . Retrieved May 19, 2021, from  https://dictionary.cambridge.org/dictionary/english/design .

Cambridge University Press. (n.d.b). Method. In  Cambridge dictionary . Retrieved May 19, 2021, from https://dictionary.cambridge.org/dictionary/english/method .

Cambridge University Press. (n.d.c). Methodology. In  Cambridge dictionary . Retrieved June 8, 2021, from https://dictionary.cambridge.org/dictionary/english/methodology .

Charmaz, K. (2017). The power of constructivist grounded theory for critical inquiry. Qualitative Inquiry, 23 (1), 34–45.

Cohen, D. J., & Crabtree, B. F. (2008). Evaluative criteria for qualitative research in health care: Controversies and recommendations. Annals of Family Medicine, 6 (4), 331–339.

de Vaus, D. A. (2001). Research design in social research. Reprinted . SAGE.

Hall, W. A., & Callery, P. (2001). Enhancing the rigor of grounded theory: Incorporating reflexivity and relationality. Qualitative Health Research, 11 (2), 257–272.

Haynes, K. (2012). Reflexivity in qualitative research. In Qualitative organizational research: Core methods and current challenges (pp. 72–89).

Koch, T., & Harrington, A. (1998). Reconceptualizing rigour: The case for reflexivity. Journal of Advanced Nursing., 28 (4), 882–890.

Lincoln, Y. S., & Guba, E. G. (1985). Naturalistic Inquiry . Sage.

Malterud, K. (2001). Qualitative research: Standards, challenges and guidelines. The Lancet, 358 , 483–488.

Orr, K., & Bennett, M. (2009). Reflexivity in the co-production of academic-practitioner research. Qual Research in Orgs & Mgmt, 4, 85–102.

Trochim, W. (2005). Research methods: The concise knowledge base. Atomic Dog Pub.

Subramani, S. (2019). Practising reflexivity: Ethics, methodology and theory construction. Methodological Innovations , 12 (2).

Sue, V., & Ritter, L. (Eds.). (2007). Conducting online surveys . SAGE.

Yin, R. K. (1994). Discovering the future of the case study method in evaluation research. American Journal of Evaluation, 15(3), 283–290.

Yin, R. K. (2014). Case study research. Design and methods (5th ed.). SAGE.


Author information

Authors and affiliations.

Wirtschaft/IFZ – Campus Zug-Rotkreuz, Hochschule Luzern, Zug-Rotkreuz, Zug , Switzerland

Stefan Hunziker & Michael Blankenagel


Corresponding author

Correspondence to Stefan Hunziker.


Copyright information

© 2021 The Author(s), under exclusive license to Springer Fachmedien Wiesbaden GmbH, part of Springer Nature

About this chapter

Cite this chapter.

Hunziker, S., Blankenagel, M. (2021). Introducing Research Designs. In: Research Design in Business and Management. Springer Gabler, Wiesbaden. https://doi.org/10.1007/978-3-658-34357-6_1


DOI: https://doi.org/10.1007/978-3-658-34357-6_1

Published: 10 November 2021

Publisher Name: Springer Gabler, Wiesbaden

Print ISBN: 978-3-658-34356-9

Online ISBN: 978-3-658-34357-6

eBook Packages: Business and Economics (German Language)



Research design: the methodology for interdisciplinary research framework

Hilde Tobi 1 and Jarl K. Kampen 2

1 Biometris, Wageningen University and Research, PO Box 16, 6700 AA Wageningen, The Netherlands

2 Statua, Dept. of Epidemiology and Medical Statistics, Antwerp University, Venusstraat 35, 2000 Antwerp, Belgium
Many of today’s global scientific challenges require the joint involvement of researchers from different disciplinary backgrounds (social sciences, environmental sciences, climatology, medicine, etc.). Such interdisciplinary research teams face many challenges resulting from differences in training and scientific culture. Interdisciplinary education programs are required to train truly interdisciplinary scientists with respect to the critical factor skills and competences. For that purpose this paper presents the Methodology for Interdisciplinary Research (MIR) framework. The MIR framework was developed to help cross disciplinary borders, especially those between the natural sciences and the social sciences. The framework has been specifically constructed to facilitate the design of interdisciplinary scientific research, and can be applied in an educational program, as a reference for monitoring the phases of interdisciplinary research, and as a tool to design such research in a process approach. It is suitable for research projects of different sizes and levels of complexity, and it allows for a range of methods’ combinations (case study, mixed methods, etc.). The different phases of designing interdisciplinary research in the MIR framework are described and illustrated by real-life applications in teaching and research. We further discuss the framework’s utility in research design in landscape architecture, mixed methods research, and provide an outlook to the framework’s potential in inclusive interdisciplinary research, and last but not least, research integrity.

Introduction

Current challenges, e.g., energy, water, food security, one world health and urbanization, involve the interaction between humans and their environment. A (mono)disciplinary approach, be it a psychological, economical or technical one, is too limited to capture any one of these challenges. The study of the interaction between humans and their environment requires knowledge, ideas and research methodology from different disciplines (e.g., ecology or chemistry in the natural sciences, psychology or economy in the social sciences). So collaboration between natural and social sciences is called for (Walsh et al. 1975 ).

Over the past decades, different forms of collaboration have been distinguished although the terminology used is diverse and ambiguous. For the present paper, the term interdisciplinary research is used for (Aboelela et al. 2007 , p. 341):

any study or group of studies undertaken by scholars from two or more distinct scientific disciplines. The research is based upon a conceptual model that links or integrates theoretical frameworks from those disciplines, uses study design and methodology that is not limited to any one field, and requires the use of perspectives and skills of the involved disciplines throughout multiple phases of the research process.

Scientific disciplines (e.g., ecology, chemistry, biology, psychology, sociology, economy, philosophy, linguistics, etc.) are categorized into distinct scientific cultures: the natural sciences, the social sciences and the humanities (Kagan 2009 ). Interdisciplinary research may involve different disciplines within a single scientific culture, and it can also cross cultural boundaries as in the study of humans and their environment.

A systematic review of the literature on natural-social science collaboration (Fischer et al. 2011) confirmed the general impression that this collaboration is a challenge. The nearly 100 papers in their analytic set mentioned more instances of barriers than of opportunities (72 and 46, respectively). Four critical factors for success or failure in natural-social science collaboration were identified: the paradigms or epistemologies in the current (mono-disciplinary) sciences, the skills and competences of the scientists involved, the institutional context of the research, and the organization of collaborations (Fischer et al. 2011). The so-called “paradigm war” between neopositivists and constructivists within the social and behavioral sciences (Onwuegbuzie and Leech 2005) may complicate pragmatic collaboration further.

It has been argued that interdisciplinary education programs are required to train truly interdisciplinary scientists with respect to the critical factor skills and competences (Frischknecht 2000 ) and accordingly, some interdisciplinary programs have been developed since (Baker and Little 2006 ; Spelt et al. 2009 ). The overall effect of interdisciplinary programs can be expected to be small as most programs are mono-disciplinary and based on a single paradigm (positivist-constructivist, qualitative-quantitative; see e.g., Onwuegbuzie and Leech 2005 ). We saw in our methodology teaching, consultancy and research practices working with heterogeneous groups of students and staff, that most had received mono-disciplinary training with a minority that had received multidisciplinary training, with few exceptions within the same paradigm. During our teaching and consultancy for heterogeneous groups of students and staff aimed at designing interdisciplinary research, we built the framework for methodology in interdisciplinary research (MIR). With the MIR framework, we aspire to contribute to the critical factors skills and competences (Fischer et al. 2011 ) for social and natural sciences collaboration. Note that the scale of interdisciplinary research projects we have in mind may vary from comparably modest ones (e.g., finding a link between noise reducing asphalt and quality of life; Vuye et al. 2016 ) to very large projects (finding a link between anthropogenic greenhouse gas emissions, climate change, and food security; IPCC 2015 ).

In the following section of this paper we describe the MIR framework and elaborate on its components. The third section gives two examples of the application of the MIR framework. The paper concludes with a discussion of the MIR framework in the broader contexts of mixed methods research, inclusive research, and other promising strains of research.

The methodology in interdisciplinary research framework

Research as a process in the Methodology in Interdisciplinary Research framework

The Methodology for Interdisciplinary Research (MIR) framework was built on the process approach (Kumar 1999 ), because in the process approach, the research question or hypothesis is leading for all decisions in the various stages of research. That means that it helps the MIR framework to put the common goal of the researchers at the center, instead of the diversity of their respective backgrounds. The MIR framework also introduces an agenda: the research team needs to carefully think through different parts of the design of their study before starting its execution (Fig.  1 ). First, the team discusses the conceptual design of their study which contains the ‘why’ and ‘what’ of the research. Second, the team discusses the technical design of the study which contains the ‘how’ of the research. Only after the team agrees that the complete research design is sufficiently crystalized, the execution of the work (including fieldwork) starts.

Fig. 1: The Methodology of Interdisciplinary Research framework

Whereas the conceptual and technical designs are by definition interdisciplinary team work, the respective team members may do their (mono)disciplinary parts of fieldwork and data analysis on a modular basis (see Bruns et al. 2017 : p. 21). Finally, when all evidence is collected, an interdisciplinary synthesis of analyses follows which conclusions are input for the final report. This implies that the MIR framework allows for a range of scales of research projects, e.g., a mixed methods project and its smaller qualitative and quantitative modules, or a multi-national sustainability project and its national sociological, economic and ecological modules.

The conceptual design

Interdisciplinary research design starts with the “conceptual design” which addresses the ‘why’ and ‘what’ of a research project at a conceptual level to ascertain the common goals pivotal to interdisciplinary collaboration (Fischer et al. 2011). The conceptual design mostly involves activities such as thinking, exchanging interdisciplinary knowledge, reading and discussing. The product of the conceptual design is called the “conceptual framework”, which comprises the research objective (what is to be achieved by the research), the theory or theories that are central in the research project, the research questions (what knowledge is to be produced), and the (partial) operationalization of constructs and concepts that will be measured or recorded during execution. While the members of the interdisciplinary team and the commissioner of the research must reach a consensus about the research objective, the ‘why’, the focus in research design must be the production of the knowledge required to achieve that objective, the ‘what’.

With respect to the ‘why’ of a research project, an interdisciplinary team typically starts with a general aim as requested by the commissioner or funding agency, and a set of theories to formulate a research objective. This role of theory is not always obvious to students from the natural sciences, who tend to think in terms of ‘models’ with directly observable variables. On the other hand, students from the social sciences tend to think in theories with little attention to observable variables. In the MIR framework, models as simplified descriptions or explanations of what is studied in the natural sciences play the same role in informing research design, raising research questions, and informing how a concept is understood, as do theories in social science.

Research questions concern concepts, i.e. general notions or ideas based on theory or common sense that are multifaceted and not directly visible or measurable. For example, neither food security (with its many different facets) nor a person’s attitude towards food storage may be directly observed. The operationalization of concepts, the transformation of concepts into observable indicators, in interdisciplinary research requires multiple steps, each informed by theory. For instance, in line with particular theoretical frameworks, sustainability and food security may be seen as the composite of a social, an economic and an ecological dimension (e.g., Godfray et al. 2010 ).

As the concept of interest is multi-disciplinary and multi-dimensional, the interdisciplinary team will need to read, discuss and decide on how these dimensions and their indicators are weighted to measure the composite interdisciplinary concept to get the required interdisciplinary measurements. The resulting measure or measures for the interdisciplinary concept may be of the nominal, ordinal, interval and ratio level, or a combination thereof. This operationalization procedure is known as the port-folio approach to widely defined measurements (Tobi 2014 ). Only after the research team has finalized the operationalization of the concepts under study, the research questions and hypotheses can be made operational. For example, a module with descriptive research questions may now be turned into an operational one like, what are the means and variances of X1, X2, and X3 in a given population? A causal research question may take on the form, is X (a composite of X1, X2 and X3) a plausible cause for the presence or absence of Y? A typical qualitative module could study, how do people talk about X1, X2 and X3 in their everyday lives?
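As an illustration only (the indicator scores and weights below are invented and not part of the MIR framework), the following Python sketch shows how a weighted composite measure and the operational question about the means and variances of X1, X2 and X3 might look in practice:

    from statistics import mean, variance

    # Hypothetical per-respondent scores on a social, economic and ecological indicator
    X1 = [3.2, 4.1, 2.8, 3.9]
    X2 = [2.9, 3.5, 3.1, 4.0]
    X3 = [4.4, 3.8, 3.6, 4.2]
    weights = (0.4, 0.3, 0.3)  # weights agreed on by the team; purely illustrative

    composite = [weights[0] * a + weights[1] * b + weights[2] * c
                 for a, b, c in zip(X1, X2, X3)]

    for name, values in (("X1", X1), ("X2", X2), ("X3", X3), ("composite", composite)):
        print(name, "mean:", round(mean(values), 2), "variance:", round(variance(values), 2))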

The technical design

Members of an interdisciplinary team usually have had different training with respect to research methods, which makes discussing and deciding on the technical design more challenging but also potentially more creative than in a mono-disciplinary team. The technical design addresses the issues ‘how, where and when will research units be studied’ (study design), ‘how will measurement proceed’ (instrument selection or design), ‘how and how many research units will be recruited’ (sampling plan), and ‘how will collected data be analyzed and synthesized’ (analysis plan). The MIR framework provides the team a set of topics and their relationships to one another and to generally accepted quality criteria (see Fig.  1 ), which helps in designing this part of the project.

Interdisciplinary teams need to be pragmatic, as the research questions agreed on are leading in decisions on the data collection set-up (e.g., a cross-sectional study of inhabitants of a region, a laboratory experiment, a cohort study, a case control study, etc.), the so-called “study design” (e.g., Kumar 2014; De Vaus 2001; Adler and Clark 2011; Tobi and van den Brink 2017), instead of traditional ‘pet’ approaches. The typical study design for descriptive research questions and for research questions on associations is the cross-sectional study design. Longitudinal study designs are required to investigate development over time, and cause-effect relationships are ideally studied in experiments (e.g., Kumar 2014; Shipley 2016). Phenomenological questions concern a phenomenon about which little is known and which has to be studied in the environment where it takes place, which calls for a case study design (e.g., Adler and Clark 2011: p. 178). For each module, the study design is to be further explicated by the number of data collection waves, the level of control by the researcher and its reference period (e.g., Kumar 2014) to ensure the team’s common understanding.

Then, decisions about the way data is to be collected, e.g., by means of certified instruments, observation, interviews, questionnaires, queries on existing data bases, or a combination of these are to be made. It is especially important to discuss the role of the observer (researcher) as this is often a source of misunderstanding in interdisciplinary teams. In the sciences, the observer is usually considered a neutral outsider when reading a standardized measurement instrument (e.g., a pyranometer to measure incoming solar radiation). In contrast, in the social sciences, the observer may be (part of) the measurement instrument, for example in participant observation or when doing in-depth interviews. After all, in participant observation the researcher observes from a member’s perspective and influences what is observed owing to the researcher’s participation (Flick 2006 : p. 220). Similarly in interviews, by which we mean “a conversation that has a structure and a purpose determined by the one party—the interviewer” (Kvale 2007 : p. 7), the interviewer and the interviewee are part of the measurement instrument (Kvale and Brinkmann 2009 : p. 2). In on-line and mail questionnaires the interviewer is eliminated as part of the instrument by standardizing the questions and answer options. Queries on existing data bases refer to the use of secondary data or secondary analysis. Different disciplines tend to use different bibliographic data bases (e.g., CAB Abstracts, ABI/INFORM or ERIC) and different data repositories (e.g., the European Social Survey at europeansocialsurvey.org or the International Council for Science data repository hosted by www.pangaea.de ).

Depending on whether or not the available, existing, measurement instruments tally with the interdisciplinary operationalisations from the conceptual design, the research team may or may not need to design instruments. Note that in some cases the social scientists’ instinct may be to rely on a questionnaire whereas the collaboration with another discipline may result in more objective possibilities (e.g., compare asking people about what they do with surplus medication, versus measuring chemical components from their input into the sewer system). Instrument design may take on different forms, such as the design of a device (e.g., pyranometer), a questionnaire (Dillman 2007 ) or a part thereof (e.g., a scale see DeVellis 2012 ; Danner et al. 2016 ), an interview guide with topics or questions for the interviewees, or a data extraction form in the context of secondary analysis and literature review (e.g., the Cochrane Collaboration aiming at health and medical sciences or the Campbell Collaboration aiming at evidence based policies).

Researchers from different disciplines are inclined to think of different research objects (e.g., animals, humans or plots), which is where the (specific) research questions come in as these identify the (possibly different) research objects unambiguously. In general, research questions that aim at making an inventory, whether it is an inventory of biodiversity or of lodging, call for a random sampling design. Both in the biodiversity and lodging example, one may opt for random sampling of geographic areas by means of a list of coordinates. Studies that aim to explain a particular phenomenon in a particular context would call for a purposive sampling design (non-random selection). Because studies of biodiversity and housing obey the same laws in terms of appropriate sampling design for similar research questions, individual students and researchers are sensitized to commonalities of their respective (mono)disciplines. For example, a research team interested in the effects of landslides on a socio-ecological system may select for their study one village that suffered from landslides and one village that did not suffer from landslides that have other characteristics in common (e.g., kind of soil, land use, land property legislation, family structure, income distribution, et cetera).

The data analysis plan describes how data will be analysed, for each of the separate modules and for the project at large. In the context of a multi-disciplinary quantitative research project, the data analysis plan will list the intended uni-, bi- and multivariate analyses such as measures for distributions (e.g., means and variances), measures for association (e.g., Pearson Chi square or Kendall Tau) and data reduction and modelling techniques (e.g., factor analysis and multiple linear regression or structural equation modelling) for each of the research modules using the data collected. When applicable, it will describe interim analyses and follow-up rules. In addition to the plans at modular level, the data analysis plan must describe how the input from the separate modules, i.e. different analyses, will be synthesized to answer the overall research question. In case of mixed methods research, the particular type of mixed methods design chosen describes how, when, and to what extent the team will synthesize the results from the different modules.
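As an illustration of the kind of bivariate analyses such a plan might list, the sketch below computes a Kendall tau for two invented ordinal variables and a Pearson chi-square for an invented contingency table (both with scipy; all numbers are hypothetical):

    from scipy import stats

    # Ordinal ratings from two related survey questions (hypothetical)
    attitude = [1, 2, 2, 3, 4, 4, 5]
    behaviour = [1, 1, 2, 3, 3, 5, 5]
    tau, p_tau = stats.kendalltau(attitude, behaviour)
    print(f"Kendall tau = {tau:.2f} (p = {p_tau:.3f})")

    # A 2x2 contingency table: group membership vs. policy adoption (hypothetical counts)
    table = [[30, 10],
             [18, 22]]
    chi2, p_chi, dof, expected = stats.chi2_contingency(table)
    print(f"chi-square = {chi2:.2f}, df = {dof}, p = {p_chi:.3f}")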

Unfortunately, in our experience, when some of the research modules rely on a qualitative approach, teams tend to refrain from designing a data analysis plan before starting the field work. While absence of a data analysis plan may be regarded acceptable in fields that rely exclusively on qualitative research (e.g., ethnography), failure to communicate how data will be analysed and what potential evidence will be produced posits a deathblow to interdisciplinarity. For many researchers not familiar with qualitative research, the black box presented as “qualitative data analysis” is a big hurdle, and a transparent and systematic plan is a sine qua non for any scientific collaboration. The absence of a data analysis plan for all modules results in an absence of synthesis of perspectives and skills of the disciplines involved, and in separate (disciplinary) research papers or separate chapters in the research report without an answer to the overall research question. So, although researchers may find it hard to write the data analysis plan for qualitative data, it is pivotal in interdisciplinary research teams.

Similar to the quantitative data analysis plan, the qualitative data analysis plan presents the description of how the researcher will get acquainted with the data collected (e.g., by constructing a narrative summary per interviewee or a paired-comparison of essays). Additionally, the rules to decide on data saturation need be presented. Finally, the types of qualitative analyses are to be described in the data analysis plan. Because there is little or no standardized terminology in qualitative data analysis, it is important to include a precise description as well as references to the works that describe the method intended (e.g., domain analysis as described by Spradley 1979 ; or grounded theory by means of constant-comparison as described by Boeije 2009 ).

Integration

To benefit optimally from the research being interdisciplinary, the modules need to be brought together in the integration stage. The modules may be mono- or interdisciplinary and may rely on quantitative, qualitative or mixed methods approaches. So the MIR framework fits the view that distinguishes three multimethod approaches (quali–quali, quanti–quanti, and quali–quanti).

Although the MIR framework has not been designed with the intention to promote mixed methods research, it is suitable for the design of mixed methods research as the kind of research that calls for both quantitative and qualitative components (Creswell and Plano Clark 2011). Indeed, just like the pioneers in mixed methods research (Creswell and Plano Clark 2011: p. 2), the MIR framework deconstructs the package deals of paradigm and data to be collected. The synthesis of the different mono- or interdisciplinary modules may benefit from research done on “the unique challenges and possibilities of integration of qualitative and quantitative approaches” (Fetters and Molina-Azorin 2017: p. 5). We distinguish (sub)sets of modules being designed as convergent, sequential or embedded (adapted from mixed methods design, e.g., Creswell and Plano Clark 2011: pp. 69–70). Convergent modules, whether mono- or interdisciplinary, may be done in parallel and are integrated after completion. Sequential modules are done after one another and the first modules inform the later ones (this includes transformative and multiphase mixed methods designs). Embedded modules are intertwined. Here, modules depend on one another for data collection and analysis, and synthesis may be planned both during and after completion of the embedded modules.

Scientific quality and ethical considerations in the design of interdisciplinary research

A minimum set of jargon related to the assessment of scientific quality of research (e.g., triangulation, validity, reliability, saturation, etc.) can be found scattered in Fig.  1 . Some terms are reserved by particular paradigms, others may be seen in several paradigms with more or less subtle differences in meaning. In the latter case, it is important that team members are prepared to explain and share ownership of the term and respect the different meanings. By paying explicit attention to the quality concepts, researchers from different disciplines learn to appreciate each other’s concerns for good quality research and recognize commonalities. For example, the team may discuss measurement validity of both a standardized quantitative instrument and that of an interview and discover that the calibration of the machine serves a similar purpose as the confirmation of the guarantee of anonymity at the start of an interview.

Throughout the process of research design, ethics require explicit discussion among all stakeholders in the project. Ethical issues run through all components in the MIR framework in Fig.  1 . Where social and medical scientists may be more sensitive to ethical issues related to humans (e.g., the 1979 Belmont Report criteria of beneficence, justice, and respect), others may be more sensitive to issues related to animal welfare, ecology, legislation, the funding agency (e.g., implications for policy), data and information sharing (e.g., open access publishing), sloppy research practices, or long term consequences of the research. This is why ethics are an issue for the entire interdisciplinary team and cannot be discussed on project module level only.

The MIR framework in practice: two examples

Teaching research methodology to heterogeneous groups of students: institutional context and background of the MIR framework

Wageningen University and Research (WUR) advocates in its teaching and research an interdisciplinary approach to the study of global issues related to the motto “To explore the potential of nature to improve the quality of life.” Wageningen University’s student population is multidisciplinary and international (e.g., Tobi and Kampen 2013). Traditionally, this challenge of diversity in one classroom is met by covering a wide range of methodological topics and examples from different disciplines. However, when students of various programmes received methodological education in mixed classes, students of some disciplines would regard with disinterest or even disdain the methods and techniques of the other disciplines. Different disciplines, especially those from the qualitative and quantitative traditions in the social sciences (Onwuegbuzie and Leech 2005: p. 273), claim certain study designs, methods of data collection and analysis as their territory, a claim reflected in many textbooks. We found that students from a qualitative tradition would not be interested in, and would not even study, content like the design of experiments and quantitative data collection, while students from a quantitative tradition would ignore case study design and qualitative data collection. These students assumed they didn’t need any knowledge about ‘the other tradition’ for their future careers, despite the call for interdisciplinarity.

To enhance interdisciplinarity, WUR provides an MSc course, mandatory for most students, in which multi-disciplinary teams do research for a commissioner. Students reported difficulties similar to the ones found in the literature: miscommunication due to talking different scientific languages, and feelings of distrust and disrespect due to prejudice. This suggested that research methodology courses ought to help prepare for interdisciplinary collaboration by introducing a single methodological framework that 1) creates sensitivity to the benefits and challenges of interdisciplinary research by means of a common vocabulary and fosters respect for other disciplines, 2) starts from the research questions as pivotal in decision making on research methods, instead of tradition or ontology, and 3) allows available methodologies and methods to be potentially applicable to any scientific research problem.

Teaching with MIR—the conceptual framework

As a first step, we replaced textbooks with ones that reject the idea that any scientific tradition has exclusive ownership of any methodological approach or method. The MIR framework further guides our methodology teaching in two ways. First, it presents a logical sequence of topics (first conceptual design, then technical design; first research question(s) or hypotheses, then study design; etc.). Second, it allows for a conceptual separation of topics (e.g., study design from instrument design). Educational programmes at Wageningen University and Research consistently stress the vital importance of good research design. In fact, 50% of the mark in most BSc and MSc courses in research methodology is based on the assessment of a research proposal that students design in small (2-4 students) and heterogeneous (discipline, gender and nationality) groups. The research proposal must describe a project which can be executed in practice, and whose limitations (measurement, internal, and external validity) are carefully discussed.

Groups start by selecting a general research topic. They discuss together previously attained courses from a range of programs to identify personal and group interests, with the aim to reach an initial research objective and a general research question as input for the conceptual design. Often, their initial research objective and research question are too broad to be researchable (e.g., Kumar 2014 : p. 64; Adler and Clark 2011 : p. 71). In plenary sessions, the (basics of) critical assessment of empirical research papers is taught with special attention to the ‘what’ and ‘why’ section of research papers. During tutorials students generate research questions until the group agrees on a research objective, with one general research question that consists of a small set of specific research questions. Each of the specific research questions may stem from a different discipline, whereas answering the general research question requires integrating the answers to all specific research questions.

The group then identifies the key concepts in their research questions, while exchanging thoughts on possible attributes based on what they have learnt from previous courses (theories) and literature. When doing so they may judge the research question as too broad, in which case they will turn to the question strategies toolbox again. Once they agree on the formulation of the research questions and the choice of concepts, tasks are divided. In general, each student turns to the literature he/she is most familiar with or interested in, for the operationalization of the concept into measurable attributes and writes a paragraph or two about it. In the next meeting, the groups read and discuss the input and decide on the set-up and division of tasks with respect to the technical design.

Teaching with MIR—the technical framework

The technical part of research design distinguishes between study design, instrument design, sampling design, and the data analysis plan. In class, we first present students with a range of study designs (cross-sectional, experimental, etc.). Student groups select an appropriate study design by comparing the demands made by the research questions with criteria for internal validity. When a (specific) research question calls for a study design that is not considered practically feasible or ethically acceptable, they rephrase the research question until its demands tally with the characteristics of at least one ethical, feasible and internally valid study design.

While following plenary sessions in which different random and non-random sampling or selection strategies are taught, groups start working on their sampling design. The groups make two decisions informed by their research question: the population(s) of research units, and the requirements of the sampling strategy for each population. Like many other aspects of research design, this can be an iterative process. For example, suppose the research question mentioned “local policy makers,” which is too vague for a sampling design. Then the decision may be to limit the study to “policy makers at the municipality level in the Netherlands” and adapt the general and the specific research questions accordingly. Next, the group identifies whether the sample design needs to focus on diversity (e.g., when the objective is to make an inventory of possible local policies), representativeness (e.g., when the objective is to estimate the prevalence of types of local policies), or people with particular information (e.g., when the objective is to study people with experience of a given local policy). When a sample has to be representative, the students must produce an assessment of external validity, whereas when the aim is to map diversity they must discuss possible ways of source triangulation. Finally, in conjunction with the data analysis plan, students decide on the sample size and/or the saturation criteria.
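To make the three sampling aims above concrete, the following is a minimal, purely illustrative sketch in Python; the sampling frame, column names and sample sizes are invented for the example and are not part of the course material.

    # Purely illustrative sketch (hypothetical sampling frame and column names):
    # three selection strategies matching the three sampling aims discussed above.
    import pandas as pd

    frame = pd.DataFrame({
        "municipality": [f"municipality_{i}" for i in range(200)],
        "policy_type": ["subsidy", "regulation", "information", "infrastructure"] * 50,
        "experience_with_policy_x": [True, False, False, False] * 50,
    })

    # 1) Representativeness: a simple random sample, e.g. to estimate prevalence of policy types
    representative = frame.sample(n=40, random_state=1)

    # 2) Diversity: a few units per policy type, e.g. to map the range of local policies
    diverse = frame.groupby("policy_type", group_keys=False).sample(n=3, random_state=1)

    # 3) Particular information: purposively select only units with experience of policy X
    information_rich = frame[frame["experience_with_policy_x"]]

    print(len(representative), len(diverse), len(information_rich))

Which of the three selections is appropriate follows directly from the research question, in line with the decision logic described above.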

When the group has agreed on their population(s) and the strategy for recruiting research units, the next step is to finalize the technical aspects of operationalisation, i.e., to address exactly how information will be extracted from the research units. Depending on what is practically feasible in terms of measurement, the data collection instrument chosen may be a standardised one (e.g., a spectrograph, a questionnaire) or a less standardised one (e.g., semi-structured interviews, visual inspection). The students have to discuss the possibilities of method triangulation, and explain the possible weaknesses of their data collection plan in terms of measurement validity and reliability.

Recent developments

At present, little attention is paid to the data analysis plan and to procedures for synthesis and reporting, because the programmes differ in the data analysis courses they offer and because execution of the research is not part of the BSc and MSc methodology courses. Recently, we designed a course for an interdisciplinary BSc programme in which the research question is put centre stage in learning about, and deciding on, statistics and qualitative data analysis. Over the past years, the number of methodology courses for graduate students that support the MIR framework has also expanded, e.g., a course “From Topic to Proposal”; separate training modules on questionnaire construction, interviewing, and observation; and optional courses on quantitative and qualitative data analysis. These courses are open to (and attended by) PhD students regardless of their programme. In Flanders (Belgium), the Flemish Training Network for Statistics and Methodology (FLAMES) has for the last four years successfully applied the approach outlined in Fig. 1 in its courses on research design and data collection methods. The division of the research process into a conceptual design, technical design, operationalisation, analysis plan, and sampling plan has proved appealing to students of disciplines ranging from linguistics to bioengineering.

Researching with MIR: noise reducing asphalt layers and quality of life

Research objective and research question.

This example of the application of the MIR framework comes from a study on the effects of “noise reducing asphalt layers” on quality of life (Vuye et al. 2016), a project commissioned by the City of Antwerp in 2015 and executed by a multidisciplinary research team of Antwerp University (Belgium). The principal researcher was an engineer from the Faculty of Applied Engineering (dept. Construction), supported by two researchers from the Faculty of Medicine and Health Sciences (dept. of Epidemiology and Social Statistics), one with a background in qualitative and one with a background in quantitative research methods. A number of meetings were held in which the research team and the commissioners discussed the research objective (the ‘what’ and ‘why’). The research objective was in part dictated by the European Noise Directive 2002/49/EC, which requires all EU member states to draft noise action plans; the challenge in this study was to produce evidence of a link between the acoustic and mechanical properties of different types of asphalt and the quality of life of people living in the vicinity of the treated roads. While literature was available on the effects of road surface on sound, and other studies had examined the link between noise and health, no study was found that produced evidence simultaneously on the noise levels of roads and on quality of life. The team therefore decided to make the hypothesis that traffic noise reduction has a beneficial effect on people's quality of life central to the research. The general research question was: “to what extent does the placing of noise reducing asphalt layers increase the quality of life of the residents?”

Study design

In order to test the effect of types of asphalt, a pretest–posttest experiment was initially designed, which was then expanded with several additional experimental (change of road surface) and control (no change of road surface) groups. The research team gradually became aware that quality of life may not be instantly affected by lower noise levels, and that a time lag may be involved. A second posttest was therefore added to follow up on this delayed effect, although it could only be implemented at a selection of the experimental sites.

Instrument selection and design

Sound pressure levels were measured with an ISO-standardized procedure called the Statistical Pass-By (SPB) method; a detailed description of the method is given in Vuye et al. (2016). No such objective procedure is available for measuring quality of life, which can only be assessed through self-reports of the residents. Some time was needed for the research team to accept that measuring a multidimensional concept like quality of life is more complicated than simply having people rate their “quality of life” on a 10-point scale. For instance, questions had to be phrased in a way that did not give away the purpose of the research (to avoid a Hawthorne effect), leading to the inclusion of questions about more nuisances than traffic noise alone. This led to the design of a self-administered questionnaire, in which questions from the Flanders Survey on the Living Environment (Departement Leefmilieu, Natuur & Energie 2013) were supplemented with new questions. Among other things, the questionnaire probed for experienced noise nuisance, quality of sleep, effort needed to concentrate, effort needed to have a conversation inside or outside the home, physical complaints such as headaches, etc.

Sampling design

The selected sites needed to accommodate both types of measurement: that of traffic noise and that of residents' quality of life. This was a complicating factor that required several rounds of deliberation. While, countrywide, only certain roads were available for changing the road surface, these roads also had to be mutually comparable in terms of the composition of the population, type of residential area (e.g., reports from the top floor of a tall apartment building cannot be compared to those at ground level), average volume of traffic, vicinity of hospitals, railroads and airports, etc. At the level of roads, therefore, targeted sampling was applied, whereas at the level of residents the aim was to realize a census of all households within a given perimeter around the treated road surfaces. Considerations about the reliability of the applied instruments guided decisions with respect to sampling. While the measurements of the SPB method were sufficiently reliable to allow for relatively few measurements, the questionnaire suffered from considerable nonresponse, which hampered statistical power. It was therefore decided to increase the power of the study by adding control groups in areas where the road surface was not replaced. This way, detecting an effect of the intervention did not depend solely on the turnout of the pre- and post-test.
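As an illustration of the kind of reasoning behind such decisions, the sketch below shows a conventional power calculation in Python; the effect size, significance level, target power and response rate are invented numbers for illustration only, not figures from the study.

    # Hypothetical power calculation (invented numbers, not from the study): sample size
    # per group needed to detect a small-to-medium standardized effect, and the number of
    # questionnaires to distribute once an assumed response rate is taken into account.
    from statsmodels.stats.power import TTestIndPower

    n_per_group = TTestIndPower().solve_power(effect_size=0.3, alpha=0.05, power=0.8)
    print(f"Completed questionnaires needed per group: {n_per_group:.0f}")

    assumed_response_rate = 0.35   # assumed for illustration
    print(f"Questionnaires to distribute per group: {n_per_group / assumed_response_rate:.0f}")

When nonresponse makes the required turnout per group unrealistic, adding further comparison groups is one way to keep the overall design informative.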

Data analysis plan

The statistical analysis had to account for the fact that data were collected at two different levels: the level of the residents filling out the questionnaires, and the level of the roads whose surface was changed. Because survey participation was confidential, results of the pre- and posttest could only be compared at the aggregate (street) level. The analysis also had to control for confounding variables (e.g., sample composition, variation in traffic volume, etc.), experimental factors (varieties in experimental conditions, and controls), and non-normal dependent variables. The statistical model appropriate for the analysis of such data is a Generalised Linear Mixed Model.
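As a hedged illustration of such a two-level analysis, the sketch below fits a mixed model with a random intercept per street to simulated data in Python. It uses statsmodels' linear mixed model as a simplified stand-in for the Generalised Linear Mixed Model mentioned above (a full GLMM would additionally specify a non-normal response distribution), and all variable names and numbers are invented.

    # Purely illustrative sketch on simulated data (invented names and numbers).
    # Residents are nested within streets; a random intercept per street captures the
    # street-level clustering described in the data analysis plan.
    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    rng = np.random.default_rng(0)
    n_streets, n_residents = 20, 30

    df = pd.DataFrame({
        "street": np.repeat(np.arange(n_streets), n_residents),
        "treated": np.repeat(rng.integers(0, 2, n_streets), n_residents),  # surface changed?
        "post": rng.integers(0, 2, n_streets * n_residents),               # pre- vs post-test wave
    })
    # simulated nuisance score: improvement only where a treated road meets the post wave,
    # plus a street-level random effect and individual noise
    df["nuisance"] = (
        5
        - 0.8 * df["treated"] * df["post"]
        + np.repeat(rng.normal(0, 0.5, n_streets), n_residents)
        + rng.normal(0, 1, len(df))
    )

    # random intercept per street; the treated:post interaction is the effect of interest
    model = smf.mixedlm("nuisance ~ treated * post", df, groups=df["street"])
    print(model.fit().summary())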

Data were collected during the course of 2015, 2016 and 2017 and were awaiting final analysis in the spring of 2017. Intermediate analyses resulted in several MSc theses, conference presentations, and working papers reporting on parts of the research.

In this paper we presented the Methodology in Interdisciplinary Research (MIR) framework, which we developed over the past decade building on our experience as lecturers, consultants and researchers. The MIR framework recognizes research methodology and methods as important content within the critical factor of skills and competences. It approaches research and collaboration as a process that needs to be designed with the sole purpose of answering the general research question. In the conceptual design, the team members have to discuss and agree on the objective of their communal efforts without squeezing it into one single discipline and thus ignoring complexity. The specific research questions, once formulated, contribute to (self-)respect in the collaboration, as they represent and stand witness to the need for interdisciplinarity. In the technical design, different parts are distinguished to stimulate researchers to think and design research outside their respective disciplinary boxes and to consider, for example, an experimental design with qualitative data collection, or a case study design based on quantitative information.

In our teaching and consultancy, we first developed the MIR framework for interdisciplinarity across the social sciences, economics, health and environmental sciences. It was then challenged to include research in the design discipline of landscape architecture. What characterizes research in landscape architecture and other design disciplines is that the design product as well as the design process may be the object of study. Lenzholder et al. (2017) therefore distinguish three kinds of research in landscape architecture. The first kind, “research into design,” studies the design product post hoc, and the MIR framework suits the interdisciplinary study of such a product. In contrast, “research for design” generates knowledge that feeds into the noun and the verb ‘design’, which means it precedes the design(ing). The third kind, “research through design(ing),” employs designing as a research method. At first, just like Deming and Swaffield (2011), we were somewhat skeptical about “designing” as a research method. Lenzholder et al. (2017) posit that the meaning of research through design has evolved through (neo)positivist, constructivist and transformative paradigms to include a pragmatic stance that resembles the one assumed in the MIR framework. We learned that, because landscape architecture is such an interdisciplinary field, the process approach and the distinction between a conceptual and a technical research design were considered very helpful and were embraced by researchers in landscape architecture (Tobi and van den Brink 2017).

Mixed methods research (MMR) has been used to study topics as diverse as education (e.g., Powell et al. 2008), environmental management (e.g., Molina-Azorin and Lopez-Gamero 2016), health psychology (e.g., Bishop 2015) and information systems (e.g., Venkatesh et al. 2013). Nonetheless, the MIR framework is the first to put MMR in the context of integrating disciplines beyond social inquiry (Greene 2008). Splitting the research into modules stimulates the identification and recognition of the contributions of both distinct and collaborating disciplines, irrespective of whether they contribute qualitative and/or quantitative research to the interdisciplinary research design. As mentioned in Sect. 2.4, the integration of the different research modules into one interdisciplinary project design may follow one of the mixed methods designs. For example, we witnessed on several occasions the integration of the social and health sciences in interdisciplinary teams opting for sequential modules in a sequential exploratory mixed methods fashion (e.g., Adamson 2005: 234). In sustainability science research, we have seen the design of concurrent modules for a concurrent nested mixed methods strategy (ibid.) in research integrating the social and natural sciences and economics.

The limitations of the MIR framework are those of any kind of collaboration: it cannot work wonders in the absence of awareness of its necessity, and it requires the willingness to work, learn, and research together. We developed the MIR framework in and alongside our own teaching, consultancy and research; it has not been formally evaluated or experimentally compared with teaching, consultancy and research based on, for example, the regulative cycle for problem solving (van Strien 1986) or the wheel of science of Babbie (2013). In fact, although we wrote “developed” in the previous sentence, we are fully aware of the need to further develop and refine the framework.

The importance of the MIR framework lies in the complex, multifaceted nature of issues like sustainability, food security and one world health. Progress in the study of these pressing issues pivots on the understanding, construction and quality of interdisciplinary portfolio measurements (Tobi 2014), which require further study, as do procedures facilitating integration across different disciplines.

Another important strand of further research relates to the continuum of Responsible Conduct of Research (RCR), Questionable Research Practices (QRP), and deliberate misconduct (Steneck 2006). QRPs include failing to report all of a study’s conditions, stopping data collection earlier than planned because one has found the result one was looking for, etc. (e.g., John et al. 2012; Simmons et al. 2011; Kampen and Tamás 2014). A meta-analysis of self-reports obtained through surveys revealed that about 2% of researchers admitted to research misconduct at least once, whereas up to 33% admitted to QRPs (Fanelli 2009). While the frequency of QRPs may easily eclipse that of deliberate fraud (John et al. 2012), these practices have received less attention than deliberate misconduct. Claimed research findings may often be accurate measures of the prevailing biases and methodological rigor in a research field (Fanelli and Ioannidis 2013; Fanelli 2010). If research misconduct and QRPs are to be understood, then the disciplinary context must be grasped as a locus of both legitimate and illegitimate activity (Fox 1990). It would be valuable to investigate how working in interdisciplinary teams and, consequently, exposure to other standards of QRP and RCR influence research integrity as the appropriate research behavior from the perspective of different professional standards (Steneck 2006: p. 56). These differences in scientific cultures concern criteria for quality in the design and execution of research, reporting (e.g., criteria for authorship of a paper, preferred publication outlets, citation practices, etc.), archiving and sharing of data, and so on.

Other strands of research include interdisciplinary collaboration and negotiation, where we expect contributions from the “science of team science” (Falk-Krzesinski et al. 2010), and the compatibility of the MIR framework with new research paradigms such as “inclusive research” (a mode of research involving people with intellectual disabilities as more than just objects of research; e.g., Walmsley and Johnson 2003). Because of the complexity and novelty of inclusive health research, a consensus statement was developed on how to conduct health research inclusively (Frankena et al., under review). The eight attributes of inclusive health research identified there may also be taken as guiding attributes in the design of inclusive research according to the MIR framework. For starters, there is the possibility of inclusiveness in the conceptual design, particularly in determining research objectives and in discussing possible theoretical frameworks with team members with an intellectual disability, which Frankena et al. labelled the “designing the study” attribute. There are also opportunities for inclusiveness in the technical design and in execution. For example, the inclusiveness attribute “generating data” overlaps with operationalization and measurement instrument design/selection, and the attribute “analyzing data” aligns with the data analysis plan in the technical design.

On a final note, we hope to have aroused the reader’s interest in, and to have demonstrated the need for, a methodology for interdisciplinary research design. We further hope that the MIR framework proposed and explained in this article helps those involved in designing an interdisciplinary research project to get a clearer view of the various processes that must be secured during the project’s design and execution. We look forward to further collaboration with scientists from all cultures to improve the MIR framework and make interdisciplinary collaborations successful.

Acknowledgements

The MIR framework is the result of many discussions with students, researchers and colleagues, with special thanks to Peter Tamás, Jennifer Barrett, Loes Maas, Giel Dik, Ruud Zaalberg, Jurian Meijering, Vanessa Torres van Grinsven, Matthijs Brink, Gerda Casimir, and, last but not least, Jenneken Naaldenberg.

References

  • Aboelela SW, Larson E, Bakken S, Carrasquillo O, Formicola A, Glied SA, Gebbie KM. Defining interdisciplinary research: conclusions from a critical review of the literature. Health Serv. Res. 2007;42(1):329–346. doi: 10.1111/j.1475-6773.2006.00621.x.
  • Adamson J. Combined qualitative and quantitative designs. In: Bowling A, Ebrahim S, editors. Handbook of Health Research Methods: Investigation, Measurement and Analysis. Maidenhead: Open University Press; 2005. pp. 230–245.
  • Adler ES, Clark R. An Invitation to Social Research: How It’s Done. 4th ed. London: Sage; 2011.
  • Babbie ER. The Practice of Social Research. 13th ed. Belmont, CA: Wadsworth Cengage Learning; 2013.
  • Baker GH, Little RG. Enhancing homeland security: development of a course on critical infrastructure systems. J. Homel. Secur. Emerg. Manag. 2006.
  • Bishop FL. Using mixed methods research designs in health psychology: an illustrated discussion from a pragmatist perspective. Br. J. Health Psychol. 2015;20(1):5–20. doi: 10.1111/bjhp.12122.
  • Boeije HR. Analysis in Qualitative Research. London: Sage; 2009.
  • Bruns D, van den Brink A, Tobi H, Bell S. Advancing landscape architecture research. In: van den Brink A, Bruns D, Tobi H, Bell S, editors. Research in Landscape Architecture: Methods and Methodology. New York: Routledge; 2017. pp. 11–23.
  • Creswell JW, Plano Clark VL. Designing and Conducting Mixed Methods Research. 2nd ed. Los Angeles: Sage; 2011.
  • Danner D, Blasius J, Breyer B, Eifler S, Menold N, Paulhus DL, Ziegler M. Current challenges, new developments, and future directions in scale construction. Eur. J. Psychol. Assess. 2016;32(3):175–180. doi: 10.1027/1015-5759/a000375.
  • Deming ME, Swaffield S. Landscape Architecture Research. Hoboken: Wiley; 2011.
  • Departement Leefmilieu, Natuur en Energie. Uitvoeren van een uitgebreide schriftelijke enquête en een beperkte CAWI-enquête ter bepaling van het percentage gehinderden door geur, geluid en licht in Vlaanderen–SLO-3 [Conducting an extensive written survey and a limited CAWI survey to determine the percentage of people experiencing nuisance from odour, noise and light in Flanders–SLO-3]. Leuven: Market Analysis & Synthesis. www.lne.be/sites/default/files/atoms/files/lne-slo-3-eindrapport.pdf (2013). Accessed 8 March 2017.
  • De Vaus D. Research Design in Social Research. London: Sage; 2001.
  • DeVellis RF. Scale Development: Theory and Applications. 3rd ed. Los Angeles: Sage; 2012.
  • Dillman DA. Mail and Internet Surveys. 2nd ed. Hoboken: Wiley; 2007.
  • Falk-Krzesinski HJ, Borner K, Contractor N, Fiore SM, Hall KL, Keyton J, Uzzi B, et al. Advancing the science of team science. CTS Clin. Transl. Sci. 2010;3(5):263–266. doi: 10.1111/j.1752-8062.2010.00223.x.
  • Fanelli D. How many scientists fabricate and falsify research? A systematic review and meta-analysis of survey data. PLoS ONE. 2009.
  • Fanelli D. Positive results increase down the hierarchy of the sciences. PLoS ONE. 2010.
  • Fanelli D, Ioannidis JPA. US studies may overestimate effect sizes in softer research. Proc. Natl. Acad. Sci. USA. 2013;110(37):15031–15036. doi: 10.1073/pnas.1302997110.
  • Fetters MD, Molina-Azorin JF. The Journal of Mixed Methods Research starts a new decade: principles for bringing in the new and divesting of the old language of the field. J. Mixed Methods Res. 2017;11(1):3–10. doi: 10.1177/1558689816682092.
  • Fischer ARH, Tobi H, Ronteltap A. When natural met social: a review of collaboration between the natural and social sciences. Interdiscip. Sci. Rev. 2011;36(4):341–358. doi: 10.1179/030801811X13160755918688.
  • Flick U. An Introduction to Qualitative Research. 3rd ed. London: Sage; 2006.
  • Fox MF. Fraud, ethics, and the disciplinary contexts of science and scholarship. Am. Sociol. 1990;21(1):67–71. doi: 10.1007/BF02691783.
  • Frischknecht PM. Environmental science education at the Swiss Federal Institute of Technology (ETH). Water Sci. Technol. 2000;41(2):31–36.
  • Godfray HCJ, Beddington JR, Crute IR, Haddad L, Lawrence D, Muir JF, Pretty J, Robinson S, Thomas SM, Toulmin C. Food security: the challenge of feeding 9 billion people. Science. 2010;327(5967):812–818. doi: 10.1126/science.1185383.
  • Greene JC. Is mixed methods social inquiry a distinctive methodology? J. Mixed Methods Res. 2008;2(1):7–22. doi: 10.1177/1558689807309969.
  • IPCC. Climate Change 2014 Synthesis Report. Geneva: Intergovernmental Panel on Climate Change. www.ipcc.ch/pdf/assessment-report/ar5/syr/SYR_AR5_FINAL_full_wcover.pdf (2015). Accessed 8 March 2017.
  • John LK, Loewenstein G, Prelec D. Measuring the prevalence of questionable research practices with incentives for truth telling. Psychol. Sci. 2012;23(5):524–532. doi: 10.1177/0956797611430953.
  • Kagan J. The Three Cultures: Natural Sciences, Social Sciences and the Humanities in the 21st Century. Cambridge: Cambridge University Press; 2009.
  • Kampen JK, Tamás P. Should I take this seriously? A simple checklist for calling bullshit on policy supporting research. Qual. Quant. 2014;48:1213–1223. doi: 10.1007/s11135-013-9830-8.
  • Kumar R. Research Methodology: A Step-by-Step Guide for Beginners. 1st ed. Los Angeles: Sage; 1999.
  • Kumar R. Research Methodology: A Step-by-Step Guide for Beginners. 4th ed. Los Angeles: Sage; 2014.
  • Kvale S. Doing Interviews. London: Sage; 2007.
  • Kvale S, Brinkmann S. Interviews: Learning the Craft of Qualitative Interviews. 2nd ed. London: Sage; 2009.
  • Lenzholder S, Duchhart I, van den Brink A. The relationship between research design. In: van den Brink A, Bruns D, Tobi H, Bell S, editors. Research in Landscape Architecture: Methods and Methodology. New York: Routledge; 2017. pp. 54–64.
  • Molina-Azorin JF, Lopez-Gamero MD. Mixed methods studies in environmental management research: prevalence, purposes and designs. Bus. Strateg. Environ. 2016;25(2):134–148. doi: 10.1002/bse.1862.
  • Onwuegbuzie AJ, Leech NL. Taking the “Q” out of research: teaching research methodology courses without the divide between quantitative and qualitative paradigms. Qual. Quant. 2005;39(3):267–296. doi: 10.1007/s11135-004-1670-0.
  • Powell H, Mihalas S, Onwuegbuzie AJ, Suldo S, Daley CE. Mixed methods research in school psychology: a mixed methods investigation of trends in the literature. Psychol. Sch. 2008;45(4):291–309. doi: 10.1002/pits.20296.
  • Shipley B. Cause and Correlation in Biology. 2nd ed. Cambridge: Cambridge University Press; 2016.
  • Simmons JP, Nelson LD, Simonsohn U. False-positive psychology: undisclosed flexibility in data collection and analysis allows presenting anything as significant. Psychol. Sci. 2011;22:1359–1366. doi: 10.1177/0956797611417632.
  • Spelt EJH, Biemans HJA, Tobi H, Luning PA, Mulder M. Teaching and learning in interdisciplinary higher education: a systematic review. Educ. Psychol. Rev. 2009;21(4):365–378. doi: 10.1007/s10648-009-9113-z.
  • Spradley JP. The Ethnographic Interview. New York: Holt, Rinehart and Winston; 1979.
  • Steneck NH. Fostering integrity in research: definitions, current knowledge, and future directions. Sci. Eng. Ethics. 2006;12(1):53–74. doi: 10.1007/s11948-006-0006-y.
  • Tobi H. Measurement in interdisciplinary research: the contributions of widely-defined measurement and portfolio representations. Measurement. 2014;48:228–231. doi: 10.1016/j.measurement.2013.11.013.
  • Tobi H, Kampen JK. Survey error in an international context: an empirical assessment of cross-cultural differences regarding scale effects. Qual. Quant. 2013;47(1):553–559. doi: 10.1007/s11135-011-9476-3.
  • Tobi H, van den Brink A. A process approach to research in landscape architecture. In: van den Brink A, Bruns D, Tobi H, Bell S, editors. Research in Landscape Architecture: Methods and Methodology. New York: Routledge; 2017. pp. 24–34.
  • van Strien PJ. Praktijk als wetenschap: methodologie van het sociaal-wetenschappelijk handelen [Practice as science: methodology of social scientific acting]. Assen: Van Gorcum; 1986.
  • Venkatesh V, Brown SA, Bala H. Bridging the qualitative–quantitative divide: guidelines for conducting mixed methods research in information systems. MIS Q. 2013;37(1):21–54. doi: 10.25300/MISQ/2013/37.1.02.
  • Vuye C, Bergiers A, Vanhooreweder B. The acoustical durability of thin noise reducing asphalt layers. Coatings. 2016.
  • Walmsley J, Johnson K. Inclusive Research with People with Learning Disabilities: Past, Present and Futures. London: Jessica Kingsley; 2003.
  • Walsh WB, Smith GL, London M. Developing an interface between engineering and social sciences: interdisciplinary team approach to solving societal problems. Am. Psychol. 1975;30(11):1067–1071. doi: 10.1037/0003-066X.30.11.1067.

Pfeiffer Library

Research Methodologies

Research Design, External Validity, Internal Validity, and Threats to Validity


According to Jenkins-Smith et al. (2017), a research design is the set of steps you take to collect and analyze your research data. In other words, it is the general plan for answering your research question. You can also think of it as the combination of your research methodology and your research method. Your research design should include the following:

  • A clear research question
  • Theoretical frameworks you will use to analyze your data
  • Key concepts
  • Your hypothesis/hypotheses
  • Independent and dependent variables (if applicable)
  • Strengths and weaknesses of your chosen design

There are two types of research designs:

  • Experimental design: This design is like a standard science lab experiment because the researcher controls as many variables as they can and assigns research subjects to groups. The researcher manipulates the experimental treatment and gives it to one group. The other group receives the unmanipulated treatment (or no treatment at all), and the researcher examines the effect of the treatment on each group's outcome (the dependent variable). This design can have more than two groups depending on your study requirements.
  • Observational design: This is when the researcher has no control over the independent variable or over which research participants are exposed to it. Depending on your research topic, this may be the only design you can use. This is a more natural approach to a study because you are not controlling the experimental treatment. You are allowing the variable to occur on its own without your interference. Weather studies are a great example of observational design because the researcher has no control over the weather and how it changes.

When considering your research design, you will also need to consider your study's validity and any potential threats to it. There are two types of validity: external and internal validity. Each type demonstrates a degree of accuracy and thoughtfulness in a study, and both contribute to a study's reliability. Information about external and internal validity is included below.

External validity is the degree to which you can generalize the findings of your research study. It is about determining whether or not the findings are applicable to other settings (Jenkins-Smith, 2017). In many cases, the external validity of a study is strongly linked to the sample population. For example, if you studied a group of twenty-five-year-old male Americans, you could potentially generalize your findings to all twenty-five-year-old American males. External validity is also the ability for someone else to replicate your study and achieve the same results (Jenkins-Smith, 2017). If someone replicates your exact study and gets different results, then your study may have weak external validity.

Questions to ask when assessing external validity:

  • Do my conclusions apply to other studies?
  • If someone were to replicate my study, would they get the same results?
  • Are my findings generalizable to a certain population?

Internal validity is the degree to which a researcher can conclude that a causal relationship exists between the independent variable and the dependent variable. It is a way to verify the study's findings because it draws a relationship between the variables (Jenkins-Smith, 2017). In other words, it concerns the actual factors that produce the study's outcome (Singh, 2007). According to Singh (2007), internal validity can be placed into four subcategories:

  • Face validity: This confirms the fact that the measure accurately reflects the research question.
  • Content validity: This assesses the measurement technique's compatibility with other literature on the topic.  It determines how well the tool used to gather data measures the item or concept that the researcher is interested in.
  • Criterion validity: This demonstrates the accuracy of a study by comparing it to a similar study.
  • Construct validity: This measures the appropriateness of the conclusions drawn from a study.

According to Jenkins-Smith (2017), there are several threats that may impact the internal and external validity of a study:

Threats to External Validity

  • Interaction with testing: Any testing done before the actual experiment may decrease participants' sensitivity to the actual treatment.
  • Sample misrepresentation: A population sample that is unrepresentative of the entire population.
  • Selection bias: Researchers may have bias towards selecting certain subjects to participate in the study who may be more or less sensitive to the experimental treatment.
  • Environment: If the study was conducted in a lab setting, the findings may not be able to transfer to a more natural setting.

Threats to Internal Validity

  • Unplanned events that occur during the experiment and affect the results.
  • Changes to the participants during the experiment, such as fatigue, aging, etc.
  • Selection bias: When research subjects are not selected randomly.
  • If participants drop out of the study without completing it.
  • Changing the way the data is collected or measured during the study.
USC Libraries

Organizing Your Social Sciences Research Paper

Types of Research Designs

Introduction

Before beginning your paper, you need to decide how you plan to design the study.

The research design refers to the overall strategy and analytical approach that you have chosen in order to integrate, in a coherent and logical way, the different components of the study, thus ensuring that the research problem will be thoroughly investigated. It constitutes the blueprint for the collection, measurement, and interpretation of information and data. Note that the research problem determines the type of design you choose, not the other way around!

De Vaus, D. A. Research Design in Social Research. London: SAGE, 2001; Trochim, William M.K. Research Methods Knowledge Base. 2006.

General Structure and Writing Style

The function of a research design is to ensure that the evidence obtained enables you to effectively address the research problem logically and as unambiguously as possible. In social sciences research, obtaining information relevant to the research problem generally entails specifying the type of evidence needed to test the underlying assumptions of a theory, to evaluate a program, or to accurately describe and assess meaning related to an observable phenomenon.

With this in mind, a common mistake made by researchers is that they begin their investigations before they have thought critically about what information is required to address the research problem. Without attending to these design issues beforehand, the overall research problem will not be adequately addressed and any conclusions drawn will run the risk of being weak and unconvincing. As a consequence, the overall validity of the study will be undermined.

The length and complexity of describing the research design in your paper can vary considerably, but any well-developed description will achieve the following:

  • Identify the research problem clearly and justify its selection, particularly in relation to any valid alternative designs that could have been used,
  • Review and synthesize previously published literature associated with the research problem,
  • Clearly and explicitly specify hypotheses [i.e., research questions] central to the problem,
  • Effectively describe the information and/or data which will be necessary for an adequate testing of the hypotheses and explain how such information and/or data will be obtained, and
  • Describe the methods of analysis to be applied to the data in determining whether or not the hypotheses are true or false.

The research design is usually incorporated into the introduction of your paper. You can obtain an overall sense of what to do by reviewing studies that have utilized the same research design [e.g., using a case study approach]. This can help you develop an outline to follow for your own paper.

NOTE: Use the SAGE Research Methods Online and Cases and the SAGE Research Methods Videos databases to search for scholarly resources on how to apply specific research designs and methods. The Research Methods Online database contains links to more than 175,000 pages of SAGE publisher's book, journal, and reference content on quantitative, qualitative, and mixed research methodologies. Also included is a collection of case studies of social research projects that can be used to help you better understand abstract or complex methodological concepts. The Research Methods Videos database contains hours of tutorials, interviews, video case studies, and mini-documentaries covering the entire research process.

Creswell, John W. and J. David Creswell. Research Design: Qualitative, Quantitative, and Mixed Methods Approaches . 5th edition. Thousand Oaks, CA: Sage, 2018; De Vaus, D. A. Research Design in Social Research . London: SAGE, 2001; Gorard, Stephen. Research Design: Creating Robust Approaches for the Social Sciences . Thousand Oaks, CA: Sage, 2013; Leedy, Paul D. and Jeanne Ellis Ormrod. Practical Research: Planning and Design . Tenth edition. Boston, MA: Pearson, 2013; Vogt, W. Paul, Dianna C. Gardner, and Lynne M. Haeffele. When to Use What Research Design . New York: Guilford, 2012.

Action Research Design

Definition and Purpose

The essentials of action research design follow a characteristic cycle whereby initially an exploratory stance is adopted, where an understanding of a problem is developed and plans are made for some form of interventionary strategy. Then the intervention is carried out [the "action" in action research] during which time, pertinent observations are collected in various forms. The new interventional strategies are carried out, and this cyclic process repeats, continuing until a sufficient understanding of [or a valid implementation solution for] the problem is achieved. The protocol is iterative or cyclical in nature and is intended to foster deeper understanding of a given situation, starting with conceptualizing and particularizing the problem and moving through several interventions and evaluations.

What do these studies tell you?

  • This is a collaborative and adaptive research design that lends itself to use in work or community situations.
  • Design focuses on pragmatic and solution-driven research outcomes rather than testing theories.
  • When practitioners use action research, it has the potential to increase the amount they learn consciously from their experience; the action research cycle can be regarded as a learning cycle.
  • Action research studies often have direct and obvious relevance to improving practice and advocating for change.
  • There are no hidden controls or preemption of direction by the researcher.

What don't these studies tell you?

  • It is harder to do than conducting conventional research because the researcher takes on responsibilities of advocating for change as well as for researching the topic.
  • Action research is much harder to write up because it is less likely that you can use a standard format to report your findings effectively [i.e., data is often in the form of stories or observation].
  • Personal over-involvement of the researcher may bias research results.
  • The cyclic nature of action research to achieve its twin outcomes of action [e.g. change] and research [e.g. understanding] is time-consuming and complex to conduct.
  • Advocating for change usually requires buy-in from study participants.

Coghlan, David and Mary Brydon-Miller. The Sage Encyclopedia of Action Research . Thousand Oaks, CA:  Sage, 2014; Efron, Sara Efrat and Ruth Ravid. Action Research in Education: A Practical Guide . New York: Guilford, 2013; Gall, Meredith. Educational Research: An Introduction . Chapter 18, Action Research. 8th ed. Boston, MA: Pearson/Allyn and Bacon, 2007; Gorard, Stephen. Research Design: Creating Robust Approaches for the Social Sciences . Thousand Oaks, CA: Sage, 2013; Kemmis, Stephen and Robin McTaggart. “Participatory Action Research.” In Handbook of Qualitative Research . Norman Denzin and Yvonna S. Lincoln, eds. 2nd ed. (Thousand Oaks, CA: SAGE, 2000), pp. 567-605; McNiff, Jean. Writing and Doing Action Research . London: Sage, 2014; Reason, Peter and Hilary Bradbury. Handbook of Action Research: Participative Inquiry and Practice . Thousand Oaks, CA: SAGE, 2001.

Case Study Design

A case study is an in-depth study of a particular research problem rather than a sweeping statistical survey or comprehensive comparative inquiry. It is often used to narrow down a very broad field of research into one or a few easily researchable examples. The case study research design is also useful for testing whether a specific theory and model actually applies to phenomena in the real world. It is a useful design when not much is known about an issue or phenomenon.

  • Approach excels at bringing us to an understanding of a complex issue through detailed contextual analysis of a limited number of events or conditions and their relationships.
  • A researcher using a case study design can apply a variety of methodologies and rely on a variety of sources to investigate a research problem.
  • Design can extend experience or add strength to what is already known through previous research.
  • Social scientists, in particular, make wide use of this research design to examine contemporary real-life situations and provide the basis for the application of concepts and theories and the extension of methodologies.
  • The design can provide detailed descriptions of specific and rare cases.
  • A single case or a small number of cases offers little basis for establishing reliability or for generalizing the findings to a wider population of people, places, or things.
  • Intense exposure to the study of a case may bias a researcher's interpretation of the findings.
  • Design does not facilitate assessment of cause and effect relationships.
  • Vital information may be missing, making the case hard to interpret.
  • The case may not be representative or typical of the larger problem being investigated.
  • If the criterion for selecting a case is that it represents a very unusual or unique phenomenon or problem for study, then your interpretation of the findings can only apply to that particular case.

Case Studies. Writing@CSU. Colorado State University; Anastas, Jeane W. Research Design for Social Work and the Human Services . Chapter 4, Flexible Methods: Case Study Design. 2nd ed. New York: Columbia University Press, 1999; Gerring, John. “What Is a Case Study and What Is It Good for?” American Political Science Review 98 (May 2004): 341-354; Greenhalgh, Trisha, editor. Case Study Evaluation: Past, Present and Future Challenges . Bingley, UK: Emerald Group Publishing, 2015; Mills, Albert J. , Gabrielle Durepos, and Eiden Wiebe, editors. Encyclopedia of Case Study Research . Thousand Oaks, CA: SAGE Publications, 2010; Stake, Robert E. The Art of Case Study Research . Thousand Oaks, CA: SAGE, 1995; Yin, Robert K. Case Study Research: Design and Theory . Applied Social Research Methods Series, no. 5. 3rd ed. Thousand Oaks, CA: SAGE, 2003.

Causal Design

Causality studies may be thought of as understanding a phenomenon in terms of conditional statements in the form, “If X, then Y.” This type of research is used to measure what impact a specific change will have on existing norms and assumptions. Most social scientists seek causal explanations that reflect tests of hypotheses. Causal effect (nomothetic perspective) occurs when variation in one phenomenon, an independent variable, leads to or results, on average, in variation in another phenomenon, the dependent variable.
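The importance of the nonspuriousness condition listed below can be illustrated with a small simulation (purely illustrative Python, not drawn from the cited sources): a confounding variable Z drives both X and Y, so X and Y appear strongly associated even though neither causes the other.

    # Illustrative simulation (invented numbers): a confounder Z drives both X and Y,
    # producing a strong X-Y association although X has no causal effect on Y.
    # Looking within levels of Z (i.e., controlling for Z) makes the association vanish.
    import numpy as np

    rng = np.random.default_rng(0)
    n = 100_000
    z = rng.integers(0, 2, n)            # confounding variable
    x = z + rng.normal(0, 0.5, n)        # X depends on Z only
    y = 2 * z + rng.normal(0, 0.5, n)    # Y depends on Z only, not on X

    print("Overall correlation of X and Y:", round(np.corrcoef(x, y)[0, 1], 2))
    for stratum in (0, 1):
        mask = z == stratum
        r = np.corrcoef(x[mask], y[mask])[0, 1]
        print(f"Correlation within Z = {stratum}: {round(r, 2)}")

The overall correlation is substantial, yet within each level of Z it is essentially zero, which is exactly the pattern a spurious relationship produces.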

Conditions necessary for determining causality:

  • Empirical association -- a valid conclusion is based on finding an association between the independent variable and the dependent variable.
  • Appropriate time order -- to conclude that causation was involved, one must see that cases were exposed to variation in the independent variable before variation in the dependent variable.
  • Nonspuriousness -- a relationship between two variables that is not due to variation in a third variable.
  • Causality research designs assist researchers in understanding why the world works the way it does through the process of proving a causal link between variables and by the process of eliminating other possibilities.
  • Replication is possible.
  • There is greater confidence the study has internal validity due to the systematic subject selection and equity of groups being compared.
  • Not all relationships are causal! The possibility always exists that, by sheer coincidence, two unrelated events appear to be related [e.g., Punxatawney Phil could accurately predict the duration of Winter for five consecutive years but, the fact remains, he's just a big, furry rodent].
  • Conclusions about causal relationships are difficult to determine due to a variety of extraneous and confounding variables that exist in a social environment. This means causality can only be inferred, never proven.
  • If two variables are correlated, the cause must come before the effect. However, even though two variables might be causally related, it can sometimes be difficult to determine which variable comes first and, therefore, to establish which variable is the actual cause and which is the  actual effect.

Beach, Derek and Rasmus Brun Pedersen. Causal Case Study Methods: Foundations and Guidelines for Comparing, Matching, and Tracing . Ann Arbor, MI: University of Michigan Press, 2016; Bachman, Ronet. The Practice of Research in Criminology and Criminal Justice . Chapter 5, Causation and Research Designs. 3rd ed. Thousand Oaks, CA: Pine Forge Press, 2007; Brewer, Ernest W. and Jennifer Kubn. “Causal-Comparative Design.” In Encyclopedia of Research Design . Neil J. Salkind, editor. (Thousand Oaks, CA: Sage, 2010), pp. 125-132; Causal Research Design: Experimentation. Anonymous SlideShare Presentation; Gall, Meredith. Educational Research: An Introduction . Chapter 11, Nonexperimental Research: Correlational Designs. 8th ed. Boston, MA: Pearson/Allyn and Bacon, 2007; Trochim, William M.K. Research Methods Knowledge Base. 2006.

Cohort Design

Often used in the medical sciences, but also found in the applied social sciences, a cohort study generally refers to a study conducted over a period of time involving members of a population who are united by some commonality or similarity. Using a quantitative framework, a cohort study makes note of statistical occurrence within a specialized subgroup, united by the same or similar characteristics that are relevant to the research problem being investigated, rather than studying statistical occurrence within the general population. Using a qualitative framework, cohort studies generally gather data using methods of observation. Cohorts can be either "open" or "closed."

  • Open Cohort Studies [dynamic populations, such as the population of Los Angeles] involve a population that is defined simply by the state of being a part of the study in question (and being monitored for the outcome). The date of entry into and exit from the study is individually defined; therefore, the size of the study population is not constant. In open cohort studies, researchers can only calculate rate-based data, such as incidence rates and variants thereof.
  • Closed Cohort Studies [static populations, such as patients entered into a clinical trial] involve participants who enter into the study at one defining point in time and where it is presumed that no new participants can enter the cohort. Given this, the number of study participants remains constant (or can only decrease).
  • The use of cohorts is often mandatory because a randomized control study may be unethical. For example, you cannot deliberately expose people to asbestos, you can only study its effects on those who have already been exposed. Research that measures risk factors often relies upon cohort designs.
  • Because cohort studies measure potential causes before the outcome has occurred, they can demonstrate that these “causes” preceded the outcome, thereby avoiding the debate as to which is the cause and which is the effect.
  • Cohort analysis is highly flexible and can provide insight into effects over time and related to a variety of different types of changes [e.g., social, cultural, political, economic, etc.].
  • Either original data or secondary data can be used in this design.
  • In cases where a comparative analysis of two cohorts is made [e.g., studying the effects of one group exposed to asbestos and one that has not], a researcher cannot control for all other factors that might differ between the two groups. These factors are known as confounding variables.
  • Cohort studies can end up taking a long time to complete if the researcher must wait for the conditions of interest to develop within the group. This also increases the chance that key variables change during the course of the study, potentially impacting the validity of the findings.
  • Due to the lack of randomization in the cohort design, its external validity is lower than that of study designs where the researcher randomly assigns participants.

Healy P, Devane D. “Methodological Considerations in Cohort Study Designs.” Nurse Researcher 18 (2011): 32-36; Glenn, Norval D, editor. Cohort Analysis . 2nd edition. Thousand Oaks, CA: Sage, 2005; Levin, Kate Ann. Study Design IV: Cohort Studies. Evidence-Based Dentistry 7 (2003): 51–52; Payne, Geoff. “Cohort Study.” In The SAGE Dictionary of Social Research Methods . Victor Jupp, editor. (Thousand Oaks, CA: Sage, 2006), pp. 31-33; Study Design 101. Himmelfarb Health Sciences Library. George Washington University, November 2011; Cohort Study. Wikipedia.

Cross-Sectional Design

Cross-sectional research designs have three distinctive features: no time dimension; a reliance on existing differences rather than change following intervention; and, groups are selected based on existing differences rather than random allocation. The cross-sectional design can only measure differences between or from among a variety of people, subjects, or phenomena rather than a process of change. As such, researchers using this design can only employ a relatively passive approach to making causal inferences based on findings.
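As a minimal illustration of this "snapshot" logic, the following Python sketch (simulated data, invented numbers) estimates the prevalence of an outcome in two pre-existing groups at a single point in time; nothing in the output speaks to change over time or to cause and effect.

    # Simulated cross-sectional snapshot (invented numbers): prevalence of an outcome in
    # two pre-existing groups, each with a normal-approximation 95% confidence interval.
    import numpy as np

    rng = np.random.default_rng(1)
    n = 1_200
    group = rng.choice(["urban", "rural"], size=n)                     # existing groups, not assigned
    outcome = rng.random(n) < np.where(group == "urban", 0.30, 0.22)   # simulated outcome

    for g in ("urban", "rural"):
        x = outcome[group == g]
        p = x.mean()
        se = (p * (1 - p) / len(x)) ** 0.5
        print(f"{g}: prevalence = {p:.1%} (95% CI {p - 1.96 * se:.1%} to {p + 1.96 * se:.1%})")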

  • Cross-sectional studies provide a clear 'snapshot' of the outcome and the characteristics associated with it, at a specific point in time.
  • Unlike an experimental design, where there is an active intervention by the researcher to produce and measure change or to create differences, cross-sectional designs focus on studying and drawing inferences from existing differences between people, subjects, or phenomena.
  • Entails collecting data at and concerning one point in time. While longitudinal studies involve taking multiple measures over an extended period of time, cross-sectional research is focused on finding relationships between variables at one moment in time.
  • Groups identified for study are purposely selected based upon existing differences in the sample rather than seeking random sampling.
  • Cross-sectional studies are capable of using data from a large number of subjects and, unlike observational studies, are not geographically bound.
  • Can estimate prevalence of an outcome of interest because the sample is usually taken from the whole population.
  • Because cross-sectional designs generally use survey techniques to gather data, they are relatively inexpensive and take up little time to conduct.
  • Finding people, subjects, or phenomena to study that are very similar except in one specific variable can be difficult.
  • Results are static and time bound and, therefore, give no indication of a sequence of events or reveal historical or temporal contexts.
  • Studies cannot be utilized to establish cause and effect relationships.
  • This design only provides a snapshot of analysis so there is always the possibility that a study could have differing results if another time-frame had been chosen.
  • There is no follow up to the findings.

Bethlehem, Jelke. "7: Cross-sectional Research." In Research Methodology in the Social, Behavioural and Life Sciences . Herman J Adèr and Gideon J Mellenbergh, editors. (London, England: Sage, 1999), pp. 110-43; Bourque, Linda B. “Cross-Sectional Design.” In  The SAGE Encyclopedia of Social Science Research Methods . Michael S. Lewis-Beck, Alan Bryman, and Tim Futing Liao. (Thousand Oaks, CA: 2004), pp. 230-231; Hall, John. “Cross-Sectional Survey Design.” In Encyclopedia of Survey Research Methods . Paul J. Lavrakas, ed. (Thousand Oaks, CA: Sage, 2008), pp. 173-174; Helen Barratt, Maria Kirwan. Cross-Sectional Studies: Design Application, Strengths and Weaknesses of Cross-Sectional Studies. Healthknowledge, 2009. Cross-Sectional Study. Wikipedia.

Descriptive Design

Descriptive research designs help provide answers to the questions of who, what, when, where, and how associated with a particular research problem; a descriptive study cannot conclusively ascertain answers to why. Descriptive research is used to obtain information concerning the current status of the phenomena and to describe "what exists" with respect to variables or conditions in a situation.

  • The subject is observed in a completely natural and unchanged environment. True experiments, whilst giving analyzable data, often adversely influence the normal behavior of the subject [a.k.a., the Heisenberg effect whereby measurements of certain systems cannot be made without affecting the systems].
  • Descriptive research is often used as a precursor to more quantitative research designs, with the general overview giving some valuable pointers as to what variables are worth testing quantitatively.
  • If the limitations are understood, descriptive studies can be a useful tool in developing a more focused study.
  • Descriptive studies can yield rich data that lead to important recommendations in practice.
  • Approach collects a large amount of data for detailed analysis.
  • The results from descriptive research cannot be used to discover a definitive answer or to disprove a hypothesis.
  • Because descriptive designs often utilize observational methods [as opposed to quantitative methods], the results cannot be replicated.
  • The descriptive function of research is heavily dependent on instrumentation for measurement and observation.

Anastas, Jeane W. Research Design for Social Work and the Human Services . Chapter 5, Flexible Methods: Descriptive Research. 2nd ed. New York: Columbia University Press, 1999; Given, Lisa M. "Descriptive Research." In Encyclopedia of Measurement and Statistics . Neil J. Salkind and Kristin Rasmussen, editors. (Thousand Oaks, CA: Sage, 2007), pp. 251-254; McNabb, Connie. Descriptive Research Methodologies. Powerpoint Presentation; Shuttleworth, Martyn. Descriptive Research Design, September 26, 2008; Erickson, G. Scott. "Descriptive Research Design." In New Methods of Market Research and Analysis . (Northampton, MA: Edward Elgar Publishing, 2017), pp. 51-77; Sahin, Sagufta, and Jayanta Mete. "A Brief Study on Descriptive Research: Its Nature and Application in Social Science." International Journal of Research and Analysis in Humanities 1 (2021): 11; K. Swatzell and P. Jennings. “Descriptive Research: The Nuts and Bolts.” Journal of the American Academy of Physician Assistants 20 (2007), pp. 55-56; Kane, E. Doing Your Own Research: Basic Descriptive Research in the Social Sciences and Humanities . London: Marion Boyars, 1985.

Experimental Design

A blueprint of the procedure that enables the researcher to maintain control over all factors that may affect the result of an experiment. In doing this, the researcher attempts to determine or predict what may occur. Experimental research is often used where there is time priority in a causal relationship (cause precedes effect), there is consistency in a causal relationship (a cause will always lead to the same effect), and the magnitude of the correlation is great. The classic experimental design specifies an experimental group and a control group. The independent variable is administered to the experimental group and not to the control group, and both groups are measured on the same dependent variable. Subsequent experimental designs have used more groups and more measurements over longer periods. True experiments must have control, randomization, and manipulation.
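
As a concrete illustration of the classic design described above, the following Python sketch randomly assigns hypothetical participants to an experimental and a control group and compares the two groups on a dependent variable. The group sizes, score distributions, and use of a t-test are assumptions made for illustration only.

```python
# Minimal sketch of the classic two-group experiment: random assignment,
# then a comparison of groups on the dependent variable. Data are hypothetical.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

participants = np.arange(60)                  # 60 hypothetical participants
rng.shuffle(participants)                     # randomization
treatment_ids, control_ids = participants[:30], participants[30:]

# Hypothetical post-intervention scores on the dependent variable
treatment_scores = rng.normal(loc=72, scale=8, size=treatment_ids.size)
control_scores = rng.normal(loc=65, scale=8, size=control_ids.size)

# Compare group means; a small p-value suggests the manipulation had an effect
t_stat, p_value = stats.ttest_ind(treatment_scores, control_scores)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
```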

  • Experimental research allows the researcher to control the situation. In so doing, it allows researchers to answer the question, “What causes something to occur?”
  • Permits the researcher to identify cause and effect relationships between variables and to distinguish placebo effects from treatment effects.
  • Experimental research designs support the ability to limit alternative explanations and to infer direct causal relationships in the study.
  • Approach provides the highest level of evidence for single studies.
  • The design is artificial, and results may not generalize well to the real world.
  • The artificial settings of experiments may alter the behaviors or responses of participants.
  • Experimental designs can be costly if special equipment or facilities are needed.
  • Some research problems cannot be studied using an experiment because of ethical or technical reasons.
  • Difficult to apply ethnographic and other qualitative methods to experimentally designed studies.

Anastas, Jeane W. Research Design for Social Work and the Human Services . Chapter 7, Flexible Methods: Experimental Research. 2nd ed. New York: Columbia University Press, 1999; Chapter 2: Research Design, Experimental Designs. School of Psychology, University of New England, 2000; Chow, Siu L. "Experimental Design." In Encyclopedia of Research Design . Neil J. Salkind, editor. (Thousand Oaks, CA: Sage, 2010), pp. 448-453; "Experimental Design." In Social Research Methods . Nicholas Walliman, editor. (London, England: Sage, 2006), pp, 101-110; Experimental Research. Research Methods by Dummies. Department of Psychology. California State University, Fresno, 2006; Kirk, Roger E. Experimental Design: Procedures for the Behavioral Sciences . 4th edition. Thousand Oaks, CA: Sage, 2013; Trochim, William M.K. Experimental Design. Research Methods Knowledge Base. 2006; Rasool, Shafqat. Experimental Research. Slideshare presentation.

Exploratory Design

An exploratory design is conducted about a research problem when there are few or no earlier studies to refer to or rely upon to predict an outcome. The focus is on gaining insights and familiarity for later investigation, or the design is undertaken when research problems are in a preliminary stage of investigation. Exploratory designs are often used to establish an understanding of how best to proceed in studying an issue or what methodology would effectively apply to gathering information about the issue.

Exploratory research is intended to produce the following possible insights:

  • Familiarity with basic details, settings, and concerns.
  • Well grounded picture of the situation being developed.
  • Generation of new ideas and assumptions.
  • Development of tentative theories or hypotheses.
  • Determination about whether a study is feasible in the future.
  • Issues get refined for more systematic investigation and formulation of new research questions.
  • Direction for future research and techniques get developed.
  • Design is a useful approach for gaining background information on a particular topic.
  • Exploratory research is flexible and can address research questions of all types (what, why, how).
  • Provides an opportunity to define new terms and clarify existing concepts.
  • Exploratory research is often used to generate formal hypotheses and develop more precise research problems.
  • In the policy arena or applied to practice, exploratory studies help establish research priorities and where resources should be allocated.
  • Exploratory research generally utilizes small sample sizes and, thus, findings are typically not generalizable to the population at large.
  • The exploratory nature of the research inhibits the ability to make definitive conclusions about the findings; results provide insight but not definitive answers.
  • The research process underpinning exploratory studies is flexible but often unstructured, leading to only tentative results that have limited value to decision-makers.
  • Design lacks rigorous standards applied to methods of data gathering and analysis because one of the areas for exploration could be to determine what method or methodologies could best fit the research problem.

Cuthill, Michael. “Exploratory Research: Citizen Participation, Local Government, and Sustainable Development in Australia.” Sustainable Development 10 (2002): 79-89; Streb, Christoph K. "Exploratory Case Study." In Encyclopedia of Case Study Research . Albert J. Mills, Gabrielle Durepos and Eiden Wiebe, editors. (Thousand Oaks, CA: Sage, 2010), pp. 372-374; Taylor, P. J., G. Catalano, and D.R.F. Walker. “Exploratory Analysis of the World City Network.” Urban Studies 39 (December 2002): 2377-2394; Exploratory Research. Wikipedia.

Field Research Design

Sometimes referred to as ethnography or participant observation, designs around field research encompass a variety of interpretative procedures [e.g., observation and interviews] rooted in qualitative approaches to studying people individually or in groups while inhabiting their natural environment, as opposed to using survey instruments or other forms of impersonal methods of data gathering. Information acquired from observational research takes the form of “field notes,” which involve documenting what the researcher actually sees and hears while in the field. Findings do not consist of conclusive statements derived from numbers and statistics because field research involves analysis of words and observations of behavior. Conclusions, therefore, are developed from an interpretation of findings that reveal overriding themes, concepts, and ideas.

  • Field research is often necessary to fill gaps in understanding the research problem applied to local conditions or to specific groups of people that cannot be ascertained from existing data.
  • The research helps contextualize already known information about a research problem, thereby facilitating ways to assess the origins, scope, and scale of a problem and to gauge the causes, consequences, and means to resolve an issue based on deliberate interaction with people in their natural inhabited spaces.
  • Enables the researcher to corroborate or confirm data by gathering additional information that supports or refutes findings reported in prior studies of the topic.
  • Because the researcher is embedded in the field, they are better able to make observations or ask questions that reflect the specific cultural context of the setting being investigated.
  • Observing the local reality offers the opportunity to gain new perspectives or obtain unique data that challenges existing theoretical propositions or long-standing assumptions found in the literature.

What these studies don't tell you

  • A field research study requires extensive time and resources to carry out the multiple steps involved with preparing for the gathering of information, including for example, examining background information about the study site, obtaining permission to access the study site, and building trust and rapport with subjects.
  • Requires a commitment to staying engaged in the field to ensure that you can adequately document events and behaviors as they unfold.
  • The unpredictable nature of fieldwork means that researchers can never fully control the process of data gathering. They must maintain a flexible approach to studying the setting because events and circumstances can change quickly or unexpectedly.
  • Findings can be difficult to interpret and verify without access to documents and other source materials that help to enhance the credibility of information obtained from the field  [i.e., the act of triangulating the data].
  • Linking the research problem to the selection of study participants inhabiting their natural environment is critical. However, this specificity limits the ability to generalize findings to different situations or in other contexts or to infer courses of action applied to other settings or groups of people.
  • The reporting of findings must take into account how the researcher themselves may have inadvertently affected respondents and their behaviors.

Historical Design

The purpose of a historical research design is to collect, verify, and synthesize evidence from the past to establish facts that defend or refute a hypothesis. It uses secondary sources and a variety of primary documentary evidence, such as diaries, official records, reports, archives, and non-textual information [maps, pictures, audio and visual recordings]. The limitation is that the sources must be both authentic and valid.

  • The historical research design is unobtrusive; the act of research does not affect the results of the study.
  • The historical approach is well suited for trend analysis.
  • Historical records can add important contextual background required to more fully understand and interpret a research problem.
  • There is often no possibility of researcher-subject interaction that could affect the findings.
  • Historical sources can be used over and over to study different research problems or to replicate a previous study.
  • The ability to fulfill the aims of your research is directly related to the amount and quality of documentation available to understand the research problem.
  • Since historical research relies on data from the past, there is no way to manipulate it to control for contemporary contexts.
  • Interpreting historical sources can be very time consuming.
  • The sources of historical materials must be archived consistently to ensure access. This may be especially challenging for digital or online-only sources.
  • Original authors bring their own perspectives and biases to the interpretation of past events and these biases are more difficult to ascertain in historical resources.
  • Due to the lack of control over external variables, historical research is very weak with regard to the demands of internal validity.
  • It is rare that the entirety of historical documentation needed to fully address a research problem is available for interpretation, therefore, gaps need to be acknowledged.

Howell, Martha C. and Walter Prevenier. From Reliable Sources: An Introduction to Historical Methods . Ithaca, NY: Cornell University Press, 2001; Lundy, Karen Saucier. "Historical Research." In The Sage Encyclopedia of Qualitative Research Methods . Lisa M. Given, editor. (Thousand Oaks, CA: Sage, 2008), pp. 396-400; Marius, Richard. and Melvin E. Page. A Short Guide to Writing about History . 9th edition. Boston, MA: Pearson, 2015; Savitt, Ronald. “Historical Research in Marketing.” Journal of Marketing 44 (Autumn, 1980): 52-58;  Gall, Meredith. Educational Research: An Introduction . Chapter 16, Historical Research. 8th ed. Boston, MA: Pearson/Allyn and Bacon, 2007.

Longitudinal Design

A longitudinal study follows the same sample over time and makes repeated observations. For example, with longitudinal surveys, the same group of people is interviewed at regular intervals, enabling researchers to track changes over time and to relate them to variables that might explain why the changes occur. Longitudinal research designs describe patterns of change and help establish the direction and magnitude of causal relationships. Measurements are taken on each variable over two or more distinct time periods. This allows the researcher to measure change in variables over time. It is a type of observational study sometimes referred to as a panel study.

  • Longitudinal data facilitate the analysis of the duration of a particular phenomenon.
  • Enables survey researchers to get close to the kinds of causal explanations usually attainable only with experiments.
  • The design permits the measurement of differences or change in a variable from one period to another [i.e., the description of patterns of change over time].
  • Longitudinal studies facilitate the prediction of future outcomes based upon earlier factors.
  • The data collection method may change over time.
  • Maintaining the integrity of the original sample can be difficult over an extended period of time.
  • It can be difficult to show more than one variable at a time.
  • This design often needs qualitative research data to explain fluctuations in the results.
  • A longitudinal research design assumes present trends will continue unchanged.
  • It can take a long period of time to gather results.
  • There is a need to have a large sample size and accurate sampling to reach representativeness.

Anastas, Jeane W. Research Design for Social Work and the Human Services . Chapter 6, Flexible Methods: Relational and Longitudinal Research. 2nd ed. New York: Columbia University Press, 1999; Forgues, Bernard, and Isabelle Vandangeon-Derumez. "Longitudinal Analyses." In Doing Management Research . Raymond-Alain Thiétart and Samantha Wauchope, editors. (London, England: Sage, 2001), pp. 332-351; Kalaian, Sema A. and Rafa M. Kasim. "Longitudinal Studies." In Encyclopedia of Survey Research Methods . Paul J. Lavrakas, ed. (Thousand Oaks, CA: Sage, 2008), pp. 440-441; Menard, Scott, editor. Longitudinal Research . Thousand Oaks, CA: Sage, 2002; Ployhart, Robert E. and Robert J. Vandenberg. "Longitudinal Research: The Theory, Design, and Analysis of Change.” Journal of Management 36 (January 2010): 94-120; Longitudinal Study. Wikipedia.

Meta-Analysis Design

Meta-analysis is an analytical methodology designed to systematically evaluate and summarize the results from a number of individual studies, thereby increasing the overall sample size and the ability of the researcher to study effects of interest. The purpose is not simply to summarize existing knowledge, but to develop a new understanding of a research problem using synoptic reasoning. The main objectives of meta-analysis include analyzing differences in the results among studies and increasing the precision with which effects are estimated; a minimal pooled-effect sketch follows the list below. A well-designed meta-analysis depends upon strict adherence to the criteria used for selecting studies and the availability of information in each study to properly analyze their findings. Lack of information can severely limit the types of analyses and conclusions that can be reached. In addition, the more dissimilarity there is in the results among individual studies [heterogeneity], the more difficult it is to justify interpretations that govern a valid synopsis of results. A meta-analysis needs to fulfill the following requirements to ensure the validity of your findings:

  • Clearly defined description of objectives, including precise definitions of the variables and outcomes that are being evaluated;
  • A well-reasoned and well-documented justification for identification and selection of the studies;
  • Assessment and explicit acknowledgment of any researcher bias in the identification and selection of those studies;
  • Description and evaluation of the degree of heterogeneity among the sample size of studies reviewed; and,
  • Justification of the techniques used to evaluate the studies.
  • Can be an effective strategy for determining gaps in the literature.
  • Provides a means of reviewing research published about a particular topic over an extended period of time and from a variety of sources.
  • Is useful in clarifying what policy or programmatic actions can be justified on the basis of analyzing research results from multiple studies.
  • Provides a method for overcoming small sample sizes in individual studies that previously may have had little relationship to each other.
  • Can be used to generate new hypotheses or highlight research problems for future studies.
  • Small violations in defining the criteria used for content analysis can lead to findings that are difficult to interpret and/or meaningless.
  • A large sample size can yield reliable, but not necessarily valid, results.
  • A lack of uniformity regarding, for example, the type of literature reviewed, how methods are applied, and how findings are measured within the sample of studies you are analyzing, can make the process of synthesis difficult to perform.
  • Depending on the sample size, the process of reviewing and synthesizing multiple studies can be very time consuming.
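
The pooled-effect sketch mentioned above: a minimal fixed-effect (inverse-variance) meta-analysis in Python, assuming each included study reports an effect size and its standard error. The effect sizes are hypothetical, and the fixed-effect model is only one of several possible synthesis choices.

```python
# Minimal sketch of a fixed-effect (inverse-variance) meta-analysis.
# Effect sizes and standard errors below are hypothetical.
import numpy as np

effects = np.array([0.30, 0.45, 0.22, 0.55])      # hypothetical study effects
std_errors = np.array([0.12, 0.15, 0.10, 0.20])   # hypothetical standard errors

weights = 1 / std_errors**2                       # inverse-variance weights
pooled = np.sum(weights * effects) / np.sum(weights)
pooled_se = np.sqrt(1 / np.sum(weights))

low, high = pooled - 1.96 * pooled_se, pooled + 1.96 * pooled_se
print(f"Pooled effect: {pooled:.3f} (95% CI {low:.3f} to {high:.3f})")
```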

Beck, Lewis W. "The Synoptic Method." The Journal of Philosophy 36 (1939): 337-345; Cooper, Harris, Larry V. Hedges, and Jeffrey C. Valentine, eds. The Handbook of Research Synthesis and Meta-Analysis . 2nd edition. New York: Russell Sage Foundation, 2009; Guzzo, Richard A., Susan E. Jackson and Raymond A. Katzell. “Meta-Analysis Analysis.” In Research in Organizational Behavior , Volume 9. (Greenwich, CT: JAI Press, 1987), pp 407-442; Lipsey, Mark W. and David B. Wilson. Practical Meta-Analysis . Thousand Oaks, CA: Sage Publications, 2001; Study Design 101. Meta-Analysis. The Himmelfarb Health Sciences Library, George Washington University; Timulak, Ladislav. “Qualitative Meta-Analysis.” In The SAGE Handbook of Qualitative Data Analysis . Uwe Flick, editor. (Los Angeles, CA: Sage, 2013), pp. 481-495; Walker, Esteban, Adrian V. Hernandez, and Micheal W. Kattan. "Meta-Analysis: It's Strengths and Limitations." Cleveland Clinic Journal of Medicine 75 (June 2008): 431-439.

Mixed-Method Design

  • Narrative and non-textual information can add meaning to numeric data, while numeric data can add precision to narrative and non-textual information.
  • Can utilize existing data while at the same time generating and testing a grounded theory approach to describe and explain the phenomenon under study.
  • A broader, more complex research problem can be investigated because the researcher is not constrained by using only one method.
  • The strengths of one method can be used to overcome the inherent weaknesses of another method.
  • Can provide stronger, more robust evidence to support a conclusion or set of recommendations.
  • May generate new knowledge or uncover hidden insights, patterns, or relationships that a single methodological approach might not reveal.
  • Produces more complete knowledge and understanding of the research problem that can be used to increase the generalizability of findings applied to theory or practice.
  • A researcher must be proficient in understanding how to apply multiple methods to investigating a research problem as well as be proficient in optimizing how to design a study that coherently melds them together.
  • Can increase the likelihood of conflicting results or ambiguous findings that inhibit drawing a valid conclusion or setting forth a recommended course of action [e.g., sample interview responses do not support existing statistical data].
  • Because the research design can be very complex, reporting the findings requires a well-organized narrative, clear writing style, and precise word choice.
  • Design invites collaboration among experts. However, merging different investigative approaches and writing styles requires more attention to the overall research process than studies conducted using only one methodological paradigm.
  • Concurrent merging of quantitative and qualitative research requires greater attention to having adequate sample sizes, using comparable samples, and applying a consistent unit of analysis. For sequential designs where one phase of qualitative research builds on the quantitative phase or vice versa, decisions about what results from the first phase to use in the next phase, the choice of samples and estimating reasonable sample sizes for both phases, and the interpretation of results from both phases can be difficult.
  • Due to multiple forms of data being collected and analyzed, this design requires extensive time and resources to carry out the multiple steps involved in data gathering and interpretation.

Burch, Patricia and Carolyn J. Heinrich. Mixed Methods for Policy Research and Program Evaluation . Thousand Oaks, CA: Sage, 2016; Creswell, John W. et al. Best Practices for Mixed Methods Research in the Health Sciences . Bethesda, MD: Office of Behavioral and Social Sciences Research, National Institutes of Health, 2010; Creswell, John W. Research Design: Qualitative, Quantitative, and Mixed Methods Approaches . 4th edition. Thousand Oaks, CA: Sage Publications, 2014; Domínguez, Silvia, editor. Mixed Methods Social Networks Research . Cambridge, UK: Cambridge University Press, 2014; Hesse-Biber, Sharlene Nagy. Mixed Methods Research: Merging Theory with Practice . New York: Guilford Press, 2010; Niglas, Katrin. “How the Novice Researcher Can Make Sense of Mixed Methods Designs.” International Journal of Multiple Research Approaches 3 (2009): 34-46; Onwuegbuzie, Anthony J. and Nancy L. Leech. “Linking Research Questions to Mixed Methods Data Analysis Procedures.” The Qualitative Report 11 (September 2006): 474-498; Tashakorri, Abbas and John W. Creswell. “The New Era of Mixed Methods.” Journal of Mixed Methods Research 1 (January 2007): 3-7; Zhanga, Wanqing. “Mixed Methods Application in Health Intervention Research: A Multiple Case Study.” International Journal of Multiple Research Approaches 8 (2014): 24-35.

Observational Design

This type of research design draws a conclusion by comparing subjects against a control group, in cases where the researcher has no control over the experiment. There are two general types of observational designs. In direct observations, people know that you are watching them. Unobtrusive measures involve any method for studying behavior where individuals do not know they are being observed. An observational study allows a useful insight into a phenomenon and avoids the ethical and practical difficulties of setting up a large and cumbersome research project.

  • Observational studies are usually flexible and do not necessarily need to be structured around a hypothesis about what you expect to observe [data is emergent rather than pre-existing].
  • The researcher is able to collect in-depth information about a particular behavior.
  • Can reveal interrelationships among multifaceted dimensions of group interactions.
  • You can generalize your results to real life situations.
  • Observational research is useful for discovering what variables may be important before applying other methods like experiments.
  • Observation research designs account for the complexity of group behaviors.
  • Reliability of data is low because seeing behaviors occur over and over again may be a time-consuming task, and the observations are difficult to replicate.
  • In observational research, findings may only reflect a unique sample population and, thus, cannot be generalized to other groups.
  • There can be problems with bias as the researcher may only "see what they want to see."
  • There is no possibility to determine "cause and effect" relationships since nothing is manipulated.
  • Sources or subjects may not all be equally credible.
  • Any group that is knowingly studied is altered to some degree by the presence of the researcher, therefore, potentially skewing any data collected.

Atkinson, Paul and Martyn Hammersley. “Ethnography and Participant Observation.” In Handbook of Qualitative Research . Norman K. Denzin and Yvonna S. Lincoln, eds. (Thousand Oaks, CA: Sage, 1994), pp. 248-261; Observational Research. Research Methods by Dummies. Department of Psychology. California State University, Fresno, 2006; Patton, Michael Quinn. Qualitative Research and Evaluation Methods . Chapter 6, Fieldwork Strategies and Observational Methods. 3rd ed. Thousand Oaks, CA: Sage, 2002; Payne, Geoff and Judy Payne. "Observation." In Key Concepts in Social Research . The SAGE Key Concepts series. (London, England: Sage, 2004), pp. 158-162; Rosenbaum, Paul R. Design of Observational Studies . New York: Springer, 2010; Williams, J. Patrick. "Nonparticipant Observation." In The Sage Encyclopedia of Qualitative Research Methods . Lisa M. Given, editor. (Thousand Oaks, CA: Sage, 2008), pp. 562-563.

Philosophical Design

Understood more as a broad approach to examining a research problem than a methodological design, philosophical analysis and argumentation is intended to challenge deeply embedded, often intractable, assumptions underpinning an area of study. This approach uses the tools of argumentation derived from philosophical traditions, concepts, models, and theories to critically explore and challenge, for example, the relevance of logic and evidence in academic debates, to analyze arguments about fundamental issues, or to discuss the root of existing discourse about a research problem. These overarching tools of analysis can be framed in three ways:

  • Ontology -- the study that describes the nature of reality; for example, what is real and what is not, what is fundamental and what is derivative?
  • Epistemology -- the study that explores the nature of knowledge; for example, on what do knowledge and understanding depend, and how can we be certain of what we know?
  • Axiology -- the study of values; for example, what values does an individual or group hold and why? How are values related to interest, desire, will, experience, and means-to-end? And, what is the difference between a matter of fact and a matter of value?
  • Can provide a basis for applying ethical decision-making to practice.
  • Functions as a means of gaining greater self-understanding and self-knowledge about the purposes of research.
  • Brings clarity to general guiding practices and principles of an individual or group.
  • Philosophy informs methodology.
  • Refines concepts and theories that are invoked in relatively unreflective modes of thought and discourse.
  • Beyond methodology, philosophy also informs critical thinking about epistemology and the structure of reality (metaphysics).
  • Offers clarity and definition to the practical and theoretical uses of terms, concepts, and ideas.
  • Limited application to specific research problems [answering the "So What?" question in social science research].
  • Analysis can be abstract, argumentative, and limited in its practical application to real-life issues.
  • While a philosophical analysis may render problematic that which was once simple or taken-for-granted, the writing can be dense and subject to unnecessary jargon, overstatement, and/or excessive quotation and documentation.
  • There are limitations in the use of metaphor as a vehicle of philosophical analysis.
  • There can be analytical difficulties in moving from philosophy to advocacy and between abstract thought and application to the phenomenal world.

Burton, Dawn. "Part I, Philosophy of the Social Sciences." In Research Training for Social Scientists . (London, England: Sage, 2000), pp. 1-5; Chapter 4, Research Methodology and Design. Unisa Institutional Repository (UnisaIR), University of South Africa; Jarvie, Ian C., and Jesús Zamora-Bonilla, editors. The SAGE Handbook of the Philosophy of Social Sciences . London: Sage, 2011; Labaree, Robert V. and Ross Scimeca. “The Philosophical Problem of Truth in Librarianship.” The Library Quarterly 78 (January 2008): 43-70; Maykut, Pamela S. Beginning Qualitative Research: A Philosophic and Practical Guide . Washington, DC: Falmer Press, 1994; McLaughlin, Hugh. "The Philosophy of Social Research." In Understanding Social Work Research . 2nd edition. (London: SAGE Publications Ltd., 2012), pp. 24-47; Stanford Encyclopedia of Philosophy . Metaphysics Research Lab, CSLI, Stanford University, 2013.

Sequential Design

  • The researcher has limitless options when it comes to sample size and the sampling schedule.
  • Due to the repetitive nature of this research design, minor changes and adjustments can be done during the initial parts of the study to correct and hone the research method.
  • This is a useful design for exploratory studies.
  • There is very little effort on the part of the researcher when performing this technique. It is generally not expensive, time consuming, or workforce intensive.
  • Because the study is conducted serially, the results of one sample are known before the next sample is taken and analyzed. This provides opportunities for continuous improvement of sampling and methods of analysis.
  • The sampling method is not representative of the entire population. The only way to approach representativeness is for the researcher to use a sample large enough to represent a significant portion of the entire population. In this case, moving on to study a second or more specific sample can be difficult.
  • The design cannot be used to create conclusions and interpretations that pertain to an entire population because the sampling technique is not randomized. Generalizability from findings is, therefore, limited.
  • Difficult to account for and interpret variation from one sample to another over time, particularly when using qualitative methods of data collection.

Betensky, Rebecca. Harvard University, Course Lecture Note slides; Bovaird, James A. and Kevin A. Kupzyk. "Sequential Design." In Encyclopedia of Research Design . Neil J. Salkind, editor. (Thousand Oaks, CA: Sage, 2010), pp. 1347-1352; Creswell, John W. et al. “Advanced Mixed-Methods Research Designs.” In Handbook of Mixed Methods in Social and Behavioral Research . Abbas Tashakkori and Charles Teddle, eds. (Thousand Oaks, CA: Sage, 2003), pp. 209-240; Henry, Gary T. "Sequential Sampling." In The SAGE Encyclopedia of Social Science Research Methods . Michael S. Lewis-Beck, Alan Bryman and Tim Futing Liao, editors. (Thousand Oaks, CA: Sage, 2004), pp. 1027-1028; Nataliya V. Ivankova. “Using Mixed-Methods Sequential Explanatory Design: From Theory to Practice.” Field Methods 18 (February 2006): 3-20; Sequential Analysis. Wikipedia.

Systematic Review

  • A systematic review synthesizes the findings of multiple studies related to each other by incorporating strategies of analysis and interpretation intended to reduce biases and random errors.
  • The application of critical exploration, evaluation, and synthesis methods separates insignificant, unsound, or redundant research from the most salient and relevant studies worthy of reflection.
  • They can be used to identify, justify, and refine hypotheses, recognize and avoid hidden problems in prior studies, and explain data inconsistencies and conflicts in data.
  • Systematic reviews can be used to help policy makers formulate evidence-based guidelines and regulations.
  • The use of strict, explicit, and pre-determined methods of synthesis, when applied appropriately, provide reliable estimates about the effects of interventions, evaluations, and effects related to the overarching research problem investigated by each study under review.
  • Systematic reviews illuminate where knowledge or thorough understanding of a research problem is lacking and, therefore, can then be used to guide future research.
  • The accepted inclusion of unpublished studies [i.e., grey literature] ensures the broadest possible way to analyze and interpret research on a topic.
  • Results of the synthesis can be generalized and the findings extrapolated into the general population with more validity than most other types of studies.
  • Systematic reviews do not create new knowledge per se; they are a method for synthesizing existing studies about a research problem in order to gain new insights and determine gaps in the literature.
  • The way researchers have carried out their investigations [e.g., the period of time covered, number of participants, sources of data analyzed, etc.] can make it difficult to effectively synthesize studies.
  • The inclusion of unpublished studies can introduce bias into the review because they may not have undergone a rigorous peer-review process prior to publication. Examples may include conference presentations or proceedings, publications from government agencies, white papers, working papers, and internal documents from organizations, and doctoral dissertations and Master's theses.

Denyer, David and David Tranfield. "Producing a Systematic Review." In The Sage Handbook of Organizational Research Methods .  David A. Buchanan and Alan Bryman, editors. ( Thousand Oaks, CA: Sage Publications, 2009), pp. 671-689; Foster, Margaret J. and Sarah T. Jewell, editors. Assembling the Pieces of a Systematic Review: A Guide for Librarians . Lanham, MD: Rowman and Littlefield, 2017; Gough, David, Sandy Oliver, James Thomas, editors. Introduction to Systematic Reviews . 2nd edition. Los Angeles, CA: Sage Publications, 2017; Gopalakrishnan, S. and P. Ganeshkumar. “Systematic Reviews and Meta-analysis: Understanding the Best Evidence in Primary Healthcare.” Journal of Family Medicine and Primary Care 2 (2013): 9-14; Gough, David, James Thomas, and Sandy Oliver. "Clarifying Differences between Review Designs and Methods." Systematic Reviews 1 (2012): 1-9; Khan, Khalid S., Regina Kunz, Jos Kleijnen, and Gerd Antes. “Five Steps to Conducting a Systematic Review.” Journal of the Royal Society of Medicine 96 (2003): 118-121; Mulrow, C. D. “Systematic Reviews: Rationale for Systematic Reviews.” BMJ 309:597 (September 1994); O'Dwyer, Linda C., and Q. Eileen Wafford. "Addressing Challenges with Systematic Review Teams through Effective Communication: A Case Report." Journal of the Medical Library Association 109 (October 2021): 643-647; Okoli, Chitu, and Kira Schabram. "A Guide to Conducting a Systematic Literature Review of Information Systems Research."  Sprouts: Working Papers on Information Systems 10 (2010); Siddaway, Andy P., Alex M. Wood, and Larry V. Hedges. "How to Do a Systematic Review: A Best Practice Guide for Conducting and Reporting Narrative Reviews, Meta-analyses, and Meta-syntheses." Annual Review of Psychology 70 (2019): 747-770; Torgerson, Carole J. “Publication Bias: The Achilles’ Heel of Systematic Reviews?” British Journal of Educational Studies 54 (March 2006): 89-102; Torgerson, Carole. Systematic Reviews . New York: Continuum, 2003.



Research Design: Definition, Types, Characteristics & Study Examples


A research design is the blueprint for any study. It's the plan that outlines how the research will be carried out. A study design usually includes the methods of data collection, the type of data to be gathered, and how it will be analyzed. Research designs help ensure the study is reliable, valid, and can answer the research question.

Behind every groundbreaking discovery and innovation lies a well-designed study. Whether you're investigating a new technology or exploring a social phenomenon, a solid research design is key to achieving reliable results. But what exactly does it mean, and how do you create an effective one? Read on to find out:

  • Detailed definition
  • Types of research study designs
  • How to write a research design
  • Useful examples.

Whether you're a seasoned researcher or just getting started, understanding the core principles will help you conduct better studies and make more meaningful contributions.

What Is a Research Design: Definition

Research design is an overall study plan outlining a specific approach to investigating a research question. It covers particular methods and strategies for collecting, measuring and analyzing data. Students are required to build a study design either as an individual task or as a separate chapter in a research paper, thesis or dissertation.

Before designing a research project, you need to consider a series of aspects of your future study:

  • Research aims: What research objectives do you want to accomplish with your study? What approach will you take to get there? Will you use a quantitative, qualitative, or mixed methods approach?
  • Type of data: Will you gather new data (primary research), or rely on existing data (secondary research) to answer your research question?
  • Sampling methods: How will you pick participants? What criteria will you use to ensure your sample is representative of the population?
  • Data collection methods: What tools or instruments will you use to gather data (e.g., conducting a survey, interview, or observation)?
  • Measurement: What metrics will you use to capture and quantify data?
  • Data analysis: What statistical or qualitative techniques will you use to make sense of your findings?

By using a well-designed research plan, you can make sure your findings are solid and can be generalized to a larger group.

Research design example

You are going to investigate the effectiveness of a mindfulness-based intervention for reducing stress and anxiety among college students. You decide to organize an experiment to explore the impact. Participants should be randomly assigned to either an intervention group or a control group. You need to conduct pre- and post-intervention assessments using self-report measures of stress and anxiety.

What Makes a Good Study Design? 

To design a research study that works, you need to carefully think things through. Make sure your strategy is tailored to your research topic and watch out for potential biases. Your procedures should be flexible enough to accommodate changes that may arise during the course of research. 

A good research design should be:

  • Clear and methodologically sound
  • Feasible and realistic
  • Knowledge-driven.

By following these guidelines, you'll set yourself up for success and be able to produce reliable results.

Research Study Design Structure

A structured research design provides a clear and organized plan for carrying out a study. It helps researchers to stay on track and ensure that the study stays within the bounds of acceptable time, resources, and funding.

A typical design includes 5 main components:

  • Research question(s): Central research topic(s) or issue(s).
  • Sampling strategy: Method for selecting participants or subjects.
  • Data collection techniques: Tools or instruments for retrieving data.
  • Data analysis approaches: Techniques for interpreting and scrutinizing assembled data.
  • Ethical considerations: Principles for protecting human subjects (e.g., obtaining written consent, ensuring confidentiality).

Research Design Essential Characteristics

Creating a research design lays the foundation for your whole study, and the cost of getting it wrong is high. This is not something scholars can afford, especially when financial resources or a considerable amount of time has been invested. Choose the wrong strategy and you risk undermining your whole study and wasting resources.

To avoid any unpleasant surprises, make sure your study conforms to the key characteristics. Here are some core features of research designs:

  • Reliability: Reliability is the stability of your measures or instruments over time. A reliable research design is one that can be reproduced in the same way and deliver consistent outcomes. It should also provide accurate representations of actual conditions and guarantee data quality (a minimal reliability-check sketch follows this list).
  • Validity: For a study to be valid, it must measure what it claims to measure. This means that methodological approaches should be carefully considered and aligned to the main research question(s).
  • Generalizability: Generalizability means that your insights can be applied beyond the scope of the study. When making inferences, researchers must take into account determinants such as sample size, sampling technique, and context.
  • Neutrality: A study should be free from personal or cognitive biases to ensure an impartial investigation of the research topic. Avoid favoring any particular group or outcome.
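
The reliability-check sketch mentioned in the list: a minimal Python example computing Cronbach's alpha, one common indicator of the internal consistency of a multi-item measure. The item responses and this particular check are illustrative assumptions, not taken from the article.

```python
# Minimal sketch: Cronbach's alpha for a multi-item scale (internal consistency).
# The responses below (respondents x items) are hypothetical.
import numpy as np

def cronbach_alpha(item_scores: np.ndarray) -> float:
    """item_scores: a respondents-by-items matrix of scale responses."""
    item_vars = item_scores.var(axis=0, ddof=1)          # variance of each item
    total_var = item_scores.sum(axis=1).var(ddof=1)      # variance of total scores
    k = item_scores.shape[1]
    return k / (k - 1) * (1 - item_vars.sum() / total_var)

responses = np.array([
    [4, 5, 4, 5],
    [2, 2, 3, 2],
    [5, 4, 5, 5],
    [3, 3, 2, 3],
    [4, 4, 4, 5],
])
print(f"Cronbach's alpha = {cronbach_alpha(responses):.2f}")
```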

Key Concepts in Research Design

Now let’s discuss the fundamental principles that underpin study designs in research. This will help you develop a strong framework and make sure all the puzzles fit together.

Primary concepts

Types of Approaches to Research Design

Study frameworks fall into 2 major categories depending on the approach to data collection you opt for. The 2 main types of study designs in research are qualitative and quantitative research. Both approaches have their unique strengths and weaknesses and can be utilized depending on the nature of the information you are dealing with.

Quantitative Research  

Quantitative study is focused on establishing empirical relationships between variables and collecting numerical data. It involves using statistics, surveys, and experiments to measure the effects of certain phenomena. This research design type looks at hard evidence and provides measurements that can be analyzed using statistical techniques. 

Qualitative Research 

Qualitative approach is used to examine the behavior, attitudes, and perceptions of individuals in a given environment. This type of study design relies on unstructured data retrieved through interviews, open-ended questions and observational methods. 


Types of Research Designs & Examples

Choosing a research design may be tough, especially for first-timers. A great way to get started is to pick the design that best fits your objectives. There are 4 different types of research designs you can opt for to carry out your investigation:

  • Experimental
  • Correlational
  • Descriptive
  • Diagnostic/explanatory.

Below we will go through each type and offer you examples of study designs to assist you with selection.

1. Experimental

In experimental research design, scientists manipulate one or more independent variables and control other factors in order to observe their effect on a dependent variable. This type of research design is used for experiments where the goal is to determine a causal relationship.

Its core characteristics include:

  • Randomization
  • Manipulation
  • Replication.
Example: A pharmaceutical company wants to test a new drug to investigate its effectiveness in treating a specific medical condition. Researchers would randomly assign participants to either a control group (receiving a placebo) or an experimental group (receiving the new drug). They would rigorously control other variables (e.g., age, medical history) while manipulating only the treatment, in order to get reliable results.

2. Correlational

Correlational study is used to examine existing relationships between variables. In this type of design, you don’t manipulate any variables. Researchers just focus on observing and measuring naturally occurring relationships.

Correlational studies typically share the following features:

  • Data collection from natural settings
  • No intervention by the researcher
  • Observation over time.
Example: A research team wants to examine the relationship between academic performance and extracurricular activities. They would observe students' performance in courses and measure how much time they spend engaging in extracurricular activities.
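
A minimal Python sketch of how such a correlational analysis might look, assuming hypothetical data on weekly extracurricular hours and GPA; the numbers are invented for illustration only.

```python
# Minimal correlational sketch: no manipulation, just measuring the association
# between two naturally occurring variables. All data are hypothetical.
from scipy import stats

extracurricular_hours = [2, 5, 1, 7, 3, 6, 4, 8, 0, 5]               # hours/week
gpa = [3.1, 3.4, 2.8, 3.7, 3.0, 3.6, 3.3, 3.8, 2.6, 3.5]

r, p_value = stats.pearsonr(extracurricular_hours, gpa)
print(f"Pearson r = {r:.2f}, p = {p_value:.4f}")
```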

3. Descriptive 

Descriptive research design is all about describing a particular population or phenomenon without any intervention. This study design is especially helpful when we're not sure about something and want to understand it better.

Descriptive studies are characterized by the following features:

  • Random and convenience sampling
  • Observation
  • No intervention.
Example: A psychologist wants to understand how parents' behavior affects their child's self-concept. They would observe the interaction between children and their parents in a natural setting. The gathered information will help them get an overview of the situation and recognize patterns.

4. Diagnostic

Diagnostic or explanatory research is used to determine the cause of an existing problem or a chronic symptom. Unlike other types of design, here scientists try to understand why something is happening. 

Among essential hallmarks of explanatory studies are: 

  • Testing hypotheses and theories
  • Examining existing data
  • Comparative analysis.
Example: A public health specialist wants to identify the cause of an outbreak of water-borne disease in a certain area. They would inspect water samples and records to compare them with similar outbreaks in other areas. This will help uncover the reasons behind the outbreak.

How to Design a Research Study: Step-by-Step Process

When designing your research, don't just jump into it. It's important to take the time to do things right in order to attain accurate findings. Follow these simple steps on how to design a study to get the most out of your project.

1. Determine Your Aims 

The first step in the research design process is figuring out what you want to achieve. This involves identifying your research question, goals and specific objectives you want to accomplish. Think about whether you want to explore a specific issue or develop a new theory. Setting your aims from the get-go will help you stay focused and ensure that your study is driven by purpose.

Once you are clear about your goals, you need to decide on the main approach. Will you use qualitative or quantitative methods? Or perhaps a mixture of both?

2. Select a Type of Research Design

Choosing a suitable design requires considering multiple factors, such as your research question, data collection methods, and resources. There are various research design types, each with its own advantages and limitations. Think about the kind of data that would be most useful to address your questions. Ultimately, a well-devised strategy should help you gather accurate data to achieve your objectives.

3. Define Your Population and Sampling Methods

To design a research project, it is essential to establish your target population and parameters for selecting participants. First, identify a cohort of individuals who share common characteristics and possess relevant experiences. 

For instance, if you are researching the impact of social media on mental health, your population could be young adults aged 18-25 who use social media frequently.

With your population in mind, you can now choose an optimal sampling method. Sampling is basically the process of narrowing down your target group to only those individuals who will participate in your study. At this point, you need to decide whether you want to randomly choose participants (probability sampling) or apply specific selection criteria (non-probability sampling).

To examine the influence of social media on mental well-being, we will divide the whole population into smaller subgroups using stratified random sampling. Then, we will randomly pick participants from each subcategory to make sure that findings also hold for the broader group of young adults.
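
A minimal Python sketch of the stratified sampling step described above, assuming a hypothetical population frame with an age-band column; the frame, sampling fraction, and column names are illustrative assumptions.

```python
# Minimal sketch of stratified random sampling: split the population into strata
# (here, age bands) and draw a random fraction from each. Data are hypothetical.
import pandas as pd

population = pd.DataFrame({
    "person_id": range(1, 1001),
    "age_band": ["18-19", "20-21", "22-23", "24-25"] * 250,
})

# Sample 10% from every stratum so each age band is represented proportionally
sample = (
    population
    .groupby("age_band", group_keys=False)
    .sample(frac=0.10, random_state=42)
)
print(sample["age_band"].value_counts())
```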

4. Decide on Your Data Collection Methods

When devising your study, it is also important to consider how you will collect data. Depending on the type of design you are using, you may employ different methods. Below you can see various data collection techniques suited for different research designs.

Data collection methods in various studies

Additionally, if you plan on integrating existing data sources like medical records or publicly available datasets, you want to mention this as well. 

5. Arrange Your Data Collection Process

Your data collection process should also be meticulously thought out. This stage involves scheduling interviews, arranging questionnaires and preparing all the necessary tools for collecting information from participants. Detail how long your study will take and what procedures will be followed for recording and analyzing the data. 

State which variables will be studied and what measures or scales will be used when assessing each variable.

Measures and scales 

Measures and scales are tools used to quantify variables in research. A measure is any method used to collect data on a variable, while a scale is a set of items or questions used to measure a particular construct or concept. Different types of scales include nominal, ordinal, interval, and ratio, each of which has distinct properties.

Operationalization 

When working with abstract information that needs to be quantified, researchers often operationalize the variable by defining it in concrete terms that can be measured or observed. This allows the abstract concept to be studied systematically and rigorously. 

Operationalization in study design example

If studying the concept of happiness, researchers might operationalize it by using a scale that measures positive affect or life satisfaction. This allows us to quantify happiness and inspect its relationship with other variables, such as income or social support.
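
A minimal Python sketch of that operationalization: happiness is operationalized as the mean of several hypothetical 1-7 life-satisfaction items, which can then be related to another measured variable such as income. All values are invented for illustration.

```python
# Minimal sketch of operationalizing an abstract construct ("happiness") as the
# mean of five hypothetical 1-7 Likert items, then relating it to income.
import numpy as np
from scipy import stats

# Hypothetical responses: 6 participants x 5 items (1 = disagree, 7 = agree)
items = np.array([
    [6, 5, 6, 7, 6],
    [3, 4, 3, 2, 3],
    [5, 5, 6, 5, 5],
    [2, 3, 2, 3, 2],
    [7, 6, 7, 6, 7],
    [4, 4, 5, 4, 4],
])
life_satisfaction = items.mean(axis=1)       # one operationalized score per person

income = [52, 34, 47, 28, 66, 41]            # hypothetical income (thousands)
r, p = stats.pearsonr(life_satisfaction, income)
print(f"Life satisfaction vs income: r = {r:.2f}, p = {p:.3f}")
```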

Remember that research design should be flexible enough to adjust for any unforeseen developments. Even with rigorous preparation, you may still face unexpected challenges during your project. That’s why you need to work out contingency plans when designing research.

6. Choose Data Analysis Techniques

It’s impossible to design research without specifying how you are going to analyze the data. To select a proper method, take into account the type of data you are dealing with and how many variables you need to analyze.

Qualitative data may require thematic analysis or content analysis.

Quantitative data, on the other hand, could be processed with more sophisticated statistical analysis approaches such as regression analysis, factor analysis or descriptive statistics.
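
As a minimal illustration of these quantitative options, the sketch below computes descriptive statistics and fits a simple linear regression on hypothetical data; the variables and values are assumptions made only for this example.

```python
# Minimal sketch: descriptive statistics followed by a simple linear regression.
# Study hours and exam scores below are hypothetical.
import numpy as np
from scipy import stats

study_hours = np.array([2, 4, 6, 8, 10, 12, 14])
exam_score = np.array([55, 60, 63, 70, 74, 80, 85])

# Descriptive statistics summarize each variable before modelling
print(f"Hours: mean {study_hours.mean():.1f}, SD {study_hours.std(ddof=1):.1f}")
print(f"Score: mean {exam_score.mean():.1f}, SD {exam_score.std(ddof=1):.1f}")

# Simple regression estimates how scores change per additional study hour
fit = stats.linregress(study_hours, exam_score)
print(f"score ≈ {fit.intercept:.1f} + {fit.slope:.2f} * hours (R² = {fit.rvalue**2:.2f})")
```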

Finally, don’t forget about ethical considerations. Opt for those methods that minimize harm to participants and protect their rights.

Research Design Checklist

Having a checklist in front of you will help you design your research flawlessly. You'll find the full checklist at the end of this article.

Bottom Line on Research Design & Study Types

Designing a research project involves making countless decisions that can affect the quality of your work. By planning out each step and selecting the best methods for data collection and analysis, you can ensure that your project is conducted professionally.

We hope this article has helped you to better understand the research design process.


FAQ About Research Study Designs

1. What is a study design?

Study design, also called research design, is the overall plan for a project, including its purpose, methodology, data collection and analysis techniques. A good design ensures that your project is conducted in an organized and ethical manner. It also provides clear guidelines for replicating or extending a study in the future.

2. What is the purpose of a research design?

The purpose of a research design is to provide a structure and framework for your project. By outlining your methodology, data collection techniques, and analysis methods in advance, you can ensure that your project will be conducted effectively.

3. What is the importance of research designs?

Research designs are critical to the success of any research project for several reasons. Specifically, study designs grant:

  • Clear direction for all stages of a study
  • Validity and reliability of findings
  • Roadmap for replication or further extension
  • Accurate results by controlling for potential bias
  • Comparison between studies by providing consistent guidelines.

By following an established plan, researchers can be sure that their projects are organized, ethical, and reliable.

4. What are the 4 types of study designs?

There are generally 4 types of study designs commonly used in research:

  • Experimental studies: investigate cause-and-effect relationships by manipulating the independent variable.
  • Correlational studies: examine relationships between 2 or more variables without manipulating them.
  • Descriptive studies: describe the characteristics of a population or phenomenon without making any inferences about cause and effect.
  • Explanatory studies: intended to explain causal relationships.


For more advanced studies, you can even combine several types. Mixed-methods research may come in handy when exploring complex phenomena that cannot be adequately captured by one method alone.

Research design checklist:

  • I clearly defined my research question and its significance.
  • I considered crucial factors such as the nature of my study, the type of required data, and available resources to choose a suitable design.
  • My sample size is sufficient to provide statistically significant results.
  • My data collection methods are reliable and valid.
  • My analysis methods are appropriate for the type of data I will be gathering.
  • My research design protects the rights and privacy of my participants.
  • I created a realistic timeline for research, including deadlines for data collection, analysis, and write-up.
  • I considered funding sources and potential limitations.


Guide to Experimental Design | Overview, 5 steps & Examples

Published on December 3, 2019 by Rebecca Bevans . Revised on June 21, 2023.

Experiments are used to study causal relationships . You manipulate one or more independent variables and measure their effect on one or more dependent variables.

Experimental design creates a set of procedures to systematically test a hypothesis. A good experimental design requires a strong understanding of the system you are studying.

There are five key steps in designing an experiment:

  • Consider your variables and how they are related
  • Write a specific, testable hypothesis
  • Design experimental treatments to manipulate your independent variable
  • Assign subjects to groups, either between-subjects or within-subjects
  • Plan how you will measure your dependent variable

For valid conclusions, you also need to select a representative sample and control any extraneous variables that might influence your results; doing so minimizes several types of research bias, particularly sampling bias, survivorship bias, and attrition bias. If random assignment of participants to control and treatment groups is impossible, unethical, or highly difficult, consider an observational study instead.

Table of contents

  • Step 1: Define your variables
  • Step 2: Write your hypothesis
  • Step 3: Design your experimental treatments
  • Step 4: Assign your subjects to treatment groups
  • Step 5: Measure your dependent variable
  • Other interesting articles
  • Frequently asked questions about experiments

You should begin with a specific research question. We will work with two research question examples used throughout this guide: one from health sciences (how phone use before bed affects sleep) and one from ecology (how warming affects soil respiration).

To translate your research question into an experimental hypothesis, you need to define the main variables and make predictions about how they are related.

Start by simply listing the independent and dependent variables .

Then you need to think about possible extraneous and confounding variables and consider how you might control  them in your experiment.

Finally, you can put these variables together into a diagram. Use arrows to show the possible relationships between variables and include signs to show the expected direction of the relationships.

Diagram of the relationship between variables in a sleep experiment

Here we predict that increasing temperature will increase soil respiration and decrease soil moisture, while decreasing soil moisture will lead to decreased soil respiration.
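As an optional illustration, the hypothesized relationships above can be written down explicitly as signed cause-and-effect pairs before any data are collected. The following is a minimal, illustrative Python sketch; the variable names follow the soil-warming example, and nothing here comes from the original guide:

```python
# Record the predicted direction of each hypothesized relationship
# from the soil-warming example (illustrative placeholder values only).
predicted_effects = {
    ("temperature", "soil respiration"): "+",   # warming is predicted to increase respiration
    ("temperature", "soil moisture"): "-",      # warming is predicted to dry the soil
    ("soil moisture", "soil respiration"): "+", # lower moisture is predicted to lower respiration
}

for (cause, effect), sign in predicted_effects.items():
    print(f"{cause} --({sign})--> {effect}")
```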


Now that you have a strong conceptual understanding of the system you are studying, you should be able to write a specific, testable hypothesis that addresses your research question.

The next steps will describe how to design a controlled experiment . In a controlled experiment, you must be able to:

  • Systematically and precisely manipulate the independent variable(s).
  • Precisely measure the dependent variable(s).
  • Control any potential confounding variables.

If your study system doesn’t match these criteria, there are other types of research you can use to answer your research question.

How you manipulate the independent variable can affect the experiment’s external validity – that is, the extent to which the results can be generalized and applied to the broader world.

First, you may need to decide how widely to vary your independent variable.

  • just slightly above the natural range for your study region.
  • over a wider range of temperatures to mimic future warming.
  • over an extreme range that is beyond any possible natural variation.

Second, you may need to choose how finely to vary your independent variable. Sometimes this choice is made for you by your experimental system, but often you will need to decide, and this will affect how much you can infer from your results.

  • a categorical variable : either as binary (yes/no) or as levels of a factor (no phone use, low phone use, high phone use).
  • a continuous variable (minutes of phone use measured every night).

How you apply your experimental treatments to your test subjects is crucial for obtaining valid and reliable results.

First, you need to consider the study size : how many individuals will be included in the experiment? In general, the more subjects you include, the greater your experiment’s statistical power , which determines how much confidence you can have in your results.
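To give a rough sense of how study size relates to statistical power, here is a minimal sketch using the standard normal-approximation formula for a two-sided, two-sample comparison, n per group ≈ 2(z₁₋α/₂ + z_power)² / d². The effect size, alpha, and power values below are arbitrary placeholders rather than recommendations from this guide:

```python
from scipy.stats import norm

def n_per_group(effect_size, alpha=0.05, power=0.80):
    """Approximate sample size per group for a two-sided, two-sample comparison.

    Normal approximation: n ≈ 2 * (z_{1 - alpha/2} + z_{power})^2 / d^2,
    where d is the standardized effect size (Cohen's d).
    """
    z_alpha = norm.ppf(1 - alpha / 2)
    z_power = norm.ppf(power)
    return 2 * (z_alpha + z_power) ** 2 / effect_size ** 2

# Placeholder values: a medium standardized effect size of 0.5
print(round(n_per_group(0.5)))  # roughly 63 participants per group
```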

Then you need to randomly assign your subjects to treatment groups . Each group receives a different level of the treatment (e.g. no phone use, low phone use, high phone use).

You should also include a control group , which receives no treatment. The control group tells us what would have happened to your test subjects without any experimental intervention.

When assigning your subjects to groups, there are two main choices you need to make:

  • A completely randomized design vs a randomized block design .
  • A between-subjects design vs a within-subjects design .

Randomization

An experiment can be completely randomized or randomized within blocks (aka strata); a minimal sketch of both approaches follows the list below:

  • In a completely randomized design , every subject is assigned to a treatment group at random.
  • In a randomized block design (aka stratified random design), subjects are first grouped according to a characteristic they share, and then randomly assigned to treatments within those groups.
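Here is the sketch referred to above. It assigns a small set of made-up subjects to the phone-use treatments, first completely at random and then within blocks defined by a shared characteristic (age group is an invented blocking variable, not one suggested by the original article):

```python
import random
from collections import defaultdict

treatments = ["no phone use", "low phone use", "high phone use"]
subjects = [{"id": i, "age_group": random.choice(["under 30", "30 plus"])} for i in range(12)]

# Completely randomized design: shuffle everyone, then deal subjects out to treatments.
random.shuffle(subjects)
completely_randomized = {t: subjects[i::len(treatments)] for i, t in enumerate(treatments)}

# Randomized block design: group subjects by a shared characteristic first,
# then randomize to treatments *within* each block.
blocks = defaultdict(list)
for s in subjects:
    blocks[s["age_group"]].append(s)

randomized_block = defaultdict(list)
for block in blocks.values():
    random.shuffle(block)
    for i, t in enumerate(treatments):
        randomized_block[t].extend(block[i::len(treatments)])

print({t: [s["id"] for s in group] for t, group in completely_randomized.items()})
print({t: [s["id"] for s in group] for t, group in randomized_block.items()})
```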

Sometimes randomization isn’t practical or ethical , so researchers create partially-random or even non-random designs. An experimental design where treatments aren’t randomly assigned is called a quasi-experimental design .

Between-subjects vs. within-subjects

In a between-subjects design (also known as an independent measures design or classic ANOVA design), individuals receive only one of the possible levels of an experimental treatment.

In medical or social research, you might also use matched pairs within your between-subjects design to make sure that each treatment group contains the same variety of test subjects in the same proportions.

In a within-subjects design (also known as a repeated measures design), every individual receives each of the experimental treatments consecutively, and their responses to each treatment are measured.

Within-subjects or repeated measures can also refer to an experimental design where an effect emerges over time, and individual responses are measured over time in order to measure this effect as it emerges.

Counterbalancing (randomizing or reversing the order of treatments among subjects) is often used in within-subjects designs to ensure that the order of treatment application doesn’t influence the results of the experiment.
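As a small, illustrative sketch of counterbalancing, the code below rotates the treatment order from one participant to the next (a simple Latin-square-style rotation) so that each treatment appears in each position equally often. The treatment names and participant count are placeholders:

```python
from collections import deque

treatments = ["no phone use", "low phone use", "high phone use"]
n_participants = 6

def counterbalanced_orders(treatments, n_participants):
    """Rotate the treatment sequence so every treatment occupies every
    position equally often across participants."""
    order = deque(treatments)
    schedules = []
    for _ in range(n_participants):
        schedules.append(list(order))
        order.rotate(-1)  # shift the sequence by one position for the next participant
    return schedules

for participant, schedule in enumerate(counterbalanced_orders(treatments, n_participants), start=1):
    print(f"participant {participant}: {schedule}")
```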


Finally, you need to decide how you’ll collect data on your dependent variable outcomes. You should aim for reliable and valid measurements that minimize research bias or error.

Some variables, like temperature, can be objectively measured with scientific instruments. Others may need to be operationalized to turn them into measurable observations.

  • Ask participants to record what time they go to sleep and get up each day.
  • Ask participants to wear a sleep tracker.

How precisely you measure your dependent variable also affects the kinds of statistical analysis you can use on your data.

Experiments are always context-dependent, and a good experimental design will take into account all of the unique considerations of your study system to produce information that is both valid and relevant to your research question.

If you want to know more about statistics , methodology , or research bias , make sure to check out some of our other articles with explanations and examples.

  • Student’s  t -distribution
  • Normal distribution
  • Null and Alternative Hypotheses
  • Chi square tests
  • Confidence interval
  • Cluster sampling
  • Stratified sampling
  • Data cleansing
  • Reproducibility vs Replicability
  • Peer review
  • Likert scale

Research bias

  • Implicit bias
  • Framing effect
  • Cognitive bias
  • Placebo effect
  • Hawthorne effect
  • Hindsight bias
  • Affect heuristic

Experimental design means planning a set of procedures to investigate a relationship between variables . To design a controlled experiment, you need:

  • A testable hypothesis
  • At least one independent variable that can be precisely manipulated
  • At least one dependent variable that can be precisely measured

When designing the experiment, you decide:

  • How you will manipulate the variable(s)
  • How you will control for any potential confounding variables
  • How many subjects or samples will be included in the study
  • How subjects will be assigned to treatment levels

Experimental design is essential to the internal and external validity of your experiment.

The key difference between observational studies and experimental designs is that a well-done observational study does not influence the responses of participants, while experiments do have some sort of treatment condition applied to at least some participants by random assignment .

A confounding variable , also called a confounder or confounding factor, is a third variable in a study examining a potential cause-and-effect relationship.

A confounding variable is related to both the supposed cause and the supposed effect of the study. It can be difficult to separate the true effect of the independent variable from the effect of the confounding variable.

In your research design , it’s important to identify potential confounding variables and plan how you will reduce their impact.

In a between-subjects design , every participant experiences only one condition, and researchers assess group differences between participants in various conditions.

In a within-subjects design , each participant experiences all conditions, and researchers test the same participants repeatedly for differences between conditions.

The word “between” means that you’re comparing different conditions between groups, while the word “within” means you’re comparing different conditions within the same group.

An experimental group, also known as a treatment group, receives the treatment whose effect researchers wish to study, whereas a control group does not. They should be identical in all other ways.

Cite this Scribbr article


Bevans, R. (2023, June 21). Guide to Experimental Design | Overview, 5 steps & Examples. Scribbr. Retrieved February 12, 2024, from https://www.scribbr.com/methodology/experimental-design/


Research-Methodology

Research Design

As with the research approach, different textbooks place different meanings on research design. Some authors treat research design as the choice between qualitative and quantitative research methods. Others argue that research design refers to the choice of specific methods of data collection and analysis. Research design is also described as a master plan for conducting a research project, and this appears to be the most accurate explanation of the term.

 In your dissertation you can define research design as a general plan about what you will do to answer the research question. [1] It is a framework for choosing specific methods of data collection and data analysis.

Research design can be divided into two groups: exploratory and conclusive. Exploratory research, as its name suggests, merely aims to explore specific aspects of the research area and does not aim to provide final and conclusive answers to research questions. The researcher may even change the direction of the study to a certain extent, though not fundamentally, in response to new evidence gained during the research process.

Conclusive research, on the contrary, generates findings that can be practically useful for decision-making. Table 1 below illustrates the main differences between exploratory and conclusive research in relation to important components of a dissertation.

Table 1 Major differences between exploratory and conclusive research design [2]

The following can be mentioned as examples with exploratory design:

  • A critical analysis of argument of mandatory CSR for UK private sector organisations
  • A study into contradictions between CSR program and initiatives and business practices: a case study of Philip Morris USA
  • An investigation into the ways of customer relationship management in mobile marketing environment

The studies listed above do not aim to generate final and conclusive answers to their research questions; they merely aim to explore their respective research areas.

Conclusive research  can be divided into two categories:  descriptive  and  causal . Descriptive research design, as the name suggests, describes specific elements, causes, or phenomena in the research area.

Table 2 Examples for descriptive research design

Causal research design , on the other hand, is conducted to study cause-and-effect relationships.  Table 3 below illustrates some examples for studies with causal research design.

Table 3 Examples for studies with causal design

My e-book,  The Ultimate Guide to Writing a Dissertation in Business Studies: a step by step assistance  contains discussions of theory and application of research designs. The e-book also explains all stages of the  research process  starting from the  selection of the research area  to writing personal reflection. Important elements of dissertations such as  research philosophy ,  research approach ,  methods of data collection ,  data analysis  and  sampling  are explained in this e-book in simple words.

John Dudovskiy


[1] Saunders, M., Lewis, P. & Thornhill, A. (2012) “Research Methods for Business Students”, 6th edition, Pearson Education Limited

[2] Source: Pride and Ferrell (2007)


Published May 11th, 2020

10 Design Tips For Creating Research Reports People Want to Read

Visme's Chloe West shares her design tips for creating killer research reports that give the insights the best chance of being actioned

People work hard on generating data that can help their business, but all too often that data isn’t shared in a way that means others can act on the insights.

To help your report really stand out and ensure your information is as comprehensive, digestible, and actionable as possible, you want to put a lot of emphasis on your research report design.

Here are some tips on creating engaging research reports that easily convey your point and help your colleagues take action.

1. Play with research report formats

While everyone is used to receiving PDFs, that doesn’t have to be the only way you share your data and other information.

Instead, play with other research report formats, like a presentation, infographic , or even a live data report with real-time displays .

The best way to decide which format to use is to think about the amount of content you have to put into your report, and the best way to present those different pieces of information in the context of your business. You should also gather feedback from those you’re creating the report for on their preferred methods of receiving information. If they want a quick update, a simple one-page report might be your best bet.

2. Get readers on board from the first page

From the very first page, slide, content block, or whatever starts off your report, you want to reel your audience in.

The first thing they’re likely going to see is a report cover page , so you want to make a great first impression. Your colors, fonts, and images matter.

Titles and subheadings help set expectations, so make sure that they’re 100% accurate and give the reader an indication of the value they will get from the report. Create a clear and concise title in a large font and a descriptive subtitle directly below it to convey your report’s contents.

You’ll also want to include visuals that grab your reader’s eye and get them intrigued to read more. This can be stock photos, graphic illustrations or even animated icons. Just make sure they’re relevant and, especially if your report is around something sensitive, well thought out.

Take a look at this report’s cover page as an example.


The large font stands out as the title, with the subtitle underneath explaining what the reader can expect. The color scheme is warm and welcoming, and the background photo illustrates the report’s topic.

Note: See ‘9. Brand your report’ below.

3. Stay away from walls of text

No one wants to read a document or presentation that’s full of intimidating, dull-looking walls of text. When creating a research report with design in mind, it’s important to break up text into paragraphs, callouts, visuals, and other design elements to help make your content easy to read.

Take a look at this report page as an example.


By using various fonts and font sizes, as well as background colors to create accentuated callouts, this report page content is incredibly easy to digest.

Make sure to keep your report content concise and use visuals whenever you can to convey your point instead of another paragraph of text, especially when writing about something complex.

4. Call out important bits of information

We mentioned callouts in the last point, but we want to call them out a little bit more. Callouts are blurbs that help to accentuate important points within your report.

You want to use your report design to put emphasis on the most important bits of information so your reader has the option to easily skim and grasp the must-know facts.

The bottom elements of this report page below are great examples of this.


Although users could look to the bar graphs and determine which products are highest and lowest performing, the callout at the bottom immediately lets the reader know at a glance.

5. Visualize your data

If you have any important numbers to share within your research report, use data visualization to do so. Charts, graphs, pictograms, and other tools are great ways to showcase numbers at a glance.

It’s important to remember, though, that you want your charts to be clear and concise rather than flashy. Data visualization is supposed to highlight your report’s data, not to distract from it.

When referring to something like social media performance, data visualization is the perfect way to demonstrate success.

Rather than trying to explain performance in a paragraph filled with numbers, this example is using charts to demonstrate how well or poorly the ads did for a quick, at-a-glance insight.


Make sure you use the best chart or graph for your data so that your information is clearly conveyed, and ensure that you label data correctly so all the important information is easily available for the reader.
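For instance, a labelled bar chart covers most of the points above: a clear chart type, correct labels, and numbers readers can take in at a glance. The sketch below uses matplotlib with invented product figures; the names and values are placeholders, not Brandwatch data:

```python
import matplotlib.pyplot as plt

# Placeholder figures for illustration only
products = ["Product A", "Product B", "Product C", "Product D"]
conversions = [420, 310, 95, 530]

fig, ax = plt.subplots()
bars = ax.bar(products, conversions, color="#4c72b0")
ax.set_ylabel("Conversions")
ax.set_title("Conversions by product (placeholder data)")
ax.bar_label(bars)  # print each value on its bar so readers don't have to estimate from the axis

plt.tight_layout()
plt.savefig("report_chart.png", dpi=150)
```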

6. Pay attention to your margins

Give yourself a good bit of space around the edges of your report design. Taking up the full amount of page space can make your report look cluttered, so it’s important to allow yourself some negative space.

This report page is a great example of both margins around your page edges and white space between your elements so your report feels clean and minimalistic.


Ensuring there are margins and space between your design elements also gives your reader the perfect opportunity to take notes if the report is printed off or accompanying a presentation.

7. Add imagery

You can convey your information with more than just text and charts. Incorporating images, icons, and illustrations into your research report is another great way to help your reader navigate the pages.

If you create a digital report, you can even embed video clips or upload GIFs or other animated elements to your document.

Take a look at this report page to see how visuals can help break up content.


The stock photos interlaced with the text areas help to provide a visual balance in the report, making the content more visually appealing and easy to read. The trick is not to over-do it or use confusing or distracting images. You want readers to focus on the information.

8. Make your report interactive

Whether you’re creating an online presentation or a digital document, interactivity can make it even more engaging.

Find a way to make your report interactive by adding audio, embedding videos or past live streams, or enabling the reader to click on charts or prompt animations.

At the same time, be sure to keep in mind that time-short readers just want the insight directly from your report, so don’t add things that aren’t necessary. Your report shouldn’t be flashy, just engaging and easy to read.

A great example of interactivity that can add to your report is having your chart legends appear as your readers hover over each different bar or line in your graph.

You can easily share your interactive, digital reports via an online link or by embedding them on a landing page on your website.

There are some instances where you might want to encourage the reader to seek out their own insights through interactivity. The Pudding creates amazing data visualizations that encourage user interactivity. But, generally speaking, if you’re delivering business insights or corporate training information, you should keep things as to-the-point as possible.

9. Brand your report

Incorporate your brand colors and fonts into your report so it’s obvious that it belongs to your company.

Your company likely already has branding guidelines that help you understand how your colors and fonts should be used in documents or presentations. You might also have ready-made templates.

Even if it’s just an internal report, using your brand colors, fonts, and logo can give your report a more professional feel which can help add weight to the enclosed insights.

10. Create a report template

Once you finalize your design, save it as a template to reuse over and over again. Next time you need to create a similar report, you can simply copy and paste your data and update the information.

This’ll help make your job easier each time you have to create a report. Even with research reports that are directed at a different audience or needed for a different person, having a template as a starting point will be a game changer. Working with your design team to create these templates can ensure the branding is consistent.

Insights aren’t valuable unless they’re acted upon

Ensuring your business insights are seen throughout the organization is important – if no one reads them, they won’t be acted upon.

By improving the design of your report to allow for maximum engagement and readability, you’re a step closer to creating a bigger impact with your research and analysis.




Research Method


Descriptive Research Design – Types, Methods and Examples


Descriptive Research Design

Definition:

Descriptive research design is a type of research methodology that aims to describe or document the characteristics, behaviors, attitudes, opinions, or perceptions of a group or population being studied.

Descriptive research design does not attempt to establish cause-and-effect relationships between variables or make predictions about future outcomes. Instead, it focuses on providing a detailed and accurate representation of the data collected, which can be useful for generating hypotheses, exploring trends, and identifying patterns in the data.

Types of Descriptive Research Design

Types of Descriptive Research Design are as follows:

Cross-sectional Study

This involves collecting data at a single point in time from a sample or population to describe their characteristics or behaviors. For example, a researcher may conduct a cross-sectional study to investigate the prevalence of certain health conditions among a population, or to describe the attitudes and beliefs of a particular group.

Longitudinal Study

This involves collecting data over an extended period of time, often through repeated observations or surveys of the same group or population. Longitudinal studies can be used to track changes in attitudes, behaviors, or outcomes over time, or to investigate the effects of interventions or treatments.

Case Study

This involves an in-depth examination of a single individual, group, or situation to gain a detailed understanding of its characteristics or dynamics. Case studies are often used in psychology, sociology, and business to explore complex phenomena or to generate hypotheses for further research.

Survey Research

This involves collecting data from a sample or population through standardized questionnaires or interviews. Surveys can be used to describe attitudes, opinions, behaviors, or demographic characteristics of a group, and can be conducted in person, by phone, or online.

Observational Research

This involves observing and documenting the behavior or interactions of individuals or groups in a natural or controlled setting. Observational studies can be used to describe social, cultural, or environmental phenomena, or to investigate the effects of interventions or treatments.

Correlational Research

This involves examining the relationships between two or more variables to describe their patterns or associations. Correlational studies can be used to identify potential causal relationships or to explore the strength and direction of relationships between variables.

Data Analysis Methods

Descriptive research design data analysis methods depend on the type of data collected and the research question being addressed. Here are some common methods of data analysis for descriptive research:

Descriptive Statistics

This method involves analyzing data to summarize and describe the key features of a sample or population. Descriptive statistics can include measures of central tendency (e.g., mean, median, mode) and measures of variability (e.g., range, standard deviation).
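As a minimal sketch, Python's standard library can produce these summaries directly; the response values below are invented for illustration:

```python
import statistics

# Placeholder survey responses, e.g. self-reported daily screen time in hours
responses = [2.5, 3.0, 3.0, 4.5, 5.0, 2.0, 3.5, 6.0, 3.0, 4.0]

print("mean:", statistics.mean(responses))
print("median:", statistics.median(responses))
print("mode:", statistics.mode(responses))
print("range:", max(responses) - min(responses))
print("standard deviation:", statistics.stdev(responses))  # sample standard deviation
```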

Cross-tabulation

This method involves analyzing data by creating a table that shows the frequency of two or more variables together. Cross-tabulation can help identify patterns or relationships between variables.
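A small illustrative sketch of a cross-tabulation using pandas, with invented respondent data; pandas.crosstab counts how often each combination of the two variables occurs:

```python
import pandas as pd

# Invented respondent data for illustration only
df = pd.DataFrame({
    "age_group": ["under 30", "under 30", "30-49", "30-49", "50+", "50+", "under 30", "50+"],
    "visits_per_month": ["weekly", "monthly", "weekly", "weekly", "monthly", "rarely", "weekly", "monthly"],
})

# Rows are age groups, columns are visit frequencies, cells are counts
table = pd.crosstab(df["age_group"], df["visits_per_month"], margins=True)
print(table)
```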

Content Analysis

This method involves analyzing qualitative data (e.g., text, images, audio) to identify themes, patterns, or trends. Content analysis can be used to describe the characteristics of a sample or population, or to identify factors that influence attitudes or behaviors.

Qualitative Coding

This method involves analyzing qualitative data by assigning codes to segments of data based on their meaning or content. Qualitative coding can be used to identify common themes, patterns, or categories within the data.
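Once segments have been coded, tallying the codes is straightforward. The sketch below is illustrative only; the excerpts and code labels are invented:

```python
from collections import Counter

# Each segment of interview text has been assigned one or more codes by the researcher
coded_segments = [
    {"text": "I check my phone first thing in the morning", "codes": ["habit", "morning routine"]},
    {"text": "I feel anxious when the battery is low", "codes": ["anxiety"]},
    {"text": "I scroll before falling asleep", "codes": ["habit", "sleep"]},
    {"text": "Notifications interrupt my homework", "codes": ["distraction"]},
]

# Count how often each code appears across all segments
code_counts = Counter(code for segment in coded_segments for code in segment["codes"])
for code, count in code_counts.most_common():
    print(f"{code}: {count}")
```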

Visualization

This method involves creating graphs or charts to represent data visually. Visualization can help identify patterns or relationships between variables and make it easier to communicate findings to others.
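A minimal sketch with invented test scores, producing the kind of histogram and box plot mentioned in the examples later in this article:

```python
import matplotlib.pyplot as plt

# Invented test scores for illustration only
scores = [62, 71, 75, 75, 78, 80, 82, 84, 85, 85, 88, 90, 91, 94, 97]

fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(8, 3))
ax1.hist(scores, bins=5, edgecolor="black")
ax1.set_title("Distribution of scores")
ax1.set_xlabel("Score")
ax1.set_ylabel("Number of students")

ax2.boxplot(scores)
ax2.set_title("Box plot of scores")

plt.tight_layout()
plt.savefig("score_distribution.png", dpi=150)
```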

Comparative Analysis

This method involves comparing data across different groups or time periods to identify similarities and differences. Comparative analysis can help describe changes in attitudes or behaviors over time or differences between subgroups within a population.

Applications of Descriptive Research Design

Descriptive research design has numerous applications in various fields. Some of the common applications of descriptive research design are:

  • Market research: Descriptive research design is widely used in market research to understand consumer preferences, behavior, and attitudes. This helps companies to develop new products and services, improve marketing strategies, and increase customer satisfaction.
  • Health research: Descriptive research design is used in health research to describe the prevalence and distribution of a disease or health condition in a population. This helps healthcare providers to develop prevention and treatment strategies.
  • Educational research: Descriptive research design is used in educational research to describe the performance of students, schools, or educational programs. This helps educators to improve teaching methods and develop effective educational programs.
  • Social science research: Descriptive research design is used in social science research to describe social phenomena such as cultural norms, values, and beliefs. This helps researchers to understand social behavior and develop effective policies.
  • Public opinion research: Descriptive research design is used in public opinion research to understand the opinions and attitudes of the general public on various issues. This helps policymakers to develop effective policies that are aligned with public opinion.
  • Environmental research: Descriptive research design is used in environmental research to describe the environmental conditions of a particular region or ecosystem. This helps policymakers and environmentalists to develop effective conservation and preservation strategies.

Descriptive Research Design Examples

Here are some real-time examples of descriptive research designs:

  • A restaurant chain wants to understand the demographics and attitudes of its customers. They conduct a survey asking customers about their age, gender, income, frequency of visits, favorite menu items, and overall satisfaction. The survey data is analyzed using descriptive statistics and cross-tabulation to describe the characteristics of their customer base.
  • A medical researcher wants to describe the prevalence and risk factors of a particular disease in a population. They conduct a cross-sectional study in which they collect data from a sample of individuals using a standardized questionnaire. The data is analyzed using descriptive statistics and cross-tabulation to identify patterns in the prevalence and risk factors of the disease.
  • An education researcher wants to describe the learning outcomes of students in a particular school district. They collect test scores from a representative sample of students in the district and use descriptive statistics to calculate the mean, median, and standard deviation of the scores. They also create visualizations such as histograms and box plots to show the distribution of scores.
  • A marketing team wants to understand the attitudes and behaviors of consumers towards a new product. They conduct a series of focus groups and use qualitative coding to identify common themes and patterns in the data. They also create visualizations such as word clouds to show the most frequently mentioned topics.
  • An environmental scientist wants to describe the biodiversity of a particular ecosystem. They conduct an observational study in which they collect data on the species and abundance of plants and animals in the ecosystem. The data is analyzed using descriptive statistics to describe the diversity and richness of the ecosystem.

How to Conduct Descriptive Research Design

To conduct a descriptive research design, you can follow these general steps:

  • Define your research question: Clearly define the research question or problem that you want to address. Your research question should be specific and focused to guide your data collection and analysis.
  • Choose your research method: Select the most appropriate research method for your research question. As discussed earlier, common research methods for descriptive research include surveys, case studies, observational studies, cross-sectional studies, and longitudinal studies.
  • Design your study: Plan the details of your study, including the sampling strategy, data collection methods, and data analysis plan. Determine the sample size and sampling method, decide on the data collection tools (such as questionnaires, interviews, or observations), and outline your data analysis plan.
  • Collect data: Collect data from your sample or population using the data collection tools you have chosen. Ensure that you follow ethical guidelines for research and obtain informed consent from participants.
  • Analyze data: Use appropriate statistical or qualitative analysis methods to analyze your data. As discussed earlier, common data analysis methods for descriptive research include descriptive statistics, cross-tabulation, content analysis, qualitative coding, visualization, and comparative analysis.
  • Interpret results: Interpret your findings in light of your research question and objectives. Identify patterns, trends, and relationships in the data, and describe the characteristics of your sample or population.
  • Draw conclusions and report results: Draw conclusions based on your analysis and interpretation of the data. Report your results in a clear and concise manner, using appropriate tables, graphs, or figures to present your findings. Ensure that your report follows accepted research standards and guidelines.

When to Use Descriptive Research Design

Descriptive research design is used in situations where the researcher wants to describe a population or phenomenon in detail. It is used to gather information about the current status or condition of a group or phenomenon without making any causal inferences. Descriptive research design is useful in the following situations:

  • Exploratory research: Descriptive research design is often used in exploratory research to gain an initial understanding of a phenomenon or population.
  • Identifying trends: Descriptive research design can be used to identify trends or patterns in a population, such as changes in consumer behavior or attitudes over time.
  • Market research: Descriptive research design is commonly used in market research to understand consumer preferences, behavior, and attitudes.
  • Health research: Descriptive research design is useful in health research to describe the prevalence and distribution of a disease or health condition in a population.
  • Social science research: Descriptive research design is used in social science research to describe social phenomena such as cultural norms, values, and beliefs.
  • Educational research: Descriptive research design is used in educational research to describe the performance of students, schools, or educational programs.

Purpose of Descriptive Research Design

The main purpose of descriptive research design is to describe and measure the characteristics of a population or phenomenon in a systematic and objective manner. It involves collecting data that describe the current status or condition of the population or phenomenon of interest, without manipulating or altering any variables.

The purpose of descriptive research design can be summarized as follows:

  • To provide an accurate description of a population or phenomenon: Descriptive research design aims to provide a comprehensive and accurate description of a population or phenomenon of interest. This can help researchers to develop a better understanding of the characteristics of the population or phenomenon.
  • To identify trends and patterns: Descriptive research design can help researchers to identify trends and patterns in the data, such as changes in behavior or attitudes over time. This can be useful for making predictions and developing strategies.
  • To generate hypotheses: Descriptive research design can be used to generate hypotheses or research questions that can be tested in future studies. For example, if a descriptive study finds a correlation between two variables, this could lead to the development of a hypothesis about the causal relationship between the variables.
  • To establish a baseline: Descriptive research design can establish a baseline or starting point for future research. This can be useful for comparing data from different time periods or populations.

Characteristics of Descriptive Research Design

Descriptive research design has several key characteristics that distinguish it from other research designs. Some of the main characteristics of descriptive research design are:

  • Objective : Descriptive research design is objective in nature, which means that it focuses on collecting factual and accurate data without any personal bias. The researcher aims to report the data objectively without any personal interpretation.
  • Non-experimental: Descriptive research design is non-experimental, which means that the researcher does not manipulate any variables. The researcher simply observes and records the behavior or characteristics of the population or phenomenon of interest.
  • Quantitative : Descriptive research design is quantitative in nature, which means that it involves collecting numerical data that can be analyzed using statistical techniques. This helps to provide a more precise and accurate description of the population or phenomenon.
  • Cross-sectional: Descriptive research design is often cross-sectional, which means that the data is collected at a single point in time. This can be useful for understanding the current state of the population or phenomenon, but it may not provide information about changes over time.
  • Large sample size: Descriptive research design typically involves a large sample size, which helps to ensure that the data is representative of the population of interest. A large sample size also helps to increase the reliability and validity of the data.
  • Systematic and structured: Descriptive research design involves a systematic and structured approach to data collection, which helps to ensure that the data is accurate and reliable. This involves using standardized procedures for data collection, such as surveys, questionnaires, or observation checklists.

Advantages of Descriptive Research Design

Descriptive research design has several advantages that make it a popular choice for researchers. Some of the main advantages of descriptive research design are:

  • Provides an accurate description: Descriptive research design is focused on accurately describing the characteristics of a population or phenomenon. This can help researchers to develop a better understanding of the subject of interest.
  • Easy to conduct: Descriptive research design is relatively easy to conduct and requires minimal resources compared to other research designs. It can be conducted quickly and efficiently, and data can be collected through surveys, questionnaires, or observations.
  • Useful for generating hypotheses: Descriptive research design can be used to generate hypotheses or research questions that can be tested in future studies. For example, if a descriptive study finds a correlation between two variables, this could lead to the development of a hypothesis about the causal relationship between the variables.
  • Large sample size : Descriptive research design typically involves a large sample size, which helps to ensure that the data is representative of the population of interest. A large sample size also helps to increase the reliability and validity of the data.
  • Can be used to monitor changes : Descriptive research design can be used to monitor changes over time in a population or phenomenon. This can be useful for identifying trends and patterns, and for making predictions about future behavior or attitudes.
  • Can be used in a variety of fields : Descriptive research design can be used in a variety of fields, including social sciences, healthcare, business, and education.

Limitation of Descriptive Research Design

Descriptive research design also has some limitations that researchers should consider before using this design. Some of the main limitations of descriptive research design are:

  • Cannot establish cause and effect: Descriptive research design cannot establish cause and effect relationships between variables. It only provides a description of the characteristics of the population or phenomenon of interest.
  • Limited generalizability: The results of a descriptive study may not be generalizable to other populations or situations. This is because descriptive research design often involves a specific sample or situation, which may not be representative of the broader population.
  • Potential for bias: Descriptive research design can be subject to bias, particularly if the researcher is not objective in their data collection or interpretation. This can lead to inaccurate or incomplete descriptions of the population or phenomenon of interest.
  • Limited depth: Descriptive research design may provide a superficial description of the population or phenomenon of interest. It does not delve into the underlying causes or mechanisms behind the observed behavior or characteristics.
  • Limited utility for theory development: Descriptive research design may not be useful for developing theories about the relationship between variables. It only provides a description of the variables themselves.
  • Relies on self-report data: Descriptive research design often relies on self-report data, such as surveys or questionnaires. This type of data may be subject to biases, such as social desirability bias or recall bias.

About the author


Muhammad Hassan

Researcher, Academic Writer, Web developer


Circular economy: definition, importance and benefits

The circular economy: find out what it means, how it benefits you, the environment and our economy.


The European Union produces more than 2.2 billion tonnes of waste every year . It is currently updating its legislation on waste management to promote a shift to a more sustainable model known as the circular economy.

But what exactly does the circular economy mean? And what would be the benefits?

What is the circular economy?

The circular economy is a model of production and consumption , which involves sharing, leasing, reusing, repairing, refurbishing and recycling existing materials and products as long as possible. In this way, the life cycle of products is extended.

In practice, it implies reducing waste to a minimum. When a product reaches the end of its life, its materials are kept within the economy wherever possible thanks to recycling. These can be productively used again and again, thereby creating further value .

This is a departure from the traditional, linear economic model, which is based on a take-make-consume-throw away pattern. This model relies on large quantities of cheap, easily accessible materials and energy.

Also part of this model is planned obsolescence , when a product has been designed to have a limited lifespan to encourage consumers to buy it again. The European Parliament has called for measures to tackle this practice.

Infographic explaining the circular economy model

Benefits: why do we need to switch to a circular economy?

To protect the environment.

Reusing and recycling products would slow down the use of natural resources, reduce landscape and habitat disruption and help to limit biodiversity loss .

Another benefit from the circular economy is a reduction in total annual greenhouse gas emissions . According to the European Environment Agency, industrial processes and product use are responsible for 9.10% of greenhouse gas emissions in the EU, while the management of waste accounts for 3.32%.

Creating more efficient and sustainable products from the start would help to reduce energy and resource consumption, as it is estimated that more than 80% of a product's environmental impact is determined during the design phase.

A shift to more reliable products that can be reused, upgraded and repaired would reduce the amount of waste. Packaging is a growing issue: on average, each European generates nearly 180 kilos of packaging waste per year. The aim is to tackle excessive packaging and improve its design to promote reuse and recycling.

Reduce raw material dependence

The world's population is growing and with it the demand for raw materials. However, the supply of crucial raw materials is limited.

Finite supplies also mean that some EU countries are dependent on other countries for their raw materials. According to Eurostat, the EU imports about half of the raw materials it consumes.

The total value of trade (import plus exports) of raw materials between the EU and the rest of the world has almost tripled since 2002, with exports growing faster than imports. Regardless, the EU still imports more than it exports. In 2021, this resulted in a trade deficit of €35.5 billion.

Recycling raw materials mitigates the risks associated with supply, such as price volatility, availability and import dependency.

This especially applies to critical raw materials , needed for the production of technologies that are crucial for achieving climate goals, such as batteries and electric engines.

Create jobs and save consumers money

Moving towards a more circular economy could increase competitiveness, stimulate innovation, boost economic growth and create jobs ( 700,000 jobs in the EU alone by 2030 ).

Redesigning materials and products for circular use would also boost innovation across different sectors of the economy.

Consumers will be provided with more durable and innovative products that will increase the quality of life and save them money in the long term.

What is the EU doing to become a circular economy?

In March 2020, the European Commission presented the circular economy action plan, which aims to promote more sustainable product design, reduce waste and empower consumers, for example by creating a right to repair. There is a focus on resource-intensive sectors, such as electronics and ICT, plastics, textiles and construction.

In February 2021, the Parliament adopted a resolution on the new circular economy action plan demanding additional measures to achieve a carbon-neutral, environmentally sustainable, toxic-free and fully circular economy by 2050, including tighter recycling rules and binding targets for materials use and consumption by 2030. In March 2022, the Commission released the first package of measures to speed up the transition towards a circular economy, as part of the circular economy action plan. The proposals include boosting sustainable products, empowering consumers for the green transition, reviewing construction product regulation, and creating a strategy on sustainable textiles.

In November 2022, the Commission proposed new EU-wide rules on packaging . It aims to reduce packaging waste and improve packaging design, with for example clear labelling to promote reuse and recycling; and calls for a transition to bio-based, biodegradable and compostable plastics.



Leading Museums Remove Native Displays Amid New Federal Rules

The American Museum of Natural History is closing two major halls as museums around the nation respond to updated policies from the Biden administration.


By Julia Jacobs and Zachary Small

The American Museum of Natural History will close two major halls exhibiting Native American objects, its leaders said on Friday, in a dramatic response to new federal regulations that require museums to obtain consent from tribes before displaying or performing research on cultural items.

“The halls we are closing are artifacts of an era when museums such as ours did not respect the values, perspectives and indeed shared humanity of Indigenous peoples,” Sean Decatur, the museum’s president, wrote in a letter to the museum’s staff on Friday morning. “Actions that may feel sudden to some may seem long overdue to others.”

The museum is closing galleries dedicated to the Eastern Woodlands and the Great Plains this weekend, and covering a number of other display cases featuring Native American cultural items as it goes through its enormous collection to make sure it is in compliance with the new federal rules, which took effect this month.

Museums around the country have been covering up displays as curators scramble to determine whether they can be shown under the new regulations. The Field Museum in Chicago covered some display cases , the Peabody Museum of Archaeology and Ethnology at Harvard University said it would remove all funerary belongings from exhibition and the Cleveland Museum of Art has covered up some cases. And the Metropolitan Museum of Art in New York said Friday evening that it had removed roughly 20 items from its musical instruments galleries.

But the action by the American Museum of Natural History in New York, which draws 4.5 million visitors a year, making it one of the most visited museums in the world, sends a powerful message to the field. The museum’s anthropology department is one of the oldest and most prestigious in the United States, known for doing pioneering work under a long line of curators including Franz Boas and Margaret Mead. The closures will leave nearly 10,000 square feet of exhibition space off-limits to visitors; the museum said it could not provide an exact timeline for when the reconsidered exhibits would reopen.

“Some objects may never come back on display as a result of the consultation process,” Decatur said in an interview. “But we are looking to create smaller-scale programs throughout the museum that can explain what kind of process is underway.”

The changes are the result of a concerted effort by the Biden administration to speed up the repatriation of Native American remains, funerary objects and other sacred items. The process started in 1990 with the passage of the Native American Graves Protection and Repatriation Act , or NAGPRA, which established protocols for museums and other institutions to return human remains, funerary objects and other holdings to tribes. But as those efforts have dragged on for decades, the law was criticized by tribal representatives as being too slow and too susceptible to institutional resistance.

This month, new federal regulations went into effect that were designed to hasten returns, giving institutions five years to prepare all human remains and related funerary objects for repatriation and giving more authority to tribes throughout the process.

“We’re finally being heard — and it’s not a fight, it’s a conversation,” said Myra Masiel-Zamora, an archaeologist and curator with the Pechanga Band of Indians.

Even in the two weeks since the new regulations took effect, she said, she has felt the tenor of talks shift. In the past, institutions often viewed Native oral histories as less persuasive than academic studies when determining which modern-day tribes to repatriate objects to, she said. But the new regulations require institutions to “defer to the Native American traditional knowledge of lineal descendants, Indian Tribes and Native Hawaiian organizations.”

“We can say, ‘This needs to come home,’ and I’m hoping there will not be pushback,” Masiel-Zamora said.

Museum leaders have been preparing for the new regulations for months, consulting lawyers and curators and holding lengthy meetings to discuss what might need to be covered up or removed. Many institutions are planning to hire staff to comply with the new rules, which can involve extensive consultations with tribal representatives.

The result has been a major shift in practices when it comes to Native American exhibitions at some of the country’s leading museums — one that will be noticeable to visitors.

At the American Museum of Natural History, segments of the collection once used to teach students about the Iroquois, Mohegans, Cheyenne, Arapaho and other groups will be temporarily inaccessible. That includes large objects, like the birchbark canoe of Menominee origin in the Hall of Eastern Woodlands, and smaller ones, including darts that date as far back as 10,000 B.C. and a Hopi Katsina doll from what is now Arizona. Field trips for students to the Hall of Eastern Woodlands are being rethought now that they will not have access to those galleries.

“What might seem out of alignment for some people is because of a notion that museums affix in amber descriptions of the world,” Decatur said. “But museums are at their best when they reflect changing ideas.”

Exhibiting Native American human remains is generally prohibited at museums, so the collections being reassessed include sacred objects, burial belongings and other items of cultural patrimony. As the new regulations have been discussed and debated over the past year or so, some professional organizations, such as the Society for American Archaeology, have expressed concern that the rules were reaching too far into museums’ collection management practices. But since the regulations went into effect on Jan. 12, there has been little public pushback from museums.

Much of the holdings of human remains and Native cultural items were collected through practices that are now considered antiquated and even odious, including through donations by grave robbers and archaeological digs that cleared out Indigenous burial grounds.

“This is human rights work, and we need to think about it as that and not as science,” said Candace Sall, the director of the museum of anthropology at the University of Missouri, which is still working to repatriate the remains of more than 2,400 Native American individuals. Sall said she added five staff members to work on repatriation in anticipation of the regulations and hopes to add more.

Criticism of the pace of repatriation had put institutions such as the American Museum of Natural History under public pressure. In more than 30 years, the museum has repatriated the remains of approximately 1,000 individuals to tribal groups; it still holds the remains of about 2,200 Native Americans and thousands of funerary objects. (Last year, the museum said it would overhaul practices that extended to its larger collection of some 12,000 skeletons by removing human bones from public display and improving the storage facilities where they are kept.)

A top priority of the new regulations, which are administered by the Interior Department, is to finish the work of repatriating the Native human remains in institutional holdings, which amount to more than 96,000 individuals, according to federal data published in the fall.

The government has set a deadline, giving institutions until 2029 to prepare human remains and their burial belongings for repatriation.

In many cases, human remains and cultural objects have little information attached to them, which has slowed repatriation in the past, especially for institutions that have sought exacting anthropological and ethnographic evidence of links to a modern Native group.

Now the government is urging institutions to push forward with the information they have, in some cases relying solely on geographical information — such as what county the remains were discovered in.

There have been concerns among some tribal officials that the new rules will result in a deluge of requests from museums that may be beyond their capacities and could create a financial burden.

Speaking in June to a committee that reviews the implementation of the law, Scott Willard, who works on repatriation issues for the Miami Tribe of Oklahoma, expressed concern that the rhetoric regarding the new regulations sometimes made it sound as if Native ancestors were “throwaway items.”

“This garage sale mentality of ‘give it all away right now’ is very offensive to us,” Willard said.

The officials who drew up the new regulations have said that institutions can get extensions to their deadlines as long as the tribes that they are consulting with agree, emphasizing the need to hold institutions accountable without overburdening tribes. If museums are found to have violated the regulations, they could be subject to fines.

Bryan Newland, the assistant secretary for Indian Affairs and a former tribal president of the Bay Mills Indian Community, said the rules were drawn up in consultation with tribal representatives, who wanted their ancestors to recover dignity in death.

“Repatriation isn’t just a rule on paper,” Newland said, “but it brings real meaningful healing and closure to people.”

Julia Jacobs is a general assignment reporter who often covers legal issues in arts and culture.

Zachary Small is a Times reporter writing about the art world’s relationship to money, politics and technology.

Seismic Performance of Isolated Bridges Under Beyond Design Basis Shaking, PEER Report 2024-02

Seismically isolated highway bridges are expected to provide limited service under safety evaluation-level ground shaking, with minimal to moderate damage. Their behavior under shaking beyond design considerations, corresponding to a large-return-period seismic hazard, is not well understood, and such shaking could induce significant damage. In these rare events, the seismic isolation system can be subjected to displacement demands beyond its design capacity, resulting in failure of the bearings, exceedance of the clearance and pounding against the abutment backwalls, or damage propagating to other primary structural components. To better understand the seismic performance of simple highway bridges subjected to earthquakes beyond design considerations, this study simulates the response of a prototype bridge structure and examines the lateral displacement demands, the transfer of forces to the substructure, and the potential failure modes of seismically isolated bridges. Advanced modeling approaches are used to capture bearing characteristics, such as hardening at large strains, and a pounding macro-element is used to capture the effects of impact. Results show that, for beyond-design shaking, the bearings can reach their maximum shear strain capacity, significant residual deformation of the abutment can result from pounding, and the columns can experience moderate damage. The progression of damage is identified as a step toward the development of models suitable for assessing the overall seismic risk, repairability, and downtime of seismically isolated bridges.



Published on 14.2.2024 in Vol 26 (2024)

Dietary Intake Assessment Using a Novel, Generic Meal–Based Recall and a 24-Hour Recall: Comparison Study

Authors of this article:


Original Paper

  • Cathal O'Hara 1,2,3*, BSc
  • Eileen R Gibney 1,2,3*, PhD

1 University College Dublin Institute of Food and Health, Science Centre South, University College Dublin, Dublin, Ireland

2 Insight Centre for Data Analytics, University College Dublin, Belfield, Dublin, Ireland

3 School of Agriculture and Food Science, University College Dublin, Belfield, Dublin, Ireland

*all authors contributed equally

Corresponding Author:

Eileen R Gibney, PhD

University College Dublin Institute of Food and Health

Science Centre South

University College Dublin

Dublin, D04 N2E5

Phone: 353 17162819

Email: [email protected]

Background: Dietary intake assessment is an integral part of addressing suboptimal dietary intakes. Existing food-based methods are time-consuming and burdensome, requiring users to report the individual foods consumed at each meal; however, ease of use is the most important feature for individuals choosing a nutrition or diet app. Intakes of whole meals can be reported in a manner that is less burdensome than reporting individual foods, yet no study has developed a method of dietary intake assessment in which individuals report their dietary intakes as whole meals rather than individual foods.

Objective: This study aims to develop a novel, meal-based method of dietary intake assessment and test its ability to estimate nutrient intakes compared with that of a web-based, 24-hour recall (24HR).

Methods: Participants completed a web-based, generic meal–based recall. This involved, for each meal type (breakfast, light meal, main meal, snack, and beverage), choosing from a selection of meal images those that most represented their intakes during the previous day. Meal images were based on generic meals from a previous study that were representative of the actual meal intakes in Ireland. Participants also completed a web-based 24HR. Both methods were completed on the same day, 3 hours apart. In a crossover design, participants were randomized in terms of which method they completed first. Then, 2 weeks after the first dietary assessments, participants repeated the process in the reverse order. Estimates of mean daily nutrient intakes and the categorization of individuals according to nutrient-based guidelines (eg, low, adequate, and high) were compared between the 2 methods. P values of less than .05 were considered statistically significant.

Results: In total, 161 participants completed the study. For the 23 nutrient variables compared, the median percentage difference between the 2 methods was 7.6% (IQR 2.6%-13.2%), with P values ranging from <.001 to .97, and out of 23 variables, effect sizes for the differences were small for 19 (83%) variables, moderate for 2 (9%) variables, and large for 2 (9%) variables. Correlation coefficients were statistically significant ( P <.05) for 18 (78%) of the 23 variables. Statistically significant correlations ranged from 0.16 to 0.45, with median correlation of 0.32 (IQR 0.25-0.40). When participants were classified according to nutrient-based guidelines, the proportion of individuals who were classified into the same category ranged from 52.8% (85/161) to 84.5% (136/161).

Conclusions: A generic meal–based method of dietary intake assessment provides estimates of nutrient intake comparable with those provided by a web-based 24HR but with varying levels of agreement among nutrients. Further studies are required to refine and improve the generic recall across a range of nutrients. Future studies will consider user experience including the potential feasibility of incorporating image recognition of whole meals into the generic recall.

Introduction

Well-established causal relationships exist between dietary intakes and health [ 1 ]. Accurate dietary intake assessment is required to identify suboptimal intakes and devise interventions to address them [ 2 ]. Existing food-based methods of dietary intake assessment can be time-consuming and burdensome for individuals to complete [ 3 ]. In addition, not all methods provide information such as the timing of meals, different foods that are consumed in combination as part of these meals, or combinations of different meals over a day; for example, the food frequency questionnaire (FFQ) focuses on mean daily food and nutrient intakes [ 4 ].

The time and effort required for individuals to complete the existing methods of dietary intake assessment such as FFQs, 24-hour recalls (24HRs), and food diaries may limit adherence to and engagement with such methods, which are often used on web-based and mobile-based personalized nutrition platforms [ 3 ]. A survey of 2382 adults across Europe found that ease of use was the most important feature for participants when choosing a nutrition or diet app [ 5 ]. Digital versions of 24HRs and food diaries require the user to text search for a food and then select from a list of results the food that they consumed. This process is then repeated for each food in the meal and each meal in the day [ 6 ]. Intakes of whole meals can be recorded in a manner that is less burdensome than recording individual foods, providing a potentially low-burden method for dietary intake assessment [ 3 ]. For example, instead of text searching for individual food, as is required in 24HRs and food diaries, the user could be presented with images of whole meals and choose the image most similar to their meal [ 3 ]. The use of meal-based methods may also be preferred in personalized nutrition because people tend to perceive their dietary intakes in terms of the meals they have consumed rather than their daily intakes of nutrients or foods; therefore, recording dietary intakes and providing dietary advice in this manner may be more intuitive [ 7 , 8 ].

Although the number of studies examining meal patterns has increased in recent years [ 4 ], only 3 studies [ 9 - 11 ] have developed meal-based methods of dietary intake assessment rather than using existing food-based methods. Englund-Ögge et al [ 9 ] used a method in which participants reported how often they consumed various meal types (breakfast, morning snack, lunch, afternoon snack, dinner, evening snack, supper, and night meal). Wilson et al [ 10 ] used a similar approach, but instead of using meal types, they divided the day into periods and asked participants to report for each period whether they consumed nothing, a snack, a small meal, or a large meal and whether they drank nothing, alcohol, water, or something else. These approaches to meal-based dietary assessment are simple to complete and provide qualitative information about meal types and their timing. They do not, however, provide the qualitative detail necessary to identify the different combinations of foods being consumed in meals and the combinations of those meals over a day or the quantitative detail required to estimate the nutrient intakes arising from those consumptions. Murakami et al [ 11 ] developed an approach that involves participants reporting the frequency of consumption of combinations of food groups and foods at specified meal types (breakfast, morning snack, lunch, afternoon snack, dinner, and night snack), and it has been designed for use in the Japanese population. This approach allows for the identification of meal patterns and nutrient intakes but still requires individuals to report intakes at the food level. None of those studies, however, allow for the reporting of meal portion sizes or capture information from the previous 24 hours in the form of recall.

Several tools have been developed that use image recognition software for dietary intake assessment [ 12 ]. However, these methods remain food based rather than meal based. The software first segments a meal image into its constituent foods and then provides a suggested match for each food in the image. The user must then confirm whether the suggested foods are correct. For any missing or incorrect foods, the user must text search for the correct food and add it to their record [ 12 , 13 ]. No study has been identified that allows individuals to record their dietary intakes at the meal level by reporting whole meals rather than individual foods or food groups.

This study aimed to develop a novel, meal-based method of dietary intake assessment that would allow individuals to report their intakes of whole meals rather than reporting the individual foods that make up those meals and to compare this method with a web-based 24HR.

Ethical Considerations

The human research ethics committee of University College Dublin granted ethics approval to conduct this study (LS-21-64-OHara-Gibney). Participants were assigned a study number, and on completion of data collection any information linking this number to participants’ personal data was deleted, thus de-identifying participants. No financial compensation was provided for participation in this study. However, on completion of the study, all participants received a personalized nutrition advice report based on the data they provided during the 24HRs.

Recruitment

The target sample size was 160 participants, based on a previous review of comparisons between digital and paper-based 24HRs, which found a range of sample sizes from 53 to 167 [ 14 ]. There were no studies of meal-based methods of dietary assessment on which to base the sample size.

Recruitment was conducted using local radio, local newspaper, posters, social media, and word of mouth. Researchers directed potential participants to a web page containing full details of the study requirements and contact details of the researchers for further queries, if required. After reading the study information, potential participants could indicate whether they had read and understood the material and agree or disagree to proceed to the web-based screening questionnaire to determine their eligibility to participate in the study. Those who were eligible then completed a web-based consent form to provide electronic informed consent. People were eligible if they were aged >18 years, were fluent in English, had regular access to the internet, and were not current or former students of a degree in nutrition or dietetics ( Figure 1 ).


Study Design

Participants who were deemed eligible to participate and provided consent were contacted via email, which provided details about the next steps of the study and the links to the 2 web-based dietary intake assessment tools (24HR and generic meal–based recall). A crossover design was used with regard to the order in which participants completed the recalls. Participants were randomized to complete either one of the 2 methods first and then complete the second method at least 3 hours later on the same day. Participants were also randomized regarding whether they would recall a weekday or a weekend day. Then, 2 weeks after having completed the first set of recalls, participants completed the recalls again in the reverse order (compared with the order in which they were completed on the first occasion), followed by the completion of the evaluation questionnaire ( Figure 1 ).

Overview of the Generic Meal–Based Dietary Recall

The meal-based dietary intake assessment method was administered using Qualtrics XM (Qualtrics International Inc), a web-based platform for questionnaires. Before completing the dietary intake assessment, participants provided information about their sex, age, weight, and height via a web-based questionnaire. Participants were then asked to select the meal types they had consumed on the previous day from the following list: breakfast, morning snack, lunch, afternoon snack, evening meal, evening snack, and beverage (for beverages consumed alone without food). For each meal type selected, participants were presented with a series of images of generic meals associated with that meal type and asked to choose the meal image that was most similar to the meal they had consumed ( Figure 2 ). For each selected meal, participants were then shown 3 images of that meal, each representing a different portion size, chose the image closest to their intake, and indicated whether the chosen image was smaller than, the same size as, or larger than the portion they had actually consumed ( Figure 2 ). For beverages, participants were asked to choose from 3 different images of that beverage, representing 3 different portion sizes, and then asked to select the number of those portion sizes that they had consumed. For each meal type, participants could choose the option that none of the images presented was representative of their intake for that meal type, for example, “none of the above options are similar to what I ate for breakfast.” If participants selected this option, a box appeared in which they could enter a free-text description of what they had consumed for that meal. This allowed the researchers to determine whether a suitable generic meal could have been chosen or whether there was, in fact, no matching generic meal.


Development of the Meal-Based Dietary Recall

The process of identifying the generic meals that were presented to participants as images has been described in detail elsewhere [ 15 ]. In brief, data from the Irish National Adult Nutrition Survey (NANS; 2008-2010) [ 16 ] were used. This is a representative data set of the dietary intakes of 1500 adults in Ireland, collected using 4-day weighed food diaries. The meals reported were categorized into the following meal types: breakfast, light meals, main meals, snacks, and beverages. A nutrient profiling score, the Nutrient Rich Foods (NRF) Index [ 17 ], was calculated for each meal; specifically, the NRF9.3 version of that profiling score was used. Within each meal type, meals were grouped (ie, clustered) using partitioning around medoids (PAM) clustering to identify groups of meals that had similar NRF Index scores and food group composition. These groups were defined as generic meals.
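
To make the scoring step concrete, the sketch below computes an NRF9.3-style score for a meal from its nutrient content per 100 g. The nutrient list, the reference values, and the per-100 kcal scaling shown here are illustrative assumptions, not the study's exact inputs; the published NRF9.3 definition of Fulgoni et al [ 17 ] should be consulted for the authoritative formulation.

```python
# Illustrative NRF9.3-style score: sum of capped %DV for 9 qualifying nutrients
# minus sum of %MRV for 3 nutrients to limit, expressed per 100 kcal.
# Nutrient lists and reference values below are assumptions for illustration only.

# Assumed daily reference values for the 9 qualifying nutrients
QUALIFYING_DV = {
    "protein_g": 50, "fibre_g": 25, "vitamin_a_ug": 800, "vitamin_c_mg": 60,
    "vitamin_e_mg": 10, "calcium_mg": 800, "iron_mg": 14,
    "magnesium_mg": 375, "potassium_mg": 3500,
}
# Assumed maximum recommended values for the 3 nutrients to limit
LIMITING_MRV = {"saturated_fat_g": 20, "added_sugar_g": 50, "sodium_mg": 2400}


def nrf93_per_100kcal(nutrients_per_100g: dict, energy_kcal_per_100g: float) -> float:
    """Compute an NRF9.3-style score per 100 kcal of a meal."""
    scale = 100.0 / energy_kcal_per_100g  # convert per-100 g amounts to per-100 kcal
    qualifying = sum(
        min(100.0, 100.0 * nutrients_per_100g.get(n, 0.0) * scale / dv)
        for n, dv in QUALIFYING_DV.items()
    )
    limiting = sum(
        100.0 * nutrients_per_100g.get(n, 0.0) * scale / mrv
        for n, mrv in LIMITING_MRV.items()
    )
    return qualifying - limiting


if __name__ == "__main__":
    # Hypothetical porridge-type meal, nutrients per 100 g
    porridge = {"protein_g": 4.5, "fibre_g": 2.3, "calcium_mg": 120,
                "iron_mg": 1.1, "saturated_fat_g": 1.0, "sodium_mg": 60}
    print(round(nrf93_per_100kcal(porridge, energy_kcal_per_100g=110), 1))
```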

The 27,336 individual meals consumed by the participants in the NANS study were condensed to 63 generic meals; 49% (31/63) of these were consumed during the week and 51% (32/63) during the weekend. Among the 63 generic meals that were identified, there was overlap between weekday and weekend meals. That is, some weekday meals were the same as the weekend meals. When these duplicates were removed, participants were presented with 43 meal images: 5 (12%) breakfasts, 5 (12%) snacks (repeated for morning, afternoon, and evening snacks), 10 (23%) lunches, 19 (44%) dinners, and 4 (9%) beverages.

The nutrient content of a given generic meal was defined as the mean nutrient content of the individual meals that made up that generic meal per 100 g. Each generic meal was assigned 7 portion sizes by ordering each of the individual meals by weight and dividing the meals into 7 equal parts based on septile values for meal weight. The median weights for each part were assigned as the generic portion size for that meal. The nutrient composition for each of the portion sizes was calculated using the meal weight for that portion and the generic meal nutrient composition [ 15 ]. Within the meal intake assessment tool presented in this study, the second, fourth, and sixth portion sizes were used as the 3 portion size images shown to participants, with the options asking whether the image chosen was smaller than, the same as, or larger than that consumed, allowing participants to be categorized as consuming the first, third, fifth, or seventh portion size for a given meal.
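
As a rough sketch of the portion-size step (the function, variable names, and example weights are illustrative, not the study's data), the individual meal weights within a generic meal can be sorted, split into 7 equal-sized groups at the septile boundaries, and the median weight of each group taken as one of that generic meal's portion sizes:

```python
import numpy as np

def septile_portion_sizes(meal_weights_g):
    """Split the weights of the individual meals in one generic meal into 7
    equal-sized groups (septiles) and return the median weight of each group."""
    weights = np.sort(np.asarray(meal_weights_g, dtype=float))
    groups = np.array_split(weights, 7)           # 7 roughly equal-sized parts
    return [float(np.median(g)) for g in groups]  # one portion size per septile

# Illustrative example: 21 hypothetical meal weights (grams) for one generic meal
example_weights = [180, 200, 210, 220, 230, 240, 250, 260, 270, 280, 290,
                   300, 310, 320, 330, 340, 350, 370, 390, 420, 460]
portions = septile_portion_sizes(example_weights)
print(portions)                                # 7 portion sizes, smallest to largest
print(portions[1], portions[3], portions[5])   # the 2nd, 4th, and 6th shown as images
```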

24HR Method

Participants completed their 24HRs using a validated, web-based, self-administered 24HR tool called Foodbook24, which follows the multipass recall method [ 18 - 20 ]. Participants first chose the meal types that they had consumed from the following list: breakfast, morning snack, lunch, afternoon snack, evening meal, and evening snack with the option to add additional snacks. For each of the selected meal types, participants added the foods and beverages they had consumed as part of those meal types by text searching from the food list using a search bar. Portion size was then reported based on the number of the food or beverage items consumed or from portion size photographs, as appropriate. Participants were then presented with the list of foods they had recorded for review, before being presented with a list of commonly forgotten foods. The food list contained food composition data from the McCance and Widdowson’s Composition of Foods Integrated Dataset (CoFID) [ 21 ], with some additions relevant to dietary intakes in Ireland. The development of Foodbook24 and its food list is described in detail elsewhere [ 18 , 20 ].

Statistical Analysis

All analyses were performed using R (version 4.2.2, R Foundation for Statistical Computing) [ 22 ] in the RStudio integrated development environment (version 2022.07.2+576, Posit PBC) [ 23 ]. Data from 24HRs were used to identify participants likely to be misreporters of energy intake (EI), based on the ratio of estimated EI to basal metabolic rate (BMR; EI:BMR) using the BMR equations from Henry [ 24 ]. On the basis of the Goldberg equations [ 25 ], EI:BMR <0.96 was deemed indicative of underreporting, and EI:BMR >2.49 was indicative of overreporting. The analysis presented in this paper includes all participants (161/161, 100%), given the negligible differences observed when misreporters were removed; the analysis of the smaller data set with misreporters excluded is provided in Multimedia Appendix 1 . P values of <.05 were considered statistically significant.
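
A minimal sketch of the misreporter screen is shown below. It takes BMR as an input (in practice derived from the Henry equations [ 24 ]) and applies the Goldberg-based cutoffs quoted above; the function and column names are assumptions for illustration.

```python
import pandas as pd

def classify_reporting(energy_intake_kcal: float, bmr_kcal: float) -> str:
    """Classify a participant's energy-reporting status from the EI:BMR ratio,
    using the cutoffs applied in this study (<0.96 under, >2.49 over)."""
    ratio = energy_intake_kcal / bmr_kcal
    if ratio < 0.96:
        return "underreporter"
    if ratio > 2.49:
        return "overreporter"
    return "plausible"

# Illustrative data: EI from the 24HR and BMR from an equation such as Henry's
df = pd.DataFrame({"participant": [1, 2, 3],
                   "ei_kcal": [1200, 2100, 4500],
                   "bmr_kcal": [1500, 1450, 1600]})
df["status"] = [classify_reporting(ei, bmr)
                for ei, bmr in zip(df["ei_kcal"], df["bmr_kcal"])]
print(df)
```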

The Shapiro-Wilk test was used to determine whether the differences in the variables between methods were normally distributed and confirmed using visual inspection of histograms. Wilcoxon signed rank test was performed to compare nutrient intake estimates obtained from the web-based 24HR with those obtained from the generic meal–based recall. Wilcoxon effect size ( r ) was calculated. Effect size ≥0.1 and <0.3 was considered small, effect size ≥0.3 and <0.5 was considered moderate, and effect size ≥0.5 was considered large [ 26 ]. Bland-Altman analysis was also performed, whereby the mean difference between the 2 data sets and the limits of agreement (LOAs; mean difference – 1.96 SD to mean difference + 1.96 SD) for each nutrient were calculated. The correlation of nutrient intakes between the 2 methods was assessed using Spearman rank correlation coefficients. Correlation coefficient <0.20 was indicative of poor correlation, coefficient ≥0.20 and <0.50 was indicative of acceptable correlation, and coefficient ≥0.5 was indicative of good correlation [ 27 ].
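
The sketch below mirrors these comparisons for a single nutrient using SciPy. The arrays are simulated values, not study data, and the Wilcoxon effect size is approximated from the two-sided P value as r = Z / sqrt(n) rather than taken directly from the test output; the study's analysis was performed in R, so this is only an illustrative equivalent.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
recall_24h = rng.normal(70, 15, size=161)              # eg, protein (g) from the 24HR
generic = recall_24h + rng.normal(3, 12, size=161)     # eg, protein (g) from the generic recall
diff = recall_24h - generic

# Normality of the paired differences (Shapiro-Wilk)
print("Shapiro-Wilk p =", stats.shapiro(diff).pvalue)

# Wilcoxon signed rank test and an approximate effect size r = Z / sqrt(n)
w = stats.wilcoxon(recall_24h, generic)
z = stats.norm.isf(w.pvalue / 2)          # |Z| recovered from the two-sided P value
r = z / np.sqrt(len(diff))
print("Wilcoxon p =", w.pvalue, "effect size r ~", round(r, 2))

# Bland-Altman mean difference (bias) and limits of agreement
mean_diff, sd_diff = diff.mean(), diff.std(ddof=1)
loa = (mean_diff - 1.96 * sd_diff, mean_diff + 1.96 * sd_diff)
print("Bland-Altman bias =", round(mean_diff, 2), "LOA =", np.round(loa, 2))

# Spearman rank correlation between the two methods
rho, p = stats.spearmanr(recall_24h, generic)
print("Spearman rho =", round(rho, 2), "p =", round(p, 4))
```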

Cross-classification of quartiles was performed for all nutrients (23/23, 100%) assessed. That is, nutrient intakes from both methods were divided into quartiles to determine the proportion of participants who remained in the same quartile for both methods (exact agreement), the proportion of participants who were classified in the same or adjacent quartiles (exact + adjacent), the proportion of participants who were classified 2 quartiles apart (disagreement), and the proportion of participants who were classified 3 quartiles apart (extreme disagreement). Participants were also classified according to nutrient-based dietary guidelines, separately for both methods [ 28 - 30 ]. For example, they were classified based on whether their nutrient intakes were low, adequate, or high according to those guidelines. The nutrients assessed were those deemed to be of public health relevance and included protein, carbohydrate, fat, monounsaturated fat, polyunsaturated fat, saturated fat, salt, dietary fiber, calcium, iron, folate, thiamin, riboflavin, and vitamin C. The proportion of individuals who were classified into the same category based on both methods was calculated for each nutrient.
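
A minimal sketch of the cross-classification step follows, with simulated intakes standing in for the study data: each participant's intake from each method is binned into quartiles, and the agreement categories are counted. The same pattern extends to the guideline-based categories (low, adequate, high) by replacing the quartile bins with guideline cutoffs.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(7)
intake_24h = rng.gamma(8, 10, size=161)                     # simulated nutrient intakes
intake_generic = intake_24h * rng.normal(1.0, 0.3, size=161)  # same participants, other method

# Quartile membership (0-3) within each method
q_24h = pd.qcut(intake_24h, 4, labels=False)
q_generic = pd.qcut(intake_generic, 4, labels=False)
gap = np.abs(q_24h - q_generic)

summary = {
    "exact agreement": np.mean(gap == 0),
    "exact + adjacent": np.mean(gap <= 1),
    "disagreement (2 apart)": np.mean(gap == 2),
    "extreme disagreement (3 apart)": np.mean(gap == 3),
}
print({k: f"{v:.1%}" for k, v in summary.items()})
```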

Evaluation Questionnaire

Participants also completed an evaluation questionnaire at the end of the study, which was administered via Qualtrics XM (Qualtrics International Inc). Participants were asked to what extent they agreed that the meals presented in the meal-based dietary intake assessment tool were representative of what they consume, that the portion sizes presented were representative of what they consume, that the instructions were clear and easy to understand, and that overall, the tool was easy to use. The response options were agree, somewhat agree, somewhat disagree, or disagree. Participants were also asked how they would describe the ease of use of the meal-based tool compared with the 24HR with the response options being: better, somewhat better, somewhat worse, or worse. Finally, participants were asked to respond either yes or no as to whether they would consider using a similar tool to the meal-based tool in the future.

Study Sample

A total of 161 participants completed both methods of dietary intake assessment at 2 time points. Most participants were female (131/161, 81.4%), the median age was 54 (IQR 39-63) years, and the median BMI was 25.3 (IQR 22.5-28.9) kg/m² ( Table 1 ).

a Weight and height were self-reported by participants, and BMI was subsequently calculated from the self-reported values.

Daily Nutrient Intakes

For the 23 variables compared, the percentage difference between the meal-based and 24HR methods ranged from 0% to 46.7%, with the median percentage difference being 7.6% and with the generic method providing a higher estimate than the 24HR for 18 nutrients. P values for the differences between the 2 methods ranged from <.001 to .97, with 13 (57%) of 23 comparisons reaching statistical significance ( P <.05). Among the 23 variables, effect sizes for the differences were small for 19 (83%) variables, moderate for 2 (9%) variables (folate in µg and sodium in mg), and large for 2 (9%) variables (polyunsaturated fat in g and as % total EI [TEI]; Table 2 ).

a P values were derived using Wilcoxon signed rank test, with P <.05 indicating statistical significance.

b Effect size ≥0.1 and <0.3 was considered small, effect size ≥0.3 and <0.5 was considered moderate, and effect size ≥0.5 was considered large [ 26 ].

c TEI: total energy intake.

Comparing the differences using the Bland-Altman analysis, the mean differences between the 2 methods for macronutrients were close to 0, whereas those for micronutrients were larger. The LOAs tended to be wide for all nutrients (23/23, 100%). The analysis identified 17 (74%) nutrients for which ≥95% of participants fell within the LOA. The proportion of individuals who fell within the LOA ranged from 92.5% (149/161) for polyunsaturated fats (% TEI) to 98.1% (158/161) for dietary fiber (g). Bland-Altman plots for energy and macronutrients are presented in Figure 3 ; values from the Bland-Altman analysis for the remaining nutrients are given in Table 3 .


a Differences are given as values from the 24-hour recall minus values from the generic recall.

b LOA: limit of agreement.

Correlation coefficients were statistically significant ( P <.05) for 18 (78%) of the 23 variables. No significant correlation was identified for total fat (% TEI), monounsaturated fat (% TEI), polyunsaturated fat (% TEI), vitamin D (µg), and sodium (mg). The statistically significant correlations ranged from 0.16 for saturated fat (% TEI) to 0.45 for sugar (g), with a median correlation of 0.32 (IQR 0.25-0.40; Table 4 ).

a TEI: total energy intake.

Categorization of Daily Nutrient Intakes

Cross-classification of quartiles is also presented in Table 4 . The proportion of individuals remaining in the same quartile ranged from 22.9% (37/161) for total fat (% TEI) to 39.1% (63/161) for carbohydrates (g), with a median of 32.3% (IQR 28.6%-34.5%). Of the 23 nutrients, 3 (13%) had extreme disagreement for ≤5% of participants (protein in both g and % TEI and sugar as % TEI), 17 (74%) had extreme disagreement for >5% and ≤10% of participants, and 3 (13%) had extreme disagreement for >10% of participants (monounsaturated and polyunsaturated fat as % TEI and vitamin D in µg). When participants were classified according to nutrient-based guidelines (eg, low, adequate, or high) for the 14 nutrients, the proportion of participants who were classified into the same category by both methods ranged from 52.8% (85/161) for total fat (% TEI) to 84.5% (136/161) for protein (g/kg body weight; Table 5 ). Across the 14 nutrients, the median proportion of participants classified into the same category by both methods was 70.5% (IQR 61.8%-78.9%).

a BW: body weight.

b TEI: total energy intake.

Participant Evaluation Questionnaire

Most participants somewhat agreed or agreed that, for the generic recall, the instructions provided were clear and easy to understand (147/161, 91.3%); that the portion sizes in the tool were largely representative of the portion sizes they had consumed (137/161, 85.1%); and that, overall, the tool was easy to use (130/161, 80.7%). However, most (89/161, 55.3%) reported that the meal images were not representative of what they had actually consumed. This was anticipated, and for each meal in the generic recall, participants were given the option to indicate that none of the meal images presented to them was similar to what they had consumed. This option was chosen for 36.0% (683/1895) of the total meals consumed, distributed across the meal types as follows: 17.4% (119/683) were breakfasts, 24.7% (169/683) were light meals, 20.4% (139/683) were main meals, 29.3% (200/683) were snacks, and 8.2% (56/683) were beverages. When the researchers reviewed these choices, comparing participants’ text descriptions of their meals with the available generic meal images, it was determined that 86.8% (593/683) of the meals were correctly recorded by participants as not having a matching generic meal, whereas 13.2% (90/683) of the meals could have been matched to one of the generic meal images. Although most of the participants (106/161, 65.8%) reported that the ease of use of the meal-based tool was worse or somewhat worse than that of the 24HR, 39.1% (63/161) of the participants reported that they would consider using a tool similar to the generic recall again.

Principal Findings

This study reports on a novel, meal-based recall that allows individuals to report their intakes of whole meals rather than reporting individual food or food group intakes for each meal, as is necessary in commonly used traditional dietary intake assessment methods. Estimated nutrient intakes obtained from the generic meal–based method were comparable with those obtained from the 24HR for some but not all nutrients. Comparisons between the methods were more similar at the group level than at the individual level. Participants found the meal-based method understandable and easy to use. Previous studies have identified generic meals in national dietary survey data [ 15 , 31 - 33 ], but this study is the first to use images of those generic meals as a novel method of dietary intake assessment.

In previous studies of generic meals, comparisons were made between the estimated nutrient intakes obtained from the original data and from the generic data. Agreement between the original and generic data was stronger in those studies than the agreement between the 2 methods of dietary assessment presented in this study; however, this was expected because those studies compared intakes arising from generic meals with intakes arising from the original data from which those generic meals were derived [ 15 , 31 - 33 ]. In this study, comparisons were made on data collected using 2 different methods; therefore, the generic meal images presented to participants were not influenced by those participants’ food intakes. Agreement between the methods in this study varied depending on the nutrient in question. In general, percentage differences and effect sizes for differences between the 2 methods were small for most nutrients (19/23, 83%). Certain nutrients showed poorer agreement than others including total fat, polyunsaturated fats, monounsaturated fats, and vitamin D. A few features of the generic method may have given rise to differences in these fat-soluble nutrients. In the food-based 24HR, participants could specify the types of fats that they added to foods; however, in the generic method, this was not the case, as participants had to choose between predefined generic meals. Some of these nutrients are found in relatively high concentrations in relatively few foods, which may also give rise to differences between a food-based method and a generic method. These trends have also been observed previously when comparing FFQs with 24HRs and food records, as noted by Cui et al [ 34 ] in a meta-analysis of 130 such studies. In the case of polyunsaturated fats, the difference between the median intakes was considerably larger than other nutrients. This was identified as having arisen from differences in food composition between the generic data and the 24HR data. The generic meals used in the generic recall and their nutritional composition were derived from data from NANS in Ireland [ 15 , 16 ]. The composition data in that survey, in turn, were obtained from a variety of sources including food packaging, industry information, published papers, and various food composition tables including those from the United Kingdom, Finland, and Australia that were published between 2002 and 2010 [ 35 ]. The composition data used in the 24HR were obtained from the 2021 publication of McCance and Widdowson’s CoFID [ 21 ]. Upon further examination of polyunsaturated fat content of individual foods from both NANS and CoFID, several foods were identified in NANS that had considerably higher values for polyunsaturated fat than those corresponding foods in CoFID in a manner which was not evident for other nutrients.

Although other studies have not examined the potential for dietary assessment based on individuals’ reporting of whole meal intakes, comparisons can be drawn with other related methods. Murakami et al [ 36 ] developed a meal-focused method called the food combination questionnaire (FCQ). In this method, participants reported, for each meal type (breakfast, lunch, dinner, and snacks), what staple foods they had consumed, what accompanying foods they had consumed with those staple foods, and how frequently per week they had consumed them during the preceding month. Agreement regarding estimates of nutrient intake between the FCQ and a 4-day weighed food record was similar to the agreement reported in this study for the comparison between the generic recall and 24HR. The median value of the statistically significant correlation coefficients was 0.35 compared with 0.31 in this study. In that study, however, the FCQ estimated lower intakes for most nutrients compared with food record, whereas in this study, the generic recall estimated higher intakes for most nutrients (16/23, 70%) compared with the 24HR.

Another approach to dietary intake assessment, reported by Katz et al [ 37 ], involves participants reporting their intakes at the dietary pattern level instead of the meal, food group, or food levels. In this format, participants are presented with 2 images at a time. Each image contains multiple different foods and meals representing different dietary patterns based on the Healthy Eating Index, and the participant must choose which image is more representative of their dietary intakes. This process of choosing between 2 different diet pattern images is repeated until a “best fit” is identified for the individual [ 37 ]. Comparison of estimated nutrient intakes obtained from this method with those obtained from 3 separate 24HRs resulted in a median correlation of 0.30 among the statistically significant correlations [ 38 ]. Similar to this study, the abovementioned diet pattern approach also tended to estimate higher nutrient intakes than the 24HR. Similar trends were also observed in the Bland-Altman analysis, with low bias or systematic error for macronutrients (ie, mean differences were close to 0) but slightly greater bias for micronutrients. Wide LOAs were also observed in both studies, indicating considerable random error. Random error can be reduced through repeated measures [ 39 ]; therefore, conducting more than the 2 generic recalls performed in this study may mitigate the random error observed here. Another trend evident from the Bland-Altman plots for macronutrients expressed as % TEI is that the generic recall gives higher estimates of intake than the 24HR when intakes are low and lower estimates when intakes are high. This arises from the narrowing of the distribution of intakes (ie, reduced variance) in the generic method, where participants can only choose from a small number of generic meals, compared with the practically limitless combinations of foods that participants can choose from in the 24HR. The use of portion sizes mitigates this trend for values expressed as absolute intakes, for example, grams of macronutrient intake. However, expressing those values relative to EI effectively nullifies the increased variance introduced by portion size, giving rise to the trends observed in the Bland-Altman plots for values given as % TEI but not for those given as absolute intakes.

A version of the FCQ described previously has been used to provide dietary advice in a pilot study by Murakami et al [ 40 ]. Similarly, the dietary pattern–based method of dietary assessment described previously has been incorporated into a commercially available tool that provides nutrition recommendations to users [ 41 ]; however, the recommendation aspect of this tool has not been described in the scientific literature. It has also been proposed that a meal-based method of dietary intake assessment could provide a low-burden alternative for gathering dietary data for personalized nutrition advice [ 3 ]. This study presents the first attempt at implementing such a proposal. Its performance in estimating nutrient intakes is similar to that of other approaches that focus on meals or dietary patterns. However, these methods tend to show poorer agreement than that between FFQs and 24HRs, where a median correlation of 0.42 has been reported for the various nutrients assessed [ 34 ], or than that between 24HRs and diet records, where median correlations in the range of 0.45 to 0.50 have been reported [ 42 , 43 ]. Further studies are required to appraise the performance of these methods, including the generic recall, against objective biomarkers of dietary intake.

Many of the comparisons between the 2 methods in this study are based on their ability to provide point estimates of nutrient intakes, with agreement for point estimates of nutrient intake typically stronger at the group level than at the individual level. This trend has also been observed in comparisons of other methods of dietary intake assessment, for example, between 24HR and diet record [ 42 ], between FFQ and diet record [ 44 ], and between FFQ and 24HR [ 45 ]. In this study, this was expected, given that there are fewer generic meals than foods for participants to choose from, that is, the variance was intentionally reduced but in a manner that was systematic and consistent across meals [ 15 ]. This trend has been observed not just in comparison studies of different methods of dietary assessment but also in studies that have used generic meals as a method for secondary analysis of dietary data that have already been collected [ 15 , 31 - 33 ]. Approaches to personalized nutrition facilitated by technology, however, do not rely solely on point estimates of nutrient intake. Instead, individuals are categorized into ranges of nutrient intakes (eg, low, adequate, or high), allowing room for error in the point estimates [ 40 , 46 ]. In the Food4Me study of personalized nutrition, for example, participants were categorized in this manner, and dietary advice reports were tailored depending on the participants’ categories for various nutrients [ 40 , 46 ]. This study has shown that individuals can be classified according to nutrient-based dietary guidelines using generic recall. Similarly, this approach can be used to rank individuals based on their nutrient intakes, with values from the cross-classification of quartiles being comparable with those observed in studies comparing FFQs with 24HRs, where median exact + adjacent agreement has been reported between 66% and 86.1% [ 47 - 49 ].

Notably, with regard to the use of generic recall in dietary assessment in personalized nutrition, most participants (130/161, 80.7%) reported in the evaluation questionnaire that the recall was easy to use. Ease of use has previously been reported as an important factor in the choice of nutrition or diet apps among a cohort of 2382 adults in Europe [ 5 ]. In contrast, two-thirds of the participants reported that the ease of use of the 24HR was better than that of the meal-based approach. This is understandable given that the web-based 24HR used in this study was developed as a stand-alone platform in collaboration with software developers [ 20 ], whereas generic recall was a concept or pilot tool implemented by the authors using a commercially available questionnaire platform and not specifically designed for user experience. Future user evaluations and collaboration with software development professionals could further enhance the user experience of generic recall, including incorporation of meal image recognition.

So far, many studies have been conducted on image-based food recognition using computer vision as a means of reducing the burden of data input for food-based methods of dietary intake assessment [ 13 ]. These image recognition tools, however, are food based, insofar as when an individual takes a photo of their meal, the software segments the image and provides a suggested match for each of the individual foods that make up the meal [ 13 ]. The user must then confirm that each of the suggested foods are correct. For any missing or incorrect foods, the user must text search for the correct food and add it to their record [ 12 ]. It is possible that a meal-based approach could be taken to image recognition in dietary assessment, removing the need to identify individual foods. Instead, the software would classify a whole meal image as one of the generic meals used in this study. Further studies are required, however, to determine the feasibility of such an approach for meal-based image recognition in nutrition.

Limitations

This study has a number of limitations. The generic meal images used are based on dietary intake data obtained from NANS (2008-2010) [ 16 ]. Although, at the time of writing, these were the most recently published intake data in Ireland, the generic meals do not account for any changes in food composition or meal intakes that may have occurred since that time. This may account for the finding in the evaluation questionnaire that 55.3% (89/161) of the participants reported that the generic meals were not representative of their intakes, and it shows the need to ensure that generic meals are revised using the most recent data available. This may be of particular importance if the tool were to be used with a younger cohort of participants than those who participated in this study. This study also used a convenience sample rather than one that is representative of the population of Ireland, which may have resulted in the recruitment of participants who are more likely to have an interest in nutrition and health. The unbalanced nature of the demographics of the study participants precludes subgroup analysis in relation to demographic factors.

The 2 types of recall used in this study are influenced by measurement error and only provide an estimation of true intakes. This study aimed to compare these 2 methods, and therefore, the reported statistics should be interpreted as representing the relationship between generic recall and 24HR and not the relationship between generic recall and true intakes. A comparison study was deemed more appropriate as an initial indication of generic recall’s strengths and weaknesses before considering more labor-intensive and costly objective measures of comparison such as feeding studies or biomarkers of dietary intake.

The strengths of this study include its sample size of 161 participants. The randomization of participants regarding which recall method they would complete first, the reversal of the order of completion in their second set of recalls, and the 2-week washout period mitigate any learning effect that participants may have experienced after completing the first recall. The comparison method used, Foodbook24, is a validated, web-based 24HR [ 20 ]. This study also captured dietary intakes on both weekend days and weekdays, accounting for the potential differences in eating habits between the 2 types of day [ 50 - 52 ].

Conclusions

A generic meal–based method of dietary intake assessment provides estimates of nutrient intake comparable with those provided by a web-based 24HR. Agreement ranges among nutrients from weak to moderate, with better agreement at the group level than at the individual level. Further studies are required to improve this method of dietary assessment, including determining the number of recalls required, and more recent dietary intake data should be used to define the generic meals. Future studies will also determine the feasibility of taking a meal-based approach to image recognition in dietary intake assessment.

Acknowledgments

This paper has emanated from a study conducted with the financial support of Science Foundation Ireland under grant 12/RC/2289_P2. For the purpose of Open Access, the authors have applied a CC BY public copyright license to any author-accepted manuscript version arising from this submission. Science Foundation Ireland had no role in any aspect of designing or conducting the study or preparing the manuscript.

Data Availability

The data sets generated and analyzed during this study are available from the corresponding author upon reasonable request.

Authors' Contributions

CO was involved in conceptualization, data curation, formal analysis, investigation, methodology, project administration, visualization, original draft preparation, and review and editing. ERG was involved in conceptualization, data curation, formal analysis, funding acquisition, methodology, project administration, resources, supervision, visualization, and review and editing.

Conflicts of Interest

None declared.

Multimedia Appendix 1: Analysis of the smaller data set, with misreporters excluded.

  • GBD 2017 Diet Collaborators. Health effects of dietary risks in 195 countries, 1990-2017: a systematic analysis for the Global Burden of Disease Study 2017. Lancet. May 11, 2019;393(10184):1958-1972. [ https://linkinghub.elsevier.com/retrieve/pii/S0140-6736(19)30041-8 ] [ CrossRef ] [ Medline ]
  • Gandy J. Manual of Dietetic Practice. 6th edition. Hoboken, NJ. John Wiley & Sons; Jul 2019.
  • Gibney MJ, Walsh MC. The future direction of personalised nutrition: my diet, my phenotype, my genes. Proc Nutr Soc. May 2013;72(2):219-225. [ CrossRef ] [ Medline ]
  • O'Hara C, Gibney ER. Meal pattern analysis in nutritional science: recent methods and findings. Adv Nutr. Jul 30, 2021;12(4):1365-1378. [ https://linkinghub.elsevier.com/retrieve/pii/S2161-8313(22)00144-2 ] [ CrossRef ] [ Medline ]
  • Vasiloglou MF, Christodoulidis S, Reber E, Stathopoulou T, Lu Y, Stanga Z, et al. Perspectives and preferences of adult smartphone users regarding nutrition and diet apps: web-based survey study. JMIR Mhealth Uhealth. Jul 30, 2021;9(7):e27885. [ https://boris.unibe.ch/id/eprint/156649 ] [ CrossRef ] [ Medline ]
  • Eldridge AL, Piernas C, Illner AK, Gibney MJ, Gurinović MA, de Vries JH, et al. Evaluation of new technology-based tools for dietary intake assessment-an ILSI Europe dietary intake and exposure task force evaluation. Nutrients. Dec 28, 2018;11(1):55. [ https://www.mdpi.com/resolver?pii=nu11010055 ] [ CrossRef ] [ Medline ]
  • Camargo AM, Botelho AM, Dean M, Fiates GM. Meal planning by high and low health conscious individuals during a simulated shop in the supermarket: a mixed methods study. Appetite. Jan 01, 2020;144:104468. [ CrossRef ] [ Medline ]
  • Bisogni CA, Falk LW, Madore E, Blake CE, Jastran M, Sobal J, et al. Dimensions of everyday eating and drinking episodes. Appetite. Mar 2007;48(2):218-231. [ CrossRef ] [ Medline ]
  • Englund-Ögge L, Birgisdottir BE, Sengpiel V, Brantsæter AL, Haugen M, Myhre R, et al. Meal frequency patterns and glycemic properties of maternal diet in relation to preterm delivery: results from a large prospective cohort study. PLoS One. Mar 01, 2017;12(3):e0172896. [ https://dx.plos.org/10.1371/journal.pone.0172896 ] [ CrossRef ] [ Medline ]
  • Wilson JE, Blizzard L, Gall SL, Magnussen CG, Oddy WH, Dwyer T, et al. An eating pattern characterised by skipped or delayed breakfast is associated with mood disorders among an Australian adult cohort. Psychol Med. Dec 2020;50(16):2711-2721. [ https://urn.fi/URN:NBN:fi-fe2021042824709 ] [ CrossRef ] [ Medline ]
  • Murakami K, Shinozaki N, McCaffrey TA, Livingstone MB, Sasaki S. Data-driven development of the meal-based diet history questionnaire for Japanese adults. Br J Nutr. Oct 14, 2021;126(7):1056-1064. [ CrossRef ] [ Medline ]
  • Boushey CJ, Spoden M, Zhu FM, Delp EJ, Kerr DA. New mobile methods for dietary assessment: review of image-assisted and image-based dietary assessment methods. Proc Nutr Soc. Aug 2017;76(3):283-294. [ CrossRef ] [ Medline ]
  • Dalakleidi KV, Papadelli M, Kapolos I, Papadimitriou K. Applying image-based food-recognition systems on dietary assessment: a systematic review. Adv Nutr. Dec 22, 2022;13(6):2590-2619. [ https://linkinghub.elsevier.com/retrieve/pii/S2161-8313(23)00093-5 ] [ CrossRef ] [ Medline ]
  • Timon CM, van den Barg R, Blain RJ, Kehoe L, Evans K, Walton J, et al. A review of the design and validation of web- and computer-based 24-h dietary recall tools. Nutr Res Rev. Dec 2016;29(2):268-280. [ CrossRef ] [ Medline ]
  • O'Hara C, O'Sullivan A, Gibney ER. A clustering approach to meal-based analysis of dietary intakes applied to population and individual data. J Nutr. Oct 06, 2022;152(10):2297-2308. [ https://linkinghub.elsevier.com/retrieve/pii/S0022-3166(23)08606-6 ] [ CrossRef ] [ Medline ]
  • Walton J. National adult nutrition survey summary report. Irish Universities Nutrition Alliance. Mar 2011. URL: https://irp-cdn.multiscreensite.com/46a7ad27/files/uploaded/The%20National%20Adult%20Nutrition%20Survey%20Summary%20Report%20March%202011.pdf [accessed 2024-05-20]
  • Fulgoni 3rd VL, Keast DR, Drewnowski A. Development and validation of the nutrient-rich foods index: a tool to measure nutritional quality of foods. J Nutr. Aug 2009;139(8):1549-1554. [ https://linkinghub.elsevier.com/retrieve/pii/S0022-3166(22)06842-0 ] [ CrossRef ] [ Medline ]
  • Evans K, Hennessy Á, Walton J, Timon C, Gibney E, Flynn A. Development and evaluation of a concise food list for use in a web-based 24-h dietary recall tool. J Nutr Sci. Aug 29, 2017;6:e46. [ https://europepmc.org/abstract/MED/29152250 ] [ CrossRef ] [ Medline ]
  • Timon CM, Evans K, Kehoe L, Blain RJ, Flynn A, Gibney ER, et al. Comparison of a web-based 24-h dietary recall tool (Foodbook24) to an interviewer-led 24-h dietary recall. Nutrients. Apr 25, 2017;9(5):425. [ https://www.mdpi.com/resolver?pii=nu9050425 ] [ CrossRef ] [ Medline ]
  • Timon CM, Blain RJ, McNulty B, Kehoe L, Evans K, Walton J, et al. The development, validation, and user evaluation of Foodbook24: a web-based dietary assessment tool developed for the irish adult population. J Med Internet Res. May 11, 2017;19(5):e158. [ https://www.jmir.org/2017/5/e158/ ] [ CrossRef ] [ Medline ]
  • Public Health England. McCance and Widdowson's The Composition of Foods. Burlington, MA. The Royal Society of Chemistry Publishing; 2002.
  • R Core Team. R: a language and environment for statistical computing. R Foundation for Statistical Computing. 2019. URL: https://www.R-project.org/ [accessed 2024-01-26]
  • RStudio: integrated development for R. RStudio. URL: http://www.rstudio.com/ [accessed 2024-01-26]
  • Henry CJ. Basal metabolic rate studies in humans: measurement and development of new equations. Public Health Nutr. Oct 2005;8(7A):1133-1152. [ CrossRef ] [ Medline ]

COMMENTS

  1. What Is a Research Design

    Step 1: Consider your aims and approach; Step 2: Choose a type of research design; Step 3: Identify your population and sampling method; Step 4: Choose your data collection methods; Step 5: Plan your data collection procedures; Step 6: Decide on your data analysis strategies.

  2. What Is Research Design? 8 Types + Examples

    Research design refers to the overall plan, structure or strategy that guides a research project, from its conception to the final data analysis. A good research design serves as the blueprint for how you, as the researcher, will collect and analyse data while ensuring consistency, reliability and validity throughout your study.

  3. What is a Research Design? Definition, Types, Methods and Examples

    A research design is defined as the overall plan or structure that guides the process of conducting research. It is a critical component of the research process and serves as a blueprint for how a study will be carried out, including the methods and techniques that will be used to collect and analyze data.

  4. Research Design

    Step 1: Consider your aims and approach. Before you can start designing your research, you should already have a clear idea of the research question you want to investigate. Example research question: How can teachers adapt their lessons for effective remote learning?

  5. Research Design

    Definition: Research design refers to the overall strategy or plan for conducting a research study. It outlines the methods and procedures that will be used to collect and analyze data, as well as the goals and objectives of the study.

  6. Research Design: What it is, Elements & Types

    The process of research design is a critical step in conducting research. By following the steps of research design, researchers can ensure that their study is well-planned, ethical, and rigorous. Regarding research design elements: impactful research usually creates minimal bias in the data and increases trust in the accuracy of the collected data.

  7. What Is Research Design? (PDF)

    Descriptive research encompasses much government-sponsored research, including the population census, the collection of a wide range of social indicators and economic information such as household expenditure patterns, time use studies, employment and crime statistics, and the like. Descriptions can be concrete or abstract.

  8. How to Write a Research Design

    Step 2: Decide on the type of data you need for your research. The type of data you need to collect depends on your research questions or research hypothesis. Two types of research data can be used to answer the research questions: primary data and secondary data.

  9. The Four Types of Research Design

    When you conduct research, you need to have a clear idea of what you want to achieve and how to accomplish it. A good research design enables you to collect accurate and reliable data to draw valid conclusions.

  10. Introducing Research Designs

    Research design combines the decisions within a research process that enable us to make a specific type of argument by answering the research question. It is the implementation plan for the research study ...

  11. What is a research design?

    A research design is a strategy for answering your research question. It defines your overall approach and determines how you will collect and analyze data.

  12. Research design

    A research design is an arrangement of conditions or collection. [5] Common design types include: descriptive (e.g., case study, naturalistic observation, survey); correlational (e.g., case-control study, observational study); experimental (e.g., field experiment, controlled experiment, quasi-experiment); and review (literature review, systematic review).

  13. Research design: the methodology for interdisciplinary research

    The first kind, "research into design", studies the design product post hoc, and the MIR framework suits the interdisciplinary study of such a product. In contrast, "research for design" generates knowledge that feeds into both the noun and the verb 'design', which means it precedes the design(ing).

  14. What are research designs?

    According to Jenkins-Smith et al. (2017), a research design is the set of steps you take to collect and analyze your research data. In other words, it is the general plan to answer your research topic or question. You can also think of it as a combination of your research methodology and your research method.

  15. What is Research Design? Types, Elements and Examples

    The research design categories under this are descriptive, experimental, correlational, diagnostic, and explanatory. In qualitative designs, data analysis involves interpretation and narrative analysis, and the reasoning used to synthesize data is inductive; in quantitative designs, data analysis involves statistical analysis and hypothesis testing.

  16. (PDF) Research Design

    The research design refers to the overall strategy that you choose to integrate the different components of the study in a coherent and logical way, thereby ensuring you will effectively address...

  17. Organizing Your Social Sciences Research Paper

    The research design refers to the overall strategy and analytical approach that you have chosen in order to integrate, in a coherent and logical way, the different components of the study, thus ensuring that the research problem will be thoroughly investigated.

  18. What Is a Research Design: Types, Characteristics & Examples

    A research design is the blueprint for any study. It's the plan that outlines how the research will be carried out.

  19. Guide to Experimental Design

    Step 1: Define your variables. You should begin with a specific research question. We will work with two research question examples, one from health sciences and one from ecology. Example question 1 (phone use and sleep): you want to know how phone use before bedtime affects sleep patterns. (A brief worked sketch of this example appears after this list.)

  20. Research Design

    In your dissertation you can define research design as a general plan about what you will do to answer the research question. [1] It is a framework for choosing specific methods of data collection and data analysis. Research design can be divided into two groups: exploratory and conclusive. Exploratory research, according to its name merely ...

  21. Research Report

    Definition: A research report is a written document that presents the results of a research project or study, including the research question, methodology, results, and conclusions, in a clear and objective manner.

  22. 10 Design Tips For Creating Research Reports People Want to Read

    Make sure you use the best chart or graph for your data so that your information is clearly conveyed, and ensure that you label data correctly so all the important information is easily available for the reader. Pay attention to your margins: give yourself a good bit of space around the edges of your report design.

  23. Descriptive Research Design

    Descriptive research design is a type of research methodology that aims to describe or document the characteristics, behaviors, attitudes, opinions, or perceptions of a group or population being studied.

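To make the "phone use and sleep" example above concrete, here is a minimal sketch of how the variables from such a design might be analysed. It uses simulated data only: the variable names, sample sizes, group means, and effect sizes are assumptions made for illustration, not results from any study.

import numpy as np
from scipy import stats

# Simulated (hypothetical) data for the "phone use and sleep" example.
rng = np.random.default_rng(42)

# Experimental framing: phone use before bedtime is the independent variable
# (two groups of 30 participants); hours of sleep is the dependent variable.
sleep_no_phone = rng.normal(loc=7.5, scale=0.8, size=30)    # assumed group mean
sleep_with_phone = rng.normal(loc=6.9, scale=0.8, size=30)  # assumed group mean

# Compare the two group means with an independent-samples t-test.
t_stat, p_value = stats.ttest_ind(sleep_no_phone, sleep_with_phone)
print(f"Experimental comparison: t = {t_stat:.2f}, p = {p_value:.3f}")

# Correlational framing: minutes of phone use is only measured, not manipulated,
# and its association with sleep duration is tested instead.
minutes_phone = rng.uniform(0, 120, size=60)
sleep_hours = 7.5 - 0.005 * minutes_phone + rng.normal(0, 0.6, size=60)
r, p_corr = stats.pearsonr(minutes_phone, sleep_hours)
print(f"Correlational association: r = {r:.2f}, p = {p_corr:.3f}")

The two framings mirror the distinction running through the entries above: an experimental design manipulates the independent variable, whereas a correlational design only measures it, so the appropriate analysis differs accordingly.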