

The Research Alliance for New York City Schools


Exploring the Evidence on Virtual and Blended Learning

Chelsea Farley (2020)

The Research Alliance has developed an overview of research and practical guidance on strategies to implement remote teaching and learning, as well as strategies that combine virtual and in-class instruction. While not a complete summary of the relevant literature, our overview provides links to a variety of useful articles, resources, and reports. We hope this material can inform school and district leaders’ planning and support their ongoing assessment of what has and has not been effective, for whom, and under what conditions.

Key Takeaways from the Research Alliance’s Review

  • Eight months into the COVID-19 pandemic, there is still an enormous need for data and evidence to understand how the school closures that took place in NYC and around the country—and how the various approaches to reopening—have affected students’ academic, social/emotional, and health outcomes. New research is needed to inform critical policy and practice decisions. (Below we highlight specific kinds of data that would help answer the most pressing questions.)
  • Past research about online learning is limited and mostly focused on post-secondary and adult education. The studies that do exist in K-12 education find that students participating in online learning generally perform similarly to or worse than peers who have access to traditional face-to-face instruction (with programs that are 100% online faring worse than blended learning approaches). It is important to note that this research typically compares online learning with regular classroom instruction—rather than comparing it to no instruction at all—and that these studies took place under dramatically different conditions than those resulting from COVID-19.
  • Studies of blended learning, personalized learning, and specific technology-based tools and programs provide hints about successful approaches, but also underscore substantial “fuzziness” around the definition of these terms; major challenges to high-quality implementation; and a lack of rigorous impact research.
  • Teaching quality is more important than how lessons are delivered (e.g., “clear explanations, scaffolding and feedback”);
  • Ensuring access to technology is key, particularly for disadvantaged students and families;
  • Peer interactions can provide motivation and improve learning outcomes (e.g., “peer marking and feedback, sharing models of good work,” and opportunities for collaboration and live discussions of content);
  • Supporting students to work independently can improve learning outcomes (e.g., “prompting pupils to reflect on their work or to consider the strategies they will use if they get stuck,” checklists or daily plans); and
  • Different approaches to remote learning suit different tasks and types of content.

Our overview highlights these and other lessons from dozens of relevant studies. It also underscores the need for more rigorous evidence about the implementation and impact of different approaches to remote and blended learning, particularly in the context of the current pandemic. To begin to fill these knowledge gaps, the Research Alliance strongly encourages schools and districts—including the NYC Department of Education—to collect, analyze, and share data about:

  • COVID-19 testing results,
  • Professional development aimed at helping teachers implement remote and blended learning,
  • Students’ attendance and engagement (online and in person),
  • Students’ social and emotional wellbeing,
  • Students’ and families’ experiences with remote and blended instruction,
  • Teachers’ experiences with remote and blended instruction, and—critically—
  • What students are learning, over time.

All of this should be done with an eye toward pre-existing inequalities—especially differences related to race/ethnicity, poverty, home language, and disability. These data are crucial for understanding how COVID-19 has affected the educational trajectories of different groups of students and for developing strong policy and practice responses. 

Read our full overview here. This document was initially released in May and updated in November of 2020.


A systematic review of research on online teaching and learning from 2009 to 2018

Abstract

Systematic reviews of online learning research were conducted in the 1990s and early 2000s. However, no review has examined the broader research themes in online learning over the last decade. This systematic review addresses that gap by examining 619 research articles on online learning published in twelve journals in the last decade. These studies were examined for publication trends and patterns, research themes, research methods, and research settings, and compared with the research themes from previous decades. While there was a slight decrease in the number of studies on online learning in 2015 and 2016, the number continued to increase in 2017 and 2018. The majority of the studies were quantitative in nature and were conducted in higher education. Online learning research was categorized into twelve themes, and a framework spanning learner, course and instructor, and organizational levels was developed. Online learner characteristics and online engagement were examined in a high number of studies, consistent with three of the prior systematic reviews. However, there is still a need for more research on organization-level topics, such as leadership, policy, and management and access, culture, equity, inclusion, and ethics, as well as on online instructor characteristics.

  • Twelve online learning research themes were identified in 2009–2018.
  • A framework with learner, course and instructor, and organizational levels was used.
  • Online learner characteristics and engagement were the most examined themes.
  • The majority of the studies used quantitative research methods and were conducted in higher education.
  • There is a need for more research on organization-level topics.

1. Introduction

Online learning has been on the increase for the last two decades. In the United States, though overall higher education enrollment has declined, online learning enrollment in public institutions has continued to increase (Allen & Seaman, 2017), and so has research on online learning. Review studies have been conducted on specific areas of online learning, such as innovations in online learning strategies (Davis et al., 2018), empirical MOOC literature (Liyanagunawardena et al., 2013; Veletsianos & Shepherdson, 2016; Zhu et al., 2018), quality in online education (Esfijani, 2018), accessibility in online higher education (Lee, 2017), synchronous online learning (Martin et al., 2017), K-12 preparation for online teaching (Moore-Adams et al., 2016), polychronicity in online learning (Capdeferro et al., 2014), meaningful learning research in e-learning and online learning environments (Tsai, Shen, & Chiang, 2013), problem-based learning in e-learning and online learning environments (Tsai & Chiang, 2013), asynchronous online discussions (Thomas, 2013), self-regulated learning in online learning environments (Tsai, Shen, & Fan, 2013), game-based learning in online learning environments (Tsai & Fan, 2013), and online course dropout (Lee & Choi, 2011). While review studies have been conducted on specific online learning topics, very few have examined the broader research themes in online learning.

2. Systematic Reviews of Distance Education and Online Learning Research

Distance education has evolved from offline to online settings with access to the internet, and COVID-19 has made online learning a common delivery method across the world. Tallent-Runnels et al. (2006) reviewed research from the late 1990s to the early 2000s, Berge and Mrozowski (2001) reviewed research from 1990 to 1999, and Zawacki-Richter et al. (2009) reviewed research from 2000 to 2008 on distance education and online learning. Table 1 shows the research themes from previous systematic reviews of online learning research. Some themes re-occur across the various reviews, and new themes also emerge. Though reviews were conducted in the 1990s and early 2000s, no review has examined the broader research themes in online learning in the last decade. Hence the need for this systematic review, which identifies the research themes in online learning from 2009 to 2018. In the following sections, we review these systematic review studies in detail.

Table 1. Comparison of online learning research themes from previous studies.

2.1. Distance education research themes, 1990 to 1999 ( Berge & Mrozowski, 2001 )

Berge and Mrozowski (2001) reviewed 890 research articles and dissertation abstracts on distance education from 1990 to 1999. The four journals chosen by the authors to represent distance education were the American Journal of Distance Education, Distance Education, Open Learning, and the Journal of Distance Education. This review overlapped in dates with the Tallent-Runnels et al. (2006) study. Berge and Mrozowski (2001) categorized the articles according to Sherry's (1996) ten themes of research issues in distance education: redefining the roles of instructor and students, technologies used, issues of design, strategies to stimulate learning, learner characteristics, learner support, operational issues, policies and administration, access and equity, and costs and benefits.

In the Berge and Mrozowski (2001) study, more than 100 studies focused on each of three themes: (1) design issues, (2) learner characteristics, and (3) strategies to increase interactivity and active learning. By design issues, the authors meant instructional systems design, covering topics such as content requirements, technical constraints, interactivity, and feedback. The next theme, strategies to increase interactivity and active learning, was closely related to design issues and focused on students' modes of learning. Learner characteristics focused on accommodating various learning styles through customized instructional theory. Fewer than 50 studies focused on each of the three least examined themes: (1) cost-benefit tradeoffs, (2) equity and accessibility, and (3) learner support. Cost-benefit tradeoffs focused on the implementation costs of distance education based on school characteristics. Equity and accessibility focused on the equity of access to distance education systems. Learner support included topics such as teacher-to-teacher support as well as teacher-to-student support.

2.2. Online learning research themes, 1993 to 2004 ( Tallent-Runnels et al., 2006 )

Tallent-Runnels et al. (2006) reviewed research on online instruction from 1993 to 2004. They identified 76 articles focused on online learning by searching five databases: ERIC, PsycINFO, ContentFirst, Education Abstracts, and WilsonSelect. Tallent-Runnels et al. (2006) categorized the research into four themes: (1) course environment, (2) learners' outcomes, (3) learners' characteristics, and (4) institutional and administrative factors. The first theme, course environment (n = 41, 53.9%), is an overarching theme that includes classroom culture, structural assistance, success factors, online interaction, and evaluation.

For their second theme, learners' outcomes (n = 29, 38.2%), Tallent-Runnels et al. (2006) found that studies focused on the process of teaching and learning and on methods to explore cognitive and affective learner outcomes. The authors judged many of the research designs to be flawed and lacking rigor; however, the literature comparing traditional and online classrooms found both delivery systems to be adequate. Another research theme focused on learners' characteristics (n = 12, 15.8%) and the synergy of learners, the design of the online course, and the system of delivery. Research findings revealed that online learners were mainly non-traditional and Caucasian, had different learning styles, and were highly motivated to learn. The final theme was institutional and administrative factors (n = 13, 17.1%) in online learning. Their findings revealed a lack of scholarly research in this area; most institutions did not have formal policies in place for course development or for faculty and student support in training and evaluation. Their research confirmed that when universities offered online courses, student enrollment numbers improved.

2.3. Distance education research themes 2000 to 2008 ( Zawacki-Richter et al., 2009 )

Zawacki-Richter et al. (2009) reviewed 695 articles on distance education from 2000 to 2008, using the Delphi method to reach consensus on research areas and classifying the literature from five prominent journals. The five journals, selected for their wide scope in distance education research, were Open Learning, Distance Education, the American Journal of Distance Education, the Journal of Distance Education, and the International Review of Research in Open and Distributed Learning. The reviewers examined the main focus of research and identified gaps in distance education research.

Zawacki-Richter et al. (2009) classified the studies into macro, meso and micro levels focusing on 15 areas of research. The five areas of the macro-level addressed: (1) access, equity and ethics to deliver distance education for developing nations and the role of various technologies to narrow the digital divide, (2) teaching and learning drivers, markets, and professional development in the global context, (3) distance delivery systems and institutional partnerships and programs and impact of hybrid modes of delivery, (4) theoretical frameworks and models for instruction, knowledge building, and learner interactions in distance education practice, and (5) the types of preferred research methodologies. The meso-level focused on seven areas that involve: (1) management and organization for sustaining distance education programs, (2) examining financial aspects of developing and implementing online programs, (3) the challenges and benefits of new technologies for teaching and learning, (4) incentives to innovate, (5) professional development and support for faculty, (6) learner support services, and (7) issues involving quality standards and the impact on student enrollment and retention. The micro-level focused on three areas: (1) instructional design and pedagogical approaches, (2) culturally appropriate materials, interaction, communication, and collaboration among a community of learners, and (3) focus on characteristics of adult learners, socio-economic backgrounds, learning preferences, and dispositions.

The top three research themes in the Zawacki-Richter et al. (2009) review were interaction and communities of learning (n = 122, 17.6%), instructional design (n = 121, 17.4%), and learner characteristics (n = 113, 16.3%). The fewest studies (less than 3% each) examined the following research themes: management and organization (n = 18), research methods in DE and knowledge transfer (n = 13), globalization of education and cross-cultural aspects (n = 13), innovation and change (n = 13), and costs and benefits (n = 12).

2.4. Online learning research themes

These three systematic reviews provide a broad understanding of distance education and online learning research themes from 1990 to 2008. However, the number of research studies on online learning has increased in this decade, and there is a need to identify the research themes examined recently. Based on the previous systematic reviews (Berge & Mrozowski, 2001; Hung, 2012; Tallent-Runnels et al., 2006; Zawacki-Richter et al., 2009), online learning research in this study is grouped into twelve research themes: Learner Characteristics; Instructor Characteristics; Course or Program Design and Development; Course Facilitation; Engagement; Course Assessment; Course Technologies; Access, Culture, Equity, Inclusion, and Ethics; Leadership, Policy, and Management; Instructor and Learner Support; and Learner Outcomes. Table 2 below describes each of the research themes, and a framework derived from these themes is shown in Fig. 1.

Table 2. Research themes in online learning.

Fig. 1

Online learning research themes framework.

The collection of research themes is presented as a framework in Fig. 1 . The themes are organized by domain or level to underscore the nested relationship that exists. As evidenced by the assortment of themes, research can focus on any domain of delivery or associated context. The “Learner” domain captures characteristics and outcomes related to learners and their interaction within the courses. The “Course and Instructor” domain captures elements about the broader design of the course and facilitation by the instructor, and the “Organizational” domain acknowledges the contextual influences on the course. It is important to note as well that due to the nesting, research themes can cross domains. For example, the broader cultural context may be studied as it pertains to course design and development, and institutional support can include both learner support and instructor support. Likewise, engagement research can involve instructors as well as learners.
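The nested framework described above can be represented as a simple data structure. The grouping below is a sketch based on our reading of this section (themes can cross domains, as noted in the text), not the paper's authoritative assignment:

```python
# One plausible grouping of the research themes into the three nested
# domains described above (illustrative; themes can cross domains).
framework = {
    "Learner": [
        "Learner Characteristics",
        "Learner Outcomes",
        "Engagement",
    ],
    "Course and Instructor": [
        "Course or Program Design and Development",
        "Course Facilitation",
        "Course Assessment",
        "Course Technologies",
        "Instructor Characteristics",
    ],
    "Organizational": [
        "Access, Culture, Equity, Inclusion, and Ethics",
        "Leadership, Policy, and Management",
        "Instructor and Learner Support",
    ],
}

def domain_of(theme):
    """Return the (primary) domain a theme belongs to, or None."""
    for level, themes in framework.items():
        if theme in themes:
            return level
    return None
```

A cross-domain study (e.g., engagement research involving instructors) would simply be coded under more than one theme, which a flat dict like this represents only approximately.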

In this introduction section, we have reviewed three systematic reviews on online learning research ( Berge & Mrozowski, 2001 ; Tallent-Runnels et al., 2006 ; Zawacki-Richter et al., 2009 ). Based on these reviews and other research, we have derived twelve themes to develop an online learning research framework which is nested in three levels: learner, course and instructor, and organization.

2.5. Purpose of this research

In two of the three previous reviews, design, learner characteristics, and interaction were examined in the highest number of studies. On the other hand, cost-benefit tradeoffs, equity and accessibility, institutional and administrative factors, and globalization and cross-cultural aspects were examined in the fewest studies. One explanation may be a function of nesting: studies at the Organizational and Course levels may encompass several courses, or many more participants within courses. However, while some research themes re-occur, others vary across time, suggesting that the importance of research themes rises and falls over time. Thus, a critical examination of trends in themes is helpful for understanding where research is needed most. Also, since no recent study has examined online learning research themes in the last decade, this study strives to address that gap by focusing on recent research themes found in the literature, and by reviewing research methods and settings. Notably, one goal is also to compare findings from this decade with the previous review studies. Overall, the purpose of this study is to examine publication trends in online learning research over the last ten years and compare them with the themes identified in previous review studies. Due to the continued growth of online learning research into new contexts and among new researchers, we also examine the research methods and settings found in the studies of this review.

The following research questions are addressed in this study.

  1. What percentage of the articles published in the journals reviewed from 2009 to 2018 were empirical studies related to online learning?
  2. What is the frequency of online learning research themes in the empirical online learning articles of the journals reviewed from 2009 to 2018?
  3. What is the frequency of research methods and settings that researchers employed in the empirical online learning articles of the journals reviewed from 2009 to 2018?

The five-step systematic review process described in the U.S. Department of Education, Institute of Education Sciences, What Works Clearinghouse Procedures and Standards Handbook, Version 4.0 (2017) was used: (a) developing the review protocol, (b) identifying relevant literature, (c) screening studies, (d) reviewing articles, and (e) reporting findings.

3.1. Data sources and search strategies

The Education Research Complete database was searched for articles published between 2009 and 2018, using both the Title and Keyword functions, with the following search terms:

"online learning" OR "online teaching" OR "online program" OR "online course" OR "online education"
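Applied programmatically, the query amounts to a disjunction of phrase matches over the title and keyword fields. A minimal sketch (the record format here is hypothetical; Education Research Complete's actual interface differs):

```python
# Phrases from the search string above.
SEARCH_TERMS = (
    "online learning", "online teaching", "online program",
    "online course", "online education",
)

def matches_query(record):
    """True if any search phrase appears in the record's title or keywords.

    `record` is a hypothetical dict with "title" (str) and "keywords"
    (list of str) fields, standing in for a database result.
    """
    haystacks = [record.get("title", "")] + record.get("keywords", [])
    text = " | ".join(h.lower() for h in haystacks)
    return any(term in text for term in SEARCH_TERMS)
```

Matching on Title and Keyword only (rather than full text) is what keeps the initial hit count in the low thousands rather than far larger.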

3.2. Inclusion/exclusion criteria

The initial search for online learning research among journals in the database returned more than 3000 possible articles. We therefore limited our search to journals that focus on publishing peer-reviewed online learning and educational research. Our aim was to capture the journals that published the most articles on online learning; to incorporate rigor, we also used expert perception to identify 12 peer-reviewed journals that publish high-quality online learning research. Dissertations and conference proceedings were excluded. To be included in this systematic review, each study had to meet all of the screening criteria described in Table 3.

Table 3. Inclusion/exclusion criteria.

3.3. Process flow selection of articles

Fig. 2 shows the process flow involved in the selection of articles. The search in the database Education Research Complete yielded an initial sample of 3332 articles. Targeting the 12 journals removed 2579 articles. After reviewing the abstracts, we removed 134 articles based on the inclusion/exclusion criteria. The final sample, consisting of 619 articles, was entered into the computer software MAXQDA ( VERBI Software, 2019 ) for coding.
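The screening funnel is simple arithmetic; a sketch confirming that the stage counts reported above are mutually consistent:

```python
# Article counts at each screening stage, as reported in the text.
initial = 3332             # hits from the Education Research Complete search
removed_by_journal = 2579  # not in one of the 12 targeted journals
removed_by_criteria = 134  # failed inclusion/exclusion on abstract review

final_sample = initial - removed_by_journal - removed_by_criteria
print(final_sample)  # articles entered into MAXQDA for coding
```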

Fig. 2

Flowchart of online learning research selection.

3.4. Developing review protocol

A review protocol was designed as a codebook in MAXQDA (VERBI Software, 2019) by the three researchers. The codebook was developed based on findings from the previous review studies and from the initial screening of the articles in this review. It included the 12 research themes listed earlier in Table 2 (Learner Characteristics; Instructor Characteristics; Course or Program Design and Development; Course Facilitation; Engagement; Course Assessment; Course Technologies; Access, Culture, Equity, Inclusion, and Ethics; Leadership, Policy, and Management; Instructor and Learner Support; and Learner Outcomes), four research settings (higher education, continuing education, K-12, and corporate/military), and three research designs (quantitative, qualitative, and mixed methods). Fig. 3 below is a screenshot of the MAXQDA codebook used in the coding process.

Fig. 3

Codebook from MAXQDA.

3.5. Data coding

Research articles were coded by two researchers in MAXQDA. The two researchers independently coded 10% of the articles, then discussed and updated the coding framework. The second author, a doctoral student, coded the remaining studies. The researchers met bi-weekly to address coding questions that emerged. After the first phase of coding, we found that more than 100 studies fell into each of the Learner Characteristics and Engagement categories, so we pursued a second phase of coding to reexamine these two themes. Learner Characteristics was classified into the sub-themes of Academic, Affective, Motivational, Self-regulation, Cognitive, and Demographic characteristics. Engagement was classified into the sub-themes of Collaboration, Communication, Community, Involvement, Interaction, Participation, and Presence.
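The paper reports double-coding 10% of the articles and resolving differences by discussion, but no agreement statistic. A common chance-corrected statistic for this kind of double-coding is Cohen's kappa; a minimal sketch (the coder data below is illustrative, not the study's):

```python
from collections import Counter

def cohens_kappa(coder_a, coder_b):
    """Chance-corrected agreement between two coders' theme labels."""
    assert len(coder_a) == len(coder_b) and coder_a
    n = len(coder_a)
    # Observed agreement: proportion of items given identical codes.
    po = sum(a == b for a, b in zip(coder_a, coder_b)) / n
    # Expected agreement if the two coders labeled independently.
    freq_a, freq_b = Counter(coder_a), Counter(coder_b)
    pe = sum((freq_a[c] / n) * (freq_b[c] / n)
             for c in set(freq_a) | set(freq_b))
    return (po - pe) / (1 - pe)

# Hypothetical double-coded sample of 8 articles, two themes.
a = ["Engagement"] * 4 + ["Learner Characteristics"] * 4
b = ["Engagement"] * 3 + ["Learner Characteristics"] * 4 + ["Engagement"]
print(cohens_kappa(a, b))
```

Simple percent agreement (`po` above) is easier to report but ignores agreement expected by chance, which is why kappa is usually preferred for categorical codebooks like this one.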

3.6. Data analysis

Frequency tables were generated for each of the variables so that outliers could be examined and narrative data could be collapsed into categories. Once cleaned and collapsed into a reasonable number of categories, the coded elements were described using descriptive statistics. We first present the frequencies of publications related to online learning in the 12 journals. The total number of articles for each journal (collectively, the population) was hand-counted from journal websites, excluding editorials and book reviews. The publication trend in online learning research from 2009 to 2018 was also depicted. Then, descriptive information for the 12 themes, including the sub-themes of Learner Characteristics and Engagement, is provided. Finally, research themes are elaborated by research setting and methodology.
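The descriptive statistics here reduce to frequency counts and percentages over the 619 coded articles. A sketch using theme counts reported in the results (partial list, for illustration only):

```python
from collections import Counter

TOTAL = 619  # final sample size after screening

# A few theme counts as reported in the results section.
theme_counts = Counter({
    "Engagement": 179,
    "Learner Characteristics": 134,
    "Instructor Characteristics": 21,
})

def percent(count, total=TOTAL):
    """Share of the sample, rounded to two decimals as in the paper."""
    return round(100 * count / total, 2)

for theme, n in theme_counts.most_common():
    print(f"{theme}: n = {n} ({percent(n)}%)")
```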

4.1. Publication trends on online learning

Publication patterns of the 619 articles reviewed from the 12 journals are presented in Table 4 . International Review of Research in Open and Distributed Learning had the highest number of publications in this review. Overall, about 8% of the articles appearing in these twelve journals consisted of online learning publications; however, several journals had concentrations of online learning articles totaling more than 20%.

Table 4. Empirical online learning research articles by journal, 2009–2018.

Note: Each journal's total article count excludes reviews and editorials.

The publication trend in online learning research is depicted in Fig. 4. When disaggregated by year, the total frequency of publications shows an overall increasing trend. Online learning articles rose to a relative maximum in 2014, dipped in 2015 and 2016, and reached their greatest number (n = 86) most recently, in 2018.

Fig. 4

Online learning publication trends by year.

4.2. Online learning research themes that appeared in the selected articles

The publications were categorized into the twelve research themes identified in Fig. 1. The frequency counts and percentages of the research themes are provided in Table 5 below. A majority of the research falls in the Learner domain; the fewest articles appear in the Organization domain.

Table 5. Research themes in the online learning publications from 2009 to 2018.

The specific themes of Engagement (n = 179, 28.92%) and Learner Characteristics (n = 134, 21.65%) were most often examined in publications. These two themes were further coded to identify sub-themes, which are described in the next two sections. Publications focusing on Instructor Characteristics (n = 21, 3.39%) were least common in the dataset.

4.2.1. Research on engagement

The largest number of studies concerned engagement in online learning, which the literature refers to and examines through different terms; hence, we explore this category in more detail. We categorized the articles into seven sub-themes, reflecting the different lenses used: presence, interaction, community, participation, collaboration, involvement, and communication. We include “involvement” as a sub-theme because researchers sometimes used the term engagement broadly to describe their work without further description. Table 6 below provides the description, frequency, and percentage of studies related to each engagement sub-theme.

Table 6. Research sub-themes on engagement.

In the sections below, we provide several examples of the different engagement sub-themes that were studied within the larger engagement theme.

Presence. This was the most researched sub-theme within engagement. With the development of the Community of Inquiry framework, most of the studies in this sub-theme examined social presence (Akcaoglu & Lee, 2016; Phirangee & Malec, 2017; Wei et al., 2012), teaching presence (Orcutt & Dringus, 2017; Preisman, 2014; Wisneski et al., 2015), and cognitive presence (Archibald, 2010; Olesova et al., 2016).

Interaction. This was the second most studied sub-theme within engagement. Researchers examined increasing interpersonal interactions (Cung et al., 2018), learner-learner interactions (Phirangee, 2016; Shackelford & Maxwell, 2012; Tawfik et al., 2018), peer-peer interaction (Comer et al., 2014), learner-instructor interaction (Kuo et al., 2014), learner-content interaction (Zimmerman, 2012), interaction through peer mentoring (Ruane & Koku, 2014), interaction and community building (Thormann & Fidalgo, 2014), and interaction in discussions (Ruane & Lee, 2016; Tibi, 2018).

Community. Researchers examined building community in online courses (Berry, 2017), supporting a sense of community (Jiang, 2017), building an online learning community of practice (Cho, 2016), building an academic community (Glazer & Wanstreet, 2011; Nye, 2015; Overbaugh & Nickel, 2011), and examining connectedness and rapport in an online community (Bolliger & Inan, 2012; Murphy & Rodríguez-Manzanares, 2012; Slagter van Tryon & Bishop, 2012).

Participation. Researchers examined engagement through participation in a number of studies. Topics include participation patterns in online discussion (Marbouti & Wise, 2016; Wise et al., 2012), participation in MOOCs (Ahn et al., 2013; Saadatmand & Kumpulainen, 2014), features that influence students' online participation (Rye & Støkken, 2012), and active participation.

Collaboration. Researchers examined engagement through collaborative learning. Specific studies focused on cross-cultural collaboration (Kumi-Yeboah, 2018; Yang et al., 2014), how virtual teams collaborate (Verstegen et al., 2018), types of collaboration teams (Wicks et al., 2015), tools for collaboration (Boling et al., 2014), and support for collaboration (Kopp et al., 2012).

Involvement. Researchers examined engaging learners through involvement in various learning activities (Cundell & Sheepy, 2018), measures of student engagement (Dixson, 2015), how instructors involved students in learning (O'Shea et al., 2015), different strategies to engage the learner (Amador & Mederer, 2013), and the design of emotionally engaging online environments (Koseoglu & Doering, 2011).

Communication. Researchers examined communication in online learning using social network analysis (Ergün & Usluel, 2016), informal communication tools such as Facebook for class discussion (Kent, 2013), and various other modes of communication (Cunningham et al., 2010; Rowe, 2016). Studies have also focused on both asynchronous and synchronous aspects of communication (Swaggerty & Broemmel, 2017; Yamagata-Lynch, 2014).

4.2.2. Research on learner characteristics

The second largest theme was learner characteristics, which we also explored further. We categorized learner characteristics into self-regulation, motivational, academic, affective, cognitive, and demographic characteristics. Table 7 provides the number and percentage of studies examining each.

Table 7. Research sub-themes on learner characteristics.

Online learning differs in important ways from the traditional face-to-face classroom, and the characteristics of online learners differ accordingly. Yukselturk and Top (2013) categorized the online learner profile into ten aspects: gender, age, work status, self-efficacy, online readiness, self-regulation, participation in discussion lists, participation in chat sessions, satisfaction, and achievement. Their categorization shows that online learners differ from learners in other settings along these dimensions. Some of these aspects, such as participation and achievement, are discussed under different research themes in this study. The sections below provide examples of the learner characteristics sub-themes that were studied.

Self-regulation. Several researchers have examined self-regulation in online learning. They found that successful online learners are academically motivated (Artino & Stephens, 2009), have academic self-efficacy (Cho & Shen, 2013), have grit and intention to succeed (Wang & Baker, 2018), have time management and elaboration strategies (Broadbent, 2017), set goals and revisit course content (Kizilcec et al., 2017), and persist (Glazer & Murphy, 2015). Researchers found a positive relationship between learners' self-regulation and interaction (Delen et al., 2014) and between self-regulation and communication and collaboration (Barnard et al., 2009).

Motivation. Researchers focused on the motivation of online learners, including different motivation levels of online learners (Li & Tsai, 2017), what motivated online learners (Chaiprasurt & Esichaikul, 2013), differences in motivation among online learners (Hartnett et al., 2011), and motivation compared to face-to-face learners (Paechter & Maier, 2010). Hartnett et al. (2011) found that online learner motivation was complex, multifaceted, and sensitive to situational conditions.

Academic. Several researchers have focused on academic aspects of online learner characteristics. Readiness for online learning has been examined as an academic factor by several researchers (Buzdar et al., 2016; Dray et al., 2011; Wladis & Samuels, 2016; Yu, 2018), with a specific focus on creating and validating measures of online learner readiness, including students' emotional intelligence as a measure of readiness for online learning. Researchers have also examined other academic factors such as academic standing (Bradford & Wyatt, 2010), course-level factors (Wladis et al., 2014), and academic skills in online courses (Shea & Bidjerano, 2014).

Affective. Anderson and Bourke (2013) describe affective characteristics as those through which learners express feelings or emotions. Several research studies focused on the affective characteristics of online learners. Learner satisfaction with online learning has been examined by several researchers (Cole et al., 2014; Dziuban et al., 2015; Kuo et al., 2013; Lee, 2014a), along with student emotions towards online assessment (Kim et al., 2014).

Cognitive. Researchers have also examined cognitive aspects of learner characteristics, including meta-cognitive skills, cognitive variables, higher-order thinking, cognitive density, and critical thinking (Chen & Wu, 2012; Lee, 2014b). Lee (2014b) examined the relationship between cognitive presence density and higher-order thinking skills. Chen and Wu (2012) examined the relationship between cognitive and motivational variables in an online system for secondary physical education.

Demographic. Researchers have examined various demographic factors in online learning, including gender differences (Bayeck et al., 2018; Lowes et al., 2016; Yukselturk & Bulut, 2009), ethnicity and age (Ke & Kwak, 2013), and minority status (Yeboah & Smith, 2016) of online learners.

4.2.3. Less frequently studied research themes

While engagement and learner characteristics were studied the most, other themes appeared less often in the literature. They are presented here in descending order of frequency, with general descriptions of the types of research examined for each.

Evaluation and Quality Assurance. There were 38 studies (6.14%) published in the theme of evaluation and quality assurance. Some of the studies in this theme focused on course quality standards, using quality matters to evaluate quality, using the CIPP model for evaluation, online learning system evaluation, and course and program evaluations.

Course Technologies. There were 35 studies (5.65%) published in the course technologies theme. Some of the studies examined specific technologies such as Edmodo, YouTube, Web 2.0 tools, wikis, Twitter, WebCT, Screencasts, and Web conferencing systems in the online learning context.

Course Facilitation. There were 34 studies (5.49%) published in the course facilitation theme. Some of the studies in this theme examined facilitation strategies and methods, experiences of online facilitators, and online teaching methods.

Institutional Support. There were 33 studies (5.33%) published in the institutional support theme which included support for both the instructor and learner. Some of the studies on instructor support focused on training new online instructors, mentoring programs for faculty, professional development resources for faculty, online adjunct faculty training, and institutional support for online instructors. Studies on learner support focused on learning resources for online students, cognitive and social support for online learners, and help systems for online learner support.

Learner Outcome. There were 32 studies (5.17%) published in the learner outcome theme. Some of the studies that were examined in this theme focused on online learner enrollment, completion, learner dropout, retention, and learner success.

Course Assessment. There were 30 studies (4.85%) published in the course assessment theme. Some of the studies in the course assessment theme examined online exams, peer assessment and peer feedback, proctoring in online exams, and alternative assessments such as eportfolio.

Access, Culture, Equity, Inclusion, and Ethics. There were 29 studies (4.68%) published in the access, culture, equity, inclusion, and ethics theme. Some of the studies in this theme examined online learning across cultures, multi-cultural effectiveness, multi-access, and cultural diversity in online learning.

Leadership, Policy, and Management. There were 27 studies (4.36%) published in the leadership, policy, and management theme. Some of the studies on leadership, policy, and management focused on online learning leaders, stakeholders, strategies for online learning leadership, resource requirements, university policies for online course policies, governance, course ownership, and faculty incentives for online teaching.

Course Design and Development. There were 27 studies (4.36%) published in the course design and development theme. Some of the studies examined in this theme focused on design elements, design issues, design process, design competencies, design considerations, and instructional design in online courses.

Instructor Characteristics. There were 21 studies (3.39%) published in the instructor characteristics theme. Some of the studies in this theme were on motivation and experiences of online instructors, ability to perform online teaching duties, roles of online instructors, and adjunct versus full-time online instructors.
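The theme percentages above are simple shares of the 619 reviewed articles. A quick sketch (counts taken from the text) reproduces them:

```python
# Study counts per theme, as reported in the review (619 articles total).
theme_counts = {
    "Evaluation and Quality Assurance": 38,
    "Course Technologies": 35,
    "Course Facilitation": 34,
    "Institutional Support": 33,
    "Learner Outcome": 32,
    "Course Assessment": 30,
    "Access, Culture, Equity, Inclusion, and Ethics": 29,
    "Leadership, Policy, and Management": 27,
    "Course Design and Development": 27,
    "Instructor Characteristics": 21,
}
TOTAL_ARTICLES = 619

for theme, n in theme_counts.items():
    # e.g. 38 / 619 -> 6.14%
    print(f"{theme}: {n}/{TOTAL_ARTICLES} = {100 * n / TOTAL_ARTICLES:.2f}%")
```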

4.3. Research settings and methodology used in the studies

The research methods used in the studies were classified into quantitative, qualitative, and mixed methods (Harwell, 2012, pp. 147–163). The research setting was categorized into higher education, continuing education, K-12, and corporate/military. As shown in Table A in the appendix, the vast majority of the publications used higher education as the research setting (n = 509, 67.6%). Table B in the appendix shows that approximately half of the studies adopted the quantitative method (n = 324, 43.03%), followed by the qualitative method (n = 200, 26.56%). Mixed methods account for the smallest portion (n = 95, 12.62%).

Table A shows that the patterns of the four research settings were approximately consistent across the 12 themes, except for the themes of Learner Outcome and Institutional Support. Continuing education had a higher relative frequency in Learner Outcome (0.28) and K-12 had a higher relative frequency in Institutional Support (0.33) compared to their frequencies across all themes (0.09 and 0.08, respectively). Table B in the appendix shows that the distribution of the three methods was not consistent across the 12 themes. While quantitative and qualitative studies were roughly evenly distributed in Engagement, there was a large discrepancy in Learner Characteristics: 100 quantitative studies but only 18 qualitative studies were published in that theme.

In summary, around 8% of the articles published in the 12 journals focus on online learning. Online learning publications generally increased over the past decade, albeit with fluctuations, with the greatest number occurring in 2018. Among the 12 research themes related to online learning, the themes of Engagement and Learner Characteristics were studied the most and the theme of Instructor Characteristics was studied the least. Most studies were conducted in the higher education setting and approximately half of the studies used the quantitative method. Looking at the 12 themes by setting and method, we found that the patterns were not consistent across the 12 themes.

The quality of our findings was supported by systematic and thorough searches and by coding consistency. The selection of the 12 journals provides evidence of the representativeness and quality of the primary studies. In the coding process, any difficulties and questions were resolved through consultations with the research team at bi-weekly meetings, which supported the intra-rater and interrater reliability of the coding. Together, these approaches strengthen the transparency and replicability of the process and the quality of our results.
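The coding reliability described here rests on consensus discussions rather than a reported statistic. For readers who want a quantitative check, interrater agreement between two coders assigning theme labels is often summarized with Cohen's kappa; the sketch below uses hypothetical theme codes for illustration, not data from this study:

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa for two raters assigning one label per item."""
    n = len(rater_a)
    # Observed proportion of items on which the raters agree.
    p_o = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Chance agreement expected from each rater's marginal label frequencies.
    freq_a, freq_b = Counter(rater_a), Counter(rater_b)
    p_e = sum(freq_a[label] * freq_b.get(label, 0) for label in freq_a) / n ** 2
    return (p_o - p_e) / (1 - p_e)

# Hypothetical codes for four abstracts (illustration only).
coder_1 = ["Engagement", "Engagement", "Learner Characteristics", "Course Design"]
coder_2 = ["Engagement", "Learner Characteristics", "Learner Characteristics", "Course Design"]
print(round(cohens_kappa(coder_1, coder_2), 2))  # 0.64
```

Values near 1 indicate agreement well beyond chance; consensus meetings like those described above are one way to drive such a statistic upward over successive coding rounds.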

5. Discussion

This review enabled us to identify the online learning research themes examined from 2009 to 2018. In the sections below, we review the most studied research themes, engagement and learner characteristics, along with implications, limitations, and directions for future research.

5.1. Most studied research themes

Three out of the four systematic reviews informing the design of the present study found that online learner characteristics and online engagement were examined in a high number of studies. In this review, about half of the studies reviewed (50.57%) focused on online learner characteristics or online engagement, which shows the continued importance of these two themes. In Tallent-Runnels et al.'s (2006) study, by contrast, learner characteristics was identified as the least studied theme; the authors noted that, in those early days of online learning, researchers were only beginning to investigate learner characteristics.

One difference found in this review is that course design and development was examined in fewer studies than in two prior systematic reviews (Berge & Mrozowski, 2001; Zawacki-Richter et al., 2009). Zawacki-Richter et al. did not use a keyword search but reviewed all the articles in five different distance education journals. Berge and Mrozowski (2001) included a research theme called design issues to cover all aspects of instructional systems design in distance education journals. In our study, in addition to course design and development, we also had themes focused on learner outcomes, course facilitation, course assessment, and course evaluation. Because these are all instructional-design-related topics spread across multiple themes, the course design and development category may have captured fewer studies. There is still a need for more studies focused on online course design and development.

5.2. Least frequently studied research themes

Three out of the four systematic reviews discussed in the opening of this study found management and organization factors to be least studied. In this review, at the organizational level, Leadership, Policy, and Management was examined in 4.36% of the studies and Access, Culture, Equity, Inclusion, and Ethics in 4.68%. Equity and accessibility was also the least studied theme in the Berge and Mrozowski (2001) study. In addition, instructor characteristics was the least examined research theme among the twelve themes in this review, appearing in only 3.39% of the studies. While some studies examined instructor motivation and experiences, instructors' ability to teach online, online instructor roles, and adjunct versus full-time online instructors, there is still a need to examine topics focused on instructors and online teaching. This theme was not included in the prior reviews, whose focus was more on the learner and the course than on the instructor. While it is encouraging to see research evolving on instructor-focused topics, more research on the online instructor is still needed.

5.3. Comparing research themes from current study to previous studies

The research themes from this review were compared with research themes from previous systematic reviews, which targeted prior decades. Table 8 shows the comparison.

Comparison of most and least studied online learning research themes from current to previous reviews.

L = Learner, C = Course, O = Organization.

5.4. Need for more studies on organizational level themes of online learning

In this review there is a greater concentration of studies on Learner domain topics, and reduced attention to the broader, more encompassing research themes that fall into the Course and Organization domains. Organizational-level topics such as Access, Culture, Equity, Inclusion, and Ethics, and Leadership, Policy, and Management need to be researched within the context of online learning. Examination of access, culture, equity, inclusion, and ethics is especially important for supporting diverse online learners, particularly with the rapid expansion of online learning across all educational levels. This theme was also among the least studied in the Berge and Mrozowski (2001) systematic review.

Topics on leadership, policy, and management were least studied both in this review and in the Tallent-Runnels et al. (2006) and Zawacki-Richter et al. (2009) studies. Tallent-Runnels et al. categorized institutional and administrative aspects into institutional policies, institutional support, and enrollment effects. While we included support as a separate category, in this study leadership, policy, and management were combined. There is still a need for research on the leadership of those who manage online learning, policies for online education, and the management of online programs. In the Zawacki-Richter et al. (2009) study, only a few studies examined management- and organization-focused topics; they also found management and organization to be strongly correlated with costs and benefits. In our study, costs and benefits were included as an aspect of management and organization rather than as a theme by themselves. Such studies would provide research-based evidence for online education administrators.

6. Limitations

As with any systematic review, there are limitations to the scope of the review. The search was limited to twelve journals in the field that typically publish research on online learning. These manuscripts were identified by searching the Education Research Complete database, which focuses on education students, professionals, and policymakers. Other discipline-specific journals, as well as dissertations and proceedings, were not included due to the volume of articles. Also, the search was performed using five search terms ("online learning" OR "online teaching" OR "online program" OR "online course" OR "online education") in the title and keywords. If authors did not include these terms, their work may have been excluded from this review even if it focused on online learning. While these terms are commonly used in North America, they may not be commonly used in other parts of the world. Additional studies may therefore exist outside this scope.
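The five search terms combine into a single Boolean query string over title and keyword fields. This sketch shows the general form; the exact field syntax of the Education Research Complete database may differ:

```python
# The five title/keyword search terms reported above.
terms = [
    "online learning",
    "online teaching",
    "online program",
    "online course",
    "online education",
]

# OR the quoted phrases together into one Boolean query string.
query = " OR ".join(f'"{term}"' for term in terms)
print(query)
```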

The search strategy also affected how we presented results and introduced limitations regarding generalization. We identified that only 8% of the articles published in these journals were related to online learning; however, given the use of search terms to identify articles within select journals, it was not feasible to identify the total number of research-based articles in the population. Furthermore, our review focused on the topics and general methods of the research and did not systematically assess the quality of the published studies. Lastly, some journals may prefer to publish studies on a particular topic or using a particular method (e.g., quantitative methods), which introduces possible selection and publication biases that may skew the interpretation of results through over- or under-representation. Future studies are recommended to include more journals to minimize selection bias and obtain a more representative sample.

Certain limitations can be attributed to the coding process. Overall, the coding process for this review worked well for most articles, as each tended to have an individual or dominant focus as described in the abstract, though several did mention other categories that were likely considered to a lesser degree. In some cases, however, a dominant theme was not apparent, and in the effort to create mutually exclusive groups for clearer interpretation, the coders were occasionally forced to choose between two categories. To facilitate this coding, the full texts were used to identify a study's focus through a consensus-seeking discussion among all authors. Likewise, some studies focused on topics that we associated with a particular domain, but the design of the study may have promoted an aggregated examination or integrated factors from multiple domains (e.g., engagement). Due to our reliance on author descriptions, construct validity is a concern that requires additional exploration: our final grouping of codes may not have aligned with the original authors' descriptions in the abstracts. Additionally, the coding of broader constructs that disproportionately occur in the Learner domain, such as learner outcomes, learner characteristics, and engagement, likely introduced bias towards these codes when considering studies that involved multiple domains. Additional refinement to explore the intersection of domains within studies is needed.

7. Implications and future research

One of the strengths of this review is the set of research categories we have identified. We hope these categories will support future researchers in identifying areas and levels of need for future research. Overall, there is some agreement on online learning research themes between previous reviews and this one, although there are also some contradicting findings. We hope the most- and least-researched themes give authors a sense of where research is most needed.

The leading theme found in this review is online engagement research. However, the presentation of this research was inconsistent and often lacked specificity. This is not unique to online environments, but the nuances of defining engagement in an online environment are unique and therefore need further investigation and clarification. This review points to seven distinct classifications of online engagement. Further research on engagement should indicate which type of engagement is sought. This level of specificity is necessary to establish instruments for measuring engagement and, ultimately, to test frameworks for classifying engagement and promoting it in online environments. It may also be important to examine the relationships among these seven sub-themes of engagement.

Additionally, this review highlights growing attention to learner characteristics, which constitutes a shift in focus away from instructional characteristics and course design. Although this is consistent with the focus on engagement, the role of the instructor and of course design with respect to these outcomes remains important. Results of learner characteristics and engagement research, paired with course design, will have important ramifications for the work of teaching and learning professionals who support instruction. The review also points to a concentration of research in higher education. With an immediate and growing emphasis on online learning in K-12 and corporate settings, there is a critical need for further investigation in these settings.

Lastly, because the present review did not focus on the overall effect of interventions, opportunities exist for dedicated meta-analyses. Particular attention to research on engagement and learner characteristics as well as how these vary by study design and outcomes would be logical additions to the research literature.

8. Conclusion

This systematic review builds upon three previous reviews, which tackled the topic of online learning between 1990 and 2010, by extending the timeframe to cover the most recent set of published research. Covering the most recent decade, our review of 619 articles from 12 leading online learning journals points to a concentrated focus on the learner domain, including engagement and learner characteristics, with more limited attention to topics at the course or organizational level. The review highlights an opportunity for the field to clarify terminology in online learning research, particularly in areas such as learner outcomes where there is a tendency to classify research more generally (e.g., as engagement). Using this sample of published literature, we provide a possible taxonomy for categorizing this research using subcategories. The field could benefit from a broader conversation about how these categories can shape a comprehensive framework for online learning research. Such efforts will enable the field to effectively prioritize research aims over time and synthesize effects.

Credit author statement

Florence Martin: Conceptualization, Writing - original draft, Writing - review & editing, Supervision, Project administration. Ting Sun: Methodology, Formal analysis, Writing - original draft, Writing - review & editing. Carl Westine: Methodology, Formal analysis, Writing - original draft, Writing - review & editing, Supervision.

This research did not receive any specific grant from funding agencies in the public, commercial, or not-for-profit sectors.

1 An asterisk (*) marks articles that are cited in this manuscript and also included in the systematic review. The entire list of 619 articles used in the systematic review can be obtained by emailing the authors.

Appendix B Supplementary data to this article can be found online at .

Appendix A. 

Research Themes by the Settings in the Online Learning Publications

Research Themes by the Methodology in the Online Learning Publications

Appendix B. Supplementary data

The following are the Supplementary data to this article:

References 1

  • Ahn J., Butler B.S., Alam A., Webster S.A. Learner participation and engagement in open online courses: Insights from the Peer 2 Peer University. MERLOT Journal of Online Learning and Teaching. 2013;9(2):160–171. *
  • Akcaoglu M., Lee E. Increasing social presence in online learning through small group discussions. International Review of Research in Open and Distance Learning. 2016;17(3). *
  • Allen I.E., Seaman J. Digital compass learning: Distance education enrollment report 2017. Babson Survey Research Group; 2017.
  • Amador J.A., Mederer H. Migrating successful student engagement strategies online: Opportunities and challenges using jigsaw groups and problem-based learning. Journal of Online Learning and Teaching. 2013;9(1):89. *
  • Anderson L.W., Bourke S.F. Assessing affective characteristics in the schools. Routledge; 2013.
  • Archibald D. Fostering the development of cognitive presence: Initial findings using the community of inquiry survey instrument. The Internet and Higher Education. 2010;13(1–2):73–74. *
  • Artino A.R., Jr., Stephens J.M. Academic motivation and self-regulation: A comparative analysis of undergraduate and graduate students learning online. The Internet and Higher Education. 2009;12(3–4):146–151.
  • Barnard L., Lan W.Y., To Y.M., Paton V.O., Lai S.L. Measuring self-regulation in online and blended learning environments. Internet and Higher Education. 2009;12(1):1–6. *
  • Bayeck R.Y., Hristova A., Jablokow K.W., Bonafini F. Exploring the relevance of single-gender group formation: What we learn from a massive open online course (MOOC). British Journal of Educational Technology. 2018;49(1):88–100. *
  • Berge Z., Mrozowski S. Review of research in distance education, 1990 to 1999. American Journal of Distance Education. 2001;15(3):5–19. doi:10.1080/08923640109527090.
  • Berry S. Building community in online doctoral classrooms: Instructor practices that support community. Online Learning. 2017;21(2). *
  • Boling E.C., Holan E., Horbatt B., Hough M., Jean-Louis J., Khurana C., Spiezio C. Using online tools for communication and collaboration: Understanding educators' experiences in an online course. The Internet and Higher Education. 2014;23:48–55. *
  • Bolliger D.U., Inan F.A. Development and validation of the online student connectedness survey (OSCS). International Review of Research in Open and Distance Learning. 2012;13(3):41–65. *
  • Bradford G., Wyatt S. Online learning and student satisfaction: Academic standing, ethnicity and their influence on facilitated learning, engagement, and information fluency. The Internet and Higher Education. 2010;13(3):108–114. *
  • Broadbent J. Comparing online and blended learner's self-regulated learning strategies and academic performance. The Internet and Higher Education. 2017;33:24–32.
  • Buzdar M., Ali A., Tariq R. Emotional intelligence as a determinant of readiness for online learning. International Review of Research in Open and Distance Learning. 2016;17(1). *
  • Capdeferro N., Romero M., Barberà E. Polychronicity: Review of the literature and a new configuration for the study of this hidden dimension of online learning. Distance Education. 2014;35(3):294–310.
  • Chaiprasurt C., Esichaikul V. Enhancing motivation in online courses with mobile communication tool support: A comparative study. International Review of Research in Open and Distance Learning. 2013;14(3):377–401.
  • Chen C.H., Wu I.C. The interplay between cognitive and motivational variables in a supportive online learning system for secondary physical education. Computers & Education. 2012;58(1):542–550. *
  • Cho H. Under co-construction: An online community of practice for bilingual pre-service teachers. Computers & Education. 2016;92:76–89. *
  • Cho M.H., Shen D. Self-regulation in online learning. Distance Education. 2013;34(3):290–301.
  • Cole M.T., Shelley D.J., Swartz L.B. Online instruction, e-learning, and student satisfaction: A three-year study. International Review of Research in Open and Distance Learning. 2014;15(6). *
  • Comer D.K., Clark C.R., Canelas D.A. Writing to learn and learning to write across the disciplines: Peer-to-peer writing in introductory-level MOOCs. International Review of Research in Open and Distance Learning. 2014;15(5):26–82. *
  • Cundell A., Sheepy E. Student perceptions of the most effective and engaging online learning activities in a blended graduate seminar. Online Learning. 2018;22(3):87–102. *
  • Cung B., Xu D., Eichhorn S. Increasing interpersonal interactions in an online course: Does increased instructor email activity and voluntary meeting time in a physical classroom facilitate student learning? Online Learning. 2018;22(3):193–215.
  • Cunningham U.M., Fägersten K.B., Holmsten E. "Can you hear me, Hanoi?" Compensatory mechanisms employed in synchronous net-based English language learning. International Review of Research in Open and Distance Learning. 2010;11(1):161–177.
  • Davis D., Chen G., Hauff C., Houben G.J. Activating learning at scale: A review of innovations in online learning strategies. Computers & Education. 2018;125:327–344.
  • Delen E., Liew J., Willson V. Effects of interactivity and instructional scaffolding on learning: Self-regulation in online video-based environments. Computers & Education. 2014;78:312–320.
  • Dixson M.D. Measuring student engagement in the online course: The Online Student Engagement scale (OSE). Online Learning. 2015;19(4). *
  • Dray B.J., Lowenthal P.R., Miszkiewicz M.J., Ruiz-Primo M.A., Marczynski K. Developing an instrument to assess student readiness for online learning: A validation study. Distance Education. 2011;32(1):29–47. *
  • Dziuban C., Moskal P., Thompson J., Kramer L., DeCantis G., Hermsdorfer A. Student satisfaction with online learning: Is it a psychological contract? Online Learning. 2015;19(2). *
  • Ergün E., Usluel Y.K. An analysis of density and degree-centrality according to the social networking structure formed in an online learning environment. Journal of Educational Technology & Society. 2016;19(4):34–46. *
  • Esfijani A. Measuring quality in online education: A meta-synthesis. American Journal of Distance Education. 2018;32(1):57–73.
  • Glazer H.R., Murphy J.A. Optimizing success: A model for persistence in online education. American Journal of Distance Education. 2015;29(2):135–144.
  • Glazer H.R., Wanstreet C.E. Connection to the academic community: Perceptions of students in online education. Quarterly Review of Distance Education. 2011;12(1):55. *
  • Hartnett M., George A.S., Dron J. Examining motivation in online distance learning environments: Complex, multifaceted and situation-dependent. International Review of Research in Open and Distance Learning. 2011;12(6):20–38.
  • Harwell M.R. Research design in qualitative/quantitative/mixed methods. Section III: Opportunities and challenges in designing and conducting inquiry; 2012:147–163.
  • Hung J.L. Trends of e-learning research from 2000 to 2008: Use of text mining and bibliometrics. British Journal of Educational Technology. 2012;43(1):5–16.
  • Jiang W. Interdependence of roles, role rotation, and sense of community in an online course. Distance Education. 2017;38(1):84–105.
  • Ke F., Kwak D. Online learning across ethnicity and age: A study on learning interaction participation, perception, and learning satisfaction. Computers & Education. 2013;61:43–51.
  • Kent M. Changing the conversation: Facebook as a venue for online class discussion in higher education. MERLOT Journal of Online Learning and Teaching. 2013;9(4):546–565. *
  • Kim C., Park S.W., Cozart J. Affective and motivational factors of learning in online mathematics courses. British Journal of Educational Technology. 2014;45(1):171–185.
  • Kizilcec R.F., Pérez-Sanagustín M., Maldonado J.J. Self-regulated learning strategies predict learner behavior and goal attainment in Massive Open Online Courses. Computers & Education. 2017;104:18–33.
  • Kopp B., Matteucci M.C., Tomasetto C. E-tutorial support for collaborative online learning: An explorative study on experienced and inexperienced e-tutors. Computers & Education. 2012;58(1):12–20.
  • Koseoglu S., Doering A. Understanding complex ecologies: An investigation of student experiences in adventure learning programs. Distance Education. 2011;32(3):339–355. *
  • Kumi-Yeboah A. Designing a cross-cultural collaborative online learning framework for online instructors. Online Learning. 2018;22(4):181–201. *
  • Kuo Y.C., Walker A.E., Belland B.R., Schroder K.E. A predictive study of student satisfaction in online education programs. International Review of Research in Open and Distance Learning. 2013;14(1):16–39. *
  • Kuo Y.C., Walker A.E., Schroder K.E., Belland B.R. Interaction, Internet self-efficacy, and self-regulated learning as predictors of student satisfaction in online education courses. Internet and Higher Education. 2014;20:35–50. *
  • Lee J. An exploratory study of effective online learning: Assessing satisfaction levels of graduate students of mathematics education associated with human and design factors of an online course. International Review of Research in Open and Distance Learning. 2014;15(1).
  • Lee S.M. The relationships between higher order thinking skills, cognitive density, and social presence in online learning. The Internet and Higher Education. 2014;21:41–52. *
  • Lee K. Rethinking the accessibility of online higher education: A historical review. The Internet and Higher Education. 2017; 33 :15–23. [ Google Scholar ]
  • Lee Y., Choi J. A review of online course dropout research: Implications for practice and future research. Educational Technology Research & Development. 2011; 59 (5):593–618. [ Google Scholar ]
  • Li L.Y., Tsai C.C. Accessing online learning material: Quantitative behavior patterns and their effects on motivation and learning performance. Computers & Education. 2017; 114 :286–297. [ Google Scholar ]
  • Liyanagunawardena T., Adams A., Williams S. MOOCs: A systematic study of the published literature 2008-2012. International Review of Research in Open and Distance Learning. 2013; 14 (3):202–227. [ Google Scholar ]
  • Lowes S., Lin P., Kinghorn B.R. Gender differences in online high school courses. Online Learning. 2016; 20 (4):100–117. [ Google Scholar ]
  • Marbouti F., Wise A.F. Starburst: A new graphical interface to support purposeful attention to others' posts in online discussions. Educational Technology Research & Development. 2016; 64 (1):87–113. * [ Google Scholar ]
  • Martin F., Ahlgrim-Delzell L., Budhrani K. Systematic review of two decades (1995 to 2014) of research on synchronous online learning. American Journal of Distance Education. 2017; 31 (1):3–19. [ Google Scholar ]
  • Moore-Adams B.L., Jones W.M., Cohen J. Learning to teach online: A systematic review of the literature on K-12 teacher preparation for teaching online. Distance Education. 2016; 37 (3):333–348. [ Google Scholar ]
  • Murphy E., Rodríguez-Manzanares M.A. Rapport in distance education. International Review of Research in Open and Distance Learning. 2012; 13 (1):167–190. * [ Google Scholar ]
  • Nye A. Building an online academic learning community among undergraduate students. Distance Education. 2015; 36 (1):115–128. * [ Google Scholar ]
  • Olesova L., Slavin M., Lim J. Exploring the effect of scripted roles on cognitive presence in asynchronous online discussions. Online Learning. 2016; 20 (4):34–53. * [ Google Scholar ]
  • Orcutt J.M., Dringus L.P. Beyond being there: Practices that establish presence, engage students and influence intellectual curiosity in a structured online learning environment. Online Learning. 2017; 21 (3):15–35. * [ Google Scholar ]
  • Overbaugh R.C., Nickel C.E. A comparison of student satisfaction and value of academic community between blended and online sections of a university-level educational foundations course. The Internet and Higher Education. 2011; 14 (3):164–174. * [ Google Scholar ]
  • O'Shea S., Stone C., Delahunty J. “I ‘feel’like I am at university even though I am online.” Exploring how students narrate their engagement with higher education institutions in an online learning environment. Distance Education. 2015; 36 (1):41–58. * [ Google Scholar ]
  • Paechter M., Maier B. Online or face-to-face? Students' experiences and preferences in e-learning. Internet and Higher Education. 2010; 13 (4):292–297. [ Google Scholar ]
  • Phirangee K. Students' perceptions of learner-learner interactions that weaken a sense of community in an online learning environment. Online Learning. 2016; 20 (4):13–33. * [ Google Scholar ]
  • Phirangee K., Malec A. Othering in online learning: An examination of social presence, identity, and sense of community. Distance Education. 2017; 38 (2):160–172. * [ Google Scholar ]
  • Preisman K.A. Teaching presence in online education: From the instructor's point of view. Online Learning. 2014; 18 (3):n3. * [ Google Scholar ]
  • Rowe M. Developing graduate attributes in an open online course. British Journal of Educational Technology. 2016; 47 (5):873–882. * [ Google Scholar ]
  • Ruane R., Koku E.F. Social network analysis of undergraduate education student interaction in online peer mentoring settings. Journal of Online Learning and Teaching. 2014; 10 (4):577–589. * [ Google Scholar ]
  • Ruane R., Lee V.J. Analysis of discussion board interaction in an online peer mentoring site. Online Learning. 2016; 20 (4):79–99. * [ Google Scholar ]
  • Rye S.A., Støkken A.M. The implications of the local context in global virtual education. International Review of Research in Open and Distance Learning. 2012; 13 (1):191–206. * [ Google Scholar ]
  • Saadatmand M., Kumpulainen K. Participants' perceptions of learning and networking in connectivist MOOCs. Journal of Online Learning and Teaching. 2014; 10 (1):16. * [ Google Scholar ]
  • Shackelford J.L., Maxwell M. Sense of community in graduate online education: Contribution of learner to learner interaction. International Review of Research in Open and Distance Learning. 2012; 13 (4):228–249. * [ Google Scholar ]
  • Shea P., Bidjerano T. Does online learning impede degree completion? A national study of community college students. Computers & Education. 2014; 75 :103–111. * [ Google Scholar ]
  • Sherry L. Issues in distance learning. International Journal of Educational Telecommunications. 1996; 1 (4):337–365. [ Google Scholar ]
  • Slagter van Tryon P.J., Bishop M.J. Evaluating social connectedness online: The design and development of the social perceptions in learning contexts instrument. Distance Education. 2012; 33 (3):347–364. * [ Google Scholar ]
  • Swaggerty E.A., Broemmel A.D. Authenticity, relevance, and connectedness: Graduate students' learning preferences and experiences in an online reading education course. The Internet and Higher Education. 2017; 32 :80–86. * [ Google Scholar ]
  • Tallent-Runnels M.K., Thomas J.A., Lan W.Y., Cooper S., Ahern T.C., Shaw S.M., Liu X. Teaching courses online: A review of the research. Review of Educational Research. 2006; 76 (1):93–135. doi: 10.3102/00346543076001093. [ CrossRef ] [ Google Scholar ]
  • Tawfik A.A., Giabbanelli P.J., Hogan M., Msilu F., Gill A., York C.S. Effects of success v failure cases on learner-learner interaction. Computers & Education. 2018; 118 :120–132. [ Google Scholar ]
  • Thomas J. Exploring the use of asynchronous online discussion in health care education: A literature review. Computers & Education. 2013; 69 :199–215. [ Google Scholar ]
  • Thormann J., Fidalgo P. Guidelines for online course moderation and community building from a student's perspective. Journal of Online Learning and Teaching. 2014; 10 (3):374–388. * [ Google Scholar ]
  • Tibi M.H. Computer science students' attitudes towards the use of structured and unstructured discussion forums in fully online courses. Online Learning. 2018; 22 (1):93–106. * [ Google Scholar ]
  • Tsai C.W., Chiang Y.C. Research trends in problem‐based learning (pbl) research in e‐learning and online education environments: A review of publications in SSCI‐indexed journals from 2004 to 2012. British Journal of Educational Technology. 2013; 44 (6):E185–E190. [ Google Scholar ]
  • Tsai C.W., Fan Y.T. Research trends in game‐based learning research in online learning environments: A review of studies published in SSCI‐indexed journals from 2003 to 2012. British Journal of Educational Technology. 2013; 44 (5):E115–E119. [ Google Scholar ]
  • Tsai C.W., Shen P.D., Chiang Y.C. Research trends in meaningful learning research on e‐learning and online education environments: A review of studies published in SSCI‐indexed journals from 2003 to 2012. British Journal of Educational Technology. 2013; 44 (6):E179–E184. [ Google Scholar ]
  • Tsai C.W., Shen P.D., Fan Y.T. Research trends in self‐regulated learning research in online learning environments: A review of studies published in selected journals from 2003 to 2012. British Journal of Educational Technology. 2013; 44 (5):E107–E110. [ Google Scholar ]
  • U.S. Department of Education, Institute of Education Sciences . InstituteofEducationSciences; Washington,DC: 2017. What Works Clearinghouse procedures and standards handbook, version3.0. Retrievedfrom. [ Google Scholar ]
  • Veletsianos G., Shepherdson P. A systematic analysis and synthesis of the empirical MOOC literature published in 2013–2015. International Review of Research in Open and Distance Learning. 2016; 17 (2) [ Google Scholar ]
  • VERBI Software . 2019. MAXQDA 2020 online manual. Retrieved from maxqda. Com/help-max20/welcome [ Google Scholar ]
  • Verstegen D., Dailey-Hebert A., Fonteijn H., Clarebout G., Spruijt A. How do virtual teams collaborate in online learning tasks in a MOOC? International Review of Research in Open and Distance Learning. 2018; 19 (4) * [ Google Scholar ]
  • Wang Y., Baker R. Grit and intention: Why do learners complete MOOCs? International Review of Research in Open and Distance Learning. 2018; 19 (3) * [ Google Scholar ]
  • Wei C.W., Chen N.S., Kinshuk A model for social presence in online classrooms. Educational Technology Research & Development. 2012; 60 (3):529–545. * [ Google Scholar ]
  • Wicks D., Craft B.B., Lee D., Lumpe A., Henrikson R., Baliram N., Wicks K. An evaluation of low versus high collaboration in online learning. Online Learning. 2015; 19 (4):n4. * [ Google Scholar ]
  • Wise A.F., Perera N., Hsiao Y.T., Speer J., Marbouti F. Microanalytic case studies of individual participation patterns in an asynchronous online discussion in an undergraduate blended course. The Internet and Higher Education. 2012; 15 (2):108–117. * [ Google Scholar ]
  • Wisneski J.E., Ozogul G., Bichelmeyer B.A. Does teaching presence transfer between MBA teaching environments? A comparative investigation of instructional design practices associated with teaching presence. The Internet and Higher Education. 2015; 25 :18–27. * [ Google Scholar ]
  • Wladis C., Hachey A.C., Conway K. An investigation of course-level factors as predictors of online STEM course outcomes. Computers & Education. 2014; 77 :145–150. * [ Google Scholar ]
  • Wladis C., Samuels J. Do online readiness surveys do what they claim? Validity, reliability, and subsequent student enrollment decisions. Computers & Education. 2016; 98 :39–56. [ Google Scholar ]
  • Yamagata-Lynch L.C. Blending online asynchronous and synchronous learning. International Review of Research in Open and Distance Learning. 2014; 15 (2) * [ Google Scholar ]
  • Yang J., Kinshuk, Yu H., Chen S.J., Huang R. Strategies for smooth and effective cross-cultural online collaborative learning. Journal of Educational Technology & Society. 2014; 17 (3):208–221. * [ Google Scholar ]
  • Yeboah A.K., Smith P. Relationships between minority students online learning experiences and academic performance. Online Learning. 2016; 20 (4):n4. * [ Google Scholar ]
  • Yu T. Examining construct validity of the student online learning readiness (SOLR) instrument using confirmatory factor analysis. Online Learning. 2018; 22 (4):277–288. * [ Google Scholar ]
  • Yukselturk E., Bulut S. Gender differences in self-regulated online learning environment. Educational Technology & Society. 2009; 12 (3):12–22. [ Google Scholar ]
  • Yukselturk E., Top E. Exploring the link among entry characteristics, participation behaviors and course outcomes of online learners: An examination of learner profile using cluster analysis. British Journal of Educational Technology. 2013; 44 (5):716–728. [ Google Scholar ]
  • Zawacki-Richter O., Backer E., Vogt S. Review of distance education research (2000 to 2008): Analysis of research areas, methods, and authorship patterns. International Review of Research in Open and Distance Learning. 2009; 10 (6):30. doi: 10.19173/irrodl.v10i6.741. [ CrossRef ] [ Google Scholar ]
  • Zhu M., Sari A., Lee M.M. A systematic review of research methods and topics of the empirical MOOC literature (2014–2016) The Internet and Higher Education. 2018; 37 :31–39. [ Google Scholar ]
  • Zimmerman T.D. Exploring learner to content interaction as a success factor in online courses. International Review of Research in Open and Distance Learning. 2012; 13 (4):152–165. [ Google Scholar ]
Published: 21 April 2021

Impact of online classes on the satisfaction and performance of students during the pandemic period of COVID-19

Ram Gopal 1, Varsha Singh 1 & Arun Aggarwal 2

Education and Information Technologies, volume 26, pages 6923–6947 (2021)


The aim of the study is to identify the factors affecting students' satisfaction and performance regarding online classes during the pandemic period of COVID-19 and to establish the relationships between these variables. The study is quantitative in nature, and the data were collected through an online survey from 544 respondents who were studying business management (B.B.A. or M.B.A.) or hotel management courses in Indian universities. Structural equation modeling was used to test the proposed hypotheses. The results show that the four independent factors used in the study, viz. quality of instructor, course design, prompt feedback, and expectations of students, positively impact students' satisfaction, and that students' satisfaction in turn positively impacts students' performance. For educational management, these four factors are essential to achieving a high level of satisfaction and performance in online courses. The study was conducted during the COVID-19 pandemic to examine the effect of online teaching on students' performance.

1 Introduction

Coronavirus is a group of viruses that is the main root of diseases like cough, cold, sneezing, fever, and some respiratory symptoms (WHO, 2019). Coronavirus disease is contagious and spreads very fast among human beings. COVID-19 is a new strain that originated in Wuhan, China, in December 2019. Coronaviruses circulate in animals, but some of these viruses can transmit between animals and humans (Perlman & McIntosh, 2020). As of March 28, 2020, according to the MoHFW, a total of 909 confirmed COVID-19 cases (862 Indians and 47 foreign nationals) had been reported in India (Centers for Disease Control and Prevention, 2020). Officially, no vaccine or medicine has been validated to curb the spread of COVID-19 (Yu et al., 2020). The influence of the COVID-19 pandemic on the education system has led to widespread closures of schools and colleges worldwide. On March 24, India declared a country-wide lockdown of schools and colleges (NDTV, 2020) to prevent transmission of the coronavirus among students (Bayham & Fenichel, 2020). School closures in response to the COVID-19 pandemic have shed light on several issues affecting access to education. Because COVID-19 is soaring, a huge number of children, adults, and youths cannot attend schools and colleges (UNESCO, 2020). Lah and Botelho (2012) contended that the effect of school closures on students' performance is unclear.

School closures may also harm students by disrupting teacher and student networks, leading to poor performance. Bridge (2020) reported that schools and colleges are moving towards educational technologies for student learning to avoid strain during the pandemic season. Hence, the present study's objective is to develop and test a conceptual model of students' satisfaction with online teaching during COVID-19, a period in which both students and teachers have had no option but to use the online platform for uninterrupted learning and teaching.

UNESCO recommends distance learning programs and open educational applications during school closures caused by COVID-19, so that schools and teachers can continue teaching their pupils and limit the interruption of education. Many institutes have therefore opted for online classes (Shehzadi et al., 2020).

As a versatile platform for learning and teaching processes, the e-learning framework has been increasingly used (Salloum & Shaalan, 2018). E-learning is defined as a new paradigm of online learning based on information technology (Moore et al., 2011). In contrast to traditional learning, academics, educators, and other practitioners are eager to know how e-learning can produce better outcomes and academic achievements. The answer can be sought only by analyzing student satisfaction and performance.

Many comparative studies have been carried out to explore whether face-to-face or traditional teaching methods are more productive, or whether online or hybrid learning is better (Lockman & Schirmer, 2020; Pei & Wu, 2019; González-Gómez et al., 2016). Results of these studies show that students perform much better in online learning than in traditional learning. Henriksen et al. (2020) highlighted the problems faced by educators while shifting from offline to online modes of teaching. In the past, several research studies have examined online learning to explore student satisfaction, acceptance of e-learning, distance learning success factors, and learning efficiency (Sher, 2009; Lee, 2014; Yen et al., 2018). However, only a scant amount of literature is available on the factors that affect students' satisfaction and performance in online classes during the COVID-19 pandemic (Rajabalee & Santally, 2020). In the present study, the authors propose that course design, quality of the instructor, prompt feedback, and students' expectations are the four prominent determinants of learning outcomes and students' satisfaction with online classes (Lee, 2014).

Course design refers to curriculum knowledge, program organization, instructional goals, and course structure (Wright, 2003). If well planned, course design increases pupils' satisfaction with the system (Almaiah & Alyoussef, 2019). Mtebe and Raisamo (2014) proposed that effective course design helps improve performance through learners' knowledge and skills (Khan & Yildiz, 2020; Mohammed et al., 2020). If the course is not designed effectively, teachers and students may make little use of e-learning platforms (Almaiah & Almulhem, 2018); conversely, an effectively designed course leads to higher acceptance of the e-learning system by students and improves their performance (Mtebe & Raisamo, 2014). Hence, to prepare their courses for online learning, many instructors teaching blended courses for the first time are likely to require a complete overhaul of those courses (Bersin, 2004; Ho et al., 2006).

The second factor, instructor quality, plays an essential role in students' satisfaction with online classes. Instructor quality refers to a professional who understands students' educational needs, has unique teaching skills, and knows how to meet students' learning needs (Luekens et al., 2004). Marsh (1987) developed five instruments for measuring instructor quality, chief among them the Students' Evaluation of Educational Quality (SEEQ), which delineates instructor quality. SEEQ is considered one of the most commonly used and unanimously embraced methods (Grammatikopoulos et al., 2014) and has proved a very useful form of student feedback for measuring instructor quality (Marsh, 1987).

The third factor that improves students' satisfaction is prompt feedback (Kinicki et al., 2004). Feedback is defined as information given by lecturers and tutors about the performance of students; within this context, feedback is a "consequence of performance" (Hattie & Timperley, 2007, p. 81). In education, "prompt feedback can be described as knowing what you know and what you do not related to learning" (Simsek et al., 2017, p. 334). Christensen (2014) studied the link between feedback and performance and introduced the positivity ratio concept, a mechanism that plays an important role in explaining performance through feedback. Prompt feedback has been found to build a strong linkage between faculty and students, which ultimately leads to better learning outcomes (Simsek et al., 2017; Chang, 2011).

The fourth factor is students' expectations. Appleton-Knapp and Krentler (2006) measured the impact of students' expectations on their performance and pinpointed that student expectations are important. When students' expectations are met, their satisfaction rises (Bates & Kaye, 2014); these findings were backed by the earlier Student Satisfaction Index Model (Zhang et al., 2008). However, when students' expectations are not fulfilled, learning and satisfaction with the course may suffer. Student satisfaction is defined as students' ability to compare the desired benefit with the observed effect of a particular product or service (Budur et al., 2019). Students whose grade expectations are high show higher satisfaction than those with lower grade expectations.

Scrutiny of the literature shows that although different researchers have examined the factors affecting student satisfaction, no study has examined the effect of course design, quality of the instructor, prompt feedback, and students' expectations on students' satisfaction with online classes during the COVID-19 pandemic. This study therefore explores the factors that affect students' satisfaction and performance regarding online classes during the pandemic. The pandemic compelled educational institutions to move online, a mode with which neither teachers nor learners were acquainted, and students were not mentally prepared for such a shift. This research accordingly examines which factors affect students and how students perceived these changes, as reflected in their satisfaction levels.

This paper is structured as follows: the second section describes the theoretical framework and the linkages among the research variables, from which the research hypotheses are framed. The third section presents the research methodology, reported as per APA guidelines. The outcomes and corresponding results of the empirical analysis are then discussed. Lastly, the paper concludes with a discussion and proposes implications for future studies.

2 Theoretical framework

Achievement goal theory (AGT) is commonly used to understand students' performance; it was proposed by four scholars, Carole Ames, Carol Dweck, Martin Maehr, and John Nicholls, in the late 1970s (Elliot, 2005). Elliott and Dweck (1988, p. 11) state that "an achievement goal involves a program of cognitive processes that have cognitive, affective and behavioral consequence". The theory suggests that students' motivation and achievement-related behaviors can be understood through the purposes and reasons they adopt while engaged in learning activities (Dweck & Leggett, 1988; Ames, 1992; Urdan, 1997). Several studies hold that there are four approaches to achieving a goal: mastery-approach, mastery-avoidance, performance-approach, and performance-avoidance (Pintrich, 1999; Elliot & McGregor, 2001; Schwinger & Stiensmeier-Pelster, 2011; Hansen & Ringdal, 2018; Mouratidis et al., 2018). The environment also affects students' performance (Ames & Archer, 1988). Traditionally, classroom teaching has been an effective method for achieving goals (Ames & Archer, 1988; Ames, 1992; Clayton et al., 2010); in the modern era, however, internet-based teaching has also become an effective tool for delivering lectures, and web-based applications are becoming modern classrooms (Azlan et al., 2020). The following sections discuss the relationships between the independent and dependent variables (Fig. 1).

Figure 1. Proposed model.
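The hypothesized structure in Fig. 1 (four exogenous factors predicting satisfaction, which in turn predicts performance) can be written in the regression-style model syntax used by SEM packages such as R's lavaan or Python's semopy. The sketch below is illustrative only: the variable names are invented stand-ins for the paper's constructs, and the snippet merely parses the specification to enumerate the five structural paths corresponding to H1–H5.

```python
# Lavaan/semopy-style specification of the proposed model. "~" reads
# "is regressed on". Variable names are illustrative, not the paper's.
MODEL_SPEC = """
satisfaction ~ instructor_quality + course_design + prompt_feedback + student_expectations
performance ~ satisfaction
"""

def structural_paths(spec):
    """Return (predictor, outcome) pairs, one per regression path."""
    paths = []
    for line in spec.strip().splitlines():
        outcome, predictors = line.split("~")
        for predictor in predictors.split("+"):
            paths.append((predictor.strip(), outcome.strip()))
    return paths

paths = structural_paths(MODEL_SPEC)
print(len(paths))  # one structural path per hypothesis H1-H5
```

Fitting such a specification to the survey data with an SEM package would estimate all five path coefficients simultaneously, which is the approach the abstract describes.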

3 Hypotheses development

3.1 Quality of the instructor and satisfaction of the students

An instructor with high enthusiasm for students' learning has a positive impact on their satisfaction. Quality of instructor is one of the most critical measures of student satisfaction, leading to the outcome of the education process (Munteanu et al., 2010; Arambewela & Hall, 2009; Ramsden, 1991). If the teacher delivers the course effectively and motivates students to do better in their studies, this process leads to student satisfaction and enhances learning (Ladyshewsky, 2013). Furthermore, the instructor's understanding of learners' needs also ensures student satisfaction (Kauffman, 2015). Hence the hypothesis that the quality of the instructor significantly affects the satisfaction of the students was included in this study.

H1: The quality of the instructor positively affects the satisfaction of the students.

3.2 Course design and satisfaction of students

The course's technological design strongly influences students' learning and satisfaction through their course expectations (Liaw, 2008; Lin et al., 2008). Active course design yields more effective student outcomes than traditional design (Black & Kassaye, 2014). Learning style is essential for effective course design (Wooldridge, 1995): when creating an online course design, it is essential to generate an experience for students with different learning styles. Similarly, Jenkins (2015) highlighted that course design attributes can be developed and employed to enhance student success. Hence the hypothesis that course design significantly affects students' satisfaction was included in this study.

H2: Course design positively affects the satisfaction of students.

3.3 Prompt feedback and satisfaction of students

The emphasis in this study is on understanding the influence of prompt feedback on satisfaction. Feedback gives students information about the effectiveness of their performance (Chang, 2011; Grebennikov & Shah, 2013; Simsek et al., 2017). Prompt feedback enhances the student learning experience (Brownlee et al., 2009) and boosts satisfaction (O'Donovan, 2017). It is also a self-evaluation tool for students (Rogers, 1992) by which they can improve their performance. Eraut (2006) highlighted the impact of feedback on future practice and student learning development, and good feedback practice benefits both students' learning and teachers' ability to improve the learning experience (Yorke, 2003). Hence the hypothesis that prompt feedback significantly affects satisfaction was included in this study.

H3: Prompt feedback of the students positively affects the satisfaction.

3.4 Expectations and satisfaction of students

Expectation is a crucial factor that directly influences student satisfaction. Expectation Disconfirmation Theory (EDT) (Oliver, 1980) has been utilized to determine the level of satisfaction based on expectations (Schwarz & Zhu, 2015). Meeting students' expectations is the best way to improve their satisfaction (Brown et al., 2014), and recognizing those expectations makes it possible to raise satisfaction levels (ICSB, 2015). Finally, the positive approach used in many online learning classes has been shown to place high expectations on learners (Gold, 2011) and has led to successful outcomes. Hence the hypothesis that students' expectations significantly affect satisfaction was included in this study.

H4: Expectations of the students positively affect the satisfaction.

3.5 Satisfaction and performance of the students

Zeithaml (1988) describes satisfaction as the outcome of an educational institution's performance. According to Kotler and Clarke (1986), satisfaction is the desired outcome of any aim that gratifies an individual. Quality interactions between instructor and students lead to student satisfaction (Malik et al., 2010; Martínez-Argüelles et al., 2016), and teaching quality and course material enhance student satisfaction through successful outcomes (Sanderson, 1995). Satisfaction relates to student performance in terms of motivation, learning, assurance, and retention (Biner et al., 1996). Mensink and King (2020) described performance as the culmination of student–teacher efforts, reflecting students' interest in their studies. The critical element in education is students' academic performance (Rono, 2013); it is the center pole around which the entire education system revolves. Narad and Abdullah (2016) concluded that students' academic performance determines an academic institution's success or failure.

Singh et al. (2016) asserted that students' academic performance directly influences a country's socio-economic development, and Farooq et al. (2011) highlight that students' academic performance is the primary concern of all faculties. Additionally, students' academic performance is the main foundation of knowledge gaining and skill improvement. According to Narad and Abdullah (2016), regular evaluations or examinations over a specific period of time are essential for assessing students' academic performance and achieving better outcomes. Hence the hypothesis that satisfaction significantly affects the performance of the students was included in this study.

H5: Students’ satisfaction positively affects the performance of the students.

3.6 Satisfaction as mediator

Sibanda et al. (2015) applied goal theory to examine the factors influencing students' academic performance, illuminating the significance students attach to their satisfaction and academic achievement. According to this theory, students perform well if they know the factors that affect their performance. Among the variables discussed above, institutional factors that influence student satisfaction and, through it, performance include course design and quality of the instructor (DeBourgh, 2003; Lado et al., 2003), as well as prompt feedback and expectations (Fredericksen et al., 2000). Hence the hypothesis that quality of the instructor, course design, prompt feedback, and students' expectations significantly affect students' performance through satisfaction was included in this study.

H6: Quality of the instructor, course design, prompt feedback, and students' expectations affect the students' performance through satisfaction.

H6a: Students’ satisfaction mediates the relationship between quality of the instructor and student’s performance.

H6b: Students’ satisfaction mediates the relationship between course design and student’s performance.

H6c: Students’ satisfaction mediates the relationship between prompt feedback and student’s performance.

H6d: Students' satisfaction mediates the relationship between students' expectations and student's performance.

4.1 Participants

In this cross-sectional study, data were collected from 544 respondents studying management (B.B.A. or M.B.A.) or hotel management courses. A purposive sampling technique was used. Descriptive statistics show that 48.35% of the respondents were MBA or BBA students, and the rest were hotel management students. Male students made up 71% of the sample and female students 29%, so the proportion of male students was almost double that of females. The students' ages ranged from 18 to 35. The dominant group, aged 18 to 22, comprised undergraduate students (94%); the remaining 6% were postgraduate students.

4.2 Materials

The research instrument consists of two sections. The first section covers demographic variables such as discipline, gender, age group, and education level (undergraduate or postgraduate). The second section measures the six factors: instructor's quality, course design, prompt feedback, student expectations, satisfaction, and performance. These attributes were taken from previous studies (Yin & Wang, 2015 ; Bangert, 2004 ; Chickering & Gamson, 1987 ; Wilson et al., 1997 ). "Instructor quality" was measured with the seven-item scale developed by Bangert ( 2004 ). The "course design" (six items) and "prompt feedback" (five items) scales were also adapted from Bangert ( 2004 ). The "students' expectations" scale consists of five items: four adapted from Bangert ( 2004 ) and one taken from Wilson et al. ( 1997 ). Students' satisfaction was measured with six items taken from Bangert ( 2004 ), Wilson et al. ( 1997 ), and Yin and Wang ( 2015 ). "Students' performance" was measured with the six-item scale developed by Wilson et al. ( 1997 ). These variables were assessed on a five-point Likert scale ranging from 1 (strongly disagree) to 5 (strongly agree). Only students from India took part in the survey. A total of thirty-four questions were asked to examine the effect of the first four variables on students' satisfaction and performance. For full details of the questionnaire, see Appendix Table 6.
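As a rough illustration of how such a multi-scale instrument is typically scored, the sketch below averages each respondent's Likert items into per-construct scores. The item-to-construct mapping and the responses are synthetic stand-ins (the paper does not publish its scoring procedure); only the per-scale item counts follow the text.

```python
import numpy as np

# hypothetical mapping of scale items to the six constructs
# (item counts follow the scales described in the text)
constructs = {
    "instructor_quality": range(0, 7),    # 7 items
    "course_design":      range(7, 13),   # 6 items
    "prompt_feedback":    range(13, 18),  # 5 items
    "expectations":       range(18, 23),  # 5 items
    "satisfaction":       range(23, 29),  # 6 items
    "performance":        range(29, 35),  # 6 items
}

rng = np.random.default_rng(42)
# synthetic five-point Likert responses for 544 respondents
responses = rng.integers(1, 6, size=(544, 35))

# construct score = mean of that construct's items per respondent
scores = {name: responses[:, list(idx)].mean(axis=1)
          for name, idx in constructs.items()}

for name, s in scores.items():
    print(f"{name}: mean = {s.mean():.2f}")
```

Such construct means (or the item-level responses themselves) are what feed into the factor analyses and structural model described next.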

4.3 Design

The study used a descriptive research design. The factors instructor quality, course design, prompt feedback, and students' expectations were the independent variables; students' satisfaction was the mediator; and students' performance was the dependent variable.

4.4 Procedure

In this cross-sectional research, the respondents were selected through judgment sampling. They were informed about the objective of the study and the information-gathering process, and were assured of the confidentiality of the data; no incentive was given to them for participating. The data were gathered through an online survey: the questionnaire was built in Google Forms and circulated via email. Students were also asked to name their college, and fifteen colleges across India took part. The data were collected during the COVID-19 pandemic, amid the total lockdown in India. This was an apt time to collect data on the topic because all colleges across India were holding online classes, so students had enough time to understand the instrument and respond to the questionnaire effectively. A total of 615 questionnaires were circulated, of which 574 were returned. Thirty responses were excluded as unengaged, leaving 544 questionnaires for the present investigation. Both male and female students, of different age groups and courses (undergraduate and postgraduate students of management and hotel management), were part of the sample.

5.1 Exploratory factor analysis (EFA)

To analyze the data, SPSS and AMOS software were used. First, to extract distinct factors, an exploratory factor analysis (EFA) with VARIMAX rotation was performed on the sample of 544. The exploratory analysis rendered six distinct factors. Factor one was named quality of instructor; sample items include "The instructor communicated effectively", "The instructor was enthusiastic about online teaching", and "The instructor was concerned about student learning". Factor two was labeled course design, with items such as "The course was well organized", "The course was designed to allow assignments to be completed across different learning environments.", and "The instructor facilitated the course effectively". Factor three was labeled prompt feedback, with items such as "The instructor responded promptly to my questions about the use of Webinar" and "The instructor responded promptly to my questions about general course requirements". The fourth factor was students' expectations, with items such as "The instructor provided models that clearly communicated expectations for weekly group assignments" and "The instructor used good examples to explain statistical concepts". The fifth factor was students' satisfaction, with items such as "The online classes were valuable" and "Overall, I am satisfied with the quality of this course". The sixth factor was student performance, with items such as "The online classes has sharpened my analytic skills" and "Online classes really tries to get the best out of all its students". These six factors explained 67.784% of the total variance. To validate the factors extracted through EFA, confirmatory factor analysis (CFA) was performed in AMOS. Finally, structural equation modeling (SEM) was used to test the hypothesized relationships.
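For readers who want to reproduce this kind of extraction outside SPSS, the sketch below runs a six-factor EFA with varimax rotation on synthetic data using scikit-learn. The data matrix, loadings, and the resulting variance share are illustrative, not the study's.

```python
import numpy as np
from sklearn.decomposition import FactorAnalysis  # rotation needs sklearn >= 0.24

rng = np.random.default_rng(0)

# synthetic stand-in for the 544-respondent, 34-item response matrix
n_items = 34
latent = rng.normal(size=(544, 6))          # six underlying factors
mixing = rng.normal(size=(6, n_items))      # how factors load on items
X = latent @ mixing + rng.normal(scale=0.5, size=(544, n_items))

fa = FactorAnalysis(n_components=6, rotation="varimax")
fa.fit(X)
loadings = fa.components_.T                 # items x factors, varimax-rotated

# proportion of total variance accounted for by the six rotated factors
explained = (loadings ** 2).sum() / X.var(axis=0, ddof=1).sum()
print(f"variance explained: {explained:.1%}")
```

In practice one would inspect the rotated loading matrix to name each factor, exactly as the authors did when labeling quality of instructor, course design, and so on.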

5.2 Measurement model

Table 1 summarizes the findings of the EFA and CFA: the EFA rendered six distinct factors, and the CFA validated them. Table 2 shows that the proposed measurement model achieved good convergent validity (Aggarwal et al., 2018a , b ). The confirmatory factor analysis showed that the standardized factor loadings were statistically significant at the 0.05 level. The measurement model also showed acceptable fit indices: CMIN = 710.709; df = 480; CMIN/df = 1.481 (p < .001); Incremental Fit Index (IFI) = 0.979; Tucker-Lewis Index (TLI) = 0.976; Goodness of Fit Index (GFI) = 0.928; Adjusted Goodness of Fit Index (AGFI) = 0.916; Comparative Fit Index (CFI) = 0.978; Root Mean Square Residual (RMR) = 0.042; and Root Mean Square Error of Approximation (RMSEA) = 0.030.
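Two of these indices follow directly from the reported chi-square statistic and sample size, using the standard formulas CMIN/df = chi-square / df and RMSEA = sqrt(max(chi-square - df, 0) / (df * (N - 1))). The snippet below recomputes them from the reported values as a sanity check:

```python
import math

# values reported for the measurement model (N = sample size)
chi2, df, n = 710.709, 480, 544

cmin_df = chi2 / df
rmsea = math.sqrt(max(chi2 - df, 0.0) / (df * (n - 1)))

print(f"CMIN/df = {cmin_df:.3f}")  # 1.481, matching the reported value
print(f"RMSEA   = {rmsea:.3f}")    # 0.030, matching the reported value
```

Both recomputed values agree with the indices reported in the text, which is a useful consistency check when reading SEM output.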

According to the accepted criterion, the Average Variance Extracted (AVE) should be higher than the squared correlations between the latent variable and all other variables. Discriminant validity is confirmed (Table 2 ) because the square root of each AVE is greater than the corresponding inter-construct correlation coefficients (Hair et al., 2006 ). Additionally, discriminant validity exists when each variable's measurement indicators correlate weakly with all other variables except the one with which they are theoretically associated (Aggarwal et al., 2018a , b ; Aggarwal et al., 2020 ). The results in Table 2 show that the measurement model achieved good discriminant validity.
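This Fornell-Larcker style comparison is straightforward to reproduce. The sketch below uses hypothetical standardized loadings and inter-construct correlations (the actual Table 2 values are not reproduced here): AVE is the mean of the squared standardized loadings, and its square root is compared against the construct's correlations with the other constructs.

```python
import numpy as np

# hypothetical standardized loadings for one six-item construct
loadings = np.array([0.78, 0.81, 0.74, 0.80, 0.76, 0.79])

# AVE = mean of the squared standardized loadings
ave = np.mean(loadings ** 2)

# hypothetical correlations of this construct with the other five constructs
inter_corr = np.array([0.42, 0.51, 0.38, 0.47, 0.55])

# Fornell-Larcker criterion: sqrt(AVE) must exceed every inter-construct correlation
discriminant_ok = np.sqrt(ave) > inter_corr.max()
print(f"AVE = {ave:.3f}, sqrt(AVE) = {np.sqrt(ave):.3f}, "
      f"discriminant validity: {discriminant_ok}")
```

Repeating this check for every construct against every other construct is what Table 2 reports in compact form.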

5.3 Structural model

To test the proposed hypotheses, the researchers used structural equation modeling (SEM), a multivariate technique that combines factor analysis and multiple regression to analyze the structural relationships between measured variables and latent constructs.

Table 3 presents the structural model's fit indices. With all variables taken together, CMIN/DF is 2.479, and all model fit values are within the acceptable range, meaning the model attained a good fit. Other fit indices, such as GFI = 0.982 and AGFI = 0.956, are also supportive (Schumacker & Lomax, 1996 ; Marsh & Grayson, 1995 ; Kline, 2005 ).

Hence, the model fitted the data successfully. All co-variances among the variables and regression weights were statistically significant ( p  < 0.001).

Table 4 presents the relationships between the exogenous, mediator, and endogenous variables: quality of instructor, prompt feedback, course design, students' expectations, students' satisfaction, and students' performance. The first four factors relate positively to satisfaction, which in turn positively affects students' performance. The instructor's quality has a positive relationship with students' satisfaction with online classes (SE = 0.706, t-value = 24.196; p < 0.05); hence, H1 was supported. Course design has a positive relationship with students' satisfaction (SE = 0.064, t-value = 2.395; p < 0.05); hence, H2 was supported. Prompt feedback has a positive relationship with students' satisfaction (SE = 0.067, t-value = 2.520; p < 0.05); hence, H3 was supported. Students' expectations have a positive relationship with students' satisfaction with online classes (SE = 0.149, t-value = 5.127; p < 0.05); hence, H4 was supported. The SEM results show that, among the four factors, the one that most influenced students' satisfaction was instructor quality (SE = 0.706), followed by students' expectations (SE = 0.149) and prompt feedback (SE = 0.067); the factor that least affected students' satisfaction was course design (SE = 0.064). Finally, Table 4 shows that students' satisfaction has a positive effect on students' performance (SE = 0.186, t-value = 2.800; p < 0.05); hence, H5 was supported.

Table 5 shows that students' satisfaction partially mediates the positive relationship between the instructor's quality and student performance; hence, H6(a) was supported. The mediation analysis further showed that satisfaction partially mediates the positive relationship between course design and student performance; hence, H6(b) was supported. However, satisfaction fully mediates the positive relationship between prompt feedback and student performance; hence, H6(c) was supported. Finally, Table 5 shows that satisfaction partially mediates the positive relationship between students' expectations and student performance; hence, H6(d) was supported.
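A common way to test such indirect effects outside AMOS is to bootstrap the product of path a (predictor to mediator) and path b (mediator to outcome, controlling for the predictor): if the confidence interval for a*b excludes zero, mediation is supported, and it is "full" when the direct path is additionally non-significant. The sketch below runs this on synthetic data with an effect built in; the variable names and effect sizes are illustrative, not the study's estimates.

```python
import numpy as np

rng = np.random.default_rng(7)
n = 544

# synthetic data with a built-in indirect effect: feedback -> satisfaction -> performance
feedback = rng.normal(size=n)
satisfaction = 0.6 * feedback + rng.normal(size=n)                      # path a = 0.6
performance = 0.5 * satisfaction + 0.0 * feedback + rng.normal(size=n)  # path b = 0.5, direct c' = 0

def indirect_effect(x, m, y):
    """a*b from two OLS regressions: m ~ x, then y ~ m + x."""
    a = np.polyfit(x, m, 1)[0]                     # slope of m on x
    X = np.column_stack([np.ones_like(x), m, x])
    b = np.linalg.lstsq(X, y, rcond=None)[0][1]    # coefficient on m
    return a * b

# percentile bootstrap of the indirect effect
boot = []
for _ in range(2000):
    idx = rng.integers(0, n, size=n)
    boot.append(indirect_effect(feedback[idx], satisfaction[idx], performance[idx]))
lo, hi = np.percentile(boot, [2.5, 97.5])

print(f"indirect effect 95% CI: [{lo:.3f}, {hi:.3f}]")
```

Here the simulated direct path is zero, so a CI excluding zero corresponds to the full-mediation pattern reported for prompt feedback; a non-zero direct path alongside a significant indirect effect would correspond to the partial-mediation results.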

6 Discussion

In the present study, the authors evaluated the different factors directly linked with students' satisfaction and performance in online classes during COVID-19. Owing to the global pandemic, all colleges and universities were shifted to online mode by their respective governments. Nobody knew how long the pandemic would last, so teaching moved online. Even educators who were not tech-savvy updated themselves to battle the unexpected circumstances (Pillai et al., 2021 ). The present study's results will help educators increase students' satisfaction and performance in online classes, and the research assists educators in understanding the different factors required for online teaching.

Whereas past studies examined the factors affecting student satisfaction within the conventional schooling framework, the present study was conducted during India's lockdown period to identify the prominent factors that drive students' satisfaction with online classes. The study also explored the direct linkage between students' satisfaction and their performance. The findings indicate that instructor quality is the most prominent factor affecting students' satisfaction during online classes. This means the instructor needs to be very efficient during lectures and must understand students' psychology to deliver the course content effectively. If the teacher delivers the course content well, it improves students' satisfaction and performance. The teachers' perspective is critical because their enthusiasm leads to a higher-quality online learning process.

The present study highlighted that the second most prominent factor affecting students' satisfaction during online classes is students' expectations. Students may have certain expectations of their classes; if the instructor understands those expectations and customizes the course design accordingly, the students can be expected to perform better in examinations. The third factor affecting students' satisfaction is feedback. After delivering the course, instructors should gather appropriate feedback to plan future courses; it also helps shape future strategies (Tawafak et al., 2019 ). There must be a proper feedback system for improvement because feedback is the real image of the course content. The last factor affecting students' satisfaction is course design. The course content needs to be designed effectively so that students can easily understand it. If the instructor plans the course so that students understand the content without any problems, this leads to satisfaction, and students can perform better in exams. Some course content is difficult to deliver online, such as practical components (e.g., recipes for dishes or laboratory demonstrations). In such situations, the instructor needs to be more creative in designing and delivering the content so that it positively affects students' overall satisfaction with online classes.

Overall, the students agreed that online teaching was valuable for them, even though online classes were a first experience during the COVID-19 pandemic (Agarwal & Kaushik, 2020 ; Rajabalee & Santally, 2020 ). Some previous studies suggest that technology-supported courses have a positive relationship with students' performance (Cho & Schelzer, 2000 ; Harasim, 2000 ; Sigala, 2002 ). Demographic characteristics also play a vital role in understanding online course performance. According to the APA Work Group of the Board of Educational Affairs ( 1997 ), learner-centered principles suggest that students must be willing to invest the time required to complete individual course assignments. Online instructors must be enthusiastic about developing genuine instructional resources that actively connect learners and encourage them toward proficient performance. Teachers and students share equal responsibility for better performance: when learners have trouble understanding a concept, they need to ask the instructor for solutions (Bangert, 2004 ). Thus, we can conclude that instructor quality, students' expectations, prompt feedback, and effective course design significantly impact students' online learning process.

7 Implications of the study

The results of this study have numerous significant practical implications for educators, students, and researchers. The study also contributes to the literature by demonstrating that multiple factors are responsible for student satisfaction and performance in the context of online classes during the COVID-19 pandemic. It differs from previous studies (Baber, 2020 ; Ikhsan et al., 2019 ; Eom & Ashill, 2016 ), none of which examined the effect of students' satisfaction on their perceived academic performance. Previous empirical findings highlighted the importance of examining the factors affecting student satisfaction (Maqableh & Jaradat, 2021 ; Yunusa & Umar, 2021 ), but none examined the combined effect of course design, quality of instructor, prompt feedback, and students' expectations on students' satisfaction with online classes during the pandemic. The present study fills this research gap.

The first essential contribution of this study is showing that the instructor's facilitating role and competence affect students' level of satisfaction (Gray & DiLoreto, 2016 ). Instructors who taught online courses during the pandemic bore an extra obligation: they had to adapt to a changing climate, polish their technical skills throughout the process, and foster students' technical knowledge in this environment. The present study's findings indicate that instructor quality is a significant determinant of student satisfaction during online classes amid a pandemic. In higher education, teacher quality refers to the instructor's specific individual characteristics before entering the class (Darling-Hammond, 2010 ), including content knowledge, pedagogical knowledge, inclination, and experience. More significantly, the deepest understanding can be provided by those with substantial technical expertise in the areas they teach (Martin, 2021 ). Secondly, the results contribute to the profession of education by illustrating a realistic approach for effectively recognizing students' expectations in class. The primary expectation of most students before joining a university is employment, and instructors have agreed that they should do more to fulfill students' employment expectations (Gorgodze et al., 2020 ). Instructors can use this to balance expectations and improve student satisfaction. The results can be used to continually improve and build courses, as well as to inform policy decisions that improve education programs.
Thirdly, based on these outcomes, online course designers and instructors can delve deeper into how to structure online courses more efficiently, including design features that minimize negative and maximize positive emotion, contributing to greater student satisfaction (Martin et al., 2018 ). The findings suggest that course design has a substantial positive influence on student performance in online classes. Online course design needs to present essential details, such as course content, educational goals, course structure, and course outputs, in a consistent manner so that students find the e-learning system beneficial; this encourages students to use the system, which in turn leads to better performance (Almaiah & Alyoussef, 2019 ). Lastly, the results indicate that instructors should respond to questions promptly and provide timely feedback on assignments; such techniques help students in online courses by improving instructor participation, interaction, and understanding (Martin et al., 2018 ). Feedback helps students focus on the aspects of performance that enhance their learning.

Author information

Authors and affiliations.

Chitkara College of Hospitality Management, Chitkara University, Chandigarh, Punjab, India

Ram Gopal & Varsha Singh

Chitkara Business School, Chitkara University, Chandigarh, Punjab, India

Arun Aggarwal


Corresponding author

Correspondence to Arun Aggarwal .

Ethics declarations

Ethics approval.

Not applicable.

Conflict of interest

The authors declare no conflict of interest, financial or otherwise.

Additional information

Publisher’s note.

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.


About this article

Cite this article.

Gopal, R., Singh, V. & Aggarwal, A. Impact of online classes on the satisfaction and performance of students during the pandemic period of COVID 19. Educ Inf Technol 26 , 6923–6947 (2021).


Received : 07 December 2020

Accepted : 22 March 2021

Published : 21 April 2021

Issue Date : November 2021



  • Quality of instructor
  • Course design
  • Instructor’s prompt feedback
  • Expectations
  • Student’s satisfaction
  • Perceived performance


  • Research article
  • Open access
  • Published: 02 December 2020

Integrating students’ perspectives about online learning: a hierarchy of factors

  • Montgomery Van Wart 1 ,
  • Anna Ni 1 ,
  • Pamela Medina 1 ,
  • Jesus Canelon 1 ,
  • Melika Kordrostami 1 ,
  • Jing Zhang 1 &

International Journal of Educational Technology in Higher Education volume  17 , Article number:  53 ( 2020 ) Cite this article


This article reports on a large-scale (n = 987) exploratory factor analysis study incorporating various concepts identified in the literature as critical success factors for online learning from the students' perspective, and then determines their hierarchical significance. Seven factors (Basic Online Modality, Instructional Support, Teaching Presence, Cognitive Presence, Online Social Comfort, Online Interactive Modality, and Social Presence) were identified as significant and reliable. Regression analysis indicates that the minimal factors for enrollment in future classes, when students consider convenience and scheduling, were Basic Online Modality, Cognitive Presence, and Online Social Comfort. Students who accepted or embraced online courses on their own merits wanted, at a minimum, Basic Online Modality, Teaching Presence, Cognitive Presence, Online Social Comfort, and Social Presence. Students who preferred face-to-face classes and demanded a comparable experience valued Online Interactive Modality and Instructional Support more highly. Recommendations for online course design, policy, and future research are provided.


While there are different perspectives on the learning process, such as learning achievement and faculty perspectives, students' perspectives are especially critical since students are ultimately the raison d'être of the educational endeavor (Chickering & Gamson, 1987 ). More pragmatically, students' perspectives provide invaluable, first-hand insights into their experiences and expectations (Dawson et al., 2019 ). The student perspective is especially important when new teaching approaches are used and when new technologies are being introduced (Arthur, 2009 ; Crews & Butterfield, 2014 ; Van Wart, Ni, Ready, Shayo, & Court, 2020 ). With the renewed interest in "active" education in general (Arruabarrena, Sánchez, Blanco, et al., 2019 ; Kay, MacDonald, & DiGiuseppe, 2019 ; Nouri, 2016 ; Vlachopoulos & Makri, 2017 ) and the flipped classroom approach in particular (Flores, del-Arco, & Silva, 2016 ; Gong, Yang, & Cai, 2020 ; Lundin et al., 2018 ; Maycock, 2019 ; McGivney-Burelle, 2013 ; O'Flaherty & Phillips, 2015 ; Tucker, 2012 ), along with extraordinary shifts in technology, the student perspective on online education is profoundly important. Students' perceptions of quality integrate their own sense of learning achievement, satisfaction with the support they receive, the technical proficiency of the process, intellectual and emotional stimulation, comfort with the process, and a sense of learning community. The factors that students perceive as constituting quality online teaching, however, have not been as clear as they might be, for at least two reasons.

First, it is important to note that the overall online learning experience for students is also composed of non-teaching factors, which we briefly mention. Three such factors are (1) convenience, (2) learner characteristics and readiness, and (3) antecedent conditions that may foster teaching quality but are not directly responsible for it. (1) Convenience is an enormous non-quality factor for students (Artino, 2010 ) which has driven up online demand around the world (Fidalgo, Thormann, Kulyk, et al., 2020 ; Inside Higher Education and Gallup, 2019 ; Legon & Garrett, 2019 ; Ortagus, 2017 ). This is important since satisfaction with online classes is frequently somewhat lower than with face-to-face classes (Macon, 2011 ). However, the literature generally supports the relative equivalence of face-to-face and online modes regarding learning achievement criteria (Bernard et al., 2004 ; Nguyen, 2015 ; Ni, 2013 ; Sitzmann, Kraiger, Stewart, & Wisher, 2006 ; see Xu & Jaggars, 2014 for an alternate perspective). These contrasts are exemplified in a recent study of business students, in which online students using a flipped classroom approach outperformed their face-to-face peers but, ironically, rated instructor performance lower (Harjoto, 2017 ). (2) Learner characteristics, such as self-regulation in an active learning model, comfort with technology, and age, also affect both receptiveness to and readiness for online instruction (Alqurashi, 2016 ; Cohen & Baruth, 2017 ; Kintu, Zhu, & Kagambe, 2017 ; Kuo, Walker, Schroder, & Belland, 2013 ; Ventura & Moscoloni, 2015 ). (3) Finally, numerous antecedent factors may lead to improved instruction but are not themselves directly perceived by students, such as instructor training (Brinkley-Etzkorn, 2018 ) and the sources of faculty motivation (e.g., incentives, recognition, social influence, and voluntariness) (Wingo, Ivankova, & Moss, 2017 ).
Important as these factors are, mixing them with the perceptions of quality tends to obfuscate the quality factors directly perceived by students.

Second, while student perceptions of quality are used in innumerable studies, our overall understanding still needs to integrate them more holistically. Many studies use student perceptions of the quality and overall effectiveness of individual tools and strategies in online contexts, such as mobile devices (Drew & Mann, 2018 ), small groups (Choi, Land, & Turgeon, 2005 ), journals (Nair, Tay, & Koh, 2013 ), simulations (Vlachopoulos & Makri, 2017 ), and video (Lange & Costley, 2020 ). Such studies, however, cannot provide the overall context and comparative importance. Some studies have examined the overall learning experience of students with exploratory lists, but have mixed non-quality factors with quality-of-teaching factors, making it difficult to discern the instructor's versus contextual roles in quality (e.g., Asoodar, Vaezi, & Izanloo, 2016 ; Bollinger & Martindale, 2004 ; Farrell & Brunton, 2020 ; Hong, 2002 ; Song, Singleton, Hill, & Koh, 2004 ; Sun, Tsai, Finger, Chen, & Yeh, 2008 ). Technology adoption studies also fall into this category by essentially aggregating all teaching quality into the single category of performance (Al-Gahtani, 2016 ; Artino, 2010 ). Some studies have used high-level teaching-oriented models, primarily the Community of Inquiry model (le Roux & Nagel, 2018 ), but empirical support has been mixed (Arbaugh et al., 2008 ), and its elegance (i.e., relying on only three factors) has not provided much insight for practitioners (Anderson, 2016 ; Cleveland-Innes & Campbell, 2012 ).

Research questions

Although the number of empirical studies on student perceptions of quality factors has increased, the integration of the studies and concepts they explore remains fragmented and confusing. It is important to have an empirical view of what students value in a single comprehensive study, and also to know whether there is a hierarchy of factors, ranging from students who are least to most critical of the online learning experience. This study has two research questions.

The first research question is: What are the significant factors in creating a high-quality online learning experience from students' perspectives? That is important to know because it should have a significant effect on the instructor's design of online classes. The goal of this research question is to identify a more articulated and empirically supported set of factors capturing the full range of student expectations.

The second research question is: Is there a priority or hierarchy of factors related to students’ perceptions of online teaching quality that relate to their decisions to enroll in online classes? For example, is it possible to distinguish which factors are critical for enrollment decisions when students are primarily motivated by convenience and scheduling flexibility (minimum threshold)? Do these factors differ from students with a genuine acceptance of the general quality of online courses (a moderate threshold)? What are the factors that are important for the students who are the most critical of online course delivery (highest threshold)?

This article next reviews the literature on online education quality, focusing on the student perspective and reviews eight factors derived from it. The research methods section discusses the study structure and methods. Demographic data related to the sample are next, followed by the results, discussion, and conclusion.

Literature review

Online education is much discussed (Prinsloo, 2016 ; Van Wart et al., 2019 ; Zawacki-Richter & Naidu, 2016 ), but its perception is substantially influenced by where you stand and what you value (Otter et al., 2013 ; Tanner, Noser, & Totaro, 2009 ). Accrediting bodies care about meeting technical standards, proof of effectiveness, and consistency (Grandzol & Grandzol, 2006 ). Institutions care about reputation, rigor, student satisfaction, and institutional efficiency (Jung, 2011 ). Faculty care about subject coverage, student participation, faculty satisfaction, and faculty workload (Horvitz, Beach, Anderson, & Xia, 2015 ; Mansbach & Austin, 2018 ). For their part, students care about learning achievement (Marks, Sibley, & Arbaugh, 2005 ; O’Neill & Sai, 2014 ; Shen, Cho, Tsai, & Marra, 2013 ), but also view online education as a function of their enjoyment of classes, instructor capability and responsiveness, and comfort in the learning environment (e.g., Asoodar et al., 2016 ; Sebastianelli, Swift, & Tamimi, 2015 ). It is this last perspective, of students, upon which we focus.

It is important to note students do not sign up for online classes solely based on perceived quality. Perceptions of quality derive from notions of the capacity of online learning when ideal—relative to both learning achievement and satisfaction/enjoyment, and perceptions about the likelihood and experience of classes living up to expectations. Students also sign up because of convenience and flexibility, and personal notions of suitability about learning. Convenience and flexibility are enormous drivers of online registration (Lee, Stringer, & Du, 2017 ; Mann & Henneberry, 2012 ). Even when students say they prefer face-to-face classes to online, many enroll in online classes and re-enroll in the future if the experience meets minimum expectations. This study examines the threshold expectations of students when they are considering taking online classes.

When discussing students’ perceptions of quality, there is little clarity about the actual range of concepts, because no integrated empirical studies exist comparing the major factors found throughout the literature. Rather, there are practitioner-generated lists of micro-competencies, such as those of the Quality Matters consortium for higher education (Quality Matters, 2018), or broad frameworks encompassing many aspects of quality beyond teaching (Open and Distant Learning Quality Council, 2012). While checklists are useful for practitioners and accreditation processes, they do not provide a robust theoretical basis for scholarly development. Overarching frameworks are heuristically useful, but serve pragmatic purposes and theory building less well. The most prominent theoretical framework in the online literature is the Community of Inquiry (CoI) model (Arbaugh et al., 2008; Garrison, Anderson, & Archer, 2003), which divides instruction into teaching, cognitive, and social presence. As with many deductive theories, however, the supporting evidence is mixed (Rourke & Kanuka, 2009), especially regarding the importance of social presence (Annand, 2011; Armellini & De Stefani, 2016). Conceptually, the problem is not so much the narrow articulation of cognitive or social presence: cognitive presence is how the instructor provides opportunities for students to interact with material in robust, thought-provoking ways, and social presence refers to building a community of learning that incorporates student-to-student interactions. Teaching presence, however, includes everything else the instructor does: structuring the course, providing lectures, explaining assignments, creating rehearsal opportunities, supplying tests, grading, answering questions, and so on. These challenges become even more prominent in the online context. While the lecture is the paramount medium in face-to-face classes, it fades as the primary vehicle in online classes, giving way to detailed syllabi, electronic announcements, recorded and synchronous lectures, 24/7 communications around student questions, and the like. Amassing the pedagogical and technological elements of teaching under a single concept therefore provides little insight.

In addition to the CoI model, numerous concepts are suggested in single-factor empirical studies focusing on quality from a student’s perspective, with overlapping conceptualizations and nonstandardized naming conventions. Seven distinct factors are derived here from the literature on student perceptions of online quality: Instructional Support, Teaching Presence, Basic Online Modality, Social Presence, Online Social Comfort, Cognitive Presence, and Interactive Online Modality.

Instructional support

Instructional Support refers to students’ perceptions of the techniques used by the instructor for input, rehearsal, feedback, and evaluation. Specifically, this entails providing detailed instructions, designed use of multimedia, and a balance between repetitive class features for ease of use and techniques to prevent boredom. Instructional Support is often included as an element of Teaching Presence, but is also labeled “structure” (Lee & Rha, 2009; So & Brush, 2008) and instructor facilitation (Eom, Wen, & Ashill, 2006). A prime example of the difference between face-to-face and online education is the extensive use of the “flipped classroom” (Maycock, 2019; Wang, Huang, & Schunn, 2019), in which students move to rehearsal activities faster and more frequently than in traditional classrooms, with less instructor lecture (Jung, 2011; Martin, Wang, & Sadaf, 2018). It has been consistently supported as an element of student perceptions of quality (Espasa & Meneses, 2010).

Teaching presence

Teaching Presence refers to students’ perceptions of the quality of communication in lectures, directions, and individual feedback, including encouragement (Jaggars & Xu, 2016; Marks et al., 2005). Specifically, instructor communication is clear, focused, and encouraging, and instructor feedback is customized and timely. If Instructional Support is what an instructor does before the course begins and in carrying out those plans, then Teaching Presence is what the instructor does while the class is conducted and in response to specific circumstances. For example, a course could be well designed but poorly delivered because the instructor is distracted; or a course could be poorly designed but an instructor might make up for the deficit by spending time and energy on elaborate communications and ad hoc teaching techniques. Teaching Presence is especially important for student satisfaction (Sebastianelli et al., 2015; Young, 2006) and is also referred to as instructor presence (Asoodar et al., 2016), learner-instructor interaction (Marks et al., 2005), and staff support (Jung, 2011). As with Instructional Support, it has been consistently supported as an element of student perceptions of quality.

Basic online modality

Basic Online Modality refers to the competent use of basic online class tools: online grading, navigation methods, the online grade book, and the announcements function. It is frequently clumped with instructional quality (Artino, 2010), service quality (Mohammadi, 2015), instructor expertise in e-teaching (Paechter, Maier, & Macher, 2010), and similar terms. As a narrowly defined concept, it is sometimes called technology (Asoodar et al., 2016; Bollinger & Martindale, 2004; Sun et al., 2008). The only empirical study that did not find Basic Online Modality (as technology) significant was Sun et al. (2008). Because Basic Online Modality is addressed with basic instructor training, some studies assert the importance of training (e.g., Asoodar et al., 2016).

Social presence

Social Presence refers to students’ perceptions of the quality of student-to-student interaction. It focuses on the quality of shared learning and collaboration among students, such as in threaded discussion responses (Garrison et al., 2003; Kehrwald, 2008). Much emphasized but also challenged in the CoI literature (Rourke & Kanuka, 2009), it has mixed support in the online literature. While some studies found Social Presence or related concepts significant (e.g., Asoodar et al., 2016; Bollinger & Martindale, 2004; Eom et al., 2006; Richardson, Maeda, Lv, & Caskurlu, 2017), others found it insignificant (Joo, Lim, & Kim, 2011; So & Brush, 2008; Sun et al., 2008).

Online social comfort

Online Social Comfort refers to the instructor’s ability to provide an environment in which anxiety is low, and students feel comfortable interacting even when expressing opposing viewpoints. While numerous studies have examined anxiety (e.g., Liaw & Huang, 2013; Otter et al., 2013; Sun et al., 2008), only one found anxiety insignificant (Asoodar et al., 2016); many others have not examined the concept.

Cognitive presence

Cognitive Presence refers to the engagement of students such that they perceive they are stimulated by the material and instructor to reflect deeply and critically, and to seek to understand different perspectives (Garrison et al., 2003). The instructor provides instructional materials and facilitates an environment that piques interest, encourages reflection, and enhances inclusiveness of perspectives (Durabi, Arrastia, Nelson, Cornille, & Liang, 2011). Cognitive Presence includes enhancing the applicability of material to students’ potential or current careers. It is supported as significant in many online studies (e.g., Artino, 2010; Asoodar et al., 2016; Joo et al., 2011; Marks et al., 2005; Sebastianelli et al., 2015; Sun et al., 2008). Further, while many instructors perceive that cognitive presence is diminished in online settings, neuroscientific studies indicate this need not be the case (Takamine, 2017). While numerous studies failed to examine Cognitive Presence, this review found no studies that discounted its significance for students.

Interactive online modality

Interactive Online Modality refers to “high-end” usage of online functionality; that is, the instructor makes good use of interactive online class tools such as video lectures, videoconferencing, and small group discussions. It is often included in concepts such as instructional quality (Artino, 2010; Asoodar et al., 2016; Mohammadi, 2015; Otter et al., 2013; Paechter et al., 2010) or engagement (Clayton, Blumberg, & Anthony, 2018). While individual methods have been investigated (e.g., Durabi et al., 2011), high-end engagement methods as a group have not.

Other independent variables affecting perceptions of quality include age, undergraduate versus graduate status, gender, ethnicity/race, discipline, educational motivation, and previous online experience. While age effects have been found to be small or insignificant, more notable effects have been reported by level of study, with graduate students reporting higher “success” (Macon, 2011) and community college students having greater difficulty with online classes (Legon & Garrett, 2019; Xu & Jaggars, 2014). Ethnicity and race effects have also been small or insignificant. Some situational variations and student preferences can be captured by paying attention to disciplinary differences (Arbaugh, 2005; Macon, 2011). Student motivation has been reported to be significant in completion and achievement, with better students doing equally well across face-to-face and online modes, and weaker students facing greater completion and achievement challenges (Clayton et al., 2018; Lu & Lemonde, 2013).

Research methods

To examine the various quality factors, we apply a critical success factor methodology, initially introduced in business school research in the 1970s. In 1981, Rockhart and Bullen codified an approach embodying the principles of critical success factors (CSFs) as a way to identify the information needs of executives, detailing steps for the collection and analysis of data to create a set of organizational CSFs (Rockhart & Bullen, 1981). CSFs describe the underlying or guiding principles which must be incorporated to ensure success.

Utilizing this methodology, CSFs in the context of this paper define key areas of instruction and design essential for an online class to be successful from a student’s perspective. Instructors implicitly know and consider these areas when setting up an online class and designing and directing activities and tasks important to achieving learning goals. CSFs make explicit those things good instructors may intuitively know and (should) do to enhance student learning. When made explicit, CSFs not only confirm the knowledge of successful instructors, but tap their intuition to guide and direct the accomplishment of quality instruction for entire programs. In addition, CSFs are linked with goals and objectives, helping generate a small number of truly important matters an instructor should focus attention on to achieve different thresholds of online success.

After a comprehensive literature review, an instrument was created to measure students’ perceptions of the importance of techniques and indicators leading to quality online classes. Items were designed to capture the major factors in the literature. The instrument was piloted during the 2017–18 academic year with a 397-student sample, facilitating an exploratory factor analysis that led to important preliminary findings (reference withheld for review). Based on the pilot, survey items were added and refined to include seven groups of quality teaching factors, two groups of items related to students’ overall acceptance of online classes, and a variable on their future online class enrollment. Demographic information was gathered to determine the effects of age, year in program, major, distance from the university, number of online classes taken, high school experience with online classes, and communication preferences on students’ acceptance of online classes.

This paper draws evidence from a sample of students enrolled in educational programs at the Jack H. Brown College of Business and Public Administration (JHBC), California State University San Bernardino (CSUSB). The JHBC offers a wide range of online courses for undergraduate and graduate programs. To ensure comparable learning outcomes, online and face-to-face classes in a given subject are similar in size (undergraduate classes are generally capped at 60 and graduate classes at 30) and are often taught by the same instructors. Students sometimes have the option to choose between face-to-face and online modes of learning.

A Qualtrics survey link was sent out by 11 instructors to students who were unlikely to be cross-enrolled in classes during the 2018–19 academic year. Approximately 2500 students were contacted, with some instructors providing class time to complete the anonymous survey. All students, whether they had taken an online class or not, were encouraged to respond. Nine hundred eighty-seven students responded, representing a 40% response rate. Although drawn from a single business school, it is a broad sample representing students from several disciplines (management, accounting and finance, marketing, information decision sciences, and public administration) as well as both graduate and undergraduate programs of study.

The sample is young, with 78% of students under 30. It includes almost no lower-division students (i.e., freshmen and sophomores), 73% upper-division students (i.e., juniors and seniors), and 24% graduate students (master’s level). Only 17% reported having taken a hybrid or online class in high school. There was a wide range of exposure to university-level online courses, with 47% reporting having taken 1 to 4 classes and 21% reporting no online class experience. Reflecting a Hispanic-serving institution, 54% self-identified as Latino, 18% White, and 13% Asian and Pacific Islander. The five largest majors were accounting & finance (25%), management (21%), master of public administration (16%), marketing (12%), and information decision sciences (10%). Seventy-four percent work full- or part-time. See Table 1 for demographic data.

Measures and procedure

To increase the reliability of evaluation scores, composite evaluation variables were formed after an exploratory factor analysis of individual evaluation items. A principal component method with Quartimin (oblique) rotation was applied to explore the factor structure of student perceptions of online teaching CSFs. Items with importance-perception coefficients (loadings) greater than .30 were included, a commonly accepted threshold in factor analysis. A simple least-squares regression analysis was applied to test the significance of the factors for students’ impressions of online classes.
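The extraction step of such an analysis can be sketched as follows. This is a minimal illustration on synthetic data (the data matrix and function names are hypothetical), covering only principal component extraction and the .30 loading threshold; the Quartimin rotation applied in the actual study is omitted.

```python
import numpy as np

def pc_loadings(X, n_factors, threshold=0.30):
    """Principal component extraction: returns loadings (items x factors),
    a mask of items passing the loading threshold, and variance explained."""
    Z = (X - X.mean(axis=0)) / X.std(axis=0)      # standardize items
    R = np.corrcoef(Z, rowvar=False)              # item correlation matrix
    vals, vecs = np.linalg.eigh(R)                # eigenvalues in ascending order
    order = np.argsort(vals)[::-1]                # re-sort descending
    vals, vecs = vals[order], vecs[:, order]
    loadings = vecs[:, :n_factors] * np.sqrt(vals[:n_factors])
    retained = np.abs(loadings).max(axis=1) >= threshold
    explained = vals[:n_factors].sum() / len(vals)
    return loadings, retained, explained

# Synthetic example: six survey items driven by two latent factors
rng = np.random.default_rng(0)
f1, f2 = rng.normal(size=(2, 300))
X = np.column_stack(
    [f1 + 0.2 * rng.normal(size=300) for _ in range(3)]
    + [f2 + 0.2 * rng.normal(size=300) for _ in range(3)]
)
loadings, retained, explained = pc_loadings(X, n_factors=2)
```

With clean block structure like this, each item loads strongly on one component and the two components account for most of the item variance, mirroring the kind of output summarized in the study's factor tables.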

Exploratory factor constructs

Using a threshold loading of 0.3 for items, 37 items loaded on seven factors. All factors were logically consistent. The first factor, with eight items, was labeled Teaching Presence. Items included providing clear instructions, staying on task, clear deadlines, and customized feedback on strengths and weaknesses. Teaching Presence items all related to instructor involvement during the course as a director, monitor, and learning facilitator. The second factor, with seven items, aligned with Cognitive Presence. Items included stimulating curiosity, opportunities for reflection, helping students construct explanations posed in online courses, and the applicability of material. The third factor, with six items, aligned with Social Presence, defined as providing student-to-student learning opportunities. Items included getting to know course participants for a sense of belonging, forming impressions of other students, and interacting with others. The fourth factor, with six new items as well as two (“interaction with other students” and “a sense of community in the class”) shared with the third factor, was Instructional Support, which related to the instructor’s role in providing students a cohesive learning experience. Items included providing sufficient rehearsal, structured feedback, techniques for communication, a navigation guide, a detailed syllabus, and coordinating student interaction to create a sense of online community. This factor also included enthusiasm, which students generally interpreted as a sign of a robustly designed course rather than animation in a traditional lecture. The fifth factor, labeled Basic Online Modality, focused on the basic technological requirements for a functional online course. Its items included allowing students to make online submissions, use of online gradebooks, online grading, online quizzes (viewed by students as mechanical practice opportunities rather than small tests), and navigation, a key component of online modality. The sixth factor, loading on four items, was labeled Online Social Comfort. Items included comfort discussing ideas online, comfort disagreeing, developing a sense of collaboration via discussion, and considering online communication an excellent medium for social interaction. The final factor was called Interactive Online Modality because it included items for “richer” communications or interactions, whether one- or two-way. Items included videoconferencing, instructor-generated videos, and small group discussions. Taken together, these seven factors explained 67% of the variance, within the range considered acceptable in social science research for a robust model (Hair, Black, Babin, & Anderson, 2014). See Table 2 for the full list.

To test factor reliability, Cronbach’s alpha was calculated for each variable. All produced values greater than 0.7, the standard threshold for reliability, except system trust, which was therefore dropped. To gauge students’ sense of factor importance, item scores were averaged for each factor. Factor means (lower means indicating higher importance to students) ranged from 1.5 to 2.6 on a 5-point scale. Basic Online Modality was most important, followed by Instructional Support and Teaching Presence. Students deemed Cognitive Presence, Online Social Comfort, and Interactive Online Modality less important. The least important factor for this sample was Social Presence. Table 3 arrays the critical success factor means, standard deviations, and Cronbach alphas.
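As a reference for the reliability check above, Cronbach's alpha is the ratio of summed item variances to total-score variance, rescaled by the number of items. A minimal computation on made-up scores (the data are hypothetical, for illustration only):

```python
def cronbach_alpha(items):
    """Cronbach's alpha for a respondents-by-items score matrix."""
    k = len(items[0])                      # number of items in the scale
    def variance(xs):
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / len(xs)
    item_vars = [variance([row[j] for row in items]) for j in range(k)]
    total_var = variance([sum(row) for row in items])
    return (k / (k - 1)) * (1 - sum(item_vars) / total_var)

# Three respondents rating a two-item scale on a 5-point scale
scores = [[1, 2], [2, 1], [3, 3]]
alpha = cronbach_alpha(scores)
```

Values above the 0.7 threshold cited in the text would indicate acceptable internal consistency; the small example here falls below it.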

To determine whether particular subgroups of respondents viewed the factors differently, a series of ANOVAs was conducted using factor means as dependent variables. Six demographic variables were used as independent variables: graduate vs. undergraduate status, age, work status, ethnicity, discipline, and past online experience. To determine the strength of association of the independent variables with each of the seven CSFs, eta squared was calculated for each ANOVA. Eta squared indicates the proportion of variance in the dependent variable explained by the independent variable; values of .01, .06, and .14 are conventionally interpreted as small, medium, and large effect sizes, respectively (Green & Salkind, 2003). Table 4 summarizes the eta squared values for the ANOVA tests, with values less than .01 omitted.
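Eta squared as used here is simply the between-group sum of squares divided by the total sum of squares. A small sketch with made-up subgroup scores (all values hypothetical):

```python
def eta_squared(groups):
    """Proportion of variance explained by group membership
    (SS_between / SS_total), as reported for the ANOVA tests."""
    values = [x for g in groups for x in g]
    grand_mean = sum(values) / len(values)
    ss_total = sum((x - grand_mean) ** 2 for x in values)
    ss_between = sum(
        len(g) * ((sum(g) / len(g)) - grand_mean) ** 2 for g in groups
    )
    return ss_between / ss_total

# Hypothetical factor-importance scores for two respondent subgroups
undergrad = [2.1, 2.4, 2.6, 2.3]
grad = [1.8, 1.6, 1.9, 2.0]
effect = eta_squared([undergrad, grad])
```

A result above .14 would count as a "large" effect under the convention cited in the text; identical group means yield 0, and zero within-group variance yields 1.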

While there were no significant differences in factor means among students from different disciplines in the College, all five other independent variables had some small effect on some or all CSFs. Graduate students tended to rate Interactive Online Modality, Instructional Support, Teaching Presence, and Cognitive Presence higher than undergraduates. Older students placed more value on Interactive Online Modality. Full-time working students rated all factors except Online Social Comfort slightly higher than part-timers and non-working students. Latino and White students rated Basic Online Modality and Instructional Support higher; Asian and Pacific Islander students rated Social Presence higher. Students who had taken more online classes rated all factors higher.

In addition to the factor scores, two variables were constructed to capture students’ resultant impressions of the online experience. Both were logically consistent, with a Cronbach’s α greater than 0.75. The first, with six items, labeled “online acceptance,” included items such as “I enjoy online learning,” “My overall impression of hybrid/online learning is very good,” and “the instructors of online/hybrid classes are generally responsive.” The second, labeled “face-to-face preference,” combined four items, including enjoying, learning, and communicating more in face-to-face classes, as well as perceiving greater fairness and equity. In addition to these two constructed variables, a one-item variable, “online enrollment,” was also used subsequently in the regression analysis. It asked: if hybrid/online classes are well taught and available, how much of your entire course selection would online education make up going forward?

Regression results

As noted above, two constructed variables and one item were used as dependent variables for purposes of regression analysis. They were online acceptance, F2F preference, and the selection of online classes. In addition to seven quality-of-teaching factors identified by factor analysis, control variables included level of education (graduate versus undergraduate), age, ethnicity, work status, distance to university, and number of online/hybrid classes taken in the past. See Table 5.

When the eta squared values for the control factors were examined, only one approached a medium effect: graduate versus undergraduate status had a .05 effect (considered medium) on Interactive Online Modality, meaning graduate students were more sensitive to interactive modality than undergraduates. Multiple regression analyses of the critical success factors and online impressions were conducted to compare the conditions under which factors were significant. The only consistently significant control factor was the number of online classes taken: the more classes students had taken online, the more inclined they were to take future classes. Level of program, age, ethnicity, and work status did not significantly affect students’ choice or overall acceptance of online classes.
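The regression step can be sketched as a least-squares fit of an impression variable on factor scores. The sketch below uses synthetic data (the outcome, factor names, and coefficients are made up for illustration) and stands in for the full models with controls reported in the paper's tables.

```python
import numpy as np

def ols(X, y):
    """Ordinary least squares fit: returns [intercept, coefficients...]."""
    A = np.column_stack([np.ones(len(X)), X])    # prepend intercept column
    beta, *_ = np.linalg.lstsq(A, y, rcond=None)
    return beta

# Synthetic illustration: an "online acceptance" score driven by two factor scores
rng = np.random.default_rng(1)
factors = rng.normal(size=(200, 2))              # e.g., two CSF scores per student
acceptance = (3.0 + 0.5 * factors[:, 0] + 0.3 * factors[:, 1]
              + 0.1 * rng.normal(size=200))
beta = ols(factors, acceptance)                  # recovers roughly [3.0, 0.5, 0.3]
```

Significance testing of the coefficients, as in the paper, would additionally require standard errors; the sketch shows only the coefficient estimation.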

The least restrictive condition was online enrollment (Table 6). That is, students might not feel online courses were ideal, but because of convenience and scheduling might enroll in them if minimum threshold expectations were met. For online enrollment, three factors were significant and positive (at the 0.1 level): Basic Online Modality, Cognitive Presence, and Online Social Comfort. These least-demanding students expected classes to have basic technological functionality, to provide good opportunities for knowledge acquisition, and to offer comfortable interaction in small groups. Students who demand good Instructional Support (e.g., rehearsal opportunities, standardized feedback, a clear syllabus) were less likely to enroll.

Online acceptance was more restrictive (see Table 7). This variable captured the idea that students enrolled in online classes not only out of necessity, but with an appreciation of the positive attributes of online instruction, which balanced the negative aspects. When this standard was applied, students expected not only Basic Online Modality, Cognitive Presence, and Online Social Comfort, but also expected their instructors to be highly engaged virtually as the course progressed (Teaching Presence) and to create strong student-to-student dynamics (Social Presence). Students who rated Instructional Support higher were less accepting of online classes.

Another restrictive condition was catering to the needs of students who preferred face-to-face classes (see Table 8), even when online classes were well taught. Unlike students more accepting of, or more likely to enroll in, online classes, this group rated Instructional Support as critical to enrolling rather than as a negative factor when absent. Also unlike the other two groups, these students demanded appropriate interactive mechanisms (Interactive Online Modality) enabling richer communication (e.g., videoconferencing). Student-to-student collaboration (Social Presence) was also significant. This group rated Cognitive Presence and Online Social Comfort as significant as well, but only in their absence. That is, these students were most attached to direct interaction with the instructor and other students, rather than to specific teaching methods. Interestingly, Basic Online Modality and Teaching Presence were not significant. Our interpretation is that this student group, most critical of online classes for their loss of physical interaction, is beyond being concerned with mechanical technical interaction and demands higher levels of interactivity and instructional sophistication.

Discussion and study limitations

Some past studies have used robust empirical methods to identify a single factor, or a small number of factors, related to quality from a student’s perspective, but have not sought to be relatively comprehensive. Others have used a longer series of itemized factors, but with less robust methods, and have not tied those factors back to the literature. This study used the literature to develop a relatively comprehensive list of items focused on quality teaching in a single rigorous protocol. While a beta test had identified five coherent factors, the current survey incorporated substantial changes that sharpened the focus on quality factors rather than antecedent factors and better articulated the array of factors often lumped under the mantle of “teaching presence.” In addition, the study examined these factors against threshold expectations: from minimal, when flexibility is the driving consideration; to modest, when students want a “good” online class; to high, when students demand an interactive virtual experience equivalent to face-to-face.

Exploratory factor analysis identified seven factors that were reliable, coherent, and significant under different conditions. When considering students’ overall sense of importance, they are, in order: Basic Online Modality, Instructional Support, Teaching Presence, Cognitive Presence, Online Social Comfort, Interactive Online Modality, and Social Presence. Students are most concerned with the basics of a course first, that is, technological and instructor competence. Next they want engagement and virtual comfort. Social Presence, while valued, is the least critical from this overall perspective.

The factor analysis is quite consistent with the range of factors identified in the literature, indicating that students can differentiate among aspects of what have been clumped together as larger concepts, such as teaching presence. Essentially, the instructor’s role in quality can be divided into command of basic online functionality, good design, and good presence during the class. Command of basic functionality is paramount. Because so much of an online class must be built in advance, the quality of class design is rated more highly than the instructor’s role in facilitating the class. Taken as a whole, the instructor’s role in traditional teaching elements is primary, as we would expect. Cognitive presence, especially the pertinence of instructional material and its applicability to student interests, has always been found significant when studied, and was highly rated here as well. Finally, the degree to which students feel comfortable with the online environment and enjoy the learner-to-learner aspect, while less supported in past empirical studies, was found significant here, but was rated lowest among the quality factors by students.

Regression analysis paints a more nuanced picture, depending on student focus; it also helps explain some of the heterogeneity of previous studies, depending on the dependent variables used. If convenience and scheduling are critical and students are less demanding, the minimum requirements are Basic Online Modality, Cognitive Presence, and Online Social Comfort. That is, students expect an instructor who knows how to use the online platform, delivers useful information, and provides a comfortable learning environment. They do not expect to get poor design, but neither do they expect much in terms of teaching presence, learner-to-learner interaction, or interactive teaching.

When students are signing up for critical classes, or have both F2F and online options, they hold a higher standard. They expect not only the factors involved in enrolling in noncritical classes, but also good Teaching Presence and Social Presence. Students who simply need a class may be willing to teach themselves a bit more, but students who want a good class expect a highly present instructor in terms of responsiveness and immediacy. “Good” classes must not only create a comfortable atmosphere but, in social science classes at least, must provide strong learner-to-learner interactions as well. At the time of the research, most students believed that a class can be good without high interactivity via pre-recorded video and videoconferencing. That may or may not change over time as the technology thresholds of various video media become easier to use, more reliable, and more commonplace.

The most demanding students are those who prefer F2F classes because of learning style preferences, poor past experiences, or both. Such students seem to assume that a worthwhile online class has basic functionality and that the instructor provides a strong presence. They are also critical of the absence of Cognitive Presence and Online Social Comfort. They want strong Instructional Support and Social Presence. But in addition, and uniquely, they expect Interactive Online Modality, which provides the greatest possible verisimilitude to the traditional classroom. More than the other two groups, these students crave human interaction in the learning process, both with the instructor and with other students.

These findings shed light on the possible ramifications of the COVID-19 aftermath. Many universities around the world jumped from relatively low levels of online instruction at the beginning of spring 2020 to nearly 100% by mandate by the end of the spring term. The question becomes: what will happen after the mandate is removed? Will demand return to pre-crisis levels, increase modestly, or skyrocket? Time will be the best judge, but the findings here suggest that the ability and interest of instructors and institutions to “rise to the occasion” with quality teaching will have as much effect on demand as students becoming more acclimated to online learning. If, in the rush to get classes online, many students experience shoddy basic functional competence, poor instructional design, sporadic teaching presence, and poorly implemented cognitive and social aspects, they may be quite willing to return to the traditional classroom. If faculty and the institutions supporting them are able to increase the quality of classes despite time pressures, then most students may be interested in more hybrid and fully online classes. If instructors are able to introduce high-quality interactive teaching, nearly the entire student population will be interested in more online classes. Of course students will have a variety of experiences, but this analysis suggests that those instructors, departments, and institutions that put greater effort into the temporary adjustment (and resist it less) will be substantially more likely to see increases in demand beyond the modest national trajectory of the last decade or so.

There are several study limitations. First, the study does not include a sample of non-respondents, who may have a somewhat different profile. Second, the study draws from a single college and university; the profile derived here may vary significantly by type of student. Third, some survey statements may have led respondents to rate quality based on their own experience rather than assess the general importance of online course elements; for example, “I felt comfortable participating in the course discussions” could be revised to “comfort in participating in course discussions.” The authors judged differences among subgroups (e.g., among majors) to be small and statistically insignificant, but it is possible that differences between, say, biology and marketing students would be significant, leading the factors to be ordered differently. Emphasis and ordering might also vary at a community college versus a research-oriented university (Gonzalez, 2009).

Availability of data and materials

We will make the data available.

Al-Gahtani, S. S. (2016). Empirical investigation of e-learning acceptance and assimilation: A structural equation model. Applied Computing and Informatics, 12, 27–50.

Alqurashi, E. (2016). Self-efficacy in online learning environments: A literature review. Contemporary Issues in Education Research (CIER), 9(1), 45–52.

Anderson, T. (2016). A fourth presence for the Community of Inquiry model?

Annand, D. (2011). Social presence within the community of inquiry framework. The International Review of Research in Open and Distributed Learning , 12 (5), 40.

Arbaugh, J. B. (2005). How much does “subject matter” matter? A study of disciplinary effects in on-line MBA courses. Academy of Management Learning & Education , 4 (1), 57–73.

Arbaugh, J. B., Cleveland-Innes, M., Diaz, S. R., Garrison, D. R., Ice, P., Richardson, J. C., & Swan, K. P. (2008). Developing a community of inquiry instrument: Testing a measure of the Community of Inquiry framework using a multi-institutional sample. Internet and Higher Education , 11 , 133–136.

Armellini, A., & De Stefani, M. (2016). Social presence in the 21st century: An adjustment to the Community of Inquiry framework. British Journal of Educational Technology , 47 (6), 1202–1216.

Arruabarrena, R., Sánchez, A., Blanco, J. M., et al. (2019). Integration of good practices of active methodologies with the reuse of student-generated content. International Journal of Educational Technology in Higher Education , 16 , #10.

Arthur, L. (2009). From performativity to professionalism: Lecturers’ responses to student feedback. Teaching in Higher Education , 14 (4), 441–454.

Artino, A. R. (2010). Online or face-to-face learning? Exploring the personal factors that predict students’ choice of instructional format. Internet and Higher Education , 13 , 272–276.

Asoodar, M., Vaezi, S., & Izanloo, B. (2016). Framework to improve e-learner satisfaction and further strengthen e-learning implementation. Computers in Human Behavior , 63 , 704–716.

Bernard, R. M., et al. (2004). How does distance education compare with classroom instruction? A meta-analysis of the empirical literature. Review of Educational Research , 74 (3), 379–439.

Bolliger, D., & Martindale, T. (2004). Key factors for determining student satisfaction in online courses. International Journal on E-Learning, 3(1), 61–67.

Brinkley-Etzkorn, K. E. (2018). Learning to teach online: Measuring the influence of faculty development training on teaching effectiveness through a TPACK lens. The Internet and Higher Education , 38 , 28–35.

Chickering, A. W., & Gamson, Z. F. (1987). Seven principles for good practice in undergraduate education. AAHE Bulletin , 3 , 7.

Choi, I., Land, S. M., & Turgeon, A. J. (2005). Scaffolding peer-questioning strategies to facilitate metacognition during online small group discussion. Instructional Science , 33 , 483–511.

Clayton, K. E., Blumberg, F. C., & Anthony, J. A. (2018). Linkages between course status, perceived course value, and students’ preferences for traditional versus non-traditional learning environments. Computers & Education , 125 , 175–181.

Cleveland-Innes, M., & Campbell, P. (2012). Emotional presence, learning, and the online learning environment. The International Review of Research in Open and Distributed Learning , 13 (4), 269–292.

Cohen, A., & Baruth, O. (2017). Personality, learning, and satisfaction in fully online academic courses. Computers in Human Behavior , 72 , 1–12.

Crews, T., & Butterfield, J. (2014). Data for flipped classroom design: Using student feedback to identify the best components from online and face-to-face classes. Higher Education Studies , 4 (3), 38–47.

Dawson, P., Henderson, M., Mahoney, P., Phillips, M., Ryan, T., Boud, D., & Molloy, E. (2019). What makes for effective feedback: Staff and student perspectives. Assessment & Evaluation in Higher Education , 44 (1), 25–36.

Drew, C., & Mann, A. (2018). Unfitting, uncomfortable, unacademic: A sociological reading of an interactive mobile phone app in university lectures. International Journal of Educational Technology in Higher Education , 15 , #43.

Darabi, A., Arrastia, M., Nelson, D., Cornille, T., & Liang, X. (2011). Cognitive presence in asynchronous online learning: A comparison of four discussion strategies. Journal of Computer Assisted Learning, 27(3), 216–227.

Eom, S. B., Wen, H. J., & Ashill, N. (2006). The determinants of students’ perceived learning outcomes and satisfaction in university online education: An empirical investigation. Decision Sciences Journal of Innovative Education , 4 (2), 215–235.

Espasa, A., & Meneses, J. (2010). Analysing feedback processes in an online teaching and learning environment: An exploratory study. Higher Education , 59 (3), 277–292.

Farrell, O., & Brunton, J. (2020). A balancing act: A window into online student engagement experiences. International Journal of Educational Technology in Higher Education, 17, #25.

Fidalgo, P., Thormann, J., Kulyk, O., et al. (2020). Students’ perceptions on distance education: A multinational study. International Journal of Educational Technology in Higher Education, 17, #18.

Flores, Ò., del-Arco, I., & Silva, P. (2016). The flipped classroom model at the university: Analysis based on professors’ and students’ assessment in the educational field. International Journal of Educational Technology in Higher Education , 13 , #21.

Garrison, D. R., Anderson, T., & Archer, W. (2003). A theory of critical inquiry in online distance education. Handbook of Distance Education , 1 , 113–127.

Gong, D., Yang, H. H., & Cai, J. (2020). Exploring the key influencing factors on college students’ computational thinking skills through flipped-classroom instruction. International Journal of Educational Technology in Higher Education , 17 , #19.

Gonzalez, C. (2009). Conceptions of, and approaches to, teaching online: A study of lecturers teaching postgraduate distance courses. Higher Education , 57 (3), 299–314.

Grandzol, J. R., & Grandzol, C. J. (2006). Best practices for online business education. International Review of Research in Open and Distance Learning, 7(1), 1–18.

Green, S. B., & Salkind, N. J. (2003). Using SPSS: Analyzing and understanding data (3rd ed.). Upper Saddle River: Prentice Hall.

Hair, J. F., Black, W. C., Babin, B. J., & Anderson, R. E. (2014). Multivariate data analysis: Pearson new international edition . Essex: Pearson Education Limited.

Harjoto, M. A. (2017). Blended versus face-to-face: Evidence from a graduate corporate finance class. Journal of Education for Business , 92 (3), 129–137.

Hong, K.-S. (2002). Relationships between students’ instructional variables with satisfaction and learning from a web-based course. The Internet and Higher Education , 5 , 267–281.

Horvitz, B. S., Beach, A. L., Anderson, M. L., & Xia, J. (2015). Examination of faculty self-efficacy related to online teaching. Innovation Higher Education , 40 , 305–316.

Inside Higher Education and Gallup. (2019). The 2019 survey of faculty attitudes on technology. Author .

Jaggars, S. S., & Xu, D. (2016). How do online course design features influence student performance? Computers and Education , 95 , 270–284.

Joo, Y. J., Lim, K. Y., & Kim, E. K. (2011). Online university students’ satisfaction and persistence: Examining perceived level of presence, usefulness and ease of use as predictor in a structural model. Computers & Education , 57 (2), 1654–1664.

Jung, I. (2011). The dimensions of e-learning quality: From the learner’s perspective. Educational Technology Research and Development , 59 (4), 445–464.

Kay, R., MacDonald, T., & DiGiuseppe, M. (2019). A comparison of lecture-based, active, and flipped classroom teaching approaches in higher education. Journal of Computing in Higher Education , 31 , 449–471.

Kehrwald, B. (2008). Understanding social presence in text-based online learning environments. Distance Education , 29 (1), 89–106.

Kintu, M. J., Zhu, C., & Kagambe, E. (2017). Blended learning effectiveness: The relationship between student characteristics, design features and outcomes. International Journal of Educational Technology in Higher Education , 14 , #7.

Kuo, Y.-C., Walker, A. E., Schroder, K. E., & Belland, B. R. (2013). Interaction, internet self-efficacy, and self-regulated learning as predictors of student satisfaction in online education courses. Internet and Higher Education, 20, 35–50.

Lange, C., & Costley, J. (2020). Improving online video lectures: Learning challenges created by media. International Journal of Educational Technology in Higher Education , 17 , #16.

le Roux, I., & Nagel, L. (2018). Seeking the best blend for deep learning in a flipped classroom – Viewing student perceptions through the Community of Inquiry lens. International Journal of Educational Technology in Higher Education, 15, #16.

Lee, H.-J., & Rha, I. (2009). Influence of structure and interaction on student achievement and satisfaction in web-based distance learning. Educational Technology & Society , 12 (4), 372–382.

Lee, Y., Stringer, D., & Du, J. (2017). What determines students’ preference of online to F2F class? Business Education Innovation Journal , 9 (2), 97–102.

Legon, R., & Garrett, R. (2019). CHLOE 3: Behind the numbers . Published online by Quality Matters and Eduventures.

Liaw, S.-S., & Huang, H.-M. (2013). Perceived satisfaction, perceived usefulness and interactive learning environments as predictors of self-regulation in e-learning environments. Computers & Education , 60 (1), 14–24.

Lu, F., & Lemonde, M. (2013). A comparison of online versus face-to-face students teaching delivery in statistics instruction for undergraduate health science students. Advances in Health Science Education , 18 , 963–973.

Lundin, M., Bergviken Rensfeldt, A., Hillman, T., Lantz-Andersson, A., & Peterson, L. (2018). Higher education dominance and siloed knowledge: a systematic review of flipped classroom research. International Journal of Educational Technology in Higher Education , 15 (1).

Macon, D. K. (2011). Student satisfaction with online courses versus traditional courses: A meta-analysis. Dissertation, Northcentral University, CA.

Mann, J., & Henneberry, S. (2012). What characteristics of college students influence their decisions to select online courses? Online Journal of Distance Learning Administration , 15 (5), 1–14.

Mansbach, J., & Austin, A. E. (2018). Nuanced perspectives about online teaching: Mid-career senior faculty voices reflecting on academic work in the digital age. Innovative Higher Education , 43 (4), 257–272.

Marks, R. B., Sibley, S. D., & Arbaugh, J. B. (2005). A structural equation model of predictors for effective online learning. Journal of Management Education , 29 (4), 531–563.

Martin, F., Wang, C., & Sadaf, A. (2018). Student perception of facilitation strategies that enhance instructor presence, connectedness, engagement and learning in online courses. Internet and Higher Education , 37 , 52–65.

Maycock, K. W. (2019). Chalk and talk versus flipped learning: A case study. Journal of Computer Assisted Learning , 35 , 121–126.

McGivney-Burelle, J. (2013). Flipping Calculus. PRIMUS: Problems, Resources, and Issues in Mathematics Undergraduate Studies, 23(5), 477–486.

Mohammadi, H. (2015). Investigating users’ perspectives on e-learning: An integration of TAM and IS success model. Computers in Human Behavior , 45 , 359–374.

Nair, S. S., Tay, L. Y., & Koh, J. H. L. (2013). Students’ motivation and teachers’ teaching practices towards the use of blogs for writing of online journals. Educational Media International , 50 (2), 108–119.

Nguyen, T. (2015). The effectiveness of online learning: Beyond no significant difference and future horizons. MERLOT Journal of Online Learning and Teaching , 11 (2), 309–319.

Ni, A. Y. (2013). Comparing the effectiveness of classroom and online learning: Teaching research methods. Journal of Public Affairs Education , 19 (2), 199–215.

Nouri, J. (2016). The flipped classroom: For active, effective and increased learning – Especially for low achievers. International Journal of Educational Technology in Higher Education , 13 , #33.

O’Neill, D. K., & Sai, T. H. (2014). Why not? Examining college students’ reasons for avoiding an online course. Higher Education , 68 (1), 1–14.

O'Flaherty, J., & Phillips, C. (2015). The use of flipped classrooms in higher education: A scoping review. The Internet and Higher Education , 25 , 85–95.

Open & Distance Learning Quality Council (2012). ODLQC standards. England: Author.

Ortagus, J. C. (2017). From the periphery to prominence: An examination of the changing profile of online students in American higher education. Internet and Higher Education , 32 , 47–57.

Otter, R. R., Seipel, S., Graef, T., Alexander, B., Boraiko, C., Gray, J., … Sadler, K. (2013). Comparing student and faculty perceptions of online and traditional courses. Internet and Higher Education , 19 , 27–35.

Paechter, M., Maier, B., & Macher, D. (2010). Online or face-to-face? Students’ experiences and preferences in e-learning. Internet and Higher Education , 13 , 292–329.

Prinsloo, P. (2016). (re)considering distance education: Exploring its relevance, sustainability and value contribution. Distance Education , 37 (2), 139–145.

Quality Matters (2018). Specific review standards from the QM higher Education rubric , (6th ed., ). MD: MarylandOnline.

Richardson, J. C., Maeda, Y., Lv, J., & Caskurlu, S. (2017). Social presence in relation to students’ satisfaction and learning in the online environment: A meta-analysis. Computers in Human Behavior , 71 , 402–417.

Rockhart, J. F., & Bullen, C. V. (1981). A primer on critical success factors . Cambridge: Center for Information Systems Research, Massachusetts Institute of Technology.

Rourke, L., & Kanuka, H. (2009). Learning in Communities of Inquiry: A review of the literature. Journal of Distance Education / Revue de l’éducation à distance, 23(1), 19–48. Athabasca University Press.

Sebastianelli, R., Swift, C., & Tamimi, N. (2015). Factors affecting perceived learning, satisfaction, and quality in the online MBA: A structural equation modeling approach. Journal of Education for Business , 90 (6), 296–305.

Shen, D., Cho, M.-H., Tsai, C.-L., & Marra, R. (2013). Unpacking online learning experiences: Online learning self-efficacy and learning satisfaction. Internet and Higher Education , 19 , 10–17.

Sitzmann, T., Kraiger, K., Stewart, D., & Wisher, R. (2006). The comparative effectiveness of web-based and classroom instruction: A meta-analysis. Personnel Psychology , 59 (3), 623–664.

So, H. J., & Brush, T. A. (2008). Student perceptions of collaborative learning, social presence and satisfaction in a blended learning environment: Relationships and critical factors. Computers & Education , 51 (1), 318–336.

Song, L., Singleton, E. S., Hill, J. R., & Koh, M. H. (2004). Improving online learning: Student perceptions of useful and challenging characteristics. The Internet and Higher Education , 7 (1), 59–70.

Sun, P. C., Tsai, R. J., Finger, G., Chen, Y. Y., & Yeh, D. (2008). What drives a successful e-learning? An empirical investigation of the critical factors influencing learner satisfaction. Computers & Education , 50 (4), 1183–1202.

Takamine, K. (2017). Michelle D. Miller: Minds online: Teaching effectively with technology. Higher Education, 73, 789–791.

Tanner, J. R., Noser, T. C., & Totaro, M. W. (2009). Business faculty and undergraduate students’ perceptions of online learning: A comparative study. Journal of Information Systems Education , 20 (1), 29.

Tucker, B. (2012). The flipped classroom. Education Next , 12 (1), 82–83.

Van Wart, M., Ni, A., Ready, D., Shayo, C., & Court, J. (2020). Factors leading to online learner satisfaction. Business Educational Innovation Journal , 12 (1), 15–24.

Van Wart, M., Ni, A., Rose, L., McWeeney, T., & Worrell, R. A. (2019). Literature review and model of online teaching effectiveness integrating concerns for learning achievement, student satisfaction, faculty satisfaction, and institutional results. Pan-Pacific Journal of Business Research, 10(1), 1–22.

Ventura, A. C., & Moscoloni, N. (2015). Learning styles and disciplinary differences: A cross-sectional study of undergraduate students. International Journal of Learning and Teaching , 1 (2), 88–93.

Vlachopoulos, D., & Makri, A. (2017). The effect of games and simulations on higher education: A systematic literature review. International Journal of Educational Technology in Higher Education , 14 , #22.

Wang, Y., Huang, X., & Schunn, C. D. (2019). Redesigning flipped classrooms: A learning model and its effects on student perceptions. Higher Education , 78 , 711–728.

Wingo, N. P., Ivankova, N. V., & Moss, J. A. (2017). Faculty perceptions about teaching online: Exploring the literature using the technology acceptance model as an organizing framework. Online Learning , 21 (1), 15–35.

Xu, D., & Jaggars, S. S. (2014). Performance gaps between online and face-to-face courses: Differences across types of students and academic subject areas. Journal of Higher Education , 85 (5), 633–659.

Young, S. (2006). Student views of effective online teaching in higher education. American Journal of Distance Education , 20 (2), 65–77.

Zawacki-Richter, O., & Naidu, S. (2016). Mapping research trends from 35 years of publications in Distance Education. Distance Education, 37(3), 245–269.


Funding

No external funding was received.

Author information

Authors and Affiliations

JHB College of Business and Public Administration, California State University, San Bernardino, 5500 University Parkway, San Bernardino, California, 92407, USA

Montgomery Van Wart, Anna Ni, Pamela Medina, Jesus Canelon, Melika Kordrostami, Jing Zhang & Yu Liu


Contributions

All authors contributed equally. The author(s) read and approved the final manuscript.

Corresponding author

Correspondence to Montgomery Van Wart .

Ethics declarations

Competing interests.

We have no competing interests.

Additional information

Publisher’s note.

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit .

About this article

Cite this article.

Van Wart, M., Ni, A., Medina, P. et al. Integrating students’ perspectives about online learning: a hierarchy of factors. Int J Educ Technol High Educ 17 , 53 (2020).

Download citation

Received : 29 April 2020

Accepted : 30 July 2020

Published : 02 December 2020


  • Online education
  • Online teaching
  • Student perceptions
  • Online quality
  • Student presence

Student Opinion

Is Online Learning Effective?

A new report found that the heavy dependence on technology during the pandemic caused “staggering” education inequality. What was your experience?

By Natalie Proulx

During the coronavirus pandemic, many schools moved classes online. Was your school one of them? If so, what was it like to attend school online? Did you enjoy it? Did it work for you?

In “Dependence on Tech Caused ‘Staggering’ Education Inequality, U.N. Agency Says,” Natasha Singer writes:

In early 2020, as the coronavirus spread, schools around the world abruptly halted in-person education. To many governments and parents, moving classes online seemed the obvious stopgap solution.

In the United States, school districts scrambled to secure digital devices for students. Almost overnight, videoconferencing software like Zoom became the main platform teachers used to deliver real-time instruction to students at home.

Now a report from UNESCO, the United Nations’ educational and cultural organization, says that overreliance on remote learning technology during the pandemic led to “staggering” education inequality around the world. It was, according to a 655-page report that UNESCO released on Wednesday, a worldwide “ed-tech tragedy.”

The report, from UNESCO’s Future of Education division, is likely to add fuel to the debate over how governments and local school districts handled pandemic restrictions, and whether it would have been better for some countries to reopen schools for in-person instruction sooner.

The UNESCO researchers argued in the report that “unprecedented” dependence on technology — intended to ensure that children could continue their schooling — worsened disparities and learning loss for hundreds of millions of students around the world, including in Kenya, Brazil, Britain and the United States.

The promotion of remote online learning as the primary solution for pandemic schooling also hindered public discussion of more equitable, lower-tech alternatives, such as regularly providing schoolwork packets for every student, delivering school lessons by radio or television — and reopening schools sooner for in-person classes, the researchers said.

“Available evidence strongly indicates that the bright spots of the ed-tech experiences during the pandemic, while important and deserving of attention, were vastly eclipsed by failure,” the UNESCO report said.

The UNESCO researchers recommended that education officials prioritize in-person instruction with teachers, not online platforms, as the primary driver of student learning. And they encouraged schools to ensure that emerging technologies like A.I. chatbots concretely benefited students before introducing them for educational use.

Education and industry experts welcomed the report, saying more research on the effects of pandemic learning was needed.

“The report’s conclusion — that societies must be vigilant about the ways digital tools are reshaping education — is incredibly important,” said Paul Lekas, the head of global public policy for the Software & Information Industry Association, a group whose members include Amazon, Apple and Google. “There are lots of lessons that can be learned from how digital education occurred during the pandemic and ways in which to lessen the digital divide.”

Jean-Claude Brizard, the chief executive of Digital Promise, a nonprofit education group that has received funding from Google, HP and Verizon, acknowledged that “technology is not a cure-all.” But he also said that while school systems were largely unprepared for the pandemic, online education tools helped foster “more individualized, enhanced learning experiences as schools shifted to virtual classrooms.”

Education International, an umbrella organization for about 380 teachers’ unions and 32 million teachers worldwide, said the UNESCO report underlined the importance of in-person, face-to-face teaching.

“The report tells us definitively what we already know to be true, a place called school matters,” said Haldis Holst, the group’s deputy general secretary. “Education is not transactional nor is it simply content delivery. It is relational. It is social. It is human at its core.”

Students, read the entire article and then tell us:

What findings from the report, if any, surprised you? If you participated in online learning during the pandemic, what in the report reflected your experience? If the researchers had asked you about what remote learning was like for you, what would you have told them?

At this point, most schools have returned to in-person teaching, but many still use technology in the classroom. How much tech is involved in your day-to-day education? Does this method of learning work well for you? If you had a say, would you want to spend more or less time online while in school?

What are some of the biggest benefits you have seen from technology when it comes to your education? What are some of the biggest drawbacks?

Haldis Holst, Education International’s deputy general secretary, said: “The report tells us definitively what we already know to be true, a place called school matters. Education is not transactional nor is it simply content delivery. It is relational. It is social. It is human at its core.” What is your reaction to that statement? Do you agree? Why or why not?

As a student, what advice would you give to schools that are already using or are considering using educational technology?

Students 13 and older in the United States and Britain, and 16 and older elsewhere, are invited to comment. All comments are moderated by the Learning Network staff, but please keep in mind that once your comment is accepted, it will be made public and may appear in print.

Find more Student Opinion questions here. Teachers, check out this guide to learn how you can incorporate these prompts into your classroom.

Natalie Proulx joined The Learning Network as a staff editor in 2017 after working as an English language arts teacher and curriculum writer.

The Impact of Online Learning on Students’ Achievements

Fatima Zohra Kroum

The Journal of Quality in Education

The current research investigates the effect of online learning on students’ achievement and performance in classrooms. The pandemic disrupted teaching and learning systems around the world in general, and in Morocco in particular. Electronic learning (e-learning) became the core method of delivering the curriculum during the lockdown. This paper investigates the impact of online learning on students’ achievement during the lockdown, using Moroccan vocational institutions as a case study. It attempts to shed light on students’ perceptions of and attitudes toward their experiences with online learning, their capacity to assimilate information, the targeted skills, and the use of e-learning platforms. The paper also discusses some key challenges of online teaching for instructors and teachers, followed by a discussion of the results to enhance the effectiveness of online learning.

Related Papers

International Journal of Information Science and Technology

Mohamed Aymane Sbai

The COVID-19 pandemic has caused the biggest disruption of education systems in the history of the modern world, affecting around 1.6 billion learners in more than 190 countries and all continents (United Nations, 2020). Eventually, schools all over the world closed their doors. The Moroccan government, like others around the world, declared a national health emergency. Schools were shut down, and the Ministry of Education decided on a shift from on-site education to distance education. A set of measures was applied, such as broadcasting classes on TV and encouraging public and private schools to use platforms such as Microsoft Teams, Zoom, and Google Meet, with the aim of sustaining the school year and promoting equal opportunities among students all over the country. Nevertheless, many teachers lack the expertise in ICT tools needed to maintain their classes online. Consequently, the exceptional situation brought about by the measures to prevent the spread of the COVID-19 virus became a nightmare for both teachers, who were not ready for the shift, and students, who were required to demonstrate a sense of responsibility and to become self-directed, autonomous learners in order to get through the school year with good results. The present study aims at investigating teachers’ and students’ attitudes towards distance learning during the COVID-19 pandemic in Morocco. It also attempts to identify its weaknesses in order to come up with recommendations capable of improving the online teaching/learning experience. To fulfill these inquiries, two questionnaires (a students’ questionnaire and a teachers’ questionnaire) were administered to 14 high school teachers and 40 high school students from two high schools in Casablanca. The findings show that the vast majority of teachers believe they do not have the ICT skills necessary to lead online classes that could be as successful as their on-site counterparts. Most students, on the other hand, expressed their dissatisfaction with distance learning and believe that this shift has had a negative effect on their overall academic achievement. The results lead us to the realization that a tremendous reform has to be made to the educational system, including the introduction of ICT skills training, in order to equip prospective teachers with the skills to be ready for teaching no matter what form it might take in the future.

Arab World English Journal

Arab World English Journal (AWEJ)

Due to the COVID-19 pandemic, e-learning has become a required component of all educational institutions worldwide, such as schools, colleges, and universities. The offline teaching process has been severely disrupted by this unexpected event. E-learning is a powerful instructional tool that helps pupils achieve their full potential. This paper aims to investigate the e-learning experience of semester 6 English department students at Moulay Ismail University in Meknes, Morocco, who have experienced online learning, as well as the challenges they faced. To find out the students’ perceptions of e-learning during the COVID-19 pandemic, primary data were collected from these students through a Google Forms survey questionnaire. The findings demonstrate that most students are dissatisfied with remote learning and believe it has negatively impacted their academic performance. The results lead us to the realization that a tremendous reform has to be made to the Moroccan educational system.

Universiti Teknologi MARA

Bity Salwana Alias

The global spread of the COVID-19 pandemic has caused one of the most extensive school closures worldwide, sending over one billion students home, away from their schools, teachers, and classmates. Governments opted for online education to ensure the continuity of learning. Teachers in Morocco have opted for different tech tools and platforms to design and deliver online classes. This study aims to assess the impact and effectiveness of online teaching during the COVID-19 outbreak among teachers in Morocco. Based on the Online Collaborative Learning (OCL) theoretical framework, an online survey questionnaire was employed as the data collection instrument. A total of 421 Moroccan teachers from different regions all over Morocco took part in the study. The study used the Statistical Package for the Social Sciences (SPSS) software to analyze the collected data and determine the impact and quality of online teaching during the COVID-19 national school closure in Morocco. The results showed that most of the teachers faced numerous technological, training, and socio-economic challenges that acted as barriers to online education. The findings can be of use in making future decisions concerning the implementation of online teaching and learning programs in Morocco from the teachers’ perspective.


Asian Journal of University Education

Azlin Mansor


Across the world, schools have been shut down for weeks, even months on end. Making things worse, no one is sure when schools will be able to operate normally, as the coronavirus pandemic shows no sign of lessening any time soon. In Morocco, students are currently under national lockdown; the second semester has been postponed since the outbreak of the pandemic, with students asked to stay at home while teaching is undertaken remotely. Many important sectors have likewise been forced to embrace remote working to keep the pace steady. As far as education is concerned, an alternative way of learning is being adopted: the Ministry of Education in Morocco encouraged schools to use the online platform Microsoft Teams and launched a daily program of lessons broadcast on TV for students to study at home. This paper introduces some highlights of the main challenges faced amid remote learning, as well as possible changes that may occur in the education sector post-pandemic.

World Journal on Educational Technology: Current Issues

Zaid Alkouri

As the COVID-19 pandemic struck Jordan, many universities implemented online education. However, effective use by students has been hindered by many challenges and attitudes. In this paper, we examine students' experiences of and attitudes toward online education during the COVID-19 pandemic, focusing on Al-Balqa Applied University and Jerash University. An online questionnaire was administered to 200 students from the two universities who took different online courses. The study found that students from Al-Balqa Applied University and Jerash University face similar challenges in e-learning. Moreover, female students suffered more than male students during the pandemic with regard to the challenges they faced in e-learning. In addition, Jerash University students are significantly more disposed to e-learning than students from other universities. In addition, male students are mo...

Jayaron Jose , Blessy Jose

The delivery mode of the lessons was transitioned from face-to-face to online/e-learning in response to the Covid-19 lockdown across the Middle East, particularly in Oman. The University of Technology and Applied Sciences, Al Musannah (UTASA), also adopted this approach, which brought forth both opportunities and challenges for the academic community, including teachers and students. However, no systematic studies were conducted across various departments at the university to gain insights into the implications of full-time online/e-learning. Therefore, this study was designed to comprehend the perceptions of cross-sectional UTASA students regarding the effectiveness of e-learning, encompassing their experiences and satisfaction with participating in it. The study employed a combination of quantitative and qualitative data collection methods, utilizing a survey questionnaire and a descriptive question. The participants included both male and female learners (N = 212) from departments such as IT (Information Technology), Business, Engineering, and ELC (English Language Centre). The analysis encompassed both descriptive and inferential statistical analyses of the quantitative data, as well as a descriptive thematic analysis of the qualitative data. The results revealed that over half of the participants held a clearly positive impression of their e-learning experience and satisfaction during the Covid-19 lockdown. Furthermore, the analysis of qualitative data shed light on the reasons behind both negative and positive sentiments towards e-learning, along with suggestions for potential enhancements. The diverse reactions of the participants to the survey questions have assisted researchers and interested parties in gaining a comprehensive understanding of both the favorable and unfavorable aspects of the procedure. 
A subset of the participants held a pessimistic view of online learning due to factors such as receiving low grades, encountering inadequate technical assistance, and observing a lack of commitment. In contrast, a different group perceived online learning as advantageous, citing its provision of a convenient and adaptable learning environment, along with convenient access to recorded lectures. Additionally, certain survey respondents put forth recommendations for enhancing online learning, including the need for better training, improved Internet connectivity, and enhanced interaction between teachers and students, as well as among fellow students. In summary, the study yielded valuable insights into the experiences and contentment levels of learners engaged in the online teaching and learning process. The findings and ensuing discussion provide essential recommendations for stakeholders and future researchers alike.

Tadris: Jurnal Keguruan dan Ilmu Tarbiyah

Fakhrurrazi M Amin

The increasing spread of COVID-19 pushed learning systems to switch from traditional to online learning, which has brought significant problems and difficulties for students. The swift shift to fully online learning demands adaptation: students must adapt their abilities to the increasing use of technology in education. This study explored students' attitudes towards online learning and the reasons for those attitudes. A survey of 191 students from four State Islamic universities in Aceh was conducted online. A descriptive research framework was chosen, and a survey approach was used to collect data to gain knowledge of the situation and answer the research questions. Students showed a positive attitude and satisfaction toward online learning during COVID-19, and they knew how to handle the technological tools applied during teaching and learning. Integrating technology into education does not challenge students as technical support and facilitates educat...



Online Education - Research Paper Example


  • Subject: English
  • Type: Research Paper
  • Level: College
  • Pages: 3 (750 words)
  • Downloads: 3
  • Author: nathenwilliamso

Extract of sample "Online Education"

Recent findings comparing classroom and web-based learning experiences have found that online teaching was superior to traditional classroom instruction with regard to declarative knowledge outcomes, and equivalent with regard to procedural learning outcomes (Bender, 2008). On average, students in online-teaching conditions are more likely to perform better than students who receive face-to-face instruction. These differences, however, are not necessarily rooted in the media used.

Generally, the advantages of online instruction reflect differences in learning time, pedagogy, and content (Karacapilidis et al., 2012). Direct comparisons between blended and online learning conditions did not find any significant difference in students' level of learning. The effectiveness of pure and blended online education depends on the instructive elements of the two methods. Usually, blended or face-to-face instruction provides more opportunity for collaborative learning than is available to students in control conditions (Karacapilidis et al., 2012).

Online learners who spend more time on an activity than their face-to-face counterparts gain a greater benefit in learning. It is vital to note, however, that the research done so far on blended vs. online instructional methods is not very conclusive. There is also an argument that the medium of learning is simply a carrier of content that has minimal effect on the process of learning per se. In fact, gender and SAT scores were stronger predictors of college students' performance on a post-test with procedural and conceptual items than was the form of online unit to which the student was exposed (Weller, 2012).

For online learning to become more widely accepted as a mode of teaching, a few best practices need to be followed. Online quizzes or videos on their own have minimal influence on what students are able to learn in class. Additionally, there should be a course moderator to guide the discussion groups when students need to respond to a given scenario.

Finally, there should be social scripts that structure the modes of interaction between students. Twenty-first-century higher education has certainly taken to online education, with US President Obama talking about expanding access to higher education. The evidence shows that, for those who want to learn and demonstrate their academic knowledge, online education is an affordable and workable alternative to more traditional methods of obtaining a post-secondary education.

Online learning will allow students from all economic classes to take advantage of opportunities that might otherwise have been out of their reach. Best of all, students will no longer need to take on unmanageable and excessive debt in order to study. Online education will somewhat level the playing field in the higher education sector.




  • Open access
  • Published: 25 October 2023

Human-like systematic generalization through a meta-learning neural network

  • Brenden M. Lake 1 &
  • Marco Baroni 2, 3

Nature volume 623, pages 115–121 (2023)


  • Computer science
  • Human behaviour

The power of human language and thought arises from systematic compositionality—the algebraic ability to understand and produce novel combinations from known components. Fodor and Pylyshyn 1 famously argued that artificial neural networks lack this capacity and are therefore not viable models of the mind. Neural networks have advanced considerably in the years since, yet the systematicity challenge persists. Here we successfully address Fodor and Pylyshyn’s challenge by providing evidence that neural networks can achieve human-like systematicity when optimized for their compositional skills. To do so, we introduce the meta-learning for compositionality (MLC) approach for guiding training through a dynamic stream of compositional tasks. To compare humans and machines, we conducted human behavioural experiments using an instruction learning paradigm. After considering seven different models, we found that, in contrast to perfectly systematic but rigid probabilistic symbolic models, and perfectly flexible but unsystematic neural networks, only MLC achieves both the systematicity and flexibility needed for human-like generalization. MLC also advances the compositional skills of machine learning systems in several systematic generalization benchmarks. Our results show how a standard neural network architecture, optimized for its compositional skills, can mimic human systematic generalization in a head-to-head comparison.

People are adept at learning new concepts and systematically combining them with existing concepts. For example, once a child learns how to ‘skip’, they can understand how to ‘skip backwards’ or ‘skip around a cone twice’ due to their compositional skills. Fodor and Pylyshyn 1 argued that neural networks lack this type of systematicity and are therefore not plausible cognitive models, leading to a vigorous debate that spans 35 years 2 , 3 , 4 , 5 . Counterarguments to Fodor and Pylyshyn 1 have focused on two main points. The first is that human compositional skills, although important, may not be as systematic and rule-like as Fodor and Pylyshyn indicated 3 , 6 , 7 . The second is that neural networks, although limited in their most basic forms, can be more systematic when using sophisticated architectures 8 , 9 , 10 . In recent years, neural networks have advanced considerably and led to a number of breakthroughs, including in natural language processing. In light of these advances, we and other researchers have reformulated classic tests of systematicity and reevaluated Fodor and Pylyshyn’s arguments 1 . Notably, modern neural networks still struggle on tests of systematicity 11 , 12 , 13 , 14 , 15 , 16 , 17 , 18 —tests that even a minimally algebraic mind should pass 2 . As the technology marches on 19 , 20 , the systematicity debate continues.

In this Article, we provide evidence that neural networks can achieve human-like systematic generalization through MLC—an optimization procedure that we introduce for encouraging systematicity through a series of few-shot compositional tasks (Fig. 1 ). Our implementation of MLC uses only common neural networks without added symbolic machinery, and without hand-designed internal representations or inductive biases. Instead, MLC provides a means of specifying the desired behaviour through high-level guidance and/or direct human examples; a neural network is then asked to develop the right learning skills through meta-learning 21 .

figure 1

a, During training, episode a presents a neural network with a set of study examples and a query instruction, all provided as a simultaneous input. The study examples demonstrate how to ‘jump twice’, ‘skip’ and so on, with both instructions and corresponding outputs provided as words and text-based action symbols (solid arrows guiding the stick figures), respectively. The query instruction involves compositional use of a word (‘skip’) that is presented only in isolation in the study examples, and no intended output is provided. The network produces a query output that is compared (hollow arrows) with a behavioural target. b, Episode b introduces the next word (‘tiptoe’) and the network is asked to use it compositionally (‘tiptoe backwards around a cone’), and so on for many more training episodes. The colours highlight compositional reuse of words. Stick figures were adapted from art created by D. Chappard.

To demonstrate the abilities of MLC, we evaluated humans and machines side by side on the same tests of systematic generalization. Specifically, we used instruction-learning tasks in a pseudolanguage to examine human and machine learning of structured algebraic systems (details of the procedures are provided in the ‘Behavioural methods: few-shot learning task’ section of the Methods ). We also examined behaviour in response to highly ambiguous linguistic probes, designed to characterize human inductive biases and how these biases could either facilitate or hamper systematic generalization (see the ‘Behavioural methods: open-ended task’ section of the Methods ). Across these evaluations, MLC achieves (or even exceeds) human-level systematic generalization. MLC also produces human-like patterns of errors when human behaviour departs from purely algebraic reasoning, showing how neural networks are not only a capable but also a superior modelling tool for nuanced human compositional behaviour (see ‘Modelling results’). In a final set of simulations (see the ‘Machine learning benchmarks’ section of the Methods ), we show how MLC improves accuracy on popular benchmarks 11 , 16 for few-shot systematic generalization.

Behavioural results

First, we measured human systematic generalization, going beyond classic work that relied primarily on thought experiments to characterize human abilities 1 , 2 , 3 . Our experimental paradigm asks participants to process instructions in a pseudolanguage in order to generate abstract outputs (meanings), differing from artificial grammar learning 22 , statistical learning 23 and program learning 24 in that explicit or implicit judgments of grammaticality are not needed. Instead, the participants generate sequences of symbols in response to sequences of words, enabling computational systems to directly model the resulting data by building on the powerful sequence-to-sequence (seq2seq) toolkit from machine learning 25 , 26 . All experiments were run on Amazon Mechanical Turk, and detailed procedures are described in the ‘Behavioural methods: few-shot learning task’ and ‘Behavioural methods: open-ended task’ sections of the Methods . The complete set of human and machine responses is viewable online (Data availability).

Systematic generalization was evaluated through a few-shot learning paradigm. As illustrated in Fig. 2 , the participants ( n  = 25) were provided with a curriculum of 14 study instructions (input/output pairs) and asked to produce outputs for 10 query instructions (see the ‘Behavioural methods: few-shot learning task’ section of the Methods ). The study instructions were consistent with an underlying interpretation grammar, which derives outputs from inputs through a set of compositional rewrite rules (see the ‘Interpretation grammars’ section of the Methods ). To perform well, the participants must learn the meaning of words from just a few examples and generalize to more complex instructions. The participants were able to produce output sequences that exactly matched the algebraic standard in 80.7% of cases (indicated by an asterisk in Fig. 2b (i)). Chance performance is 2.8% for two-length output sequences if the length is known, and exponentially less for longer sequences. Notably, participants also generalized correctly in 72.5% of cases to longer output sequences than seen during training (an example is shown as the last instruction in Fig. 2b (i)), which is a type of generalization that neural networks often struggle with 11 . When deviating from this algebraic standard, the responses were still highly non-random and suggestive of strong inductive biases. Many errors involved ‘one-to-one’ translations that mapped each input word to exactly one output symbol, as if all words were primitives rather than functions (24.4% of all errors; marked with 1-to-1 in Fig. 2b (i)). Other errors involved applying a function but mixing up its arguments, often in ways that suggest an ‘iconic concatenation’ bias for maintaining the order of the input words in the order of the output symbols (23.3% of all errors involving function 3 followed this pattern; marked with IC in Fig. 2b (i)). 
These response patterns can be compared to biases in language acquisition more generally; indeed, forms of one-to-one 27 and iconic concatenation 28 , 29 are widely attested in natural language.
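The chance-level figures quoted above are straightforward to reproduce. Assuming a fixed alphabet of six output symbols (an assumption chosen because it reproduces the quoted 2.8% for two-length sequences; the paper's exact symbol count is specified in its Methods), a random guess at a known-length output sequence succeeds with probability (1/6)^L:

```python
# Probability of randomly producing the exact target output sequence,
# given that the sequence length is known. num_symbols = 6 is an
# assumption consistent with the 2.8% figure for length-2 sequences.
def chance_exact_match(num_symbols: int, length: int) -> float:
    return (1.0 / num_symbols) ** length

print(f"{chance_exact_match(6, 2):.1%}")  # length-2 chance level
print(f"{chance_exact_match(6, 5):.4%}")  # exponentially smaller for longer outputs
```

As the text notes, chance performance shrinks exponentially with output length, which is why 80.7% exact-match accuracy is far above any guessing baseline.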

figure 2

a , b , Based on the study instructions ( a ; headings were not provided to the participants), humans and MLC executed query instructions ( b ; 4 of 10 shown). The four most frequent responses are shown, marked in parentheses with response rates (counts for people and the percentage of samples for MLC). The superscript notes indicate the algebraic answer (asterisks), a one-to-one error (1-to-1) or an iconic concatenation error (IC). The words and colours were randomized for each participant and a canonical assignment is therefore shown here. A black circle indicates a colour that was unused in the study set.

These inductive biases were evaluated more directly through an open-ended instruction task in which the participants were not influenced by study examples and, therefore, their a priori preferences are more likely to shine through. Different human participants ( n  = 29) were asked to make plausible guesses regarding the outputs of seven unknown instructions and how they relate to one another (responding to ‘fep fep’ or ‘fep wif’ with a series of coloured circles), without seeing any input/output examples to influence their responses (see Fig. 3 for the full task and the ‘Behavioural methods: open-ended task’ section of the Methods for details). Despite the unconstrained nature of the test, people’s responses were highly structured and confirm the previous two inductive biases. People’s responses also followed a third bias related to mutual exclusivity that encourages assigning unique meanings to unique words 27 . Reflecting the strong influence of the biases, the majority of participants (17 out of 29; 58.6%) responded with a pattern analogous to that in Fig. 3a,b (left), which is perfectly consistent with all three inductive biases. Across all responses, 18 out of 29 participants followed one-to-one (62.1%), 23 out of 29 (79.3%) followed iconic concatenation and all but two followed mutual exclusivity in producing a unique response to each instruction (27 out of 29; 93.1%).
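The three inductive biases above lend themselves to a mechanical check. The sketch below classifies a response set by whether it satisfies one-to-one, iconic concatenation, and mutual exclusivity; the instruction words and colour symbols are invented stand-ins, not the study's actual stimuli:

```python
# A "response set" maps each instruction (tuple of words) to an output
# (tuple of colour symbols). Words/colours here are hypothetical.

def lexicon(responses):
    """Word -> output map taken from the single-word instructions."""
    return {ws[0]: out for ws, out in responses.items() if len(ws) == 1}

def one_to_one(responses):
    """Each instruction's output has exactly one symbol per word."""
    return all(len(ws) == len(out) for ws, out in responses.items())

def iconic_concatenation(responses):
    """Multi-word outputs are the single-word outputs, concatenated
    in input-word order (left to right)."""
    lex = lexicon(responses)
    return all(out == tuple(s for w in ws for s in lex.get(w, ()))
               for ws, out in responses.items() if len(ws) > 1)

def mutual_exclusivity(responses):
    """Distinct instructions receive distinct outputs."""
    return len(set(responses.values())) == len(responses)

# A toy response set consistent with all three biases:
toy = {
    ("fep",): ("RED",),
    ("wif",): ("BLUE",),
    ("fep", "fep"): ("RED", "RED"),
    ("fep", "wif"): ("RED", "BLUE"),
}
```

Under this scoring, the modal human pattern described above (translating queries one-to-one and left-to-right, with unique outputs per instruction) passes all three checks.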

figure 3

a , b , The participants produced responses (sequences of coloured circles) to the queries (linguistic strings) without seeing any study examples. Each column shows a different word assignment and a different response, either from a different participant ( a ) or MLC sample ( b ). The leftmost pattern (in both a and b ) was the most common output for both people and MLC, translating the queries in a one-to-one (1-to-1) and left-to-right manner consistent with iconic concatenation (IC). The rightmost patterns (in both a and b ) are less clearly structured but still generate a unique meaning for each instruction (mutual exclusivity (ME)).

Modelling results

We next evaluated MLC on its ability to produce human-level systematic generalization and human-like patterns of error on these challenging generalization tasks. A successful model must learn and use words in systematic ways from just a few examples, and prefer hypotheses that capture structured input/output relationships. MLC aims to guide a neural network to parameter values that, when faced with an unknown task, support exactly these kinds of generalizations and overcome previous limitations for systematicity. Importantly, this approach seeks to model adult compositional skills but not the process by which adults acquire those skills, which is an issue that is considered further in the general discussion. MLC source code and pretrained models are available online (Code availability).

As shown in Fig. 4 and detailed in the ‘Architecture and optimizer’ section of the Methods , MLC uses the standard transformer architecture 26 for memory-based meta-learning. MLC optimizes the transformer for responding to a novel instruction (query input) given a set of input/output pairs (study examples; also known as support examples 21 ), all of which are concatenated and passed together as the input. This amounts to meta-learning because optimization occurs over dynamically changing episodes (each with new study and query examples) rather than a static dataset; specifically, each episode constitutes a different seq2seq task 30 , 31 defined through a randomly generated latent grammar for interpreting inputs as outputs (see the ‘Meta-training procedures for MLC and MLC variants’ section of the Methods ). To succeed, the transformer must find parameter values that are capable of extracting meanings from the study words and composing them to answer queries, relying on meta-learning but also innovations in the transformer architecture that were not envisioned in Fodor and Pylyshyn’s arguments 1 (for example, variable length input, parameter sharing and self-attention). On test episodes, the model weights are frozen and no task-specific parameters are provided 32 . Finally, given the end goal of modelling human responses (including errors), we stochastically pair each query with either the algebraic output sequence (generated through the episode’s grammar) or a heuristic output sequence (sampled through one-to-one translations or misapplied rules), at approximately the same ratios as observed empirically (see the ‘Meta-training procedures for MLC and MLC variants’ section of the Methods ).
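The episode structure just described can be sketched in a few lines. Everything here is illustrative: the pseudowords, symbols, and the '->' / '|' tokens are stand-ins, and the latent mapping is a toy word-to-symbol lookup rather than the paper's full rewrite-rule interpretation grammars:

```python
import random

WORDS = ["dax", "wif", "lug", "zup"]          # invented pseudowords
SYMBOLS = ["RED", "GREEN", "BLUE", "YELLOW"]  # invented action symbols

def sample_episode(rng: random.Random):
    """Build one meta-training episode: study pairs plus a query,
    flattened into a single encoder input (cf. the paper's Fig. 4)."""
    # A fresh latent word->symbol mapping per episode, so the network
    # must infer meanings from study examples rather than memorize them.
    mapping = dict(zip(WORDS, rng.sample(SYMBOLS, len(SYMBOLS))))
    # Each word is shown in isolation; the query combines two of them.
    study = [(w, [mapping[w]]) for w in WORDS]
    query_in = [WORDS[-1], WORDS[0]]
    query_out = [mapping[WORDS[-1]], mapping[WORDS[0]]]
    enc_input = []
    for w, out in study:          # study pairs delimited by '|' tokens
        enc_input += [w, "->"] + out + ["|"]
    enc_input += query_in         # the query instruction comes last
    return enc_input, query_out

enc, target = sample_episode(random.Random(0))
```

Because the mapping changes every episode, optimization pressures the transformer toward a general compositional strategy rather than a fixed lexicon, which is the essence of the meta-learning setup described above.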

figure 4

A standard transformer encoder (bottom) processes the query input along with a set of study examples (input/output pairs; examples are delimited by a vertical line ( ∣ ) token). The standard decoder (top) receives the encoder’s messages and produces an output sequence in response. After optimization on episodes generated from various grammars, the transformer performs novel tasks using frozen weights. Each box is an embedding (vector); input embeddings are light blue (latent are dark).

MLC is capable of optimizing models for highly systematic behaviour. The most systematic run produced a transformer that was perfectly systematic (100% exact match accuracy) when choosing the best responses on the same few-shot instruction-learning task given to people (Fig. 2 ; see the ‘Evaluation procedures’ section of the Methods for details and Supplementary Information  1 for model variability across 10 runs) and additionally capable of inferring novel rules that did not participate in meta-learning (Supplementary Information 1 ). An informal analysis of this run further shows that MLC is also capable of more subtle and bias-driven behaviours; when sampling from the distribution of model outputs (Fig. 2b ), the transformer produced systematic outputs at an average rate (82.4%) close to human performance (80.7%), and appropriately handled longer output sequences at a rate (77.8%) near human levels (72.5%). Moreover, like people, the MLC transformer made errors reflecting one-to-one translations (56.3% of errors; 24.4% for people) and iconic concatenations (13.8% of errors involving function 3; 23.3% for people). MLC can also predict which instructions are easier or harder for people on average (Pearson’s r  = 0.788, P  = 0.031, two-tailed permutation test, n  = 10 items; item-level performance is shown in Extended Data Fig. 1 ). Formally, in Table 1 (few-shot learning), we compare models through the log-likelihood of all the human responses (Fig. 2b (i)) given the model predictions 33 . In the rest of this paragraph, when we say that one model outperforms another, there is a difference of 8 natural log points or greater. The MLC transformer (Table 1 ; MLC) outperforms more rigidly systematic models at predicting human behaviour. 
This includes a probabilistic symbolic model that assumes that people infer the gold grammar but make occasional arbitrary lapses (symbolic (oracle); details of all of the symbolic and basic seq2seq models are provided in the ‘Alternative neural and symbolic models’ section of the Methods ) and a transformer optimized on the same training episodes as MLC although with strictly algebraic (rather than also bias-based) output responses (MLC (algebraic only); details of all of the MLC variants are provided in the ‘Meta-training procedures for MLC and MLC variants’ section of the Methods ). MLC also outperforms a basic seq2seq transformer fit to the patterns in Fig. 2 without meta-learning and an MLC model optimized for copying rather than systematic generalization (MLC (copy only); during training, the query examples always match one of the study examples). The MLC transformer performs comparably to a probabilistic symbolic model that assumes that people infer the gold grammar but respond stochastically with lapses based on the human inductive biases (symbolic (oracle/biases)). Indeed, MLC was similarly optimized to (implicitly) infer systematic rules and respond with the same biased-based patterns, and it is therefore natural that the two models would perform similarly. The top-performing MLC (joint) was jointly optimized on both the few-shot learning task and the open-ended human responses, as described in the next paragraph.

Although human few-shot learning behaviour can be well characterized by either MLC or a probabilistic symbolic model, a test of more open-ended behaviour emphasizes MLC’s relative strengths. The same transformer architecture was optimized on open-ended participant behaviour and then asked to fill in outputs for the seven instructions one by one (Fig. 3 ; see the ‘Evaluation procedures’ section of the Methods ). The MLC transformer responded exactly like the modal human participant in 65.0% of samples (Fig. 3b (left)), perfectly instantiating the three key inductive biases. An informal analysis further revealed that MLC captured more nuanced patterns of response that only partially use the inductive biases (Fig. 3b (right)). Across all model samples, 66.0% followed one-to-one (62.1% for people), 85.0% followed iconic concatenation (79.3% for people) and the vast majority (99.0%) chose a unique response for each unique command (93.1% for people). Model predictions were also evaluated through fivefold cross-validation 33 : MLC and other models were optimized on responses for either 23 or 24 participants (depending on the cross-validation split) and then predicted responses for held-out participants. Performance was scored by log-likelihood and is summarized in Table 1 (open-ended) (summed over five cross-validation splits, averaged over three runs). In the rest of this paragraph, when we say that one model outperforms another, there is a difference of 57 natural log points or greater. MLC outperforms all alternatives, including the same highly algebraic MLC model as described in the previous experiment (MLC (algebraic only)) and a probabilistic symbolic model that uses the three inductive biases to generate responses but, in contrast to MLC, is not capable of optimizing for other patterns in the human behaviour (Table 1 ; symbolic (oracle/biases)). 
Importantly, a single transformer can be optimized for both the few-shot learning and open-ended instruction tasks (MLC (joint)); in fact, this is the strongest overall model across experiments for predicting human behaviour (additional analysis is shown in Extended Data Fig. 5 and Supplementary Information 1 ).

Machine learning benchmarks

Beyond predicting human behaviour, MLC can achieve error rates of less than 1% on machine learning benchmarks for systematic generalization. Note that here the examples used for optimization were generated by the benchmark designers through algebraic rules, and there is therefore no direct imitation of human behavioural data. We experiment with two popular benchmarks, SCAN 11 and COGS 16 , focusing on their systematic lexical generalization tasks that probe the handling of new words and word combinations (as opposed to new sentence structures). MLC still used only standard transformer components but, to handle longer sequences, added modularity in how the study examples were processed, as described in the ‘Machine learning benchmarks’ section of the Methods . SCAN involves translating instructions (such as ‘walk twice’) into sequences of actions (‘WALK WALK’). In the ‘add jump’ split, the training set has just one example of how to ‘jump’ (mapping to ‘JUMP’) and the test set probes compositional uses of this verb (for example, ‘jump around right twice and walk thrice’), paralleling our human learning task (‘zup’ is the analogue of ‘jump’ in Fig. 2 ). COGS involves translating sentences (for example, ‘A balloon was drawn by Emma’) into logical forms that express their meanings (balloon( x 1 )  ∧  draw.theme( x 3 ,  x 1 )  ∧  draw.agent( x 3 , Emma)). COGS evaluates 21 different types of systematic generalization, with a majority examining one-shot learning of nouns and verbs. To encourage few-shot inference and composition of meaning, we rely on surface-level word-type permutations for both benchmarks, a simple variant of meta-learning that uses minimal structural knowledge, described in the ‘Machine learning benchmarks’ section of the Methods . These permutations induce changes in word meaning without expanding the benchmark’s vocabulary, to approximate the more naturalistic, continual introduction of new words (Fig. 1 ).
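To make the SCAN format concrete, a toy translator for a small fragment of the grammar can be sketched as follows (a hedged illustration only: the word lists below cover just the primitives and the ‘twice’/‘thrice’ modifiers, whereas the full benchmark also includes ‘and’, ‘after’, ‘around’, ‘opposite’ and direction words):

```python
# Minimal sketch of SCAN-style instruction-to-action translation,
# covering only primitive verbs and the "twice"/"thrice" modifiers.
PRIMITIVES = {"walk": "WALK", "run": "RUN", "look": "LOOK", "jump": "JUMP"}
REPEATS = {"twice": 2, "thrice": 3}

def translate(command: str) -> list[str]:
    """Translate e.g. 'walk twice' -> ['WALK', 'WALK']."""
    actions = []
    tokens = command.split()
    i = 0
    while i < len(tokens):
        act = [PRIMITIVES[tokens[i]]]
        if i + 1 < len(tokens) and tokens[i + 1] in REPEATS:
            act = act * REPEATS[tokens[i + 1]]
            i += 1  # consume the modifier
        actions.extend(act)
        i += 1
    return actions
```

In the ‘add jump’ split, a model sees only the isolated mapping ‘jump’ → ‘JUMP’ during training but is tested on composed commands such as ‘jump twice’.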

The benchmark error rates are summarized in Table 2 . On SCAN, MLC solves three systematic generalization splits with an error rate of 0.22% or lower (99.78% accuracy or above), including the already mentioned ‘add jump’ split and ‘around right’ and ‘opposite right’, which examine novel combinations of known words. On COGS, MLC achieves an error rate of 0.87% across the 18 types of lexical generalization. Without the benefit of meta-learning, basic seq2seq has error rates at least seven times as high across the benchmarks, despite using the same transformer architecture. However, surface-level permutations were not enough for MLC to solve the structural generalization tasks in the benchmarks. MLC fails to handle longer output sequences (SCAN length split) as well as novel and more complex sentence structures (three types in COGS), with error rates at 100%. Such tasks require handling ‘productivity’ (page 33 of ref. 1 ), in ways that are largely distinct from systematicity. However, MLC did handle novel sentence structures in our few-shot instruction-learning task (77.8% correct on queries with both longer input and output sequences than seen during study; Fig. 2 ), suggesting that the right meta-training procedure can promote productivity, a challenge we leave to future work.

Over 35 years ago, when Fodor and Pylyshyn raised the issue of systematicity in neural networks 1 , today’s models 19 and their language skills were probably unimaginable. As a credit to Fodor and Pylyshyn’s prescience, the systematicity debate has endured. Systematicity continues to challenge models 11 , 12 , 13 , 14 , 15 , 16 , 17 , 18 and motivates new frameworks 34 , 35 , 36 , 37 , 38 , 39 , 40 , 41 . Preliminary experiments reported in Supplementary Information  3 suggest that systematicity is still a challenge, or at the very least an open question, even for recent large language models such as GPT-4. To resolve the debate, and to understand whether neural networks can capture human-like compositional skills, we must compare humans and machines side-by-side, as in this Article and other recent work 7 , 42 , 43 . In our experiments, we found that the most common human responses were algebraic and systematic in exactly the ways that Fodor and Pylyshyn 1 discuss. However, people also relied on inductive biases that sometimes support the algebraic solution and sometimes deviate from it; indeed, people are not purely algebraic machines 3 , 6 , 7 . We showed how MLC enables a standard neural network optimized for its compositional skills to mimic or exceed human systematic generalization in a side-by-side comparison. MLC shows much stronger systematicity than neural networks trained in standard ways, and shows more nuanced behaviour than pristine symbolic models. MLC also allows neural networks to tackle other existing challenges, including making systematic use of isolated primitives 11 , 16 and using mutual exclusivity to infer meanings 44 .

Our use of MLC for behavioural modelling relates to other approaches for reverse engineering human inductive biases. Bayesian approaches enable a modeller to evaluate different representational forms and parameter settings for capturing human behaviour, as specified through the model’s prior 45 . These priors can also be tuned with behavioural data through hierarchical Bayesian modelling 46 , although the resulting set-up can be restrictive. MLC shows how meta-learning can be used like hierarchical Bayesian models for reverse-engineering inductive biases (see ref. 47 for a formal connection), although with the aid of neural networks for greater expressive power. Our research adds to a growing literature, reviewed previously 48 , on using meta-learning for understanding human 49 , 50 , 51 or human-like behaviour 52 , 53 , 54 . In our experiments, only MLC closely reproduced human behaviour with respect to both systematicity and biases, with the MLC (joint) model best navigating the trade-off between these two blueprints of human linguistic behaviour. Furthermore, MLC derives its abilities through meta-learning, where both systematic generalization and the human biases are not inherent properties of the neural network architecture but, instead, are induced from data.

Despite its successes, MLC does not solve every challenge raised in Fodor and Pylyshyn 1 . MLC does not automatically handle unpractised forms of generalization or concepts outside the meta-learning distribution, reducing the scope of entirely novel structures it can correctly process (compare the encouraging results on learning novel rules reported in Supplementary Information  1 , with its failure on the SCAN and COGS productivity splits). Moreover, MLC fails to generalize to nuances in inductive biases that it was not optimized for, as we explore further through an additional behavioural and modelling experiment in Supplementary Information 2 . In the language of machine learning, we conclude that the meta-learning strategy succeeds when generalization makes a new episode in-distribution with respect to the training episodes, even when the specific test items are out-of-distribution with respect to the study examples in the episode. However, meta-learning alone will not allow a standard network to generalize to episodes that are in turn out-of-distribution with respect to the ones presented during meta-learning. The current architecture also lacks a mechanism for emitting new symbols 2 , although new symbols introduced through the study examples could be emitted through an additional pointer mechanism 55 . Last, MLC is untested on the full complexity of natural language and on other modalities; therefore, whether it can achieve human-like systematicity, in all respects and from realistic training experience, remains to be determined. Nevertheless, our use of standard transformers will aid MLC in tackling a wider range of problems at scale. For example, a large language model could receive specialized meta-training 56 , optimizing its compositional skills by alternating between standard training (next word prediction) and MLC meta-training that continually introduces novel words and explicitly improves systematicity (Fig. 1 ). 
For vision problems, an image classifier or generator could similarly receive specialized meta-training (through current prompt-based procedures 57 ) to learn how to systematically combine object features or multiple objects with relations.

Our study raises natural developmental questions. The specific procedure of optimizing over many related grammar-based tasks is not developmentally plausible, but there are several ways in which the greater principle—that systematicity can be honed through incentive and practice—has developmental merit. First, children are not born with an adult-like ability to compose functions; in fact, there seem to be important changes between infancy 58 and pre-school 59 that could be tied to learning. Second, children become better word learners over the course of development 60 , similar to a meta-learner improving with training. It is possible that children use experience, as in MLC, to hone their skills for learning new words and systematically combining them with familiar words. Beyond natural language, people require a years-long process of education to master other forms of systematic generalization and symbolic reasoning 6 , 7 , including mathematics, logic and computer programming. Although applying the tools developed here to each domain is a long-term effort, we see genuine promise in meta-learning for understanding the origin of human compositional skills, as well as making the behaviour of modern AI systems more human-like.

Behavioural methods: few-shot learning task

The meaning of each word in the few-shot learning task (Fig. 2 ) is described as follows (see the ‘Interpretation grammars’ section for formal definitions, and note that the mapping of words to meanings was varied across participants). The four primitive words are direct mappings from one input word to one output symbol (for example, ‘dax’ is RED, ‘wif’ is GREEN, ‘lug’ is BLUE). Each output symbol is a circle of a particular colour. The other three words are functional terms that take arguments. Function 1 (‘fep’ in Fig. 2 ) takes the preceding primitive as an argument and repeats its output three times (‘dax fep’ is RED RED RED). Function 2 (‘blicket’) takes both the preceding primitive and following primitive as arguments, producing their outputs in a specific alternating sequence (‘wif blicket dax’ is GREEN RED GREEN). Last, function 3 (‘kiki’) takes both the preceding and following strings as input, processes them and concatenates their outputs in reverse order (‘dax kiki lug’ is BLUE RED). We also tested function 3 in cases in which its arguments were generated by the other functions, exploring function composition (‘wif blicket dax kiki lug’ is BLUE GREEN RED GREEN). During the study phase (see description below), participants saw examples that disambiguated the order of function application for the tested compositions (function 3 takes scope over the other functions).
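These semantics can be summarized in a short sketch. The word-to-colour assignment below is the canonical one from Fig. 2 (it was randomized across participants), and the scope handling is simplified to the instruction forms actually tested (function 3 takes widest scope; ‘blicket’ takes single primitives as arguments):

```python
# Canonical word-to-colour assignment (randomized across real participants).
WORD_TO_COLOUR = {"dax": "RED", "wif": "GREEN", "lug": "BLUE", "zup": "YELLOW"}

def interpret(instruction: str) -> list[str]:
    """Interpret an instruction as a sequence of colour symbols."""
    words = instruction.split()
    if "kiki" in words:  # function 3: concatenate the two sides in reverse order
        k = words.index("kiki")
        return interpret(" ".join(words[k + 1:])) + interpret(" ".join(words[:k]))
    if "blicket" in words:  # function 2: alternate its two primitive arguments
        b = words.index("blicket")
        return (interpret(words[b - 1]) + interpret(words[b + 1])
                + interpret(words[b - 1]))
    if "fep" in words:  # function 1: repeat the preceding primitive three times
        f = words.index("fep")
        return interpret(words[f - 1]) * 3
    return [WORD_TO_COLOUR[w] for w in words]
```

For example, `interpret("wif blicket dax kiki lug")` reproduces the composed output BLUE GREEN RED GREEN described above.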

Thirty participants in the United States were recruited using Amazon Mechanical Turk and the psiTurk platform 61 . All of the studies were approved by the NYU IRB, protocol FY2018-1728, and obtained informed consent. The participants were informed that the study investigated how people learn input–output associations, and that they would be asked to learn a set of commands and their corresponding outputs. Learning proceeded in a curriculum with four stages, with each stage featuring both a study phase and a test phase (see Extended Data Fig. 1 for the complete set of study and test instructions). In the first three stages, during the study phase, the participants learned individual functions from just two demonstrations each (functions 1 through 3; Fig. 2a ). In the final stage, participants learned to interpret complex instructions by combining these functions (function compositions; Fig. 2a ). After all stages, there was a short survey that asked about strategy and any technical problems. Participants spent an average of 23 min in the experiment (minimum 8 min and 41 s; maximum 41 min and 19 s).

Each study phase presented the participants with a set of example input–output mappings. For the first three stages, the study instructions always included the four primitives and two examples of the relevant function, presented together on the screen. For the last stage, the entire set of study instructions was provided together to probe composition. During the study phases, the output sequence for one of the study items was covered and the participants were asked to reproduce it, given their memory and the other items on the screen. Corrective feedback was provided, and the participants cycled through all non-primitive study items until all were produced correctly or three cycles were completed. The test phase asked participants to produce the outputs for novel instructions, with no feedback provided (Extended Data Fig. 1b ). The study items remained on the screen for reference, so that performance would reflect generalization in the absence of memory limitations. The study and test items always differed from one another by more than one primitive substitution (except in the function 1 stage, where a single primitive was presented as a novel argument to function 1). Some test items also required reasoning beyond substituting variables and, in particular, understanding longer compositions of functions than were seen in the study phase.

The response interface had a pool of possible output symbols that could be clicked or dragged to the response array. The circles could be rearranged within the array or cleared with a reset button. The study and test set only used four output symbols, but the pool provided six possibilities (that is, there were two extra colours that were not associated to words), to discourage reasoning by exclusion. The assignment of words to colours and functions was randomized for each participant (drawn from nine possible words and six colours), and the first three stages were presented in random order.

We used several strategies to ensure that our participants were paying attention. First, before the experiment, the participants practiced using the response interface and had to pass an instructions quiz; they cycled through the quiz until they passed it. Second, catch trials were included during the test phases, probing the study items rather than new items, with the answers clearly presented on the screen above. There was one catch trial per stage (except the last stage had two); participants were excluded if they missed two or more catch trials ( n  = 5). Finally, query responses were also excluded if the corresponding study phases were not completed correctly (for all items) within three attempts (13% of remaining data).

For statistical analyses of the data from this experiment and elsewhere, we tested for normality and applied alternative nonparametric or permutation tests when the assumptions were not met.

Interpretation grammars

The few-shot learning task evaluated with humans and machines is defined through a set of compositional rewrite rules for translating linguistic instructions to output sequences (Extended Data Fig. 2 ). Inspired by formal semantics 62 , we denote a set of rules such as this as the ‘interpretation grammar’. We refer to the grammar in Extended Data Fig. 2 that defines the human learning task as the ‘gold interpretation grammar’, whereas a different interpretation grammar is shown in Extended Data Fig. 4 . The rules apply one by one, based on their conditions, until they produce an output sequence consisting of all terminal symbols (coloured circles). A worked example of interpreting a complex query is shown in Extended Data Fig. 3 . Four of the rules define how the primitive words (such as ‘dax’, ‘wif’) map to a single output symbol. The other rules define functions (‘fep’, ‘blicket’ and ‘kiki’) that apply when certain conditions are met through their arguments and, when applied, initiate recursive calls of the interpretation process on their intermediate outputs. Note that a different set of rules will define a different few-shot learning problem; this property is used to define many different few-shot learning problems for optimizing MLC. Although the situation does not arise for the study or query instructions in the few-shot task (see the ‘Behavioural methods: few-shot learning task’ section), it is possible that two rules satisfy their conditions at the same intermediate step. If so, the first rule in the interpretation grammar listing is used in order to resolve the ambiguity.
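One way to make such an interpretation grammar executable is as an ordered list of (condition, rewrite) pairs in which the first matching rule wins, mirroring the ambiguity-resolution convention just described. The rules below are illustrative stand-ins rather than the gold grammar itself:

```python
import re

# Ordered rewrite rules: a pattern over the input string plus a rewrite
# built from recursive calls on the matched arguments. Listing order
# resolves ambiguity when two conditions hold at the same step.
RULES = [
    (re.compile(r"^(.+) kiki (.+)$"),
     lambda m: rewrite(m[2]) + rewrite(m[1])),                  # reverse concatenation
    (re.compile(r"^(\w+) blicket (\w+)$"),
     lambda m: rewrite(m[1]) + rewrite(m[2]) + rewrite(m[1])),  # alternation
    (re.compile(r"^(\w+) fep$"),
     lambda m: rewrite(m[1]) * 3),                              # threefold repetition
    (re.compile(r"^dax$"), lambda m: ["RED"]),                  # primitive mappings
    (re.compile(r"^wif$"), lambda m: ["GREEN"]),
    (re.compile(r"^lug$"), lambda m: ["BLUE"]),
]

def rewrite(s: str) -> list[str]:
    """Apply the first rule whose condition matches, recursing on arguments."""
    for pattern, action in RULES:
        match = pattern.match(s)
        if match:
            return action(match)
    raise ValueError(f"no rule matches {s!r}")
```

Because a different rule list defines a different few-shot learning problem, representing the grammar as data in this way is what makes it easy to sample many distinct episodes for meta-training.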

Behavioural methods: open-ended task

The instructions were as similar as possible to the few-shot learning task, although there were several important differences. First, because this experiment was designed to probe inductive biases and does not provide any examples to learn from, it was emphasized to the participants that there are multiple reasonable answers and they should provide a reasonable guess. Second, the participants responded to the query instructions all at once, on a single web page, allowing the participants to edit, go back and forth, and maintain consistency across responses. By contrast, the previous experiment collected the query responses one by one and had a curriculum of multiple distinct stages of learning.

Thirty participants in the United States were recruited using Mechanical Turk and psiTurk. The participants produced output sequences for seven novel instructions consisting of five possible words. The participants also approved a summary view of all of their responses before submitting. There were six pool options, and the assignment of words and item order were random. One participant was excluded because they reported using an external aid in a post-test survey. On average, the participants spent 5 min 5 s in the experiment (minimum 2 min 16 s; maximum 11 min 23 s).

Implementation of MLC

Architecture and optimizer.

As shown in Fig. 4 , our MLC implementation uses a standard seq2seq transformer 26 . This architecture involves two neural networks working together—an encoder transformer to process the query input and study examples, and a decoder transformer to generate the output sequence. Both the encoder and decoder have 3 layers, 8 attention heads per layer, input and hidden embeddings of size 128, and a feedforward hidden size of 512. Following GPT 63 , GELU 64 activation functions are used instead of ReLU. In total, the architecture has about 1.4 million parameters. Note that an earlier version of memory-based meta-learning for compositional generalization used a more limited and specialized architecture 30 , 65 .

The encoder network (Fig. 4 (bottom)) processes a concatenated source string that combines the query input sequence along with a set of study examples (input/output sequence pairs). The encoder vocabulary includes the eight words, six abstract outputs (coloured circles), and two special symbols for separating the study examples ( ∣ and →). The decoder network (Fig. 4 (top)) receives messages from the encoder and generates the output sequence. The decoder vocabulary includes the abstract outputs as well as special symbols for starting and ending sequences (<SOS> and <EOS>, respectively). Sinusoidal positional encodings are added to the input embeddings 26 .
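As a rough sanity check on the reported size, the dominant weight matrices implied by these hyperparameters can be tallied (embeddings, biases and layer norms are ignored here for simplicity, so this is a lower-bound estimate rather than an exact count):

```python
# Dominant weight matrices of a 3+3-layer transformer with
# d_model = 128 and feedforward size 512.
d_model, d_ffn = 128, 512

attention = 4 * d_model * d_model      # Q, K, V and output projections
feedforward = 2 * d_model * d_ffn      # the two feedforward matrices

encoder_layer = attention + feedforward        # self-attention + FFN
decoder_layer = 2 * attention + feedforward    # adds cross-attention

total = 3 * encoder_layer + 3 * decoder_layer
print(total)  # 1376256, consistent with the reported ~1.4 million parameters
```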

MLC was trained to minimize the cross-entropy loss (averaged over tokens) with the Adam optimizer and a batch size of 25 episodes. Each episode contains many study examples and query examples (for example, up to 14 study examples and 10 queries in optimization for the few-shot learning task) and the effective sequence-level batch size was therefore larger (for example, (14 + 10) × 25 = 600). Training lasted for 50 epochs. The learning rate was 0.001, with a warm-up applied for the first epoch and then a linear decrease to 0.00005 across training. Dropout of 0.1 was applied to the input embeddings and transformers. For meta-training procedures with a validation set (for example, 200 held-out grammars for few-shot instruction learning), a variant of early stopping was used: although training was not actually truncated, the best parameter setting (across intervals of 100 steps) was saved according to the validation loss. All of the networks were trained using an NVIDIA Titan RTX GPU.
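The described schedule can be sketched as a per-step rule. Whether the warm-up and decay are applied per step or per epoch is not stated, so the granularity below is an assumption:

```python
# Linear warm-up over the first epoch, then linear decay from the peak
# learning rate (0.001) to the floor (0.00005) over the remaining steps.
def learning_rate(step, steps_per_epoch, total_steps,
                  peak=1e-3, floor=5e-5):
    if step < steps_per_epoch:                       # warm-up: first epoch
        return peak * (step + 1) / steps_per_epoch
    frac = (step - steps_per_epoch) / max(1, total_steps - steps_per_epoch)
    return peak + frac * (floor - peak)              # linear decrease to floor
```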

Meta-training procedures for MLC and MLC variants

MLC optimizes the transformers for systematic generalization through high-level behavioural guidance and/or direct human behavioural examples. To prepare MLC for the few-shot instruction task, optimization proceeds over a fixed set of 100,000 training episodes and 200 validation episodes. Extended Data Figure 4 illustrates an example training episode and additionally specifies how each MLC variant differs in terms of access to episode information (see the right-hand side of the figure). Each episode constitutes a seq2seq task that is defined through a randomly generated interpretation grammar (see the ‘Interpretation grammars’ section). The grammars are not observed by the networks and must be inferred (implicitly) to successfully solve few-shot learning problems and make algebraic generalizations. The optimization procedures for the MLC variants in Table 1 are described below.

MLC (algebraic only). The interpretation grammars that define each episode were randomly generated from a simple meta-grammar. An example episode with input/output examples and corresponding interpretation grammar (see the ‘Interpretation grammars’ section) is shown in Extended Data Fig. 4 . Rewrite rules for primitives (first 4 rules in Extended Data Fig. 4 ) were generated by randomly pairing individual input and output symbols (without replacement). Rewrite rules for defining functions (next 3 rules in Extended Data Fig. 4 ) were generated by sampling the left-hand sides and right-hand sides for those rules. For the left-hand side (for example, ⟦ u 1  lug  x 1 ⟧ for the fifth rule in Extended Data Fig. 4 ), rules chose an input symbol as function name, whether the function has one or two arguments (with the function name appearing after the argument or in-between arguments, respectively), and whether each argument can take arbitrary non-empty strings ( x 1 or x 2 ) or just the primitive inputs ( u 1 or u 2 ). A rule’s right-hand side was generated as an arbitrary string (length ≤ 8) that mixes and matches the left-hand-side arguments, each of which is recursively evaluated and then concatenated together (for example, ⟦ x 1 ⟧   ⟦ u 1 ⟧   ⟦ x 1 ⟧   ⟦ u 1 ⟧   ⟦ u 1 ⟧ ). The last rule was the same for each episode and instantiated a form of iconic left-to-right concatenation (Extended Data Fig. 4 ). Study and query examples (set 1 and 2 in Extended Data Fig. 4 ) were produced by sampling arbitrary, unique input sequences (length ≤ 8) that can be parsed with the interpretation grammar to produce outputs (length ≤ 8). Output symbols were replaced uniformly at random with a small probability (0.01) to encourage some robustness in the trained decoder. 
For this variant of MLC training, episodes consisted of a latent grammar based on 4 rules for defining primitives and 3 rules defining functions, 8 possible input symbols, 6 possible output symbols, 14 study examples and 10 query examples. The study examples were presented in shuffled order on each episode.
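The primitive-rule sampling step can be sketched directly; the function-rule and example-generation steps are omitted here, and the symbol inventories passed in stand in for the 8 input and 6 output symbols described above:

```python
import random

# Sample the primitive rewrite rules for one meta-training episode:
# input words are paired with output symbols without replacement.
def sample_primitive_rules(input_symbols, output_symbols, n_rules=4,
                           rng=random):
    words = rng.sample(input_symbols, n_rules)
    colours = rng.sample(output_symbols, n_rules)
    return dict(zip(words, colours))
```

Because each episode draws a fresh mapping, the same word maps to different outputs across episodes, which forces the network to infer meanings from the study examples rather than memorize them.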

The validation episodes were defined by new grammars that differ from the training grammars. Grammars were only considered new if they did not match any of the meta-training grammars, even under permutations of how the rules are ordered. The gold interpretation grammar that produced the few-shot instruction-learning task with humans and machines (Extended Data Fig. 2 ) was also reserved for testing in this way, with an additional structural requirement that grammars for producing the training and validation episodes should also not match the gold grammar through any permutation of the input and output symbol assignments.

For successful optimization, it is also important to pass each study example (input sequence only) as an additional query when training on a particular episode. This effectively introduces an auxiliary copy task—matching the query input sequence to an identical study input sequence, and then reproducing the corresponding study output sequence—that must be solved jointly with the more difficult generalization task.

MLC for the few-shot instruction-learning task. Optimization closely followed the procedure outlined above for the algebraic-only MLC variant. The key difference here is that the full MLC model used a behaviourally informed meta-learning strategy aimed at capturing both human successes and patterns of error. Using the same meta-training episodes as the purely algebraic variant, each query example was passed through a bias-based transformation process (see Extended Data Fig. 4 for pseudocode) before MLC processed it during meta-training. Specifically, each query was paired with its algebraic output in 80% of cases and a bias-based heuristic in the other 20% of cases (chosen to approximately reflect the measured human accuracy of 80.7%). To create the heuristic query for meta-training, a fair coin was flipped to decide between a stochastic one-to-one translation and a noisy application of the underlying grammatical rules. For the one-to-one translations, first, the study examples in the episode are examined for any instances of isolated primitive mappings (for example, ‘tufa → PURPLE’). Second, each input symbol is mapped superficially to a single output symbol (in a left-to-right manner) using either the corresponding primitive mapping if observed as a study example, or using an arbitrary output symbol if a primitive mapping is not observed (for example, if the input symbol is a function name). For the noisy rule examples, each two-argument function in the interpretation grammar has a 50% chance of flipping the role of its two arguments. For example, as in Extended Data Fig. 4 , the rule ⟦ u 1  lug  x 1 ⟧  →  ⟦ x 1 ⟧   ⟦ u 1 ⟧   ⟦ x 1 ⟧   ⟦ u 1 ⟧   ⟦ u 1 ⟧ , when flipped, would be applied as ⟦ u 1  lug  x 1 ⟧  →  ⟦ u 1 ⟧   ⟦ x 1 ⟧   ⟦ u 1 ⟧   ⟦ x 1 ⟧   ⟦ x 1 ⟧ .
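The top-level branching of this transformation can be sketched as follows, with `one_to_one` and `flip_arguments` as hypothetical callables standing in for the two heuristics described above:

```python
import random

# 80% of queries keep their algebraic output; the remaining 20% are
# replaced by a fair coin flip between the two bias-based heuristics.
def transform_query(query, algebraic_output, one_to_one, flip_arguments,
                    rng=random):
    if rng.random() < 0.8:
        return algebraic_output          # gold, fully algebraic response
    if rng.random() < 0.5:
        return one_to_one(query)         # left-to-right one-to-one translation
    return flip_arguments(query)         # noisy rule application
```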

MLC for the open-ended task. An epoch of optimization consisted of 100,000 episode presentations based on the human behavioural data. To produce one episode, one human participant was randomly selected from the open-ended task, and their output responses were divided arbitrarily into study examples (between 0 and 5), with the remaining responses as query examples. Additional variety was produced by shuffling the order of the study examples, as well as randomly remapping the input and output symbols compared to those in the raw data, without altering the structure of the underlying mapping. The models were trained to completion (no validation set or early stopping).
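A minimal sketch of this episode construction, with the random symbol remapping omitted:

```python
import random

# Build one open-ended training episode from one participant's responses:
# an arbitrary 0-5 of the (instruction, response) pairs become study
# examples, and the remainder become query examples.
def make_episode(responses, rng=random):
    pairs = list(responses.items())
    rng.shuffle(pairs)                 # arbitrary division and study order
    n_study = rng.randint(0, 5)
    return pairs[:n_study], pairs[n_study:]
```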

MLC (joint). Optimization for the joint MLC model, tuned jointly for the few-shot instruction and open-ended tasks, proceeded as described in the two paragraphs above; each epoch combined 100,000 episodes of the few-shot instruction learning optimization and 100,000 episodes of the open-ended optimization. Finally, each epoch also included an additional 100,000 episodes as a unifying bridge between the two types of optimization. These bridge episodes revisit the same 100,000 few-shot instruction learning episodes, although with a smaller number of the study examples provided (sampled uniformly from 0 to 14). Thus, for episodes with a small number of study examples chosen (0 to 5, that is, the same range as in the open-ended trials), the model cannot definitively judge the episode type on the basis of the number of study examples. The models were trained to completion (no validation set or early stopping).

MLC (copy only). Optimization for the copy-only model closely followed the procedure for the algebraic-only variant. Critically, this model was trained only on the copy task of identifying which study example is the same as the query example, and then reproducing that study example’s output sequence (see specification in Extended Data Fig. 4 ; set 1 was used for both study and query examples). It was not trained to handle novel queries that generalize beyond the study set. Thus, the model was trained on the same study examples as MLC, using the same architecture and procedure, but it was not explicitly optimized for compositional generalization.

Evaluation procedures

Few-shot instruction-learning task. MLC was evaluated on this task in several ways; in each case, MLC responded to this novel task through learned memory-based strategies, as its weights were frozen and not updated further. MLC predicted the best response for each query using greedy decoding, which was compared to the algebraic responses prescribed by the gold interpretation grammar (Extended Data Fig. 2 ). MLC also predicted a distribution of possible responses; this distribution was evaluated by scoring the log-likelihood of human responses and by comparing samples to human responses. Although the few-shot task was illustrated with a canonical assignment of words and colours (Fig. 2 ), the assignments of words and colours were randomized for each human participant. Thus, to evaluate MLC comparably, these factors were also randomized. For comparison with the gold grammar or with human behaviour via log-likelihood, performance was averaged over 100 random word/colour assignments. Samples from the model (for example, as shown in Fig. 2 and reported in Extended Data Fig. 1 ) were based on an arbitrary random assignment that varied for each query instruction, with the number of samples scaled to 10× the number of human participants.

Open-ended task. MLC was evaluated on sampling human-like responses and predicting human responses through log-likelihood scores. Human participants made plausible guesses for how to respond to 7 query instructions (see the ‘Behavioural methods: open-ended task’ section). They responded jointly to all 7 queries on the same web page; as analysed in the main text, people’s predicted word meanings followed strong consistency constraints across the responses. Thus, to model these data, MLC cannot simply answer the queries independently. Instead, MLC factorizes the problem of responding jointly to 7 query inputs x 1 , …,  x 7 with 7 query outputs y 1 , …,  y 7 as

\(P({y}_{1},\ldots ,{y}_{7}\mid {x}_{1},\ldots ,{x}_{7})={\prod }_{i=1}^{7}P({y}_{i}\mid {x}_{i},{x}_{ < i},{y}_{ < i})\)  (1)
using ( x 1 ,  y 1 ), …, ( x i −1 ,  y i −1 ) as study examples for responding to query x i with output y i . Thus, sampling a response for the open-ended task proceeded as follows. First, MLC samples P ( y 1 ∣ x 1 ) with no study examples. Second, when sampling y 2 in response to query x 2 , the previously sampled ( x 1 ,  y 1 ) is now a study example, and so on. The query ordering was chosen arbitrarily (this was also randomized for human participants).
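In code, this factorized sampling scheme amounts to growing the study set autoregressively; `sample_response` below is a hypothetical stand-in for drawing one output sequence from the model:

```python
# Sample responses to a list of queries one at a time, feeding each
# sampled (query, response) pair back in as a study example so that
# later answers stay consistent with earlier ones.
def sample_open_ended(queries, sample_response):
    study, outputs = [], []
    for x in queries:                  # implements P(y_i | x_i, x_<i, y_<i)
        y = sample_response(x, study)
        study.append((x, y))
        outputs.append(y)
    return outputs
```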

For scoring a particular human response y 1 , …,  y 7 by log-likelihood, MLC uses the same factorization as in equation ( 1 ). Performance was averaged over 200 passes through the dataset, each episode with different random query orderings as well as word and colour assignments.

Alternative neural and symbolic models

In addition to the range of MLC variants specified above, the following additional neural and symbolic models were evaluated.

Lapse model. All MLC, symbolic and neural models were fit to the human behavioural responses (Table 1 ) with a lapse parameter λ . With this parameter, the probability of a participant producing any given output symbol s   ∈   S is \(P(s)=(1-\lambda ){P}_{M}(s)+\lambda \frac{1}{| S| }\) , where S (with cardinality ∣ S ∣ ) is the set of abstract outputs (colour circles) plus the end-of-sequence token (<EOS>) and P M is the model prediction before the lapse mechanism. If the model has no prediction for a particular symbol (for example, this symbol extends beyond the model’s predicted output sequence), \(P(s)=\frac{1}{| S| }\) .
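The lapse mixture transcribes directly into code; a `None` prediction encodes the case in which the model's predicted sequence is too short to cover the symbol:

```python
# Mixture of the model's prediction with a uniform lapse distribution over
# the |S| candidate output symbols (colour circles plus end-of-sequence).
def lapse_prob(p_model, lam, n_symbols):
    if p_model is None:               # model has no prediction for this symbol
        return 1.0 / n_symbols
    return (1.0 - lam) * p_model + lam / n_symbols
```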

Symbolic (oracle). This probabilistic symbolic model assumes that people can infer the gold grammar from the study examples (Extended Data Fig. 2 ) and translate query instructions accordingly. Non-algebraic responses must be explained through the generic lapse model (see above), with a fit lapse parameter. Note that all of the models compared in Table 1 have the same opportunity to fit a lapse parameter.

Symbolic (oracle/biases). For the few-shot instruction-learning task, this probabilistic symbolic model augments the oracle, described above, by passing the algebraic input/output pairs through the same bias-based transformation process used when optimizing MLC (see pseudocode in Extended Data Fig. 4 and see the ‘MLC few-shot instruction-learning task’ section for more description). Thus, using the gold grammar in Extended Data Fig. 2 , this model predicts a mixture of algebraic outputs, one-to-one translations and noisy rule applications to account for human behaviour.

For the open-ended task, this probabilistic symbolic model operationalizes the three key inductive biases. Using the same factorization as MLC does for the open-ended task (equation (1)), the response sequence \(y_i\) to query sequence \(x_i\) is modelled based on previous participant responses, \(P(y_i\mid x_i,x_{<i},y_{<i})\). Each input token within the sequence \(x_i\) is stochastically translated as a single output token in \(y_i\) using a left-to-right (iconic concatenation), one-to-one strategy. For example, if \(x_i\) is ‘dax wug’, a coloured circle for ‘dax’ is sampled in proportion to the number of times ‘dax’ aligned with each coloured circle in the previous \(x_{<i}\) and \(y_{<i}\) pairs. After handling ‘dax’, a coloured circle for ‘wug’ is sampled in the same manner. If a word is new (and does not appear previously in \(x_{<i}\)), its coloured circle is sampled from the set of unused output symbols (that do not appear in \(y_{<i}\)), implementing mutual exclusivity. As with all models, a fit lapse parameter is also used.
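The three biases can be sketched for a single query as below; the function name and data layout are ours, and the lapse step is omitted for brevity:

```python
import random
from collections import Counter

def sample_biased_response(query_words, prev_pairs, palette, rng=random):
    """Sketch of the bias-based symbolic model for one query: translate
    left to right (iconic concatenation), one output colour per word
    (one-to-one), sampling each colour in proportion to past word-colour
    alignments; new words draw from colours unused so far (mutual
    exclusivity). `prev_pairs` holds earlier (words, colours) responses."""
    counts = {}          # word -> Counter of aligned colours
    used = set()         # colours already produced
    for words, colours in prev_pairs:
        for w, c in zip(words, colours):
            counts.setdefault(w, Counter())[c] += 1
            used.add(c)
    out = []
    for w in query_words:
        if w in counts:
            # sampling from repeated elements is proportional to counts
            options = list(counts[w].elements())
        else:
            unused = [c for c in palette if c not in used]
            options = unused or list(palette)   # fall back if all colours used
        c = rng.choice(options)
        out.append(c)
        counts.setdefault(w, Counter())[c] += 1  # keep later words consistent
        used.add(c)
    return out
```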

Neural (basic seq2seq). A basic seq2seq transformer can be obtained through a straightforward modification of the MLC diagram (Fig. 4): the study examples were excluded from the input sequence, leaving the transformer to process only the query input before producing the query output. Given that only the architecture’s use has changed (not the architecture itself), the model has approximately the same number of learnable parameters as in MLC (except for the smaller input vocabulary). Without access to study examples, the model is poorly equipped for learning words with changing meanings; it has no in-context memory and, therefore, all of its knowledge must be stored in the learned weights. To perform the few-shot instruction-learning task, the basic seq2seq model was trained in the typical way for seq2seq modelling: training iterates over the input/output sequence pairs with the aim of learning the target mapping. In this case, the training set is the 14 study instructions and the test set is the 10 query instructions (Extended Data Fig. 1). Otherwise, the same architecture and optimizer were used as described in the ‘Architecture and optimizer’ section. The network was trained for 1,000 epochs over the batched set of study instructions. It was not clear how much training would be optimal, and we wanted to examine this model under favourable conditions. To this end, we gave it an additional advantage not offered to any other model class: we tracked each step of the optimizer and selected the best parameter values on the basis of the test loss. Typically, this point was reached within a few dozen steps. Nevertheless, all 10 runs failed to generalize systematically on the few-shot instruction task (0% exact-match accuracy).

We informally examined a couple of other basic seq2seq variants. First, we evaluated lower-capacity transformers but found that they did not perform better. Second, we tried pretraining the basic seq2seq model on the entire meta-training set that MLC had access to, including the study examples, although without the in-context information to track the changing meanings. The model was then fine-tuned as described above. On the few-shot instruction task, this improved the test loss marginally, but not the accuracy.

Handling long in-context sequences

The tasks from the machine-learning literature that we experimented with, SCAN 11,66 and COGS 16, feature long sequences as (in-context) study examples. This raises issues for the previous architecture (see the ‘Architecture and optimizer’ section). Specifically, it is intractable to process a single source sequence that consists of the concatenated query input sequence and multiple study example sequences, which could have a worst-case source sequence of length S ≈ 1,500 on COGS and potentially longer in other applications (for each individual study example, the maximum length in SCAN is 9 for inputs and 49 for outputs; the maximum length in COGS is 22 for inputs and 154 for outputs). The bottlenecks are the encoder self-attention layers, which are \({\mathcal{O}}({S}^{2})\). A more scalable procedure for applying a standard transformer (Extended Data Fig. 6) was therefore developed for optimizing MLC on machine-learning benchmarks. We copy each query input sequence m times and concatenate the copies separately with each of the m study examples. This creates m smaller source sequences to be processed separately by the standard transformer encoder. Each of the resulting contextual embeddings is then marked according to its origin in one of the m study examples, which is done by adding an index embedding vector that enables the decoder to see which embedding came from which study example (one for each index 1, …, m). Finally, the set of contextual embeddings is passed to the standard transformer decoder. The decoder cross-attention layers are less expensive, \({\mathcal{O}}(ST)\), because the target sequence length T, which does not include any study examples, is typically much shorter (T ≪ S).
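The source-sequence construction can be sketched at the token level as follows; here an integer index stands in for the index embedding vector, and the separator token is our own placeholder, not the model's actual vocabulary:

```python
def build_sources(query_tokens, study_examples, sep="|"):
    """Form the m smaller source sequences of the scalable MLC variant:
    copy the query input and concatenate it separately with each of the
    m study examples. The integer `idx` plays the role of the index
    embedding that marks which study example each contextual embedding
    came from (a token-level sketch, not the released model code)."""
    sources = []
    for idx, (study_in, study_out) in enumerate(study_examples):
        seq = study_in + [sep] + study_out + [sep] + query_tokens
        sources.append((idx, seq))
    return sources
```

Each of the m sequences is short, so encoder self-attention cost drops from one O(S²) pass to m much smaller passes.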


For each SCAN split, both MLC and basic seq2seq models were optimized for 200 epochs without any early stopping. For COGS, both models were optimized for 300 epochs (also without early stopping), which is slightly more training than the extended amount prescribed in ref. 67 for their strong seq2seq baseline. The batch size was 200 episodes for SCAN and 40 episodes for COGS. This more scalable MLC variant, the original MLC architecture (see the ‘Architecture and optimizer’ section) and basic seq2seq all have approximately the same number of learnable parameters (except for the fact that basic seq2seq has a smaller input vocabulary).

Each SCAN episode contained 10 study examples and 2 query examples (COGS used 8 study and 2 query), such that one query example was a randomly chosen study example (as an auxiliary copy task; see the ‘Meta-training procedures for MLC and MLC variants’ section) and the other query was distinct from the study examples and required generalization. All of the query and study examples were drawn from the training corpus. Each episode was scrambled (with probability 0.95) using a simple word type permutation procedure 30 , 65 , and otherwise was not scrambled (with probability 0.05), meaning that the original training corpus text was used instead. Occasionally skipping the permutations in this way helps to break symmetries that can slow optimization; that is, the association between the input and output primitives is no longer perfectly balanced. Otherwise, all model and optimizer hyperparameters were as described in the ‘Architecture and optimizer’ section.

SCAN: meta-training and testing

During SCAN meta-training (an example episode is shown in Extended Data Fig. 7), each episode is formed by sampling a set of study and query examples from the training corpus of a particular SCAN split (‘add jump’, ‘around right’ and so on). Given these examples, a simple permutation procedure remaps the full set of output actions (‘JUMP’, ‘RUN’, ‘WALK’, ‘LOOK’, ‘TURN LEFT’, ‘TURN RIGHT’) through a random permutation of this same set of actions, and remaps the input primitives (‘jump’, ‘run’, ‘walk’, ‘look’, ‘left’, ‘right’) through another random permutation to the same set of words. Note that several other input words (the mostly ‘functional’ words ‘turn’, ‘twice’, ‘thrice’, ‘around’, ‘opposite’, ‘and’, ‘after’) have stable meanings that can be stored in the model weights. To make sense of an episode, MLC must become adept at inferring, from just a few study examples, how words map to meanings. MLC must also become adept at composition: it must systematically compose the inferred word meanings to correctly answer the queries.
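The permutation step can be sketched as below (a simplified illustration under the stated word/action sets; the function name is ours):

```python
import random

# Primitives with changing meanings; "functional" words keep stable meanings
INPUT_PRIMS = ["jump", "run", "walk", "look", "left", "right"]
OUTPUT_ACTS = ["JUMP", "RUN", "WALK", "LOOK", "TURN LEFT", "TURN RIGHT"]

def permute_episode(pairs, rng=random):
    """Remap SCAN input primitives and output actions through two
    independent random permutations of the same sets, leaving all other
    words (e.g. 'twice', 'after') untouched. `pairs` is a list of
    (command_tokens, action_tokens) study/query examples."""
    in_map = dict(zip(INPUT_PRIMS, rng.sample(INPUT_PRIMS, len(INPUT_PRIMS))))
    out_map = dict(zip(OUTPUT_ACTS, rng.sample(OUTPUT_ACTS, len(OUTPUT_ACTS))))
    return [([in_map.get(w, w) for w in cmd],
             [out_map.get(a, a) for a in acts]) for cmd, acts in pairs]
```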

During SCAN testing (an example episode is shown in Extended Data Fig. 7 ), MLC is evaluated on each query in the test corpus. For each query, 10 study examples are again sampled uniformly from the training corpus (using the test corpus for study examples would inadvertently leak test information). Neither the study nor query examples are remapped; in other words, the model is asked to infer the original meanings. Finally, for the ‘add jump’ split, one study example is fixed to be ‘jump → JUMP’, ensuring that MLC has access to the basic meaning before attempting compositional uses of ‘jump’.

COGS: meta-training and testing

The COGS output expressions were converted to uppercase to remove any incidental overlap between input and output token indices (which MLC, but not basic seq2seq, could exploit). As in SCAN meta-training, an episode of COGS meta-training involves sampling a set of study and query examples from the training corpus (see the example episode in Extended Data Fig. 8 ). The vocabulary in COGS is much larger than in SCAN; thus, the study examples cannot be sampled arbitrarily with any reasonable hope that they would inform the query of interest. Instead, for each vocabulary word that takes a permuted meaning in an episode, the meta-training procedure chooses one arbitrary study example that also uses that word, providing the network an opportunity to infer its meaning. Any remaining study examples needed to reach a total of 8 are sampled arbitrarily from the training corpus.
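The word-coverage selection step can be sketched as below (our own simplification of the procedure; in the paper the choices are random draws rather than a corpus scan in order):

```python
def pick_study_examples(permuted_words, corpus, total=8):
    """Choose study examples so that every word with a permuted meaning
    appears in at least one of them: first pick one corpus example per
    permuted word, then pad with arbitrary corpus examples up to `total`.
    `corpus` is a list of (input_tokens, output_tokens) pairs (a sketch
    of the COGS study-selection step, not the released implementation)."""
    chosen = []
    for w in permuted_words:
        for ex in corpus:
            if w in ex[0] and ex not in chosen:
                chosen.append(ex)
                break
    for ex in corpus:                 # pad to the full study-set size
        if len(chosen) >= total:
            break
        if ex not in chosen:
            chosen.append(ex)
    return chosen[:total]
```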

COGS is a multi-faceted benchmark that evaluates many forms of systematic generalization. To master the lexical generalization splits, the meta-training procedure targets several lexical classes that participate in particularly challenging compositional generalizations. As in SCAN, the main tool used for meta-learning is a surface-level token permutation that induces changing word meaning across episodes. These permutations are applied within several lexical classes; for example, 406 input word types categorized as common nouns (‘baby’, ‘backpack’ and so on) are remapped to the same set of 406 types. The other remapped lexical classes include proper nouns (103 input word types; ‘Abigail’, ‘Addison’ and so on), dative verbs (22 input word types; ‘given’, ‘lended’ and so on) and verbs in their infinitive form (21 input word types; such as ‘walk’, ‘run’). Surface-level word type permutations are also applied to the same classes of output word types. Other verbs, punctuation and logical symbols have stable meanings that can be stored in the model weights. Importantly, although the broad classes are assumed and could plausibly arise through simple distributional learning 68,69, the correspondence between input and output word types is unknown and not used.

In one case, COGS meta-learning goes beyond surface-level remapping to use a minimal amount of semantic structure. To guide the networks toward flexible substitution of common nouns with proper nouns, any common noun input token has an independent chance of replacement (probability 0.01) with an arbitrary proper noun input token, while also removing the preceding determiner token. Independently, any common noun output token can also be arbitrarily remapped (again with probability 0.01) to a proper noun output token, with the corresponding minimal change to the structural form to remove the determiner (if remapping the output token ‘cookie’ to ‘John’, the cookie( x i ) predicate is removed, occurrences of variable x i are replaced with ‘John’ and variables j  >  i are decremented by 1). As before, the correspondence between input and output tokens is unknown, both at the levels of a sentence and the whole dataset. Thus, during an episode of meta-training, a common noun (phrase) might correspond to a logical form expressing a proper noun or vice versa. At test, MLC must sort this out and recover how proper and common nouns work on the basis of the study examples.

During the COGS test (an example episode is shown in Extended Data Fig. 8 ), MLC is evaluated on each query in the test corpus. For each query, eight study examples are sampled from the training corpus, using the same procedure as above for picking study examples that facilitate word overlap (note that picking study examples from the generalization corpus would inadvertently leak test information). Neither the study nor query examples are remapped to probe how models infer the original meanings.

Reporting summary

Further information on research design is available in the  Nature Portfolio Reporting Summary linked to this article.

Data availability

Human behavioural data are available at Zenodo ( ). The complete set of human and machine responses is also illustrated and viewable in HTML at the previous link. The human behavioural data also appeared in a previous non-archival conference paper 70 .

Code availability

MLC source code and pretrained models are available online 71 , including MLC models of human behaviour ( ) and MLC models applied to machine learning benchmarks ( ). Any additional code is available on request.

References

Fodor, J. A. & Pylyshyn, Z. W. Connectionism and cognitive architecture: a critical analysis. Cognition 28, 3–71 (1988).

Marcus, G. F. The Algebraic Mind: Integrating Connectionism and Cognitive Science (MIT Press, 2003).

Johnson, K. On the systematicity of language and thought. J. Philos. 101 , 111–139 (2004).

Symons, J. & Calvo, P. (eds) The Architecture of Cognition: Rethinking Fodor and Pylyshyn’s Systematicity Challenge (MIT Press, 2014).

Hill, F. et al. Environmental drivers of systematicity and generalisation in a situated agent. In Proc. International Conference on Learning Representations (ICLR) (2020).

O’Reilly, R. C. et al. in The Architecture of Cognition: Rethinking Fodor and Pylyshyn’s Systematicity Challenge (eds Calvo, P. & Symons, J.) 191–226 (MIT Press, 2014).

Nam, A. J. & McClelland, J. L. What underlies rapid learning and systematic generalization in humans? Preprint at (2021).

Smolensky, P. Tensor product variable binding and the representation of symbolic structures in connectionist networks. Artif. Int. 46 , 159–216 (1990).

Pollack, J. B. Recursive distributed representations. Artif. Int. 46 , 77–105 (1990).

Kriete, T., Noelle, D. C., Cohen, J. D. & O’Reilly, R. C. Indirection and symbol-like processing in the prefrontal cortex and basal ganglia. Proc. Natl Acad. Sci. USA 110 , 16390–16395 (2013).

Lake, B. M. & Baroni, M. Generalization without systematicity: on the compositional skills of sequence-to-sequence recurrent networks. In Proc. International Conference on Machine Learning (ICML) (eds. Dy, J. & Krause, A.) 2873–2882 (PMLR, 2018).

Ettinger, A., Elgohary, A., Phillips, C. & Resnik, P. Assessing composition in sentence vector representations. In Proc. 7th International Conference on Computational Linguistics, (COLING 2018) 1790–1801 (Association for Computational Linguistics, 2018).

Bahdanau, D. et al. CLOSURE: assessing systematic generalization of CLEVR models. In Proc. NAACL Workshop on Visually Grounded Interaction and Language (ViGIL) (2019).

Keysers, D. et al. Measuring compositional generalization: a comprehensive method on realistic data. In Proc. International Conference on Learning Representations (ICLR) (2019).

Yu, L. & Ettinger, A. Assessing phrasal representation and composition in transformers. In Proc. Conference on Empirical Methods in Natural Language Processing (EMNLP) 4896–4907 (Association for Computational Linguistics, 2020).

Kim, N. & Linzen, T. COGS: a compositional generalization challenge based on semantic interpretation. In Proc. Conference on Empirical Methods in Natural Language Processing (EMNLP) 9087–9105 (2020).

Hupkes, D., Dankers, V., Mul, M. & Bruni, E. Compositionality decomposed: how do neural networks generalize? J. Artif. Int. Res. 67 , 757–795 (2020).

Press, O. et al. Measuring and narrowing the compositionality gap in language models. Preprint at (2022).

Brown, T. B. et al. Language models are few-shot learners. In Proc. Advances in Neural Information Processing Systems 33 (NeurIPS) (eds Larochelle, H. et al.) 1877–1901 (Curran Associates, 2020).

OpenAI. GPT-4 technical report. Preprint at (2023).

Hospedales, T., Antoniou, A., Micaelli, P. & Storkey, A. Meta learning in neural networks: a survey. IEEE Trans. Pattern Anal. Mach. Int. 44 , 5149–5169 (2022).

Reber, A. Implicit learning of artificial grammars. Verb. Learn. Verb. Behav. 5 , 855–863 (1967).

Aslin, R. N., Saffran, J. R. & Newport, E. L. Computation of conditional probability statistics by 8-month-old infants. Psychol. Sci. 9 , 321–324 (1998).

Stuhlmuller, A., Tenenbaum, J. B. & Goodman, N. D. Learning structured generative concepts. In Proc. Thirty-Second Annual Conference of the Cognitive Science Society, 2296–2301 (2010).

Sutskever, I., Vinyals, O. & Le, Q. V. Sequence to sequence learning with neural networks. In Proc. Advances in Neural Information Processing Systems (eds Ghahramani, Z. et al.) (Curran Associates, 2014).

Vaswani, A. et al. Attention is all you need. In Proc. Advances in Neural Information Processing Systems 30 (eds Guyon, I. et al.) 5998–6008 (Curran Associates, 2017).

Markman, E. M. & Wachtel, G. F. Children’s use of mutual exclusivity to constrain the meanings of words. Cogn. Psychol. 20 , 121–157 (1988).

Haiman, J. The iconicity of grammar: isomorphism and motivation. Language 56 , 515–540 (1980).

de Ruiter, L., Theakston, A., Brandt, S. & Lieven, E. Iconicity affects children’s comprehension of complex sentences: the role of semantics, clause order, input and individual differences. Cognition 171 , 202–224 (2018).

Lake, B. M. Compositional generalization through meta sequence-to-sequence learning. In Proc. Advances in Neural Information Processing Systems (NeurIPS) 32 (eds Wallach, H. et al.) 9791–9801 (Curran Associates, 2019).

Conklin, H., Wang, B., Smith, K. & Titov, I. Meta-learning to compositionally generalize. In Proc. 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (ACL-IJCNLP) 3322–3335 (Association for Computational Linguistics, 2021).

Chan, S. C. Y. et al. Data distributional properties drive emergent in-context learning in transformers. In Advances in Neural Information Processing Systems 35 (eds Koyejo, S. et al.) 18878–18891 (Curran Associates, 2022).

Myung, J. I. & Pitt, M. A. in Stevens’ Handbook of Experimental Psychology and Cognitive Neuroscience (ed. Wixted, J. T.) 85–118 (John Wiley & Sons, 2018).

Collins, A. G. E. & Frank, M. J. Cognitive control over learning: creating, clustering, and generalizing task-set structure. Psychol. Rev. 120 , 190–229 (2013).

Chen, X., Liang, C., Yu, A. W., Song, D. & Zhou, D. Compositional generalization via neural-symbolic stack machines. In Proc. Advances in Neural Information Processing Systems 33 (eds Larochelle, H. et al.) 1690–1701 (Curran Associates, 2020).

Russin, J., Jo, J., O’Reilly, R. C. & Bengio, Y. Systematicity in a recurrent neural network by factorizing syntax and semantics. In Proc. 42nd Annual Meeting of the Cognitive Science Society (eds Denison, S. et al.) (Cognitive Science Society. 2020).

Liu, Q. et al. Compositional generalization by learning analytical expressions. Adv. Neural Inf. Proces. Syst. 33 , 11416–1142 (2020).

Nye, M. I., Solar-Lezama, A., Tenenbaum, J. B. & Lake, B. M. Learning compositional rules via neural program synthesis. In Proc. Advances in Neural Information Processing Systems (NeurIPS) 33 (eds Larochelle, H. et al.) (Curran Associates, 2020).

Singh, G., Deng, F. & Ahn, S. Illiterate DALL-E learns to compose. In Proc. ICLR (2022).

Smolensky, P., McCoy, R. T., Fernandez, R., Goldrick, M. & Gao, J. Neurocompositional computing: from the central paradox of cognition to a new generation of AI systems. AI Mag. (2022).

Zhou, D. et al. Least-to-most prompting enables complex reasoning in large language models. In Proc. ICLR (2023).

Franklin, N. T. & Frank, M. J. Generalizing to generalize: humans flexibly switch between compositional and conjunctive structures during reinforcement learning. PLoS Comput. Biol. 16 , e1007720 (2020).

Dekker, R. B., Otto, F. & Summerfield, C. Curriculum learning for human compositional generalization. Proc. Natl Acad. Sci. USA 119 , e2205582119 (2022).

Gandhi, K. & Lake, B. M. Mutual exclusivity as a challenge for deep neural networks. In Proc. Advances in Neural Information Processing Systems (NeurIPS) 33 (eds Larochelle, H. et al.) 14182–14192 (Curran Associates, 2020).

Griffiths, T. L., Chater, N., Kemp, C., Perfors, A. & Tenenbaum, J. B. Probabilistic models of cognition: exploring representations and inductive biases. Trends Cogn. Sci. 14 , 357–364 (2010).

Kemp, C., Perfors, A. & Tenenbaum, J. B. Learning overhypotheses with hierarchical Bayesian models. Dev. Sci. 10 , 307–321 (2007).

Grant, E., Finn, C., Levine, S., Darrell, T. & Griffiths, T. Recasting gradient-based meta-learning as hierarchical bayes. In Proc. International Conference on Learning Representations (ICLR) (2019).

Binz, M. et al. Meta-learned models of cognition. Preprint at (2023).

Grant, E., Peterson, J. C. & Griffiths, T. Learning deep taxonomic priors for concept learning from few positive examples. In Proc. Annual Meeting of the Cognitive Science Society (eds Goel, A. K. et al.) 1865–1870 (Cognitive Science Society, 2019).

Dezfouli, A., Nock, R. & Dayan, P. Adversarial vulnerabilities of human decision-making. Proc. Natl Acad. Sci. USA 117 , 29221–29228 (2020).

Kumar, S., Dasgupta, I., Daw, N. D., Cohen, J. D. & Griffiths, T. L. Disentangling abstraction from statistical pattern matching in human and machine learning. PLoS Comput. Biol. 19 , e1011316 (2023).

Santoro, A., Bartunov, S., Botvinick, M., Wierstra, D. & Lillicrap, T. Meta-learning with memory-augmented neural networks. In Proc. International Conference on Machine Learning (ICML) 1842–1850 (PMLR, 2016).

Wang, J. et al. Learning to reinforcement learn. Preprint at (2017).

McCoy, R. T., Grant, E., Smolensky, P., Griffiths, T. L. & Linzen, T. Universal linguistic inductive biases via meta-learning. In Proc. 42nd Annual Conference of the Cognitive Science Society (eds Denison, S. et al.) (Cognitive Science Society, 2020).

Vinyals, O., Fortunato, M. & Jaitly, N. Pointer networks. In Proc. Advances in Neural Information Processing Systems (eds Cortes, C. et al.) (Curran Associates, 2015).

Chen, Y., Zhong, R., Zhan, S., Karypis, G. & He, H. Meta-learning via language model in-context tuning. In Proc. 60th Annual Meeting of the Association for Computational Linguistics (ACL) 719–730 (Association for Computational Linguistics, 2022).

Ramesh, A., Dhariwal, P., Nichol, A., Chu, C. & Chen, M. Hierarchical text-conditional image generation with CLIP latents. Preprint at (2022).

Piantadosi, S. T., Palmeri, H. & Aslin, R. Limits on composition of conceptual operations in 9-month-olds. Infancy 23 , 310–324 (2018).

Piantadosi, S. & Aslin, R. Compositional reasoning in early childhood. PLoS ONE 11 , e0147734 (2016).

Bergelson, E. The comprehension boost in early word learning: older infants are better learners. Child Dev. Perspect. 14 , 142–149 (2020).

Gureckis, T. M. et al. psiTurk: An open-source framework for conducting replicable behavioral experiments online. Behav. Res. Methods 48 , 829–842 (2015).

Heim, I. & Kratzer, A. Semantics in Generative Grammar (Blackwell, 1998).

Radford, A., Narasimhan, K. R., Salimans, T. & Sutskever, I. Improving language understanding by generative pre-training. Preprint at (2018).

Hendrycks, D. & Gimpel, K. Gaussian error linear units (GELUs). Preprint at (2020).

Mitchell, E., Finn, C. & Manning, C. Challenges of acquiring compositional inductive biases via meta-learning. In Proc. AAAI Workshop on Meta-Learning and MetaDL Challenge 138–148 (2021).

Loula, J., Baroni, M. & Lake, B. M. Rearranging the familiar: testing compositional generalization in recurrent networks. Preprint at (2018).

Csordás, R., Irie, K. & Schmidhuber, J. The devil is in the detail: simple tricks improve systematic generalization of transformers. In Proc. EMNLP 2021—2021 Conference on Empirical Methods in Natural Language Processing 619–634 (Association for Computational Linguistics, 2021).

Elman, J. Finding structure in time. Cogn. Sci. 14 , 179–211 (1990).

Schulte im Walde, S. Experiments on the automatic induction of German semantic verb classes. Comput. Linguist. 32 , 159–194 (2006).

Lake, B. M., Linzen, T. & Baroni, M. Human few-shot learning of compositional instructions. In Proc. 41st Annual Conference of the Cognitive Science Society (eds Goel, A. K. et al.) 611–617 (Cognitive Science Society, 2019).

Lake, B. M. brendenlake/MLC: meta-learning for compositionality (v1.0.0). Zenodo (2023).

Acknowledgements

We thank T. Linzen for involvement in the design of the behavioural studies; Y. Boureau, T. Brochhagen, B. Karrer, T. Kwan, G. Murphy and J. Russin for feedback on earlier versions of this Article; the members of the NYU ConCats group, M. Frank, K. Gulordava, G. Kruszewski, R. Levy and A. Williams for suggestions; and N. Kim for guidance on using COGS.

Author information

Authors and Affiliations

Department of Psychology and Center for Data Science, New York University, New York, NY, USA

Brenden M. Lake

Catalan Institution for Research and Advanced Studies (ICREA), Barcelona, Spain

Marco Baroni

Department of Translation and Language Sciences, Universitat Pompeu Fabra, Barcelona, Spain

Contributions

B.M.L. and M.B. designed the research and edited the Article. B.M.L. collected and analysed the behavioural data, designed and implemented the models, and wrote the initial draft of the Article.

Corresponding author

Correspondence to Brenden M. Lake .

Ethics declarations

Competing interests

The authors declare no competing interests.

Peer review

Peer review information

Nature thanks Aaron Courville and the other, anonymous, reviewer(s) for their contribution to the peer review of this work.

Additional information

Publisher’s note Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Extended data figures and tables

Extended Data Fig. 1 Few-shot instruction learning task with full set of queries.

Based on the study instructions (A; headings were not provided to participants), humans and the MLC model executed 10 query instructions by generating coloured circles from a fixed inventory (B; headings were not provided to participants). The percent of participants who produced each sequence exactly as prescribed algebraically is shown. Similarly, the percent of samples from MLC that match the prescribed sequence is shown in parentheses, which correlates with the human values (Pearson’s r  = 0.788, p  = 0.031 via permutation test, two-tailed, n  = 10 items). The words and colours were randomized for each participant.

Extended Data Fig. 2 The gold interpretation grammar that defines the human instruction learning task.

The double brackets ( ⟦ ⟧ ) denote the interpretation function for translating linguistic instructions into sequences of abstract outputs (colour circles). Each human participant received a different permutation of words and colours. Symbols x i and u i denote variables: x i applies to arbitrary non-empty strings, while u i applies only to ‘dax’, ‘wif’, ‘lug’, and ‘zup’.

Extended Data Fig. 3 Using the gold interpretation grammar for processing ‘zup blicket wif kiki dax fep’.

Each step is annotated with the next re-write rules to be applied, and how many times (e.g., 3 × , since some steps have multiple parallel applications). A rule’s condition is met if and only if it matches the entire string inside the brackets ( ⟦ ⟧ ); for instance, only the ‘kiki’ rule applies on the first step because its condition matches two arbitrary non-empty sequences on either side of ‘kiki,’ thus being able to encompass the entire input.

Extended Data Fig. 4 Example meta-learning episode and how it is processed by different MLC variants.

The interpretation grammar defines the episode but is not observed directly and must be inferred implicitly. Set 1 has 14 input/output examples consistent with the grammar, used as Study examples for all MLC variants. Set 2 has 10 examples, used as Query examples for most MLC variants (except copy only). Pseudocode for the bias-based transformation process is shown here for the instruction ‘tufa lug fep’. This transformation is applied to the query outputs before MLC and MLC (joint) process them. Here, flip ( p ) is a coin flip that returns True with probability p .

Extended Data Fig. 5 Human responses for the (A) few-shot learning task and (B) open-ended task that most favour MLC (joint) compared to a MLC model optimized for individual tasks only.

Panel (A) shows the average log-likelihood advantage for MLC (joint) across five patterns (that is, ll(MLC (joint)) - ll(MLC)), with the algebraic target shown here only as a reference. A black circle indicates a colour that was unused in the study set. Panel (B) shows three participant responses.

Extended Data Fig. 6 Handling long in-context sequences with a MLC transformer.

The query input sequence (shown as ‘jump twice after run twice’) is copied and concatenated to each of the m study examples, leading to m separate source sequences (3 shown here). A shared standard transformer encoder (bottom) processes each source sequence to produce latent (contextual) embeddings. The contextual embeddings are marked with the index of their study example, combined with a set union to form a single set of source messages, and passed to the decoder. The standard decoder (top) receives this message from the encoder, and then produces the output sequence for the query. Each box is an embedding (vector); input embeddings are light blue and latent embeddings are dark blue.

Extended Data Fig. 7 Example SCAN meta-training (top) and test (bottom) episodes for the ‘add jump’ split.

The word and action meanings are changing across the meta-training episodes (‘look’, ‘walk’, etc.) and must be inferred from the study examples. During the test episode, the meanings are fixed to the original SCAN forms. Here, the latter probes a compositional use of ‘jump’.

Extended Data Fig. 8 Example COGS meta-training (top) and test (bottom) episodes.

Word meanings are changing across the meta-training episodes (here, ‘driver’ means ‘PILLOW’, ‘shoebox’ means ‘SPEAKER’ and so on) and must be inferred from the study examples. The meanings are fixed to the original forms during the test episode. This test episode probes the understanding of ‘Paula’ (proper noun), which occurs in just one of COGS’s original training patterns.

Supplementary information

Supplementary 1–3 (additional modelling results, experiment probing additional nuances in inductive biases, and few-shot instruction learning with OpenAI models), Supplementary Figs. 1–7 and Supplementary References.

Reporting Summary

Rights and permissions.

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit .

About this article

Cite this article.

Lake, B.M., Baroni, M. Human-like systematic generalization through a meta-learning neural network. Nature 623, 115–121 (2023).

Received: 04 January 2023

Accepted: 21 September 2023

Published: 25 October 2023

Issue Date: 02 November 2023



Generative artificial intelligence (AI) in education

  • Department for Education

Updated 26 October 2023

Applies to England

© Crown copyright 2023

This publication is licensed under the terms of the Open Government Licence v3.0 except where otherwise stated. To view this licence, visit or write to the Information Policy Team, The National Archives, Kew, London TW9 4DU, or email: [email protected] .

Where we have identified any third party copyright information you will need to obtain permission from the copyright holders concerned.

This document sets out the position of the Department for Education (DfE) on the use of generative artificial intelligence (AI), including large language models (LLMs) like ChatGPT or Google Bard, in the education sector.

This statement:

  • is informed by the government’s white paper on a pro-innovation approach to AI regulation
  • follows the government’s announcement to set up an expert Frontier AI Taskforce to help the UK adopt the next generation of safe AI

Understanding generative AI

Generative AI refers to technology that can be used to create new content based on large volumes of data that models have been trained on, drawn from a variety of works and other sources. ChatGPT and Google Bard are generative AI tools built on large language models (LLMs).

Tools such as ChatGPT and Google Bard can:

  • answer questions
  • complete written tasks
  • respond to prompts in a human-like way

Other forms of generative AI can produce:

  • simulations

AI technology is not new and we already use it in everyday life for:

  • email spam filtering
  • media recommendation systems
  • navigation apps
  • online chatbots

However, recent advances in technology mean that we can now use tools such as ChatGPT and Google Bard to produce AI-generated content. This creates opportunities and challenges for the education sector.

Opportunities for the education sector

Generative AI tools are good at quickly:

  • analysing, structuring, and writing text
  • turning prompts into audio, video and images

When used appropriately, generative AI has the potential to:

  • reduce workload across the education sector
  • free up teachers’ time, allowing them to focus on delivering excellent teaching

However, the content produced by generative AI could be:

  • inappropriate
  • taken out of context and without permission
  • out of date or unreliable

Using AI effectively

Teacher workload is an important issue and we are committed to helping teachers spend less time on non-pupil facing activities.

We are working with the education sector and with experts to identify opportunities to improve education and reduce workload using generative AI.

Having access to generative AI is not a substitute for having knowledge in our long-term memory. To make the most of generative AI, we need to have the knowledge to draw on.

We can only:

  • learn how to write good prompts if we can write clearly and understand the domain we are asking about
  • sense-check the results if we have a schema against which to compare them

Generative AI tools can make certain written tasks quicker and easier, but cannot replace the judgement and deep subject knowledge of a human expert. It is more important than ever that our education system ensures pupils acquire knowledge, expertise and intellectual capability.

The education sector should:

  • make the most of the opportunities that technology provides
  • use technology safely and effectively to deliver excellent education that prepares pupils to contribute to society and the future workplace

The limitations of generative AI tools

Generative AI tools can produce unreliable information; therefore, any content produced requires professional judgement to check for appropriateness and accuracy.

Generative AI:

  • returns results based on the dataset it has been trained on – for example, a generative AI tool may not have been trained on the English curriculum
  • may not provide results that are comparable with a human-designed resource developed in the context of our curriculum

Whatever tools or resources are used to produce plans, policies or documents, the quality and content of the final document remains the professional responsibility of the person who produced it and the organisation they belong to.

Schools and colleges may wish to review homework policies and other types of unsupervised study to account for the availability of generative AI.

Higher education institutions may wish to review the intellectual asset management guide with regard to developing student policies on the IP students create, and on how they interact with and use the IP of others, in light of generative AI use.

Protecting data, pupils and staff

Generative AI:

  • stores and learns from the data it is given – any data entered should not be identifiable
  • can create believable content, including more credible scam emails requesting payment – people interact with generative AI differently, and the content may seem more authoritative and believable

Schools and colleges should:

  • protect personal and special category data in accordance with data protection legislation
  • not allow or cause intellectual property, including pupils’ work, to be used to train generative AI models without appropriate consent or an exemption to copyright
  • review and strengthen their cyber security by referring to the cyber standards – generative AI could increase the sophistication and credibility of attacks
  • understand what they need to do to protect pupils and students online, and how they can limit children’s exposure to risks from the school’s or college’s IT system
  • refer to the filtering and monitoring standard to make sure they have the appropriate systems in place

Find out more on:

  • ChatGPT and LLMs: what’s the risk
  • the principles for the security of machine learning

Data privacy

It is important to be aware of the data privacy implications when using generative AI tools, as is the case with any new technology. Personal and special category data must be protected in accordance with data protection legislation.

If it is strictly necessary to use personal and special category data in generative AI tools within their setting, the education institution must ensure that the products and procedures comply with data protection legislation and their existing data privacy policies to protect the data.

Education institutions should also be open and transparent, ensuring the data subjects (pupils) understand their personal or special category data is being processed using AI tools.

Find out more about:

  • personal data
  • special category data

Intellectual property

Most generative AI tools will use the inputs submitted by users to further train and refine their models.

However, pupils own the intellectual property ( IP ) rights to original content they create. Original content is likely to include anything that shows working out or is beyond multiple choice questions. Intellectual property can only be used to train AI if there is consent from the rights holder or an exemption to copyright applies.

Some tools allow users to opt out of inputs being used to train the models.

Education institutions must not allow or cause pupils’ original work to be used to train generative AI models unless they have appropriate consent or exemption to copyright. Consent would need to be from the student if over 18, and from their parent or legal guardian if under 18. 

Exemptions to copyright are limited, and education institutions may wish to take legal advice to ensure they are acting within the law.

Formal assessments

Schools, colleges, universities and awarding organisations need to continue to take reasonable steps, where applicable, to prevent malpractice involving the use of generative AI.

The Joint Council for Qualifications has published guidance on AI use in assessments to support teachers and exam centres in protecting the integrity of qualifications. This guidance includes information on:

  • what counts as AI misuse
  • the requirements for teachers and exam centres to help prevent and detect malpractice

Knowledge and skills for the future

To harness the potential of generative AI, students will benefit from a knowledge-rich curriculum which allows them to become well-informed users of technology and understand its impact on society. Strong foundational knowledge ensures students are developing the right skills to make best use of generative AI.

The education sector needs to:

  • prepare students for changing workplaces
  • teach students how to use emerging technologies, such as generative AI, safely and appropriately

At different stages of education, this teaching may include:

  • the limitations, reliability, and potential bias of generative AI
  • how information on the internet is organised and ranked
  • online safety to protect against harmful or misleading content
  • understanding and protecting IP rights
  • creating and using digital content safely and responsibly
  • the impact of technology, including disruptive and enabling technologies
  • foundational knowledge about how computers work, connect with each other, follow rules and process data

The Office for AI is currently conducting research into the skills that will be needed for future workforce training.

The education system should:

  • support students, particularly young pupils, to identify and use appropriate resources to support their ongoing education
  • encourage effective use of age-appropriate resources (which, in some instances, may include generative AI )
  • prevent over-reliance on a limited number of tools or resources

DfE will continue to work with experts to:

  • consider and respond to the implications of generative AI and other emerging technologies
  • support primary and secondary schools to teach a knowledge-rich computing curriculum to children up to the age of 16




  1. (PDF) Online Education and Its Effective Practice: A Research Review

    Using a qualitative content analysis approach, this study reviewed 45 published studies and research works on online teaching and learning since 2008, primarily focusing on how theories ...

  2. (Pdf) Research on Online Learning

    This paper analyzes the difficulties faced by the students and teachers in online teaching learning process during the COVID-19 pandemic. Online learning is an alternative platform that replaced ...

  3. Online and face‐to‐face learning: Evidence from students' performance

    Evaluation of evidence‐based practices in online learning: A meta‐analysis and review of online learning studies (Report No. ed‐04‐co‐0040 task 0006). U.S. Department of Education, Office of Planning, Evaluation, and Policy Development, Washington DC. ... As reported in 355 research reports, summaries and papers. North Carolina State ...

  4. Exploring the Evidence on Virtual and Blended Learning

    Past research about online learning is limited and mostly focused on post-secondary and adult education. The studies that do exist in K-12 education find that students participating in online learning generally perform similarly to or worse than peers who have access to traditional face-to-face instruction (with programs that are 100% online ...

  5. Online education in the post-COVID era

    The coronavirus pandemic has forced students and educators across all levels of education to rapidly adapt to online learning. The impact of this — and the developments required to make it work ...

  6. The effects of online education on academic success: A meta ...

    According to the study of Bernard et al. (2004), this meta-analysis focuses on the activities done in online education lectures. As a result of the research, an overall effect size close to zero was found for online education utilizing more than one generation technology for students at different levels.

  7. (PDF) A study of effectiveness of online learning

    Rezabek, Landra. "Facilitating Learning" (PDF). Association for Educational Communications and Technology. Retrieved 18 March 2016. An Analysis of the Effectiveness of Online Learning in Colleges ...

  8. Full article: Online education next wave: peer to peer learning

    Current online education technologies and platforms emphasize interactions between professors and students. Through the holistic model of online education, we emphasize in this article student-to-student (peer-to-peer) learning in the online mode similar to what exists in the traditional F2F mode. The evolving student-to-student interactional ...

  9. A systematic review of research on online teaching and learning from

    1. Introduction. Online learning has been on the increase in the last two decades. In the United States, though higher education enrollment has declined, online learning enrollment in public institutions has continued to increase (Allen & Seaman, 2017), and so has the research on online learning. There have been review studies conducted on specific areas on online learning such as innovations ...

  10. Full article: Online Education: Worldwide Status, Challenges, Trends

    Many conferences and journals have had themes and special issues focusing on online education. Research related to online business education was first initiated in 1990s by Information Systems (IS) researchers like Alavi and Leidner ... The paper emphasizes the need for openness to new modes of education like online learning in its various modes.

  11. Impact of online classes on the satisfaction and performance of

    The aim of the study is to identify the factors affecting students' satisfaction and performance regarding online classes during the pandemic period of COVID-19 and to establish the relationship between these variables. The study is quantitative in nature, and the data were collected from 544 respondents through online survey who were studying the business management (B.B.A or M.B.A) or ...

  12. PDF The Effectiveness and Challenges of Online Learning for Secondary ...

    online learning allows students to study in a "safe" environment, without experiencing embarrassment about asking questions. According to Harrison (2018), young children can access pictures and videos, navigate 'Youtube', and interact and participate in games and digital applications that are suited to their age.

  13. Integrating students' perspectives about online learning: a hierarchy

    This article reports on a large-scale (n = 987), exploratory factor analysis study incorporating various concepts identified in the literature as critical success factors for online learning from the students' perspective, and then determines their hierarchical significance. Seven factors--Basic Online Modality, Instructional Support, Teaching Presence, Cognitive Presence, Online Social ...

  14. PDF Students' Perceptions towards the Quality of Online Education: A

    online education courses can be found in a survey conducted by the U.S. Department of Education, which revealed that more than 54,000 online education courses were being offered in 1998, with over 1.6 million students enrolled (cited in Lewis, et al., 1999). In a more recent study, Allen and Seaman (2003) reported that: (a) over 1.6 million

  15. Traditional Learning Compared to Online Learning During the COVID-19

    By examining the strategic goals of online learning, college facilitators, faculty, and instructors find that while online education thus targets learners, develops their skills, encourages student participation, and promotes scientific innovation, its full implementation remains underdeveloped (Andrade et al., 2020). Some universities have ...

  16. PDF Online Education and Its Effective Practice: A Research Review

    The purpose of this paper is to pro- ... The research methodology for this study was to review published studies and research on online teaching and learning, the range of which included literature reviews prior to 2008 and empirical research after 2008. For purposes of this study, online education is operationally defined as a

  17. Assessing the Impact of Online-Learning Effectiveness and Benefits in

    Online learning is one of the educational solutions for students during the COVID-19 pandemic. Worldwide, most universities have shifted much of their learning frameworks to an online learning model to limit physical interaction between people and slow the spread of COVID-19. The effectiveness of online learning depends on many factors, including student and instructor self-efficacy, attitudes ...

  18. Students' experience of online learning during the COVID‐19 pandemic: A

    Online learning has been widely adopted during the COVID-19 pandemic to ensure the continuation of K-12 education. Student success in K-12 online education is substantially lower than in conventional schools. Students experienced various difficulties related to the delivery of online learning. What this paper adds

  19. Is Online Learning Effective?

    Now a report from UNESCO, the United Nations' educational and cultural organization, says that overreliance on remote learning technology during the pandemic led to "staggering" education ...

  20. Research Papers in Education: Vol 38, No 6 (Current issue)

    Article | Published online: 19 Oct 2023. Intertextuality and the advance of mathematisation in young children's inscriptions. Maulfry Worthington et al. Article | Published online: 11 Sep 2023. View all latest articles. Explore the current issue of Research Papers in Education, Volume 38, Issue 6, 2023.

  21. (PDF) A RESEARCH PROJECT REPORT ON To Study on Impact of The Online

    Research was on online learning impact on the student of higher education. and what was the impact of COVID -19 on the student education. ... Secondary: Journals, research papers and internet ...

  22. The Impact of Online Learning on Students' Achievements

    This paper aims to measure learners' preferences for a specific teaching format (online, hybrid, or face-to-face) based on their experience, usage, and interaction with e-learning platforms (Moodle/MS Teams), on their participation in e-learning courses delivered via online streaming platforms (Zoom), on teaching staff skills and teaching ...

  23. Research Papers in Education

    Journal overview. Research Papers in Education has developed an international reputation for publishing significant research findings across the discipline of education. The distinguishing feature of the journal is that we publish longer articles than most other journals, to a limit of 12,000 words. We particularly focus on full accounts of ...

  24. When constellations align: What early childhood pre‐service teachers

    This paper explores our own engagement with these questions as arts educators who deliver courses online. ... suggests that PSTs may not be aware of the quality of experiences they are missing out on when learning online, this research suggests that many do: I could be missing out on hands-on experiences, exploring different materials/textures ...

  25. List of issues Research Papers in Education

    Volume 8 1993. Volume 7 1992. Volume 6 1991. Volume 5 1990. Volume 4 1989. Volume 3 1988. Volume 2 1987. Volume 1 1986. Browse the list of issues and latest articles from Research Papers in Education.

  26. Online Education

    Summary. This paper describes what online education means. A key study done by the University of Kansas' Department of Education confirmed that, online courses had the ability to be as effective as traditional courses involving face-to-face interactions…. Download full paper File format: .doc, available for editing.

  27. SciSciNet: A large-scale open data lake for the science of science research

    Here we present SciSciNet, a large-scale open data lake for the science of science research, covering over 134M scientific publications and millions of external linkages to funding and public uses ...

  28. Human-like systematic generalization through a meta-learning neural

    Our research adds to a growing literature, reviewed previously [48], on using meta-learning for understanding human [49,50,51] or human-like behaviour [52,53,54]. In our experiments, only MLC closely ...

  29. Generative artificial intelligence (AI) in education

    This document sets out the position of the Department for Education (DfE) on the use of generative artificial intelligence (AI), including large language models (LLMs) like ChatGPT or Google Bard ...