Langlois ÉV, Daniels K, Akl EA, editors. Evidence Synthesis for Health Policy and Systems: A Methods Guide. Geneva: World Health Organization; 2018 Oct 8.
Methods commentary: Quasi-experimental studies in health systems evidence synthesis
Peter C. Rockers
- Quasi-experimental (QE) studies, involving a nonrandomized, quantitative approach to causal inference, can be incorporated into evidence used to inform health policy decisions.
- Studies using QE methods often produce evidence under real-world scenarios and may have significantly lower costs than experimental studies.
- Use of QE studies in evidence synthesis entails deciding which study designs to include, establishing a robust search strategy, assessing the quality of identified studies and deciding how to include QE effect estimates in meta-analyses.
- Meta-synthesis review is a form of higher-order synthesis focused on a policy area (rather than a discrete policy intervention) and using evidence from multiple sources, including QE evidence not previously included in systematic reviews.
- An interactive meta-synthesis platform may be effective for capturing broad bodies of evidence relevant to meta-synthesis review, which may be too large and diverse to fit easily in a traditional written review product.
- Operators of meta-synthesis platforms can take an active role in producing primary research, especially by identifying priority research questions and facilitating the sharing of raw data amenable to QE analysis.
- INTRODUCTION
Quasi-experimental (QE) studies have a key role in the development of bodies of evidence to both inform health policy decisions and guide investments for health systems strengthening. Studies of this type entail a nonrandomized, quantitative approach to causal inference, which may be applied prospectively (as in a trial) or retrospectively (as in the analysis of routine observational or administrative data) (1). Although experimental designs are usually preferable when they are feasible, QE methods can produce causal estimates of policy impact and in some cases have advantages over experimental designs with respect to external validity, feasibility and cost (2–5). However, only under particular circumstances of design and implementation will QE studies yield unbiased causal effects. Much of the recent focus on QE methods in the field of health policy and systems research (HPSR) has centred on identifying and incorporating high-quality primary research studies into quantitative systematic reviews and meta-analyses (6, 7). This is an important aspect, but the value of QE studies extends beyond their substitutability for experimental studies. Systematic reviews come in a variety of forms, some more quantitative than others (8), and QE studies can provide useful information for most of these forms. Furthermore, actual policy decisions require triangulation of evidence from multiple sources, including primary research studies and systematic reviews (9) – a form of meta-synthesis – and QE studies can contribute importantly to this process.
This Methods Guide presents a broad view of evidence synthesis in the field of HPSR, and a similarly broad view of the role of QE studies within such synthesis is warranted. This commentary briefly discusses four aspects of this role: QE studies in systematic reviews, QE studies in meta-synthesis reviews, meta-synthesis platforms and the production of new QE studies of priority questions.
- QUASI-EXPERIMENTAL STUDIES IN SYSTEMATIC REVIEWS
Quasi-experimental studies offer certain advantages over experimental methods and should be considered for inclusion in systematic reviews of HPSR (4). Studies using QE methods often produce evidence under real-world scenarios that are not controlled by the researcher, whereas experiments are usually implemented under researcher control, a factor that may introduce external validity concerns. In addition, QE studies based on secondary analyses of administrative data usually have significantly lower costs than would be incurred for similar experimental studies. Finally, policy questions that may be difficult to investigate experimentally because of feasibility, political or ethical constraints can often be addressed using a QE design. Like experimental studies and studies with other designs, QE studies can produce valuable information on contextual factors and causal mechanisms that might be synthesized in quantitative or qualitative systematic reviews (10).
The advantages of QE studies in estimating causal impacts are realized only when the relevant methodologies are employed appropriately, resulting in high internal validity. Perhaps because of concerns about study quality – or about reviewers' inability to assess study quality accurately and consistently – QE evidence has been screened out of most systematic reviews of HPSR on the basis of study design criteria (11). This omission can lead to key pieces of evidence being excluded from a review, resulting in an incomplete picture of the body of evidence on an important policy question. In some instances, research questions that are not amenable to experimentation are missed entirely by the systematic review literature, despite the existence of relevant QE evidence. For example, a recent overview of systematic reviews found that no systematic review existed on the impact of decentralized governance on health outcomes (12), a policy that is difficult to test experimentally but for which several QE studies exist (13–16).
When relevant QE studies on a review topic exist alongside studies with other designs, authors of systematic reviews face important decisions on how to handle the different forms of evidence. A recent special issue of the Journal of Clinical Epidemiology (JCE) describes the main considerations (3–7, 17–24). First, authors must decide which (if any) QE study designs to include in their review. Whereas the Cochrane Collaboration's Effective Practice and Organisation of Care (EPOC) Group recommends including two QE designs – interrupted time series (ITS) analyses and controlled before-and-after (CBA) studies – the authors of the JCE series identify an expanded list that also includes instrumental variable analyses, regression discontinuity designs and fixed-effects analyses of panel data (6, 19). This expanded list is consistent with the recommendations of the Campbell Collaboration's International Development Coordinating Group (25). Second, authors must establish a robust search strategy for identifying relevant QE studies. This task is complicated by the fact that indexing based on study design is imprecise in most evidence databases, and using study design search criteria is usually not recommended (22). Third, authors must assess the quality of identified QE studies to determine potential risk of bias. Although relevant tools for this task exist, more work is needed to develop standard guidelines for assessing risk of bias in QE studies (20, 26, 27). In particular, the ROBINS-I (Risk Of Bias In Non-randomized Studies – of Interventions) tool has been developed to assess risk of bias in nonrandomized studies, but it does not yet include guidelines for the full breadth of QE designs (26). Finally, in cases where meta-analysis is being considered, authors must decide whether and how to include effect estimates from QE studies. The authors of the JCE series consider the challenges associated with including QE evidence in meta-analyses and argue that doing so is usually warranted, but they also caution that a careful modelling approach that accounts for potential risk of bias is necessary (7, 22).
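To make the pooling step concrete, the sketch below shows one common model for combining effect estimates: inverse-variance random-effects meta-analysis with a DerSimonian–Laird estimate of between-study variance. It is an illustration only, not a method prescribed by the JCE series; the effect estimates and variances are invented, and a real analysis of mixed randomized and QE evidence would also model potential bias in the QE estimates (for example, by inflating their variances or adding explicit bias terms).

```python
# Illustrative only: inverse-variance random-effects pooling (DerSimonian-Laird)
# of effect estimates that might come from a mix of randomized and QE studies.
# All numbers below are invented for the example.
import numpy as np

yi = np.array([0.12, 0.30, 0.25, 0.05])      # hypothetical log risk ratios
vi = np.array([0.010, 0.030, 0.025, 0.015])  # their sampling variances

# Fixed-effect weights and the heterogeneity statistic Q
w = 1.0 / vi
y_fe = np.sum(w * yi) / np.sum(w)
Q = np.sum(w * (yi - y_fe) ** 2)

# DerSimonian-Laird estimate of between-study variance tau^2
k = len(yi)
tau2 = max(0.0, (Q - (k - 1)) / (np.sum(w) - np.sum(w ** 2) / np.sum(w)))

# Random-effects pooled estimate and its standard error
w_re = 1.0 / (vi + tau2)
y_re = np.sum(w_re * yi) / np.sum(w_re)
se_re = np.sqrt(1.0 / np.sum(w_re))
print(f"pooled effect = {y_re:.3f} (SE {se_re:.3f}), tau^2 = {tau2:.3f}")
```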
- QUASI-EXPERIMENTAL STUDIES IN META-SYNTHESIS REVIEWS
As described in Chapter 3, concerning HPSR synthesis methods, policy-makers must triangulate evidence from multiple sources when making decisions, including primary research studies and published systematic reviews. The term "meta-synthesis review" is used here to refer to this type of higher-order synthesis, which is focused on a policy area rather than a discrete policy intervention. Synthesis hierarchies have been described elsewhere, but this type of higher-order synthesis has not previously been distinguished and given a name (7–9). Umbrella reviews (28) and overviews of systematic reviews (29) are forms of meta-synthesis that consider evidence from multiple systematic reviews across a policy area. In its more general form, meta-synthesis allows additionally for consideration of primary research studies and other types of evidence that have not previously been included in systematic reviews. The "Evidence & Gap Map" approach developed and used by the International Initiative for Impact Evaluation (3ie) is an example of this more general form of meta-synthesis (30). The term "meta-synthesis" as used here differs from an alternative usage that has gained some acceptance, whereby the term refers to a method for synthesis of qualitative primary studies (31). The meta-synthesis review process may yield a formal written product similar to a traditional systematic review or a policy brief, or it may guide policy deliberations more informally.
Narrative synthesis methods may be the most appropriate means of triangulating evidence in meta-synthesis reviews, given the variety of information types to be considered, although more work is needed to strengthen guidance on the application of those methods (32). As with traditional systematic reviews, evidence considered for inclusion in meta-synthesis reviews should be assessed for quality, and studies deemed to be of poor quality should be screened out to minimize potential risk of bias. Evidence from QE studies should be considered as part of the meta-synthesis review process, either through inclusion of QE studies in systematic reviews or through separate analysis. Quasi-experimental methods often allow for investigation of research questions that do not fit the inclusion criteria of a traditional systematic review but nonetheless contribute importantly to the body of evidence on the policy area in question. In particular, QE studies can complement experimental studies by clarifying mechanisms in the causal pathway that determine a policy's effectiveness, that is, to "interrogate the causal chain" (33); by contrast, a review of evidence on a mechanism without the context of broader policy considerations will be of limited value.
Research on physician-induced demand provides a useful example on this point (34). In the past few decades, several researchers have used an instrumental variable approach – a QE method that identifies and exploits exogenous variation in an exposure to estimate its causal impact on an outcome, through a third (instrumental) variable that is correlated with the exposure but uncorrelated with the outcome except through its effect on the exposure (35) – to estimate the causal impact of physician supply on health service volumes (36, 37) in several settings in the United States where most physicians are paid through a fee-for-service system. As part of this methodology, certain population characteristics are used as instruments to identify an exogenous form of the physician–population ratio, which is then used in a set of structural equation models to estimate the causal relationship between physician supply and service volumes. There is no obvious policy-relevant interpretation of the point estimates produced by these studies, which limits their value in a traditional quantitative systematic review. However, these studies provide strong evidence that physicians respond to financial incentives by influencing patient behaviour (38). When considered at the level of meta-synthesis within the broader policy context of provider payment systems, these studies clarify our understanding of a key mechanism in the causal pathway from payment incentives to demand for health services and health spending. By shedding light on an important policy mechanism in this manner, QE studies complement experiments and studies based on other designs. Exploiting the full potential of this complementarity should be a central aim of the meta-synthesis review process.
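For readers unfamiliar with the mechanics, the sketch below shows the two-stage least squares (2SLS) logic behind instrumental variable estimation on simulated data. It is illustrative only and does not reproduce the physician-supply studies cited above; in practice a dedicated IV routine (for example, IV2SLS in the Python linearmodels package) should be used so that the second-stage standard errors account for the estimated first stage.

```python
# Illustrative 2SLS on simulated data: a confounder u biases naive OLS, while
# the instrument z recovers the true causal effect. Standard errors from this
# manual two-stage procedure are not corrected; use a dedicated IV routine
# in real work.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 5000
z = rng.normal(size=n)                  # instrument: exogenous shifter of the exposure
u = rng.normal(size=n)                  # unobserved confounder
x = 0.8 * z + u + rng.normal(size=n)    # endogenous exposure (e.g., physician supply)
y = 0.5 * x + u + rng.normal(size=n)    # outcome (e.g., service volume); true effect = 0.5

# Naive OLS is biased upward because u drives both x and y
ols = sm.OLS(y, sm.add_constant(x)).fit()

# Stage 1: regress exposure on instrument; Stage 2: regress outcome on fitted exposure
x_hat = sm.OLS(x, sm.add_constant(z)).fit().fittedvalues
iv = sm.OLS(y, sm.add_constant(x_hat)).fit()

print(f"OLS estimate:  {ols.params[1]:.3f} (biased)")
print(f"2SLS estimate: {iv.params[1]:.3f} (close to the true 0.5)")
```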
BOX 1 QUASI-EXPERIMENTAL STUDIES IN SYNTHESIS OF EVIDENCE ON FINANCIAL ARRANGEMENTS FOR HEALTH SYSTEMS
In a recently published overview of systematic reviews – a form of meta-synthesis review – Wiysonge and colleagues looked at evidence across the policy area of health system financing in low-income countries (39). Quasi-experimental studies played an important role in the body of evidence that they unearthed.
Thirteen of the 15 systematic reviews included in the overview mentioned at least one QE design in their study design inclusion criteria. Eleven of the included reviews were conducted with support from the Cochrane Collaboration's EPOC group, which recommends including ITS analyses and CBA studies. Other QE designs, including instrumental variable analysis, regression discontinuity studies and fixed-effects analyses of panel data, were explicitly mentioned for consideration in only one review (40). As a result, relevant studies that used those designs may have been excluded. For example, a review by Lagarde & Palmer (41) looking at evidence on the impact of user fees did not include a relevant study by Fafchamps & Minten (42) that used a fixed-effects approach to analysing panel data.
Of the primary studies that made it into the included systematic reviews, many used QE methods. Across all 15 systematic reviews, 276 primary studies were considered; 23 (8%) were CBA studies, 51 (18%) were ITS analyses, and 115 (42%) used an experimental design. The review by Lagarde & Palmer (41) included 17 studies, 15 of which used a QE design (either ITS or CBA). A non-EPOC review by Acharya and colleagues (40) examining impacts of insurance schemes included 19 studies, 10 of which used QE designs (four instrumental variable analyses, three CBA studies, two regression discontinuity studies and one fixed-effects analysis of panel data). Only one study included by Acharya and colleagues (40) used an experimental design, whereas the eight remaining studies used propensity score matching, a method that is sometimes categorized as QE, although the appropriateness of this categorization has been debated (2).
None of the included systematic reviews presented a meta-analysis: some authors indicated in their initial protocols an intention to do so but found in the end that the included primary studies did not warrant it (43) or that the diversity of study designs did not allow it (40, 41). There is a need for strengthened guidance on whether and how to pool effect estimates from QE studies and those generated using other study designs (7).
By taking the approach of an overview of systematic reviews, Wiysonge and colleagues (39) excluded from the outset any evidence that had not previously been included in a systematic review. Although this may have served to focus their meta-synthesis, it may also have caused them to miss relevant QE (and other) studies. In particular, their approach is unlikely to catch QE studies that shed light on relevant causal mechanisms but do not produce effect estimates on the primary relationship of interest. For example, underlying mechanisms related to price and income elasticity of demand for health services are fundamental to understanding the impact of user fees, but evidence on these mechanisms, much of which comes from studies that employ QE methods (44), is unlikely to make it into a systematic review of the type considered, leaving the reader with a potentially incomplete picture of the relevant body of evidence.
- META-SYNTHESIS PLATFORMS
The breadth of evidence relevant to a meta-synthesis review is often too large and diverse to fit easily in a traditional written review product. An evolving and interactive meta-synthesis platform may be a more effective means of capturing such a body of evidence. This concept is similar to the idea of a "living systematic review" as described by the Cochrane Collaboration (45). It is unnecessary to restrict the evidence included in such platforms according to methodology. Systematic reviews, primary research studies (including those not included in any systematic review) and, whenever possible, raw data (from primary research studies, as well as data that are otherwise relevant but not yet analysed, including individual patient data when appropriate) all contain potentially valuable information and should be included.
One example of a meta-synthesis platform is the recently developed Access Observatory, which organizes and makes publicly available data and evidence on industry-led and other access-to-medicines programmes in low- and middle-income countries (LMICs) (46). Programme and policy information included in the Access Observatory is structured according to a taxonomy of commonly used access strategies and a recommended set of measurement indicators. This structure is designed to facilitate evidence synthesis within and across strategies. The Healthy Birth, Growth, and Development—Knowledge Integration platform developed by the Bill & Melinda Gates Foundation similarly fosters meta-synthesis of evidence across strategies that aim to address child growth and development in LMICs; this platform includes an innovative approach to sharing raw data with the public (47).
- PRODUCTION OF NEW QUASI-EXPERIMENTAL STUDIES ON PRIORITY QUESTIONS
Operators of meta-synthesis platforms must rely in large part on evidence produced by independent researchers, but they also have the opportunity to take an active role in the production of primary research, particularly by identifying priority research questions (based on existing gaps in knowledge) and by facilitating the sharing of raw data. One of the primary barriers to the production of new primary research is the cost of data collection. By warehousing raw data sets and encouraging secondary analysis of these data, platform operators can support the production of important new studies on priority policy questions. In many instances, the organizations that collect and own relevant administrative data are those that would benefit most from the knowledge generated by new primary research; these organizations should therefore have incentives to share data with a reputable meta-synthesis platform. Quasi-experimental methods are well suited to rigorous analysis of retrospective data and should be prioritized, with the aim of producing new causal evidence of policy impact.
This Methods Guide provides a road map for future efforts to strengthen evidence synthesis for health policy and systems. Quasi-experimental methods should play a central role in those efforts. Studies using such methods can in some cases serve as a substitute for experimental studies and should be considered for inclusion in quantitative systematic reviews. Review authors must make important decisions when considering QE studies, such as which study designs to include, how to assess potential risk of bias, and whether and how to include QE effect estimates in meta-analyses. More work is needed to develop standard guidelines to assist authors with these decisions. Studies using QE methods can also serve as a complement to experiments and other study designs and can deepen our understanding of important policy areas, even when quantitative synthesis is not feasible.
Quasi-Experimental Research Design – Types, Methods
Quasi-Experimental Design
Quasi-experimental design is a research method that seeks to evaluate the causal relationships between variables, but without the full control over the independent variable(s) that is available in a true experimental design.
In a quasi-experimental design, the researcher works with existing groups of participants that are not randomly assigned to experimental and control conditions. Instead, the groups are selected based on pre-existing characteristics or conditions, such as age, gender, or the presence of a certain medical condition.
Types of Quasi-Experimental Design
There are several types of quasi-experimental designs that researchers use to study causal relationships between variables. Here are some of the most common types:
Non-Equivalent Control Group Design
This design involves selecting two groups of participants that are as similar as possible except for their exposure to the independent variable(s) being tested. One group receives the treatment or intervention being studied, while the other group does not. The two groups are then compared to see if there are any significant differences in the outcomes.
Interrupted Time-Series Design
This design involves collecting data on the dependent variable(s) over a period of time, both before and after an intervention or event. The researcher can then determine whether there was a significant change in the dependent variable(s) following the intervention or event.
Pretest-Posttest Design
This design involves measuring the dependent variable(s) before and after an intervention or event, but without a control group. This design can be useful for determining whether the intervention or event had an effect, but it does not allow for control over other factors that may have influenced the outcomes.
Regression Discontinuity Design
This design involves selecting participants based on a specific cutoff point on a continuous variable, such as a test score. Participants on either side of the cutoff point are then compared to determine whether the intervention or event had an effect.
Natural Experiments
This design involves studying the effects of an intervention or event that occurs naturally, without the researcher’s intervention. For example, a researcher might study the effects of a new law or policy that affects certain groups of people. This design is useful when true experiments are not feasible or ethical.
Data Analysis Methods
Here are some data analysis methods that are commonly used in quasi-experimental designs:
Descriptive Statistics
This method involves summarizing the data collected during a study using measures such as mean, median, mode, range, and standard deviation. Descriptive statistics can help researchers identify trends or patterns in the data, and can also be useful for identifying outliers or anomalies.
Inferential Statistics
This method involves using statistical tests to determine whether the results of a study are statistically significant. Inferential statistics can help researchers make generalizations about a population based on the sample data collected during the study. Common statistical tests used in quasi-experimental designs include t-tests, ANOVA, and regression analysis.
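As a minimal illustration of the kind of test described above, the sketch below compares post-test scores between a non-randomized intervention group and a comparison group using Welch's t-test; all scores are simulated for the example.

```python
# Minimal sketch: comparing outcomes between a non-randomized intervention
# group and a comparison group with Welch's t-test. Scores are simulated.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
treated = rng.normal(loc=75, scale=10, size=40)   # hypothetical post-test scores
control = rng.normal(loc=70, scale=10, size=40)

t_stat, p_value = stats.ttest_ind(treated, control, equal_var=False)  # Welch's test
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
```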
Propensity Score Matching
This method is used to reduce bias in quasi-experimental designs by matching participants in the intervention group with participants in the control group who have similar characteristics. This can help to reduce the impact of confounding variables that may affect the study’s results.
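A minimal sketch of this method follows, using simulated data: a logistic regression estimates each participant's propensity score, and each treated participant is then paired with the control participant whose score is nearest. Real applications would also check covariate balance after matching.

```python
# Illustrative propensity score matching: estimate each unit's probability of
# treatment from observed covariates, then pair treated units with the nearest
# control on that score. All data are simulated.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import NearestNeighbors

rng = np.random.default_rng(2)
n = 1000
X = rng.normal(size=(n, 3))                        # observed covariates
p_treat = 1 / (1 + np.exp(-(X[:, 0] + 0.5 * X[:, 1])))
treat = rng.binomial(1, p_treat)
y = 2.0 * treat + X[:, 0] + rng.normal(size=n)     # true treatment effect = 2.0

# Step 1: propensity scores from a logistic regression
ps = LogisticRegression().fit(X, treat).predict_proba(X)[:, 1]

# Step 2: nearest-neighbour matching of treated units to controls on the score
controls = np.where(treat == 0)[0]
treated = np.where(treat == 1)[0]
nn = NearestNeighbors(n_neighbors=1).fit(ps[controls].reshape(-1, 1))
_, idx = nn.kneighbors(ps[treated].reshape(-1, 1))
matched_controls = controls[idx.ravel()]

# Step 3: average treated-minus-matched-control difference (effect on the treated)
att = np.mean(y[treated] - y[matched_controls])
print(f"Estimated effect on the treated: {att:.2f} (true effect 2.0)")
```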
Difference-in-differences Analysis
This method compares the change in outcomes over time between a group exposed to an intervention and a comparison group that is not. Under the assumption that the two groups would otherwise have followed parallel trends, the difference between those two changes estimates the intervention's impact.
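In a regression framework, the difference-in-differences estimate is simply the coefficient on the interaction between group and time period. The sketch below illustrates this on simulated data; a real analysis would also probe the parallel-trends assumption.

```python
# Minimal difference-in-differences sketch: the effect is the coefficient on
# the "group x post" interaction in an OLS regression. Data are simulated.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(3)
n = 2000
df = pd.DataFrame({
    "group": rng.binomial(1, 0.5, n),   # 1 = exposed to the intervention
    "post": rng.binomial(1, 0.5, n),    # 1 = observed after the intervention
})
# Outcome: group and time effects plus a true intervention effect of 1.5
df["y"] = (0.5 * df["group"] + 0.8 * df["post"]
           + 1.5 * df["group"] * df["post"] + rng.normal(size=n))

m = smf.ols("y ~ group * post", data=df).fit()
print(m.params["group:post"])   # difference-in-differences estimate, near 1.5
```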
Interrupted Time Series Analysis
This method is used to examine the impact of an intervention or treatment over time by comparing data collected before and after the intervention or treatment. This method can help researchers determine whether an intervention had a significant impact on the target population.
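The standard analysis here is segmented regression, which allows both the level and the slope of the series to change at the interruption. A minimal sketch on simulated monthly data follows; note that a real interrupted time series analysis should also handle autocorrelation (for example, with Newey–West standard errors or an ARIMA model).

```python
# Illustrative interrupted time series via segmented regression: level and
# slope are allowed to change at the interruption point. Data are simulated
# monthly counts; autocorrelation is ignored here for brevity.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(4)
t = np.arange(48)                         # 48 months of observations
post = (t >= 24).astype(float)            # intervention begins at month 24
time_since = np.where(post == 1, t - 24, 0)

# True model: baseline trend, a level drop of 5 at the interruption, no slope change
y = 20 + 0.3 * t - 5.0 * post + rng.normal(scale=1.0, size=48)

X = sm.add_constant(np.column_stack([t, post, time_since]))
fit = sm.OLS(y, X).fit()
# Coefficients: intercept, pre-trend, level change at interruption, slope change
print(fit.params)   # the level-change estimate should be near -5
```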
Regression Discontinuity Analysis
This method is used to compare the outcomes of participants who fall on either side of a predetermined cutoff point. This method can help researchers determine whether an intervention had a significant impact on the target population.
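A common implementation fits separate linear trends on each side of the cutoff within a bandwidth and reads the treatment effect off the jump at the threshold. The sketch below illustrates this sharp-discontinuity case on simulated data; the bandwidth of 0.25 is arbitrary here, whereas real applications select it systematically.

```python
# Illustrative sharp regression discontinuity: fit linear trends on each side
# of the cutoff within a bandwidth and estimate the effect as the jump.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(5)
n = 4000
score = rng.uniform(-1, 1, n)             # running variable, cutoff at 0
treat = (score >= 0).astype(float)        # treatment assigned at the cutoff
y = 1.0 * score + 2.0 * treat + rng.normal(scale=0.5, size=n)  # true jump = 2.0

h = 0.25                                  # bandwidth around the cutoff (arbitrary)
mask = np.abs(score) <= h
X = sm.add_constant(np.column_stack([
    score[mask],
    treat[mask],
    score[mask] * treat[mask],            # allow the slope to differ across the cutoff
]))
fit = sm.OLS(y[mask], X).fit()
print(fit.params[2])   # discontinuity (treatment effect) estimate, near 2.0
```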
Steps in Quasi-Experimental Design
Here are the general steps involved in conducting a quasi-experimental design:
- Identify the research question: Determine the research question and the variables that will be investigated.
- Choose the design: Choose the appropriate quasi-experimental design to address the research question. Examples include the pretest-posttest design, non-equivalent control group design, regression discontinuity design, and interrupted time series design.
- Select the participants: Select the participants who will be included in the study. Participants should be selected based on specific criteria relevant to the research question.
- Measure the variables: Measure the variables that are relevant to the research question. This may involve using surveys, questionnaires, tests, or other measures.
- Implement the intervention or treatment: Administer the intervention or treatment to the participants in the intervention group. This may involve training, education, counseling, or other interventions.
- Collect data: Collect data on the dependent variable(s) before and after the intervention. Data collection may also include collecting data on other variables that may impact the dependent variable(s).
- Analyze the data: Analyze the data collected to determine whether the intervention had a significant impact on the dependent variable(s); a minimal worked sketch follows this list.
- Draw conclusions: Draw conclusions about the relationship between the independent and dependent variables. If the results suggest a causal relationship, then appropriate recommendations may be made based on the findings.
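As a compact illustration of the analysis step above, the sketch below analyzes a simulated non-equivalent control group pretest-posttest study by regressing the post-test score on treatment status while adjusting for the pre-test score (an ANCOVA-style analysis). All values are invented for the example.

```python
# A compact worked version of steps 6-7 for a non-equivalent control group
# pretest-posttest study: regress the post-test on treatment while adjusting
# for the pre-test score. All data are simulated.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(6)
n = 300
df = pd.DataFrame({"treated": np.repeat([1, 0], n // 2)})
df["pretest"] = rng.normal(50 + 3 * df["treated"], 10)   # groups differ at baseline
df["posttest"] = df["pretest"] + 4.0 * df["treated"] + rng.normal(0, 5, n)  # true effect = 4

m = smf.ols("posttest ~ treated + pretest", data=df).fit()
print(m.params["treated"])   # adjusted treatment effect, near 4.0
```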
Quasi-Experimental Design Examples
Here are some examples of real-world quasi-experimental designs:
- Evaluating the impact of a new teaching method: In this study, a group of students are taught using a new teaching method, while another group is taught using the traditional method. The test scores of both groups are compared before and after the intervention to determine whether the new teaching method had a significant impact on student performance.
- Assessing the effectiveness of a public health campaign: In this study, a public health campaign is launched to promote healthy eating habits among a targeted population. The behavior of the population is compared before and after the campaign to determine whether the intervention had a significant impact on the target behavior.
- Examining the impact of a new medication: In this study, a group of patients is given a new medication, while another group is given a placebo. The outcomes of both groups are compared to determine whether the new medication had a significant impact on the targeted health condition.
- Evaluating the effectiveness of a job training program: In this study, a group of unemployed individuals is enrolled in a job training program, while another group is not enrolled in any program. The employment rates of both groups are compared before and after the intervention to determine whether the training program had a significant impact on the employment rates of the participants.
- Assessing the impact of a new policy: In this study, a new policy is implemented in a particular area, while another area does not have the new policy. The outcomes of both areas are compared before and after the intervention to determine whether the new policy had a significant impact on the targeted behavior or outcome.
Applications of Quasi-Experimental Design
Here are some applications of quasi-experimental design:
- Educational research: Quasi-experimental designs are used to evaluate the effectiveness of educational interventions, such as new teaching methods, technology-based learning, or educational policies.
- Health research: Quasi-experimental designs are used to evaluate the effectiveness of health interventions, such as new medications, public health campaigns, or health policies.
- Social science research: Quasi-experimental designs are used to investigate the impact of social interventions, such as job training programs, welfare policies, or criminal justice programs.
- Business research: Quasi-experimental designs are used to evaluate the impact of business interventions, such as marketing campaigns, new products, or pricing strategies.
- Environmental research: Quasi-experimental designs are used to evaluate the impact of environmental interventions, such as conservation programs, pollution control policies, or renewable energy initiatives.
When to use Quasi-Experimental Design
Here are some situations where quasi-experimental designs may be appropriate:
- When the research question involves investigating the effectiveness of an intervention, policy, or program: In situations where it is not feasible or ethical to randomly assign participants to intervention and control groups, quasi-experimental designs can be used to evaluate the impact of the intervention on the targeted outcome.
- When the sample size is small: In situations where the sample size is small, it may be difficult to randomly assign participants to intervention and control groups. Quasi-experimental designs can be used to investigate the impact of an intervention without requiring a large sample size.
- When the research question involves investigating a naturally occurring event: In some situations, researchers may be interested in investigating the impact of a naturally occurring event, such as a natural disaster or a major policy change. Quasi-experimental designs can be used to evaluate the impact of the event on the targeted outcome.
- When the research question involves investigating a long-term intervention: In situations where the intervention or program is long-term, it may be difficult to randomly assign participants to intervention and control groups for the entire duration of the intervention. Quasi-experimental designs can be used to evaluate the impact of the intervention over time.
- When the research question involves investigating the impact of a variable that cannot be manipulated: In some situations, it may not be possible or ethical to manipulate a variable of interest. Quasi-experimental designs can be used to investigate the relationship between the variable and the targeted outcome.
Purpose of Quasi-Experimental Design
The purpose of quasi-experimental design is to investigate the causal relationship between two or more variables when it is not feasible or ethical to conduct a randomized controlled trial (RCT). Quasi-experimental designs attempt to emulate the RCT by constructing intervention and comparison groups that are as similar as possible.
The key purpose of quasi-experimental design is to evaluate the impact of an intervention, policy, or program on a targeted outcome while controlling for potential confounding factors that may affect the outcome. Quasi-experimental designs aim to answer questions such as: Did the intervention cause the change in the outcome? Would the outcome have changed without the intervention? And was the intervention effective in achieving its intended goals?
Quasi-experimental designs are useful in situations where randomized controlled trials are not feasible or ethical. They provide researchers with an alternative method to evaluate the effectiveness of interventions, policies, and programs in real-life settings. Quasi-experimental designs can also help inform policy and practice by providing valuable insights into the causal relationships between variables.
Overall, the purpose of quasi-experimental design is to provide a rigorous method for evaluating the impact of interventions, policies, and programs while controlling for potential confounding factors that may affect the outcome.
Advantages of Quasi-Experimental Design
Quasi-experimental designs have several advantages over other research designs, such as:
- Greater external validity: Quasi-experimental designs are more likely to have greater external validity than laboratory experiments because they are conducted in naturalistic settings. This means that the results are more likely to generalize to real-world situations.
- Ethical considerations: Quasi-experimental designs often involve naturally occurring events, such as natural disasters or policy changes. This means that researchers do not need to manipulate variables, which can raise ethical concerns.
- More practical: Quasi-experimental designs are often more practical than experimental designs because they are less expensive and easier to conduct. They can also be used to evaluate programs or policies that have already been implemented, which can save time and resources.
- No random assignment: Quasi-experimental designs do not require random assignment, which can be difficult or impossible in some cases, such as when studying the effects of a natural disaster. This means that researchers can still make causal inferences, although they must use statistical techniques to control for potential confounding variables.
- Greater generalizability: Quasi-experimental designs are often more generalizable than experimental designs because they include a wider range of participants and conditions. This can make the results more applicable to different populations and settings.
Limitations of Quasi-Experimental Design
There are several limitations associated with quasi-experimental designs, which include:
- Lack of Randomization: Quasi-experimental designs do not involve randomization of participants into groups, which means that the groups being studied may differ in important ways that could affect the outcome of the study. This can lead to problems with internal validity and limit the ability to make causal inferences.
- Selection Bias: Quasi-experimental designs may suffer from selection bias because participants are not randomly assigned to groups. Participants may self-select into groups or be assigned based on pre-existing characteristics, which may introduce bias into the study.
- History and Maturation: Quasi-experimental designs are susceptible to history and maturation effects, where the passage of time or other events may influence the outcome of the study.
- Lack of Control: Quasi-experimental designs may lack control over extraneous variables that could influence the outcome of the study. This can limit the ability to draw causal inferences from the study.
- Limited Generalizability: Quasi-experimental designs may have limited generalizability because the results may only apply to the specific population and context being studied.
Quasi-Experiment: Understand What It Is, Types & Examples
Quasi-experimental research designs have gained significant recognition in the scientific community due to their unique ability to study cause-and-effect relationships in real-world settings. Unlike true experiments, quasi-experiments lack random assignment of participants to groups, making them more practical and ethical in certain situations. In this article, we will delve into the concept, applications, and advantages of quasi-experiments, shedding light on their relevance and significance in the scientific realm.
What Is A Quasi-Experiment Research Design?
Quasi-experimental research designs are methodologies that resemble true experiments but lack randomized assignment of participants to groups. In a true experiment, researchers randomly assign participants to either an experimental group or a control group, allowing for a comparison of the effects of an independent variable on the dependent variable. However, in quasi-experiments, this random assignment is often not possible or ethically permissible, leading to the adoption of alternative strategies.
Types Of Quasi-Experimental Designs
There are several types of quasi-experiment designs to study causal relationships in specific contexts. Some common types include:
Non-Equivalent Groups Design
This design involves selecting pre-existing groups that differ in some key characteristics and comparing their responses to the independent variable. Although the researcher does not randomly assign the groups, they can still examine the effects of the independent variable.
Regression Discontinuity
This design utilizes a cutoff point or threshold to determine which participants receive the treatment or intervention. It assumes that participants on either side of the cutoff are similar in all other aspects, except for their exposure to the independent variable.
Interrupted Time Series Design
This design involves measuring the dependent variable multiple times before and after the introduction of an intervention or treatment. By comparing the trends in the dependent variable, researchers can infer the impact of the intervention.
Natural Experiments
Natural experiments take advantage of naturally occurring events or circumstances that mimic the random assignment found in true experiments. Researchers identify situations in which participants are exposed to different conditions without any manipulation on the researchers' part.
Application of the Quasi-Experiment Design
Quasi-experimental research designs find applications in various fields, ranging from education to public health and beyond. One significant advantage of quasi-experiments is their feasibility in real-world settings where randomization is not always possible or ethical.
Ethical Reasons
Ethical concerns often arise in research when randomizing participants to different groups could potentially deny individuals access to beneficial treatments or interventions. In such cases, quasi-experimental designs provide an ethical alternative, allowing researchers to study the impact of interventions without depriving anyone of potential benefits.
Examples Of Quasi-Experimental Design
Let’s explore a few examples of quasi-experimental designs to understand their application in different contexts.
Design Of Non-Equivalent Groups
Determining the effectiveness of math apps in supplementing math classes.
Imagine a study aiming to determine the effectiveness of math apps in supplementing traditional math classes in a school. Randomly assigning students to different groups might be impractical or disrupt the existing classroom structure. Instead, researchers can select two comparable classes, one receiving the math app intervention and the other continuing with traditional teaching methods. By comparing the performance of the two groups, researchers can draw conclusions about the app’s effectiveness.
To conduct a quasi-experiment study like the one mentioned above, researchers can utilize QuestionPro , an advanced research platform that offers comprehensive survey and data analysis tools. With QuestionPro, researchers can design surveys to collect data, analyze results, and gain valuable insights for their quasi-experimental research.
How QuestionPro Helps In Quasi-Experimental Research?
QuestionPro’s powerful features, such as random assignment of participants, survey branching, and data visualization, enable researchers to efficiently conduct and analyze quasi-experimental studies. The platform provides a user-friendly interface and robust reporting capabilities, empowering researchers to gather data, explore relationships, and draw meaningful conclusions.
In some cases, researchers can leverage natural experiments to examine causal relationships.
Determining The Effectiveness Of Teaching Modern Leadership Techniques In Start-Up Businesses
Consider a study evaluating the effectiveness of teaching modern leadership techniques in start-up businesses. Instead of artificially assigning businesses to different groups, researchers can observe those that naturally adopt modern leadership techniques and compare their outcomes to those of businesses that have not implemented such practices.
Advantages and Disadvantages Of The Quasi-Experimental Design
Quasi-experimental designs offer several advantages over true experiments, making them valuable tools in research:
- Scope of the research: Quasi-experiments allow researchers to study cause-and-effect relationships in real-world settings, providing valuable insights into complex phenomena that may be challenging to replicate in a controlled laboratory environment.
- Regression Discontinuity: Researchers can utilize regression discontinuity to evaluate the effects of interventions or treatments when random assignment is not feasible. This design leverages existing data and naturally occurring thresholds to draw causal inferences.
Disadvantage
Lack of random assignment : Quasi-experimental designs lack the random assignment of participants, which introduces the possibility of confounding variables affecting the results. Researchers must carefully consider potential alternative explanations for observed effects.
What Are The Different Quasi-Experimental Study Designs?
Quasi-experimental designs encompass various approaches, including nonequivalent group designs, interrupted time series designs, and natural experiments. Each design offers unique advantages and limitations, providing researchers with versatile tools to explore causal relationships in different contexts.
Example Of The Natural Experiment Approach
Researchers interested in studying the impact of a public health campaign aimed at reducing smoking rates may take advantage of a natural experiment. By comparing smoking rates in a region that has implemented the campaign to a similar region that has not, researchers can examine the effectiveness of the intervention.
Differences Between Quasi-Experiments And True Experiments
Quasi-experiments and true experiments differ primarily in their ability to randomly assign participants to groups. While true experiments provide a higher level of control, quasi-experiments offer practical and ethical alternatives in situations where randomization is not feasible or desirable.
Example Comparing A True Experiment And Quasi-Experiment
In a true experiment investigating the effects of a new medication on a specific condition, researchers would randomly assign participants to either the experimental group, which receives the medication, or the control group, which receives a placebo. In a quasi-experiment, researchers might instead compare patients who voluntarily choose to take the medication to those who do not, examining the differences in outcomes between the two groups.
Quasi-Experiment: A Quick Wrap-Up
Quasi-experimental research designs play a vital role in scientific inquiry by allowing researchers to investigate cause-and-effect relationships in real-world settings. These designs offer practical and ethical alternatives to true experiments, making them valuable tools in various fields of study. With their versatility and applicability, quasi-experimental designs continue to contribute to our understanding of complex phenomena.
Quasi-Experimental Design: Definition, Types, Examples
Ever wondered how researchers uncover cause-and-effect relationships in the real world, where controlled experiments are often elusive? Quasi-experimental design holds the key. In this guide, we'll unravel the intricacies of quasi-experimental design, shedding light on its definition, purpose, and applications across various domains. Whether you're a student, a professional, or simply curious about the methods behind meaningful research, join us as we delve into the world of quasi-experimental design, making complex concepts sound simple and embarking on a journey of knowledge and discovery.
What is Quasi-Experimental Design?
Quasi-experimental design is a research methodology used to study the effects of independent variables on dependent variables when full experimental control is not possible or ethical. It falls between controlled experiments, where variables are tightly controlled, and purely observational studies, where researchers have little control over variables. Quasi-experimental design mimics some aspects of experimental research but lacks randomization.
The primary purpose of quasi-experimental design is to investigate cause-and-effect relationships between variables in real-world settings. Researchers use this approach to answer research questions, test hypotheses, and explore the impact of interventions or treatments when they cannot employ traditional experimental methods. Quasi-experimental studies aim to maximize internal validity and make meaningful inferences while acknowledging practical constraints and ethical considerations.
Quasi-Experimental vs. Experimental Design
It's essential to understand the distinctions between Quasi-Experimental and Experimental Design to appreciate the unique characteristics of each approach:
- Randomization: In Experimental Design, random assignment of participants to groups is a defining feature. Quasi-experimental design, on the other hand, lacks randomization due to practical constraints or ethical considerations.
- Control Groups: Experimental Design typically includes control groups that are subjected to no treatment or a placebo. Quasi-experimental design may have comparison groups but lacks the same level of control.
- Manipulation of IV: Experimental Design involves the intentional manipulation of the independent variable. Quasi-experimental design often deals with naturally occurring independent variables.
- Causal Inference: Experimental Design allows for stronger causal inferences due to randomization and control. Quasi-experimental design permits causal inferences but with some limitations.
When to Use Quasi-Experimental Design?
A quasi-experimental design is particularly valuable in several situations:
- Ethical Constraints: When manipulating the independent variable is ethically unacceptable or impractical, quasi-experimental design offers an alternative to studying naturally occurring variables.
- Real-World Settings: When researchers want to study phenomena in real-world contexts, quasi-experimental design allows them to do so without artificial laboratory settings.
- Limited Resources: In cases where resources are limited and conducting a controlled experiment is cost-prohibitive, quasi-experimental design can provide valuable insights.
- Policy and Program Evaluation: Quasi-experimental design is commonly used in evaluating the effectiveness of policies, interventions, or programs that cannot be randomly assigned to participants.
Importance of Quasi-Experimental Design in Research
Quasi-experimental design plays a vital role in research for several reasons:
- Addressing Real-World Complexities: It allows researchers to tackle complex real-world issues where controlled experiments are not feasible. This bridges the gap between controlled experiments and purely observational studies.
- Ethical Research: It provides a responsible approach when manipulating variables or assigning treatments could harm participants or violate ethical standards.
- Policy and Practice Implications: Quasi-experimental studies generate findings with direct applications in policy-making and practical solutions in fields such as education, healthcare, and social sciences.
- Enhanced External Validity: Findings from Quasi-Experimental research often have high external validity, making them more applicable to broader populations and contexts.
By embracing the challenges and opportunities of quasi-experimental design, researchers can contribute valuable insights to their respective fields and drive positive changes in the real world.
Key Concepts in Quasi-Experimental Design
In quasi-experimental design, it's essential to grasp the fundamental concepts underpinning this research methodology. Let's explore these key concepts in detail.
Independent Variable
The independent variable (IV) is the factor you aim to study or manipulate in your research. Unlike controlled experiments, where you can directly manipulate the IV, quasi-experimental design often deals with naturally occurring variables. For example, if you're investigating the impact of a new teaching method on student performance, the teaching method is your independent variable.
Dependent Variable
The dependent variable (DV) is the outcome or response you measure to assess the effects of changes in the independent variable. Continuing with the teaching method example, the dependent variable would be the students' academic performance, typically measured using test scores, grades, or other relevant metrics.
Control Groups vs. Comparison Groups
While quasi-experimental design lacks the luxury of randomly assigning participants to control and experimental groups, you can still establish comparison groups to make meaningful inferences. Control groups consist of individuals who do not receive the treatment, while comparison groups are exposed to different levels or variations of the treatment. These groups help researchers gauge the effect of the independent variable.
Pre-Test and Post-Test Measures
In quasi-experimental design, it's common practice to collect data both before and after implementing the independent variable. The initial data (pre-test) serves as a baseline, allowing you to measure changes over time (post-test). This approach helps assess the impact of the independent variable more accurately. For instance, if you're studying the effectiveness of a new drug, you'd measure patients' health before administering the drug (pre-test) and afterward (post-test).
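As a minimal illustration, the sketch below applies a paired t-test to simulated pre-test and post-test measurements from a single group; the drug example above would follow the same pattern, usually with additional adjustments.

```python
# Minimal sketch of a pre-test/post-test comparison within one group using a
# paired t-test. The measurements are simulated for illustration.
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
pre = rng.normal(60, 8, size=30)               # baseline measurements
post = pre - 5 + rng.normal(0, 4, size=30)     # scores drop ~5 points afterward

t_stat, p_value = stats.ttest_rel(pre, post)
print(f"mean change = {np.mean(post - pre):.1f}, t = {t_stat:.2f}, p = {p_value:.4f}")
```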
Threats to Internal Validity
Internal validity is crucial for establishing a cause-and-effect relationship between the independent and dependent variables. However, in a quasi-experimental design, several threats can compromise internal validity. These threats include:
- Selection Bias: When non-randomized groups differ systematically in ways that affect the study's outcome.
- History Effects: External events or changes over time that influence the results.
- Maturation Effects: Natural changes or developments that occur within participants during the study.
- Regression to the Mean: The tendency for extreme scores on a variable to move closer to the mean upon retesting.
- Attrition and Mortality: The loss of participants over time, potentially skewing the results.
- Testing Effects: The mere act of testing or assessing participants can impact their subsequent performance.
Understanding these threats is essential for designing and conducting Quasi-Experimental studies that yield valid and reliable results.
Randomization and Non-Randomization
In traditional experimental designs, randomization is a powerful tool for ensuring that groups are equivalent at the outset of a study. However, quasi-experimental design often involves non-randomization due to the nature of the research. This means that participants are not randomly assigned to treatment and control groups. Instead, researchers must employ various techniques to minimize biases and ensure that the groups are as similar as possible.
For example, if you are conducting a study on the effects of a new teaching method in a real classroom setting, you cannot randomly assign students to the treatment and control groups. Instead, you might use statistical methods to match students based on relevant characteristics such as prior academic performance or socioeconomic status. This matching process helps control for potential confounding variables, increasing the validity of your study.
Types of Quasi-Experimental Designs
In quasi-experimental design, researchers employ various approaches to investigate causal relationships and study the effects of independent variables when complete experimental control is challenging. Let's explore these types of quasi-experimental designs.
One-Group Posttest-Only Design
The One-Group Posttest-Only Design is one of the simplest forms of quasi-experimental design. In this design, a single group is exposed to the independent variable, and data is collected only after the intervention has taken place. Unlike controlled experiments, there is no comparison group. This design is useful when researchers cannot administer a pre-test or when it is logistically difficult to do so.
Example: Suppose you want to assess the effectiveness of a new time management seminar. You offer the seminar to a group of employees and measure their productivity levels immediately afterward to determine if there's an observable impact.
One-Group Pretest-Posttest Design
Similar to the One-Group Posttest-Only Design, this approach includes a pre-test measure in addition to the post-test. Researchers collect data both before and after the intervention. By comparing the pre-test and post-test results within the same group, you can gain a better understanding of the changes that occur due to the independent variable.
Example: If you're studying the impact of a stress management program on participants' stress levels, you would measure their stress levels before the program (pre-test) and after completing the program (post-test) to assess any changes.
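For a one-group pretest-posttest design like the stress management example above, a paired t-test is a common starting analysis. Below is a minimal sketch in Python; the scores are invented purely for illustration:

```python
# Paired t-test on hypothetical pre/post stress scores (lower = less stress).
from scipy import stats

pre = [28, 31, 26, 35, 30, 29, 33, 27]   # before the program
post = [24, 27, 25, 30, 26, 27, 29, 24]  # after the program

t_stat, p_value = stats.ttest_rel(pre, post)
print(f"paired t = {t_stat:.2f}, p = {p_value:.4f}")
```

Keep in mind that without a comparison group, even a statistically significant change cannot rule out threats such as history or maturation effects.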
Non-Equivalent Groups Design
The Non-Equivalent Groups Design involves multiple groups, but they are not randomly assigned. Instead, researchers must carefully match or control for relevant variables to minimize biases. This design is particularly useful when random assignment is not possible or ethical.
Example: Imagine you're examining the effectiveness of two teaching methods in two different schools. You can't randomly assign students to the schools, but you can carefully match them based on factors like age, prior academic performance, and socioeconomic status to create equivalent groups.
Time Series Design
Time Series Design is an approach where data is collected at multiple time points before and after the intervention. This design allows researchers to analyze trends and patterns over time, providing valuable insights into the sustained effects of the independent variable.
Example: If you're studying the impact of a new marketing campaign on product sales, you would collect sales data at regular intervals (e.g., monthly) before and after the campaign's launch to observe any long-term trends.
Regression Discontinuity Design
Regression Discontinuity Design is employed when participants are assigned to different groups based on a specific cutoff score or threshold. This design is often used in educational and policy research to assess the effects of interventions near a cutoff point.
Example: Suppose you're evaluating the impact of a scholarship program on students' academic performance. Students who score just above the GPA threshold receive the scholarship, while those just below it do not. Because students near the threshold are otherwise very similar, comparing their outcomes isolates the program's effect at the cutoff point.
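To make the regression discontinuity logic concrete, here is a minimal sketch in Python on synthetic data; the cutoff of 3.0 and all effect sizes are assumptions chosen for illustration:

```python
# Sharp regression discontinuity: estimate the outcome jump at a GPA cutoff.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
gpa = rng.uniform(2.0, 4.0, 500)             # running variable
treated = (gpa >= 3.0).astype(float)         # scholarship assigned at cutoff
outcome = 50 + 10 * (gpa - 3.0) + 5 * treated + rng.normal(0, 3, 500)

centered = gpa - 3.0                         # center the running variable
bw = np.abs(centered) <= 0.5                 # local window around the cutoff
X = sm.add_constant(np.column_stack([centered, treated, centered * treated]))
fit = sm.OLS(outcome[bw], X[bw]).fit()
print(fit.params[2])                         # estimated jump (~5)
```

The coefficient on the treatment indicator estimates the discontinuity, that is, the program's effect for students near the threshold.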
Propensity Score Matching
Propensity Score Matching is a technique used to create comparable treatment and control groups in non-randomized studies. Researchers calculate propensity scores based on participants' characteristics and match individuals in the treatment group to those in the control group with similar scores.
Example: If you're studying the effects of a new medication on patient outcomes, you would use propensity scores to match patients who received the medication with those who did not but have similar health profiles.
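A minimal sketch of 1:1 nearest-neighbor propensity score matching in Python follows; the covariates, coefficients, and the true effect of 2.0 are all invented for illustration:

```python
# Propensity score matching: model treatment from covariates, then match.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import NearestNeighbors

rng = np.random.default_rng(1)
n = 1000
age = rng.normal(50, 10, n)
severity = rng.normal(0, 1, n)
X = np.column_stack([age, severity])

# Non-random treatment: sicker, older patients are more likely to be treated.
p_treat = 1 / (1 + np.exp(-(0.03 * (age - 50) + 0.8 * severity)))
treated = rng.random(n) < p_treat
outcome = 2.0 * treated - 1.5 * severity + rng.normal(0, 1, n)

ps = LogisticRegression().fit(X, treated).predict_proba(X)[:, 1]

# Match each treated patient to the untreated patient with the closest score.
nn = NearestNeighbors(n_neighbors=1).fit(ps[~treated].reshape(-1, 1))
_, idx = nn.kneighbors(ps[treated].reshape(-1, 1))
effect = outcome[treated].mean() - outcome[~treated][idx.ravel()].mean()
print(f"matched treatment effect estimate: {effect:.2f}")  # close to 2.0
```

A naive comparison of the raw group means would be biased here because the treated group is sicker; matching on the propensity score largely removes that bias.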
Interrupted Time Series Design
The Interrupted Time Series Design likewise involves collecting data at multiple time points before and after an intervention. Its defining feature is a clearly identified interruption point, which allows researchers to estimate both the immediate change in the outcome's level and any subsequent change in its trend.
Example: Let's say you're analyzing the effects of a new traffic management system on traffic accidents. You collect accident data before and after the system's implementation to observe any abrupt changes right after its introduction.
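A standard way to analyze an interrupted time series is segmented regression, which estimates the pre-existing trend, the immediate level change at the interruption, and the change in trend afterward. Here is a minimal sketch with synthetic monthly accident counts; all numbers are invented:

```python
# Segmented regression for an interrupted time series.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(2)
months = np.arange(24)                        # 12 months before, 12 after
after = (months >= 12).astype(float)          # system launches at month 12
since = np.where(months >= 12, months - 12.0, 0.0)

accidents = 100 - 0.5 * months - 8 * after - 1.0 * since + rng.normal(0, 2, 24)

X = sm.add_constant(np.column_stack([months, after, since]))
fit = sm.OLS(accidents, X).fit()
print(fit.params)  # intercept, pre-trend, level drop (~-8), trend change (~-1)
```

In real applications, autocorrelation between adjacent time points should also be checked and, if present, modeled.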
Each of these quasi-experimental designs offers unique advantages and is best suited to specific research questions and scenarios. Choosing the right design is crucial for conducting robust and informative studies.
Advantages and Disadvantages of Quasi-Experimental Design
Quasi-experimental design offers a valuable research approach, but like any methodology, it comes with its own set of advantages and disadvantages. Let's explore these in detail.
Quasi-Experimental Design Advantages
Quasi-experimental design presents several advantages that make it a valuable tool in research:
- Real-World Applicability: Quasi-experimental studies often take place in real-world settings, making the findings more applicable to practical situations. Researchers can examine the effects of interventions or variables in the context where they naturally occur.
- Ethical Considerations: In situations where manipulating the independent variable in a controlled experiment would be unethical, quasi-experimental design provides an ethical alternative. For example, it would be unethical to assign participants to smoke for a study on the health effects of smoking, but you can study naturally occurring groups of smokers and non-smokers.
- Cost-Efficiency: Conducting Quasi-Experimental research is often more cost-effective than conducting controlled experiments. The absence of controlled environments and extensive manipulations can save both time and resources.
These advantages make quasi-experimental design an attractive choice for researchers facing practical or ethical constraints in their studies.
Quasi-Experimental Design Disadvantages
However, quasi-experimental design also comes with its share of challenges and disadvantages:
- Limited Control: Unlike controlled experiments, where researchers have full control over variables, quasi-experimental design lacks the same level of control. This limited control can result in confounding variables that make it difficult to establish causality.
- Threats to Internal Validity: Various threats to internal validity, such as selection bias, history effects, and maturation effects, can compromise the accuracy of causal inferences. Researchers must carefully address these threats to ensure the validity of their findings.
- Causality Inference Challenges: Establishing causality can be challenging in quasi-experimental design due to the absence of randomization and control. While you can make strong arguments for causality, it may not be as conclusive as in controlled experiments.
- Potential Confounding Variables: In a quasi-experimental design, it's often challenging to control for all possible confounding variables that may affect the dependent variable. This can lead to uncertainty in attributing changes solely to the independent variable.
Despite these disadvantages, quasi-experimental design remains a valuable research tool when used judiciously and with a keen awareness of its limitations. Researchers should carefully consider their research questions and the practical constraints they face before choosing this approach.
How to Conduct a Quasi-Experimental Study?
Conducting a Quasi-Experimental study requires careful planning and execution to ensure the validity of your research. Let's dive into the essential steps you need to follow when conducting such a study.
1. Define Research Questions and Objectives
The first step in any research endeavor is clearly defining your research questions and objectives. This involves identifying the independent variable (IV) and the dependent variable (DV) you want to study. What is the specific relationship you want to explore, and what do you aim to achieve with your research?
- Specify Your Research Questions : Start by formulating precise research questions that your study aims to answer. These questions should be clear, focused, and relevant to your field of study.
- Identify the Independent Variable: Define the variable you intend to manipulate or study in your research. Understand its significance in your study's context.
- Determine the Dependent Variable: Identify the outcome or response variable that will be affected by changes in the independent variable.
- Establish Hypotheses (If Applicable): If you have specific hypotheses about the relationship between the IV and DV, state them clearly. Hypotheses provide a framework for testing your research questions.
2. Select the Appropriate Quasi-Experimental Design
Choosing the right quasi-experimental design is crucial for achieving your research objectives. Select a design that aligns with your research questions and the available data. Consider factors such as the feasibility of implementing the design and the ethical considerations involved.
- Evaluate Your Research Goals: Assess your research questions and objectives to determine which type of quasi-experimental design is most suitable. Each design has its strengths and limitations, so choose one that aligns with your goals.
- Consider Ethical Constraints: Take into account any ethical concerns related to your research. Depending on your study's context, some designs may be more ethically sound than others.
- Assess Data Availability: Ensure you have access to the necessary data for your chosen design. Some designs may require extensive historical data, while others may rely on data collected during the study.
3. Identify and Recruit Participants
Selecting the right participants is a critical aspect of Quasi-Experimental research. The participants should represent the population you want to make inferences about, and you must address ethical considerations, including informed consent.
- Define Your Target Population: Determine the population that your study aims to generalize to. Your sample should be representative of this population.
- Recruitment Process: Develop a plan for recruiting participants. Depending on your design, you may need to reach out to specific groups or institutions.
- Informed Consent: Ensure that you obtain informed consent from participants. Clearly explain the nature of the study, potential risks, and their rights as participants.
4. Collect Data
Data collection is a crucial step in Quasi-Experimental research. You must adhere to a consistent and systematic process to gather relevant information before and after the intervention or treatment.
- Pre-Test Measures: If applicable, collect data before introducing the independent variable. Ensure that the pre-test measures are standardized and reliable.
- Post-Test Measures: After the intervention, collect post-test data using the same measures as the pre-test. This allows you to assess changes over time.
- Maintain Data Consistency: Ensure that data collection procedures are consistent across all participants and time points to minimize biases.
5. Analyze Data
Once you've collected your data, it's time to analyze it using appropriate statistical techniques. The choice of analysis depends on your research questions and the type of data you've gathered.
- Statistical Analysis: Use statistical software to analyze your data. Common techniques include t-tests, analysis of variance (ANOVA), regression analysis, and more, depending on the design and variables.
- Control for Confounding Variables: Be aware of potential confounding variables and include them in your analysis as covariates so that estimated effects are not distorted (see the sketch below).
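Here is a minimal sketch of such a covariate-adjusted (ANCOVA-style) analysis in Python, assuming a generic pretest-posttest study with a treatment and a control group; the data and variable names are invented:

```python
# OLS model of the post-test score on group, adjusting for the pre-test score.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.DataFrame({
    "pre":   [20, 22, 19, 25, 21, 24, 18, 23],
    "post":  [24, 27, 22, 30, 23, 26, 20, 25],
    "group": [1, 1, 1, 1, 0, 0, 0, 0],        # 1 = treatment, 0 = control
})

fit = smf.ols("post ~ group + pre", data=df).fit()
print(fit.params["group"])                    # adjusted group effect
```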
6. Interpret Results
With the analysis complete, you can interpret the results to draw meaningful conclusions about the relationship between the independent and dependent variables.
- Examine Effect Sizes: Assess the magnitude of the observed effects to determine their practical significance.
- Consider Significance Levels: Determine whether the observed results are statistically significant. Understand the p-values and their implications.
- Compare Findings to Hypotheses: Evaluate whether your findings support or reject your hypotheses and research questions.
7. Draw Conclusions
Based on your analysis and interpretation of the results, draw conclusions about the research questions and objectives you set out to address.
- Causal Inferences: Discuss the extent to which your study allows for causal inferences. Be transparent about the limitations and potential alternative explanations for your findings.
- Implications and Applications: Consider the practical implications of your research. How do your findings contribute to existing knowledge, and how can they be applied in real-world contexts?
- Future Research: Identify areas for future research and potential improvements in study design. Highlight any limitations or constraints that may have affected your study's outcomes.
By following these steps meticulously, you can conduct a rigorous and informative Quasi-Experimental study that advances knowledge in your field of research.
Quasi-Experimental Design Examples
Quasi-experimental design finds applications in a wide range of research domains, including business-related and market research scenarios. Below, we delve into some detailed examples of how this research methodology is employed in practice:
Example 1: Assessing the Impact of a New Marketing Strategy
Suppose a company wants to evaluate the effectiveness of a new marketing strategy aimed at boosting sales. Conducting a controlled experiment may not be feasible due to the company's existing customer base and the challenge of randomly assigning customers to different marketing approaches. In this scenario, a quasi-experimental design can be employed.
- Independent Variable: The new marketing strategy.
- Dependent Variable: Sales revenue.
- Design: The company could implement the new strategy for one group of customers while maintaining the existing strategy for another group. Both groups are selected based on similar demographics and purchase history, reducing selection bias. Pre-implementation data (sales records) can serve as the baseline, and post-implementation data can be collected to assess the strategy's impact.
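One simple way to quantify this design is a difference-in-differences calculation: the change in the treated group minus the change in the comparison group. A minimal sketch with invented sales figures:

```python
# Difference-in-differences on hypothetical mean sales per customer group.
treat_pre, treat_post = 100.0, 125.0   # new-strategy group, before/after
ctrl_pre, ctrl_post = 98.0, 110.0      # existing-strategy group, before/after

did = (treat_post - treat_pre) - (ctrl_post - ctrl_pre)
print(f"difference-in-differences estimate: {did:.1f}")  # 13.0
```

The comparison group's change absorbs market-wide trends, so the remaining difference is a more credible estimate of the strategy's effect, provided the two groups would otherwise have moved in parallel.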
Example 2: Evaluating the Effectiveness of Employee Training Programs
In the context of human resources and employee development, organizations often seek to evaluate the impact of training programs. A randomized controlled trial (RCT) with random assignment may not be practical or ethical, as some employees may need specific training more than others. Instead, a quasi-experimental design can be employed.
- Independent Variable: Employee training programs.
- Dependent Variable: Employee performance metrics, such as productivity or quality of work.
- Design: The organization can offer training programs to employees who express interest or demonstrate specific needs, creating a self-selected treatment group. A comparable control group can consist of employees with similar job roles and qualifications who did not receive the training. Pre-training performance metrics can serve as the baseline, and post-training data can be collected to assess the impact of the training programs.
Example 3: Analyzing the Effects of a Tax Policy Change
In economics and public policy, researchers often examine the effects of tax policy changes on economic behavior. Conducting a controlled experiment in such cases is practically impossible. Therefore, a quasi-experimental design is commonly employed.
- Independent Variable: Tax policy changes (e.g., tax rate adjustments).
- Dependent Variable: Economic indicators, such as consumer spending or business investments.
- Design: Researchers can analyze data from different regions or jurisdictions where tax policy changes have been implemented. One region could represent the treatment group (with tax policy changes), while a similar region with no tax policy changes serves as the control group. By comparing economic data before and after the policy change in both groups, researchers can assess the impact of the tax policy changes.
These examples illustrate how quasi-experimental design can be applied in various research contexts, providing valuable insights into the effects of independent variables in real-world scenarios where controlled experiments are not feasible or ethical. By carefully selecting comparison groups and controlling for potential biases, researchers can draw meaningful conclusions and inform decision-making processes.
How to Publish Quasi-Experimental Research?
Publishing your Quasi-Experimental research findings is a crucial step in contributing to the academic community's knowledge. We'll explore the essential aspects of reporting and publishing your Quasi-Experimental research effectively.
Structuring Your Research Paper
When preparing your research paper, it's essential to adhere to a well-structured format to ensure clarity and comprehensibility. Here are key elements to include:
Title and Abstract
- Title: Craft a concise and informative title that reflects the essence of your study. It should capture the main research question or hypothesis.
- Abstract: Summarize your research in a structured abstract, including the purpose, methods, results, and conclusions. Ensure it provides a clear overview of your study.
Introduction
- Background and Rationale: Provide context for your study by discussing the research gap or problem your study addresses. Explain why your research is relevant and essential.
- Research Questions or Hypotheses: Clearly state your research questions or hypotheses and their significance.
Literature Review
- Review of Related Work: Discuss relevant literature that supports your research. Highlight studies with similar methodologies or findings and explain how your research fits within this context.
Methods
- Participants: Describe your study's participants, including their characteristics and how you recruited them.
- Quasi-Experimental Design: Explain your chosen design in detail, including the independent and dependent variables, procedures, and any control measures taken.
- Data Collection: Detail the data collection methods, instruments used, and any pre-test or post-test measures.
- Data Analysis: Describe the statistical techniques employed, including any control for confounding variables.
Results
- Presentation of Findings: Present your results clearly, using tables, graphs, and descriptive statistics where appropriate. Include p-values and effect sizes, if applicable.
- Interpretation of Results: Discuss the implications of your findings and how they relate to your research questions or hypotheses.
Discussion
- Interpretation and Implications: Analyze your results in the context of existing literature and theories. Discuss the practical implications of your findings.
- Limitations: Address the limitations of your study, including potential biases or threats to internal validity.
- Future Research: Suggest areas for future research and how your study contributes to the field.
Ethical Considerations in Reporting
Ethical reporting is paramount in Quasi-Experimental research. Ensure that you adhere to ethical standards, including:
- Informed Consent: Clearly state that informed consent was obtained from all participants, and describe the informed consent process.
- Protection of Participants: Explain how you protected the rights and well-being of your participants throughout the study.
- Confidentiality: Detail how you maintained privacy and anonymity, especially when presenting individual data.
- Disclosure of Conflicts of Interest: Declare any potential conflicts of interest that could influence the interpretation of your findings.
Common Pitfalls to Avoid
When reporting your Quasi-Experimental research, watch out for common pitfalls that can diminish the quality and impact of your work:
- Overgeneralization: Be cautious not to overgeneralize your findings. Clearly state the limits of your study and the populations to which your results can be applied.
- Misinterpretation of Causality: Clearly articulate the limitations in inferring causality in Quasi-Experimental research. Avoid making strong causal claims unless supported by solid evidence.
- Ignoring Ethical Concerns: Ethical considerations are paramount. Failing to report on informed consent, ethical oversight, and participant protection can undermine the credibility of your study.
Guidelines for Transparent Reporting
To enhance the transparency and reproducibility of your Quasi-Experimental research, consider adhering to established reporting guidelines, such as:
- CONSORT Statement: If your study involves interventions or treatments, follow the CONSORT guidelines for transparent reporting of randomized controlled trials.
- STROBE Statement: For observational studies, the STROBE statement provides guidance on reporting essential elements.
- PRISMA Statement: If your research involves systematic reviews or meta-analyses, adhere to the PRISMA guidelines.
- Transparent Reporting of Evaluations with Non-Randomized Designs (TREND): TREND guidelines offer specific recommendations for transparently reporting non-randomized designs, including Quasi-Experimental research.
By following these reporting guidelines and maintaining the highest ethical standards, you can contribute to the advancement of knowledge in your field and ensure the credibility and impact of your Quasi-Experimental research findings.
Quasi-Experimental Design Challenges
Conducting a Quasi-Experimental study can be fraught with challenges that may impact the validity and reliability of your findings. We'll take a look at some common challenges and provide strategies on how you can address them effectively.
Selection Bias
Challenge: Selection bias occurs when non-randomized groups differ systematically in ways that affect the study's outcome. This bias can undermine the validity of your research, as it implies that the groups are not equivalent at the outset of the study.
Addressing Selection Bias:
- Matching: Employ matching techniques to create comparable treatment and control groups. Match participants based on relevant characteristics, such as age, gender, or prior performance, to balance the groups.
- Statistical Controls: Use statistical controls to account for differences between groups. Include covariates in your analysis to adjust for potential biases.
- Sensitivity Analysis: Conduct sensitivity analyses to assess how vulnerable your results are to selection bias. Explore different scenarios to understand the impact of potential bias on your conclusions.
History Effects
Challenge: History effects refer to external events or changes over time that influence the study's results. These external factors can confound your research by introducing variables you did not account for.
Addressing History Effects:
- Collect Historical Data: Gather extensive historical data to understand trends and patterns that might affect your study. By having a comprehensive historical context, you can better identify and account for historical effects.
- Control Groups: Include control groups whenever possible. By comparing the treatment group's results to those of a control group, you can account for external influences that affect both groups equally.
- Time Series Analysis: If applicable, use time series analysis to detect and account for temporal trends. This method helps differentiate between the effects of the independent variable and external events.
Maturation Effects
Challenge: Maturation effects occur when participants naturally change or develop throughout the study, independent of the intervention. These changes can confound your results, making it challenging to attribute observed effects solely to the independent variable.
Addressing Maturation Effects:
- Randomization: If possible, use randomization to distribute maturation effects evenly across treatment and control groups. Random assignment minimizes the impact of maturation as a confounding variable.
- Matched Pairs: If randomization is not feasible, employ matched pairs or statistical controls to ensure that both groups experience similar maturation effects.
- Shorter Time Frames: Limit the duration of your study to reduce the likelihood of significant maturation effects. Shorter studies are less susceptible to long-term maturation.
Regression to the Mean
Challenge: Regression to the mean is the tendency for extreme scores on a variable to move closer to the mean upon retesting. This can create the illusion of an intervention's effectiveness when, in reality, it's a natural statistical phenomenon.
Addressing Regression to the Mean:
- Use Control Groups: Include control groups in your study to provide a baseline for comparison. This helps differentiate genuine intervention effects from regression to the mean.
- Multiple Data Points: Collect numerous data points to identify patterns and trends. If extreme initial scores drift back toward the average on later measurements even without any change in the intervention, the apparent improvement is likely regression to the mean rather than a true intervention effect.
- Statistical Analysis: Employ statistical techniques that account for regression to the mean when analyzing your data. Techniques like analysis of covariance (ANCOVA) can help control for baseline differences.
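The following short simulation in Python illustrates the phenomenon itself: units selected for extreme initial scores drift back toward the average on retest even though nothing was done to them. All parameters are invented:

```python
# Regression to the mean: extreme first scores partially reflect noise.
import numpy as np

rng = np.random.default_rng(3)
ability = rng.normal(50, 5, 10_000)           # stable underlying trait
test1 = ability + rng.normal(0, 5, 10_000)    # noisy first measurement
test2 = ability + rng.normal(0, 5, 10_000)    # independent retest, no treatment

low = test1 < np.percentile(test1, 10)        # "enroll" the worst scorers
print(test1[low].mean(), test2[low].mean())   # retest mean is notably higher
```

Had this group received an intervention between the two tests, the improvement could easily be misattributed to it, which is why a control group selected in the same way is essential.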
Attrition and Mortality
Challenge: Attrition refers to the loss of participants over the course of your study, while mortality is the permanent loss of participants. High attrition rates can introduce biases and affect the representativeness of your sample.
Addressing Attrition and Mortality:
- Careful Participant Selection: Select participants who are likely to remain engaged throughout the study. Consider factors that may lead to attrition, such as participant motivation and commitment.
- Incentives: Provide incentives or compensation to participants to encourage their continued participation.
- Follow-Up Strategies: Implement effective follow-up strategies to reduce attrition. Regular communication and reminders can help keep participants engaged.
- Sensitivity Analysis: Conduct sensitivity analyses to assess the impact of attrition and mortality on your results. Compare the characteristics of participants who dropped out with those who completed the study.
Testing Effects
Challenge: Testing effects occur when the mere act of testing or assessing participants affects their subsequent performance. This phenomenon can lead to changes in the dependent variable that are unrelated to the independent variable.
Addressing Testing Effects:
- Counterbalance Testing: If possible, counterbalance the order of tests or assessments between treatment and control groups. This helps distribute the testing effects evenly across groups.
- Control Groups: Include control groups subjected to the same testing or assessment procedures as the treatment group. By comparing the two groups, you can determine whether testing effects have influenced the results.
- Minimize Testing Frequency: Limit the frequency of testing or assessments to reduce the likelihood of testing effects. Conducting fewer assessments can mitigate the impact of repeated testing on participants.
By proactively addressing these common challenges, you can enhance the validity and reliability of your Quasi-Experimental study, making your findings more robust and trustworthy.
Conclusion for Quasi-Experimental Design
Quasi-experimental design is a powerful tool that helps researchers investigate cause-and-effect relationships in real-world situations where strict control is not always possible. By understanding the key concepts, types of designs, and how to address challenges, you can conduct robust research and contribute valuable insights to your field. Remember, quasi-experimental design bridges the gap between controlled experiments and purely observational studies, making it an essential approach in various fields, from business and market research to public policy and beyond. So, whether you're a researcher, student, or decision-maker, the knowledge of quasi-experimental design empowers you to make informed choices and drive positive changes in the world.
The use and interpretation of quasi-experimental design
- What is a quasi-experimental design?
A quasi-experimental design estimates cause-and-effect relationships without randomly assigning participants to groups. It is commonly used in medical informatics (a field that uses digital information to ensure better patient care), where researchers generally use it to evaluate the effectiveness of a treatment, perhaps a type of antibiotic or psychotherapy, or an educational or policy intervention.
Even though quasi-experimental design has been used for some time, relatively little is known about it. Read on to learn the ins and outs of this research design.
- When to use a quasi-experimental design
A quasi-experimental design is used when it's not logistically feasible or ethical to conduct randomized, controlled trials. As its name suggests, a quasi-experimental design is almost a true experiment; however, researchers don't randomly assign participants or other study elements to groups in this type of research.
Researchers prefer to apply quasi-experimental design when there are ethical or practical concerns. Let's look at these two reasons more closely.
Ethical reasons
In some situations, random assignment can be unethical. For instance, providing public healthcare to one group while withholding it from another for the sake of research is unethical. A quasi-experimental design can instead examine the relationship between existing groups, avoiding the harm of withholding care.
Practical reasons
Randomized controlled trials are not always the most practical approach. For instance, it's often impractical to recruit and screen a very large pool of participants without using a particular attribute to guide your data collection.
Recruiting participants and properly designing a data-collection attribute to make the research a true experiment requires a lot of time and effort, and can be expensive if you don’t have a large funding stream.
A quasi-experimental design allows researchers to take advantage of previously collected data and use it in their study.
- Examples of quasi-experimental designs
Quasi-experimental research design is common in medical research, but any researcher can use it for research that raises practical and ethical concerns. Here are a few examples of quasi-experimental designs used by different researchers:
Example 1: Determining the effectiveness of math apps in supplementing math classes
A school wanted to supplement its math classes with a math app. To select the best app, the school decided to conduct demo tests on two apps before selecting the one they will purchase.
Scope of the research
Since every grade had two math teachers, each teacher used one of the two apps for three months. They then gave the students the same math exams and compared the results to determine which app was most effective.
Reasons why this is a quasi-experimental study
This simple study is a quasi-experiment since the school didn't randomly assign its students to the applications. They used a pre-existing class structure to conduct the study since it was impractical to randomly assign the students to each app.
Example 2: Determining the effectiveness of teaching modern leadership techniques in start-up businesses
A hypothetical quasi-experimental study was conducted in an economically developing country in a mid-sized city.
Five start-ups in the textile industry and five in the tech industry participated in the study. The leaders attended a six-week workshop on leadership style, team management, and employee motivation.
After a year, the researchers assessed the performance of each start-up company to determine growth. The results indicated that the tech start-ups were further along in their growth than the textile companies.
The basis of quasi-experimental research is a non-randomized subject-selection process. This study didn't use specific criteria to determine which start-up companies should participate. The results may therefore seem straightforward, but a company's growth can be driven by many factors apart from the variables the researchers measured.
Example 3: A study to determine the effects of policy reforms and of luring foreign investment on small businesses in two mid-size cities
In a study to determine the economic impact of government reforms in an economically developing country, the government decided to test whether creating reforms directed at small businesses or luring foreign investments would spur the most economic development.
The government selected two cities with similar population demographics and sizes. In one of the cities, they implemented specific policies that would directly impact small businesses, and in the other, they implemented policies to attract foreign investment.
After five years, they collected end-of-year economic growth data from both cities. They looked at elements like local GDP growth, unemployment rates, and housing sales.
The study used a non-randomized selection process to determine which cities would participate in the research, relying on pre-existing populations in each city rather than randomly assigned groups. Researchers also left out certain variables that could play a crucial role in determining each city's growth.
- Advantages of a quasi-experimental design
Some advantages of quasi-experimental designs are:
Researchers can manipulate variables to help them meet their study objectives.
It offers high external validity, making it suitable for real-world applications, specifically in social science experiments.
Integrating this methodology into other research designs is easier, especially in true experimental research. This cuts down on the time needed to determine your outcomes.
- Disadvantages of a quasi-experimental design
Despite the pros that come with a quasi-experimental design, there are several disadvantages associated with it, including the following:
It has lower internal validity: because researchers do not have full control over the composition of the comparison and intervention groups, or over differences between time periods, the people, places, or times involved may differ in important ways. It can therefore be challenging to determine whether all relevant variables were accounted for and how they affected the results.
There is the risk of inaccurate data since the research design borrows information from other studies.
There is the possibility of bias since researchers select baseline elements and eligibility.
- What are the different quasi-experimental study designs?
There are three distinct types of quasi-experimental designs:
Nonequivalent group design
Regression discontinuity
Natural experiment
Nonequivalent group design
This is a hybrid of experimental and quasi-experimental methods, used to leverage the best qualities of the two. Like a true experiment, the nonequivalent group design compares a treatment group with a control group; however, the groups are pre-existing ones believed to be comparable, and it doesn't use randomization, the lack of which is the defining element of quasi-experimental design.
Researchers usually ensure that no confounding variables impact them throughout the grouping process. This makes the groupings more comparable.
Example of a nonequivalent group design
A small study was conducted to determine whether after-school programs result in better grades. Researchers selected two existing groups of students: one implemented the new program, the other did not. They then compared the results of the two groups.
Regression discontinuity
This type of quasi-experimental research design calculates the impact of a specific treatment or intervention. It uses a criterion known as a "cutoff" that assigns treatment according to eligibility.
Researchers typically assign participants above the cutoff to the treatment group and those below it to the control group. Because participants just above and just below the cutoff are nearly identical, the distinction between the two groups near the threshold is negligible.
Example of regression discontinuity
Students must achieve a minimum score to be enrolled in specific US high schools. Since the cutoff score used to determine eligibility for enrollment is arbitrary, researchers can assume that students who only just fail to achieve it and those who barely pass it differ by only a small margin, so later differences between them can be attributed to the schools these students attend.
Researchers can then examine the long-term outcomes of these two groups of students to estimate the effect of attending certain schools. Such findings can, in turn, inform enrollment policies for these high schools.
Natural experiment
This research design differs from laboratory and field experiments, where researchers control assignment to groups. In a natural experiment, nature or an external event or situation assigns subjects to the treatment group, sometimes in a way that is as good as random.
However, even with such random-like assignment, this research design cannot be called a true experiment, since the natural events are merely observed rather than controlled. Researchers can nonetheless exploit these events despite having no control over the independent variables.
Example of the natural experiment approach
An example of a natural experiment is the 2008 Oregon Health Study.
Oregon intended to allow more low-income people to participate in Medicaid.
Since they couldn't afford to cover every person who qualified for the program, the state used a random lottery to allocate program slots.
Researchers assessed the program's effectiveness by treating the lottery winners as a randomly assigned treatment group, while those who didn't win the lottery were considered the control group.
- Differences between quasi-experiments and true experiments
There are several differences between a quasi-experiment and a true experiment:
Participants in true experiments are randomly assigned to the treatment or control group, while participants in a quasi-experiment are not assigned randomly.
In a quasi-experimental design, the control and treatment groups differ in unknown or unknowable ways, apart from the experimental treatments that are carried out. Therefore, the researcher should try as much as possible to control these differences.
Quasi-experimental designs have several "competing hypotheses," which compete with experimental manipulation to explain the observed results.
Quasi-experiments tend to have lower internal validity (the degree of confidence in the research outcomes) than true experiments, but they may offer higher external validity (whether findings can be extended to other contexts) as they involve real-world interventions instead of controlled interventions in artificial laboratory settings.
Despite the distinct difference between true and quasi-experimental research designs, these two research methodologies share the following aspects:
Both study methods subject participants to some form of treatment or conditions.
Researchers have the freedom to measure some of the outcomes of interest.
Researchers can test whether the differences in the outcomes are associated with the treatment.
- An example comparing a true experiment and quasi-experiment
Imagine you wanted to study the effects of junk food on obese people. Here's how you would do this as a true experiment and a quasi-experiment:
How to carry out a true experiment
In a true experiment, some participants would eat junk foods, while the rest would be in the control group, adhering to a regular diet. At the end of the study, you would record the health and discomfort of each group.
This kind of experiment would raise ethical concerns since the participants assigned to the treatment group are required to eat junk food against their will throughout the experiment. This calls for a quasi-experimental design.
How to carry out a quasi-experiment
In quasi-experimental research, you would start by finding out which participants want to try junk food and which prefer to stick to a regular diet. This allows you to assign these two groups based on subject choice.
In this case, you didn't assign participants to a particular group, which resolves the ethical concern; however, because the groups are self-selected, you should account for possible selection effects when interpreting the results.
When is a quasi-experimental design used?
Quasi-experimental designs are used when researchers don’t want to use randomization when evaluating their intervention.
What are the characteristics of quasi-experimental designs?
Some of the characteristics of a quasi-experimental design are:
Researchers don't randomly assign participants into groups, but study their existing characteristics and assign them accordingly.
Researchers study the participants in pre- and post-testing to determine the progress of the groups.
Quasi-experimental design can be more ethical in such settings, since it doesn't involve offering or withholding treatment at random.
Quasi-experimental design encompasses a broad range of non-randomized intervention studies. This design is employed when it is not ethical or logistically feasible to conduct randomized controlled trials. Researchers typically employ it when evaluating policy or educational interventions, or in medical or therapy scenarios.
How do you analyze data in a quasi-experimental design?
You can use two-group tests, time-series analysis, and regression analysis to analyze data in a quasi-experiment design. Each option has specific assumptions, strengths, limitations, and data requirements.
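As a small illustration of the first and last of these options, a two-group t-test and a regression with a treatment indicator recover the same mean difference; the data below are synthetic:

```python
# Equivalent two-group analyses of a synthetic quasi-experiment.
import numpy as np
import statsmodels.api as sm
from scipy import stats

rng = np.random.default_rng(4)
control = rng.normal(10, 2, 40)
treated = rng.normal(12, 2, 40)

print(stats.ttest_ind(treated, control))               # two-group t-test

y = np.concatenate([control, treated])
d = np.concatenate([np.zeros(40), np.ones(40)])        # treatment indicator
print(sm.OLS(y, sm.add_constant(d)).fit().params[1])   # same mean difference
```

The advantage of the regression form is that covariates can be added to adjust for group differences, which is usually necessary in a quasi-experiment.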
Planning and Conducting Clinical Research: The Whole Process
Boon-How Chew
Abstract
The goal of this review was to present the essential steps in the entire process of clinical research. Research should begin with an educated idea arising from a clinical practice issue. A research topic rooted in a clinical problem provides the motivation for the completion of the research and relevancy for affecting medical practice changes and improvements. The research idea is further informed through a systematic literature review, clarified into a conceptual framework, and defined into an answerable research question. Engagement with clinical experts, experienced researchers, relevant stakeholders of the research topic, and even patients can enhance the research question’s relevance, feasibility, and efficiency. Clinical research can be completed in two major steps: study designing and study reporting. Three study designs should be planned in sequence and iterated until properly refined: theoretical design, data collection design, and statistical analysis design. The design of data collection could be further categorized into three facets: experimental or non-experimental, sampling or census, and time features of the variables to be studied. The ultimate aims of research reporting are to present findings succinctly and in a timely manner. Concise, explicit, and complete reporting are the guiding principles in clinical studies reporting.
Keywords: clinical epidemiology, literature review, conceptual framework, research question, study designs, study reporting
Introduction and background
Medical and clinical research can be classified in many different ways. Probably, most people are familiar with basic (laboratory) research, clinical research, healthcare (services) research, health systems (policy) research, and educational research. Clinical research in this review refers to scientific research related to clinical practices. There are many ways a clinical study's findings can become invalid or less impactful, including ignorance of previous similar studies, a paucity of similar studies, poor study design and implementation, low test agent efficacy, no predetermined statistical analysis, insufficient reporting, bias, and conflicts of interest [ 1 - 4 ]. Scientific, ethical, and moral decadence among researchers can be due to incognizant criteria in academic promotion and remuneration and too many forced studies by amateurs and students for the sake of research without adequate training or guidance [ 2 , 5 - 6 ]. This article will review the proper methods to conduct medical research from the planning stage to submission for publication (Table 1 ).
Table 1. Overview of the essential concepts of the whole clinical research process.
a Feasibility and efficiency are considered during the refinement of the research question and adhered to during data collection.
Epidemiologic studies in clinical and medical fields focus on the effect of a determinant on an outcome [ 7 ]. Measurement errors that happen systematically give rise to biases leading to invalid study results, whereas random measurement errors will cause imprecise reporting of effects. Precision can usually be increased with an increased sample size provided biases are avoided or trivialized. Otherwise, the increased precision will aggravate the biases. Because epidemiologic, clinical research focuses on measurement, measurement errors are addressed throughout the research process. Obtaining the most accurate estimate of a treatment effect constitutes the whole business of epidemiologic research in clinical practice. This is greatly facilitated by clinical expertise and current scientific knowledge of the research topic. Current scientific knowledge is acquired through literature reviews or in collaboration with an expert clinician. Collaboration and consultation with an expert clinician should also include input from the target population to confirm the relevance of the research question. The novelty of a research topic is less important than the clinical applicability of the topic. Researchers need to acquire appropriate writing and reporting skills from the beginning of their careers, and these skills should improve with persistent use and regular reviewing of published journal articles. A published clinical research study stands on solid scientific ground to inform clinical practice given the article has passed through proper peer-reviews, revision, and content improvement.
Systematic literature reviews
Systematic literature reviews of published papers will inform authors of the existing clinical evidence on a research topic. This is an important step to reduce wasted efforts and evaluate the planned study [ 8 ]. Conducting a systematic literature review is a well-known important step before embarking on a new study [ 9 ]. A rigorously performed and cautiously interpreted systematic review that includes in-process trials can inform researchers of several factors [ 10 ]. Reviewing the literature will inform the choice of recruitment methods, outcome measures, questionnaires, intervention details, and statistical strategies – useful information to increase the study’s relevance, value, and power. A good review of previous studies will also provide evidence of the effects of an intervention that may or may not be worthwhile; this would suggest either no further studies are warranted or that further study of the intervention is needed. A review can also inform whether a larger and better study is preferable to an additional small study. Reviews of previously published work may yield few studies or low-quality evidence from small or poorly designed studies on certain intervention or observation; this may encourage or discourage further research or prompt consideration of a first clinical trial.
Conceptual framework
The result of a literature review should include identifying a working conceptual framework to clarify the nature of the research problem, questions, and designs, and even guide the latter discussion of the findings and development of possible solutions. Conceptual frameworks represent ways of thinking about a problem or how complex things work the way they do [ 11 ]. Different frameworks will emphasize different variables and outcomes, and their inter-relatedness. Each framework highlights or emphasizes different aspects of a problem or research question. Often, any single conceptual framework presents only a partial view of reality [ 11 ]. Furthermore, each framework magnifies certain elements of the problem. Therefore, a thorough literature search is warranted for authors to avoid repeating the same research endeavors or mistakes. It may also help them find relevant conceptual frameworks including those that are outside one’s specialty or system.
Conceptual frameworks can come from theories with well-organized principles and propositions that have been confirmed by observations or experiments. Conceptual frameworks can also come from models derived from theories, observations or sets of concepts or even evidence-based best practices derived from past studies [ 11 ].
Researchers convey their assumptions of the associations of the variables explicitly in the conceptual framework to connect the research to the literature. After selecting a single conceptual framework or a combination of a few frameworks, a clinical study can be completed in two fundamental steps: study design and study report. Three study designs should be planned in sequence and iterated until satisfaction: the theoretical design, data collection design, and statistical analysis design [ 7 ].
Study designs
Theoretical Design
Theoretical design is the next important step in the research process after a literature review and conceptual framework identification. While the theoretical design is a crucial step in research planning, it is often dealt with lightly because of the more alluring second step (data collection design). In the theoretical design phase, a research question is designed to address a clinical problem, which involves an informed understanding based on the literature review and effective collaboration with the right experts and clinicians. A well-developed research question will have an initial hypothesis of the possible relationship between the explanatory variable/exposure and the outcome. This will inform the nature of the study design, be it qualitative or quantitative, primary or secondary, and non-causal or causal (Figure 1 ).
Figure 1. Fundamental classification of clinical studies.
A study is qualitative if the research question aims to explore, understand, describe, discover or generate reasons underlying certain phenomena. Qualitative studies usually focus on a process to determine how and why things happen [ 12 ]. Quantitative studies use deductive reasoning, and numerical statistical quantification of the association between groups on data often gathered during experiments [ 13 ]. A primary clinical study is an original study gathering a new set of patient-level data. Secondary research draws on the existing available data and pooling them into a larger database to generate a wider perspective or a more powerful conclusion. Non-causal or descriptive research aims to identify the determinants or associated factors for the outcome or health condition, without regard for causal relationships. Causal research is an exploration of the determinants of an outcome while mitigating confounding variables. Table 2 shows examples of non-causal (e.g., diagnostic and prognostic) and causal (e.g., intervention and etiologic) clinical studies. Concordance between the research question, its aim, and the choice of theoretical design will provide a strong foundation and the right direction for the research process and path.
Table 2. Examples of clinical study titles according to the category of research and the data collection designs.
A problem in clinical epidemiology can be phrased as a mathematical relationship in which the outcome is a function of the determinant (D), conditional on the extraneous determinants (ED), more commonly known as the confounding factors [ 7 ]:
For non-causal research: Outcome = f(D1, D2, …, Dn)
For causal research: Outcome = f(D | ED)
A well-formed research question is composed of at least three components: 1) an outcome or a health condition, 2) determinant(s) or factors associated with the outcome, and 3) the domain. The outcome and the determinants have to be clearly conceptualized and operationalized as measurable variables (Table 3 ; PICOT [ 14 ] and FINER [ 15 ]). The study domain is the theoretical source population from which the study population will be sampled, similar to the wording on a drug package insert that reads, “use this medication (study results) in people with this disease” [ 7 ].
Table 3. The PICOT and FINER of a research question.
The interpretation of study results as they apply to wider populations is known as generalization, and generalization can either be statistical or made using scientific inferences [ 16 ]. Generalization supported by statistical inferences is seen in studies on disease prevalence where the sample population is representative of the source population. By contrast, generalizations made using scientific inferences are not bound by the representativeness of the sample in the study; rather, the generalization should be plausible from the underlying scientific mechanisms as long as the study design is valid and nonbiased. Scientific inferences and generalizations are usually the aims of causal studies.
Confounding: Confounding is a situation where true effects are obscured or confused [ 7 , 16 ]. Confounding variables or confounders affect the validity of a study’s outcomes and should be prevented or mitigated in the planning stages and further managed in the analytical stages. Confounders are also known as extraneous determinants in epidemiology due to their inherent and simultaneous relationships to both the determinant and outcome (Figure 2 ), which are usually one-determinant-to-one outcome in causal clinical studies. The known confounders are also called observed confounders. These can be minimized using randomization, restriction, or a matching strategy. Residual confounding has occurred in a causal relationship when identified confounders were not measured accurately. Unobserved confounding occurs when the confounding effect is present as a variable or factor not observed or yet defined and, thus, not measured in the study. Age and gender are almost universal confounders followed by ethnicity and socio-economic status.
Figure 2. The confounders in a causal relationship.
Confounders have three main characteristics. They are a potential risk factor for the disease, associated with the determinant of interest, and should not be an intermediate variable between the determinant and the outcome or a precursor to the determinant. For example, a sedentary lifestyle is a cause for acute coronary syndrome (ACS), and smoking could be a confounder but not cardiorespiratory unfitness (which is an intermediate factor between a sedentary lifestyle and ACS). For patients with ACS, not having a pair of sports shoes is not a confounder – it is a correlate for the sedentary lifestyle. Similarly, depression would be a precursor, not a confounder.
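A small simulation can show why measuring confounders matters: a crude analysis that ignores a confounder overstates the effect, while adjusting for it recovers the true value. All coefficients below are invented for illustration:

```python
# Crude vs confounder-adjusted effect estimates on synthetic data.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(5)
n = 2000
conf = rng.normal(0, 1, n)                      # e.g., smoking intensity
det = 0.7 * conf + rng.normal(0, 1, n)          # determinant linked to it
outcome = 0.5 * det + 0.9 * conf + rng.normal(0, 1, n)

crude = sm.OLS(outcome, sm.add_constant(det)).fit()
adj = sm.OLS(outcome, sm.add_constant(np.column_stack([det, conf]))).fit()
print(crude.params[1], adj.params[1])           # ~0.9 (biased) vs ~0.5 (true)
```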
Sample size consideration: Sample size calculation gives the number of participants that must be recruited for a new study to detect true differences in the target population if they exist. The calculation rests on three inputs: the estimated difference between groups (the expected effect size), the probabilities of α (Type I) and β (Type II) errors chosen according to the nature of the treatment or intervention, and the estimated variability of the outcome (for interval data) or its proportion (for nominal data) [ 17 - 18 ]. Clinically important effect sizes are determined by expert consensus or by patients’ perception of benefit. Value and economic considerations are increasingly included in sample size estimations. Sample size, together with the degree to which the sample represents the target population, affects the accuracy and generalizability of a study’s reported effects.
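As a concrete illustration of these three inputs, the sketch below uses the standard normal-approximation formula for comparing two independent proportions; the cure rates, α, and power values are hypothetical choices, not recommendations from the article:

```python
# A minimal sketch (assumed formula: normal approximation for two
# independent proportions) of the sample size calculation described above.
from scipy.stats import norm

def n_per_group(p1, p2, alpha=0.05, power=0.80):
    """Participants needed per group to detect p1 vs p2 (two-sided test)."""
    z_a = norm.ppf(1 - alpha / 2)   # Type I error threshold
    z_b = norm.ppf(power)           # 1 - Type II error
    var = p1 * (1 - p1) + p2 * (1 - p2)
    return (z_a + z_b) ** 2 * var / (p1 - p2) ** 2

# e.g. detecting an improvement in cure rate from 60% to 75%
print(round(n_per_group(0.60, 0.75)))  # roughly 150 per group
```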
Pilot study: Pilot studies assess the feasibility of the proposed research procedures on a small sample and test the efficiency of participant recruitment with minimal practice or service interruption. Pilot studies should not be used to obtain a projected effect size for a larger study: because the sample size of a typical pilot study is small, the standard error of its effect size estimate is large, and projecting that estimate onto a large population introduces bias. Underestimation of the effect size could lead to inappropriate termination of the full-scale study, while overestimation, to which a small pilot is equally prone, would lead to an underpowered and ultimately failed full-scale study [ 19 ].
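A small simulation makes the instability of pilot effect sizes tangible; the true standardized effect (d = 0.3) and the sample sizes here are assumptions for illustration only:

```python
# Illustrative simulation: effect sizes estimated from small pilots
# scatter widely, which is why they are a poor basis for powering a
# full-scale study. Assumes normal outcomes with true effect d = 0.3.
import numpy as np

rng = np.random.default_rng(7)
true_d = 0.3

def estimated_d(n, reps=5000):
    a = rng.normal(0.0, 1.0, size=(reps, n))
    b = rng.normal(true_d, 1.0, size=(reps, n))
    pooled_sd = np.sqrt((a.var(axis=1, ddof=1) + b.var(axis=1, ddof=1)) / 2)
    return (b.mean(axis=1) - a.mean(axis=1)) / pooled_sd

for n in (10, 50, 500):  # participants per arm
    d_hat = estimated_d(n)
    print(f"n={n:4d}  mean={d_hat.mean():.2f}  SD={d_hat.std():.2f}")
# With n=10 per arm the SD of the estimate (~0.45) exceeds the true
# effect itself, so under- and overestimation are both likely.
```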
The Design of Data Collection
The “perfect” study design in the theoretical phase now faces the practical and realistic challenges of feasibility. This is the step where different methods for data collection are considered, with one selected as the most appropriate based on the theoretical design along with feasibility and efficiency. The goal of this stage is to achieve the highest possible validity with the lowest risk of biases given available resources and existing constraints.
In causal research, data on the outcome and determinants are collected with utmost accuracy via a strict protocol to maximize validity and precision. The validity of an instrument is the degree to which it measures what it is intended to measure, that is, the degree to which the results of measurement correlate with the true state of an occurrence. Another widely used word for validity is accuracy. Internal validity refers to the accuracy of a study’s results for its own study sample and is influenced by the study design, whereas external validity refers to the applicability of a study’s results to other populations. External validity is also known as generalizability and expresses the validity of assuming similarity and comparability between the study population and other populations. Reliability of an instrument denotes the extent of agreement between repeated measurements of an occurrence by that instrument at different times, by different investigators, or in different settings. Other terms used for reliability include reproducibility and precision. Anticipating confounders by identifying and including them in data collection allows statistical adjustment in the later analyses. In descriptive research, outcomes must be confirmed against a reference standard, and the determinants should be as valid as those found in real clinical practice.
Common designs for data collection include cross-sectional, case-control, cohort, and randomized controlled trial (RCT) designs. Many modern epidemiologic designs are based on these classical designs, such as the nested case-control, case-crossover, case-control without control, and stepped-wedge cluster RCT designs. A cross-sectional study is typically a snapshot of the study population, and an RCT is almost always prospective. Case-control and cohort studies can be retrospective or prospective in data collection. The nested case-control design differs from the traditional case-control design in that it is “nested” within a well-defined cohort from which information on the cohort members can be obtained. This design also satisfies the assumption that cases and controls represent random samples of the same study base. Table 4 provides examples of these data collection designs.
Table 4. Examples of clinical study titles according to the data collection designs.
Additional aspects in data collection: No data collection design will implement the theoretical design of a research question perfectly in actual conduct, because of the myriad issues facing investigators, such as dynamic clinical practices, constraints of time and budget, the urgency of an answer to the research question, and the ethical integrity of the proposed experiment. Feasibility and efficiency, without sacrifice of validity and precision, are therefore important considerations, and data collection design requires additional consideration in three aspects: experimental/non-experimental, sampling, and timing [ 7 ]:
Experimental or non-experimental: Non-experimental (i.e., “observational”) research, in contrast to experimental research, involves collecting data on study participants in their natural or real-world environments. Non-experimental studies are usually diagnostic and prognostic studies with cross-sectional data collection. The pinnacle of non-experimental research is the comparative effectiveness study, which is grouped with other non-experimental designs such as cross-sectional, case-control, and cohort studies [ 20 ]. It is also known as the benchmarking controlled trial because of the element of peer comparison (using comparable groups) in interpreting the outcome effects [ 20 ]. Experimental study designs are characterized by an intervention applied to a selected group of the study population in a controlled environment, often alongside a similar group of the study population that receives no intervention (i.e., the control group) for comparison. Thus, the widely known RCT is classified as an experimental design in data collection. An experimental study design without randomization is referred to as a quasi-experimental study. Experimental studies try to determine the efficacy of a new intervention in a specified population. Table 5 presents the advantages and disadvantages of experimental and non-experimental studies [ 21 ].
Table 5. The advantages and disadvantages of experimental and non-experimental data collection designs .
a May be an issue in cross-sectional studies that require a long recall to the past such as dietary patterns, antenatal events, and life experiences during childhood.
Once an intervention has shown an effect in an experimental study, non-experimental and quasi-experimental studies can be used to determine the intervention’s effect in a wider population and within real-world settings and clinical practices. Pragmatic trials and comparative effectiveness studies are the usual designs used for data collection in these situations [ 22 ].
Sampling/census: A census collects data on the whole source population (i.e., the study population is the source population). This is possible when the defined population is restricted to a given geographical area. A cohort study uses the census method in data collection, and an ecologic study is a cohort study that collects summary measures of the study population instead of individual patient data. For feasibility and efficiency, however, many studies sample from the source population and infer the results to that population, because adequate sampling yields results similar to a census of the whole population. Important aspects of sampling in research planning are sample size and representation of the population. Sample size calculation estimates the number of participants needed in the study to detect the true association between the determinant and the outcome. It is based on the primary objective or outcome of interest and is informed by the differences or effect sizes estimated in previous similar studies. The sample size is therefore a scientific estimate made for the design of the planned study.
Sampling of participants or cases can represent the study population and the larger population of patients with that disease, but only in prevalence, diagnostic, and prognostic studies; etiologic and interventional studies do not share this level of representation. A cross-sectional design is common for determining disease prevalence in the population. Cross-sectional studies can also determine the reference ranges of variables in the population and measure change over time (e.g., repeated cross-sectional studies). Besides being cost- and time-efficient, cross-sectional studies avoid loss to follow-up, recall bias, learning effects on participants, and variability over time in equipment, measurement, and technician. A cross-sectional design for an etiologic study is possible when the determinants do not change with time (e.g., gender, ethnicity, genetic traits, and blood groups).
In etiologic research, comparability between the exposed and the non-exposed groups is more important than sample representation. Comparability between these two groups provides an accurate estimate of the effect of the exposure (risk factor) on the outcome (disease) and enables valid inference of the causal relation to the domain (the theoretical population). In a case-control study, controls should be sampled from the same study population (study base) and have profiles similar to the cases (matching) but without the outcome seen in the cases. Matching on important factors minimizes confounding by those factors and increases statistical efficiency by ensuring similar numbers of cases and controls within confounder strata [ 23 - 24 ]. Nonetheless, perfect matching is neither necessary nor achievable in a case-control study, because a partial match can achieve most of the benefit of a perfect match, yielding a more precise estimate of the odds ratio than statistical control of confounding in unmatched designs [ 25 - 26 ]. Moreover, perfect or full matching can lead to underestimation of the point estimates [ 27 - 28 ].
Time feature: The timing of data collection for the determinant and outcome characterizes the type of study. A cross-sectional study has the axis of time zero (T = 0) for both the determinant and the outcome, which separates it from all other types of research, in which the time for the outcome is T > 0. Retrospective and prospective refer to the direction of data collection: in retrospective studies, information on the determinant and outcome has already been collected or recorded; in prospective studies, this information will be collected in the future. These terms should not be used to describe the relationship between the determinant and the outcome in etiologic studies. The time of exposure to the determinant, the time of induction, and the time at risk for the outcome are important concepts. Time at risk is the period during which a person is exposed to the determinant risk factors. Time of induction is the time from sufficient exposure to the risk or causal factors to the occurrence of the disease. The latent period is the interval during which a disease is present without manifesting itself, as in “silent” diseases such as cancers, hypertension, and type 2 diabetes mellitus, which are detected through screening. Figure 3 illustrates the time features of a variable. Variable timing is important for accurate data capture.
Figure 3. The time features of a variable.
The Design of Statistical Analysis
Statistical analysis of epidemiologic data provides the estimate of effects after correcting for biases (e.g., confounding factors) and measures the variability in the data that arises from random error or chance [ 7 , 16 , 29 ]. An effect estimate gives the size of an association between the studied variables or the level of effectiveness of an intervention. This quantitative result allows comparison and assessment of the usefulness and significance of the association or intervention across studies. Such significance must be interpreted within a statistical model and an appropriate study design. Random error may arise in a study from, for example, unexplained personal choices by the participants. Random error is present when values or units of measurement of the variables change in a non-concerted or non-directional manner; conversely, when these values change in a concerted or directional manner, we observe a significant relationship, as shown by statistical significance.
Variability: Researchers almost always collect the needed data through sampling of subjects/participants from a population rather than a census. Sampling, or repeated sampling across geographical regions or time periods, contributes to variation in the information collected, owing to the random inclusion of different participants and chance occurrence. This sampling variation becomes the focus of statistics when communicating the degree and intensity of variation in the sampled data and the level of inference to the population. Sampling variation is profoundly influenced by the number of participants and by the spread of the measured variable (its standard deviation). Hence, the characteristics of the participants, the measurements, and the sample size are all important factors to consider in planning a study.
Statistical strategy: The statistical strategy is usually determined by the theoretical and data collection designs. A prespecified statistical strategy (including any decision to dichotomize continuous data at certain cut-points, and any subgroup or sensitivity analyses) should be set out in the study proposal (i.e., the protocol) to prevent data dredging and data-driven reports that predispose to bias. The nature of the study hypothesis also dictates whether directional (one-tailed) or non-directional (two-tailed) significance tests are conducted. In most studies, two-sided tests are used, except in specific instances when unidirectional hypotheses may be appropriate (e.g., in superiority or non-inferiority trials). While data exploration is discouraged, epidemiological research is, by the nature of its objectives, statistical research, so it is acceptable to report persistent associations between variables with plausible underlying mechanisms found during exploration of the data. The statistical methods used to produce the results should be explicitly described. Many different statistical tests exist to handle the various kinds of data appropriately (e.g., interval vs discrete) and the various distributions of the data (e.g., normally distributed or skewed). For statistical explanations and the underlying concepts of statistical tests, readers are referred to the cited references [ 30 - 31 ].
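The one-tailed versus two-tailed choice can be shown with a short sketch on simulated data (all values hypothetical); the directional test roughly halves the p value here, which is exactly why the direction must be prespecified in the protocol rather than chosen after seeing the data:

```python
# Directional vs non-directional testing on simulated treatment/control data.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
control = rng.normal(50, 10, 40)
treated = rng.normal(55, 10, 40)

t, p_two = stats.ttest_ind(treated, control)                         # two-tailed
_, p_one = stats.ttest_ind(treated, control, alternative="greater")  # one-tailed

print(f"two-sided p = {p_two:.4f}; one-sided p = {p_one:.4f}")
```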
Steps in statistical analyses: Statistical analysis begins with checking for data entry errors: duplicates are eliminated, and proper units are confirmed. Extremely low, extremely high, or otherwise suspicious values are verified against the source data; if verification is not possible, such a value is better classified as missing, although unverified suspicious values that are not obviously wrong should be retained and examined as outliers in the analysis. Data checking and cleaning enable the analyst to establish a connection with the raw data and to anticipate possible results from further analyses. This initial step involves descriptive statistics that summarize central tendency (i.e., mode, median, and mean) and dispersion (i.e., minimum, maximum, range, quartiles, absolute deviation, variance, and standard deviation). Graphical displays such as a scatter plot, a box-and-whisker plot, a histogram, or a normal Q-Q plot are helpful at this stage for verifying the normality of the data distribution. See Figure 4 for the statistical tests available for analyses of different types of data.
Figure 4. Statistical tests available for analyses of different types of data.
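A minimal sketch of this data-checking and descriptive step might look as follows; the file name 'trial.csv' and its column names are hypothetical placeholders:

```python
# Data checking and descriptive statistics for a hypothetical dataset.
import pandas as pd
import matplotlib.pyplot as plt

df = pd.read_csv("trial.csv")
df = df.drop_duplicates(subset="patient_id")   # eliminate duplicate entries

print(df["sbp_mmhg"].describe())               # mean, SD, quartiles, min/max

# Flag implausible or suspicious values for confirmation against source
# data; unverifiable values become missing unless kept as outliers.
suspicious = df[(df["sbp_mmhg"] < 60) | (df["sbp_mmhg"] > 250)]
print(suspicious[["patient_id", "sbp_mmhg"]])

df["sbp_mmhg"].hist(bins=30)                   # visual check of distribution
plt.show()
```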
Once the data characteristics are ascertained, further statistical tests are selected. The analytical strategy sometimes involves transforming the data distribution for the selected tests (e.g., log, natural log, exponential, quadratic) or checking the robustness of the association between the determinants and their outcomes. This step constitutes inferential statistics, in which the results support hypothesis testing and generalization to the wider population that the study’s sampled participants represent. The last step is checking that the analyses fulfill the assumptions of the chosen statistical test and model, to avoid violations and misleading results. These assumptions include normality, homogeneity of variance, and the behaviour of the residuals of the final statistical model. Other statistical values, such as the Akaike information criterion, the variance inflation factor/tolerance, and R2, are also considered when choosing the best-fitting model. Raw data may be transformed, or a higher level of statistical analysis may be used (e.g., generalized linear models and mixed-effects modeling). Successful statistical analysis allows the conclusions of the study to fit the data.
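For instance, a residual-normality check combined with a log transform, one of the transformation-and-assumption steps described above, could be sketched as follows (simulated data; the chosen transform is illustrative, not prescriptive):

```python
# Illustrative assumption check: fit a simple linear model, test residual
# normality, and compare against a log-transformed outcome.
import numpy as np
import statsmodels.api as sm
from scipy import stats

rng = np.random.default_rng(3)
x = rng.uniform(1, 10, 200)
y = np.exp(0.3 * x + rng.normal(0, 0.5, 200))   # right-skewed outcome

raw = sm.OLS(y, sm.add_constant(x)).fit()
logged = sm.OLS(np.log(y), sm.add_constant(x)).fit()

# Shapiro-Wilk on residuals: a small p suggests the normality assumption fails
print("raw residuals   :", stats.shapiro(raw.resid).pvalue)
print("logged residuals:", stats.shapiro(logged.resid).pvalue)
```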
Bayesian and frequentist statistical frameworks: Most current clinical research reporting is based on the frequentist approach, with hypothesis testing, p values, and confidence intervals. The frequentist approach assumes that the acquired data are random, obtained by random sampling or through randomized experiments, and subject to random error, and that the distribution of the data (its point estimate and confidence interval) supports inference about a true parameter in the real population. The major conceptual difference in Bayesian statistics is that the parameter (i.e., the studied variable in the population) is treated as random and the acquired data as real (true or fixed), so the Bayesian approach provides a probability interval for the parameter. The parameter is considered random because it can vary and be affected by prior beliefs, experience, or evidence of plausibility. In the Bayesian approach, this prior belief or available knowledge is quantified as a probability distribution and combined with the acquired data to obtain the result (i.e., the posterior distribution). This uses the mathematics of Bayes’ theorem to “invert” conditional probabilities.
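The beta-binomial model is the simplest concrete instance of this prior-to-posterior updating; the prior and trial counts below are invented for illustration:

```python
# Beta-binomial updating: a prior belief about a response rate is combined
# with observed data to give a posterior distribution (hypothetical numbers).
from scipy import stats

prior_a, prior_b = 4, 16          # prior belief: response rate around 20%
successes, failures = 12, 28      # newly observed data (30% observed rate)

posterior = stats.beta(prior_a + successes, prior_b + failures)

print(f"posterior mean: {posterior.mean():.3f}")
lo, hi = posterior.ppf([0.025, 0.975])
print(f"95% credible interval: {lo:.3f} to {hi:.3f}")
# Unlike a frequentist confidence interval, this is a direct probability
# statement about the parameter, given the data and the prior.
```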
The goal of research reporting is to present findings succinctly and in a timely manner via conference proceedings or journal publication. Concise and explicit language, with all the details necessary to enable replication and judgment of the study’s applicability, is the guiding principle in reporting clinical studies.
Writing for Reporting
Medical writing is very much a technical chore that accommodates little artistic expression. Research reporting in medicine and the health sciences emphasizes clear and standardized presentation, eschewing the adjectives and adverbs used extensively in popular literature. Regularly reviewing published journal articles can familiarize authors with proper reporting styles and help enhance writing skills. Authors should adopt standard, concise, and appropriate rhetoric for the intended audience, including journal editors and reviewers, although judgments about proper language remain somewhat subjective. Each publication may have its own submission requirements, and the technical requirements for formatting an article are usually set out in the author or submission guidelines of the target journal.
Research reports for publication often contain title, abstract, introduction, methods, results, discussion, and conclusions sections, and authors may be tempted to write each section in sequence. However, best practice is to write the abstract and title last. Ideas that pertain to other sections often come to mind while writing a given section, so careful note-taking is encouraged. One effective approach is to organize and write the results section first, followed by the discussion and conclusions; once these are drafted, write the introduction, the abstract, and the title. Regardless of the sequence of writing, the author should begin with a clear and relevant research question to guide the statistical analyses, interpretation of results, and discussion. The study findings can motivate the author through the writing process, and the conclusions can help the author draft a focused introduction.
Writing for Publication
Specific recommendations on effective medical writing and table generation are available [ 32 ]. One such resource is Effective Medical Writing: The Write Way to Get Published, an updated collection of medical writing articles previously published in the Singapore Medical Journal [ 33 ]. The British Medical Journal’s Statistics Notes series also elucidates common and important statistical concepts and usages in clinical studies. Writing guides are also available from individual professional societies, journals, and publishers, such as the Chest (American College of Chest Physicians) medical writing tips, the PLoS reporting guidelines collection, Springer’s Journal Author Academy, and SAGE’s Research Methods [ 34 - 37 ]. Standardized research reporting guidelines often come in the form of checklists and flow diagrams; Table 6 presents a list of reporting guidelines, and a full compilation is available at the EQUATOR (Enhancing the QUAlity and Transparency Of health Research) Network website [ 38 ], which aims to improve the reliability and value of the medical literature by promoting transparent and accurate reporting of research studies. Publication of the trial protocol in a publicly available database is almost compulsory for publication of the full report in many journals.
Table 6. Examples of reporting guidelines and checklists.
Graphics and Tables
Graphics and tables should emphasize salient features of the underlying data and coherently summarize large quantities of information. Although graphics provide a break from dense prose, these illustrations must be scientifically informative, not decorative. Titles for graphics and tables should be clear and informative and should state the sample size; font weight and formatting should be minimal and used only to distinguish headings from data entries or to highlight certain results. Numerical results should be given to a consistent number of decimal places, with no more than four for P values. Most journals prefer cell-delineated tables created with the table function of word processing or spreadsheet programs, and some require specific formatting, such as the absence or presence of intermediate horizontal lines between cells.
Decisions of authorship are both sensitive and important and should be made at an early stage by the study’s stakeholders. Guidelines and journals’ instructions to authors abound with authorship qualifications. The guideline on authorship by the International Committee of Medical Journal Editors is widely known and provides a standard used by many medical and clinical journals [ 39 ]. Generally, authors are those who have made major contributions to the design, conduct, and analysis of the study, and who provided critical readings of the manuscript (if not involved directly in manuscript writing).
Picking a target journal for submission
Once a report has been written and revised, the authors should select a relevant target journal for submission. Authors should avoid predatory journals: publications that do not aim to advance science and disseminate quality research but instead focus on commercial gain in medical and clinical publishing. Two useful resources during journal selection are Think-Check-Submit and the defunct Beall’s List of Predatory Publishers and Journals (now archived and maintained by an anonymous third party) [ 40 , 41 ]. Alternatively, reputable journal indexes such as Thomson Reuters Journal Citation Reports, SCOPUS, MEDLINE, PubMed, EMBASE, and EBSCO Publishing’s electronic databases are good starting points for the search for an appropriate target journal. Authors should review the journals’ names, aims and scope, and recently published articles to determine the kind of research each journal accepts for publication. Open-access journals almost always charge article publication fees, whereas subscription-based journals tend to publish without author fees, relying instead on subscription or access fees to the full text of published articles.
Conclusions
Conducting valid clinical research requires consideration of the theoretical study design, the data collection design, and the statistical analysis design. Proper implementation of the study design and quality control during data collection ensure high-quality data and can mitigate bias and confounding during statistical analysis and data interpretation. Clear, effective reporting facilitates dissemination, appreciation, and adoption, and allows researchers to effect real-world change in clinical practices and care models. Neutral findings, or the absence of findings, in a clinical study are as important as positive or negative findings. Valid studies, even when they report an absence of expected results, still inform the scientific community about the nature of a treatment or intervention, and this contributes to future research, systematic reviews, and meta-analyses. Reporting a study adequately and comprehensively is important for the accuracy, transparency, and reproducibility of the scientific work, as well as for informing readers.
Acknowledgments
The author would like to thank Universiti Putra Malaysia and the Ministry of Higher Education, Malaysia for their support in sponsoring the Ph.D. study and living allowances for Boon-How Chew.
The materials presented in this paper are being organized by the author into a book.
- 1. Ioannidis JPA. Why most published research findings are false. PLoS Med. 2005;2:e124. doi:10.1371/journal.pmed.0020124.
- 2. Ioannidis JPA. How to make more published research true. PLoS Med. 2014;11:e1001747. doi:10.1371/journal.pmed.1001747.
- 3. Chalmers I, Glasziou P. Avoidable waste in the production and reporting of research evidence. Lancet. 2009;374:86–89. doi:10.1016/S0140-6736(09)60329-9.
- 4. Charatan F. The truth about the drug companies: how they deceive us and what to do about it. BMJ. 2004:862.
- 5. Altman DG. The scandal of poor medical research. BMJ. 1994;308:283–284. doi:10.1136/bmj.308.6924.283.
- 6. Smith R. Their lordships on medical research. BMJ. 1995;310:1552. doi:10.1136/bmj.310.6994.1552.
- 7. Grobbee DE, Hoes AW. Clinical Epidemiology: Principles, Methods, and Applications for Clinical Research. Jones & Bartlett Learning; 2014.
- 8. Chalmers I, Bracken MB, Djulbegovic B, et al. How to increase value and reduce waste when research priorities are set. Lancet. 2014;383:156–165. doi:10.1016/S0140-6736(13)62229-1.
- 9. Clarke M. Doing new research? Don't forget the old. PLoS Med. 2004;1:e35. doi:10.1371/journal.pmed.0010035.
- 10. Roberts I, Ker K. How systematic reviews cause research waste. Lancet. 2015;386:1536. doi:10.1016/S0140-6736(15)00489-4.
- 11. Bordage G. Conceptual frameworks to illuminate and magnify. Med Educ. 2009;43:312–319. doi:10.1111/j.1365-2923.2009.03295.x.
- 12. O'Brien BC, Ruddick VJ, Young JQ. Generating research questions appropriate for qualitative studies in health professions education. Acad Med. 2016;91:16. doi:10.1097/ACM.0000000000001438.
- 13. Greenhalgh T. How to Read a Paper: The Basics of Evidence-Based Medicine. London: BMJ Books; 2014.
- 14. Guyatt G, Drummond R, Meade M, Cook D. The Evidence-Based Medicine Working Group Users' Guides to the Medical Literature. Chicago: McGraw Hill; 2008.
- 15. Dine CJ, Shea JA, Kogan JR. Generating good research questions in health professions education. Acad Med. 2016;91:8. doi:10.1097/ACM.0000000000001413.
- 16. Rothman KJ. Epidemiology: An Introduction. Oxford: Oxford University Press; 2012.
- 17. Florey CD. Sample size for beginners. BMJ. 1993;306:1181–1184. doi:10.1136/bmj.306.6886.1181.
- 18. Campbell MJ, Julious SA, Altman DG. Estimating sample sizes for binary, ordered categorical, and continuous outcomes in two group comparisons. BMJ. 1995;311:1145–1148. doi:10.1136/bmj.311.7013.1145.
- 19. Kraemer HC, Mintz J, Noda A, Tinklenberg J, Yesavage JA. Caution regarding the use of pilot studies to guide power calculations for study proposals. Arch Gen Psychiatry. 2006;63:484–489. doi:10.1001/archpsyc.63.5.484.
- 20. Malmivaara A. Benchmarking controlled trial: a novel concept covering all observational effectiveness studies. Ann Med. 2015;47:332–340. doi:10.3109/07853890.2015.1027255.
- 21. Glasgow RE, Vogt TM, Boles SM. Evaluating the public health impact of health promotion interventions: the RE-AIM framework. Am J Public Health. 1999;89:1322–1327. doi:10.2105/ajph.89.9.1322.
- 22. Patsopoulos NA. A pragmatic view on pragmatic trials. Dialogues Clin Neurosci. 2011;13:217–224. doi:10.31887/DCNS.2011.13.2/npatsopoulos.
- 23. Rose S, van der Laan MJ. Why match? Investigating matched case-control study designs with causal effect estimation. Int J Biostat. 2009;5:1. doi:10.2202/1557-4679.1127.
- 24. Pearce N. Analysis of matched case-control studies. BMJ. 2016;352:i969. doi:10.1136/bmj.i969.
- 25. Stürmer T, Brenner H. Degree of matching and gain in power and efficiency in case-control studies. Epidemiology. 2001;12:101–108. doi:10.1097/00001648-200101000-00017.
- 26. Friedlander Y, Merom DL, Kark JD. A comparison of different matching designs in case-control studies: an empirical example using continuous exposures, continuous confounders and incidence of myocardial infarction. Stat Med. 1993;12:993–1004. doi:10.1002/sim.4780121101.
- 27. de Graaf MA, Jager KJ, Zoccali C, Dekker FW. Matching, an appealing method to avoid confounding? Nephron Clin Pract. 2011;118:315–318. doi:10.1159/000323136.
- 28. Costanza MC. Matching. Prev Med. 1995;24:425–433. doi:10.1006/pmed.1995.1069.
- 29. Haynes RB, Sackett DL, Guyatt GH, Tugwell P. Clinical Epidemiology: How to Do Clinical Practice Research. Philadelphia: Lippincott Williams & Wilkins; 2006.
- 30. Petrie A. Lecture Notes on Medical Statistics. Chicago: Year Book Medical Publishers; 1988.
- 31. Kirkwood B, Sterne J. Essential Medical Statistics. Hoboken: Wiley-Blackwell; 2003.
- 32. Hall GM. How to Write a Paper. Hoboken: John Wiley & Sons; 2012.
- 33. Peh WCG, Ng KH. Effective Medical Writing: The Write Way To Get Published. University of Malaya Press; 2010.
- 34. CHEST journal medical writing tips. http://journal.publications.chestnet.org/collection.aspx. Accessed January 2019.
- 35. PLOS Collections: article collections published by the Public Library of Science. http://www.ploscollections.org/article/browse/issue/info. Accessed January 2019.
- 36. Springer Journal Author Academy. http://www.springer.com/gp/authors-editors/journal-author/journal-author-academy. Accessed January 2019.
- 37. SAGE Research Methods. http://srmo.sagepub.com/. Accessed January 2019.
- 38. EQUATOR Network: enhancing the quality and transparency of health research. http://www.equator-network.org/. Accessed January 2019.
- 39. International Committee of Medical Journal Editors. http://www.icmje.org/. Accessed January 2019.
- 40. Think-Check-Submit. https://thinkchecksubmit.org/. Accessed January 2019.
- 41. Beall's list of predatory journals and publishers (archived). https://beallslist.weebly.com/. Accessed January 2019.
Conceptualising natural and quasi experiments in public health

Frank de Vocht (ORCID: orcid.org/0000-0003-3631-627X), Srinivasa Vittal Katikireddi, Cheryl McQuire, Kate Tilling, Matthew Hickman & Peter Craig

BMC Medical Research Methodology, volume 21, article number 32 (2021). Technical advance; open access. Published: 11 February 2021.
Natural or quasi experiments are appealing for public health research because they enable the evaluation of events or interventions that are difficult or impossible to manipulate experimentally, such as many policy and health system reforms. However, there remains ambiguity in the literature about their definition and how they differ from randomized controlled experiments and from other observational designs. We conceptualise natural experiments in the context of public health evaluations and align the study design to the Target Trial Framework.
A literature search was conducted, and key methodological papers were used to develop this work. Peer-reviewed papers were supplemented by grey literature.
Natural experiment studies (NES) combine features of experiments and non-experiments. They differ from planned experiments, such as randomized controlled trials, in that exposure allocation is not controlled by researchers, and they differ from other observational designs in that they evaluate the impact of an event or process that leads to differences in exposure. As a result they are, in theory, less susceptible to bias than other observational study designs. Importantly, causal inference relies heavily on the assumption that exposure allocation can be considered ‘as-if randomized’. The target trial framework provides a systematic basis for evaluating this assumption and the other design elements that underpin the causal claims that can be made from NES.
Conclusions
NES should be considered a type of study design rather than a set of tools for analyses of non-randomized interventions. Alignment of NES to the Target Trial framework will clarify the strength of evidence underpinning claims about the effectiveness of public health interventions.
When designing a study to estimate the causal effect of an intervention, the experiment, particularly the randomised controlled trial (RCT), is generally considered the design least susceptible to bias. A defining feature of the experiment is that the researcher controls the assignment of the treatment or exposure. If properly conducted, random assignment balances unmeasured confounders in expectation between the intervention and control groups. In many evaluations of public health interventions, however, it is not possible to conduct randomised experiments, and standard observational epidemiological study designs, which are known to be susceptible to unmeasured confounding, have traditionally been used instead.
Natural experimental studies (NES) have become popular as an alternative evaluation design in public health research, as they have distinct benefits over traditional designs [ 1 ]. In NES, although the allocation and dosage of treatment or exposure are not under the control of the researcher, they are expected to be unrelated to other factors that cause the outcome of interest [ 2 , 3 , 4 , 5 ]. Such studies can provide strong causal information in complex real-world situations, and can generate effect sizes close to the causal estimates from RCTs [ 6 , 7 , 8 ]. The term natural experiment study is sometimes used synonymously with ‘quasi-experiment’, a much broader term that can also refer to researcher-led but non-randomised experiments. In this paper we argue for a clearer conceptualisation of natural experiment studies in public health research, and present a framework to improve their design and reporting and to facilitate assessment of causal claims.
Natural and quasi-experiments have a long history of use for evaluations of public health interventions. One of the earliest and best-known examples is the case of ‘Dr John Snow and the Broad Street pump’ [ 9 ]. In this study, cholera deaths were significantly lower among residents served by the Lambeth water company, which had moved its intake pipe to an upstream location of the Thames following an earlier outbreak, compared to those served by the Southwark and Vauxhall water company, who did not move their intake pipe. Since houses in the study area were serviced by either company in an essentially random manner, this natural experiment provided strong evidence that cholera was transmitted through water [ 10 ].
Natural and quasi experiments
Natural and quasi experiments are appealing because they enable the evaluation of changes to a system that are difficult or impossible to manipulate experimentally. These include, for example, large events, pandemics and policy changes [ 7 , 11 ]. They also allow for retrospective evaluation when the opportunity for a trial has passed [ 12 ]. They offer benefits over standard observational studies because they exploit variation in exposure that arises from an exogenous (i.e. not caused by other factors in the analytic model [ 1 ]) event or intervention. This aligns them to the ‘do-operator’ in the work of Pearl [ 13 ]. Quasi experiments (QES) and NES thus combine features of experiments (exogenous exposure) and non-experiments (observations without a researcher-controlled intervention). As a result, they are generally less susceptible to confounding than many other observational study designs [ 14 ]. However, a common critique of QES and NES is that because the processes producing variation in exposure are outside the control of the research team, there is uncertainty as to whether confounding has been sufficiently minimized or avoided [ 7 ]. Consider, for example, a QES of the impact on calories purchased of a fast food chain’s voluntary decision to label its menus with calorie information [ 15 ]: unmeasured differences between the populations that visit that particular chain and those that visit other fast-food outlets could lead to residual confounding.
A distinction is sometimes made between QES and NES. The term ‘natural experiment’ has traditionally referred to the occurrence of an event with a natural cause, a ‘force of nature’ (Fig. 1 a) [ 1 ]. These make for some of the most compelling studies of causation from non-randomised experiments. For example, the Canterbury earthquakes in 2010–2011 have been used to study the causal impact of such disasters because about half of an established birth cohort lived in the affected area, with the remainder of the cohort living elsewhere [ 16 ]. More recently, the term ‘natural’ has been understood more broadly to mean any event that did not involve the deliberate manipulation of exposure for research purposes (for example, a policy change), even if human agency was involved [ 17 ]. In QES, by contrast with natural experiments, the research team may be able to influence exposure allocation even if the event or exposure itself is not under their full control, for example in a phased roll-out of a policy [ 18 ]. A well-known example of a natural experiment is the “Dutch Hunger Winter”, summarised by Lumey et al. [ 19 ]. During this period in the Second World War the German authorities blocked all food supplies to the occupied West of the Netherlands, which resulted in widespread starvation. Food supplies were restored immediately after the country was liberated, so the exposure was sharply defined by time as well as place. Because there was sufficient food in the occupied and liberated areas of the Netherlands before and after the Hunger Winter, exposure to famine was determined by an individual’s time and place (of birth) only. Similar examples of such ‘political’ natural experiment studies are the study of the impact of China’s Great Famine [ 20 ] and of the ‘special period’ in Cuba’s history following the collapse of the Soviet Union and the imposition of a US blockade [ 21 ]. NES that evaluate an event which did not involve the deliberate manipulation of an exposure but involved human agency, such as the impact of a new policy, are the mainstay of ‘natural experimental research’ in public health, and the term NES has become increasingly popular for any quasi-experimental design (although it has not completely replaced the older term).
Different conceptualisations of natural and quasi experiments within wider evaluation frameworks
Dunning takes the distinction of a NES further. He defines a NES as a QES where knowledge about the exposure allocation process provides a strong argument that allocation, although not deliberately manipulated by the researcher, is essentially random. This concept is referred to as ‘as-if randomization’ (Fig. 1 b) [ 4 , 8 , 10 ]. Under this definition, NES differ from QES in which the allocation of exposure, whether partly controlled by the researcher or not, does not clearly resemble a random process.
A third distinction between QES and NES has been made that argues that NES describe the study of unplanned events whereas QES describe evaluations of events that are planned (but not controlled by the researcher), such as policies or programmes specifically aimed at influencing an outcome (Fig. 1 c) [ 17 ]. In practice however, the distinction between these can be ambiguous.
When the assignment of exposure is not controlled by the researcher, with rare exceptions (for example lottery-system [ 22 ] or military draft [ 23 ] allocations), it is typically very difficult to prove that true (as-if) randomization occurred. Because of the ambiguity of ‘as-if randomization’ and the fact that the tools to assess this are the same as those used for assessment of internal validity in any observational study [ 12 ], the UK Medical Research Council (MRC) guidance advocates a broader conceptualisation of a NES. Under the MRC guidance, a NES is defined as any study that investigates an event that is not under the control of the research team, and which divides a population into exposed and unexposed groups, or into groups with different levels of exposure (Fig. 1 d).
Here, while acknowledging the remaining ambiguity regarding the precise definition of a NES, in consideration of the definitions above [ 24 ], we argue that:
what distinguishes NES from RCTs is that allocation is not controlled by the researchers and;
what distinguishes NES from other observational designs is that they specifically evaluate the impact of a clearly defined event or process which result in differences in exposure between groups.
A detailed assessment of the allocation mechanism (which determines exposure status) is essential: if we can demonstrate that the allocation process approximates a randomization process, any causal claims from a NES will be substantially strengthened. The plausibility of the ‘as-if random’ assumption depends strongly on detailed knowledge of why and how individuals or groups were assigned to conditions and of how the assignment process was implemented [ 10 ]. This plausibility can be assessed quantitatively for observed factors using standard tools for assessing the internal validity of a study [ 12 ], and should ideally be supplemented by a qualitative description of the assignment process. In common with contemporary public health practice, we use the term ‘natural experiment study’, or NES, to refer to both NES and QES from here on.
Medline, Embase and Google Scholar were searched using terms including ‘quasi-experiment’, ‘natural experiment’, ‘policy evaluation’ and ‘public health evaluation’, and key methodological papers were used to develop this work. Peer-reviewed papers were supplemented by grey literature.
Part 1. Conceptualisations of natural experiments
An analytic approach
Some conceptualisations of NES place their emphasis on the analytic tools that are used to evaluate natural experiments [ 25 , 26 ]. In this conceptualisation NES are understood as being defined by the way in which they are analysed, rather than by their design. An array of different statistical methods is available to analyse natural experiments, including regression adjustments, propensity scores, difference-in-differences, interrupted time series, regression discontinuity, synthetic controls, and instrumental variables. Overviews including strengths and limitations of the different methods are provided in [ 12 , 27 ]. However, an important drawback of this conceptualisation is that it suggests that there is a distinct set of methods for the analysis of NES.
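As an illustration of one of these analytic tools, the sketch below implements a basic difference-in-differences estimate on simulated data, reading the policy effect off the group-by-period interaction term; the data-generating numbers are assumptions, not results from any study cited here:

```python
# Difference-in-differences on simulated data; the simulated policy raises
# the outcome in the exposed group after implementation (true effect = 2.0).
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 2000
df = pd.DataFrame({
    "exposed": rng.integers(0, 2, n),   # 1 = area where the policy applied
    "post": rng.integers(0, 2, n),      # 1 = period after the policy
})
df["y"] = (1.0 * df["exposed"] + 0.5 * df["post"]
           + 2.0 * df["exposed"] * df["post"] + rng.normal(0, 1, n))

# The coefficient on the interaction term is the DiD effect estimate
m = smf.ols("y ~ exposed * post", data=df).fit()
print(m.params["exposed:post"])
print(m.conf_int().loc["exposed:post"])
```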
A study design
The popularity of NES has resulted in some conceptual stretching, where the label is applied to research designs that only implausibly meet the definitional features of a NES [ 10 ]. For example, observational studies exploring variation in exposures (rather than the study of an event or change in exposure) have sometimes also been badged as NES. A more stringent classification of NES as a type of study design, rather than a collection of analytic tools, is important because it prevents attempts to incorrectly cover observational studies with a ‘glow of experimental legitimacy’ [ 10 ]. If the design rather than the statistical methodology defines a NES, an open-ended array of statistical tools can be used, not necessarily constrained to those mentioned above but also including newer methods such as synthetic controls. The choice of evaluation method should be based on what is most suitable for each particular study, and depends on knowledge about the event, the availability of data, and design elements such as the allocation process.
Dunning argues that it is the overall research design, rather than just the statistical methods, that compels conviction when making causal claims. He proposes an evaluation framework for NES along the three dimensions of (1) the plausibility of as-if randomization of treatment, (2) the credibility of causal and statistical models, and (3) the substantive relevance of the treatment. Here, the first dimension is considered key for distinguishing NES from other QES [ 4 ]. NES can be divided into those where a plausible case for ‘as-if random’ assignment can be made (which he defines as NES), and those where confounding from observed factors is directly adjusted for through statistical means. The validity of the latter (which Dunning defines as ‘other quasi experiments’, and we define as ‘weaker NES’) relies on the assumption that unmeasured confounding is absent [ 8 ], and is considered less credible in theory for making causal claims [ 4 ]. In this framework, the ‘as-if-randomised’ NES can be viewed as offering stronger causal evidence than other quasi-experiments. In principle, they offer an opportunity for direct estimates of effects (akin to RCTs) where control for confounding factors would not necessarily be required [ 4 ], rather than relying on adjustment to derive conditional effect estimates [ 10 ]. Of course, the latter may well reach valid and compelling conclusions as well, but causal claims suffer to a higher degree from the familiar threats of bias and unmeasured confounding.
Part 2. A target trial framework for natural experiment studies
In this section, we provide recommendations for evaluation of the ‘as if random’ assumption and provide a unifying Target Trial Framework for NES, which brings together key sets of criteria that can be used to appraise the strength of causal claims from NES and assist with study design and reporting.
In public health, there is considerable overlap between analytic and design-based uses of the term NES. Nevertheless, we argue that if we consider NES a type of study design, causal inference can be strengthened by clear appraisal of the likelihood of ‘as-if’ random allocation of exposure. This should be demonstrated by both empirical evidence and by knowledge and reasoning about the causal question and substantive domain under question [ 8 , 10 ]. Because the concept of ‘as-if’ randomization is difficult, if not impossible to prove, it should be thought of along a ‘continuum of plausibility’ [ 10 ]. Specifically, for claims of ‘as-if’ randomization to be plausible, it must be demonstrated that the variables that determine treatment assignment are exogenous. This means that they are: i) strongly correlated with treatment status but are not caused by the outcome of interest (i.e. no reverse causality) and ii) independent of any other (measured or unmeasured) causes of the outcome of interest [ 8 ].
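One common empirical probe of condition ii) for measured covariates is a standardized mean difference (SMD) comparison between exposure groups; the sketch below is a generic helper (the dataset and column names are hypothetical), and the |SMD| < 0.1 benchmark is a widely used convention rather than a rule from this paper:

```python
# Covariate balance check between exposure groups via standardized mean
# differences; values near zero are consistent with 'as-if' randomization
# for the observed covariates (it says nothing about unobserved ones).
import numpy as np
import pandas as pd

def smd(df: pd.DataFrame, group: str, covariate: str) -> float:
    a = df.loc[df[group] == 1, covariate]
    b = df.loc[df[group] == 0, covariate]
    pooled_sd = np.sqrt((a.var(ddof=1) + b.var(ddof=1)) / 2)
    return (a.mean() - b.mean()) / pooled_sd

# Hypothetical usage on an analytic dataset:
# df = pd.read_csv("nes_cohort.csv")
# for cov in ["age", "deprivation_score", "baseline_smoking"]:
#     print(cov, round(smd(df, "exposed", cov), 3))
```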
Given this additional layer of justification, especially with respect to the qualitative knowledge of the assignment process and domain knowledge from practitioners more broadly, we argue where feasible for the involvement of practitioners. This could, for example, be formalized through co-production in which members of the public and policy makers are involved in the development of the evaluation. If we appraise NES as a type of study design, which distinguish themselves from other designs because i) there is a particular change in exposure that is evaluated and ii) causal claims are supported by an argument of the plausibility of as-if randomization, then we guard against conflating NES with other observational designs [ 10 , 28 ].
There is a range of ways of dealing with the problems of selection on measured and unmeasured confounders in NES [ 8 , 10 ] which can be understood in terms of a ‘target trial’ we are trying to emulate, had randomization been possible [ 29 ]. The protocol of a target trial describes seven components common to RCTs (‘eligibility criteria’, ‘treatment strategies’, ‘assignment procedures’, ‘follow-up period’, ‘outcome’, ‘causal contrasts of interest’, and the ‘analysis plan’), and provides a systematic way of improving, reporting and appraising NES relative to a ‘gold standard’ (but often not feasible in practice) trial. In the design phase of a NES deviations from the target trial in each domain can be used to evaluate where improvements and where concessions will have to be made. This same approach can be used to appraise existing NES. The target trial framework also provides a structured way for reporting NES, which will facilitate evaluation of the strength of NES, improve consistency and completeness of reporting, and benefit evidence syntheses.
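Purely as an illustrative scaffold (not a template prescribed by the paper), the seven protocol components could be captured as a simple structure that a NES write-up fills in; all field values below are invented:

```python
# A reporting scaffold listing the seven target trial protocol components.
from dataclasses import dataclass, asdict

@dataclass
class TargetTrialProtocol:
    eligibility_criteria: str
    treatment_strategies: str
    assignment_procedures: str   # incl. plausibility of 'as-if' randomization
    follow_up_period: str
    outcome: str
    causal_contrast: str         # e.g. intention-to-treat analogue
    analysis_plan: str

protocol = TargetTrialProtocol(
    eligibility_criteria="Adults resident in the policy area, 2015-2019",
    treatment_strategies="Exposure to the policy vs no exposure",
    assignment_procedures="Phased roll-out by area; balance checks reported",
    follow_up_period="24 months from policy start",
    outcome="Hospital admissions per 1,000 person-years",
    causal_contrast="Effect of assignment (ITT analogue)",
    analysis_plan="Difference-in-differences with area fixed effects",
)
print(asdict(protocol))
```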
In Table 1 , we bring together elements of the Target Trial framework and conceptualisations of NES to derive a framework to describe the Target Trial for NES [ 12 ]. By encouraging researchers to address the questions in Table 1 , the framework provides a structured approach to the design, reporting and evaluation of NES across the seven target trial domains. Table 1 also provides recommendations to improve the strength of causal claims from NES, focussing primarily on sensitivity analyses to improve internal validity.
An illustrative example of a well-developed NES based on the criteria outlined in Table 1 is by Reeves et al. [ 39 ]. The NES evaluates the impact of the introduction of a National Minimum Wage on mental health. The study compared a clearly defined intervention group of recipients of a wage increase up to 110% of pre-intervention wage with clearly defined control groups of (1) people ineligible to the intervention because their wage at baseline was just above (100–110%) minimum wage and (2) people who were eligible, but whose companies did not comply and did not increase minimum wage. This study also included several sensitivity tests to strengthen causal arguments. We have aligned this study to the Target Trial framework in Additional file 1 .
The Target Trial Approach for NES (outlined in Table 1 ) provides a straightforward approach to improve, report, and appraise existing NES and to assist in the design of future studies. It focusses on structural design elements and goes beyond the use of quantitative tools alone to assess internal validity [ 12 ]. This work complements the ROBINS-I tool for assessing risk of bias in non-randomised studies of interventions, which similarly adopted the Target Trial framework [ 40 ]. Our approach focusses on the internal validity of a NES, with issues of construct and external validity being outside of the scope of this work (guidelines for these are provided in for example [ 41 ]). It should be acknowledged that less methodologically robust studies can still reach valid and compelling conclusions, even without resembling the notional target trial. However, we believe that drawing on the target trial framework helps highlight occasions when causal inference can be made more confidently.
Finally, the framework explicitly excludes observational studies that investigate the effects of changes in behaviour without an externally forced driver of those changes. For example, although a cohort study can in principle form the basis of a NES, the effect of a change of diet by some participants (compared with those who did not change their diet) has no external cause (i.e. it is not exogenous) and does not fall within the definition of an experiment [ 11 ]. Such studies are nonetheless likely to be more convincing than those which do not study within-person changes, and the statistical methods used may be similar to those of NES.
Despite their advantages, NES remain based on observational data and thus biases in assignment of the intervention can never be completely excluded (although for plausibly ‘as if randomised’ natural experiments these should be minimal). It is therefore important that a robust assessment of different potential sources of bias is reported. It has additionally been argued that sensitivity analyses are required to assess whether a pattern of small biases could explain away any ostensible effect of the intervention, because confidence intervals and statistical tests do not do this [ 14 ]. Recommendations that would improve the confidence with which we can make causal claims from NES, derived from work by Rosenbaum [ 14 ], have been outlined in Table 1 . Although sensitivity analyses can place plausible limits on the size of the effects of hidden biases, because such analyses are susceptible to assumptions about the maximum size of omitted biases, they cannot completely rule out residual bias [ 34 ]. Of importance for the strength of causal claims therefore, is the triangulation of NES with other evaluations using different data or study designs susceptible to different sources of bias [ 5 , 42 ].
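One concrete example of such a sensitivity analysis, not named in the text but in the spirit of the Rosenbaum-style bias analyses it recommends, is the E-value of VanderWeele and Ding: the minimum strength of association, on the risk ratio scale, that an unmeasured confounder would need with both exposure and outcome to fully explain away an observed association:

```python
# E-value (VanderWeele & Ding) for an observed risk ratio.
import math

def e_value(rr: float) -> float:
    if rr < 1:                      # for protective effects, invert first
        rr = 1 / rr
    return rr + math.sqrt(rr * (rr - 1))

# An observed RR of 1.8 could only be explained away by an unmeasured
# confounder associated with both exposure and outcome at RR >= 3.0.
print(round(e_value(1.8), 2))
```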
None of the recommendations outlined in Table 1 will by itself eliminate bias in a NES, but nor is it necessary to implement all of them to make a causal claim with some confidence. Instead, a continuum of confidence in the causal claims, based on the study design and the data, is a more appropriate and practical approach [43]. Each sensitivity analysis aims to reduce ambiguity about a particular potential bias or biases, so a combination of selected sensitivity analyses can strengthen causal claims [14]. We would generally, but not strictly, consider a well-conducted RCT the design in which we are most confident about such claims, followed by natural experiments and then other observational studies; this would be an extension of the Grading of Recommendations Assessment, Development and Evaluation (GRADE) framework [44]. GRADE provides a system for rating the quality (or certainty) of a body of evidence and grading the strength of recommendations for use in systematic reviews, health technology assessments (HTAs) and clinical practice guidelines. It typically distinguishes only between trials and observational studies when making these judgements (note, however, that recent guidance does not make this explicit distinction when using ROBINS-I [45]). Given the increased contribution of NES in public health, especially those based on routine data [37], the specific inclusion of NES in this system might improve the rating of the evidence from these study designs.
Our recommendations are of particular importance for ensuring rigour in (public) health research, where natural experiments have become increasingly popular for a variety of reasons, including the availability of large routinely collected datasets [37]. Such datasets invite the discovery of natural experiments even where the data are not particularly suited to this design, but they also enable many of the sensitivity analyses to be conducted within the same dataset or through linkage to other routine datasets.
Finally, alignment to the Target Trial framework also links natural experiment studies directly to other measures of trial validity, including pre-registration, reporting checklists and evaluation through risk-of-bias tools [40]. This aligns with previous recommendations to use established reporting guidelines such as STROBE, TREND [12] and TIDieR-PHP [46] for the reporting of natural experiment studies. These reporting guidelines could be customised to specific research areas (for example, as developed for a systematic review of quasi-experimental studies of prenatal alcohol use and birthweight and neurodevelopment [47]).
We provide a conceptualisation of natural experiment studies as they apply to public health. We argue for the appreciation of natural experiments as a type of study design, rather than as a set of tools for the analysis of non-randomised interventions. Although some ambiguity about the strength of causal claims will always remain, there are clear benefits to harnessing NES rather than relying purely on observational studies, including the fact that NES can be based on routinely available data and can generate timely evidence of real-world relevance. The inclusion of a discussion of the plausibility of as-if randomisation of exposure allocation will provide further confidence in the strength of causal claims.
Aligning NES to the Target Trial framework will guard against conceptual stretching of these evaluations and ensure that causal claims about whether public health interventions ‘work’ are based on evidence that is considered ‘good enough’ to inform public health action within a ‘practice-based evidence’ framework. This framework describes how evaluations can help reduce critical uncertainties and adjust the compass bearing of existing policy (in contrast to the ‘evidence-based practice’ framework, in which RCTs are used to generate ‘definitive’ evidence for particular interventions) [48].
Availability of data and materials
Data sharing is not applicable to this article as no datasets were generated or analysed during the current study.
Abbreviations
RCT: Randomised Controlled Trial
NES: Natural Experiment Study
SUTVA: Stable Unit Treatment Value Assumption
ITT: Intention-To-Treat
References
Shadish WR, Cook TD, Campbell DT. Experimental and quasi-experimental designs for generalized causal inference. 2nd ed. Belmont: Wadsworth, Cengage Learning; 2002.
King G, Keohane RO, Verba S. The importance of research design in political science. Am Polit Sci Rev. 1995;89:475–81.
Meyer BD. Natural and quasi-experiments in economics. J Bus Econ Stat. 1995;13:151–61.
Dunning T. Natural experiments in the social sciences. A design-based approach. 6th edition. Cambridge: Cambridge University Press; 2012.
Craig P, Cooper C, Gunnell D, Haw S, Lawson K, Macintyre S, et al. Using natural experiments to evaluate population health interventions: new medical research council guidance. J Epidemiol Community Health. 2012;66:1182–6.
Cook TD, Shadish WR, Wong VC. Three conditions under which experiments and observational studies produce comparable causal estimates: new findings from within-study comparisons. J Policy Anal Manag. 2008;27:724–50.
Bärnighausen T, Røttingen JA, Rockers P, Shemilt I, Tugwell P. Quasi-experimental study designs series—paper 1: introduction: two historical lineages. J Clin Epidemiol. 2017;89:4–11.
Waddington H, Aloe AM, Becker BJ, Djimeu EW, Hombrados JG, Tugwell P, et al. Quasi-experimental study designs series—paper 6: risk of bias assessment. J Clin Epidemiol. 2017;89:43–52.
Saeed S, Moodie EEM, Strumpf EC, Klein MB. Evaluating the impact of health policies: using a difference-in-differences approach. Int J Public Health. 2019;64:637–42.
Dunning T. Improving causal inference: strengths and limitations of natural experiments. Polit Res Q. 2008;61:282–93.
Bärnighausen T, Tugwell P, Røttingen JA, Shemilt I, Rockers P, Geldsetzer P, et al. Quasi-experimental study designs series—paper 4: uses and value. J Clin Epidemiol. 2017;89:21–9.
Craig P, Katikireddi SV, Leyland A, Popham F. Natural experiments: an overview of methods, approaches, and contributions to public health intervention research. Annu Rev Public Health. 2017;38:39–56.
Pearl J, Mackenzie D. The book of why: the new science of cause and effect. London: Allen Lane; 2018.
Rosenbaum PR. How to see more in observational studies: some new quasi-experimental devices. Annu Rev Stat Its Appl. 2015;2:21–48.
Petimar J, Ramirez M, Rifas-Shiman SL, Linakis S, Mullen J, Roberto CA, et al. Evaluation of the impact of calorie labeling on McDonald’s restaurant menus: a natural experiment. Int J Behav Nutr Phys Act. 2019;16. Article no: 99.
Fergusson DM, Horwood LJ, Boden JM, Mulder RT. Impact of a major disaster on the mental health of a well-studied cohort. JAMA Psychiatry. 2014;71:1025–31.
Remler DK, Van Ryzin GG. Natural and quasi experiments. In: Research methods in practice: strategies for description and causation. 2nd ed. Thousand Oaks: SAGE Publication Inc.; 2014. p. 467–500.
Cook PA, Hargreaves SC, Burns EJ, De Vocht F, Parrott S, Coffey M, et al. Communities in charge of alcohol (CICA): a protocol for a stepped-wedge randomised control trial of an alcohol health champions programme. BMC Public Health. 2018;18. Article no: 522.
Lumey LH, Stein AD, Kahn HS, Van der Pal-de Bruin KM, Blauw GJ, Zybert PA, et al. Cohort profile: the Dutch hunger winter families study. Int J Epidemiol. 2007;36:1196–204.
Meng X, Qian N. The long-term consequences of famine on survivors: evidence from a unique natural experiment using China’s Great Famine. NBER Working Paper Series; 2011.
Franco M, Bilal U, Orduñez P, Benet M, Morejón A, Caballero B, et al. Population-wide weight loss and regain in relation to diabetes burden and cardiovascular mortality in Cuba 1980-2010: repeated cross sectional surveys and ecological comparison of secular trends. BMJ. 2013;346:f1515.
Angrist J, Bettinger E, Bloom E, King E, Kremer M. Vouchers for private schooling in Colombia: evidence from a randomized natural experiment. Am Econ Rev. 2002;92:1535–58.
Angrist JD. Lifetime earnings and the Vietnam era draft lottery: evidence from social security administrative records. Am Econ Rev. 1990;80:313–36.
Dawson A, Sim J. The nature and ethics of natural experiments. J Med Ethics. 2015;41:848–53.
Bärnighausen T, Oldenburg C, Tugwell P, Bommer C, Ebert C, Barreto M, et al. Quasi-experimental study designs series—paper 7: assessing the assumptions. J Clin Epidemiol. 2017;89:53-66.
Tugwell P, Knottnerus JA, McGowan J, Tricco A. Big-5 Quasi-Experimental designs. J Clin Epidemiol. 2017;89:1–3.
Reeves BC, Wells GA, Waddington H. Quasi-experimental study designs series—paper 5: a checklist for classifying studies evaluating the effects on health interventions—a taxonomy without labels. J Clin Epidemiol. 2017;89:30–42.
Rubin DB. For objective causal inference, design trumps analysis. Ann Appl Stat. 2008;2:808–40.
Hernán MA, Robins JM. Using big data to emulate a target trial when a randomized trial is not available. Am J Epidemiol. 2016;183:758–64.
Benjamin-Chung J, Arnold BF, Berger D, Luby SP, Miguel E, Colford JM, et al. Spillover effects in epidemiology: parameters, study designs and methodological considerations. Int J Epidemiol. 2018;47:332–47.
Munafò MR, Tilling K, Taylor AE, Evans DM, Smith GD. Collider scope: when selection bias can substantially influence observed associations. Int J Epidemiol. 2018;47:226–35.
Schwartz S, Gatto NM, Campbell UB. Extending the sufficient component cause model to describe the stable unit treatment value assumption (SUTVA). Epidemiol Perspect Innov. 2012;9:3.
Cawley J, Thow AM, Wen K, Frisvold D. The economics of taxes on sugar-sweetened beverages: a review of the effects on prices, sales, cross-border shopping, and consumption. Annu Rev Nutr. 2019;39:317–38.
Reichardt CS. Nonequivalent group designs. In: Quasi-experimentation: a guide to design and analysis. 1st ed. New York: The Guilford Press; 2019. p. 112–62.
Denzin N. Sociological methods: a sourcebook. 5th ed. New York: Routledge; 2006.
Matthay EC, Hagan E, Gottlieb LM, Tan ML, Vlahov D, Adler NE, et al. Alternative causal inference methods in population health research: evaluating tradeoffs and triangulating evidence. SSM Popul Health. 2020;10:100526.
Leatherdale ST. Natural experiment methodology for research: a review of how different methods can support real-world research. Int J Soc Res Methodol. 2019;22:19–35.
Reichardt CS. Quasi-experimentation: a guide to design and analysis. 1st ed. New York: The Guilford Press; 2019.
Reeves A, McKee M, Mackenbach J, Whitehead M, Stuckler D. Introduction of a national minimum wage reduced depressive symptoms in low-wage workers: a quasi-natural experiment in the UK. Health Econ. 2017;26:639–55.
Sterne JA, Hernán MA, Reeves BC, Savović J, Berkman ND, Viswanathan M, et al. ROBINS-I: a tool for assessing risk of bias in non-randomised studies of interventions. BMJ. 2016;355:i4919.
Shadish WR, Cook TD, Campbell DT. Generalized Causal Inference: A Grounded Theory. In: Experimental and Quasi-Experimental Designs for Generalized Causal Inference. 2nd ed. Belmont: Wadsworth, Cengage Learning; 2002. p. 341–73.
Lawlor DA, Tilling K, Smith GD. Triangulation in aetiological epidemiology. Int J Epidemiol. 2016;45:1866–86.
Hernán MA. The C-word: scientific euphemisms do not improve causal inference from observational data. Am J Public Health. 2018;108:616–9.
Guyatt G, Oxman AD, Akl EA, Kunz R, Vist G, Brozek J, et al. GRADE guidelines: 1. Introduction - GRADE evidence profiles and summary of findings tables. J Clin Epidemiol. 2011;64:383–94.
Schünemann HJ, Cuello C, Akl EA, Mustafa RA, Meerpohl JJ, Thayer K, et al. GRADE guidelines: 18. How ROBINS-I and other tools to assess risk of bias in nonrandomized studies should be used to rate the certainty of a body of evidence. J Clin Epidemiol. 2019;111:105–14.
Campbell M, Katikireddi SV, Hoffmann T, Armstrong R, Waters E, Craig P. TIDieR-PHP: a reporting guideline for population health and policy interventions. BMJ. 2018;361:k1079.
Mamluk L, Jones T, Ijaz S, Edwards HB, Savović J, Leach V, et al. Evidence of detrimental effects of prenatal alcohol exposure on offspring birthweight and neurodevelopment from a systematic review of quasi-experimental studies. Int J Epidemiol. 2021;49(6):1972-95.
Ogilvie D, Adams J, Bauman A, Gregg EW, Panter J, Siegel KR, et al. Using natural experimental studies to guide public health action: turning the evidence-based medicine paradigm on its head. J Epidemiol Community Health. 2019;74:203–8.
Acknowledgements
This study is funded by the National Institute for Health Research (NIHR) School for Public Health Research (Grant Reference Number PD-SPH-2015). The views expressed are those of the author(s) and not necessarily those of the NIHR or the Department of Health and Social Care. The funder had no input into the writing of the manuscript or the decision to submit for publication. The NIHR School for Public Health Research is a partnership between the Universities of Sheffield, Bristol and Cambridge; Imperial College London; University College London; the London School of Hygiene & Tropical Medicine (LSHTM); LiLaC, a collaboration between the Universities of Liverpool and Lancaster; and Fuse, the Centre for Translational Research in Public Health, a collaboration between Newcastle, Durham, Northumbria, Sunderland and Teesside Universities. FdV is partly funded by the National Institute for Health Research Applied Research Collaboration West (NIHR ARC West) at University Hospitals Bristol NHS Foundation Trust. SVK and PC acknowledge funding from the Medical Research Council (MC_UU_12017/13) and the Scottish Government Chief Scientist Office (SPHSU13). SVK acknowledges funding from an NRS Senior Clinical Fellowship (SCAF/15/02). KT works in the MRC Integrative Epidemiology Unit, which is supported by the Medical Research Council (MRC) and the University of Bristol (MC_UU_00011/3).
Author information
Authors and Affiliations
Population Health Sciences, Bristol Medical School, University of Bristol, Canynge Hall, 39 Whatley Road, Bristol, BS8 2PS, UK
Frank de Vocht, Cheryl McQuire, Kate Tilling & Matthew Hickman
NIHR School for Public Health Research, Newcastle, UK
Frank de Vocht & Cheryl McQuire
NIHR Applied Research Collaboration West, Bristol, UK
Frank de Vocht
MRC/CSO Social and Public Health Sciences Unit, University of Glasgow, Glasgow, UK
Srinivasa Vittal Katikireddi & Peter Craig
MRC IEU, University of Bristol, Bristol, UK
Kate Tilling
Contributions
FdV conceived of the study. FdV, SVK, CMQ, KT, MH and PC interpreted the evidence and theory. FdV wrote the first version of the manuscript. SVK, CMQ, KT, MH and PC provided substantive revisions to subsequent versions. All authors have read and approved the manuscript. FdV, SVK, CMQ, KT, MH and PC agreed to be personally accountable for their own contributions and will ensure that questions related to the accuracy or integrity of any part of the work, even ones in which the author was not personally involved, are appropriately investigated, resolved, and the resolution documented in the literature.
Corresponding author
Correspondence to Frank de Vocht.
Ethics declarations
Ethics approval and consent to participate
Not applicable.
Consent for publication
Not applicable.
Competing interests
The authors declare that they have no competing interests.
Additional information
Publisher’s Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Supplementary Information
Additional file 1.
Online Supplementary Material, Table 1: the Target Trial for natural experiments applied to Reeves et al. [39]. Alignment of Reeves et al. (Introduction of a national minimum wage reduced depressive symptoms in low-wage workers: a quasi-natural experiment in the UK. Health Econ. 2017;26:639–55) to the Target Trial framework.
Rights and permissions
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/ . The Creative Commons Public Domain Dedication waiver ( http://creativecommons.org/publicdomain/zero/1.0/ ) applies to the data made available in this article, unless otherwise stated in a credit line to the data.
About this article
Cite this article
de Vocht, F., Katikireddi, S.V., McQuire, C. et al. Conceptualising natural and quasi experiments in public health. BMC Med Res Methodol 21, 32 (2021). https://doi.org/10.1186/s12874-021-01224-x
Received: 14 July 2020
Accepted: 28 January 2021
Published: 11 February 2021
DOI: https://doi.org/10.1186/s12874-021-01224-x
Keywords
- Public health
- Public health policy
- Natural experiments
- Quasi experiments
- Evaluations
Quasi-experimental study: comparative studies
How to use a quasi-experimental study to evaluate your digital health product.
Experimental and quasi-experimental studies can both be used to evaluate whether a digital health product achieves its aims. Randomised controlled trials are classed as experiments. They provide a high level of evidence for the relationship between cause (your digital product) and effect (the outcomes). There are particular things you must do to demonstrate cause and effect, such as randomising participants to groups. A quasi-experiment lacks at least one of these requirements; for example, you may be unable to randomly assign your participants to groups. However, quasi-experimental studies can still be used to evaluate how well your product is working.
The phrase ‘quasi-experimental’ often refers to the approach taken rather than a specific method. There are several designs of quasi-experimental studies.
What to use it for
A quasi-experimental study can help you to find out whether your digital product or service achieves its aims, so it can be useful when you have developed your product (summative evaluation). Quasi-experimental methods are often used in economic studies. You could also use them during development (formative or iterative evaluation) to find out how you can improve your product.
Benefits of quasi-experiments include:
- they can mimic an experiment and provide a high level of evidence without randomisation
- there are several designs to choose from that you can adapt depending on your context
- they can be used when there are practical or ethical reasons why participants can’t be randomised
Drawbacks of quasi-experiments include:
- you cannot rule out that other factors out of your control caused the results of your evaluation, although you can minimise this risk
- choosing an appropriate comparison group can be difficult
How to carry out a quasi-experimental study
There are 3 requirements for demonstrating cause and effect:
- randomisation – participants are randomly allocated to groups to make sure the groups are as similar to each other as possible, allowing comparison
- control – a control group is used to compare with the group receiving the product or intervention
- manipulation – the researcher manipulates aspects of what happens, such as assigning participants to different groups
These features help to ensure that it is your product that caused the outcomes you found. Without them, you cannot rule out that other influencing factors may have distorted your results and conclusions:
Confounding variables
Confounding variables are other variables that might influence the results. If participants in different groups systematically differ on these variables, the difference in outcomes between the groups may be due to a confounding variable rather than to the experimental manipulation. The only way to remove the influence of all confounding variables is randomisation: when we randomise, each variable will, on average, be balanced across the groups, even if we have not identified what these confounding variables are.
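As a rough illustration of this point (a simulation sketch, not part of the original guidance), the snippet below shows that random allocation balances even a confounder that was never measured:

```python
# Sketch: random allocation balances an unidentified confounder on average.
import numpy as np

rng = np.random.default_rng(0)
n = 2_000
confounder = rng.lognormal(mean=3.0, sigma=1.0, size=n)  # some unknown, skewed variable

in_treatment = rng.permutation(n) < n // 2   # randomise half of the participants
print("treatment group mean:", round(confounder[in_treatment].mean(), 2))
print("control group mean:  ", round(confounder[~in_treatment].mean(), 2))
# The two means agree closely; with self-selection they generally would not.
```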
Bias
Bias means any process that produces systematic errors in the study, for example errors in recruiting participants, collecting or analysing data, or drawing conclusions. Bias influences the results and conclusions of your study.
When you carry out a quasi-experimental study you should minimise biases and confounders. If you cannot randomise, you can increase the strength of your research design by:
- comparing your participants to an appropriate group that did not have access to your digital product
- measuring your outcomes before and after your product was introduced
Combining these strategies in different ways gives the types of quasi-experimental design described below.
Quasi-experimental designs with a comparison
One way to increase the strength of your results is by finding a comparison group that has similar attributes to your participants and then comparing the outcomes between the groups.
Because you have not randomly assigned participants, pre-existing differences between the people who had access to your product and those who did not may exist. These are called selection differences. It is important to choose your comparison appropriately to reduce this.
For example, if your digital product was introduced in one region, you could compare outcomes in another region. However, people in different regions may have different outcomes for other reasons (confounding variables). One region may be wealthier than another or have better access to alternative health services. The age profile may be different. You could consider what confounding variables might exist and pick a comparison region that has a similar profile.
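A minimal sketch of that selection step (region names and covariate values are invented for illustration) is to put candidate regions' profiles on a common scale and pick the closest match:

```python
# Sketch: choose the comparison region whose profile best matches the
# intervention region. All names and numbers are hypothetical.
import numpy as np

profiles = {                  # [median income (£k), % aged 65+, GPs per 10k people]
    "North": [24.0, 19.5, 6.1],   # region where the product launched
    "South": [31.5, 17.0, 7.4],
    "East":  [25.1, 20.2, 6.0],
    "West":  [28.3, 18.1, 6.8],
}
intervention = "North"

candidates = [r for r in profiles if r != intervention]
X = np.array([profiles[r] for r in candidates])
target = np.array(profiles[intervention])

scale = X.std(axis=0)                                   # common scale per covariate
distances = np.linalg.norm((X - target) / scale, axis=1)
print("best comparison region:", candidates[int(np.argmin(distances))])
```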
Quasi-experimental designs with a before-after assessment
In this design, you assess outcomes for participants both before and after your product is introduced, and then compare. This is another way to minimise the effects of not randomly assigning participants.
Potential differences between participants in your evaluation could still have an impact on the results, but assessing participants before they used your product helps to decrease the influence of confounders and biases.
Be aware of additional issues associated with observing participants over time, for example:
- testing effects – participants’ scores are influenced by them repeating the same tests
- regression towards the mean – if you select participants on the basis that they have high or low scores on some measure, their scores may become more moderate over time because their initial extreme score was just random chance
- background changes – for example, demand for a service may be increasing over time, putting stresses on the service and leading to poorer outcomes
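As a minimal sketch of the before-after comparison itself (simulated scores; a real evaluation would also need to address the issues listed above), a paired test compares the same participants at the two time points:

```python
# Sketch: one-group before-after comparison on simulated outcome scores.
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
before = rng.normal(60.0, 10.0, 200)          # scores before the product launch
after = before + rng.normal(3.0, 8.0, 200)    # the same participants afterwards

t, p = stats.ttest_rel(after, before)         # paired test: same people twice
print(f"mean change {np.mean(after - before):.2f}, t = {t:.2f}, p = {p:.4f}")
# A significant change is consistent with a product effect but does not prove
# it: testing effects or background trends could produce the same pattern.
```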
Time series designs
These quasi-experiments involve repeating data collection at many points in time before and after treatment.
There are a variety of designs that use time series:
- basic time series – assesses outcomes multiple times before and after your digital product is introduced
- control time series – introduces results from a comparison group
- turning the intervention on and off throughout the study to compare the effects
- interrupted time series – collects data at many points before and after an interruption, such as the introduction of your product
In the analysis, the patterns of change over time are compared.
Digital technology is particularly suitable for time series design because digital devices allow you to collect data automatically and frequently. Ecological momentary assessment can be used to collect data.
By including multiple before-and-after assessments, you may be able to minimise the problems of the weaker designs, such as the simple one-group before-after design described above. There are also different ways to increase the strength of your design, for example by introducing multiple baselines.
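The sketch below shows one common analysis for an interrupted time series, a segmented regression on simulated weekly data; it is illustrative only, and a real analysis would also need to consider autocorrelation and seasonality:

```python
# Sketch: segmented regression for an interrupted time series.
# Simulated weekly counts with a level jump and a slope change at launch.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
weeks = np.arange(52)
launch = 26
y = (100 + 0.3 * weeks                           # pre-existing trend
     + 8.0 * (weeks >= launch)                   # level change at launch
     + 0.5 * np.clip(weeks - launch, 0, None)    # slope change after launch
     + rng.normal(0, 3, weeks.size))             # noise

X = sm.add_constant(np.column_stack([
    weeks,                                # underlying trend
    (weeks >= launch).astype(float),      # step: product introduced
    np.clip(weeks - launch, 0, None),     # change in trend after introduction
]))
fit = sm.OLS(y, X).fit()
print(fit.params.round(2))   # roughly [100, 0.3, 8.0, 0.5]
```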
Quasi-experimental designs with comparison and before-after assessment
Including both a comparison group and a before-after assessment of the outcomes further increases the strength of your design. This gives you greater confidence that your results are caused by the digital product you introduced.
Remember that the lack of random assignment to the comparison groups, and the repeated measurements, still create some challenges with this design compared with a randomised experimental design.
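One common way to analyse this combined design is difference-in-differences, sketched below with hypothetical group means; the control group's change stands in for what would have happened to the treated group anyway (the 'parallel trends' assumption):

```python
# Sketch: a two-by-two difference-in-differences with hypothetical means.
means = {
    ("product", "before"): 54.0, ("product", "after"): 61.0,
    ("control", "before"): 55.0, ("control", "after"): 58.0,
}

change_product = means[("product", "after")] - means[("product", "before")]  # +7.0
change_control = means[("control", "after")] - means[("control", "before")]  # +3.0
print(f"difference-in-differences estimate: {change_product - change_control:+.1f}")
# +4.0: the product group's extra improvement beyond the shared background change.
```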
If you cannot use comparison or before-after assessment
If there is no appropriate comparison group and you cannot compare participants before and after your digital product was introduced, drawing any conclusions around cause and effect of your digital product will be challenging.
This type of quasi-experimental design is most susceptible to biases and confounders that may affect the results of your evaluation. Still, using a design with one group and only testing participants after they receive the intervention will give you some insights about how your product is performing and will give you valuable directions for designing a stronger evaluation plan.
Causal methods
Causal inference methods use statistical models to infer causal relationships from data that does not come from an experiment. They rely on identifying all important confounding variables, and on data for these variables being available for each individual. Read Pearl (2010), An introduction to causal inference for more information.
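A minimal sketch of the adjustment idea behind many of these methods (simulated data, one measured confounder) compares a naive group difference with a regression-adjusted estimate; in a real analysis, every important confounder would need to be identified and measured:

```python
# Sketch: adjusting for a measured confounder recovers the true effect.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(3)
n = 5_000
age = rng.normal(50.0, 10.0, n)                    # confounder: affects both
uses_app = (age + rng.normal(0.0, 10.0, n)) < 50   # younger people adopt the app
outcome = 2.0 * uses_app + 0.1 * age + rng.normal(0.0, 1.0, n)  # true effect = 2.0

naive = outcome[uses_app].mean() - outcome[~uses_app].mean()
X = sm.add_constant(np.column_stack([uses_app.astype(float), age]))
adjusted = sm.OLS(outcome, X).fit().params[1]      # coefficient on app use
print(f"naive: {naive:.2f}; adjusted for age: {adjusted:.2f}; truth: 2.00")
```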
Examples of quasi-experimental methods
Case-control study , interrupted time-series , N-of-1 , before-and-after study and ecological momentary assessment can be seen as examples of quasi-experimental methods.
More information and resources
Sage Research Methods (2010), Quasi-experimental design. This explores the threats to the validity of quasi-experimental studies that you want to look out for when designing your study.
Pearl (2010), An introduction to causal inference. Information about causal methods.
Examples of quasi-experimental studies in digital health
Faudjar and others (2020), Field testing of a digital health information system for primary health care: a quasi-experimental study from India. Researchers developed a comprehensive digital tool for primary care and used a quasi-experimental study to evaluate it by comparing 2 communities.
Mitchell and others (2020), Commercial app use linked with sustained physical activity in two Canadian provinces: a 12-month quasi-experimental study. This study assessed one group before and after they gained access to an app that gives incentives for engaging in physical activity.
Peyman and others (2018), Digital media-based health intervention on the promotion of women's physical activity: a quasi-experimental study. Researchers wanted to evaluate the impact of digital health on promoting physical activity in women. Eight active health centres were randomly allocated to intervention and control groups.
Designing and Conducting Experimental and Quasi-Experimental Research
You approach a stainless-steel wall, separated vertically along its middle where two halves meet. After looking to the left, you see two buttons on the wall to the right. You press the top button and it lights up. A soft tone sounds and the two halves of the wall slide apart to reveal a small room. You step into the room. Looking to the left, then to the right, you see a panel of more buttons. You know that you seek a room marked with the numbers 1-0-1-2, so you press the button marked "10." The halves slide shut and enclose you within the cubicle, which jolts upward. Soon, the soft tone sounds again. The door opens again. On the far wall, a sign silently proclaims, "10th floor."
You have engaged in a series of experiments. A ride in an elevator may not seem like an experiment, but it, and each step taken towards its ultimate outcome, are common examples of a search for a causal relationship, which is what experimentation is all about.
You started with the hypothesis that this is in fact an elevator. You proved that you were correct. You then hypothesized that the button to summon the elevator was on the left, which was incorrect, so then you hypothesized it was on the right, and you were correct. You hypothesized that pressing the button marked with the up arrow would not only bring an elevator to you, but that it would be an elevator heading in the up direction. You were right.
As this guide explains, the deliberate process of testing hypotheses and reaching conclusions is an extension of commonplace testing of cause and effect relationships.
Basic Concepts of Experimental and Quasi-Experimental Research
Discovering causal relationships is the key to experimental research. In abstract terms, this means the relationship between a certain action, X, which alone creates the effect Y. For example, turning the volume knob on your stereo clockwise causes the sound to get louder. In addition, you could observe that turning the knob clockwise alone, and nothing else, caused the sound level to increase. You could further conclude that a causal relationship exists between turning the knob clockwise and an increase in volume; not simply because one caused the other, but because you are certain that nothing else caused the effect.
Independent and Dependent Variables
Beyond discovering causal relationships, experimental research further seeks out how much cause will produce how much effect; in technical terms, how the independent variable will affect the dependent variable. You know that turning the knob clockwise will produce a louder noise, but by varying how much you turn it, you see how much sound is produced. On the other hand, you might find that although you turn the knob a great deal, sound doesn't increase dramatically. Or, you might find that turning the knob just a little adds more sound than expected. The amount that you turned the knob is the independent variable, the variable that the researcher controls, and the amount of sound that resulted from turning it is the dependent variable, the change that is caused by the independent variable.
Experimental research also looks into the effects of removing something. For example, if you remove a loud noise from the room, will the person next to you be able to hear you? Or how much noise needs to be removed before that person can hear you?
Treatment and Hypothesis
The term treatment refers to either removing or adding a stimulus in order to measure an effect (such as turning the knob a little or a lot, or reducing the noise level a little or a lot). Experimental researchers want to know how varying levels of treatment will affect what they are studying. As such, researchers often have an idea, or hypothesis, about what effect will occur when they cause something. Few experiments are performed where there is no idea of what will happen. From past experiences in life or from the knowledge we possess in our specific field of study, we know how some actions cause other reactions. Experiments confirm or reconfirm this fact.
Experimentation becomes more complex when the causal relationships researchers seek aren't as clear as in the stereo knob-turning examples. Questions like "Will olestra cause cancer?" or "Will this new fertilizer help this plant grow better?" present more to consider. For example, any number of things could affect the growth rate of a plant: the temperature, how much water or sun it receives, or how much carbon dioxide is in the air. These variables can affect an experiment's results. An experimenter who wants to show that adding a certain fertilizer will help a plant grow better must ensure that it is the fertilizer, and nothing else, affecting the growth patterns of the plant. To do this, as many of these variables as possible must be controlled.
Matching and Randomization
In the example used in this guide (you'll find the example below), we discuss an experiment that focuses on three groups of plants -- one that is treated with a fertilizer named MegaGro, another group treated with a fertilizer named Plant!, and yet another that is not treated with fertilizer (this latter group serves as a "control" group). In this example, even though the designers of the experiment have tried to remove all extraneous variables, results may appear merely coincidental. Since the goal of the experiment is to prove a causal relationship in which a single variable is responsible for the effect produced, the experiment would produce stronger proof if the results were replicated in larger treatment and control groups.
Selecting groups entails assigning subjects to the groups of an experiment in such a way that treatment and control groups are comparable in all respects except the application of the treatment. Groups can be created in two ways: matching and randomization. In the MegaGro experiment discussed below, the plants might be matched according to characteristics such as age, weight and whether they are blooming. This involves distributing these plants so that each plant in one group exactly matches the characteristics of plants in the other groups. Matching may be problematic, though, because it "can promote a false sense of security by leading [the experimenter] to believe that [the] experimental and control groups were really equated at the outset, when in fact they were not equated on a host of variables" (Jones, 291). In other words, you may have flowers for your MegaGro experiment that you matched and distributed among groups, but other variables are unaccounted for. It would be difficult to have equal groupings.
Randomization, then, is preferred to matching. This method is based on the statistical principle of normal distribution. Theoretically, any arbitrarily selected group of adequate size will reflect normal distribution. Differences between groups will average out and become more comparable. The principle of normal distribution states that in a population most individuals will fall within the middle range of values for a given characteristic, with increasingly fewer toward either extreme (graphically represented as the ubiquitous "bell curve").
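Mechanically, randomization in the MegaGro example is simple, as the short sketch below shows (plant identifiers are hypothetical): shuffle the subjects, then split them into groups.

```python
# Sketch: random assignment of 30 plants to three equal groups.
import random

random.seed(11)
plants = [f"plant_{i:02d}" for i in range(30)]
random.shuffle(plants)                      # randomization happens here

groups = {
    "MegaGro": plants[0:10],
    "Plant!":  plants[10:20],
    "control": plants[20:30],
}
for name, members in groups.items():
    print(name, members[:3], "...")
```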
Differences between Quasi-Experimental and Experimental Research
Thus far, we have explained that for experimental research we need:
- a hypothesis for a causal relationship;
- a control group and a treatment group;
- to eliminate confounding variables that might mess up the experiment and prevent displaying the causal relationship; and
- to have larger groups with a carefully sorted constituency; preferably randomized, in order to keep accidental differences from fouling things up.
But what if we don't have all of those? Do we still have an experiment? Not a true experiment in the strictest scientific sense of the term, but we can have a quasi-experiment, an attempt to uncover a causal relationship, even though the researcher cannot control all the factors that might affect the outcome.
A quasi-experimenter treats a given situation as an experiment even though it is not wholly by design. The independent variable may not be manipulated by the researcher, treatment and control groups may not be randomized or matched, or there may be no control group. The researcher is limited in what he or she can say conclusively.
The significant element of both experiments and quasi-experiments is the measure of the dependent variable, which allows for comparison. Some data are quite straightforward, but other measures, such as level of self-confidence in writing ability or increases in creativity or reading comprehension, are inescapably subjective. In such cases, quasi-experimentation often involves a number of strategies to compare subjective data, such as rating data, testing, surveying, and content analysis.
Rating essentially involves developing a rating scale to evaluate data. In testing, experimenters and quasi-experimenters use ANOVA (Analysis of Variance) and ANCOVA (Analysis of Covariance) tests to measure differences between control and experimental groups, as well as correlations between groups.
Since we're on the subject of statistics, note that experimental and quasi-experimental research cannot state beyond a shadow of a doubt that a single cause will always produce any one effect. They can do no more than show a probability that one thing causes another. The probability that a result is due to random chance is an important measure of statistical analysis in experimental research.
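As a hedged sketch of these ideas, the one-way ANOVA below compares simulated growth measurements for the three fertilizer groups used in the example that follows; the p-value estimates the probability that differences at least this large would arise by random chance alone if the groups were truly equivalent:

```python
# Sketch: one-way ANOVA across three fertilizer groups (simulated growth, cm).
import numpy as np
from scipy import stats

rng = np.random.default_rng(5)
megagro = rng.normal(12.0, 2.0, 10)     # hypothetical growth under MegaGro
plant_bang = rng.normal(10.5, 2.0, 10)  # hypothetical growth under Plant!
control = rng.normal(9.0, 2.0, 10)      # no fertilizer

f_stat, p_value = stats.f_oneway(megagro, plant_bang, control)
print(f"F = {f_stat:.2f}, p = {p_value:.4f}")
# A small p-value indicates that *some* group differs; follow-up comparisons
# are needed to say which one.
```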
Example: Causality
Let's say you want to determine that your new fertilizer, MegaGro, will increase the growth rate of plants. You begin by applying the fertilizer to a plant. Since the experiment is concerned with proving that MegaGro works, you need another plant, with no fertilizer at all applied to it, against which to compare how much change your fertilized plant displays. This is what is known as a control group.
Set up with a control group, which will receive no treatment, and an experimental group, which will get MegaGro, you must then address those variables that could invalidate your experiment. This can be an extensive and exhaustive process. You must ensure that you use the same kind of plant; that both groups are put in the same kind of soil; that they receive equal amounts of water and sun; that they receive the same amount of exposure to carbon-dioxide-exhaling researchers, and so on. In short, any other variable that might affect the growth of those plants, other than the fertilizer, must be the same for both plants. Otherwise, you can't prove absolutely that MegaGro is the only explanation for the increased growth of one of those plants.
Such an experiment can be done on more than two groups. You may not only want to show that MegaGro is an effective fertilizer, but that it is better than its competitor brand of fertilizer, Plant! All you need to do, then, is have one experimental group receiving MegaGro, one receiving Plant! and the other (the control group) receiving no fertilizer. Those are the only variables that can be different between the three groups; all other variables must be the same for the experiment to be valid.
Controlling variables allows the researcher to identify conditions that may affect the experiment's outcome. This may lead to alternative explanations that the researcher is willing to entertain in order to isolate only the variables judged significant. In the MegaGro experiment, you may be concerned with how fertile the soil is, but not with the plants' relative position in the window, as you don't think that the amount of shade they get will affect their growth rate. But what if it did? You would have to go about eliminating variables in order to determine which is the key factor. What if one plant receives more shade than the other, and the MegaGro plant, which received more shade, died? This might prompt you to formulate a plausible alternative explanation, which is a way of accounting for a result that differs from what you expected. You would then want to redo the study with equal amounts of sunlight.
Methods: Five Steps
Experimental research can be roughly divided into five phases:
Identifying a research problem
The process starts by clearly identifying the problem you want to study and considering what possible methods will affect a solution. Then you choose the method you want to test, and formulate a hypothesis to predict the outcome of the test.
For example, you may want to improve student essays, but you don't believe that teacher feedback is enough. You hypothesize that some possible methods for writing improvement include peer workshopping, or reading more example essays. Favoring the former, your experiment would try to determine if peer workshopping improves writing in high school seniors. You state your hypothesis: peer workshopping prior to turning in a final draft will improve the quality of the student's essay.
Planning an experimental research study
The next step is to devise an experiment to test your hypothesis. In doing so, you must consider several factors. For example, how generalizable do you want your end results to be? Do you want to generalize about the entire population of high school seniors everywhere, or just the particular population of seniors at your specific school? This will determine how simple or complex the experiment will be. The amount of time and funding you have will also determine the size of your experiment.
Continuing the example from step one, you may want a small study at one school involving three teachers, each teaching two sections of the same course. The treatment in this experiment is peer workshopping. Each of the three teachers will assign the same essay assignment to both classes; the treatment group will participate in peer workshopping, while the control group will receive only teacher comments on their drafts.
Conducting the experiment
At the start of an experiment, the control and treatment groups must be selected. Whereas the "hard" sciences have the luxury of attempting to create truly equal groups, educators often find themselves forced to conduct their experiments based on self-selected groups, rather than on randomization. As was highlighted in the Basic Concepts section, this makes the study a quasi-experiment, since the researchers cannot control all of the variables.
For the peer workshopping experiment, let's say that it involves six classes and three teachers with a sample of students randomly selected from all the classes. Each teacher will have a class for a control group and a class for a treatment group. The essay assignment is given and the teachers are briefed not to change any of their teaching methods other than the use of peer workshopping. You may see here that this is an effort to control a possible variable: teaching style variance.
Analyzing the data
The fourth step is to collect and analyze the data. This is not solely a step where you collect the papers, read them, and say your methods were a success. You must show how successful. You must devise a scale by which you will evaluate the data you receive; therefore, you must decide which indicators will, and will not, be important.
Continuing our example, the teachers' grades are first recorded, then the essays are evaluated for a change in sentence complexity, syntactical and grammatical errors, and overall length. Any statistical analysis is done at this time if you choose to do any. Notice here that the researcher has made judgments on what signals improved writing. It is not simply a matter of improved teacher grades, but a matter of what the researcher believes constitutes improved use of the language.
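A minimal sketch of this analysis step (hypothetical ratings on the researcher-devised scale; a two-group comparison standing in for the fuller evaluation described above):

```python
# Sketch: comparing essay ratings for workshop vs. teacher-comments-only groups.
import numpy as np
from scipy import stats

rng = np.random.default_rng(9)
workshop = rng.normal(78.0, 8.0, 60)       # treatment sections' essay ratings
teacher_only = rng.normal(74.0, 8.0, 60)   # control sections' essay ratings

t, p = stats.ttest_ind(workshop, teacher_only)
print(f"mean gap {workshop.mean() - teacher_only.mean():.1f} points, p = {p:.4f}")
# With intact (non-randomized) classes, a low p-value still leaves selection
# differences between sections as a rival explanation.
```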
Writing the paper/presentation describing the findings
Once you have completed the experiment, you will want to share your findings by publishing an academic paper or giving a presentation. These papers usually have the following format, but it is not necessary to follow it strictly. Sections can be combined or omitted, depending on the structure of the experiment and the journal to which you submit your paper.
- Abstract : Summarize the project: its aims, participants, basic methodology, results, and a brief interpretation.
- Introduction : Set the context of the experiment.
- Review of Literature : Provide a review of the literature in the specific area of study to show what work has been done. Should lead directly to the author's purpose for the study.
- Statement of Purpose : Present the problem to be studied.
- Participants : Describe in detail participants involved in the study; e.g., how many, etc. Provide as much information as possible.
- Materials and Procedures : Clearly describe materials and procedures. Provide enough information so that the experiment can be replicated, but not so much information that it becomes unreadable. Include how participants were chosen, the tasks assigned them, how they were conducted, how data were evaluated, etc.
- Results : Present the data in an organized fashion. If it is quantifiable, it is analyzed through statistical means. Avoid interpretation at this time.
- Discussion : After presenting the results, interpret what has happened in the experiment. Base the discussion only on the data collected and as objective an interpretation as possible. Hypothesizing is possible here.
- Limitations : Discuss factors that affect the results. Here, you can speculate how much generalization, or more likely, transferability, is possible based on results. This section is important for quasi-experimentation, since a quasi-experiment cannot control all of the variables that might affect the outcome of a study. You would discuss what variables you could not control.
- Conclusion : Synthesize all of the above sections.
- References : Document works cited in the correct format for the field.
Experimental and Quasi-Experimental Research: Issues and Commentary
Several issues are addressed in this section, including the use of experimental and quasi-experimental research in educational settings, the relevance of the methods to English studies, and ethical concerns regarding the methods.
Using Experimental and Quasi-Experimental Research in Educational Settings
Charting Causal Relationships in Human Settings
Any time a human population is involved, prediction of causal relationships becomes cloudy and, some say, impossible. Many reasons exist for this; for example,
- researchers in classrooms add a disturbing presence, causing students to act abnormally, consciously or unconsciously;
- subjects try to please the researcher, just because of an apparent interest in them (known as the Hawthorne Effect); or, perhaps
- the teacher as researcher is restricted by bias and time pressures.
But such confounding variables don't stop researchers from trying to identify causal relationships in education. Educators naturally experiment anyway, comparing groups, assessing the attributes of each, and making predictions based on an evaluation of alternatives. They look to research to support their intuitive practices, experimenting whenever they try to decide which instruction method will best encourage student improvement.
Combining Theory, Research, and Practice
The goal of educational research lies in combining theory, research, and practice. Educational researchers attempt to establish models of teaching practice, learning styles, curriculum development, and countless other educational issues. The aim is to "try to improve our understanding of education and to strive to find ways to have understanding contribute to the improvement of practice," one writer asserts (Floden 1996, p. 197).
In quasi-experimentation, researchers try to develop models by involving teachers as researchers, employing observational research techniques. Although results of this kind of research are context-dependent and difficult to generalize, they can act as a starting point for further study. The "educational researcher . . . provides guidelines and interpretive material intended to liberate the teacher's intelligence so that whatever artistry in teaching the teacher can achieve will be employed" (Eisner 1992, p. 8).
Bias and Rigor
Critics contend that the educational researcher is inherently biased, sample selection is arbitrary, and replication is impossible. The key to combating such criticism has to do with rigor. Rigor is established through close, proper attention to randomizing groups, time spent on a study, and questioning techniques. This allows more effective application of standards of quantitative research to qualitative research.
Often, teachers cannot wait for piles of experimental data to be analyzed before using the teaching methods (Lauer and Asher 1988). They ultimately must assess whether the results of a study in a distant classroom are applicable in their own classrooms. And they must continuously test the effectiveness of their methods by using experimental and qualitative research simultaneously. In addition to statistics (quantitative), researchers may perform case studies or observational research (qualitative) in conjunction with, or prior to, experimentation.
Relevance to English Studies
Situations in English Studies That Might Encourage Use of Experimental Methods
Whenever a researcher would like to see if a causal relationship exists between groups, experimental and quasi-experimental research can be a viable research tool. Researchers in English Studies might use experimentation when they believe a relationship exists between two variables, and they want to show that these two variables have a significant correlation (or causal relationship).
A benefit of experimentation is the ability to control variables, such as the amount of treatment, when it is given, to whom and so forth. Controlling variables allows researchers to gain insight into the relationships they believe exist. For example, a researcher has an idea that writing under pseudonyms encourages student participation in newsgroups. Researchers can control which students write under pseudonyms and which do not, then measure the outcomes. Researchers can then analyze results and determine if this particular variable alone causes increased participation.
Transferability: Applying Results
Experimentation and quasi-experimentation allow researchers to generate transferable results, with acceptance of those results depending on experimental rigor. This is an effective alternative to generalizability, which is difficult to rely upon in educational research. English scholars, reading the results of experiments with a critical eye, ultimately decide if and how results will be implemented. They may even extend existing research by replicating experiments in the interest of generating new results and benefiting from multiple perspectives. These results will strengthen the study or discredit its findings.
Concerns English Scholars Express about Experiments
Researchers should carefully consider if a particular method is feasible in humanities studies, and whether it will yield the desired information. Some researchers recommend addressing pertinent issues combining several research methods, such as survey, interview, ethnography, case study, content analysis, and experimentation (Lauer and Asher, 1988).
Advantages and Disadvantages of Experimental Research: Discussion
In educational research, experimentation is a way to gain insight into methods of instruction. Although teaching is context specific, results can provide a starting point for further study. Often, a teacher/researcher will have a "gut" feeling about an issue, which can be explored through experimentation and an examination of causal relationships. Through research, intuition can shape practice.
A preconception exists that information obtained through scientific method is free of human inconsistencies. But, since scientific method is a matter of human construction, it is subject to human error. The researcher's personal bias may intrude upon the experiment as well. For example, certain preconceptions may dictate the course of the research and affect the behavior of the subjects. The issue may be compounded when, although many researchers are aware of the effect that their personal bias exerts on their own research, they are pressured to produce research that is accepted in their field of study as "legitimate" experimental research.
The researcher does bring bias to experimentation, but bias does not limit an ability to be reflective. An ethical researcher thinks critically about results and reports those results after careful reflection. Concerns over bias can be leveled against any research method.
Often, the sample may not be representative of a population, because the researcher does not have an opportunity to ensure a representative sample. For example, subjects could be limited to one location, limited in number, studied under constrained conditions and for too short a time.
Despite such inconsistencies in educational research, the researcher has control over the variables, increasing the possibility of more precisely determining the individual effects of each variable. Interactions between variables can also be determined more readily.
Even so, the results may be artificial. It can be argued that variables are manipulated so the experiment measures what researchers want to examine; therefore, the results are merely contrived products and have no bearing in material reality. Artificial results are difficult to apply in practical situations, making generalizing from the results of a controlled study questionable. Experimental research essentially first decontextualizes a single question from a "real world" scenario, studies it under controlled conditions, and then tries to recontextualize the results back onto the "real world" scenario. Results may also be difficult to replicate.
Groups in an experiment may also not be comparable. Quasi-experimentation in educational research is widespread because not only are many researchers also teachers, but many subjects are also students. With the classroom as laboratory, it is difficult to implement randomizing or matching strategies. Often, students self-select into certain sections of a course on the basis of their own agendas and scheduling needs. Thus when, as often happens, one class is treated and the other used as a control, the groups may not actually be comparable. As one might imagine, people who register for a class that meets three times a week at eleven o'clock in the morning (young, no full-time job, night people) differ significantly from those who register for one on Monday evenings from seven to ten p.m. (older, full-time job, possibly more highly motivated). Each situation presents different variables, and your group might be completely different from that in the study. Long-term studies are expensive and hard to reproduce. And although the same hypotheses are often tested by different researchers, various factors complicate attempts to compare or synthesize them. It is nearly impossible to be as rigorous as the natural-sciences model dictates.
Even when randomization of students is possible, problems arise. First, depending on the class size and the number of classes, the sample may be too small for the extraneous variables to cancel out. Second, the study population is not strictly a sample, because the population of students registered for a given class at a particular university is obviously not representative of the population of all students at large. For example, students at a suburban private liberal-arts college are typically young, white, and upper-middle class. In contrast, students at an urban community college tend to be older, poorer, and members of a racial minority. The differences can be construed as confounding variables: the first group may have fewer demands on its time, have less self-discipline, and benefit from superior secondary education. The second may have more demands, including a job and/or children, have more self-discipline, but an inferior secondary education. Selecting a population of subjects which is representative of the average of all post-secondary students is also a flawed solution, because the outcome of a treatment involving this group is not necessarily transferable to either the students at a community college or the students at the private college, nor are they universally generalizable.
When a human population is involved, experimental research must ask whether behavior can be predicted or studied with validity. Human responses can be difficult to measure, because human behavior depends on individual responses. Rationalizing behavior through experimentation does not account for the process of thought, making outcomes of that process fallible (Eisenberg, 1996).
Nevertheless, we perform experiments daily. When we brush our teeth every morning, we are experimenting to see whether this behavior will result in fewer cavities; we rely on previous experimentation and transfer it to our daily lives.
Moreover, experimentation can be combined with other research methods to ensure rigor. Qualitative methods such as case study, ethnography, observational research and interviews can function as preconditions for experimentation or be conducted simultaneously to add validity to a study.
We have few alternatives to experimentation. Mere anecdotal research, for example, is unscientific, unreplicable, and easily manipulated. Should we rely on Ed walking into a faculty meeting and telling the story of Sally, who screamed "I love writing!" ten times before she wrote her essay and produced a quality paper? Should all the other faculty members hear this anecdote and conclude that all other students should employ the same technique?
One final disadvantage: frequently, political pressure drives experimentation and produces unreliable results. Specific funding and support may shape the outcomes of experimentation and cause the results to be skewed. Readers may not be aware of these biases and should approach reported experiments with a critical eye.
Advantages and Disadvantages of Experimental Research: Quick Reference List
Experimental and quasi-experimental research can be summarized in terms of their advantages and disadvantages. This section combines and elaborates upon many points mentioned previously in this guide.
Ethical Concerns
Experimental research may be manipulated at both ends of the spectrum: by the researcher and by the reader. Researchers who report on experimental research to naive readers face ethical concerns. While creating an experiment, certain objectives and intended uses of the results might drive and skew it: looking for specific results, researchers may ask questions and examine data that support only the desired conclusions, ignoring conflicting findings. Similarly, researchers seeking support for a particular plan may look only at findings that support that goal, dismissing conflicting research.
Editors and journals do not publish only trouble-free material. As readers of experiments, members of the press might report selected and isolated parts of a study to the public, essentially generalizing the data to a population the researcher never intended. Take, for example, oat bran. A few years ago, the press reported that oat bran reduces high blood pressure by reducing cholesterol. But that bit of information was taken out of context: the actual study found that when people ate more oat bran, they reduced their intake of saturated fats high in cholesterol. People started eating oat bran muffins by the ton, assuming a causal relationship when in actuality a number of confounding variables might influence the causal link.
Ultimately, ethical use and reportage of experimentation should be addressed by researchers, reporters and readers alike.
Reporters of experimental research should recognize their audience's level of knowledge and take care not to mislead readers, and readers must rely on the author's skill and integrity to point out errors and limitations. The relationship between researcher and reader may not sound like a problem, but after spending months or years on a project that produces no significant results, it may be tempting to manipulate the data to show significance in order to jockey for grants and tenure.
Meanwhile, the reader may uncritically accept results that acquire an air of validity simply by being published in a journal. However, research that lacks credibility often is not published; consequently, researchers who fail to publish run the risk of being denied grants, promotions, jobs, and tenure. While few researchers are anything but earnest in their attempts to conduct well-designed experiments and present the results in good faith, rhetorical considerations often dictate a certain minimization of methodological flaws.
Concerns arise when researchers do not report all results, or otherwise alter them. This phenomenon is counterbalanced, however, by the fact that professionals are also rewarded for publishing critiques of others' work. Because the author of an experimental study is in essence making an argument for the existence of a causal relationship, he or she must be concerned not only with its integrity, but also with its presentation. Achieving persuasiveness in any kind of writing involves several elements: choosing a topic of interest, providing convincing evidence for one's argument, using tone and voice to project credibility, and organizing the material in a way that meets expectations for a logical sequence. Of course, what is regarded as pertinent, accepted as evidence, required for credibility, and understood as logical varies according to context. Experimental researchers who hope to make an impact on the community of professionals in their field must attend to the standards and orthodoxies of that audience.
Related Links
Contrasts: Traditional and computer-supported writing classrooms. This Web site presents a discussion of the Transitions Study, a year-long exploration of teachers and students in computer-supported and traditional writing classrooms. Includes a description of the study, the rationale for conducting it, and its results and implications.
http://kairos.technorhetoric.net/2.2/features/reflections/page1.htm
Annotated Bibliography
A cozy world of trivial pursuits? (1996, June 28). The Times Educational Supplement, 4174, pp. 14-15.
A critique discounting the current methods Great Britain employs to fund and disseminate educational research. The belief is that research is performed for fellow researchers, not the teaching public, and that implications for day-to-day practice are never addressed.
Anderson, J. A. (1979, Nov. 10-13). Research as argument: the experimental form. Paper presented at the annual meeting of the Speech Communication Association, San Antonio, TX.
In this paper, the scientist who uses the experimental form does so in order to explain that which is verified through prediction.
Anderson, Linda M. (1979). Classroom-based experimental studies of teaching effectiveness in elementary schools . (Technical Report UTR&D-R- 4102). Austin: Research and Development Center for Teacher Education, University of Texas.
Three recent large-scale experimental studies have built on a database established through several correlational studies of teaching effectiveness in elementary school.
Asher, J. W. (1976). Educational research and evaluation methods . Boston: Little, Brown.
Abstract unavailable by press time.
Babbie, Earl R. (1979). The Practice of Social Research . Belmont, CA: Wadsworth.
A textbook containing discussions of several research methodologies used in social science research.
Bangert-Drowns, R.L. (1993). The word processor as instructional tool: a meta-analysis of word processing in writing instruction. Review of Educational Research, 63 (1), 69-93.
Beach, R. (1993). The effects of between-draft teacher evaluation versus student self-evaluation on high school students' revising of rough drafts. Research in the Teaching of English, 13 , 111-119.
The question of whether teacher evaluation or guided self-evaluation of rough drafts results in increased revision was addressed in Beach's study. Differences in the effects of teacher evaluations, guided self-evaluation (using prepared guidelines) and no evaluation of rough drafts were examined. The final drafts of students (10th, 11th, and 12th graders) were compared with their rough drafts and rated by judges according to degree of change.
Beishuizen, J. & Moonen, J. (1992). Research in technology enriched schools: a case for cooperation between teachers and researchers . (ERIC Technical Report ED351006).
This paper describes the research strategies employed in the Dutch Technology Enriched Schools project to encourage extensive and intensive use of computers in a small number of secondary schools, and to study the effects of computer use on the classroom, the curriculum, and school administration and management.
Borg, W. P. (1989). Educational Research: an Introduction . (5th ed.). New York: Longman.
An overview of educational research methodology, including literature review and discussion of approaches to research, experimental design, statistical analysis, ethics, and rhetorical presentation of research findings.
Campbell, D. T., & Stanley, J. C. (1963). Experimental and quasi-experimental designs for research . Boston: Houghton Mifflin.
A classic overview of research designs.
Campbell, D.T. (1988). Methodology and epistemology for social science: selected papers . ed. E. S. Overman. Chicago: University of Chicago Press.
This is an overview of Campbell's 40-year career and his work. It covers in seven parts measurement, experimental design, applied social experimentation, interpretive social science, epistemology and sociology of science. Includes an extensive bibliography.
Caporaso, J. A., & Roos, Jr., L. L. (Eds.). Quasi-experimental approaches: Testing theory and evaluating policy. Evanston, IL: Northwestern University Press.
A collection of articles concerned with explicating the underlying assumptions of quasi-experimentation and relating these to true experimentation, with an emphasis on design. Includes a glossary of terms.
Collier, R. Writing and the word processor: How wary of the gift-giver should we be? Unpublished manuscript.
Charts the developments to date in computers and composition and speculates about the future within the framework of Willie Sypher's model of the evolution of creative discovery.
Cook, T.D. & Campbell, D.T. (1979). Quasi-experimentation: design and analysis issues for field settings . Boston: Houghton Mifflin Co.
The authors write that this book "presents some quasi-experimental designs and design features that can be used in many social research settings. The designs serve to probe causal hypotheses about a wide variety of substantive issues in both basic and applied research."
Cutler, A. (1970). An experimental method for semantic field study. Linguistic Communication, 2 , N. pag.
This paper emphasizes the need for empirical research and objective discovery procedures in semantics, and illustrates a method by which these goals may be obtained.
Daniels, L. B. (1996, Summer). Eisenberg's Heisenberg: The indeterminacies of rationality. Curriculum Inquiry, 26, 181-92.
Places Eisenberg's theories in relation to the death of foundationalism by showing that he distorts rational studies into a form of relativism. Examines Eisenberg's ideas on indeterminacy, methods and evidence, what he is against, and what we should make of what he says.
Danziger, K. (1990). Constructing the subject: Historical origins of psychological research. Cambridge: Cambridge University Press.
Danziger stresses the importance of being aware of the framework in which research operates and of the essentially social nature of scientific activity.
Diener, E., et al. (1972, December). Leakage of experimental information to potential future subjects by debriefed subjects. Journal of Experimental Research in Personality , 264-67.
Research regarding research: an investigation of the effects on the outcome of an experiment in which information about the experiment had been leaked to subjects. The study concludes that such leakage is not a significant problem.
Dudley-Marling, C., & Rhodes, L. K. (1989). Reflecting on a close encounter with experimental research. Canadian Journal of English Language Arts. 12 , 24-28.
Researchers, Dudley-Marling and Rhodes, address some problems they met in their experimental approach to a study of reading comprehension. This article discusses the limitations of experimental research, and presents an alternative to experimental or quantitative research.
Edgington, E. S. (1985). Random assignment and experimental research. Educational Administration Quarterly, 21 , N. pag.
Edgington explores ways on which random assignment can be a part of field studies. The author discusses both non-experimental and experimental research and the need for using random assignment.
Eisenberg, J. (1996, Summer). Response to critiques by R. Floden, J. Zeuli, and L. Daniels. Curriculum Inquiry, 26 , 199-201.
A response to critiques of his argument that rational educational research methods are at best suspect and at worst futile. He believes indeterminacy controls this method and worries that chaotic research is failing students.
Eisner, E. (1992, July). Are all causal claims positivistic? A reply to Francis Schrag. Educational Researcher, 21 (5), 8-9.
Eisner responds to Schrag, who claimed that critics like Eisner cannot escape a positivistic paradigm whatever attempts they make to do so. Eisner argues that Schrag essentially misses the point by trying to argue for the paradigm solely on the basis of cause and effect without including the rest of positivistic philosophy. This weakens Schrag's argument against multiple modal methods, which Eisner argues provide opportunities to apply the appropriate research design where it is most applicable.
Floden, R.E. (1996, Summer). Educational research: limited, but worthwhile and maybe a bargain. (response to J.A. Eisenberg). Curriculum Inquiry, 26 , 193-7.
Responds to John Eisenberg's critique of educational research by asserting the connection between improvement of practice and research results. He places a high value on teacher judgment and on the knowledge that research informs practice.
Fortune, J. C., & Hutson, B. A. (1994, March/April). Selecting models for measuring change when true experimental conditions do not exist. Journal of Educational Research, 197-206.
This article reviews methods for minimizing the effects of nonideal experimental conditions by optimally organizing models for the measurement of change.
Fox, R. F. (1980). Treatment of writing apprehension and its effects on composition. Research in the Teaching of English, 14, 39-49.
The main purpose of Fox's study was to investigate the effects of two methods of teaching writing on writing apprehension among entry-level composition students. A conventional teaching procedure was used with a control group, while a workshop method was employed with the treatment group.
Gadamer, H-G. (1976). Philosophical hermeneutics . (D. E. Linge, Trans.). Berkeley, CA: University of California Press.
A collection of essays with the common themes of the mediation of experience through language, the impossibility of objectivity, and the importance of context in interpretation.
Gaise, S. J. (1981). Experimental vs. non-experimental research on classroom second language learning. Bilingual Education Paper Series, 5 , N. pag.
The aims of classroom-centered research on second language learning and teaching are considered and contrasted with the experimental approach.
Giordano, G. (1983). Commentary: Is experimental research snowing us? Journal of Reading, 27 , 5-7.
Do educational research findings actually benefit teachers and students? Giordano states his opinion that research may be helpful to teaching, but is not essential and often is unnecessary.
Goldenson, D. R. (1978, March). An alternative view about the role of the secondary school in political socialization: A field-experimental study of theory and research in social education. Theory and Research in Social Education , 44-72.
This study concludes that when political discussion among experimental groups of secondary school students is led by a teacher, the degree to which the students' views were impacted is proportional to the credibility of the teacher.
Grossman, J., and J. P. Tierney. (1993, October). The fallibility of comparison groups. Evaluation Review , 556-71.
Grossman and Tierney present evidence to suggest that comparison groups are not the same as nontreatment groups.
Harnisch, D. L. (1992). Human judgment and the logic of evidence: A critical examination of research methods in special education transition literature. In D. L. Harnisch et al. (Eds.), Selected readings in transition.
This chapter describes several common types of research studies in special education transition literature and the threats to their validity.
Hawisher, G. E. (1989). Research and recommendations for computers and composition. In G. Hawisher and C. Selfe. (Eds.), Critical Perspectives on Computers and Composition Instruction . (pp. 44-69). New York: Teacher's College Press.
An overview of research in computers and composition to date. Includes a synthesis grid of experimental research.
Hillocks, G. Jr. (1982). The interaction of instruction, teacher comment, and revision in teaching the composing process. Research in the Teaching of English, 16 , 261-278.
Hillock conducted a study using three treatments: observational or data collecting activities prior to writing, use of revisions or absence of same, and either brief or lengthy teacher comments to identify effective methods of teaching composition to seventh and eighth graders.
Jenkinson, J. C. (1989). Research design in the experimental study of intellectual disability. International Journal of Disability, Development, and Education, 69-84.
This article catalogues the difficulties of conducting experimental research where the subjects are intellectually disabled and suggests alternative research strategies.
Jones, R. A. (1985). Research Methods in the Social and Behavioral Sciences. Sunderland, MA: Sinauer Associates, Inc..
A textbook designed to provide an overview of research strategies in the social sciences, including survey, content analysis, ethnographic approaches, and experimentation. The author emphasizes the importance of applying strategies appropriately and in variety.
Kamil, M. L., Langer, J. A., & Shanahan, T. (1985). Understanding research in reading and writing . Newton, Massachusetts: Allyn and Bacon.
Examines a wide variety of problems in reading and writing, with a broad range of techniques, from different perspectives.
Kennedy, J. L. (1985). An Introduction to the Design and Analysis of Experiments in Behavioral Research . Lanham, MD: University Press of America.
An introductory textbook of psychological and educational research.
Keppel, G. (1991). Design and analysis: a researcher's handbook . Englewood Cliffs, NJ: Prentice Hall.
This updates Keppel's earlier book subtitled "a student's handbook." Focuses on extensive information about analytical research and gives a basic picture of research in psychology. Covers a range of statistical topics. Includes a subject and name index, as well as a glossary.
Knowles, G., Elija, R., & Broadwater, K. (1996, Spring/Summer). Teacher research: enhancing the preparation of teachers? Teaching Education, 8 , 123-31.
Researchers looked at one teacher candidate who participated in a class in which students designed their own research projects around questions they wanted answered in the teaching world. The goal of the study was to see whether preservice teachers developed reflective practice by researching appropriate classroom contexts.
Lace, J., & De Corte, E. (1986, April 16-20). Research on media in western Europe: A myth of sisyphus? Paper presented at the annual meeting of the American Educational Research Association. San Francisco.
Identifies main trends in media research in western Europe, with emphasis on three successive stages since 1960: tools technology, systems technology, and reflective technology.
Latta, A. (1996, Spring/Summer). Teacher as researcher: selected resources. Teaching Education, 8 , 155-60.
An annotated bibliography on educational research including milestones of thought, practical applications, successful outcomes, seminal works, and immediate practical applications.
Lauer. J.M. & Asher, J. W. (1988). Composition research: Empirical designs . New York: Oxford University Press.
Approaching experimentation from a humanist's perspective, the authors focus on eight major research designs: case studies, ethnographies, sampling and surveys, quantitative descriptive studies, measurement, true experiments, quasi-experiments, meta-analyses, and program evaluations. The book takes on the challenge of bridging the language of social science with that of the humanities. Includes name and subject indexes, as well as a glossary and a glossary of symbols.
Mishler, E. G. (1979). Meaning in context: Is there any other kind? Harvard Educational Review, 49 , 1-19.
Contextual importance has been largely ignored by traditional research approaches in the social/behavioral sciences and in their application to the education field. Developmental and social psychologists have increasingly noted the inadequacies of this approach. Drawing examples from phenomenology, sociolinguistics, and ethnomethodology, the author proposes alternative approaches for studying meaning in context.
Mitroff, I., & Bonoma, T. V. (1978, May). Psychological assumptions, experimentations, and real world problems: A critique and an alternate approach to evaluation. Evaluation Quarterly , 235-60.
The authors advance the notion of dialectic as a means to clarify and examine the underlying assumptions of experimental research methodology, both in highly controlled situations and in social evaluation.
Muller, E. W. (1985). Application of experimental and quasi-experimental research designs to educational software evaluation. Educational Technology, 25 , 27-31.
Muller proposes a set of guidelines for the use of experimental and quasi-experimental methods of research in evaluating educational software. By obtaining empirical evidence of student performance, it is possible to evaluate whether programs are having the desired learning effect.
Murray, S., et al. (1979, April 8-12). Technical issues as threats to internal validity of experimental and quasi-experimental designs . San Francisco: University of California.
The article reviews three evaluation models and analyzes the flaws common to them. Remedies are suggested.
Muter, P., & Maurutto, P. (1991). Reading and skimming from computer screens and books: The paperless office revisited? Behavior and Information Technology, 10 (4), 257-66.
The researchers test for reading and skimming effectiveness, defined as accuracy combined with speed, for written text compared to text on a computer monitor. They conclude that, given optimal on-line conditions, both are equally effective.
O'Donnell, A., Et al. (1992). The impact of cooperative writing. In J. R. Hayes, et al. (Eds.). Reading empirical research studies: The rhetoric of research . (pp. 371-84). Hillsdale, NJ: Lawrence Erlbaum Associates.
A model of experimental design. The authors investigate the efficacy of cooperative writing strategies, as well as the transferability of skills learned to other, individual writing situations.
Palmer, D. (1988). Looking at philosophy . Mountain View, CA: Mayfield Publishing.
An introductory text with incisive but understandable discussions of the major movements and thinkers in philosophy from the Pre-Socratics through Sartre. With illustrations by the author. Includes a glossary.
Phelps-Gunn, T., & Phelps-Terasaki, D. (1982). Written language instruction: Theory and remediation . London: Aspen Systems Corporation.
The lack of research in written expression is addressed, and an application of the Total Writing Process Model is presented.
Poetter, T. (1996, Spring/Summer). From resistance to excitement: becoming qualitative researchers and reflective practitioners. Teaching Education, 8, 109-19.
An education professor reveals his own problematic research when he attempted to institute an educational research component in a teacher preparation program. He encountered dissent from students and cooperating professionals, but was ultimately rewarded with excitement about research and a recognized connection to practice.
Purves, A. C. (1992). Reflections on research and assessment in written composition. Research in the Teaching of English, 26 .
Three issues concerning research and assessment in writing are discussed: 1) school writing is a matter of products, not process; 2) school writing is an ill-defined domain; 3) the quality of school writing is what observers report they see. Purves discusses these issues while looking at data collected in a ten-year study of achievement in written composition in fourteen countries.
Rathus, S. A. (1987). Psychology . (3rd ed.). Poughkeepsie, NY: Holt, Rinehart, and Winston.
An introductory psychology textbook. Includes overviews of the major movements in psychology, discussions of prominent examples of experimental research, and a basic explanation of relevant physiological factors. With chapter summaries.
Reiser, R. A. (1982). Improving the research skills of instructional designers. Educational Technology, 22 , 19-21.
In his paper, Reiser starts by stating the importance of research in advancing the field of education, and points out that graduate students in instructional design lack the proper skills to conduct research. The paper then goes on to outline the practicum in the Instructional Systems Program at Florida State University which includes: 1) Planning and conducting an experimental research study; 2) writing the manuscript describing the study; 3) giving an oral presentation in which they describe their research findings.
Report on education research . (Journal). Washington, DC: Capitol Publication, Education News Services Division.
This is an independent bi-weekly newsletter on research in education and learning. It has been published since September 1969.
Rossell, C. H. (1986). Why is bilingual education research so bad?: Critique of the Walsh and Carballo study of Massachusetts bilingual education programs . Boston: Center for Applied Social Science, Boston University. (ERIC Working Paper 86-5).
The Walsh and Carballo evaluation of the effectiveness of transitional bilingual education programs in five Massachusetts communities has five flaws, which are discussed in detail.
Rubin, D. L., & Greene, K. (1992). Gender-typical style in written language. Research in the Teaching of English, 26.
This study was designed to find out whether the writing styles of men and women differ. Rubin and Greene discuss the presuppositions that women are better writers than men.
Sawin, E. (1992). Reaction: Experimental research in the context of other methods. School of Education Review, 4 , 18-21.
Sawin responds to Gage's article on methodologies and issues in educational research. He agrees with most of the article but suggests that the concept of "scientific" should not be regarded in absolute terms, and recommends more emphasis on scientific method. He also questions the value of experiments over other types of research.
Schoonmaker, W. E. (1984). Improving classroom instruction: A model for experimental research. The Technology Teacher, 44, 24-25.
The model outlined in this article tries to bridge the gap between classroom practice and laboratory research, using what Schoonmaker calls active research. Research is conducted in the classroom with the students and is used to determine which of two methods of classroom instruction chosen by the teacher is more effective.
Schrag, F. (1992). In defense of positivist research paradigms. Educational Researcher, 21, (5), 5-8.
The controversial defense of the use of positivistic research methods to evaluate educational strategies; the author takes on Eisner, Erickson, and Popkewitz.
Smith, J. (1997). The stories educational researchers tell about themselves. Educational Researcher, 33 (3), 4-11.
Recapitulates the main features of an ongoing debate between advocates for using vocabularies of traditional language arts and whole language in educational research. An "impasse" exists where advocates "do not share a theoretical disposition concerning both language instruction and the nature of research," Smith writes (p. 6). He includes a very comprehensive history of the debate over traditional research methodology and qualitative methods and vocabularies. Definitely worth a read by graduates.
Smith, N. L. (1980). The feasibility and desirability of experimental methods in evaluation. Evaluation and Program Planning: An International Journal , 251-55.
Smith identifies the conditions under which experimental research is most desirable. Includes a review of current thinking and controversies.
Stewart, N. R., & Johnson, R. G. (1986, March 16-20). An evaluation of experimental methodology in counseling and counselor education research. Paper presented at the annual meeting of the American Educational Research Association, San Francisco.
The purpose of this study was to evaluate the quality of experimental research in counseling and counselor education published from 1976 through 1984.
Spector, P. E. (1990). Research Designs. Newbury Park, California: Sage Publications.
In this book, Spector introduces the basic principles of experimental and nonexperimental design in the social sciences.
Tait, P. E. (1984). Do-it-yourself evaluation of experimental research. Journal of Visual Impairment and Blindness, 78, 356-363.
Tait's goal is to provide the reader who is unfamiliar with experimental research or statistics with the basic skills necessary for the evaluation of research studies.
Walsh, S. M. (1990). The current conflict between case study and experimental research: A breakthrough study derives benefits from both . (ERIC Document Number ED339721).
This paper describes a study that was not experimentally designed, but its major findings were generalizable to the overall population of writers in college freshman composition classes. The study was not a case study, but it provided insights into the attitudes and feelings of small clusters of student writers.
Waters, G. R. (1976). Experimental designs in communication research. Journal of Business Communication, 14 .
The paper presents a series of discussions on the general elements of experimental design and the scientific process and relates these elements to the field of communication.
Welch, W. W. (March 1969). The selection of a national random sample of teachers for experimental curriculum evaluation. Scholastic Science and Math , 210-216.
Members of the evaluation section of Harvard Project Physics describe what is said to be the first attempt to select a national random sample of teachers and list six steps for doing so. Cost and comparison with a volunteer group are also discussed.
Winer, B.J. (1971). Statistical principles in experimental design , (2nd ed.). New York: McGraw-Hill.
Combines theory and application discussions to give readers a better understanding of the logic behind statistical aspects of experimental design. Introduces the broad topic of design, then goes into considerable detail. Not for light reading. Bring your aspirin if you like statistics. Bring morphine if you're a humanist.
Winn, B. (1986, January 16-21). Emerging trends in educational technology research. Paper presented at the Annual Convention of the Association for Educational Communication Technology.
This examination of the topic of research in educational technology addresses four major areas: (1) why research is conducted in this area and the characteristics of that research; (2) the types of research questions that should or should not be addressed; (3) the most appropriate methodologies for finding answers to research questions; and (4) the characteristics of a research report that make it good and ultimately suitable for publication.
Barnes, Luann, Jennifer Hauser, Luana Heikes, Anthony J. Hernandez, Paul Tim Richard, Katherine Ross, Guo Hua Yang, & Mike Palmquist. (2005). Experimental and Quasi-Experimental Research. Writing@CSU . Colorado State University. https://writing.colostate.edu/guides/guide.cfm?guideid=64
Detection, linkage to care, treatment and monitoring of hypertension in coastal communities in Accra, Ghana: protocol for a quasi-experimental study (The Ghana Heart Initiative Hypertension Study)
BMJ Open, Volume 14, Issue 11
- Vincent Boima 1,2 (http://orcid.org/0000-0002-0562-6307),
- Charles Hayfron-Benjamin 1,3,
- Alfred Doku 1,2,
- Afua A A Twumasi 4,
- Doris Ottie-Boakye 5,
- Juliette Edzeame Selom 6,
- Charles Agyemang 1,7
- 1 Department of Public and Occupational Health, Amsterdam Public Health Research Institute, Amsterdam UMC, University of Amsterdam, Amsterdam, The Netherlands
- 2 Department of Medicine and Therapeutics, College of Health Sciences, University of Ghana Medical School, Accra, Ghana
- 3 Department of Physiology, College of Health Sciences, University of Ghana Medical School, Accra, Ghana
- 4 Ga South Municipal Health Directorate, Ghana Health Service, Accra, Ghana
- 5 Department of Health Policy Planning and Management, School of Public Health, College of Health Sciences, University of Ghana, Legon, Ghana
- 6 German Development Cooperation House, GIZ Ghana, Ghana Heart Initiative, Accra, Ghana
- 7 Division of Endocrinology, Diabetes, and Metabolism, Department of Medicine, The Johns Hopkins University School of Medicine, Baltimore, Maryland, USA
- Correspondence to Dr Vincent Boima; vincentboima@yahoo.com
Introduction Over the past few decades, the prevalence of hypertension in Ghana has increased significantly. Insufficient diagnosis and suboptimal management of diagnosed cases result in increased mortality and morbidity due to poor blood pressure control and attendant complications. This highlights the need for new models of hypertension control in highly burdened, urban poor communities. This study aims to identify patients with hypertension in the coastal communities of the Greater Accra region, link patients newly diagnosed with hypertension to appropriate medical care and monitor treatment outcomes using task-shifting strategies.
Methods and analysis In this quasi-experimental study, participants with a mean blood pressure of ≥140/90 mm Hg will be recruited from seven coastal communities of Ghana's Greater Accra region. In proportion to the size of these communities, we will screen 10 000 people and recruit 3000 participants across all study sites. We will link the recruited individuals to designated health facilities and follow them for a year, assessing treatment outcomes, blood pressure control and adherence to treatment and collecting anthropometric measurements, funduscopic assessments, urinalysis, blood urea nitrogen and creatinine levels, ECGs and echocardiograms. In addition, we will use mobile health technology to support community screening, blood pressure checks and remote monitoring of patients diagnosed with hypertension, as well as to send messages on medication adherence and lifestyle changes. Furthermore, we will conduct focus group discussions among community members and indepth interviews with persons newly diagnosed with hypertension, community health workers and religious leaders/representatives to assess the knowledge and perceptions of different study participants regarding hypertension diagnosis, management, control, experiences and treatment.
Ethics and dissemination The study was approved by the Ghana Health Service Ethics Review Committee (protocol identification number GHS-ERC 028/08/22). We will obtain written informed consent from each participant. In addition to journal publication, dissemination activities will include a report to the Ghana Health Service on the outcome of the project.
Trial registration number ISRCTN76503336.
- Hypertension
- Preventative Medicine
- Primary Health Care
- Public Health
- Patient Participation
This is an open access article distributed in accordance with the Creative Commons Attribution Non Commercial (CC BY-NC 4.0) license, which permits others to distribute, remix, adapt, build upon this work non-commercially, and license their derivative works on different terms, provided the original work is properly cited, appropriate credit is given, any changes made indicated, and the use is non-commercial. See: http://creativecommons.org/licenses/by-nc/4.0/ .
https://doi.org/10.1136/bmjopen-2023-082954
STRENGTHS AND LIMITATIONS OF THIS STUDY
The key strength of this study is its use of task-shifting with community healthcare providers and of electronic blood pressure monitoring that is based on existing national electronic platforms, an approach that aims to make healthcare easily accessible to patients while ensuring good blood pressure control.
The study will gather data to determine the feasibility and cost-effectiveness of this approach in treating high blood pressure in coastal areas where it is prevalent.
The results of this study will guide the development of the first ever electronic platform for community healthcare workers for collecting clinical data which doctors can evaluate in clinics, potentially improving communication between doctors and community healthcare providers in Ghana.
The study may result in a low number of recorded consultations during the evaluation phase due to unreliable internet connectivity and power supply issues, limiting the conclusions that can be drawn.
Introduction
Hypertension is a major public health problem given its growing prevalence. It is the leading cause of mortality worldwide, with an estimated 10 million deaths annually from hypertension-related complications. 1 Hypertension-associated mortality is largely due to stroke, heart failure, chronic kidney disease and peripheral vascular disease. 2–4 The number of people with hypertension in Africa increased from 54.6 million in 1990 to 130.2 million in 2010, with projections indicating it will reach 216.8 million by 2030. 5 In Sub-Saharan Africa (SSA), the prevalence of hypertension in adults ranges from 30.0% to 31.1% 6 7 ; the reported prevalence of hypertension in Ghanaians aged 31–74 years is 30.3%. 8 The burden of hypertension in Africa remains high despite lifestyle changes and medical interventions aimed at preventing and controlling it. 9 Hypertension-related morbidity and mortality tend to affect the economically active age group; as a result, hypertension causes severe economic hardship for many families in Africa. 10 Urban poor and suburban communities, characterised by intake of energy-dense foods and sedentary lifestyles, bear a disproportionately higher burden of hypertension and its complications. 11
A study in coastal areas of Indonesia showed a hypertension prevalence of 6.45%–51.1%, with the prevalence of prehypertension ranging from 26.5% to 39.75%; 12 in Indonesia's estuarine areas, the prevalence is 25.3%. 12 High sodium intake is strongly associated with an increased risk of hypertension in these areas. A recent study assessing potassium and salt intake among Ghanaians using the WHO SAGE wave 3 data showed that more than three-quarters (77.7%) of Ghanaians had salt intake above the WHO maximum recommended level of 5 g/day, with 39% consuming more than twice this level. 13 Furthermore, the study revealed that nearly two-thirds had daily potassium intake below the recommended level of 90 mmol/day. 13 There has been a drastic change in Ghanaians' nutritional behaviour, with most people switching from healthy local diets to energy-dense, nutrient-poor, processed fast foods such as instant noodles and salty snacks, among other things. 14 15 Furthermore, regular consumption of highly salted foods, such as fish and meat, is still part of traditional Ghanaian cuisine, and use of salt in cooking remains high. 16 17 Fishing is the main occupation of people living along Ghana's coast, and salting, drying and smoking are the main methods of preserving fish.
Undiagnosed hypertension and suboptimal blood pressure control are of clinical and public health significance, as they result in increased morbidity and mortality, including end-organ damage and sudden death. 18 Early diagnosis and optimal blood pressure control, often achieved through pharmacological and/or lifestyle modification, are key to preventing hypertension-related morbidity and mortality. Such an intervention entails identifying the reasons for poor hypertension control, including non-compliance with antihypertensives. Existing data show that a myriad of factors account for this non-compliance, including forgetfulness, long travel distances to health facilities, long waiting times in clinics and inadequate funds for transportation to clinics for clinical reviews. 19 Given the increasing cases of hypertension in SSA and poorly resourced health facilities, morbidity and mortality are expected to worsen if implementation gaps in hypertension control are not identified and addressed. Thus, the primary goal of this study is to identify individuals with hypertension in coastal communities and link them to care. The study also aims to use mobile technology, via text messages, to remind patients with hypertension to visit Community-based Health Planning and Services (CHPS) compounds and community pharmacists for blood pressure monitoring, medication refills and adherence to the treatment plan outlined by the physician (task-shifting). Task-shifting is the rational redistribution of tasks within the healthcare team, in which specific tasks are moved, where appropriate, to members with shorter training and fewer qualifications. 20 This organisational technique has been promoted as a significant strategy for improving health system performance, particularly in resource-deprived settings, taking into account any cultural norms that may exist within a country. 20 21 Policymakers will then have an alternative to the current intervention, enabling them to determine what is cost-effective and feasible with limited resources and thereby guiding the process of scaling up in the real-world context. Furthermore, future programmes and policies will have the evidence needed to select this intervention or to create programmes with enhanced effectiveness and efficiency.
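As a concrete illustration of the mobile reminder component, the sketch below generates approximate monthly reminder dates over the follow-up year. This is a hypothetical Python sketch: the Participant class, identifier, phone number and message wording are invented here and do not represent the study's actual messaging system.

```python
# Minimal sketch (hypothetical, not the study's actual system) of generating
# monthly blood pressure check reminders for an enrolled participant.
from dataclasses import dataclass
from datetime import date, timedelta

@dataclass
class Participant:
    participant_id: str
    phone: str
    enrolled_on: date

def reminder_dates(p: Participant, months: int = 12) -> list[date]:
    """Approximate monthly reminders (every 30 days) over the follow-up year."""
    return [p.enrolled_on + timedelta(days=30 * m) for m in range(1, months + 1)]

p = Participant("GHI-0001", "+233200000000", date(2023, 2, 27))
for d in reminder_dates(p, months=3):
    print(f"Send SMS to {p.phone} on {d}: 'Please visit your CHPS compound "
          f"for your blood pressure check and medication refill.'")
```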
Methods and analysis
This will be a multicentre, community-based, quasi-experimental and qualitative study designed to identify persons with hypertension in seven coastal communities in six districts/municipal and metropolitan areas of the Greater Accra region of Ghana, link them to appropriate medical care, explore their perceptions of hypertension and its management, and follow them for 1 year. The study will commence on 27 February 2023 and end on 4 October 2024.
Study setting
In Ghana, the Greater Accra region has the highest share of the national population, accounting for 17.7% of the total. 22 According to the 2021 Ghana Population and Housing Census, the Greater Accra region has 5 455 692 people. The region is 91.7% urbanised and has an average household size of 3.2. Due to immigration and a high population growth rate, this region has the highest population density in the country, and it has grown from 10 administrative areas in 2010 to approximately 29 today. The region covers 3245 km², or 1.4% of Ghana's total land area. It is bordered by the Central Region to the west, the Volta Region to the east and the Eastern Region to the north, with the Gulf of Guinea forming its coastal areas and Accra serving as the regional capital 23 ( figure 1 ). The region hosts several manufacturing industries, oil companies, financial institutions and telecommunication, tourism, education and health institutions. Nevertheless, the main economic activity in the coastal zone of Ghana is fishing, and the lives of the majority of residents revolve around this industry. The sector accounts for 4.5% of Ghana's gross domestic product, employs an estimated two million people and produces a total of 440 000 metric tons of fish annually. 24 In many coastal areas, the fishery sector drives the local economy, and a decline in the sector's ability to provide employment and income threatens the very survival of community members. People living in the coastal zone of Ghana, such as in Chorkor, Teshie-Nungua and Jamestown, are particularly subject to poverty because of their vulnerability to climatic and non-climatic shocks. Many of the resources that the urban poor depend on are common pool resources: while this means they are able to use resources such as fish, it also means that a decline in these resources leads to an increase in the incidence of poverty and vulnerability among the coastal people. 25 Even so, the region records the highest proportion of persons aged 12 years and older who own functional information and communication technology devices: 89.2% had smart mobile phones compared with the national average of 73.1%, while ownership of non-smart mobile phones (27.9%) was 8 percentage points below the national average of 35.9% 26 ( figure 2 ).
Figure 1. Study areas in the coastal belt of Accra in the Greater Accra region.
Figure 2. Organogram defining the study procedure and outcome measurements. *To be identified and linked to local healthcare facilities for care and follow-up.
Study participants
The study population will be permanent residents aged 18 years and older whose mean of three blood pressure readings is ≥140/90 mm Hg at the time of recruitment in the eight study sites in seven coastal communities of the Greater Accra region in Ghana. The inclusion criteria are: (1) resident of the Greater Accra region of Ghana; (2) resident of the selected urban coastal communities; (3) aged 18 years or older as of last birthday; (4) no intention of relocating outside the study communities prior to enrolment or during the study period; (5) a mean of three blood pressure readings of ≥140/90 mm Hg at the time of recruitment; (6) newly diagnosed hypertension; and (7) willingness to participate and ability to give consent. The exclusion criteria are: (1) not a resident of the Greater Accra region and the selected coastal communities; (2) intention to travel or relocate before or during the study period; (3) known hypertension already being treated with medication; (4) known end-organ dysfunction on treatment; and (5) unwillingness to participate or failure to give consent.
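To make the recruitment rule concrete, here is a minimal eligibility check. This is an illustrative sketch, not study software; it reads the ≥140/90 mm Hg threshold as mean systolic ≥140 or mean diastolic ≥90, the conventional interpretation, and the function and argument names are invented.

```python
# Illustrative eligibility check (a sketch, not the study's software).
from statistics import mean

def eligible(sbp_readings, dbp_readings, age, newly_diagnosed, consents):
    """Apply the core inclusion criteria to three blood pressure readings."""
    if len(sbp_readings) != 3 or len(dbp_readings) != 3:
        raise ValueError("exactly three readings are required")
    # Mean systolic >= 140 or mean diastolic >= 90 mm Hg counts as hypertensive.
    high_bp = mean(sbp_readings) >= 140 or mean(dbp_readings) >= 90
    return high_bp and age >= 18 and newly_diagnosed and consents

print(eligible([150, 144, 147], [92, 88, 95], age=45,
               newly_diagnosed=True, consents=True))  # True
```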
Sample size and sampling procedure
A recent systematic review and meta-analysis reported a pooled hypertension prevalence of 30.0% in SSA. 6 A more recent systematic review and meta-analysis on the prevalence of hypertension in Ghana reported a pooled prevalence of 30.3%. 8 Assuming that 30% of the population has high blood pressure, 6 8 we plan to screen a total of 10 000 people in the community to identify 3000 participants whose mean of three blood pressure readings is 140/90 mm Hg or higher at the time of recruitment. We will sample and recruit the expected 3000 participants from the eight study sites in proportion to their respective populations from the 2021 Population and Housing Census. 27
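The arithmetic behind these figures can be checked directly. This is a back-of-envelope sketch under the protocol's stated 30% prevalence assumption, nothing more:

```python
# Expected number needed to screen to yield 3000 eligible participants,
# assuming the pooled 30% hypertension prevalence cited above.
prevalence = 0.30
target_participants = 3000

expected_screens = target_participants / prevalence
print(f"Expected number to screen: {expected_screens:.0f}")  # 10000
```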
The population of each study site's district, municipality or metropolitan area determines the number of people to sample and recruit there, as shown in table 1. We will use a stratified two-stage cluster sampling method to ensure the results accurately reflect the eight coastal districts of Accra. The sampling frame for the study will leverage the updated frame prepared by the Ghana Statistical Service based on the 2021 Population and Housing Census. In the first stage, we will select clusters within the eight districts from the sampling frame with probability proportional to the size of each community. Following this, systematic random sampling will be used to select 82 clusters with equal probability from those selected in the first stage. We will then conduct household listing and map-updating operations in all selected clusters to create a comprehensive list of all households within each cluster, and use this list to select the households that will comprise the final sample. Within the enumeration areas of the selected clusters, we will select one in three households using a systematic random sampling method. Once a household is selected, all adults aged 18 years and older within it who satisfy the inclusion criteria will be recruited.
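The sketch below illustrates the two sampling stages in simplified form. The community names, population sizes and household IDs are invented, and this is not the Ghana Statistical Service frame: stage 1 selects clusters with probability proportional to size (PPS) via systematic sampling on cumulative population totals, and stage 2 takes every third listed household with a random start.

```python
# Illustrative two-stage selection (a sketch under invented data).
import random

def pps_systematic(clusters: dict[str, int], n: int) -> list[str]:
    """Select n clusters with probability proportional to population size."""
    names = list(clusters)
    cumulative, total = [], 0
    for name in names:
        total += clusters[name]
        cumulative.append(total)
    step = total / n                     # sampling interval on the cumulative scale
    start = random.uniform(0, step)      # random start within the first interval
    chosen, idx = [], 0
    for k in range(n):
        point = start + k * step
        while cumulative[idx] < point:   # find the cluster containing this point
            idx += 1
        chosen.append(names[idx])
    return chosen

def every_third_household(household_ids: list[str]) -> list[str]:
    """Systematic 1-in-3 household selection with a random start."""
    start = random.randrange(3)
    return household_ids[start::3]

frame = {"Chorkor": 42000, "Jamestown": 35000, "Teshie": 88000}  # made-up sizes
print(pps_systematic(frame, n=2))
print(every_third_household([f"HH-{i:03d}" for i in range(1, 10)]))
```

Note that larger communities occupy a wider span of the cumulative scale, so they are proportionally more likely to contain a selection point, which is the essence of PPS sampling.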
Table 1. Expected number of participants to be screened and followed for evaluation
During the qualitative phase of the study, we will select community health workers (CHWs), patients with hypertension, religious leaders or representatives, and community members (men and women) for indepth interviews (IDIs) and focus group discussions (FGDs) using a purposive sampling method. A total of 14 FGDs, each with six to eight participants, will be held across the eight study sites, with two FGDs per site. Each study site will also host eight IDIs, including two health workers involved in non-communicable disease (NCD) service provision, two individuals newly diagnosed with hypertension and two community representatives ( table 2 ).
Table 2. Number of study participants by study site and by type of study population
Community engagement
Permission to conduct the study in the various districts will be sought from the Regional Director of Health Services and the directors of the various intervention districts. A meeting will be held with district directors and the district directors of nursing from the implementation districts to inform and educate them about the screening, evaluation and follow-up strategies for the intervention. The district directors will also inform and educate the subdistrict public health nurses and community health nurses (CHNs) on the intervention strategies. In addition, they will engage with opinion leaders and members of the community health management committee, as well as leaders of mosques and churches, to provide them with a detailed explanation of the intervention, emphasise its significance and request their support. Subsequently, each district will launch the project in a community near the facility chosen for the evaluation. At this event, the intervention will be presented to the community, and education about hypertension and its implications will be provided, including a video presentation on hypertension and its associated problems. The invited attendees will include community health committee members, influential figures, the assembly representative and the chief or queen mother. Stakeholders will be enlisted in the subdistrict where the intervention will occur; on the designated meeting day, the CHNs will meet with different stakeholders in their respective zones. Each district will select a radio station with a significant listenership among residents of the neighbourhoods to provide educational information about the intervention. Additional information about the training programme can be found in online supplemental file 1.
Supplemental material
Training of fieldworkers
To ensure standardisation and accuracy of the data collected, all fieldworkers will be trained in all aspects of the study, including blood pressure measurement, participant recruitment and the participant consenting process. The training will be conducted in four clusters, involving all implementing districts, the evaluation centre staff and the subdistrict teams. Fieldworkers will include CHWs, pharmacy staff in community pharmacies, medical doctors, nurses and laboratory scientists. They will assist in administering the questionnaires and in collecting physiological measurements and blood samples. Specialist cardiologists will conduct the ECGs and echocardiograms, while specialist ophthalmologists and trained technicians will perform the funduscopic assessments. Details are available in online supplemental files 1 and 2.
Data collection methods and procedures
Before the start of the project, community sensitisation will be carried out to get the support of the public, community members and their leaders/gatekeepers. Information about the study and their role as potential participants will also be made clear to ensure full participation and cooperation. There will be social mobilisation by stakeholders in each community to create awareness about the research.
Data collection will be carried out by trained health personnel and the project team using a hybrid method, that is, in person and on the phone, where possible, after obtaining informed consent. While both baseline and endline data collection will be conducted in person, the cohort study component of this project may use other methods, such as phone calls for follow-up with study participants, to ensure proper monitoring and adherence to study protocol. Throughout this process, participants’ privacy and confidentiality will be guaranteed. Data collection will take an average of 50 min. Although all study participants will be recruited from their homes, interviews will be conducted at places convenient for them. Expert opinion and pretesting will be used to validate data collection tools.
Quantitative data in the form of records and logs will be generated from baseline interviews and from screening and monitoring of blood pressure, including clinical and laboratory information, as shown in table 3 and figure 1. We will deploy CHWs into the various communities to conduct initial screenings during late afternoons and weekends, when a significant portion of the coastal community's target population is available. We will recruit community members who meet the criteria and provide informed consent for the study. We will also train community pharmacies, workers in CHPS compounds and chemical sellers within the study areas to recruit individuals who meet the inclusion criteria. These agents will be responsible for identifying persons with hypertension in the communities, collecting their demographic data and linking them to a designated health facility for appropriate medical care. Cases identified and linked to a health facility will have baseline demographic, clinical and laboratory information collected ( table 3 ), which we will record in a database for further comparative analysis. The health facilities will initiate antihypertensive medications and lifestyle modification according to cardiovascular disease guidelines. Participants will be followed up at 6 and 12 months to monitor compliance with medication and lifestyle changes, the treatment rate, hypertension control and detection of target organ damage ( table 3 ).
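For illustration, a follow-up summary of the kind implied here might compute the control rate at each evaluation point. This is a sketch: the record layout is an assumption, and control is defined here as blood pressure below 140/90 mm Hg.

```python
# Illustrative follow-up summary (a sketch; field names are invented).
def control_rate(visits: list[dict]) -> float:
    """Proportion of treated participants with BP below 140/90 mm Hg.

    visits: one record per participant, e.g. {"sbp": 136, "dbp": 84}.
    """
    controlled = sum(1 for v in visits if v["sbp"] < 140 and v["dbp"] < 90)
    return controlled / len(visits)

month6 = [{"sbp": 136, "dbp": 84}, {"sbp": 150, "dbp": 92}, {"sbp": 128, "dbp": 78}]
print(f"6-month control rate: {control_rate(month6):.0%}")  # 67%
```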
Table 3. Assessments during screening and scheduled visits
Trained research assistants will collect the qualitative data. They will use semistructured interview guides to conduct individual face-to-face IDIs with all study participants, as well as FGDs with community members. All IDIs will be conducted one-on-one in an enclosed area, away from others, to ensure privacy and confidentiality. Participants in the FGDs will also be urged to respect each other's confidentiality. Discussions and interviews will take approximately 90 min and 50 min, respectively. Permission will be sought from both FGD and IDI participants to record the discussions and interviews on a digital recorder; after transcription, the recordings will be permanently deleted. All discussions and interviews will be conducted at locations convenient for the participants. Expert opinion and pretesting will validate both data collection tools. Information collected will include demographic and socioeconomic characteristics; knowledge, attitudes, practices and experiences regarding hypertension; knowledge about hypertension diagnosis and treatment; support systems; and the impact of hypertension on life and family.
Screening visit
This initial meeting with community participants will include an assessment of hypertension status. Those who meet the inclusion criteria will be recruited after informed consent is obtained. They will then be directed to the evaluation centre (a health facility staffed by a medical doctor) for baseline data collection using a structured electronic instrument that captures demographic, clinical and laboratory data, including blood and urine samples ( table 3 ).
Monitoring and follow-up at the community level
CHWs, together with pharmacists, chemical sellers, medical doctors and nurses, will conduct monthly monitoring at the community level. Figure 2 summarises the main activities; the monitoring and evaluation plan for data collectors is available in online supplemental file 3 .
Linkage to care
Participants will be linked to nearby community pharmacies and chemical sellers for medication refills and blood pressure monitoring. This reduces the distance participants must travel to health facilities for follow-up visits and allows monthly blood pressure checks at these sites.
Study visits
The CHWs in the various communities will conduct monthly follow-up visits to reinforce compliance with antihypertensive drugs, as well as perform weekly blood pressure checks. The CHWs will also use these visits for health education and to encourage community members to register for the National Health Insurance Scheme, which gives participants access to health services at a subsidised fee; reducing participants' financial burden will help maintain continuity of health facility visits and improve medication monitoring and refilling at the community pharmacy. The CHWs will record blood pressure readings in the electronic data system, allowing doctors at the evaluation centre to review them during scheduled physician follow-up visits ( online supplemental file 4 ).
Evaluation visits
This is the follow-up encounter with participants after the initial screening and baseline data collection. The following data will be collected at the 6- and 12-month visits to evaluate hypertension-mediated target organ damage (HMTOD): self-reported stroke, electrocardiography/echocardiography, kidney function tests including the urinary albumin to creatinine ratio, and funduscopy ( table 3 ). Patients will continue their usual standard care and follow-up visits with their doctors. At each visit, 5 mL of blood will be collected into EDTA tubes and stored in a −80°C freezer for analysis of APOL1 genetic polymorphisms, to determine their association with high blood pressure, and 5 mL of early morning midstream urine will be stored in a −80°C freezer for assessment of measures including the urine sodium to creatinine, albumin to creatinine and potassium to creatinine ratios.
Electronic monitoring of blood pressure
All participants diagnosed with hypertension at screening will be directed to a community-based clinic for evaluation and follow-up; the long-term goal is to enrol these individuals in regular care at wellness clinics situated within or close to the community. They will be registered in a healthcare delivery model that integrates digital technology. The District Health Information Management System (DHIMS) with electronic tracker (e-tracker) services will support community screening, blood pressure checks and remote monitoring of patients diagnosed with hypertension, with personalised follow-up generated automatically. The e-tracker is an electronic application that records client transactions and tracks the care continuum using an electronic healthcare register; the US Agency for International Development 28 has launched this application and made it accessible at the subdistrict level in Ghana's community health facilities. The Policy, Planning, Monitoring and Evaluation Division of the Ghana Health Service designed the e-tracker for this project and integrated it into the DHIMS electronic platform, which is accessible at all Ghana Health Service facilities. This will ensure longitudinal, data-driven care, timely feedback and easier access to healthcare.
Assessment of adherence
We will adopt the Hill-Bone Compliance to High Blood Pressure Therapy Scale 29 to measure participant adherence, aiding process monitoring and evaluation; this tool was chosen because it has been validated for hypertension control. 29 We will divide patients at each health facility into two groups: an intervention arm and a control arm. The control arm will receive standard care, including education on lifestyle changes and usual antihypertensive medication. The intervention arm will receive the same standard care plus mobile health (mHealth) messages delivered through the DHIMS e-tracker. Messages to the intervention group will cover healthy lifestyle choices, including fruit and vegetable consumption, a low-salt diet, smoking cessation, alcohol reduction and moderate exercise for at least 30 min a day, 5 days a week, as well as reminders to take antihypertensive medications. These messages will be sent twice a week throughout the 12-month study period, and the mHealth technology will indicate whether the participant has received and read them. Both the control and intervention arms will be followed up at the 6th and 12th months of the study. We will divide the seven study sites into two clusters and randomly assign them to either the intervention arm or the usual standard care arm by simple random sampling, carried out through balloting.
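The cluster assignment described above can be illustrated with a short Python sketch; the site names are placeholders (the seven sites are not listed in this section), and the balloting is modelled as a seeded shuffle followed by a split into two clusters.

```python
import random

# Placeholder names: the seven actual study sites are not listed in this section.
sites = [f"Site_{i}" for i in range(1, 8)]

def ballot_assignment(sites, seed=None):
    """Model the balloting as simple random sampling: shuffle the sites, then
    split them into two clusters, one per arm. Seven sites split 4 vs 3."""
    rng = random.Random(seed)
    shuffled = list(sites)
    rng.shuffle(shuffled)
    k = (len(shuffled) + 1) // 2  # 4 of the 7 sites
    return (
        {s: "intervention (standard care + mHealth)" for s in shuffled[:k]}
        | {s: "control (standard care)" for s in shuffled[k:]}
    )

print(ballot_assignment(sites, seed=2024))
```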
Clinical measurements and definitions
Blood pressure will be measured three times with an Omron blood pressure monitor (Detronix Biomedical Technologies), with the participant seated after at least 5 min of rest and an appropriately sized cuff on the left upper arm; the mean of the last two measurements will be recorded. Hypertension will be defined based on the 2018 European Society of Cardiology guidelines: 'SBP values ≥140 mmHg and/or diastolic BP (DBP) values ≥90 mmHg and known hypertensives already on medication'. 30 Body weight and height will be measured (Lifecare USA stadiometer) with participants barefoot and in light clothing, to the nearest 0.1 kg and 0.1 cm, respectively. Waist circumference (WC) and hip circumference (HC) will be measured with a tape measure in the standing position, to the nearest 0.1 cm: WC at the midpoint between the lowest rib and the highest point of the iliac crest, and HC at the point of maximal gluteal protrusion. A body mass index of >30 kg/m², or a WC >102 cm in men and >88 cm in women, will be considered obesity. We will assess psychological distress using the Kessler Psychological Distress Scale (K10), which consists of 10 questions about emotional states, each with a five-level response scale. 31 Albuminuria of ≥300 mg per 24 hours, a random protein to creatinine ratio of ≥0.5 g/g or a random urine albumin to creatinine ratio of >30 mg/mmol will be considered clinically significant. We will also assess adherence to lifestyle recommendations: salt intake below 5 g/day; alcohol consumption below 14 units per week for men and 8 units per week for women; increased consumption of vegetables, fruits and nuts; low consumption of red meat; and consumption of low-fat dairy products. Left ventricular hypertrophy will be defined using the Sokolow-Lyon criteria.
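The measurement and classification rules above translate directly into code. The following is a minimal Python sketch implementing the stated definitions (mean of the last two of three readings; the ESC blood pressure thresholds; the BMI and waist circumference cut-offs); function names and the example readings are illustrative only.

```python
def mean_bp(readings):
    """Per the protocol, BP is measured three times and the mean of the last
    two readings is recorded. Each reading is (systolic, diastolic) in mmHg."""
    assert len(readings) == 3, "protocol specifies three measurements"
    last_two = readings[-2:]
    sbp = sum(r[0] for r in last_two) / 2
    dbp = sum(r[1] for r in last_two) / 2
    return sbp, dbp

def is_hypertensive(sbp, dbp, on_medication):
    """2018 ESC definition used by the study: SBP >= 140 and/or DBP >= 90 mmHg,
    or a known hypertensive already on medication."""
    return sbp >= 140 or dbp >= 90 or on_medication

def is_obese(weight_kg, height_m, waist_cm, male):
    """Obesity as defined in the protocol: BMI > 30 kg/m^2, or waist
    circumference > 102 cm (men) / > 88 cm (women)."""
    bmi = weight_kg / height_m ** 2
    waist_cutoff = 102 if male else 88
    return bmi > 30 or waist_cm > waist_cutoff

# Example: readings 150/95, 144/92, 142/90 -> mean of last two = 143/91 -> hypertensive.
sbp, dbp = mean_bp([(150, 95), (144, 92), (142, 90)])
print(sbp, dbp, is_hypertensive(sbp, dbp, on_medication=False))
```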
Outcome variables
The main outcome measures are the control of hypertension and the incidence of HMTOD in urban coastal communities in Accra. Secondary outcome measures include the prevalence of hypertension at baseline, the proportion of patients with hypertension linked to care in an urban coastal community in Accra, the medication adherence rate among patients with hypertension linked to care, and the perceptions and lived experiences of community members regarding hypertension and its management.
Future plans
The current project, which is limited to the coastal areas of the Greater Accra region, will be expanded to other regions of the country. We also plan to follow up patients with hypertension in these communities to determine their long-term adherence to lifestyle changes and medications and to assess control and treatment rates over time. Ensuring continuity of care in these communities and preventing or delaying the development of HMTOD is likewise part of the Ghana Heart Initiative's long-term goal. As part of this project, we will store samples for future analysis of the association between APOL1 polymorphisms and hypertension with or without proteinuria.
Data management and analysis
Quantitative data analysis
Both paper-based and electronic data sets will be collated in Microsoft Excel for management and checked for completeness, blanks and inconsistencies. Data will then be exported into SPSS (version 26) and Stata (version 18) for analysis. Analyses will comprise descriptive and inferential statistics. The characteristics of the intervention and control arms will be compared using the χ2 or Fisher's exact test of independence, the Mann-Whitney U test for rank-ordered variables and the independent t-test, for the overall outcome and by cluster. To compare baseline and endline variables within each arm, McNemar's χ2/binomial test, the sign test/Wilcoxon signed-rank test and the paired t-test will be used for proportions, rank-ordered variables and measured variables, respectively. The difference-in-differences method will be employed to estimate the effect of the intervention on hypertension control. Factor analysis, a multivariate data reduction technique, will be used to identify the number of distinct dimensions of medication adherence.
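As an illustration of the difference-in-differences analysis, the following Python sketch (using pandas and statsmodels) estimates an intervention effect on a binary hypertension-control outcome from simulated two-period data; the variable names, simulated effect sizes and participant-level clustering are assumptions for demonstration, not the study's analysis code.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)

# Simulated illustration: 200 participants per arm, each observed at baseline
# (post=0) and endline (post=1); outcome is 1 if blood pressure is controlled.
n_per_arm = 200
treated = np.repeat([0, 1], n_per_arm * 2)    # 0 = standard care, 1 = + mHealth
post = np.tile([0, 1], n_per_arm * 2)         # baseline/endline pair per participant
pid = np.repeat(np.arange(2 * n_per_arm), 2)  # participant identifier

# Assumed effects for the simulation only: 20% controlled at baseline, a 10-point
# secular improvement, and a 15-point additional effect of the intervention.
p_controlled = 0.20 + 0.10 * post + 0.15 * treated * post
controlled = rng.binomial(1, p_controlled)

df = pd.DataFrame({"controlled": controlled, "treated": treated,
                   "post": post, "pid": pid})

# The coefficient on treated:post is the difference-in-differences estimate of
# the intervention effect (linear probability model, SEs clustered by participant).
did = smf.ols("controlled ~ treated + post + treated:post", data=df).fit(
    cov_type="cluster", cov_kwds={"groups": df["pid"]}
)
print(did.params["treated:post"], did.bse["treated:post"])
```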
Qualitative data analysis
After data collection is completed, the recorded IDIs and FGDs will be transcribed by trained personnel. IDIs and FGDs conducted in the local language will first be translated into English and then transcribed following best practices in qualitative research, 32 while those conducted in English will be transcribed directly. Manual transcription often yields greater accuracy and deeper immersion in the data than transcription software, whose output can be degraded by poor audio quality, heavy accents or technical jargon. A clean verbatim transcription approach will be used in this study. 33 This approach focuses on the content of the interaction with participants rather than the style or manner of speech: every word spoken is captured, but stutters, repetitions, false starts and filler words are omitted, giving a cleaner, more readable transcript.
After the transcripts are completed, each will be quality-checked by the study team members for accuracy and readability. This will be done by cross-checking the recording with the transcribed data to ensure the transcript best reflects the interaction with the study participants. This process may include corrections and refining the transcript until it accurately represents the interview or group discussion. All transcripts will be anonymised, appropriately labelled and stored on a password-protected device which can be accessed only by the study team members.
The transcribed data will be analysed using a thematic analysis approach, which involves generating codes and organising them into themes. 34 Initial coding will use both deductive and inductive approaches: deductive codes will be derived from existing studies conducted in Ghana and other African countries with similar contexts, while inductive codes will be derived from unexpected insights in the data that can be linked to the study's objectives. The next stage of the analysis will link the generated codes into organising themes informed by the study's objectives. 35 The codes and themes will be discussed among the study team to validate the coding and overall analysis, and representative quotes that best capture the codes will be presented for illustration. The analysis will be facilitated by the qualitative software package ATLAS.ti.
Ethics and dissemination
The Ghana Health Service Ethics Review Committee approved the study (protocol identification number GHS-ERC 028/08/22), and the study has been registered with the ISRCTN registry (ISRCTN76503336). We will conduct the study in accordance with the 1964 Declaration of Helsinki and its subsequent revisions. Only individuals who meet the eligibility criteria will be recruited. We will adequately inform participants about the purpose, nature, procedures and potential risks of the study, and emphasise anonymity, confidentiality and the freedom to decline participation or to withdraw without penalty. The electronic data collection system (DHIMS) secures the collected data, and only the study biostatistician and investigators have access. Written informed consent will be obtained from each participant ( online supplemental file 5 ). Dissemination activities, in addition to journal publication, will include a report to the Ghana Health Service on the project's outcome, outlining how the project can enhance efforts to prevent and reduce hypertension-associated morbidity and mortality in these communities.
The study combines a cross-sectional baseline assessment, a quasi-experiment and a qualitative analysis; because allocation is not randomised, a definitive causal relationship between the intervention and the study outcomes cannot be established. Nevertheless, in the absence of a randomised experiment, this design provides a high level of evidence. Additionally, the findings from the FGDs and IDIs may not generalise to other urban coastal communities, but they will provide relevant information on hypertension control from the perspectives of health workers, patients with hypertension and community members.

We also foresee other challenges in carrying out this project. First, the data collection period may be prolonged by variations in community dynamics across these urban coastal communities; the study team will collaborate closely with key gatekeepers at the different study sites to ensure a smooth start and successful recruitment of participants. The team also acknowledges the sensitivity surrounding illness in Ghanaian culture, as in other cultures. Nevertheless, many Ghanaians freely discuss their health conditions with their primary caretakers and health providers, and we anticipate that participants will use such opportunities, when suitable, to share the knowledge gained from the community health education. All core research activities surrounding formal recruitment, such as obtaining voluntary informed consent and meeting other mandatory ethical requirements, will strictly involve trained, experienced and qualified members of the research team, and qualified clinicians on the research team will exclusively handle all clinical aspects of the study. To reduce the likelihood of loss to follow-up or withdrawal, the study team will continue to highlight the expected positive outcomes of the study to participants.

Furthermore, this study will provide community-level information on hypertension and its management from the diverse perspectives of healthcare workers, opinion leaders and patients with hypertension. This will guide the introduction of community-specific interventions to detect hypertension early and introduce cost-effective treatments that can reduce the burden of hypertension-associated target organ damage. The results will also assist policymakers in improving the use of task-shifting approaches to treat patients in these coastal communities.
Ethics statements
Patient consent for publication
Not required.
- Flaxman AD , et al
- Rapsomaniki E ,
- George J , et al
- Lewington S ,
- Qizilbash N , et al
- Kobeissi E , et al
- Adeloye D ,
- Ataklte F ,
- Kaptoge S , et al
- Stranges S , et al
- Atibila F ,
- Donkoh ET , et al
- Ferdinand KC
- Steven van de V ,
- Samuel O , et al
- Twinamasiko B ,
- Lukenge E ,
- Nabawanga S , et al
- Menyanu EK ,
- Minicuci N , et al
- Monteiro CA ,
- Moubarac JC ,
- Cannon G , et al
- Tschirley D ,
- Asante SB , et al
- Micah FB , et al
- Muntner P ,
- Bosworth HB , et al
- Mukumbang FC ,
- Sayampanathan AA ,
- Pin PH , et al
- Bandyopadhyay D ,
- Hajra A , et al
- Ghana Statistical Service (GSS)
- Ghana Statistical Service
- Agyemang C ,
- Smeeth L , et al
- United States Agency for International Development (USAID)
- Bone LR , et al
- Genovese G ,
- Friedman DJ ,
- Ross MD , et al
- Kessler RC ,
- Barker PR ,
- Colpe LJ , et al
- Helmich E ,
- Cristancho S ,
- Diachun L , et al
- Oluwafemi A ,
- Dlamini N , et al
- Attride-Stirling J
Supplementary materials
Supplementary data
This web only file has been produced by the BMJ Publishing Group from an electronic file supplied by the author(s) and has not been edited for content.
- Data supplement 1
- Data supplement 2
- Data supplement 3
- Data supplement 4
- Data supplement 5
Contributors VB and AD were responsible for the concept and write-up of the study. CH-B, AAAT, JES and DO-B were responsible for drafting and editing the manuscript. CA supervised and edited the final manuscript. JES is the guarantor.
Funding This work was supported by Bayer/GIZ (grant number 68.3025.1-001).
Map disclaimer The inclusion of any map (including the depiction of any boundaries therein), or of any geographic or locational reference, does not imply the expression of any opinion whatsoever on the part of BMJ concerning the legal status of any country, territory, jurisdiction or area or of its authorities. Any such expression remains solely that of the relevant source and is not endorsed by BMJ. Maps are provided without any warranty of any kind, either express or implied.
Competing interests None declared.
Patient and public involvement Patients and/or the public were not involved in the design, or conduct, or reporting, or dissemination plans of this research.
Provenance and peer review Not commissioned; externally peer reviewed.
Supplemental material This content has been supplied by the author(s). It has not been vetted by BMJ Publishing Group Limited (BMJ) and may not have been peer-reviewed. Any opinions or recommendations discussed are solely those of the author(s) and are not endorsed by BMJ. BMJ disclaims all liability and responsibility arising from any reliance placed on the content. Where the content includes any translated material, BMJ does not warrant the accuracy and reliability of the translations (including but not limited to local regulations, clinical guidelines, terminology, drug names and drug dosages), and is not responsible for any error and/or omissions arising from translation and adaptation or otherwise.
The carbon emission reduction effect of the digital economy: an empirical study based on the Chinese National Big Data Comprehensive Experimental Zone
Shanyong Wang & Rongwei Zhang (ORCID: orcid.org/0009-0008-7082-0924)
The digital economy is progressively emerging as a new driving force behind economic and social advancement, yet whether it can drive green transformation still requires further exploration. Drawing on data from 275 prefecture-level cities in China from 2010 to 2019, we employ the time-varying difference-in-differences (DID) method to investigate the influence of the digital economy on regional carbon emissions. Treating the Chinese National Big Data Comprehensive Experimental Zone (NBDCEZ) as a quasi-natural experiment in regional digital economy development, we aim to shed light on the relationship between these two sectors and their impact on environmental sustainability. The results reveal that the construction of the NBDCEZ reduces regional CO2 emissions by about 14.4%, with the value of the emission reduction estimated at about 8.5 × 10^7 USD. The mechanism analysis indicates that the NBDCEZ achieves this emission reduction by stimulating green innovation, mobilizing public environmental participation and cutting energy consumption. Heterogeneity analysis suggests that the NBDCEZ has a notable carbon-reducing effect in eastern regions, non-resource-based cities, larger cities and regions with better digital infrastructure. The findings provide empirical insights for achieving energy saving and emission control goals.
Data availability
Data will be made available on request.
This work was supported by the National Natural Science Foundation of China (grant numbers 71974177 and 72374187).
Author information
Authors and affiliations
School of Management, University of Science and Technology of China, No. 96, Jinzhai Road, Hefei, Anhui Province, 230026, People’s Republic of China
Shanyong Wang & Rongwei Zhang
Corresponding author
Correspondence to Rongwei Zhang .
Publisher's note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Rights and permissions
Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.
About this article
Wang, S., Zhang, R. The carbon emission reduction effect of the digital economy: an empirical study based on the Chinese National Big Data Comprehensive Experimental Zone. Environ Dev Sustain (2024). https://doi.org/10.1007/s10668-024-05648-5
Received : 30 January 2024
Accepted : 30 October 2024
Published : 07 November 2024
Keywords: digital economy; carbon emissions; National Big Data Comprehensive Experimental Zone