
Types of Quantitative Research Methods and Designs


Every doctoral student has their own reasons for pursuing a terminal degree. Some are motivated by enhanced career prospects, while others like the idea of being recognized as an expert in their field or have a passion for bringing new knowledge to leaders. Regardless of your own motivations for earning a doctoral degree, you are sure to develop stronger critical thinking and analytical reasoning abilities along the way. This is thanks in large part to your strategic research design.

As you prepare for your quantitative dissertation research, you’ll need to think about structuring your research design. There are several types of quantitative research designs, such as the experimental, comparative or predictive correlational designs. The approach you should choose depends primarily on your research aims. Before you decide which of these quantitative research methods to choose, you should have a conversation with your dissertation advisor about your options.

In This Article:

  • What Is Quantitative Research Design?
  • Taking a Closer Look at the Types of Quantitative Research Designs
  • Quantitative Research Design Examples

What Is Quantitative Research Design?

At the core, dissertations seek to answer research questions. They may develop new theories, expand upon existing theories or otherwise add to the body of knowledge in a field. Whatever the purpose, research questions address a research problem statement, which is the heart of a dissertation.

For example, doctoral students may seek to answer questions such as, “If and to what extent do teacher practices influence special education students’ motivation?” or “Do office perks affect workers’ productivity?”

The findings you glean from your research will help you develop fully substantiated answers to your questions. To acquire these findings, however, you’ll need to develop your dissertation’s research design.

“Research design” refers to your approach for answering your fundamental research questions. If you are writing a quantitatively based dissertation, your research design will center on numerical data collection and analysis.

Before you can settle on the details of your quantitative research design, you must decide whether your dissertation will be exploratory or conclusive in nature. Exploratory research seeks to develop general insights by exploring the subject in depth. In contrast, conclusive research aims to arrive at a definitive conclusion about the topic.

Taking a Closer Look at the Types of Quantitative Research Designs

Your quantitative research design is your strategy for carrying out your doctoral research. In the process of establishing your research design, you will need to answer questions such as the following:

  • What are your overall aims and approach? 
  • Which data collection methods will you use? 
  • Which data collection procedures will you use? 
  • What are your criteria for selecting samples or screening research subjects? 
  • How will you prevent the possibility of inadvertent bias that may skew your results? 
  • How will you analyze your data?

You should also consider whether you will need primary or secondary data. “Primary data” refer to information that you collect firsthand from sources such as study participants. “Secondary data” refer to information that was originally collected by other researchers; importantly, you will need to verify these sources’ reliability and validity.

Quantitative Research Design Examples

While reflecting upon the answers to the above questions, consider the main types of quantitative research design:

  • Descriptive research design 
  • Correlational research design (including predictive correlational design)
  • Quasi-experimental research design
  • Experimental research design 
  • (Causal) comparative research design

Descriptive Quantitative Design for Your Research

This type of quantitative research design is appropriate if you intend to measure variables and perhaps establish associations between variables. However, the quantitative descriptive research design cannot establish causal relationships between variables.

Descriptive research is also referred to as “observational studies” because your role is strictly that of an observer. The following are some of the types of descriptive studies you might engage in when writing your dissertation:

  • Case or case study: This is a fairly simple quantitative research design example. It involves the collection of data from only one research subject. 
  • Case series: If the researcher evaluates data from a few research subjects, the study is called a “case series.” 
  • Cross-sectional study: In a cross-sectional study, researchers analyze variables in their sample of subjects. Then, they establish the non-causal relationships between them. 
  • Prospective study: Also called a “cohort study” or “longitudinal study,” this involves analyzing some variables at the beginning of the study. Then, researchers conduct further analyses on outcomes at the conclusion of the study. These studies may take place over a long period of time (e.g., researchers analyzing individuals’ diet habits and then determining incidences of heart disease after 30 years). 
  • Case-control study: Researchers can compare cases or subjects with a certain attribute to cases that lack that attribute (the controls). These are also called “retrospective studies.”

Because the researcher’s role is strictly observational, a hypothesis is often not developed beforehand, although some researchers do formulate one before beginning their study. More commonly, the descriptive researcher develops the hypothesis after collecting the data and analyzing it for their quantitative dissertation.

Correlational Quantitative Research Design

Correlational research is very similar to the descriptive quantitative research design in that it likewise makes no attempt to influence the variables; the researcher simply measures or evaluates the variables involved. The main difference between descriptive and correlational studies is that a correlational study seeks to understand the relationship between the variables.

A correlational study can also establish whether this relationship has a positive or negative direction. A positive correlation means that both variables move in the same direction. In contrast, a negative correlation means that the variables move in opposite directions.

For example, a positive correlation might be expressed as follows: “As a person lifts more weights, they gain more muscle mass.” A negative correlation, meanwhile, might be expressed as follows: “As a waiter drops more trays, their tips decrease.”

Note that a correlational study can also produce findings of zero correlation. For example, the presence of muscular waiters might not be correlated with tips.
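
To make these directions concrete, here is a minimal sketch in Python (the numbers are hypothetical and for illustration only, not data from any study) that computes Pearson correlation coefficients with NumPy. Values near +1, -1 and 0 correspond to the positive, negative and zero correlations described above.

```python
import numpy as np

# Hypothetical observations, for illustration only
weights_lifted = np.array([10, 20, 30, 40, 50, 60])        # training volume per week
muscle_mass    = np.array([1.0, 1.4, 1.9, 2.2, 2.8, 3.1])  # kg gained

trays_dropped  = np.array([0, 1, 2, 3, 4, 5])
tips_received  = np.array([120, 110, 95, 80, 70, 55])      # per shift

shoe_size      = np.array([8, 9, 10, 11, 12, 13])
tips_random    = np.array([90, 70, 100, 60, 95, 80])       # unrelated variable

def pearson(x, y):
    """Pearson correlation coefficient between two equal-length arrays."""
    return np.corrcoef(x, y)[0, 1]

print(pearson(weights_lifted, muscle_mass))   # close to +1: positive correlation
print(pearson(trays_dropped, tips_received))  # close to -1: negative correlation
print(pearson(shoe_size, tips_random))        # near 0: little or no correlation
```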

The fact that correlational research cannot be used to establish causality is a common point of confusion among new researchers. After all, it certainly seems plausible that a waiter who drops trays frequently would receive smaller tips as a result. The key, however, is that correlational studies do not provide definitive proof that one variable causes changes in the other.

Quasi-Experimental Quantitative Research Design

In a quasi-experimental quantitative research design, the researcher attempts to establish a cause-effect relationship from one variable to another. For example, a researcher may determine that high school students who study for an hour every day are more likely to earn high grades on their tests. To develop this finding, the researcher would first measure the length of time that the participants study each day (variable one) and then their test scores (variable two).

In this study, one of the variables is independent, and the other is dependent. The value of the independent variable is not influenced by the other variables; the value of the dependent variable, however, is wholly dependent on changes in the independent variable. In the example above, the length of study time is the independent variable, and the test scores are the dependent variable.

A quasi-experimental study is not a true experimental study because it does not randomly assign study participants to groups. Rather, it assigns them to groups specifically because they have a certain attribute or meet non-random criteria. Control groups are not strictly mandatory, although researchers still often use them.
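
As a rough illustration of how such pre-existing, non-randomized groups might be compared, the sketch below runs an independent-samples t-test (via SciPy) on hypothetical test scores for students who report studying at least an hour a day versus those who do not. The group labels, scores and choice of test are illustrative assumptions rather than part of any particular study.

```python
from scipy import stats

# Hypothetical test scores; groups are formed by an existing attribute
# (self-reported daily study time), not by random assignment.
studies_hourly = [88, 92, 85, 90, 94, 87, 91]  # >= 1 hour of study per day
studies_less   = [78, 82, 75, 80, 84, 77, 79]  # < 1 hour of study per day

t_stat, p_value = stats.ttest_ind(studies_hourly, studies_less)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")

# A small p-value suggests a difference in mean scores between the groups,
# but because the groups were not randomly assigned, other variables
# (e.g., prior achievement, home environment) could explain the gap.
```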

Experimental Quantitative Research Design

Experimental quantitative research design utilizes the scientific approach. It establishes procedures that allow the researcher to test a hypothesis and to systematically and scientifically study causal relationships among variables.

All experimental quantitative research studies include three basic steps:

  • The researcher measures the variables. 
  • The researcher influences or intervenes with the variables in some way. 
  • The researcher measures the variables again to ascertain how the intervention affected the variables.
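
The measure-intervene-measure pattern in the three steps above is often analyzed by comparing each participant’s measurements before and after the intervention. The sketch below does this with a paired t-test; the numbers and the choice of test are illustrative assumptions.

```python
from scipy import stats

# Hypothetical measurements of the dependent variable taken from the same
# participants before and after an intervention
before = [14.2, 15.1, 13.8, 16.0, 14.9, 15.5]
after  = [15.0, 15.8, 14.1, 16.9, 15.6, 16.2]

t_stat, p_value = stats.ttest_rel(before, after)  # paired comparison
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
```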

An experimental quantitative study has the following characteristics:

  • The nature and relationship of the variables 
  • A specific hypothesis that can be tested 
  • Subjects assigned to groups based on pre-determined criteria 
  • Experimental treatments that change the independent variable 
  • Measurements of the dependent variable before and after the independent variable changes

A scientific experiment may use a completely randomized design in which each study participant is assigned randomly to a group. Alternatively, it may use the randomized block design in which study participants who share a certain attribute are grouped together. In either case, the participants are randomly given treatments within their groups.
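
The difference between a completely randomized design and a randomized block design can be sketched in a few lines of code. The participant labels, the blocking attribute (age group) and the two-condition setup below are hypothetical; the point is only that blocking first groups similar participants together and then randomizes treatments within each block.

```python
import random
from collections import defaultdict

random.seed(1)  # reproducible example

# Hypothetical participants with an attribute used for blocking (age group)
participants = [
    ("P1", "under30"), ("P2", "under30"), ("P3", "under30"), ("P4", "under30"),
    ("P5", "over30"), ("P6", "over30"), ("P7", "over30"), ("P8", "over30"),
]
treatments = ["treatment", "control"]

# Completely randomized design: shuffle everyone, then alternate assignments
shuffled = participants[:]
random.shuffle(shuffled)
completely_randomized = {name: treatments[i % 2] for i, (name, _) in enumerate(shuffled)}

# Randomized block design: group by the blocking attribute, then randomly
# assign treatments within each block
blocks = defaultdict(list)
for name, age_group in participants:
    blocks[age_group].append(name)

block_randomized = {}
for age_group, members in blocks.items():
    random.shuffle(members)
    for i, name in enumerate(members):
        block_randomized[name] = treatments[i % 2]

print(completely_randomized)
print(block_randomized)
```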

(Causal) Comparative Research Design

Causal comparative research, or ex post facto research, studies the reasons behind a change that has already occurred. For example, researchers might use a causal comparative design to determine how a new diet affects children who have already begun it. This type of research is especially common in sociological and medical circles.

There are three types of causal comparative research designs:

  • Exploring the effects of participating in a group 
  • Exploring the causes of participating in a group 
  • Exploring the consequences of a change on a group

Though causal comparative research designs can provide insight into the relationships between variables, researchers can’t use them to definitively establish why an event took place. Because the event and its presumed causes have already occurred, researchers can’t be certain which factors produced which effects.

Causal comparative studies generally follow the same sequence of steps:

  • Identify a phenomenon and consider its possible causes or consequences 
  • Create a specific problem statement 
  • Create one or more hypotheses 
  • Select a group to study
  • Match the group with one or more variables to control the variables and eliminate differences within the group (this step may differ depending on the type of causal comparative study done) 
  • Select instruments to use in the study 
  • Compare groups using one or more differing variables

Causal comparative studies are similar to correlational studies in that both explore relationships between variables. However, causal comparative studies compare two or more groups, whereas correlational studies measure each variable across a single group. Likewise, correlational studies involve multiple quantitative variables, while causal comparative studies include one or more categorical variables.
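
One way to see this contrast is in how the data would be analyzed: a causal comparative analysis compares an outcome across the categories of an existing group variable, while a correlational analysis relates two quantitative variables measured on a single group. In the sketch below, the variables, values and the specific summaries chosen (group means and Pearson’s r) are purely illustrative assumptions.

```python
import numpy as np

# Causal comparative style: a categorical group variable vs. an outcome.
# Hypothetical outcome scores for two pre-existing groups.
group = np.array(["new_diet"] * 5 + ["usual_diet"] * 5)
outcome = np.array([72, 75, 78, 74, 77, 68, 70, 66, 71, 69])

for g in np.unique(group):
    print(g, outcome[group == g].mean())  # compare group means

# Correlational style: two quantitative variables in a single group.
hours_of_sleep = np.array([5, 6, 6.5, 7, 7.5, 8, 8.5, 9, 9.5, 10])
exam_score     = np.array([60, 64, 66, 70, 72, 75, 78, 80, 82, 85])
print(np.corrcoef(hours_of_sleep, exam_score)[0, 1])  # single correlation value
```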

Aspiring doctoral students at Grand Canyon University (GCU) can choose from a wide range of programs in various fields from the College of Doctoral Studies. These include the Doctor of Philosophy in General Psychology: Performance Psychology (Quantitative Research) degree and the Doctor of Education in Organizational Leadership (Quantitative Research) degree. Complete the form on this page to explore your doctoral degree options at GCU.

Approved by the assistant dean of the College of Doctoral Studies on April 21, 2023. 

The views and opinions expressed in this article are those of the author and do not necessarily reflect the official policy or position of Grand Canyon University. Any sources cited were accurate as of the publish date.


What Are The Four Types Of Quantitative Research?

Quantitative research is a crucial stage in virtually any research project, discovering the larger trends that can then be further examined.

Along with qualitative research, it is one of the twin pillars of understanding trends and user behaviour.

However, quantitative research is not a singular entity; there is not just one form of this analysis.

In this post, we examine the four types of quantitative research, explaining them in simple, understandable terms and providing examples to illustrate their usage, benefits and potential limitations.

Quantitative research in (about) 50 Words

Quantitative data research relates to numbers and how you can group things together.

It is research that looks at a broad level and does not delve down into the individual.

Quantitative research would look to find which car make is the most popular, or how baby name choices have changed over time. It would not ascertain why car brand X is more popular among under-30s than car brand Y.

The Names of The Four Types

While different terms can be applied, the four different strands of quantitative research are:

  • Descriptive
  • Correlational
  • Quasi-Experimental
  • Experimental

We will take these in order.

Descriptive Market Research

Descriptive quantitative research looks to describe the current status of a real-world phenomenon.

In this approach, the researcher does not start with a hypothesis; instead, they gather data from which conclusions or theories can then be drawn.

Almost everyone will have taken part in a survey in which descriptive information was being sought.

Examples might include asking which type of shop a town centre would benefit from (‘Which type of shop would you like to see in X town centre?’) or gauging attitudes towards switching to a four-term school system.

In each case, the researcher will collect the data and this can then be analysed and conclusions drawn and follow-ups designed.

It is essential that the initial data collection phase enables the correct choices to be recorded. For instance, a survey asking about desirable new shops would be flawed if it only had a set list of options that failed to include choices that would be popular.

Correlational Research

A correlational research project would explore a link between variables but without looking to apply cause and effect reasoning.

As with descriptive market research, this is an observational form of quantitative research. Indeed, it is sometimes grouped with descriptive research, since in both cases information is gathered without cause being analysed.

Examples demonstrate that this is a form of research with which we are all familiar. Essentially, it can be the link between any two things.

The relationship between average hours of sleep and school attainment; the relationship between annual income and depression; the relationship between age and media consumption by type.

The next step would be to draw out and further explore the legitimacy of these relationships – some could be purely coincidental. You could examine the relationship between length of surname and IQ, but would any apparent link have any legitimacy?

It is therefore important to select items to observe with care and then use the data as a starting point for further research rather than necessarily a fait accompli.

Quasi-Experimental Research

Quasi-experimental research is a powerful tool but one that needs skilful application to avoid potentially damaging and false correlations being suggested.

This form of research – also known as causal-comparative – looks to establish relationships between the variables, but it must also factor in other variables, both known and potentially unknown.

One example, and this is one that has caused past problems, is to look at education – for instance, the impact of children taking multivitamins on attainment. A link could be shown by the data – the link could be valid, but other factors could also be at play.

Do the children taking multivitamins have better overall diets? Do the parents giving their children multivitamins also typically take a more involved approach to their children’s education, perhaps even involving extra tutoring?

The researcher often has to make do with groups as they already exist – for instance the grouping of children by classes in school.  

It is important for any research project to be clear that it is quasi-experimental rather than a pure experiment. This does not render the data invalid, but it factors in an acceptance that groups could not be chosen completely at random and so other variables will have an impact.  

How much of an impact they have has to be ascertained or at least factored into any analysis.

Experimental Research

Experimental research tests the relationship between variables but, unlike the quasi-experimental approach, this is done in a setting whereby there has been optimal variable control.

This approach can also be known as true experimentation. It can be difficult to set up but it leads to greater confidence and legitimacy in any end result.

The researcher will control – or at least attempt to control – every variable except for the one being manipulated (the independent variable).

Websites are often able to run this form of experimentation by serving up subtly different versions of pages to huge groups of users in a truly randomised way, with the results analysed.  

For instance, font vs click-through rate could be analysed if all the content was the same and the only difference was that the two different font options were served to different users within a huge pool at random.
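
As a hedged sketch of the website experiment just described, the code below compares click-through rates between two randomly served font variants using a pooled two-proportion z-test. The variant counts and the choice of test are assumptions made for illustration.

```python
import math
from scipy.stats import norm

# Hypothetical results from randomly serving two font variants
clicks_a, views_a = 480, 10_000   # font A
clicks_b, views_b = 530, 10_000   # font B

p_a = clicks_a / views_a
p_b = clicks_b / views_b

# Pooled two-proportion z-test for a difference in click-through rate
p_pool = (clicks_a + clicks_b) / (views_a + views_b)
se = math.sqrt(p_pool * (1 - p_pool) * (1 / views_a + 1 / views_b))
z = (p_b - p_a) / se
p_value = 2 * norm.sf(abs(z))  # two-sided

print(f"CTR A = {p_a:.3%}, CTR B = {p_b:.3%}, z = {z:.2f}, p = {p_value:.3f}")
```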

Being able to randomise to a large group is a key facet. In medicine, different treatments can be trialled this way – the effect of a new treatment plan on dementia for instance.

Which Approach Is Best?

It hopefully will not surprise you to learn that there is no singular best approach for all circumstances – there is also the balance of quantitative and qualitative research to be factored in.

Instead, it is recommended to have a detailed consultation with experts in market research, outlining the areas you wish to explore and working with them to find a plan of best action.

A professional approach can ensure that valid insight is found, insight that can be acted upon and that can drive future decisions and policy for businesses and organisations.

Quantitative market research is not automatically beneficial – it has to be the right questions asked in the right way, the right data collected and only valid conclusions drawn, or follow-up questions explored.

The Highest Standards of Market Research

The proof of our quality is in our case studies and past clients. Please take some time to view our past work; it shows how we worked with clients to understand their needs, advise as appropriate and deliver the findings that could benefit their business.

We have won awards, received accreditation and have professional certification, for instance for data usage – you can find out more on this site.

We also have a bespoke verification programme called Acumonitor, which verifies all participants. We take every step to ensure you can be confident in the validity of the research and analysis we provide.

Acumonitor is an example of how we are actively looking to drive the standards of market research forward – you can read more and watch a short video that explains more.

On this site, there are contact details for every manager, so you can get in touch with any member of the team and discuss your requirements.

Whether you require further information, would benefit from an obligation-free quote, or simply want to discuss how to approach your healthcare research requirements, please do get in touch.

Call us on 0161 234 9940 or use our Contact Form.

J Korean Med Sci. 2023;38(37). PMCID: PMC10506897.

Conducting and Writing Quantitative and Qualitative Research

Edward Barroga,1 Glafera Janet Matanguihan,2 Atsuko Furuta, Makiko Arima, Shizuma Tsuchiya, Chikako Kawahara, Yusuke Takamiya

1 Department of Medical Education, Showa University School of Medicine, Tokyo, Japan.

2 Department of Biological Sciences, Messiah University, Mechanicsburg, PA, USA.

Comprehensive knowledge of quantitative and qualitative research systematizes scholarly research and enhances the quality of research output. Scientific researchers must be familiar with them and skilled to conduct their investigation within the frames of their chosen research type. When conducting quantitative research, scientific researchers should describe an existing theory, generate a hypothesis from the theory, test their hypothesis in novel research, and re-evaluate the theory. Thereafter, they should take a deductive approach in writing the testing of the established theory based on experiments. When conducting qualitative research, scientific researchers raise a question, answer the question by performing a novel study, and propose a new theory to clarify and interpret the obtained results. After which, they should take an inductive approach to writing the formulation of concepts based on collected data. When scientific researchers combine the whole spectrum of inductive and deductive research approaches using both quantitative and qualitative research methodologies, they apply mixed-method research. Familiarity and proficiency with these research aspects facilitate the construction of novel hypotheses, development of theories, or refinement of concepts.

Graphical Abstract

[Graphical abstract image (jkms-38-e291-abf001.jpg) not reproduced here.]

INTRODUCTION

Novel research studies are conceptualized by scientific researchers first by asking excellent research questions and developing hypotheses, then answering these questions by testing their hypotheses in ethical research. 1 , 2 , 3 Before they conduct novel research studies, scientific researchers must possess considerable knowledge of both quantitative and qualitative research. 2

In quantitative research, researchers describe existing theories, generate and test a hypothesis in novel research, and re-evaluate existing theories deductively based on their experimental results. 1 , 4 , 5 In qualitative research, scientific researchers raise and answer research questions by performing a novel study, then propose new theories by clarifying their results inductively. 1 , 6

RATIONALE OF THIS ARTICLE

When researchers have a limited knowledge of both research types and how to conduct them, this can result in substandard investigation. Researchers must be familiar with both types of research and skilled to conduct their investigations within the frames of their chosen type of research. Thus, meticulous care is needed when planning quantitative and qualitative research studies to avoid unethical research and poor outcomes.

Understanding the methodological and writing assumptions 7 , 8 underpinning quantitative and qualitative research, especially by non-Anglophone researchers, is essential for their successful conduct. Scientific researchers, especially in the academe, face pressure to publish in international journals 9 where English is the language of scientific communication. 10 , 11 In particular, non-Anglophone researchers face challenges related to linguistic, stylistic, and discourse differences. 11 , 12 Knowing the assumptions of the different types of research will help clarify research questions and methodologies, easing this challenge.

SEARCH FOR RELEVANT ARTICLES

To identify articles relevant to this topic, we adhered to the search strategy recommended by Gasparyan et al. 7 We searched through PubMed, Scopus, Directory of Open Access Journals, and Google Scholar databases using the following keywords: quantitative research, qualitative research, mixed-method research, deductive reasoning, inductive reasoning, study design, descriptive research, correlational research, experimental research, causal-comparative research, quasi-experimental research, historical research, ethnographic research, meta-analysis, narrative research, grounded theory, phenomenology, case study, and field research.

AIMS OF THIS ARTICLE

This article aims to provide a comparative appraisal of qualitative and quantitative research for scientific researchers. At present, there is still a need to define the scope of qualitative research, especially its essential elements. 13 Consensus on the critical appraisal tools to assess the methodological quality of qualitative research remains lacking. 14 Framing and testing research questions can be challenging in qualitative research. 2 In the healthcare system, it is essential that research questions address increasingly complex situations. Therefore, research has to be driven by the kinds of questions asked and the corresponding methodologies to answer these questions. 15 The mixed-method approach also needs to be clarified as this would appear to arise from different philosophical underpinnings. 16

This article also aims to discuss how particular types of research should be conducted and how they should be written in adherence to international standards. In the US, Europe, and other countries, responsible research and innovation was conceptualized and promoted with six key action points: engagement, gender equality, science education, open access, ethics and governance. 17 , 18 International ethics standards in research 19 as well as academic integrity during doctoral trainings are now integral to the research process. 20

POTENTIAL BENEFITS FROM THIS ARTICLE

This article would be beneficial for researchers in further enhancing their understanding of the theoretical, methodological, and writing aspects of qualitative and quantitative research, and their combination.

Moreover, this article reviews the basic features of both research types and overviews the rationale for their conduct. It imparts information on the most common forms of quantitative and qualitative research, and how they are carried out. These aspects would be helpful for selecting the optimal methodology to use for research based on the researcher’s objectives and topic.

This article also provides information on the strengths and weaknesses of quantitative and qualitative research. Such information would help researchers appreciate the roles and applications of both research types and how to gain from each or their combination. As different research questions require different types of research and analyses, this article is anticipated to assist researchers better recognize the questions answered by quantitative and qualitative research.

Finally, this article would help researchers to have a balanced perspective of qualitative and quantitative research without considering one as superior to the other.

TYPES OF RESEARCH

Research can be classified into two general types, quantitative and qualitative. 21 Both types of research entail writing a research question and developing a hypothesis. 22 Quantitative research involves a deductive approach to prove or disprove the hypothesis that was developed, whereas qualitative research involves an inductive approach to create a hypothesis. 23 , 24 , 25 , 26

In quantitative research, the hypothesis is stated before testing. In qualitative research, the hypothesis is developed through inductive reasoning based on the data collected. 27 , 28 For types of data and their analysis, qualitative research usually includes data in the form of words instead of numbers more commonly used in quantitative research. 29

Quantitative research usually includes descriptive, correlational, causal-comparative / quasi-experimental, and experimental research. 21 On the other hand, qualitative research usually encompasses historical, ethnographic, meta-analysis, narrative, grounded theory, phenomenology, case study, and field research. 23 , 25 , 28 , 30 A summary of the features, writing approach, and examples of published articles for each type of qualitative and quantitative research is shown in Table 1 . 31 , 32 , 33 , 34 , 35 , 36 , 37 , 38 , 39 , 40 , 41 , 42 , 43

Table 1. Methodology features, research writing pointers, and examples of published articles for each type of quantitative and qualitative research

Quantitative research

  • Descriptive research
    - Methodology feature: Describes the status of an identified variable to provide systematic information about a phenomenon.
    - Research writing pointers: Explain how a situation, sample, or variable was examined or observed as it occurred without investigator interference.
    - Example: Östlund AS, Kristofferzon ML, Häggström E, Wadensten B. Primary care nurses’ performance in motivational interviewing: a quantitative descriptive study. 2015;16(1):89.
  • Correlational research
    - Methodology feature: Determines and interprets the extent of the relationship between two or more variables using statistical data.
    - Research writing pointers: Describe the establishment of reliability and validity, converging evidence, relationships, and predictions based on statistical data.
    - Example: Díaz-García O, Herranz Aguayo I, Fernández de Castro P, Ramos JL. Lifestyles of Spanish elders from supervened SARS-CoV-2 variant onwards: A correlational research on life satisfaction and social-relational praxes. 2022;13:948745.
  • Causal-comparative/quasi-experimental research
    - Methodology feature: Establishes cause-effect relationships among variables; uses non-randomly assigned groups where it is not logically feasible to conduct a randomized controlled trial.
    - Research writing pointers: Write about comparisons of the identified control groups exposed to the treatment variable with unexposed groups; provide clear descriptions of the causes determined after making data analyses and conclusions, and of the known and unknown variables that could potentially affect the outcome.
    - Examples: Sharma MK, Adhikari R. Effect of school water, sanitation, and hygiene on health status among basic level students in Nepal. Environ Health Insights 2022;16:11786302221095030. [The study applies a causal-comparative research design.] Tuna F, Tunçer B, Can HB, Süt N, Tuna H. Immediate effect of Kinesio taping® on deep cervical flexor endurance: a non-controlled, quasi-experimental pre-post quantitative study. 2022;40(6):528-35.
  • Experimental research
    - Methodology feature: Establishes cause-effect relationships among a group of variables making up a study using the scientific method.
    - Research writing pointers: Describe how an independent variable was manipulated to determine its effects on dependent variables; explain the random assignments of subjects to experimental treatments.
    - Example: Hyun C, Kim K, Lee S, Lee HH, Lee J. Quantitative evaluation of the consciousness level of patients in a vegetative state using virtual reality and an eye-tracking system: a single-case experimental design study. 2022;32(10):2628-45.

Qualitative research

  • Historical research
    - Methodology feature: Describes past events, problems, issues, and facts.
    - Research writing pointers: Write the research based on historical reports.
    - Example: Silva Lima R, Silva MA, de Andrade LS, Mello MA, Goncalves MF. Construction of professional identity in nursing students: qualitative research from the historical-cultural perspective. 2020;28:e3284.
  • Ethnographic research
    - Methodology feature: Develops in-depth analytical descriptions of current systems, processes, and phenomena, or understandings of the shared beliefs and practices of groups or cultures.
    - Research writing pointers: Compose a detailed report of the interpreted data.
    - Example: Gammeltoft TM, Huyền Diệu BT, Kim Dung VT, Đức Anh V, Minh Hiếu L, Thị Ái N. Existential vulnerability: an ethnographic study of everyday lives with diabetes in Vietnam. 2022;29(3):271-88.
  • Meta-analysis
    - Methodology feature: Accumulates experimental and correlational results across independent studies using a statistical method.
    - Research writing pointers: Specify the topic, follow reporting guidelines, describe the inclusion criteria, identify key variables, explain the systematic search of databases, and detail the data extraction.
    - Example: Oeljeklaus L, Schmid HL, Kornfeld Z, Hornberg C, Norra C, Zerbe S, et al. Therapeutic landscapes and psychiatric care facilities: a qualitative meta-analysis. 2022;19(3):1490.
  • Narrative research
    - Methodology feature: Studies an individual and gathers data by collecting stories for constructing a narrative about the individual’s experiences and their meanings.
    - Research writing pointers: Write an in-depth narration of events or situations focused on the participants.
    - Example: Anderson H, Stocker R, Russell S, Robinson L, Hanratty B, Robinson L, et al. Identity construction in the very old: a qualitative narrative study. 2022;17(12):e0279098.
  • Grounded theory
    - Methodology feature: Engages in an inductive, ground-up or bottom-up process of generating theory from data.
    - Research writing pointers: Write the research as a theory and a theoretical model; describe the data analysis procedure (theoretical coding) for developing hypotheses based on what the participants say.
    - Example: Amini R, Shahboulaghi FM, Tabrizi KN, Forouzan AS. Social participation among Iranian community-dwelling older adults: a grounded theory study. 2022;11(6):2311-9.
  • Phenomenology
    - Methodology feature: Attempts to understand subjects’ perspectives.
    - Research writing pointers: Write the research report by contextualizing and reporting the subjects’ experiences.
    - Example: Green G, Sharon C, Gendler Y. The communication challenges and strength of nurses’ intensive corona care during the two first pandemic waves: a qualitative descriptive phenomenology study. 2022;10(5):837.
  • Case study
    - Methodology feature: Analyzes collected data by detailed identification of themes and development of narratives, written as an in-depth study of lessons from the case.
    - Research writing pointers: Write the report as an in-depth study of possible lessons learned from the case.
    - Example: Horton A, Nugus P, Fortin MC, Landsberg D, Cantarovich M, Sandal S. Health system barriers and facilitators to living donor kidney transplantation: a qualitative case study in British Columbia. 2022;10(2):E348-56.
  • Field research
    - Methodology feature: Directly investigates and extensively observes a social phenomenon in its natural environment without implantation of controls or experimental conditions.
    - Research writing pointers: Describe the phenomenon under the natural environment over time.
    - Example: Buus N, Moensted M. Collectively learning to talk about personal concerns in a peer-led youth program: a field study of a community of practice. 2022;30(6):e4425-32.

QUANTITATIVE RESEARCH

Deductive approach.

The deductive approach is used to prove or disprove the hypothesis in quantitative research. 21 , 25 Using this approach, researchers 1) make observations about an unclear or new phenomenon, 2) investigate the current theory surrounding the phenomenon, and 3) hypothesize an explanation for the observations. Afterwards, researchers will 4) predict outcomes based on the hypotheses, 5) formulate a plan to test the prediction, and 6) collect and process the data (or revise the hypothesis if the original hypothesis was false). Finally, researchers will then 7) verify the results, 8) make the final conclusions, and 9) present and disseminate their findings ( Fig. 1A ).

[Fig. 1 (jkms-38-e291-g001.jpg) not reproduced: flow diagrams of (A) the deductive approach used in quantitative research and (B) the inductive approach used in qualitative research.]

Types of quantitative research

The common types of quantitative research include (a) descriptive, (b) correlational, (c) experimental research, and (d) causal-comparative/quasi-experimental. 21

Descriptive research is conducted and written by describing the status of an identified variable to provide systematic information about a phenomenon. A hypothesis is developed and tested after data collection, analysis, and synthesis. This type of research attempts to factually present comparisons and interpretations of findings based on analyses of the characteristics, progression, or relationships of a certain phenomenon without manipulating the employed variables or controlling the involved conditions. 44 Here, the researcher examines, observes, and describes a situation, sample, or variable as it occurs without investigator interference. 31 , 45 To be meaningful, the systematic collection of information requires careful selection of study units by precise measurement of individual variables 21 often expressed as ranges, means, frequencies, and/or percentages. 31 , 45 Descriptive statistical analysis using ANOVA, Student’s t-test, or the Pearson coefficient method has been used to analyze descriptive research data. 46
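
For instance, a descriptive summary of a single observed variable, expressed as the ranges, means, frequencies and percentages mentioned above, might be computed as in this minimal sketch (the variable and its values are hypothetical):

```python
import numpy as np
from collections import Counter

# Hypothetical systolic blood pressure readings from an observed sample
readings = np.array([118, 122, 130, 125, 118, 140, 135, 122, 128, 118])

print("n =", readings.size)
print("range =", readings.min(), "to", readings.max())
print("mean =", round(readings.mean(), 1), "SD =", round(readings.std(ddof=1), 1))

# Frequencies and percentages of each observed value
freq = Counter(readings.tolist())
for value, count in sorted(freq.items()):
    print(value, count, f"{100 * count / readings.size:.0f}%")
```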

Correlational research is performed by determining and interpreting the extent of a relationship between two or more variables using statistical data. This involves recognizing data trends and patterns without necessarily proving their causes. The researcher studies only the data, relationships, and distributions of variables in a natural setting, but does not manipulate them. 21 , 45 Afterwards, the researcher establishes reliability and validity, provides converging evidence, describes relationship, and makes predictions. 47

Experimental research is usually referred to as true experimentation. The researcher establishes the cause-effect relationship among a group of variables making up a study using the scientific method or process. This type of research attempts to identify the causal relationships between variables through experiments by arbitrarily controlling the conditions or manipulating the variables used. 44 The scientific manuscript would include an explanation of how the independent variable was manipulated to determine its effects on the dependent variables. The write-up would also describe the random assignments of subjects to experimental treatments. 21

Causal-comparative/quasi-experimental research closely resembles true experimentation but is conducted by establishing the cause-effect relationships among variables. It may also be conducted to establish the cause or consequences of differences that already exist between, or among groups of individuals. 48 This type of research compares outcomes between the intervention groups in which participants are not randomized to their respective interventions because of ethics- or feasibility-related reasons. 49 As in true experiments, the researcher identifies and measures the effects of the independent variable on the dependent variable. However, unlike true experiments, the researchers do not manipulate the independent variable.

In quasi-experimental research, naturally formed or pre-existing groups that are not randomly assigned are used, particularly when an ethical, randomized controlled trial is not feasible or logical. 50 The researcher identifies control groups as those which have been exposed to the treatment variable, and then compares these with the unexposed groups. The causes are determined and described after data analysis, after which conclusions are made. The known and unknown variables that could still affect the outcome are also included. 7

QUALITATIVE RESEARCH

Inductive approach.

Qualitative research involves an inductive approach to develop a hypothesis. 21 , 25 Using this approach, researchers answer research questions and develop new theories, but they do not test hypotheses or previous theories. The researcher seldom examines the effectiveness of an intervention, but rather explores the perceptions, actions, and feelings of participants using interviews, content analysis, observations, or focus groups. 25 , 45 , 51

Distinctive features of qualitative research

Qualitative research seeks to elucidate about the lives of people, including their lived experiences, behaviors, attitudes, beliefs, personality characteristics, emotions, and feelings. 27 , 30 It also explores societal, organizational, and cultural issues. 30 This type of research provides a good story mimicking an adventure which results in a “thick” description that puts readers in the research setting. 52

The qualitative research questions are open-ended, evolving, and non-directional. 26 The research design is usually flexible and iterative, commonly employing purposive sampling. The sample size depends on theoretical saturation, and data is collected using in-depth interviews, focus groups, and observations. 27

In various instances, excellent qualitative research may offer insights that quantitative research cannot. Moreover, qualitative research approaches can describe the ‘lived experience’ perspectives of patients, practitioners, and the public. 53 Interestingly, recent developments have looked into the use of technology in shaping qualitative research protocol development, data collection, and analysis phases. 54

Qualitative research employs various techniques, including conversational and discourse analysis, biographies, interviews, case-studies, oral history, surveys, documentary and archival research, audiovisual analysis, and participant observations. 26

Conducting qualitative research

To conduct qualitative research, investigators 1) identify a general research question, 2) choose the main methods, sites, and subjects, and 3) determine methods of data documentation and access to subjects. Researchers also 4) decide on the various aspects of collecting data (e.g., questions, behaviors to observe, issues to look for in documents, and how much data to collect in terms of the number of questions, interviews, or observations), 5) clarify researchers’ roles, and 6) evaluate the study’s ethical implications in terms of confidentiality and sensitivity. Afterwards, researchers 7) collect data until saturation, 8) interpret data by identifying concepts and theories, and 9) revise the research question if necessary and form hypotheses. In the final stages of the research, investigators 10) collect and verify data to address revisions, 11) complete the conceptual and theoretical framework to finalize their findings, and 12) present and disseminate findings ( Fig. 1B ).

Types of qualitative research

The different types of qualitative research include (a) historical research, (b) ethnographic research, (c) meta-analysis, (d) narrative research, (e) grounded theory, (f) phenomenology, (g) case study, and (h) field research. 23 , 25 , 28 , 30

Historical research is conducted by describing past events, problems, issues, and facts. The researcher gathers data from written or oral descriptions of past events and attempts to recreate the past without interpreting the events and their influence on the present. 6 Data is collected using documents, interviews, and surveys. 55 The researcher analyzes these data by describing the development of events and writes the research based on historical reports. 2

Ethnographic research is performed by observing everyday life details as they naturally unfold. 2 It can also be conducted by developing in-depth analytical descriptions of current systems, processes, and phenomena or by understanding the shared beliefs and practices of a particular group or culture. 21 The researcher collects extensive narrative non-numerical data based on many variables over an extended period, in a natural setting within a specific context. To do this, the researcher uses interviews, observations, and active participation. These data are analyzed by describing and interpreting them and developing themes. A detailed report of the interpreted data is then provided. 2 The researcher immerses himself/herself into the study population and describes the actions, behaviors, and events from the perspective of someone involved in the population. 23 As examples of its application, ethnographic research has helped to understand a cultural model of family and community nursing during the coronavirus disease 2019 outbreak. 56 It has also been used to observe the organization of people’s environment in relation to cardiovascular disease management in order to clarify people’s real expectations during follow-up consultations, possibly contributing to the development of innovative solutions in care practices. 57

Meta-analysis is carried out by accumulating experimental and correlational results across independent studies using a statistical method. 21 The report is written by specifying the topic and meta-analysis type. In the write-up, reporting guidelines are followed, which include description of inclusion criteria and key variables, explanation of the systematic search of databases, and details of data extraction. Meta-analysis offers in-depth data gathering and analysis to achieve deeper inner reflection and phenomenon examination. 58

Narrative research is performed by collecting stories for constructing a narrative about an individual’s experiences and the meanings attributed to them by the individual. 9 It aims to hear the voice of individuals through their account or experiences. 17 The researcher usually conducts interviews and analyzes data by storytelling, content review, and theme development. The report is written as an in-depth narration of events or situations focused on the participants. 2 , 59 Narrative research weaves together sequential events from one or two individuals to create a “thick” description of a cohesive story or narrative. 23 It facilitates understanding of individuals’ lives based on their own actions and interpretations. 60

Grounded theory is conducted by engaging in an inductive ground-up or bottom-up strategy of generating a theory from data. 24 The researcher incorporates deductive reasoning when using constant comparisons. Patterns are detected in observations and then a working hypothesis is created which directs the progression of inquiry. The researcher collects data using interviews and questionnaires. These data are analyzed by coding the data, categorizing themes, and describing implications. The research is written as a theory and theoretical models. 2 In the write-up, the researcher describes the data analysis procedure (i.e., theoretical coding used) for developing hypotheses based on what the participants say. 61 As an example, a qualitative approach has been used to understand the process of skill development of a nurse preceptor in clinical teaching. 62 A researcher can also develop a theory using the grounded theory approach to explain the phenomena of interest by observing a population. 23

Phenomenology is carried out by attempting to understand the subjects’ perspectives. This approach is pertinent in social work research where empathy and perspective are keys to success. 21 Phenomenology studies an individual’s lived experience in the world. 63 The researcher collects data by interviews, observations, and surveys. 16 These data are analyzed by describing experiences, examining meanings, and developing themes. The researcher writes the report by contextualizing and reporting the subjects’ experience. This research approach describes and explains an event or phenomenon from the perspective of those who have experienced it. 23 Phenomenology understands the participants’ experiences as conditioned by their worldviews. 52 It is suitable for a deeper understanding of non-measurable aspects related to the meanings and senses attributed by individuals’ lived experiences. 60

Case study is conducted by collecting data through interviews, observations, document content examination, and physical inspections. The researcher analyzes the data through a detailed identification of themes and the development of narratives. The report is written as an in-depth study of possible lessons learned from the case. 2

Field research is performed using a group of methodologies for undertaking qualitative inquiries. The researcher goes directly to the social phenomenon being studied and observes it extensively. In the write-up, the researcher describes the phenomenon under the natural environment over time with no implantation of controls or experimental conditions. 45

DIFFERENCES BETWEEN QUANTITATIVE AND QUALITATIVE RESEARCH

Scientific researchers must be aware of the differences between quantitative and qualitative research in terms of their working mechanisms to better understand their specific applications. This knowledge will be of significant benefit to researchers, especially during the planning process, to ensure that the appropriate type of research is undertaken to fulfill the research aims.

In terms of quantitative research data evaluation, four well-established criteria are used: internal validity, external validity, reliability, and objectivity. 23 The respective correlating concepts in qualitative research data evaluation are credibility, transferability, dependability, and confirmability. 30 Regarding write-up, quantitative research papers are usually shorter than their qualitative counterparts, which allows the latter to pursue a deeper understanding and thus produce the so-called “thick” description. 29

Interestingly, a major characteristic of qualitative research is that the research process is reversible and the research methods can be modified. This is in contrast to quantitative research in which hypothesis setting and testing take place unidirectionally. This means that in qualitative research, the research topic and question may change during literature analysis, and that the theoretical and analytical methods could be altered during data collection. 44

Quantitative research focuses on natural, quantitative, and objective phenomena, whereas qualitative research focuses on social, qualitative, and subjective phenomena. 26 Quantitative research answers the questions “what?” and “when?,” whereas qualitative research answers the questions “why?,” “how?,” and “how come?.” 64

Perhaps the most important distinction between quantitative and qualitative research lies in the nature of the data being investigated and analyzed. Quantitative research focuses on statistical, numerical, and quantitative aspects of phenomena, and employ the same data collection and analysis, whereas qualitative research focuses on the humanistic, descriptive, and qualitative aspects of phenomena. 26 , 28

Structured versus unstructured processes

The aims and types of inquiries determine the difference between quantitative and qualitative research. In quantitative research, statistical data and a structured process are usually employed by the researcher. Quantitative research usually suggests quantities (i.e., numbers). 65 On the other hand, researchers typically use opinions, reasons, verbal statements, and an unstructured process in qualitative research. 63 Qualitative research is more related to quality or kind. 65

In quantitative research, the researcher employs a structured process for collecting quantifiable data. Often, a close-ended questionnaire is used wherein the response categories for each question are designed in which values can be assigned and analyzed quantitatively using a common scale. 66 Quantitative research data is processed consecutively from data management, then data analysis, and finally to data interpretation. Data should be free from errors and missing values. In data management, variables are defined and coded. In data analysis, statistics (e.g., descriptive, inferential) as well as central tendency (i.e., mean, median, mode), spread (standard deviation), and parameter estimation (confidence intervals) measures are used. 67
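
The measures listed above can be illustrated with a short sketch: central tendency (mean, median, mode), spread (standard deviation) and a parameter estimate (a 95% t-based confidence interval for the mean). The coded responses below are hypothetical, and the particular interval formula is just one common choice.

```python
import numpy as np
from scipy import stats
from collections import Counter

# Hypothetical coded responses from a close-ended questionnaire (1-5 scale)
responses = np.array([3, 4, 4, 5, 2, 3, 4, 5, 4, 3, 2, 4])

mean = responses.mean()
median = np.median(responses)
mode = Counter(responses.tolist()).most_common(1)[0][0]  # most frequent value
sd = responses.std(ddof=1)

# 95% confidence interval for the mean: mean +/- t * (sd / sqrt(n))
n = responses.size
t_crit = stats.t.ppf(0.975, df=n - 1)
margin = t_crit * sd / np.sqrt(n)

print(f"mean={mean:.2f}, median={median}, mode={mode}, sd={sd:.2f}")
print(f"95% CI for the mean: ({mean - margin:.2f}, {mean + margin:.2f})")
```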

In qualitative research, the researcher uses an unstructured process for collecting data. These non-statistical data may be in the form of statements, stories, or long explanations. Various responses according to respondents may not be easily quantified using a common scale. 66

Composing a qualitative research paper resembles writing a quantitative research paper. Both papers consist of a title, an abstract, an introduction, objectives, methods, findings, and discussion. However, a qualitative research paper is less regimented than a quantitative research paper. 27

Quantitative research as a deductive hypothesis-testing design

Quantitative research can be considered as a hypothesis-testing design as it involves quantification, statistics, and explanations. It flows from theory to data (i.e., deductive), focuses on objective data, and applies theories to address problems. 45 , 68 It collects numerical or statistical data; answers questions such as how many, how often, how much; uses questionnaires, structured interview schedules, or surveys 55 as data collection tools; analyzes quantitative data in terms of percentages, frequencies, statistical comparisons, graphs, and tables showing statistical values; and reports the final findings in the form of statistical information. 66 It uses variable-based models from individual cases and findings are stated in quantified sentences derived by deductive reasoning. 24

In quantitative research, a phenomenon is investigated in terms of the relationship between an independent variable and a dependent variable which are numerically measurable. The research objective is to statistically test whether the hypothesized relationship is true. 68 Here, the researcher studies what others have performed, examines current theories of the phenomenon being investigated, and then tests hypotheses that emerge from those theories. 4

Quantitative hypothesis-testing research has certain limitations. These limitations include (a) problems with selection of meaningful independent and dependent variables, (b) the inability to reflect subjective experiences as variables since variables are usually defined numerically, and (c) the need to state a hypothesis before the investigation starts. 61

Qualitative research as an inductive hypothesis-generating design

Qualitative research can be considered as a hypothesis-generating design since it involves understanding and descriptions in terms of context. It flows from data to theory (i.e., inductive), focuses on observation, and examines what happens in specific situations with the aim of developing new theories based on the situation. 45 , 68 This type of research (a) collects qualitative data (e.g., ideas, statements, reasons, characteristics, qualities), (b) answers questions such as what, why, and how, (c) uses interviews, observations, or focused-group discussions as data collection tools, (d) analyzes data by discovering patterns of changes, causal relationships, or themes in the data; and (e) reports the final findings as descriptive information. 61 Qualitative research favors case-based models from individual characteristics, and findings are stated using context-dependent existential sentences that are justifiable by inductive reasoning. 24

In qualitative research, texts and interviews are analyzed and interpreted to discover meaningful patterns characteristic of a particular phenomenon. 61 Here, the researcher starts with a set of observations and then moves from particular experiences to a more general set of propositions about those experiences. 4

Qualitative hypothesis-generating research involves collecting interview data from study participants regarding a phenomenon of interest, and then using what they say to develop hypotheses. It involves the process of questioning more than obtaining measurements; it generates hypotheses using theoretical coding. 61 When using large interview teams, the key to promoting high-level qualitative research and cohesion in large team methods and successful research outcomes is the balance between autonomy and collaboration. 69

Qualitative data may also include observed behavior, participant observation, media accounts, and cultural artifacts. 61 Focus group interviews are usually conducted, audiotaped or videotaped, and transcribed. Afterwards, the transcript is analyzed by several researchers.

Qualitative research also involves scientific narratives and the analysis and interpretation of textual or numerical data (or both), mostly from conversations and discussions. Such approach uncovers meaningful patterns that describe a particular phenomenon. 2 Thus, qualitative research requires skills in grasping and contextualizing data, as well as communicating data analysis and results in a scientific manner. The reflective process of the inquiry underscores the strengths of a qualitative research approach. 2

Combination of quantitative and qualitative research

When both quantitative and qualitative research methods are used in the same research, mixed-method research is applied. 25 This combination provides a complete view of the research problem and achieves triangulation to corroborate findings, complementarity to clarify results, expansion to extend the study’s breadth, and explanation to elucidate unexpected results. 29

Moreover, quantitative and qualitative findings are integrated to address the weaknesses of each research method 29 , 66 and to achieve a more comprehensive understanding of the full spectrum of the phenomenon. 66

For data analysis in mixed-method research, real non-quantitized qualitative data and quantitative data must both be analyzed. 70 The data obtained from quantitative analysis can be further expanded and deepened by qualitative analysis. 23

In terms of assessment criteria, Hammersley 71 opined that qualitative and quantitative findings should be judged using the same standards of validity and value-relevance. Both approaches can be mutually supportive. 52

Quantitative and qualitative research must be carefully studied and conducted by scientific researchers to avoid unethical research and inadequate outcomes. Quantitative research involves a deductive process wherein a research question is answered with a hypothesis that describes the relationship between independent and dependent variables, followed by testing of the hypothesis. This investigation can be aptly termed hypothesis-testing research, involving the analysis of hypothesis-driven experimental studies that results in a test of significance. Qualitative research involves an inductive process wherein a research question is explored to generate a hypothesis, which then leads to the development of a theory. This investigation can be aptly termed hypothesis-generating research. When the whole spectrum of inductive and deductive research approaches is combined using both quantitative and qualitative research methodologies, mixed-method research is applied, and this can facilitate the construction of novel hypotheses, the development of theories, and the refinement of concepts.

Disclosure: The authors have no potential conflicts of interest to disclose.

Author Contributions:

  • Conceptualization: Barroga E, Matanguihan GJ.
  • Data curation: Barroga E, Matanguihan GJ, Furuta A, Arima M, Tsuchiya S, Kawahara C, Takamiya Y, Izumi M.
  • Formal analysis: Barroga E, Matanguihan GJ, Furuta A, Arima M, Tsuchiya S, Kawahara C.
  • Investigation: Barroga E, Matanguihan GJ, Takamiya Y, Izumi M.
  • Methodology: Barroga E, Matanguihan GJ, Furuta A, Arima M, Tsuchiya S, Kawahara C, Takamiya Y, Izumi M.
  • Project administration: Barroga E, Matanguihan GJ.
  • Resources: Barroga E, Matanguihan GJ, Furuta A, Arima M, Tsuchiya S, Kawahara C, Takamiya Y, Izumi M.
  • Supervision: Barroga E.
  • Validation: Barroga E, Matanguihan GJ, Furuta A, Arima M, Tsuchiya S, Kawahara C, Takamiya Y, Izumi M.
  • Visualization: Barroga E, Matanguihan GJ.
  • Writing - original draft: Barroga E, Matanguihan GJ.
  • Writing - review & editing: Barroga E, Matanguihan GJ, Furuta A, Arima M, Tsuchiya S, Kawahara C, Takamiya Y, Izumi M.

Experimental and Quasi-Experimental Research


You approach a stainless-steel wall, separated vertically along its middle where two halves meet. After looking to the left, you see two buttons on the wall to the right. You press the top button and it lights up. A soft tone sounds and the two halves of the wall slide apart to reveal a small room. You step into the room. Looking to the left, then to the right, you see a panel of more buttons. You know that you seek a room marked with the numbers 1-0-1-2, so you press the button marked "10." The halves slide shut and enclose you within the cubicle, which jolts upward. Soon, the soft tone sounds again. The door opens again. On the far wall, a sign silently proclaims, "10th floor."

You have engaged in a series of experiments. A ride in an elevator may not seem like an experiment, but it, and each step taken toward its ultimate outcome, are common examples of a search for a causal relationship, which is what experimentation is all about.

You started with the hypothesis that this is in fact an elevator. You proved that you were correct. You then hypothesized that the button to summon the elevator was on the left, which was incorrect, so then you hypothesized it was on the right, and you were correct. You hypothesized that pressing the button marked with the up arrow would not only bring an elevator to you, but that it would be an elevator heading in the up direction. You were right.

As this guide explains, the deliberate process of testing hypotheses and reaching conclusions is an extension of commonplace testing of cause and effect relationships.

Basic Concepts of Experimental and Quasi-Experimental Research

Discovering causal relationships is the key to experimental research. In abstract terms, this means the relationship between a certain action, X, and the effect, Y, that X alone creates. For example, turning the volume knob on your stereo clockwise causes the sound to get louder. In addition, you could observe that turning the knob clockwise alone, and nothing else, caused the sound level to increase. You could further conclude that a causal relationship exists between turning the knob clockwise and an increase in volume; not simply because one event followed the other, but because you are certain that nothing else caused the effect.

Independent and Dependent Variables

Beyond discovering causal relationships, experimental research further seeks out how much cause will produce how much effect; in technical terms, how the independent variable will affect the dependent variable. You know that turning the knob clockwise will produce a louder noise, but by varying how much you turn it, you see how much sound is produced. On the other hand, you might find that although you turn the knob a great deal, sound doesn't increase dramatically. Or, you might find that turning the knob just a little adds more sound than expected. The amount that you turned the knob is the independent variable, the variable that the researcher controls, and the amount of sound that resulted from turning it is the dependent variable, the change that is caused by the independent variable.
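
To make the "how much cause produces how much effect" idea concrete, the following sketch fits a straight line relating the independent variable to the dependent variable (the measurements are hypothetical, and Python with SciPy is an assumed toolset rather than anything this guide prescribes):

```python
# Hypothetical measurements: how far the volume knob is turned (degrees)
# and the resulting sound level (decibels).
from scipy import stats

knob_turn_deg = [0, 30, 60, 90, 120, 150, 180]    # independent variable
sound_level_db = [40, 47, 55, 61, 66, 70, 73]     # dependent variable

fit = stats.linregress(knob_turn_deg, sound_level_db)
print(f"each additional degree adds about {fit.slope:.2f} dB "
      f"(r^2 = {fit.rvalue ** 2:.2f})")

# The slope estimates how much the dependent variable changes per unit of
# the independent variable; if the sound stops rising as the knob keeps
# turning, a straight line is the wrong model and the fit will show it.
```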

Experimental research also looks into the effects of removing something. For example, if you remove a loud noise from the room, will the person next to you be able to hear you? Or how much noise needs to be removed before that person can hear you?

Treatment and Hypothesis

The term treatment refers to either removing or adding a stimulus in order to measure an effect (such as turning the knob a little or a lot, or reducing the noise level a little or a lot). Experimental researchers want to know how varying levels of treatment will affect what they are studying. As such, researchers often have an idea, or hypothesis, about what effect will occur when they cause something. Few experiments are performed where there is no idea of what will happen. From past experiences in life or from the knowledge we possess in our specific field of study, we know how some actions cause other reactions. Experiments confirm or reconfirm this fact.

Experimentation becomes more complex when the causal relationships researchers seek aren't as clear as in the stereo knob-turning example. Questions like "Will olestra cause cancer?" or "Will this new fertilizer help this plant grow better?" present more to consider. For example, any number of things could affect the growth rate of a plant: the temperature, how much water or sun it receives, or how much carbon dioxide is in the air. These variables can affect an experiment's results. An experimenter who wants to show that adding a certain fertilizer will help a plant grow better must ensure that it is the fertilizer, and nothing else, affecting the growth patterns of the plant. To do this, as many of these variables as possible must be controlled.

Matching and Randomization

In the example used in this guide (you'll find the example below), we discuss an experiment that focuses on three groups of plants: one that is treated with a fertilizer named MegaGro, another treated with a fertilizer named Plant!, and a third that is not treated with fertilizer (this latter group serves as a "control" group). In this example, even though the designers of the experiment have tried to remove all extraneous variables, results may appear merely coincidental. Since the goal of the experiment is to prove a causal relationship in which a single variable is responsible for the effect produced, the experiment would produce stronger proof if the results were replicated in larger treatment and control groups.

Selecting groups entails assigning subjects to the groups of an experiment in such a way that treatment and control groups are comparable in all respects except the application of the treatment. Groups can be created in two ways: matching and randomization. In the MegaGro experiment discussed below, the plants might be matched according to characteristics such as age, weight, and whether they are blooming. This involves distributing the plants so that each plant in one group matches the characteristics of plants in the other groups. Matching may be problematic, though, because it "can promote a false sense of security by leading [the experimenter] to believe that [the] experimental and control groups were really equated at the outset, when in fact they were not equated on a host of variables" (Jones, 291). In other words, you may have flowers for your MegaGro experiment that you matched and distributed among groups, but other variables are unaccounted for. It would be difficult to have equal groupings.

Randomization, then, is preferred to matching. This method is based on the statistical principle of normal distribution. Theoretically, any arbitrarily selected group of adequate size will reflect a normal distribution. Differences between groups will average out, making the groups more comparable. The principle of normal distribution states that in a population most individuals will fall within the middle range of values for a given characteristic, with increasingly fewer toward either extreme (graphically represented as the ubiquitous "bell curve").
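
A minimal sketch of random assignment (hypothetical subject labels; Python's standard library is used here purely for illustration) shows how randomization distributes subjects to groups without regard to their characteristics:

```python
# Randomly assign twenty hypothetical plants to a treatment group and a
# control group. With groups of adequate size, pre-existing differences
# tend to average out, which is what the normal-distribution argument
# above relies on.
import random

subjects = [f"plant_{i:02d}" for i in range(1, 21)]

random.seed(42)   # fixed seed only so the example is reproducible
random.shuffle(subjects)

treatment_group = subjects[:10]
control_group = subjects[10:]

print("treatment:", treatment_group)
print("control:  ", control_group)
```

Matching, by contrast, would require measuring each plant's characteristics first and then pairing comparable plants across groups.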

Differences between Quasi-Experimental and Experimental Research

Thus far, we have explained that for experimental research we need:

  • a hypothesis for a causal relationship;
  • a control group and a treatment group;
  • to eliminate confounding variables that might distort the experiment and obscure the causal relationship; and
  • to have larger groups with a carefully sorted constituency, preferably randomized, to keep accidental differences from skewing the results.

But what if we don't have all of those? Do we still have an experiment? Not a true experiment in the strictest scientific sense of the term, but we can have a quasi-experiment, an attempt to uncover a causal relationship, even though the researcher cannot control all the factors that might affect the outcome.

A quasi-experimenter treats a given situation as an experiment even though it is not wholly by design. The independent variable may not be manipulated by the researcher, treatment and control groups may not be randomized or matched, or there may be no control group. The researcher is limited in what he or she can say conclusively.

The significant element of both experiments and quasi-experiments is the measure of the dependent variable, which allows for comparison. Some data are quite straightforward, but other measures, such as level of self-confidence in writing ability or an increase in creativity or reading comprehension, are inescapably subjective. In such cases, quasi-experimentation often involves a number of strategies to compare subjectivity, such as rating data, testing, surveying, and content analysis.

Rating essentially involves developing a rating scale to evaluate data. In testing, experimenters and quasi-experimenters use ANOVA (Analysis of Variance) and ANCOVA (Analysis of Covariance) tests to measure differences between control and experimental groups, as well as correlations between groups.
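
For example, a one-way ANOVA comparing the three fertilizer groups introduced earlier might look like the sketch below (the growth figures are hypothetical, and Python with SciPy is an assumption rather than something this guide specifies):

```python
# One-way ANOVA: do mean growth rates differ across the three groups?
from scipy import stats

control_growth = [2.1, 2.4, 1.9, 2.2, 2.3]         # no fertilizer
megagro_growth = [3.0, 3.4, 2.8, 3.1, 3.3]         # hypothetical MegaGro plants
plant_brand_growth = [2.6, 2.9, 2.5, 2.7, 3.0]     # hypothetical Plant! plants

f_stat, p_value = stats.f_oneway(control_growth, megagro_growth,
                                 plant_brand_growth)
print(f"F = {f_stat:.2f}, p = {p_value:.4f}")

# A small p-value indicates that at least one group mean differs; post hoc
# comparisons are needed to say which fertilizer is responsible. ANCOVA
# would additionally adjust for a covariate such as each plant's starting
# height.
```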

Since we're on the subject of statistics, note that experimental and quasi-experimental research cannot state beyond a shadow of a doubt that a single cause will always produce any one effect. They can do no more than show a probability that one thing causes another. The probability that a result is due to random chance is an important measure in statistical analysis and in experimental research.

Example: Causality

Let's say you want to determine whether your new fertilizer, MegaGro, will increase the growth rate of plants. You begin by selecting a plant to treat with your fertilizer. Since the experiment is concerned with proving that MegaGro works, you need another plant, which receives no fertilizer at all, against which to compare how much change your fertilized plant displays. This is what is known as a control group.

Set up with a control group, which will receive no treatment, and an experimental group, which will get MegaGro, you must then address those variables that could invalidate your experiment. This can be an extensive and exhaustive process. You must ensure that you use the same kind of plant; that both groups are put in the same kind of soil; that they receive equal amounts of water and sun; that they receive the same amount of exposure to carbon-dioxide-exhaling researchers; and so on. In short, any other variable that might affect the growth of those plants, other than the fertilizer, must be the same for both plants. Otherwise, you can't prove absolutely that MegaGro is the only explanation for the increased growth of one of those plants.

Such an experiment can be done with more than two groups. You may not only want to show that MegaGro is an effective fertilizer, but also that it is better than its competitor brand, Plant!. All you need to do, then, is have one experimental group receiving MegaGro, one receiving Plant!, and the other (the control group) receiving no fertilizer. The fertilizer treatment is the only variable that may differ among the three groups; all other variables must be the same for the experiment to be valid.

Controlling variables allows the researcher to identify conditions that may affect the experiment's outcome. This may lead to alternative explanations that the researcher is willing to entertain in order to isolate only the variables judged significant. In the MegaGro experiment, you may be concerned with how fertile the soil is, but not with the plants' relative position in the window, as you don't think that the amount of shade they get will affect their growth rate. But what if it did? You would have to go about eliminating variables in order to determine which is the key factor. What if one plant receives more shade than the other, and the MegaGro plant, which received more shade, died? This might prompt you to formulate a plausible alternative explanation, which is a way of accounting for a result that differs from what you expected. You would then want to redo the study with equal amounts of sunlight.

Methods: Five Steps

Experimental research can be roughly divided into five phases:

Identifying a research problem

The process starts by clearly identifying the problem you want to study and considering what possible methods might lead to a solution. Then you choose the method you want to test and formulate a hypothesis to predict the outcome of the test.

For example, you may want to improve student essays, but you don't believe that teacher feedback is enough. You hypothesize that possible methods for writing improvement include peer workshopping or reading more example essays. Favoring the former, your experiment would try to determine whether peer workshopping improves writing in high school seniors. You state your hypothesis: peer workshopping prior to turning in a final draft will improve the quality of the student's essay.

Planning an experimental research study

The next step is to devise an experiment to test your hypothesis. In doing so, you must consider several factors. For example, how generalizable do you want your end results to be? Do you want to generalize about the entire population of high school seniors everywhere, or just the particular population of seniors at your specific school? This will determine how simple or complex the experiment will be. The amount of time and funding you have will also determine the size of your experiment.

Continuing the example from step one, you may want a small study at one school involving three teachers, each teaching two sections of the same course. The treatment in this experiment is peer workshopping. Each of the three teachers will assign the same essay assignment to both classes; the treatment group will participate in peer workshopping, while the control group will receive only teacher comments on their drafts.

Conducting the experiment

At the start of an experiment, the control and treatment groups must be selected. Whereas the "hard" sciences have the luxury of attempting to create truly equal groups, educators often find themselves forced to conduct their experiments based on self-selected groups, rather than on randomization. As was highlighted in the Basic Concepts section, this makes the study a quasi-experiment, since the researchers cannot control all of the variables.

For the peer workshopping experiment, let's say that it involves six classes and three teachers with a sample of students randomly selected from all the classes. Each teacher will have a class for a control group and a class for a treatment group. The essay assignment is given and the teachers are briefed not to change any of their teaching methods other than the use of peer workshopping. You may see here that this is an effort to control a possible variable: teaching style variance.

Analyzing the data

The fourth step is to collect and analyze the data. This is not solely a step where you collect the papers, read them, and declare your methods a success. You must show how successful they were. You must devise a scale by which you will evaluate the data you receive; therefore, you must decide which indicators will be, and will not be, important.

Continuing our example, the teachers' grades are first recorded, then the essays are evaluated for a change in sentence complexity, syntactical and grammatical errors, and overall length. Any statistical analysis is done at this time if you choose to do any. Notice here that the researcher has made judgments on what signals improved writing. It is not simply a matter of improved teacher grades, but a matter of what the researcher believes constitutes improved use of the language.
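
As a rough sketch of this analysis step, the chosen indicators can be combined into a scale and the groups compared statistically (the indicators, weights, scores, and the use of Python with SciPy below are all illustrative assumptions, not part of the study described):

```python
# Hypothetical indicators for each essay in the peer-workshopping study:
# mean sentence complexity, errors per 100 words, and overall length.
# The composite scale and its weights are illustrative only; a real study
# would need to justify them before collecting data.
from scipy import stats

def composite(complexity, errors_per_100, length_words):
    # Higher complexity and length raise the score; errors lower it.
    return 2.0 * complexity - 1.5 * errors_per_100 + 0.01 * length_words

control = [composite(c, e, l) for c, e, l in
           [(3.1, 4.2, 600), (2.8, 5.0, 550), (3.4, 3.9, 640), (2.9, 4.6, 580)]]
workshop = [composite(c, e, l) for c, e, l in
            [(3.6, 3.1, 680), (3.9, 2.8, 700), (3.3, 3.5, 650), (4.0, 2.6, 720)]]

t_stat, p_value = stats.ttest_ind(control, workshop)
print(f"control mean = {sum(control) / len(control):.2f}, "
      f"workshop mean = {sum(workshop) / len(workshop):.2f}, p = {p_value:.4f}")
```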

Writing the paper/presentation describing the findings

Once you have completed the experiment, you will want to share your findings by publishing an academic paper or giving a presentation. These papers usually have the following format, but it is not necessary to follow it strictly. Sections can be combined or omitted, depending on the structure of the experiment and the journal to which you submit your paper.

  • Abstract: Summarize the project: its aims, participants, basic methodology, results, and a brief interpretation.
  • Introduction: Set the context of the experiment.
  • Review of Literature: Provide a review of the literature in the specific area of study to show what work has been done. Should lead directly to the author's purpose for the study.
  • Statement of Purpose: Present the problem to be studied.
  • Participants: Describe in detail participants involved in the study; e.g., how many, etc. Provide as much information as possible.
  • Materials and Procedures: Clearly describe materials and procedures. Provide enough information so that the experiment can be replicated, but not so much information that it becomes unreadable. Include how participants were chosen, the tasks assigned them, how they were conducted, how data were evaluated, etc.
  • Results: Present the data in an organized fashion. If it is quantifiable, it is analyzed through statistical means. Avoid interpretation at this time.
  • Discussion: After presenting the results, interpret what has happened in the experiment. Base the discussion only on the data collected and as objective an interpretation as possible. Hypothesizing is possible here.
  • Limitations: Discuss factors that affect the results. Here, you can speculate how much generalization, or more likely, transferability, is possible based on results. This section is important for quasi-experimentation, since a quasi-experiment cannot control all of the variables that might affect the outcome of a study. You would discuss what variables you could not control.
  • Conclusion: Synthesize all of the above sections.
  • References: Document works cited in the correct format for the field.

Experimental and Quasi-Experimental Research: Issues and Commentary

Several issues are addressed in this section, including the use of experimental and quasi-experimental research in educational settings, the relevance of the methods to English studies, and ethical concerns regarding the methods.

Using Experimental and Quasi-Experimental Research in Educational Settings

Charting causal relationships in human settings.

Any time a human population is involved, prediction of causal relationships becomes cloudy and, some say, impossible. Many reasons exist for this; for example,

  • researchers in classrooms add a disturbing presence, causing students to act abnormally, consciously or unconsciously;
  • subjects try to please the researcher, just because of an apparent interest in them (known as the Hawthorne Effect); or, perhaps
  • the teacher as researcher is restricted by bias and time pressures.

But such confounding variables don't stop researchers from trying to identify causal relationships in education. Educators naturally experiment anyway, comparing groups, assessing the attributes of each, and making predictions based on an evaluation of alternatives. They look to research to support their intuitive practices, experimenting whenever they try to decide which instruction method will best encourage student improvement.

Combining Theory, Research, and Practice

The goal of educational research lies in combining theory, research, and practice. Educational researchers attempt to establish models of teaching practice, learning styles, curriculum development, and countless other educational issues. The aim is to "try to improve our understanding of education and to strive to find ways to have understanding contribute to the improvement of practice," one writer asserts (Floden 1996, p. 197).

In quasi-experimentation, researchers try to develop models by involving teachers as researchers, employing observational research techniques. Although results of this kind of research are context-dependent and difficult to generalize, they can act as a starting point for further study. The "educational researcher . . . provides guidelines and interpretive material intended to liberate the teacher's intelligence so that whatever artistry in teaching the teacher can achieve will be employed" (Eisner 1992, p. 8).

Bias and Rigor

Critics contend that the educational researcher is inherently biased, sample selection is arbitrary, and replication is impossible. The key to combating such criticism has to do with rigor. Rigor is established through close, proper attention to randomizing groups, time spent on a study, and questioning techniques. This allows more effective application of standards of quantitative research to qualitative research.

Often, teachers cannot wait for piles of experimentation data to be analyzed before using the teaching methods (Lauer and Asher 1988). They ultimately must assess whether the results of a study in a distant classroom are applicable in their own classrooms. And they must continuously test the effectiveness of their methods by using experimental and qualitative research simultaneously. In addition to statistics (quantitative), researchers may perform case studies or observational research (qualitative) in conjunction with, or prior to, experimentation.

Relevance to English Studies

Situations in English studies that might encourage use of experimental methods.

Whenever a researcher would like to see if a causal relationship exists between groups, experimental and quasi-experimental research can be a viable research tool. Researchers in English Studies might use experimentation when they believe a relationship exists between two variables, and they want to show that these two variables have a significant correlation (or causal relationship).

A benefit of experimentation is the ability to control variables, such as the amount of treatment, when it is given, to whom and so forth. Controlling variables allows researchers to gain insight into the relationships they believe exist. For example, a researcher has an idea that writing under pseudonyms encourages student participation in newsgroups. Researchers can control which students write under pseudonyms and which do not, then measure the outcomes. Researchers can then analyze results and determine if this particular variable alone causes increased participation.

Transferability-Applying Results

Experimentation and quasi-experimentation allow researchers to generate transferable results, with acceptance of those results depending upon experimental rigor. Transferability is an effective alternative to generalizability, which is difficult to rely upon in educational research. English scholars, reading results of experiments with a critical eye, ultimately decide if results will be implemented and how. They may even extend existing research by replicating experiments in the interest of generating new results and benefiting from multiple perspectives. These results will strengthen the study or discredit its findings.

Concerns English Scholars Express about Experiments

Researchers should carefully consider whether a particular method is feasible in humanities studies and whether it will yield the desired information. Some researchers recommend addressing pertinent issues by combining several research methods, such as survey, interview, ethnography, case study, content analysis, and experimentation (Lauer and Asher, 1988).

Advantages and Disadvantages of Experimental Research: Discussion

In educational research, experimentation is a way to gain insight into methods of instruction. Although teaching is context specific, results can provide a starting point for further study. Often, a teacher/researcher will have a "gut" feeling about an issue which can be explored through experimentation and a look at causal relationships. Through research, intuition can shape practice.

A preconception exists that information obtained through the scientific method is free of human inconsistencies. But since the scientific method is a matter of human construction, it is subject to human error. The researcher's personal bias may intrude upon the experiment as well. For example, certain preconceptions may dictate the course of the research and affect the behavior of the subjects. The issue may be compounded when, although many researchers are aware of the effect that their personal bias exerts on their own research, they are pressured to produce research that is accepted in their field of study as "legitimate" experimental research.

The researcher does bring bias to experimentation, but bias does not limit the ability to be reflective. An ethical researcher thinks critically about results and reports them after careful reflection. Concerns over bias can be leveled against any research method.

Often, the sample is not representative of the population, because the researcher does not have an opportunity to ensure representativeness. For example, subjects could be limited to one location, limited in number, or studied under constrained conditions and for too short a time.

Despite such inconsistencies in educational research, the researcher has control over the variables, increasing the possibility of determining the individual effect of each variable more precisely. Determining interactions between variables also becomes more feasible.

Even so, the results may be artificial. It can be argued that variables are manipulated so the experiment measures what researchers want to examine; therefore, the results are merely contrived products with no bearing on material reality. Artificial results are difficult to apply in practical situations, making it questionable to generalize from the results of a controlled study. Experimental research essentially decontextualizes a single question from a "real world" scenario, studies it under controlled conditions, and then tries to recontextualize the results back onto the "real world" scenario. The results may also be difficult to replicate.

Groups in an experiment may also not be comparable. Quasi-experimentation in educational research is widespread because not only are many researchers also teachers, but many subjects are also students. With the classroom as laboratory, it is difficult to implement randomizing or matching strategies. Often, students self-select into certain sections of a course on the basis of their own agendas and scheduling needs. Thus when, as often happens, one class is treated and the other used as a control, the groups may not actually be comparable. As one might imagine, people who register for a class that meets three times a week at eleven o'clock in the morning (young, no full-time job, night people) differ significantly from those who register for one on Monday evenings from seven to ten p.m. (older, full-time job, possibly more highly motivated). Each situation presents different variables, and your group might be completely different from the one in the study. Long-term studies are expensive and hard to reproduce. And although the same hypotheses are often tested by different researchers, various factors complicate attempts to compare or synthesize them. It is nearly impossible to be as rigorous as the natural-sciences model dictates.

Even when randomization of students is possible, problems arise. First, depending on the class size and the number of classes, the sample may be too small for the extraneous variables to cancel out. Second, the study population is not strictly a sample, because the population of students registered for a given class at a particular university is obviously not representative of the population of all students at large. For example, students at a suburban private liberal-arts college are typically young, white, and upper-middle class. In contrast, students at an urban community college tend to be older, poorer, and members of a racial minority. The differences can be construed as confounding variables: the first group may have fewer demands on its time, have less self-discipline, and benefit from superior secondary education. The second may have more demands, including a job and/or children, have more self-discipline, but an inferior secondary education. Selecting a population of subjects which is representative of the average of all post-secondary students is also a flawed solution, because the outcome of a treatment involving this group is not necessarily transferable to either the students at a community college or the students at the private college, nor are they universally generalizable.

When a human population is involved, experimental research becomes concerned with whether behavior can be predicted or studied with validity. Human response can be difficult to measure. Human behavior is dependent on individual responses. Rationalizing behavior through experimentation does not account for the process of thought, making outcomes of that process fallible (Eisenberg, 1996).

Nevertheless, we perform experiments daily anyway. When we brush our teeth every morning, we are experimenting to see whether this behavior will result in fewer cavities. We are relying on previous experimentation and transferring that experimentation to our daily lives.

Moreover, experimentation can be combined with other research methods to ensure rigor. Other qualitative methods, such as case study, ethnography, observational research, and interviews, can function as preconditions for experimentation or be conducted simultaneously to add validity to a study.

We have few alternatives to experimentation. Mere anecdotal research, for example, is unscientific, unreplicable, and easily manipulated. Should we rely on Ed walking into a faculty meeting and telling the story of Sally, who screamed, "I love writing!" ten times before she wrote her essay and produced a quality paper? Should all the other faculty members hear this anecdote and conclude that every other student should employ the same technique?

One final disadvantage: frequently, political pressure drives experimentation and forces unreliable results. Specific funding and support may drive the outcomes of experimentation and cause the results to be skewed. The reader of these results may not be aware of these biases and should approach experimentation with a critical eye.

Advantages and Disadvantages of Experimental Research: Quick Reference List

Experimental and quasi-experimental research can be summarized in terms of their advantages and disadvantages. This section combines and elaborates upon many points mentioned previously in this guide.

Advantages

  • gain insight into methods of instruction
  • intuitive practice shaped by research
  • teachers have bias but can be reflective
  • researcher can have control over variables
  • humans perform experiments anyway
  • can be combined with other research methods for rigor
  • use to determine what is best for a population
  • provides for greater transferability than anecdotal research

Disadvantages

  • subject to human error
  • personal bias of researcher may intrude
  • sample may not be representative
  • can produce artificial results
  • results may only apply to one situation and may be difficult to replicate
  • groups may not be comparable
  • human response can be difficult to measure
  • political pressure may skew results

Ethical Concerns

Experimental research may be manipulated on both ends of the spectrum: by the researcher and by the reader. Researchers who report on experimental research, faced with naive readers, encounter ethical concerns. While creating an experiment, they may let particular objectives and intended uses of the results drive, and thereby skew, its design. Looking for specific results, they may ask only the questions and examine only the data that support the desired conclusions, dismissing conflicting research findings.

Editors and journals do not publish only trouble-free material. As readers of experiments, members of the press might report selected and isolated parts of a study to the public, essentially generalizing the data to a population the researcher never intended. Take, for example, oat bran. A few years ago, the press reported that oat bran reduces high blood pressure by reducing cholesterol. But that bit of information was taken out of context. The actual study found that when people ate more oat bran, they reduced their intake of saturated fats high in cholesterol. People started eating oat bran muffins by the ton, assuming a causal relationship when in actuality a number of confounding variables might influence the causal link.

Ultimately, ethical use and reportage of experimentation should be addressed by researchers, reporters and readers alike.

Reporters of experimental research often seek to recognize their audience's level of knowledge and try not to mislead readers. And readers must rely on the author's skill and integrity to point out errors and limitations. The relationship between researcher and reader may not sound like a problem, but after spending months or years on a project to produce no significant results, it may be tempting to manipulate the data to show significant results in order to jockey for grants and tenure.

Meanwhile, the reader may uncritically accept results that gain an air of validity simply by being published in a journal. However, research that lacks credibility often is not published; consequently, researchers who fail to publish run the risk of being denied grants, promotions, jobs, and tenure. While few researchers are anything but earnest in their attempts to conduct well-designed experiments and present the results in good faith, rhetorical considerations often dictate a certain minimization of methodological flaws.

Concerns arise if researchers do not report all of their results, or otherwise alter them. This phenomenon is counterbalanced, however, in that professionals are also rewarded for publishing critiques of others' work. Because the author of an experimental study is in essence making an argument for the existence of a causal relationship, he or she must be concerned not only with its integrity, but also with its presentation. Achieving persuasiveness in any kind of writing involves several elements: choosing a topic of interest, providing convincing evidence for one's argument, using tone and voice to project credibility, and organizing the material in a way that meets expectations for a logical sequence. Of course, what is regarded as pertinent, accepted as evidence, required for credibility, and understood as logical varies according to context. If experimental researchers hope to make an impact on the community of professionals in their field, they must attend to the standards and orthodoxies of that audience.

Related Links

Contrasts: Traditional and computer-supported writing classrooms. This Web site presents a discussion of the Transitions Study, a year-long exploration of teachers and students in computer-supported and traditional writing classrooms. It includes a description of the study, the rationale for conducting it, and its results and implications.

http://kairos.technorhetoric.net/2.2/features/reflections/page1.htm

Annotated Bibliography

A cozy world of trivial pursuits? (1996, June 28) The Times Educational Supplement . 4174, pp. 14-15.

A critique discounting the current methods Great Britain employs to fund and disseminate educational research. The belief is that research is performed for fellow researchers, not the teaching public, and that implications for day-to-day practice are never addressed.

Anderson, J. A. (1979, Nov. 10-13). Research as argument: the experimental form. Paper presented at the annual meeting of the Speech Communication Association, San Antonio, TX.

In this paper, the scientist who uses the experimental form does so in order to explain that which is verified through prediction.

Anderson, Linda M. (1979). Classroom-based experimental studies of teaching effectiveness in elementary schools . (Technical Report UTR&D-R- 4102). Austin: Research and Development Center for Teacher Education, University of Texas.

Three recent large-scale experimental studies have built on a database established through several correlational studies of teaching effectiveness in elementary school.

Asher, J. W. (1976). Educational research and evaluation methods . Boston: Little, Brown.

Abstract unavailable by press time.

Babbie, Earl R. (1979). The Practice of Social Research . Belmont, CA: Wadsworth.

A textbook containing discussions of several research methodologies used in social science research.

Bangert-Drowns, R.L. (1993). The word processor as instructional tool: a meta-analysis of word processing in writing instruction. Review of Educational Research, 63 (1), 69-93.

Beach, R. (1993). The effects of between-draft teacher evaluation versus student self-evaluation on high school students' revising of rough drafts. Research in the Teaching of English, 13 , 111-119.

The question of whether teacher evaluation or guided self-evaluation of rough drafts results in increased revision was addressed in Beach's study. Differences in the effects of teacher evaluations, guided self-evaluation (using prepared guidelines), and no evaluation of rough drafts were examined. The final drafts of students (10th, 11th, and 12th graders) were compared with their rough drafts and rated by judges according to degree of change.

Beishuizen, J. & Moonen, J. (1992). Research in technology enriched schools: a case for cooperation between teachers and researchers . (ERIC Technical Report ED351006).

This paper describes the research strategies employed in the Dutch Technology Enriched Schools project to encourage extensive and intensive use of computers in a small number of secondary schools, and to study the effects of computer use on the classroom, the curriculum, and school administration and management.

Borg, W. P. (1989). Educational Research: an Introduction . (5th ed.). New York: Longman.

An overview of educational research methodology, including literature review and discussion of approaches to research, experimental design, statistical analysis, ethics, and rhetorical presentation of research findings.

Campbell, D. T., & Stanley, J. C. (1963). Experimental and quasi-experimental designs for research . Boston: Houghton Mifflin.

A classic overview of research designs.

Campbell, D.T. (1988). Methodology and epistemology for social science: selected papers . ed. E. S. Overman. Chicago: University of Chicago Press.

This is an overview of Campbell's 40-year career and his work. It covers in seven parts measurement, experimental design, applied social experimentation, interpretive social science, epistemology and sociology of science. Includes an extensive bibliography.

Caporaso, J. A., & Roos, Jr., L. L. (Eds.). Quasi-experimental approaches: Testing theory and evaluating policy. Evanston, IL: Northwestern University Press.

A collection of articles concerned with explicating the underlying assumptions of quasi-experimentation and relating these to true experimentation. With an emphasis on design. Includes a glossary of terms.

Collier, R. Writing and the word processor: How wary of the gift-giver should we be? Unpublished manuscript.

Unpublished typescript. Charts the developments to date in computers and composition and speculates about the future within the framework of Willie Sypher's model of the evolution of creative discovery.

Cook, T.D. & Campbell, D.T. (1979). Quasi-experimentation: design and analysis issues for field settings . Boston: Houghton Mifflin Co.

The authors write that this book "presents some quasi-experimental designs and design features that can be used in many social research settings. The designs serve to probe causal hypotheses about a wide variety of substantive issues in both basic and applied research."

Cutler, A. (1970). An experimental method for semantic field study. Linguistic Communication, 2 , N. pag.

This paper emphasizes the need for empirical research and objective discovery procedures in semantics, and illustrates a method by which these goals may be obtained.

Daniels, L. B. (1996, Summer). Eisenberg's Heisenberg: The indeterminancies of rationality. Curriculum Inquiry, 26 , 181-92.

Places Eisenberg's theories in relation to the death of foundationalism by showing that he distorts rational studies into a form of relativism. He looks at Eisenberg's ideas on indeterminacy, methods and evidence, what he is against and what we should think of what he says.

Danziger, K. (1990). Constructing the subject: Historical origins of psychological research. Cambridge: Cambridge University Press.

Danziger stresses the importance of being aware of the framework in which research operates and of the essentially social nature of scientific activity.

Diener, E., et al. (1972, December). Leakage of experimental information to potential future subjects by debriefed subjects. Journal of Experimental Research in Personality , 264-67.

Research regarding research: an investigation of the effects on the outcome of an experiment in which information about the experiment had been leaked to subjects. The study concludes that such leakage is not a significant problem.

Dudley-Marling, C., & Rhodes, L. K. (1989). Reflecting on a close encounter with experimental research. Canadian Journal of English Language Arts. 12 , 24-28.

Researchers Dudley-Marling and Rhodes address some problems they encountered in their experimental approach to a study of reading comprehension. This article discusses the limitations of experimental research and presents an alternative to experimental or quantitative research.

Edgington, E. S. (1985). Random assignment and experimental research. Educational Administration Quarterly, 21 , N. pag.

Edgington explores ways in which random assignment can be a part of field studies. The author discusses both non-experimental and experimental research and the need for using random assignment.

Eisenberg, J. (1996, Summer). Response to critiques by R. Floden, J. Zeuli, and L. Daniels. Curriculum Inquiry, 26 , 199-201.

A response to critiques of his argument that rational educational research methods are at best suspect and at worst futile. He believes indeterminacy controls this method and worries that chaotic research is failing students.

Eisner, E. (1992, July). Are all causal claims positivistic? A reply to Francis Schrag. Educational Researcher, 21 (5), 8-9.

Eisner responds to Schrag, who claimed that critics like Eisner cannot escape a positivistic paradigm whatever attempts they make to do so. Eisner argues that Schrag essentially misses the point by trying to argue for the paradigm solely on the basis of cause and effect without including the rest of positivistic philosophy. This weakens Schrag's argument against multiple modal methods, which Eisner argues provide opportunities to apply the appropriate research design where it is most applicable.

Floden, R.E. (1996, Summer). Educational research: limited, but worthwhile and maybe a bargain. (response to J.A. Eisenberg). Curriculum Inquiry, 26 , 193-7.

Responds to John Eisenberg's critique of educational research by asserting the connection between improvement of practice and research results. He places high value on teacher discrepancy and on the knowledge that research informs practice.

Fortune, J. C., & Hutson, B. A. (1994, March/April). Selecting models for measuring change when true experimental conditions do not exist. Journal of Educational Research, 197-206.

This article reviews methods for minimizing the effects of nonideal experimental conditions by optimally organizing models for the measurement of change.

Fox, R. F. (1980). Treatment of writing apprehension and its effects on composition. Research in the Teaching of English, 14 , 39-49.

The main purpose of Fox's study was to investigate the effects of two methods of teaching writing on writing apprehension among entry-level composition students. A conventional teaching procedure was used with a control group, while a workshop method was employed with the treatment group.

Gadamer, H-G. (1976). Philosophical hermeneutics . (D. E. Linge, Trans.). Berkeley, CA: University of California Press.

A collection of essays with the common themes of the mediation of experience through language, the impossibility of objectivity, and the importance of context in interpretation.

Gaise, S. J. (1981). Experimental vs. non-experimental research on classroom second language learning. Bilingual Education Paper Series, 5 , N. pag.

The aims of classroom-centered research on second language learning and teaching are considered and contrasted with the experimental approach.

Giordano, G. (1983). Commentary: Is experimental research snowing us? Journal of Reading, 27 , 5-7.

Do educational research findings actually benefit teachers and students? Giordano states his opinion that research may be helpful to teaching, but is not essential and often is unnecessary.

Goldenson, D. R. (1978, March). An alternative view about the role of the secondary school in political socialization: A field-experimental study of theory and research in social education. Theory and Research in Social Education , 44-72.

This study concludes that when political discussion among experimental groups of secondary school students is led by a teacher, the degree to which the students' views were impacted is proportional to the credibility of the teacher.

Grossman, J., and J. P. Tierney. (1993, October). The fallibility of comparison groups. Evaluation Review , 556-71.

Grossman and Tierney present evidence to suggest that comparison groups are not the same as nontreatment groups.

Harnisch, D. L. (1992). Human judgment and the logic of evidence: A critical examination of research methods in special education transition literature. In D. L. Harnisch et al. (Eds.), Selected readings in transition.

This chapter describes several common types of research studies in special education transition literature and the threats to their validity.

Hawisher, G. E. (1989). Research and recommendations for computers and composition. In G. Hawisher and C. Selfe. (Eds.), Critical Perspectives on Computers and Composition Instruction . (pp. 44-69). New York: Teacher's College Press.

An overview of research in computers and composition to date. Includes a synthesis grid of experimental research.

Hillocks, G. Jr. (1982). The interaction of instruction, teacher comment, and revision in teaching the composing process. Research in the Teaching of English, 16 , 261-278.

Hillocks conducted a study using three treatments (observational or data-collecting activities prior to writing, use or absence of revisions, and either brief or lengthy teacher comments) to identify effective methods of teaching composition to seventh and eighth graders.

Jenkinson, J. C. (1989). Research design in the experimental study of intellectual disability. International Journal of Disability, Development, and Education, 69-84.

This article catalogues the difficulties of conducting experimental research where the subjects are intellectually disabled and suggests alternative research strategies.

Jones, R. A. (1985). Research Methods in the Social and Behavioral Sciences. Sunderland, MA: Sinauer Associates, Inc.

A textbook designed to provide an overview of research strategies in the social sciences, including survey, content analysis, ethnographic approaches, and experimentation. The author emphasizes the importance of applying strategies appropriately and in variety.

Kamil, M. L., Langer, J. A., & Shanahan, T. (1985). Understanding research in reading and writing . Newton, Massachusetts: Allyn and Bacon.

Examines a wide variety of problems in reading and writing, with a broad range of techniques, from different perspectives.

Kennedy, J. L. (1985). An Introduction to the Design and Analysis of Experiments in Behavioral Research . Lanham, MD: University Press of America.

An introductory textbook of psychological and educational research.

Keppel, G. (1991). Design and analysis: a researcher's handbook . Englewood Cliffs, NJ: Prentice Hall.

This updates Keppel's earlier book subtitled "a student's handbook." Focuses on extensive information about analytical research and gives a basic picture of research in psychology. Covers a range of statistical topics. Includes a subject and name index, as well as a glossary.

Knowles, G., Elija, R., & Broadwater, K. (1996, Spring/Summer). Teacher research: enhancing the preparation of teachers? Teaching Education, 8 , 123-31.

Researchers looked at one teacher candidate who participated in a class which designed their own research project correlating to a question they would like answered in the teaching world. The goal of the study was to see if preservice teachers developed reflective practice by researching appropriate classroom contexts.

Lace, J., & De Corte, E. (1986, April 16-20). Research on media in western Europe: A myth of Sisyphus? Paper presented at the annual meeting of the American Educational Research Association. San Francisco.

Identifies main trends in media research in western Europe, with emphasis on three successive stages since 1960: tools technology, systems technology, and reflective technology.

Latta, A. (1996, Spring/Summer). Teacher as researcher: selected resources. Teaching Education, 8 , 155-60.

An annotated bibliography on educational research including milestones of thought, practical applications, successful outcomes, seminal works, and immediate practical applications.

Lauer, J. M., & Asher, J. W. (1988). Composition research: Empirical designs . New York: Oxford University Press.

Approaching experimentation from a humanist's perspective, the authors focus on major research designs: case studies, ethnographies, sampling and surveys, quantitative descriptive studies, measurement, true experiments, quasi-experiments, meta-analyses, and program evaluations. The book takes on the challenge of bridging the language of social science with that of the humanities. Includes name and subject indexes, as well as a glossary and a glossary of symbols.

Mishler, E. G. (1979). Meaning in context: Is there any other kind? Harvard Educational Review, 49 , 1-19.

Contextual importance has been largely ignored by traditional research approaches in the social/behavioral sciences and in their application to the education field. Developmental and social psychologists have increasingly noted the inadequacies of this approach. Drawing examples from phenomenology, sociolinguistics, and ethnomethodology, the author proposes alternative approaches for studying meaning in context.

Mitroff, I., & Bonoma, T. V. (1978, May). Psychological assumptions, experimentations, and real world problems: A critique and an alternate approach to evaluation. Evaluation Quarterly , 235-60.

The authors advance the notion of dialectic as a means to clarify and examine the underlying assumptions of experimental research methodology, both in highly controlled situations and in social evaluation.

Muller, E. W. (1985). Application of experimental and quasi-experimental research designs to educational software evaluation. Educational Technology, 25 , 27-31.

Muller proposes a set of guidelines for the use of experimental and quasi-experimental methods of research in evaluating educational software. By obtaining empirical evidence of student performance, it is possible to evaluate whether programs are having the desired learning effect.

Murray, S., et al. (1979, April 8-12). Technical issues as threats to internal validity of experimental and quasi-experimental designs . San Francisco: University of California.

The article reviews three evaluation models and analyzes the flaws common to them. Remedies are suggested.

Muter, P., & Maurutto, P. (1991). Reading and skimming from computer screens and books: The paperless office revisited? Behavior and Information Technology, 10 (4), 257-66.

The researchers test for reading and skimming effectiveness, defined as accuracy combined with speed, for written text compared to text on a computer monitor. They conclude that, given optimal on-line conditions, both are equally effective.

O'Donnell, A., et al. (1992). The impact of cooperative writing. In J. R. Hayes, et al. (Eds.), Reading empirical research studies: The rhetoric of research . (pp. 371-84). Hillsdale, NJ: Lawrence Erlbaum Associates.

A model of experimental design. The authors investigate the efficacy of cooperative writing strategies, as well as the transferability of skills learned to other, individual writing situations.

Palmer, D. (1988). Looking at philosophy . Mountain View, CA: Mayfield Publishing.

An introductory text with incisive but understandable discussions of the major movements and thinkers in philosophy from the Pre-Socratics through Sartre. With illustrations by the author. Includes a glossary.

Phelps-Gunn, T., & Phelps-Terasaki, D. (1982). Written language instruction: Theory and remediation . London: Aspen Systems Corporation.

The lack of research in written expression is addressed and an application on the Total Writing Process Model is presented.

Poetter, T. (1996, Spring/Summer). From resistance to excitement: becoming qualitative researchers and reflective practitioners. Teaching Education, 8 , 109-19.

An education professor reveals his own problematic research when he attempted to institute an educational research component in a teacher preparation program. He encountered dissent from students and cooperating professionals but ultimately was rewarded with excitement toward research and a recognized connection between research and practice.

Purves, A. C. (1992). Reflections on research and assessment in written composition. Research in the Teaching of English, 26 .

Three issues concerning research and assessment in writing are discussed: 1) school writing is a matter of products, not process; 2) school writing is an ill-defined domain; and 3) the quality of school writing is what observers report they see. Purves discusses these issues while looking at data collected in a ten-year study of achievement in written composition in fourteen countries.

Rathus, S. A. (1987). Psychology . (3rd ed.). Poughkeepsie, NY: Holt, Rinehart, and Winston.

An introductory psychology textbook. Includes overviews of the major movements in psychology, discussions of prominent examples of experimental research, and a basic explanation of relevant physiological factors. With chapter summaries.

Reiser, R. A. (1982). Improving the research skills of instructional designers. Educational Technology, 22 , 19-21.

In his paper, Reiser starts by stating the importance of research in advancing the field of education, and points out that graduate students in instructional design lack the proper skills to conduct research. The paper then outlines the practicum in the Instructional Systems Program at Florida State University, which includes: 1) planning and conducting an experimental research study; 2) writing a manuscript describing the study; and 3) giving an oral presentation of the research findings.

Report on education research . (Journal). Washington, DC: Capitol Publication, Education News Services Division.

This is an independent bi-weekly newsletter on research in education and learning. It has been published since September 1969.

Rossell, C. H. (1986). Why is bilingual education research so bad?: Critique of the Walsh and Carballo study of Massachusetts bilingual education programs . Boston: Center for Applied Social Science, Boston University. (ERIC Working Paper 86-5).

The Walsh and Carballo evaluation of the effectiveness of transitional bilingual education programs in five Massachusetts communities has five flaws, which are discussed in detail.

Rubin, D. L., & Greene, K. (1992). Gender-typical style in written language. Research in the Teaching of English, 26.

This study was designed to find out whether the writing styles of men and women differ. Rubin and Greene discuss the presuppositions that women are better writers than men.

Sawin, E. (1992). Reaction: Experimental research in the context of other methods. School of Education Review, 4 , 18-21.

Sawin responds to Gage's article on methodologies and issues in educational research. He agrees with most of the article, but suggests that the concept of 'scientific' should not be regarded in absolute terms, and recommends more emphasis on the scientific method. He also questions the value of experiments over other types of research.

Schoonmaker, W. E. (1984). Improving classroom instruction: A model for experimental research. The Technology Teacher, 44, 24-25.

The model outlined in this article tries to bridge the gap between classroom practice and laboratory research, using what Schoonmaker calls active research. Research is conducted in the classroom with the students and is used to determine which of two methods of classroom instruction chosen by the teacher is more effective.

Schrag, F. (1992). In defense of positivist research paradigms. Educational Researcher, 21(5), 5-8.

A controversial defense of the use of positivistic research methods to evaluate educational strategies; the author takes on Eisner, Erickson, and Popkewitz.

Smith, J. (1997). The stories educational researchers tell about themselves. Educational Researcher, 33 (3), 4-11.

Recapitulates the main features of an ongoing debate between advocates for using the vocabularies of traditional language arts and whole language in educational research. An "impasse" exists where advocates "do not share a theoretical disposition concerning both language instruction and the nature of research," Smith writes (p. 6). He includes a very comprehensive history of the debate between traditional research methodology and qualitative methods and vocabularies. Definitely worth a read for graduate students.

Smith, N. L. (1980). The feasibility and desirability of experimental methods in evaluation. Evaluation and Program Planning: An International Journal , 251-55.

Smith identifies the conditions under which experimental research is most desirable. Includes a review of current thinking and controversies.

Stewart, N. R., & Johnson, R. G. (1986, March 16-20). An evaluation of experimental methodology in counseling and counselor education research. Paper presented at the annual meeting of the American Educational Research Association, San Francisco.

The purpose of this study was to evaluate the quality of experimental research in counseling and counselor education published from 1976 through 1984.

Spector, P. E. (1990). Research Designs. Newbury Park, California: Sage Publications.

In this book, Spector introduces the basic principles of experimental and nonexperimental design in the social sciences.

Tait, P. E. (1984). Do-it-yourself evaluation of experimental research. Journal of Visual Impairment and Blindness, 78 , 356-363 .

Tait's goal is to provide the reader who is unfamiliar with experimental research or statistics with the basic skills necessary for the evaluation of research studies.

Walsh, S. M. (1990). The current conflict between case study and experimental research: A breakthrough study derives benefits from both . (ERIC Document Number ED339721).

This paper describes a study that was not experimentally designed, but its major findings were generalizable to the overall population of writers in college freshman composition classes. The study was not a case study, but it provided insights into the attitudes and feelings of small clusters of student writers.

Waters, G. R. (1976). Experimental designs in communication research. Journal of Business Communication, 14 .

The paper presents a series of discussions on the general elements of experimental design and the scientific process and relates these elements to the field of communication.

Welch, W. W. (March 1969). The selection of a national random sample of teachers for experimental curriculum evaluation. Scholastic Science and Math , 210-216.

Members of the evaluation section of Harvard Project Physics describe what is said to be the first attempt to select a national random sample of teachers, and list six steps for doing so. Cost and comparison with a volunteer group are also discussed.

Winer, B.J. (1971). Statistical principles in experimental design , (2nd ed.). New York: McGraw-Hill.

Combines theory and application discussions to give readers a better understanding of the logic behind statistical aspects of experimental design. Introduces the broad topic of design, then goes into considerable detail. Not for light reading. Bring your aspirin if you like statistics; bring morphine if you're a humanist.

Winn, B. (1986, January 16-21). Emerging trends in educational technology research. Paper presented at the Annual Convention of the Association for Educational Communication Technology.

This examination of the topic of research in educational technology addresses four major areas: (1) why research is conducted in this area and the characteristics of that research; (2) the types of research questions that should or should not be addressed; (3) the most appropriate methodologies for finding answers to research questions; and (4) the characteristics of a research report that make it good and ultimately suitable for publication.

Citation Information

Luann Barnes, Jennifer Hauser, Luana Heikes, Anthony J. Hernandez, Paul Tim Richard, Katherine Ross, Guo Hua Yang, and Mike Palmquist. (1994-2024). Experimental and Quasi-Experimental Research. The WAC Clearinghouse. Colorado State University. Available at https://wac.colostate.edu/repository/writing/guides/.

Copyright Information

Copyright © 1994-2024 Colorado State University and/or this site's authors, developers, and contributors . Some material displayed on this site is used with permission.


Statistics By Jim

Making statistics intuitive

Quasi Experimental Design Overview & Examples

By Jim Frost

What is a Quasi Experimental Design?

A quasi experimental design is a method for identifying causal relationships that does not randomly assign participants to the experimental groups. Instead, researchers use a non-random process. For example, they might use an eligibility cutoff score or preexisting groups to determine who receives the treatment.


Quasi-experimental research is a design that closely resembles experimental research but is different. The term “quasi” means “resembling,” so you can think of it as a cousin to actual experiments. In these studies, researchers can manipulate an independent variable — that is, they change one factor to see what effect it has. However, unlike true experimental research, participants are not randomly assigned to different groups.

Learn more about Experimental Designs: Definition & Types .

When to Use Quasi-Experimental Design

Researchers typically use a quasi-experimental design because they can’t randomize due to practical or ethical concerns. For example:

  • Practical Constraints : A school interested in testing a new teaching method can only implement it in preexisting classes and cannot randomly assign students.
  • Ethical Concerns : A medical study might not be able to randomly assign participants to a treatment group for an experimental medication when they are already taking a proven drug.

Quasi-experimental designs also come in handy when researchers want to study the effects of naturally occurring events, like policy changes or environmental shifts, where they can’t control who is exposed to the treatment.

Quasi-experimental designs occupy a unique position in the spectrum of research methodologies, sitting between observational studies and true experiments. This middle ground offers a blend of both worlds, addressing some limitations of purely observational studies while navigating the constraints often accompanying true experiments.

A significant advantage of quasi-experimental research over purely observational studies and correlational research is that it addresses the issue of directionality, determining which variable is the cause and which is the effect. In quasi-experiments, an intervention typically occurs during the investigation, and the researchers record outcomes before and after it, increasing the confidence that it causes the observed changes.

However, it’s crucial to recognize its limitations as well. Controlling confounding variables is a larger concern for a quasi-experimental design than a true experiment because it lacks random assignment.

In sum, quasi-experimental designs offer a valuable research approach when random assignment is not feasible, providing a more structured and controlled framework than observational studies while acknowledging and attempting to address potential confounders.

Types of Quasi-Experimental Designs and Examples

Quasi-experimental studies use various methods, depending on the scenario.

Natural Experiments

This design uses naturally occurring events or changes to create the treatment and control groups. Researchers compare outcomes between those whom the event affected and those it did not affect. Analysts use statistical controls to account for confounders that the researchers must also measure.

Natural experiments are related to observational studies, but they allow for a clearer causality inference because the external event or policy change provides both a form of quasi-random group assignment and a definite start date for the intervention.

For example, in a natural experiment utilizing a quasi-experimental design, researchers study the impact of a significant economic policy change on small business growth. The policy is implemented in one state but not in neighboring states. This scenario creates an unplanned experimental setup, where the state with the new policy serves as the treatment group, and the neighboring states act as the control group.

Researchers are primarily interested in small business growth rates but need to record various confounders that can impact growth rates. Hence, they record state economic indicators, investment levels, and employment figures. By recording these metrics across the states, they can include them in the model as covariates and control them statistically. This method allows researchers to estimate differences in small business growth due to the policy itself, separate from the various confounders.
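To make the covariate-adjustment step concrete, here is a minimal sketch in Python using the statsmodels formula interface. The data and column names (growth, policy, econ_index, investment, employment) are invented placeholders, not values from any actual policy study; the coefficient on the policy indicator is the adjusted estimate of the policy effect.

    # Sketch: estimating a policy effect while statistically controlling for
    # measured confounders (invented data and column names).
    import pandas as pd
    import statsmodels.formula.api as smf

    # Each row is one state-year observation (hypothetical).
    df = pd.DataFrame({
        "growth":     [2.1, 3.4, 1.8, 4.0, 2.9, 3.7, 1.5, 3.2],  # small business growth (%)
        "policy":     [0,   1,   0,   1,   0,   1,   0,   1],    # 1 = state with the new policy
        "econ_index": [98, 101,  97, 103,  99, 102,  96, 100],   # state economic indicator
        "investment": [1.2, 1.6, 1.1, 1.8, 1.3, 1.7, 1.0, 1.5],  # investment level
        "employment": [94,  95,  93,  96,  94,  96,  92,  95],   # employment figure
    })

    # OLS regression: the coefficient on 'policy' estimates the growth difference
    # associated with the policy, adjusted for the measured covariates.
    model = smf.ols("growth ~ policy + econ_index + investment + employment", data=df).fit()
    print(model.params["policy"])   # adjusted estimate of the policy effect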

Nonequivalent Groups Design

This method involves matching existing groups that are similar but not identical. Researchers attempt to find groups that are as equivalent as possible, particularly for factors likely to affect the outcome.

For instance, researchers use a nonequivalent groups quasi-experimental design to evaluate the effectiveness of a new teaching method in improving students’ mathematics performance. A school district considering the teaching method is planning the study. Students are already divided into schools, preventing random assignment.

The researchers matched two schools with similar demographics, baseline academic performance, and resources. The school using the traditional methodology is the control, while the other uses the new approach. Researchers are evaluating differences in educational outcomes between the two methods.

They perform a pretest to identify differences between the schools that might affect the outcome and include them as covariates to control for confounding. They also record outcomes before and after the intervention to have a larger context for the changes they observe.
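One simple way to use the before-and-after measurements from both schools is a difference-in-differences comparison: the change in the new-method school minus the change in the traditional school. The text above does not prescribe this particular estimator, and the scores below are made up, so treat this as one illustrative possibility rather than the study's actual analysis.

    # Sketch: difference-in-differences with made-up average math scores.
    # 'treat' = school using the new teaching method, 'control' = traditional school.
    pre  = {"treat": 61.0, "control": 63.0}   # average score before the intervention
    post = {"treat": 74.0, "control": 68.0}   # average score after the intervention

    change_treat   = post["treat"] - pre["treat"]        # 13.0 points
    change_control = post["control"] - pre["control"]    #  5.0 points

    # The difference in the two changes estimates the effect of the new method,
    # net of whatever improvement both schools would have shown anyway.
    did_estimate = change_treat - change_control
    print(f"Estimated effect of the new teaching method: {did_estimate:.1f} points")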

Regression Discontinuity

This process assigns subjects to a treatment or control group based on a predetermined cutoff point (e.g., a test score). The analysis primarily focuses on participants near the cutoff point, as they are likely similar except for the treatment received. By comparing participants just above and below the cutoff, the design controls for confounders that vary smoothly around the cutoff.

For example, in a regression discontinuity quasi-experimental design focusing on a new medical treatment for depression, researchers use depression scores as the cutoff point. Individuals with depression scores just above a certain threshold are assigned to receive the latest treatment, while those just below the threshold do not receive it. This method creates two closely matched groups: one that barely qualifies for treatment and one that barely misses out.

By comparing the mental health outcomes of these two groups over time, researchers can assess the effectiveness of the new treatment. The assumption is that the only significant difference between the groups is whether they received the treatment, thereby isolating its impact on depression outcomes.
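A bare-bones way to analyze such a design is to fit a separate regression line on each side of the cutoff and compare the two predictions at the cutoff itself. The sketch below uses simulated depression scores with a built-in treatment benefit; real regression-discontinuity analyses involve careful bandwidth and model choices that are glossed over here.

    # Sketch: regression discontinuity with simulated depression scores.
    import numpy as np

    rng = np.random.default_rng(0)
    n = 2000
    baseline = rng.uniform(0, 40, n)          # baseline depression score
    cutoff = 20
    treated = baseline >= cutoff              # new treatment assigned at or above the cutoff

    # Hypothetical follow-up outcome: depends smoothly on the baseline score,
    # plus a built-in treatment benefit of -4 points.
    followup = 0.8 * baseline - 4 * treated + rng.normal(0, 2, n)

    bandwidth = 5
    left  = (baseline >= cutoff - bandwidth) & (baseline < cutoff)
    right = (baseline >= cutoff) & (baseline <= cutoff + bandwidth)

    # Fit a line on each side and compare the predicted outcomes exactly at the cutoff.
    fit_left  = np.polyfit(baseline[left],  followup[left],  1)
    fit_right = np.polyfit(baseline[right], followup[right], 1)
    effect = np.polyval(fit_right, cutoff) - np.polyval(fit_left, cutoff)
    print(f"Estimated treatment effect at the cutoff: {effect:.2f}")   # close to -4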

Controlling Confounders in a Quasi-Experimental Design

Accounting for confounding variables is a challenging but essential task for a quasi-experimental design.

In a true experiment, the random assignment process equalizes confounders across the groups to nullify their overall effect. It’s the gold standard because it works on all confounders, known and unknown.

Unfortunately, the lack of random assignment can allow differences between the groups to exist before the intervention. These confounding factors might ultimately explain the results rather than the intervention.

Consequently, researchers must use other methods to roughly equalize the groups, such as matching or cutoff values, or statistically adjust for preexisting differences they measure, to reduce the impact of confounders.

A key strength of quasi-experiments is their frequent use of “pre-post testing.” This approach involves testing participants before the intervention to check for preexisting differences between groups that could impact the study’s outcome. By identifying these variables early on and including them as covariates, researchers can more effectively control potential confounders in their statistical analysis.

Additionally, researchers frequently track outcomes before and after the intervention to better understand the context for changes they observe.

Statisticians consider these methods to be less effective than randomization. Hence, quasi-experiments fall somewhere in the middle when it comes to internal validity , or how well the study can identify causal relationships versus mere correlation . They’re more conclusive than correlational studies but not as solid as true experiments.

In conclusion, quasi-experimental designs offer researchers a versatile and practical approach when random assignment is not feasible. This methodology bridges the gap between controlled experiments and observational studies, providing a valuable tool for investigating cause-and-effect relationships in real-world settings. Researchers can address ethical and logistical constraints by understanding and leveraging the different types of quasi-experimental designs while still obtaining insightful and meaningful results.

Cook, T. D., & Campbell, D. T. (1979).  Quasi-experimentation: Design & analysis issues in field settings . Boston, MA: Houghton Mifflin


Scientific Research and Methodology: An introduction to quantitative research and statistics

4 Types of research studies

You have learnt how to ask an RQ and identify contributors to variation in the values of the response variable. In this chapter, you will learn to:

  • identify and describe the types of quantitative research studies.
  • compare and distinguish experimental and observational studies.
  • describe and identify the directionality in observational studies.
  • describe and identify true experimental and quasi-experimental studies.


4.1 Introduction

Chapter  2 introduced four types of research questions: descriptive, relational, repeated-measures and correlational. This chapter discusses the types of research studies needed to answer these RQs, while Chaps.  5 to  9 discuss the details of designing these studies and collecting the data.

The RQ implies what data must be collected from the individuals in the study (the response and explanatory variables), but the data can be collected in many different ways. Different types of studies are used to answer different types of RQs:

  • descriptive studies (Sect.  4.2 ) answer descriptive RQs;
  • observational studies (Sect.  4.3 ) answer RQs with an explanatory variable but no intervention ; and
  • experimental studies (Sect.  4.4 ) answer RQs with an explanatory variable and an intervention .

Observational and experimental studies are sometimes collectively called analytical studies .

4.2 Descriptive studies

Descriptive studies answer descriptive RQs.

Definition 4.1 (Descriptive study) Descriptive studies answer descriptive research questions.


Example 4.1 (Descriptive study) A study in Hong Kong determined the percentage of people wearing face masks in a variety of situations ( L. Y. Lee et al. 2020 ) . This is a descriptive study, where the population is 'residents of Hong Kong', and the outcome is (for example) 'the percentage who wear face masks when taking care of family members with fever'.

We do not explicitly discuss descriptive studies further, as the necessary ideas are present in the discussion of observational and experimental studies.

4.3 Observational studies

Observational studies are used for RQs with an explanatory variable but no intervention. They are commonly-used, and sometimes are the only type of research design possible.

Definition 4.2 (Observational study) Observational studies study relationships without an intervention.

Definition 4.3 (Condition) The conditions are the values of the explanatory variable that those in the observational study experience, but are not imposed by the researchers.


Example 4.2 (Between-individuals observational study) Consider again this one-tailed, decision-making RQ (based on the ideas in Sect.  2.11 ):

Among Australian teenagers with a common cold, is the average duration of cold symptoms shorter for teens taking a daily dose of echinacea compared to teens taking no medication?

This RQ has a between-individuals comparison, so is a relational RQ. If the researchers do not impose the taking of echinacea (that is, the individuals make this decision themselves), the study is observational. The two conditions are 'taking echinacea', and 'not taking echinacea' (Fig.  4.1 ).


FIGURE 4.1: Observational studies with a relational RQ. The dashed lines indicate steps not under the control of the researchers.

Example 4.3 (Within-individuals repeated-measures RQ; observational study) D. A. Levitsky, Halbmaier, and Mrdjenovic ( 2004 ) recorded the weights of university students at the beginning of university, and then again from the same students \(12\) weeks later. The comparison is within individuals, so this is a repeated-measures (paired) RQ. Since the researchers do not impose anything on the students, there is no intervention (Fig.  4.2 ).

The outcome is the average weight. The response variable is the weight of individuals. The within-individuals comparison is the number of weeks after university started ( \(0\) and \(12\) ).


FIGURE 4.2: Observational studies with a repeated-measures RQ. The dashed lines indicate steps not under the control of the researchers.

Example 4.4 (Correlational RQ; observational study) Poovaragavan et al. ( 2023 ) explored the relationship between time since death, and the concentration of sodium in synovial (knee) fluid. This is a correlational RQ as groups are not being compared. The time since death is the explanatory variable, and the concentration of sodium in synovial fluid is the response variable. The researchers do not impose the time since death, so there is no intervention (Fig.  4.3 ).


FIGURE 4.3: Observational studies with a correlational RQ. The dashed lines indicate steps not under the control of the researchers.

4.4 Experimental studies

Experimental studies , or experiments , are used for RQs with an explanatory variable and an intervention, and are commonly-used. Well-designed experimental studies can establish a cause-and-effect relationship between the response and explanatory variables. However, using experimental studies is not always possible. In general, well-designed experimental studies are more likely to be internally valid than observational studies.

Definition 4.4 (Experiment) Experimental studies (or experiments ) study relationships with an intervention.

Definition 4.5 (Treatments) The treatments are the values of the explanatory variable that the researchers impose upon the individuals in the experimental study.

In an experimental study , the unit of analysis (Def.  2.18 ) is the smallest collection of units of observations that can be randomly allocated to separate treatments.

Example 4.5 (Within-individuals relational RQ; experimental study) Consider this estimation RQ:

For obese men over \(60\) , what is the average increase in heart rate after walking \(400\) ?

This RQ uses a within-individuals comparison (before and after walking \(400\) ) so is a repeated-measures (and paired) RQ. The study has an intervention if researchers impose the \(400\) walk on the subjects (Fig.  4.4 ). The outcome is the average heart rate. The response variable is the heart rate for each individual man.


FIGURE 4.4: Experimental studies with a repeated-measures RQ. The dashed lines indicate steps not under the control of the researchers.

Example 4.6 (Correlational RQ; experimental study) Xu et al. ( 2023 ) studied leaf-drip irrigation, exploring the relationship between the water pressure and flow rate. This is a correlational RQ, where the water pressure is the explanatory variable, and the flow rate is the response variable. The researchers imposed nine different values for water pressure, so there is an intervention (Fig.  4.5 ).


FIGURE 4.5: Experimental studies with a correlational RQ. The dashed lines indicate steps not under the control of the researchers.

Between-individuals experimental studies can be either true experiments (Sect.  4.4.1 ) or quasi-experiments (Sect.  4.4.2 ); see Table  4.1 .

TABLE 4.1: Comparing analytical designs with a between-individuals comparison.
Study type       | Individuals allocated to groups? | Treatments allocated to groups? | Reference
Observational    | No                               | No                              | Sect. 4.3
True experiment  | Yes                              | Yes                             | Sect. 4.4.1
Quasi-experiment | No                               | Yes                             | Sect. 4.4.2

4.4.1 True experimental studies

True experiments are commonly used to answer relational RQs. An example of a true experiment is a randomised controlled trial , often used in drug trials.

Definition 4.6 (True experiment) In a true experiment , the researchers:

  • allocate treatments to groups of individuals (i.e., allocate the values of the explanatory variable to the individuals), and
  • determine who or what individuals are in those groups.

While these steps may not happen explicitly, they happen conceptually.

Example 4.7 (True experiment) The echinacea study (Sect.  2.11 ) could be designed as a true experiment . The researchers would allocate individuals to one of two groups, and then decide which group took echinacea and which group did not (Fig.  4.6 ).

These steps may happen implicitly: researchers may allocate each person at random to one of the two groups (echinacea; no echinacea). This is still a true experiment, since the researchers could decide to switch which group receives echinacea; ultimately, the decision is still made by the researchers.
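For concreteness, the allocation step itself can be as simple as shuffling the participant list and splitting it in half. The sketch below uses invented participant labels; nothing about it is specific to the echinacea study.

    # Sketch: randomly allocating individuals to the two treatment groups.
    import random

    random.seed(42)                                  # fixed seed so the allocation is reproducible
    participants = [f"person_{i:02d}" for i in range(1, 21)]

    random.shuffle(participants)                     # the researchers control the allocation mechanism
    half = len(participants) // 2
    groups = {
        "echinacea":    participants[:half],
        "no echinacea": participants[half:],
    }
    for treatment, members in groups.items():
        print(treatment, members)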


FIGURE 4.6: True experimental studies: researchers allocate individuals to groups, and treatments to groups.

4.4.2 Quasi-experimental studies

Quasi-experiments are similar to true experiments (i.e., they answer relational RQs), except that treatments are allocated to groups that already exist (e.g., groups that are naturally occurring).

Definition 4.7 (Quasi-experiment) In a quasi-experiment , the researchers:

  • allocate treatments to groups of individuals (i.e., allocate the values of the explanatory variable to the individuals), but
  • do not determine who or what individuals are in those groups.

Example 4.8 (Quasi-experiments) The echinacea study (Sect.  2.11 ) could be designed as a quasi-experiment. The researchers could find two existing groups of people (say, from Suburbs A and B), then decide to allocate people in Suburb A to take echinacea, and people in Suburb B to not take echinacea (Fig.  4.7 ).


FIGURE 4.7: Quasi-experimental studies: researchers do not allocate individuals to groups, but do allocate treatments to groups. The dashed lines indicate steps not under the control of the researchers.

Example 4.9 (Quasi-experiments) A researcher wants to examine the effect of an alcohol awareness program (based on M. MacDonald ( 2008 ) ) on the average amount of alcohol consumed per student in a university Orientation Week. She runs the program at University A only, then compares the average amount of alcohol consumed per person at two universities (A and B).

This study is a quasi-experiment since the researcher did not (and cannot) determine the groups: the students (not the researcher) would have chosen University A or University B for many reasons. However, the researcher did decide whether to allocate the program to University A or University B.

4.5 Comparing study types

In experimental studies, researchers create differences in the values of the explanatory variable through allocation, and then note the effect this has on the values of the response variable. In observational studies, researchers observe differences in the values of the explanatory variable, and observe the values of the response variable.

Importantly, only well-designed true experiments can show cause-and-effect. Nonetheless, well-designed observational and quasi-experimental studies can provide evidence to support cause-and-effect conclusions, especially when supported by other evidence. True experimental studies, however, are often not possible for ethical, financial, practical or logistical reasons.

The advantages and disadvantages of each study type are discussed later (Sect.  8.2 ), after these study types are discussed in greater detail in the following chapters.

Example 4.10 (Cause and effect) Many studies report that the bacteria in the gut of people on the autism spectrum are different from the bacteria in the gut of people not on the autism spectrum ( Kang et al. ( 2019 ) , Ho et al. ( 2020 ) ), and suggest the bacteria may contribute to whether a person is autistic. These studies were observational, so the suggestion of a cause-and-effect relationship may be inaccurate.

Other studies ( Yap et al. 2021 ) suggest that people on the autism spectrum are more likely to be 'picky eaters', which contributes to the differences in gut bacteria.

The animation below compares observational, quasi-experimental and true experimental designs.

FIGURE 4.8: The three main research designs.

4.6 Directionality

Analytical research studies (observational; experimental) can be classified by their directionality (Table  4.2 ):

  • Forward direction (Sect.  4.6.1 ): The values of the explanatory variable are obtained, and the study determines what values of the response variable occur in the future. All experimental studies have a forward direction.
  • Backward direction (Sect.  4.6.2 ): The values of the response variable are obtained, then the study determines what values of the explanatory variable occurred in the past.
  • No direction (Sect.  4.6.3 ): The values of the response and explanatory variables are obtained at the same time.

Directionality is important for understanding cause-and-effect relationships. If the explanatory variable occurs before the outcome is observed, a cause-and-effect relationship may be possible. That is, studies with a forward direction are more likely to provide evidence of causality.

TABLE 4.2: Classifying observational studies. (All experimental studies have a forward direction.)
Type               | Explanatory variable     | Response variable
Forward direction  | When study begins        | Determined in the future
Backward direction | Determined from the past | When study begins
No direction       | When study begins        | When study begins

Example 4.11 (Directionality) In South Australia in 1988-1989, \(25\) cases of legionella infections (an unusually high number) were investigated ( O’Connor et al. 2007 ). All \(25\) cases were gardeners.

Researchers compared \(25\) people with legionella infections with \(75\) similar people without the infection. The recent use of potting mix was associated with an increase in the risk of contracting illness.

This study has a backward direction : people were identified with an infection, and then the researchers looked back at past activities.

Research studies are sometimes described as 'prospective' or 'retrospective', but these terms can be misleading ( Ranganathan and Aggarwal 2018 ) and their use is not recommended ( Vandenbroucke et al. 2014 ) .

Experimental studies always have a forward direction. Observational studies may have any directionality, and may be given different names accordingly.

4.6.1 Forward-directional studies

All experimental studies have a forward direction, and include randomised controlled trials (RCTs) and clinical trials .

Observational studies with a forward direction are often called cohort studies . Both experimental studies and cohort studies can be expensive and tricky: tracking individuals (a cohort ) into the future is not always easy, and the ability to track some individuals may be lost ( drop-outs ), since plants or animals may die, people may move or decide to no longer participate, and so on. Forward-directional observational studies:

  • may add support to cause-and-effect conclusions, since the comparison occurs before the outcome (only well-designed experimental studies can establish cause-and-effect).
  • can examine many different outcomes in one study, since the outcome(s) occur in the future.
  • can be problematic for rare outcomes, as the outcome of interest may not appear (or may appear rarely) in the future.

Example 4.12 (Forward study) Chih et al. ( 2018 ) studied dogs and cats who had been recommended to receive intermittent nasogastric tube (NGT) aspiration for up to \(36\) . Some pet owners did not give permission for NGT, while some did; thus, whether the animal received NGT was not determined by the researchers (the study is observational). The researchers then observed whether the animals developed hypochloremic metabolic alkalosis (HCMA) in the next \(36\) .

Since the explanatory variable (whether NGT was used or not) was recorded at the start of the study, and the response variable (whether HCMA was observed or not) was determined within the following \(36\) , this study has a forward direction .

4.6.2 Backward-directional studies

Observational studies with a backward direction are often called case-control studies. The 'cases' are often individuals with a certain disease, and then the controls are those without the disease (see Example  4.11 ). Researchers find individuals with specific values of the response variable (cases and controls), and determine values of the explanatory variable from the past. Case-control studies:

  • only allow one outcome to be studied, since individuals are chosen to be in the study based on the value of the response variable of interest.
  • are useful for rare outcomes: the researchers can purposely select large numbers with the rare outcome of interest.
  • do not effectively eliminate other explanations for the relationship between the response and explanatory variables (called confounding ; Def.  3.7 ).
  • may suffer from selection bias (Sect.  6.7 ), as researchers try to locate individuals with a rare outcome.
  • may suffer from recall bias (Sect.  9.2.2 ) when the individuals are people: accurately recalling the past can be unreliable.

Example 4.13 (Backwards study) Pamphlett ( 2012 ) examined patients with and without sporadic motor neurone disease (SMND), and asked about past exposure to metals.

The response variable (whether or not the respondent had SMND) is assessed when the study begins, and whether or not subjects had exposure to metals (explanatory variable) is determined from the past . This observational study has a backward direction.

4.6.3 Non-directional studies

Non-directional observational studies are called cross-sectional studies. Cross-sectional studies:

  • are good for finding associations between variables (which may or may not be causation).
  • are generally quicker and cheaper than other types of studies.
  • are not useful for studying rare outcomes.

Example 4.14 (Non-directional study) J. Russell et al. ( 2014 ) asked older Australians their opinions of their own food security, and recorded their living arrangements. Individuals' responses to both the response variable and explanatory variable were gathered at the same time. This observational study is non-directional .

4.7 The role of research design

Choosing the type of study is only one part of research design; many other decisions must also be made. The purpose of these decisions is to ensure researchers can confidently study the relationship between the response and explanatory variables ( internal validity ) in the population of interest ( external validity ) by studying one of the many possible samples. This is related to the idea of bias .

Definition 4.8 (Bias) Bias refers to any systematic misrepresentation of the target population by the sample.

Various types of bias are possible, some of which are studied later. Maximising internal and external validity reduces bias. Bias may occur during research design, sample selection (Sect.  6.7 ), data collection (selection bias; Sect.  6.6 ), analysis, or interpretation of results (Chap.  8 ). This book only discusses a small number of possible biases.

Designing a study to maximise internal validity means:

  • identifying what else might influence the values of the response variable, apart from the explanatory variable (Chap.  3 ); and
  • designing the study to be effective (Chaps.  7 ).

In general, experimental studies have better internal validity than observational studies.

Designing a study to maximise external validity means:

  • identifying who or what to study, since the whole population cannot be studied (Chap.  6 ); and
  • determining how many individuals to study. (We need to learn more before we can answer this critical question in Chap.  29 .)

Details of the data collection (Chap.  9 ) and ethical issues (Chap.  5 ) must also be considered.


4.8 Chapter summary

Three types of research studies are: descriptive studies (for studying descriptive RQs), observational studies (for studying relationships without an intervention), and experimental studies (for studying relationships with an intervention).

Observational studies can be classified as having a forward direction (cohort studies), backward direction (case-control studies), or no direction (cross-sectional studies). Experimental studies always have a forward direction. Relational RQs with an intervention can be classified as true experiments or quasi-experiments . Cause-and-effect conclusions can only be made from well-designed true experiments .

Ideally studies should be designed to be internally and externally valid. In general, experimental studies have better internal validity than observational studies.


4.9 Quick review questions

  • A study ( Fraboni et al. 2018 ) examined the 'red-light running behaviour of cyclists in Italy'. This study is most likely to be: (a) an observational study; (b) a quasi-experimental study; or (c) an experimental study.
  • What is the difference between a true experiment and a quasi-experiment?
  • True or false: In a quasi-experiment, the researchers allocate treatments to groups that they cannot manipulate.
  • True or false: True experiments generally have a higher internal validity than observational studies.
  • True or false: Observational studies generally have a higher external validity than quasi-experimental studies.

4.10 Exercises

Answers to odd-numbered exercises are available in App.  E .

Exercise 4.1 Consider this RQ ( McLinn et al. 1994 ) :

In children with acute otitis media, what is the difference in the average duration of symptoms when treated with cefuroxime compared to amoxicillin?
  • Is the comparison a within- or between-individuals comparison?
  • Is this RQ relational, repeated-measures or correlational?
  • Is there likely an intervention?
  • Is the RQ an estimation or decision-making RQ?
  • Is the study observational or experimental? If observational, what is the direction ? If experiment, is this a quasi-experiment or true experiment?

Exercise 4.2 Khair et al. ( 2015 ) studied the time needed for organic waste to turn into compost. For some batches of compost, earthworms were added. In other batches, earthworms were not added to the waste.

One RQ asked whether the composting times for waste with and without earthworms were the same or not.

  • Is there an intervention?

Exercise 4.3 Gonzalez-Fonteboa and Martinez-Abella ( 2007 ) studied recycled concrete beams. Beams were divided into three groups; different loads were then applied to each group, and the shear strength needed to fracture the beams was measured. Is this a quasi-experiment or a true experiment ? Explain.

Exercise 4.4 A research study compared the use of two different education programs to reduce the percentage of patients experiencing ventilator-associated pneumonia (VAP). Paramedics from two cities were chosen to participate. Paramedics in City A were chosen to receive Program 1, and paramedics in the other city to receive Program 2.

Exercise 4.5 Manzano et al. ( 2013 ) compared 'the effectiveness of alternating pressure air mattresses vs. overlays, to prevent pressure ulcers' (p. 2099). Patients were provided with alternating pressure air overlays (in 2001) or alternating pressure air mattresses (in 2006). The number of pressure ulcers were recorded.

This study is experimental, because the researchers provided the mattresses. Is this a true experiment or quasi-experiment? Explain.

Exercise 4.6 Consider this journal article extract ( Sacks et al. ( 2009 ) , p. 859):

We randomly assigned \(811\) overweight adults to one of four diets [...] The diets consisted of similar foods [...] The primary outcome was the change in body weight after \(2\) years in [...] comparisons of low fat versus high fat and average protein versus high protein and in the comparison of highest and lowest carbohydrate content.
  • What is the between -individuals comparison?
  • What is the within -individuals comparison?
  • Is this study observational or experimental? Why?
  • Is this study a quasi-experiment or a true experiment? Why?
  • What are the units of analysis?
  • What are the units of observation?
  • What is the response variable?
  • What is the explanatory variable?

Exercise 4.7 Consider this initial RQ (based on Friedmann and Thomas ( 1985 ) ), that clearly needs refining: 'Are people with pets healthier?'

  • Briefly describe useful and practical definitions for P, O and C.
  • Briefly describe an experimental study to answer the RQ.
  • Briefly describe an observational study to answer the RQ.

Exercise 4.8 Consider this initial RQ, that clearly needs refining: 'Are seeds more likely to sprout when a seed-raising mix is used?'

Experimental vs Quasi-Experimental Design: Which to Choose?

Here’s a table that summarizes the similarities and differences between an experimental and a quasi-experimental study design:

Objective
  • Experimental study (a.k.a. randomized controlled trial): evaluate the effect of an intervention or a treatment.
  • Quasi-experimental study: evaluate the effect of an intervention or a treatment.

How participants get assigned to groups
  • Experimental: random assignment.
  • Quasi-experimental: non-random assignment (participants get assigned according to their choosing or that of the researcher).

Is there a control group?
  • Experimental: yes.
  • Quasi-experimental: not always (although, if present, a control group will provide better evidence for the study results).

Is there any room for confounding?
  • Experimental: no (although post-randomization confounding can still arise in randomized controlled trials).
  • Quasi-experimental: yes (however, statistical techniques can be used to study causal relationships in quasi-experiments).

Level of evidence
  • Experimental: a randomized trial is at the highest level in the hierarchy of evidence.
  • Quasi-experimental: a quasi-experiment is one level below the experimental study in the hierarchy of evidence.

Advantages
  • Experimental: minimizes bias and confounding.
  • Quasi-experimental: can be used in situations where an experiment is not ethically or practically feasible; can work with smaller sample sizes than randomized trials.

Limitations
  • Experimental: high cost (as it generally requires a large sample size); ethical limitations; generalizability issues; sometimes practically infeasible.
  • Quasi-experimental: lower ranking in the hierarchy of evidence, as losing the power of randomization causes the study to be more susceptible to bias and confounding.

What is a quasi-experimental design?

A quasi-experimental design is a non-randomized study design used to evaluate the effect of an intervention. The intervention can be a training program, a policy change or a medical treatment.

Unlike a true experiment, in a quasi-experimental study the choice of who gets the intervention and who doesn’t is not randomized. Instead, the intervention can be assigned to participants according to their choosing or that of the researcher, or by using any method other than randomness.

Having a control group is not required, but if present, it provides a higher level of evidence for the relationship between the intervention and the outcome.

(for more information, I recommend my other article: Understand Quasi-Experimental Design Through an Example ) .

Examples of quasi-experimental designs include:

  • One-Group Posttest Only Design
  • Static-Group Comparison Design
  • One-Group Pretest-Posttest Design
  • Separate-Sample Pretest-Posttest Design

What is an experimental design?

An experimental design is a randomized study design used to evaluate the effect of an intervention. In its simplest form, the participants will be randomly divided into 2 groups:

  • A treatment group: where participants receive the new intervention whose effect we want to study.
  • A control or comparison group: where participants do not receive any intervention at all (or receive some standard intervention).

Randomization ensures that each participant has the same chance of receiving the intervention. Its objective is to equalize the 2 groups, and therefore, any observed difference in the study outcome afterwards will only be attributed to the intervention – i.e. it removes confounding.
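A quick simulation can show what "equalize the 2 groups" means in practice: with a reasonably large sample, a confounder such as age ends up with nearly the same average in both groups even though no one matched on it. The numbers below are simulated for illustration only.

    # Sketch: randomization roughly balances a confounder (age) across groups.
    import numpy as np

    rng = np.random.default_rng(1)
    age = rng.normal(45, 12, 400)               # a potential confounder
    assignment = rng.permutation(400) < 200     # random half to treatment, half to control

    print(f"Mean age, treatment group: {age[assignment].mean():.1f}")
    print(f"Mean age, control group:   {age[~assignment].mean():.1f}")
    # The two means are close, and unmeasured confounders balance in the same way.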

(for more information, I recommend my other article: Purpose and Limitations of Random Assignment ).

Examples of experimental designs include:

  • Posttest-Only Control Group Design
  • Pretest-Posttest Control Group Design
  • Solomon Four-Group Design
  • Matched Pairs Design
  • Randomized Block Design

When to choose an experimental design over a quasi-experimental design?

Although many statistical techniques can be used to deal with confounding in a quasi-experimental study, in practice, randomization is still the best tool we have to study causal relationships.

Another problem with quasi-experiments is the natural progression of the disease or condition under study. When studying the effect of an intervention over time, one should consider natural changes, because these can be mistaken for changes in outcome that are caused by the intervention. Having a well-chosen control group helps deal with this issue.

So, if losing the element of randomness seems like an unwise step down in the hierarchy of evidence, why would we ever want to do it?

This is what we’re going to discuss next.

When to choose a quasi-experimental design over a true experiment?

The issue with randomization is that it is not always achievable.

So here are some cases where using a quasi-experimental design makes more sense than using an experimental one:

  • If being in one group is believed to be harmful for the participants , either because the intervention is harmful (ex. randomizing people to smoking), or the intervention has a questionable efficacy, or on the contrary it is believed to be so beneficial that it would be unethical to put people in the control group (ex. randomizing people to receiving an operation).
  • In cases where interventions act on a group of people in a given location , it becomes difficult to adequately randomize subjects (ex. an intervention that reduces pollution in a given area).
  • When working with small sample sizes , as randomized controlled trials require a large sample size to account for heterogeneity among subjects (i.e. to evenly distribute confounding variables between the intervention and control groups).

Further reading

  • Statistical Software Popularity in 40,582 Research Papers
  • Checking the Popularity of 125 Statistical Tests and Models
  • Objectives of Epidemiology (With Examples)
  • 12 Famous Epidemiologists and Why


Neag School of Education

Educational Research Basics by Del Siegle

Types of Research

How do we know something exists? There are a number of ways of knowing…

  • Sensory Experience
  • Agreement with others
  • Expert Opinion
  • Scientific Method (we’re using this one)

The Scientific Process (replicable)

  • Identify a problem
  • Clarify the problem
  • Determine what data would help solve the problem
  • Organize the data
  • Interpret the results

General Types of Educational Research

  • Descriptive — survey, historical, content analysis, qualitative (ethnographic, narrative, phenomenological, grounded theory, and case study)
  • Associational — correlational, causal-comparative
  • Intervention — experimental, quasi-experimental, action research (sort of)


Researchers Sometimes Have a Category Called Group Comparison

  • Ex Post Facto (Causal-Comparative): GROUPS ARE ALREADY FORMED
  • Experimental: RANDOM ASSIGNMENT OF INDIVIDUALS
  • Quasi-Experimental: RANDOM ASSIGNMENT OF GROUPS (oversimplified, but fine for now)

General Format of a Research Publication

  • Background of the Problem (ending with a problem statement) — Why is this important to study? What is the problem being investigated?
  • Review of Literature — What do we already know about this problem or situation?
  • Methodology (participants, instruments, procedures) — How was the study conducted? Who were the participants? What data were collected and how?
  • Analysis — What are the results? What did the data indicate?
  • Results — What are the implications of these results? How do they agree or disagree with previous research? What do we still need to learn? What are the limitations of this study?

Del Siegle, PhD

Last modified 6/18/2019


7.3 Quasi-Experimental Research

Learning Objectives

  • Explain what quasi-experimental research is and distinguish it clearly from both experimental and correlational research.
  • Describe three different types of quasi-experimental research designs (nonequivalent groups, pretest-posttest, and interrupted time series) and identify examples of each one.

The prefix quasi means “resembling.” Thus quasi-experimental research is research that resembles experimental research but is not true experimental research. Although the independent variable is manipulated, participants are not randomly assigned to conditions or orders of conditions (Cook & Campbell, 1979). Because the independent variable is manipulated before the dependent variable is measured, quasi-experimental research eliminates the directionality problem. But because participants are not randomly assigned—making it likely that there are other differences between conditions—quasi-experimental research does not eliminate the problem of confounding variables. In terms of internal validity, therefore, quasi-experiments are generally somewhere between correlational studies and true experiments.

Quasi-experiments are most likely to be conducted in field settings in which random assignment is difficult or impossible. They are often conducted to evaluate the effectiveness of a treatment—perhaps a type of psychotherapy or an educational intervention. There are many different kinds of quasi-experiments, but we will discuss just a few of the most common ones here.

Nonequivalent Groups Design

Recall that when participants in a between-subjects experiment are randomly assigned to conditions, the resulting groups are likely to be quite similar. In fact, researchers consider them to be equivalent. When participants are not randomly assigned to conditions, however, the resulting groups are likely to be dissimilar in some ways. For this reason, researchers consider them to be nonequivalent. A nonequivalent groups design , then, is a between-subjects design in which participants have not been randomly assigned to conditions.

Imagine, for example, a researcher who wants to evaluate a new method of teaching fractions to third graders. One way would be to conduct a study with a treatment group consisting of one class of third-grade students and a control group consisting of another class of third-grade students. This would be a nonequivalent groups design because the students are not randomly assigned to classes by the researcher, which means there could be important differences between them. For example, the parents of higher achieving or more motivated students might have been more likely to request that their children be assigned to Ms. Williams’s class. Or the principal might have assigned the “troublemakers” to Mr. Jones’s class because he is a stronger disciplinarian. Of course, the teachers’ styles, and even the classroom environments, might be very different and might cause different levels of achievement or motivation among the students. If at the end of the study there was a difference in the two classes’ knowledge of fractions, it might have been caused by the difference between the teaching methods—but it might have been caused by any of these confounding variables.

Of course, researchers using a nonequivalent groups design can take steps to ensure that their groups are as similar as possible. In the present example, the researcher could try to select two classes at the same school, where the students in the two classes have similar scores on a standardized math test and the teachers are the same sex, are close in age, and have similar teaching styles. Taking such steps would increase the internal validity of the study because it would eliminate some of the most important confounding variables. But without true random assignment of the students to conditions, there remains the possibility of other important confounding variables that the researcher was not able to control.
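To make the "as similar as possible" idea concrete, here is a small sketch of matching: each student in the treatment class is paired with the not-yet-matched control student whose standardized pretest score is closest. The names and scores are invented, and real studies often match on several variables at once, but the greedy pairing logic is the core idea.

    # Sketch: greedy one-to-one matching on an invented pretest score.
    treatment = {"Ann": 72, "Ben": 65, "Cal": 80, "Dee": 58}
    control   = {"Eli": 70, "Fay": 61, "Gus": 79, "Hal": 55, "Ivy": 90}

    available = dict(control)
    pairs = []
    for name, score in sorted(treatment.items(), key=lambda kv: kv[1]):
        # pick the unmatched control student with the nearest pretest score
        match = min(available, key=lambda c: abs(available[c] - score))
        pairs.append((name, match, abs(available[match] - score)))
        del available[match]

    for t, c, gap in pairs:
        print(f"{t} (treatment) matched with {c} (control), pretest gap = {gap}")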

Pretest-Posttest Design

In a pretest-posttest design , the dependent variable is measured once before the treatment is implemented and once after it is implemented. Imagine, for example, a researcher who is interested in the effectiveness of an antidrug education program on elementary school students’ attitudes toward illegal drugs. The researcher could measure the attitudes of students at a particular elementary school during one week, implement the antidrug program during the next week, and finally, measure their attitudes again the following week. The pretest-posttest design is much like a within-subjects experiment in which each participant is tested first under the control condition and then under the treatment condition. It is unlike a within-subjects experiment, however, in that the order of conditions is not counterbalanced because it typically is not possible for a participant to be tested in the treatment condition first and then in an “untreated” control condition.

If the average posttest score is better than the average pretest score, then it makes sense to conclude that the treatment might be responsible for the improvement. Unfortunately, one often cannot conclude this with a high degree of certainty because there may be other explanations for why the posttest scores are better. One category of alternative explanations goes under the name of history . Other things might have happened between the pretest and the posttest. Perhaps an antidrug program aired on television and many of the students watched it, or perhaps a celebrity died of a drug overdose and many of the students heard about it. Another category of alternative explanations goes under the name of maturation . Participants might have changed between the pretest and the posttest in ways that they were going to anyway because they are growing and learning. If it were a yearlong program, participants might become less impulsive or better reasoners and this might be responsible for the change.

Another alternative explanation for a change in the dependent variable in a pretest-posttest design is regression to the mean . This refers to the statistical fact that an individual who scores extremely on a variable on one occasion will tend to score less extremely on the next occasion. For example, a bowler with a long-term average of 150 who suddenly bowls a 220 will almost certainly score lower in the next game. Her score will “regress” toward her mean score of 150. Regression to the mean can be a problem when participants are selected for further study because of their extreme scores. Imagine, for example, that only students who scored especially low on a test of fractions are given a special training program and then retested. Regression to the mean all but guarantees that their scores will be higher even if the training program has no effect. A closely related concept—and an extremely important one in psychological research—is spontaneous remission . This is the tendency for many medical and psychological problems to improve over time without any form of treatment. The common cold is a good example. If one were to measure symptom severity in 100 common cold sufferers today, give them a bowl of chicken soup every day, and then measure their symptom severity again in a week, they would probably be much improved. This does not mean that the chicken soup was responsible for the improvement, however, because they would have been much improved without any treatment at all. The same is true of many psychological problems. A group of severely depressed people today is likely to be less depressed on average in 6 months. In reviewing the results of several studies of treatments for depression, researchers Michael Posternak and Ivan Miller found that participants in waitlist control conditions improved an average of 10 to 15% before they received any treatment at all (Posternak & Miller, 2001). Thus one must generally be very cautious about inferring causality from pretest-posttest designs.
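A short simulation can make regression to the mean concrete. In the sketch below (hypothetical numbers, not data from the text), each observed score is a stable ability plus occasion-specific noise; selecting students for especially low first scores and simply retesting them produces an apparent improvement even though nothing was done between the two tests.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 1000

# Hypothetical model: an observed test score is stable ability plus noise.
true_ability = rng.normal(70, 8, n)
test1 = true_ability + rng.normal(0, 8, n)
test2 = true_ability + rng.normal(0, 8, n)   # retest, with no training at all

# Select only the students who scored especially low the first time.
low = test1 < 55
print(f"Selected students, test 1 mean: {test1[low].mean():.1f}")
print(f"Same students, test 2 mean:     {test2[low].mean():.1f}")
# The retest mean is noticeably higher with no intervention: regression to the mean.
```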

Does Psychotherapy Work?

Early studies on the effectiveness of psychotherapy tended to use pretest-posttest designs. In a classic 1952 article, researcher Hans Eysenck summarized the results of 24 such studies showing that about two thirds of patients improved between the pretest and the posttest (Eysenck, 1952). But Eysenck also compared these results with archival data from state hospital and insurance company records showing that similar patients recovered at about the same rate without receiving psychotherapy. This suggested to Eysenck that the improvement that patients showed in the pretest-posttest studies might be no more than spontaneous remission. Note that Eysenck did not conclude that psychotherapy was ineffective. He merely concluded that there was no evidence that it was effective, and he wrote of “the necessity of properly planned and executed experimental studies into this important field” (p. 323). You can read the entire article here:

http://psychclassics.yorku.ca/Eysenck/psychotherapy.htm

Fortunately, many other researchers took up Eysenck’s challenge, and by 1980 hundreds of experiments had been conducted in which participants were randomly assigned to treatment and control conditions, and the results were summarized in a classic book by Mary Lee Smith, Gene Glass, and Thomas Miller (Smith, Glass, & Miller, 1980). They found that overall psychotherapy was quite effective, with about 80% of treatment participants improving more than the average control participant. Subsequent research has focused more on the conditions under which different types of psychotherapy are more or less effective.
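The figure of about 80% of treatment participants improving more than the average control participant is one way of expressing an effect size (sometimes called Cohen's U3). Under the common assumption of normally distributed outcomes, that percentage equals the standard normal cumulative probability of the standardized mean difference d. The conversion below is an illustrative sketch, not a calculation reported by Smith, Glass, and Miller.

```python
from scipy.stats import norm

# Share of treatment participants scoring above the average control participant,
# assuming normally distributed outcomes, is Phi(d). Working backwards from ~80%:
d_implied = norm.ppf(0.80)
print(f"Implied standardized mean difference: d ~= {d_implied:.2f}")  # about 0.84

# And in the other direction: what a given d implies.
for d in (0.2, 0.5, 0.8):
    print(f"d = {d}: {norm.cdf(d):.0%} of treatment participants above the control mean")
```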

Hans Eysenck

In a classic 1952 article, researcher Hans Eysenck pointed out the shortcomings of the simple pretest-posttest design for evaluating the effectiveness of psychotherapy.

Wikimedia Commons – CC BY-SA 3.0.

Interrupted Time Series Design

A variant of the pretest-posttest design is the interrupted time-series design. A time series is a set of measurements taken at intervals over a period of time. For example, a manufacturing company might measure its workers’ productivity each week for a year. In an interrupted time-series design, a time series like this is “interrupted” by a treatment. In one classic example, the treatment was the reduction of the work shifts in a factory from 10 hours to 8 hours (Cook & Campbell, 1979). Because productivity increased rather quickly after the shortening of the work shifts, and because it remained elevated for many months afterward, the researcher concluded that the shortening of the shifts caused the increase in productivity. Notice that the interrupted time-series design is like a pretest-posttest design in that it includes measurements of the dependent variable both before and after the treatment. It is unlike the pretest-posttest design, however, in that it includes multiple pretest and posttest measurements.

Figure 7.5 “A Hypothetical Interrupted Time-Series Design” shows data from a hypothetical interrupted time-series study. The dependent variable is the number of student absences per week in a research methods course. The treatment is that the instructor begins publicly taking attendance each day so that students know that the instructor is aware of who is present and who is absent. The top panel of Figure 7.5 “A Hypothetical Interrupted Time-Series Design” shows how the data might look if this treatment worked. There is a consistently high number of absences before the treatment, and there is an immediate and sustained drop in absences after the treatment. The bottom panel of Figure 7.5 “A Hypothetical Interrupted Time-Series Design” shows how the data might look if this treatment did not work. On average, the number of absences after the treatment is about the same as the number before. This figure also illustrates an advantage of the interrupted time-series design over a simpler pretest-posttest design. If there had been only one measurement of absences before the treatment at Week 7 and one afterward at Week 8, then it would have looked as though the treatment were responsible for the reduction. The multiple measurements both before and after the treatment suggest that the reduction between Weeks 7 and 8 is nothing more than normal week-to-week variation.

Figure 7.5 A Hypothetical Interrupted Time-Series Design. The top panel shows data that suggest that the treatment caused a reduction in absences. The bottom panel shows data that suggest that it did not.
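The advantage just described, many pretest and posttest observations instead of one of each, can be sketched with made-up absence counts (these are not the data plotted in Figure 7.5). In both scenarios below the count drops from Week 7 to Week 8, so a single pretest-posttest comparison would look like an improvement either way; only the change in the mean level across all 14 weeks separates a real, sustained effect from ordinary week-to-week variation.

```python
import numpy as np

# Made-up weekly absence counts; weeks 1-7 are before the treatment, 8-14 after.
works = np.array([6, 7, 5, 8, 6, 7, 6, 2, 1, 3, 2, 1, 2, 2])      # sustained drop
no_effect = np.array([6, 7, 5, 8, 6, 7, 6, 3, 7, 6, 8, 5, 7, 6])  # week 8 just happens to be low

for label, series in [("treatment works", works), ("no effect", no_effect)]:
    pre, post = series[:7], series[7:]
    week7_to_8 = int(post[0] - pre[-1])
    level_shift = post.mean() - pre.mean()
    print(f"{label:15s}  week 7 -> 8 change: {week7_to_8:+d}   change in mean level: {level_shift:+.1f}")
```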

Combination Designs

A type of quasi-experimental design that is generally better than either the nonequivalent groups design or the pretest-posttest design is one that combines elements of both. There is a treatment group that is given a pretest, receives a treatment, and then is given a posttest. But at the same time there is a control group that is given a pretest, does not receive the treatment, and then is given a posttest. The question, then, is not simply whether participants who receive the treatment improve but whether they improve more than participants who do not receive the treatment.

Imagine, for example, that students in one school are given a pretest on their attitudes toward drugs, then are exposed to an antidrug program, and finally are given a posttest. Students in a similar school are given the pretest, not exposed to an antidrug program, and finally are given a posttest. Again, if students in the treatment condition become more negative toward drugs, this could be an effect of the treatment, but it could also be a matter of history or maturation. If it really is an effect of the treatment, then students in the treatment condition should become more negative than students in the control condition. But if it is a matter of history (e.g., news of a celebrity drug overdose) or maturation (e.g., improved reasoning), then students in the two conditions would be likely to show similar amounts of change. This type of design does not completely eliminate the possibility of confounding variables, however. Something could occur at one of the schools but not the other (e.g., a student drug overdose), so students at the first school would be affected by it while students at the other school would not.
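The logic of this combination design is essentially a difference-in-differences comparison: work out how much each group changed from pretest to posttest, then compare the two changes. A minimal sketch with hypothetical attitude scores (not values from the text) follows.

```python
# Hypothetical mean attitude scores (higher = more negative toward drugs).
treat_pre, treat_post = 3.0, 4.2   # school with the antidrug program
ctrl_pre, ctrl_post = 3.1, 3.6     # comparison school, no program

# Both schools changed (history and maturation can move everyone),
# so the question is whether the treatment school changed *more*.
treat_change = treat_post - treat_pre
ctrl_change = ctrl_post - ctrl_pre
print(f"Treatment-school change: {treat_change:.1f}")
print(f"Control-school change:   {ctrl_change:.1f}")
print(f"Difference in changes (the design's treatment estimate): {treat_change - ctrl_change:.1f}")
```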

Finally, if participants in this kind of design are randomly assigned to conditions, it becomes a true experiment rather than a quasi experiment. In fact, it is the kind of experiment that Eysenck called for—and that has now been conducted many times—to demonstrate the effectiveness of psychotherapy.

Key Takeaways

  • Quasi-experimental research involves the manipulation of an independent variable without the random assignment of participants to conditions or orders of conditions. Among the important types are nonequivalent groups designs, pretest-posttest, and interrupted time-series designs.
  • Quasi-experimental research eliminates the directionality problem because it involves the manipulation of the independent variable. It does not eliminate the problem of confounding variables, however, because it does not involve random assignment to conditions. For these reasons, quasi-experimental research is generally higher in internal validity than correlational studies but lower than true experiments.
  • Practice: Imagine that two college professors decide to test the effect of giving daily quizzes on student performance in a statistics course. They decide that Professor A will give quizzes but Professor B will not. They will then compare the performance of students in their two sections on a common final exam. List five other variables that might differ between the two sections that could affect the results.

Discussion: Imagine that a group of obese children is recruited for a study in which their weight is measured, then they participate for 3 months in a program that encourages them to be more active, and finally their weight is measured again. Explain how each of the following might affect the results:

  • regression to the mean
  • spontaneous remission

Cook, T. D., & Campbell, D. T. (1979). Quasi-experimentation: Design & analysis issues in field settings. Boston, MA: Houghton Mifflin.

Eysenck, H. J. (1952). The effects of psychotherapy: An evaluation. Journal of Consulting Psychology, 16, 319–324.

Posternak, M. A., & Miller, I. (2001). Untreated short-term course of major depression: A meta-analysis of outcomes from studies using wait-list control groups. Journal of Affective Disorders, 66, 139–146.

Smith, M. L., Glass, G. V., & Miller, T. I. (1980). The benefits of psychotherapy. Baltimore, MD: Johns Hopkins University Press.

Research Methods in Psychology Copyright © 2016 by University of Minnesota is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License , except where otherwise noted.


Research Methods Help Guide


Introduction

Observational, correlational, experimental, quasi-experimental.


Research types on this page are modeled after those listed in the Introduction to Measurement and Statistics website created by Dr. Linda M. Woolf, Professor of Psychology at Webster University. The definitions are based on Dr. Woolf's explanations. Go to Dr. Woolf's website for much more information as well as practice pages.

Observational: Researchers observe participants but do not attempt to influence them.

Correlational: Researchers examine how two or more variables are related. It is not possible to tell which variable is affecting the other(s). As you have probably heard, "correlation is not causation."

Experimental: Researchers control conditions to examine how one variable affects the other(s). Participants are randomly assigned to groups (at least two): a control group that does not experience or receive the variable being examined and an experimental group that does. The groups are compared to examine the effect of the variable being investigated. In experiments, causation can be explored.

Quasi-experimental: A quasi-experiment is like an experiment, but the groups cannot be randomly assigned by the researcher; quasi-experiments use pre-existing groups.
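The operational difference between the two designs comes down to who forms the groups. The small sketch below, using hypothetical participant labels, contrasts random assignment by the researcher with taking pre-existing groups as they are.

```python
import random

participants = [f"P{i:02d}" for i in range(1, 21)]  # 20 hypothetical volunteers

# True experiment: the researcher randomly assigns participants to conditions.
random.seed(42)
shuffled = participants[:]
random.shuffle(shuffled)
experimental_group = shuffled[:10]   # receives the variable being examined
control_group = shuffled[10:]        # does not receive it

# Quasi-experiment: the groups already exist (for example, two intact classrooms),
# so the researcher takes them as they are instead of assigning members.
classroom_a = participants[:10]      # pre-existing group that receives the program
classroom_b = participants[10:]      # pre-existing comparison group

print("Randomly assigned experimental group:", experimental_group)
print("Pre-existing treatment classroom:    ", classroom_a)
```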









Nursing Resources: Types of Research within Qualitative and Quantitative


Aspects of Quantitative (Empirical) Research

  • Statement of purpose: what was studied and why.
  • Description of the methodology (experimental group, control group, variables, test conditions, test subjects, etc.).
  • Results (usually numeric in form, presented in tables or graphs, often with statistical analysis).
  • Conclusions drawn from the results.
  • Footnotes, a bibliography, author credentials.

Hint: the abstract (summary) of an article is the first place to check for most of the above features.  The abstract appears both in the database you search and at the top of the actual article.

Types of Quantitative Research

There are four (4) main types of quantitative designs: descriptive, correlational, quasi-experimental, and experimental.

samples.jbpub.com/9780763780586/80586_CH03_Keele.pdf

Types of Qualitative Research

 

Case study: Attempts to shed light on a phenomenon by studying in depth a single case example of the phenomenon. The case can be an individual person, an event, a group, or an institution.

Grounded theory: To understand the social and psychological processes that characterize an event or situation.

Phenomenology: Describes the structures of experience as they present themselves to consciousness, without recourse to theory, deduction, or assumptions from other disciplines.

Ethnography: Focuses on the sociology of meaning through close field observation of sociocultural phenomena. Typically, the ethnographer focuses on a community.

Historical research: Systematic collection and objective evaluation of data related to past occurrences in order to test hypotheses concerning causes, effects, or trends of these events that may help to explain present events and anticipate future events. (Gay, 1996)

http://wilderdom.com/OEcourses/PROFLIT/Class6Qualitative1.htm





Effectiveness of Training in Evidence-Based Practice on the Development of Communicative Skills in Nursing Students: A Quasi-Experimental Design



Sample characteristics of the participating nursing students (N = 153):

  • Age in years, mean (SD): 24.2 (8.05)
  • Gender, % (N): Female 78.4% (120); Male 21.6% (33)
  • Method of admission, % (N): High school 77.1% (118); Vocational training 14.4% (22); Special admission 8.5% (13)
  • Other education, % (N): None 75.2% (115); Vocational training 19% (29); 5-year bachelor's 0.7% (1); 4-year bachelor's 3.3% (5); Master's 2% (3)
  • Class attendance, % (N): <24% 3.3% (5); 24–49% 9.8% (15); 50–74% 30.7% (47); >75% 56.2% (86)
  • EBP training, % (N): None 88.2% (135); <40 h 5.2% (8); 40–150 h 4.6% (7); >150 h 2% (3)
  • Training on scientific methodology, % (N): None 91.5% (140); <40 h 3.9% (6); 40–150 h 3.9% (6); >150 h 0.7% (1)
  • Reading of articles per month, % (N): None 2.0% (3); 1–3 articles 34.0% (52); >3 articles 64.1% (98)
  • Twitter/Facebook/IG, % (N): Yes 74.5% (114); No 25.5% (39)
  • Reading of social networks, % (N): Never 9.8% (15); Occasionally 32.0% (49); Monthly 13.1% (20); Weekly 33.3% (51); Daily 11.8% (18)
Pre/post comparison of evidence-based practice (EBP) and communication skills (CS) scores:

| Variable | Pre, Mean (SD) | Post, Mean (SD) | Difference in Means | 95% CI Lower Limit | 95% CI Upper Limit | Cohen's d | p |
|---|---|---|---|---|---|---|---|
| Attitude EBP | 3.78 (0.24) | 4.36 (0.43) | −0.585 | −0.655 | −0.51570 | −1.3445 | <0.001 |
| EBP Skills | 3.10 (0.28) | 4.00 (0.56) | −0.904 | −1.008 | −0.80051 | −1.3936 | <0.001 |
| EBP Knowledge | 2.88 (0.38) | 4.20 (0.48) | −1.319 | −1.416 | −1.22256 | −2.1810 | <0.001 |
| Total EBP | 3.25 (0.17) | 4.19 (0.42) | −0.936 | −1.007 | −0.86544 | −2.1140 | <0.001 |
| CS Informative Comm. | 30.78 (3.11) | 31.32 (3.27) | −0.536 | −1.063 | −0.00905 | −0.1625 | 0.046 |
| CS Empathy | 26.66 (2.90) | 26.50 (3.17) | 0.163 | −0.266 | 0.59326 | 0.0607 | 0.454 |
| CS Respect | 16.93 (1.41) | 16.77 (1.65) | 0.163 | −0.113 | 0.43972 | 0.0945 | 0.245 |
| CS Assertiveness | 15.84 (3.25) | 16.44 (3.38) | −0.601 | −1.100 | −0.10245 | −0.1925 | 0.018 |
| Total CS | 90.22 (8.14) | 91.03 (8.78) | −0.810 | −2.054 | 0.43283 | −0.1041 | 0.200 |
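For readers who want to see how numbers like those in the table above are typically produced, here is a minimal sketch using made-up paired scores (the study's raw data are not reported here). It computes the mean pre/post difference, a 95% confidence interval, a paired t-test, and one common version of Cohen's d for paired data; the study's own analysis may have used different software or a different d formula.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)

# Hypothetical paired pre/post scores for one outcome (150 simulated students).
pre = rng.normal(3.2, 0.3, 150)
post = pre + rng.normal(0.9, 0.5, 150)

diff = post - pre
t_stat, p_value = stats.ttest_rel(post, pre)                 # paired t-test
ci = stats.t.interval(0.95, len(diff) - 1,
                      loc=diff.mean(), scale=stats.sem(diff))
cohens_d = diff.mean() / diff.std(ddof=1)                    # mean of differences / SD of differences

print(f"Mean difference: {diff.mean():.3f}, 95% CI {ci[0]:.3f} to {ci[1]:.3f}")
print(f"Cohen's d: {cohens_d:.2f}, t({len(diff) - 1}) = {t_stat:.2f}, p = {p_value:.3g}")
```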
| | Informative Communication, r (p) | Empathy, r (p) | Respect, r (p) | Assertiveness, r (p) | Total CS, r (p) |
|---|---|---|---|---|---|
| Age | 0.046 (0.572) | 0.142 (0.081) | 0.017 (0.839) | 0.050 (0.543) | 0.091 (0.266) |
| Attitude EBP | 0.491 (<0.001 ***) | 0.381 (<0.001 ***) | 0.378 (<0.001 ***) | 0.168 (0.038 *) | 0.456 (<0.001 ***) |
| EBP Skills | 0.461 (<0.001 ***) | 0.332 (<0.001 ***) | 0.395 (<0.001 ***) | 0.081 (0.321) | 0.397 (<0.001 ***) |
| EBP Knowledge | 0.469 (<0.001 ***) | 0.343 (<0.001 ***) | 0.406 (<0.001 ***) | 0.165 (0.041 *) | 0.439 (<0.001 ***) |
| Total EBP | 0.553 (<0.001 ***) | 0.410 (<0.001 ***) | 0.460 (<0.001 ***) | 0.157 (0.053) | 0.501 (<0.001 ***) |
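Correlation tables of this kind pair each Pearson coefficient with its p-value. A minimal sketch with pandas and SciPy, using invented column names and values rather than the study variables:

```python
import pandas as pd
from scipy import stats

# Hypothetical dataset (placeholder values, not the study data).
df = pd.DataFrame({
    "age":            [21, 23, 22, 30, 25, 28, 24, 27],
    "ebp_attitude":   [3.5, 3.9, 3.7, 4.1, 3.8, 4.3, 3.6, 4.0],
    "informative_cs": [29, 31, 30, 33, 31, 34, 30, 32],
})

# Pearson's r and its p-value for each predictor against one outcome column.
for predictor in ["age", "ebp_attitude"]:
    r, p = stats.pearsonr(df[predictor], df["informative_cs"])
    print(f"{predictor}: r = {r:.3f}, p = {p:.3f}")
```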
| Variable | Category | Informative Comm., Mean (SD) | p | Empathy, Mean (SD) | p | Respect, Mean (SD) | p | Assertiveness, Mean (SD) | p | Total CS, Mean (SD) | p |
|---|---|---|---|---|---|---|---|---|---|---|---|
| Gender | Female | 31.5 (3.03) | 0.111 | 26.8 (3.07) | 0.010 * | 16.9 (1.57) | 0.086 | 16.2 (3.29) | 0.100 | 91.5 (8.34) | 0.228 |
| | Male | 30.5 (4.00) | | 25.2 (3.26) | | 16.3 (1.88) | | 17.3 (3.64) | | 89.4 (10.21) | |
| Method of admission | High school | 31.2 (3.41) | 0.508 | 26.3 (3.15) | 0.021 * | 16.8 (1.67) | 0.516 | 16.2 (3.43) | 0.297 | 90.6 (9.08) | 0.131 |
| | Vocational training | 31.2 (2.75) | | 26.2 (3.50) | | 16.5 (1.77) | | 17.2 (3.16) | | 91.2 (7.81) | |
| | Special admission | 32.2 (2.86) | | 28.4 (2.26) | | 17.2 (1.34) | | 17.2 (3.32) | | 95.0 (6.92) | |
| Other education | None | 31.3 (3.39) | 0.997 | 26.5 (3.11) | 0.866 | 16.8 (1.69) | 0.760 | 16.3 (3.54) | 0.809 | 90.8 (9.17) | 0.921 |
| | Vocational training | 31.3 (2.94) | | 26.9 (3.40) | | 16.8 (1.63) | | 17.0 (3.13) | | 92.0 (7.57) | |
| | 5-year bachelor's | 31.5 (3.78) | | 25.8 (3.81) | | 16.1 (1.47) | | 16.5 (2.51) | | 90.0 (9.75) | |
| | 4-year bachelor's | 31.7 (1.15) | | 26.0 (3.61) | | 17.3 (0.57) | | 16.3 (0.57) | | 91.3 (4.51) | |
| Previous EBP training | None | 31.3 (3.34) | 0.165 | 26.4 (3.21) | 0.449 | 16.7 (1.70) | 0.337 | 16.5 (3.48) | 0.875 | 91.0 (9.10) | 0.740 |
| | <40 h | 30.3 (2.71) | | 27.9 (2.41) | | 16.9 (1.24) | | 16.3 (3.01) | | 91.3 (5.57) | |
| | 40–150 h | 33.4 (2.37) | | 26.1 (3.71) | | 17.4 (0.78) | | 16.9 (1.95) | | 93.9 (7.26) | |
| | >150 h | 30.3 (1.52) | | 26.0 (1.00) | | 16.5 (2.00) | | 15.0 (3.60) | | 87.3 (1.52) | |
| Previous research training | None | 31.3 (3.31) | 0.441 | 26.5 (3.20) | 0.376 | 16.8 (1.69) | 0.927 | 16.4 (3.42) | 0.987 | 90.9 (8.91) | 0.842 |
| | <40 h | 31.0 (3.63) | | 28.2 (2.04) | | 16.8 (1.33) | | 16.7 (4.08) | | 92.7 (9.37) | |
| | ≥40 h | 32.8 (2.11) | | 25.8 (3.23) | | 17.0 (1.15) | | 16.4 (2.37) | | 92.1 (6.38) | |
| Reading of articles in the previous month | None | 29.3 (4.16) | 0.577 | 27.7 (3.21) | 0.721 | 17.0 (1.73) | 0.912 | 16.3 (0.57) | 0.851 | 90.3 (9.01) | 0.991 |
| | 1–3 articles | 31.1 (3.38) | | 26.7 (3.22) | | 16.7 (1.68) | | 16.6 (3.18) | | 91.1 (8.62) | |
| | >3 articles | 31.5 (3.19) | | 26.4 (3.16) | | 16.8 (1.64) | | 16.3 (3.55) | | 91.0 (8.95) | |
| Consultation of Twitter/Facebook/IG | Yes | 31.7 (3.15) | 0.006 | 26.9 (2.97) | 0.013 | 16.9 (1.60) | 0.032 * | 16.6 (3.31) | 0.344 | 92.1 (8.08) | 0.016 |
| | No | 30.1 (3.36) | | 25.4 (3.52) | | 16.3 (1.72) | | 16.0 (3.62) | | 87.8 (9.99) | |
| Consultation of social networks | Never | 30.2 (3.69) | 0.268 | 24.9 (3.64) | 0.377 | 16.0 (1.60) | 0.081 | 15.4 (4.45) | 0.242 | 86.5 (12.06) | 0.444 |
| | Occasionally | 31.2 (2.74) | | 26.7 (2.91) | | 16.8 (1.39) | | 16.1 (3.18) | | 90.9 (7.38) | |
| | Monthly | 30.3 (4.31) | | 26.7 (3.48) | | 16.4 (1.90) | | 16.9 (2.46) | | 90.2 (8.10) | |
| | Weekly | 32.0 (2.87) | | 26.9 (2.91) | | 17.2 (1.36) | | 16.3 (3.61) | | 92.4 (8.15) | |
| | Daily | 31.7 (3.77) | | 25.9 (3.67) | | 16.5 (2.46) | | 18.2 (2.80) | | 92.3 (11.02) | |
| Model | Predictor | R² | B (non-standardized) | Std. Error | Beta (standardized) | t | p | 95% CI for B, Lower Limit | 95% CI for B, Upper Limit |
|---|---|---|---|---|---|---|---|---|---|
| CS Informative Comm. | Attitude EBP | 0.313 | 2.602 | 0.577 | 0.347 | 4.506 | <0.001 | 1.461 | 3.743 |
| | EBP Knowledge | | 2.065 | 0.522 | 0.304 | 3.957 | <0.001 | 1.034 | 3.096 |
| CS Empathy | Special Admission | 0.313 | 3.101 | 0.813 | 0.298 | 3.812 | <0.001 | 1.491 | 4.711 |
| | EBP Knowledge | | 2.269 | 0.571 | 0.355 | 3.972 | <0.001 | 1.139 | 3.400 |
| | Female | | 1.627 | 0.562 | 0.214 | 2.896 | 0.004 | 0.515 | 2.739 |
| | Attitude EBP | | 1.384 | 0.617 | 0.192 | 2.242 | 0.027 | 0.163 | 2.606 |
| CS Respect | EBP Knowledge | 0.209 | 1.000 | 0.283 | 0.292 | 3.538 | <0.001 | 0.441 | 1.558 |
| | Attitude EBP | | 0.907 | 0.313 | 0.239 | 2.900 | 0.004 | 0.289 | 1.525 |
| CS Assertiveness | Attitude EBP | 0.028 | 1.304 | 0.624 | 0.168 | 2.091 | 0.038 | 0.072 | 2.536 |
| Total CS | Attitude EBP | 0.272 | 6.454 | 1.595 | 0.320 | 4.046 | <0.001 | 3.302 | 9.606 |
| | EBP Knowledge | | 5.219 | 1.441 | 0.287 | 3.621 | <0.001 | 2.371 | 8.067 |
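Regression summaries of this kind (R², unstandardized B with its standard error, standardized beta, t, p, and a 95% CI for B) can be reproduced with an ordinary least squares fit. A sketch using statsmodels on invented data; the variable names and simulated relationships are assumptions for illustration only:

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(0)

# Hypothetical data: two predictors and one communication-skills outcome (not the study data).
n = 150
df = pd.DataFrame({
    "ebp_attitude":  rng.normal(3.8, 0.3, n),
    "ebp_knowledge": rng.normal(2.9, 0.4, n),
})
df["cs_informative"] = 20 + 2.5 * df["ebp_attitude"] + 2.0 * df["ebp_knowledge"] + rng.normal(0, 2, n)

# Unstandardized coefficients (B), standard errors, t, p, 95% CI, and R^2.
X = sm.add_constant(df[["ebp_attitude", "ebp_knowledge"]])
model = sm.OLS(df["cs_informative"], X).fit()
print(model.summary())

# Standardized betas: refit the model on z-scored variables (no constant needed).
z = (df - df.mean()) / df.std(ddof=0)
betas = sm.OLS(z["cs_informative"], z[["ebp_attitude", "ebp_knowledge"]]).fit().params
print(betas)
```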
Source: Ruzafa-Martínez, M.; Pérez-Muñoz, V.; Conesa-Ferrer, M.B.; Ramos-Morcillo, A.J.; Molina-Rodríguez, A. Effectiveness of Training in Evidence-Based Practice on the Development of Communicative Skills in Nursing Students: A Quasi-Experimental Design. Healthcare 2024, 12, 1895. https://doi.org/10.3390/healthcare12181895



Correlational Research | When & How to Use

Published on July 7, 2021 by Pritha Bhandari . Revised on June 22, 2023.

A correlational research design investigates relationships between variables without the researcher controlling or manipulating any of them.

A correlation reflects the strength and/or direction of the relationship between two (or more) variables. The direction of a correlation can be either positive or negative.

| Correlation | Meaning | Example |
|---|---|---|
| Positive correlation | Both variables change in the same direction | As height increases, weight also increases |
| Negative correlation | The variables change in opposite directions | As coffee consumption increases, tiredness decreases |
| Zero correlation | There is no relationship between the variables | Coffee consumption is not correlated with height |

Table of contents

  • Correlational vs. experimental research
  • When to use correlational research
  • How to collect correlational data
  • How to analyze correlational data
  • Correlation and causation
  • Frequently asked questions about correlational research

Correlational vs. experimental research

Correlational and experimental research both use quantitative methods to investigate relationships between variables. But there are important differences in data collection methods and the types of conclusions you can draw.

| | Correlational research | Experimental research |
|---|---|---|
| Purpose | Used to test the strength of association between variables | Used to test cause-and-effect relationships between variables |
| Variables | Variables are only observed, with no manipulation or intervention by researchers | An independent variable is manipulated and a dependent variable is observed |
| Control | Limited control is used, so other variables may play a role in the relationship | Extraneous variables are controlled so that they can't impact the variables of interest |
| Validity | High external validity: you can confidently generalize your conclusions to other populations or settings | High internal validity: you can confidently draw conclusions about causation |

When to use correlational research

Correlational research is ideal for gathering data quickly from natural settings. That helps you generalize your findings to real-life situations in an externally valid way.

There are a few situations where correlational research is an appropriate choice.

To investigate non-causal relationships

You want to find out if there is an association between two variables, but you don’t expect to find a causal relationship between them.

Correlational research can provide insights into complex real-world relationships, helping researchers develop theories and make predictions.

To explore causal relationships between variables

You think there is a causal relationship between two variables, but it is impractical, unethical, or too costly to conduct experimental research that manipulates one of the variables.

Correlational research can provide initial indications or additional support for theories about causal relationships.

To test new measurement tools

You have developed a new instrument for measuring your variable, and you need to test its reliability or validity .

Correlational research can be used to assess whether a tool consistently or accurately captures the concept it aims to measure.

How to collect correlational data

There are many different methods you can use in correlational research. In the social and behavioral sciences, the most common data collection methods for this type of research include surveys, observations, and secondary data.

It’s important to carefully choose and plan your methods to ensure the reliability and validity of your results. You should carefully select a representative sample so that your data reflects the population you’re interested in without research bias .

Surveys

In survey research, you can use questionnaires to measure your variables of interest. You can conduct surveys online, by mail, by phone, or in person.

Surveys are a quick, flexible way to collect standardized data from many participants, but it’s important to ensure that your questions are worded in an unbiased way and capture relevant insights.

Naturalistic observation

Naturalistic observation is a type of field research where you gather data about a behavior or phenomenon in its natural environment.

This method often involves recording, counting, describing, and categorizing actions and events. Naturalistic observation can include both qualitative and quantitative elements, but to assess correlation, you collect data that can be analyzed quantitatively (e.g., frequencies, durations, scales, and amounts).

Naturalistic observation lets you easily generalize your results to real world contexts, and you can study experiences that aren’t replicable in lab settings. But data analysis can be time-consuming and unpredictable, and researcher bias may skew the interpretations.

Secondary data

Instead of collecting original data, you can also use data that has already been collected for a different purpose, such as official records, polls, or previous studies.

Using secondary data is inexpensive and fast, because data collection is complete. However, the data may be unreliable, incomplete or not entirely relevant, and you have no control over the reliability or validity of the data collection procedures.

How to analyze correlational data

After collecting data, you can statistically analyze the relationship between variables using correlation or regression analyses, or both. You can also visualize the relationships between variables with a scatterplot.

Different types of correlation coefficients and regression analyses are appropriate for your data based on their levels of measurement and distributions .

Correlation analysis

Using a correlation analysis, you can summarize the relationship between variables into a correlation coefficient : a single number that describes the strength and direction of the relationship between variables. With this number, you’ll quantify the degree of the relationship between variables.

The Pearson product-moment correlation coefficient , also known as Pearson’s r , is commonly used for assessing a linear relationship between two quantitative variables.

Correlation coefficients are usually found for two variables at a time, but you can use a multiple correlation coefficient for three or more variables.
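A quick sketch of computing Pearson's r, together with Spearman's rho as the rank-based alternative for ordinal or non-normal data, using made-up height and weight values:

```python
from scipy import stats

# Made-up height/weight pairs illustrating a positive correlation.
height_cm = [160, 165, 170, 172, 175, 180, 183, 190]
weight_kg = [55, 60, 66, 64, 70, 75, 80, 88]

r, p_r = stats.pearsonr(height_cm, weight_kg)        # linear association
rho, p_rho = stats.spearmanr(height_cm, weight_kg)   # rank-based association

print(f"Pearson r = {r:.2f} (p = {p_r:.4f})")
print(f"Spearman rho = {rho:.2f} (p = {p_rho:.4f})")
```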

Regression analysis

With a regression analysis , you can predict how much a change in one variable will be associated with a change in the other variable. The result is a regression equation that describes the line on a graph of your variables.

You can use this equation to predict the value of one variable based on the given value(s) of the other variable(s). It’s best to perform a regression analysis after testing for a correlation between your variables.
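The regression equation described above can be fitted and then used for prediction. A minimal sketch with NumPy, using invented study-hours and exam-score data:

```python
import numpy as np

# Hypothetical data: hours of study per week and exam score.
hours = np.array([2, 4, 5, 7, 8, 10, 12, 14])
score = np.array([52, 58, 60, 67, 70, 74, 80, 85])

# Fit a straight line: score ~ slope * hours + intercept.
slope, intercept = np.polyfit(hours, score, deg=1)
print(f"score = {slope:.2f} * hours + {intercept:.2f}")

# Use the fitted equation to predict the outcome for a new predictor value.
new_hours = 9
print(f"predicted score for {new_hours} hours: {slope * new_hours + intercept:.1f}")
```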

Correlation and causation

It’s important to remember that correlation does not imply causation . Just because you find a correlation between two things doesn’t mean you can conclude one of them causes the other for a few reasons.

Directionality problem

If two variables are correlated, it could be because one of them is a cause and the other is an effect. But the correlational research design doesn’t allow you to infer which is which. To err on the side of caution, researchers don’t conclude causality from correlational studies.

Third variable problem

A confounding variable is a third variable that influences other variables to make them seem causally related even though they are not. Instead, there are separate causal links between the confounder and each variable.

In correlational research, there’s limited or no researcher control over extraneous variables . Even if you statistically control for some potential confounders, there may still be other hidden variables that disguise the relationship between your study variables.
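Statistically controlling for a measured confounder can be illustrated with a simple partial correlation: correlate the residuals of each study variable after regressing out the confounder. The sketch below uses simulated data in which a confounder drives both variables; note that this does nothing about unmeasured confounders:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

# Simulated confounder drives both x and y, creating a spurious association.
confounder = rng.normal(size=200)
x = 0.8 * confounder + rng.normal(scale=0.5, size=200)
y = 0.8 * confounder + rng.normal(scale=0.5, size=200)

def residualize(values, control):
    """Remove the linear effect of the control variable and return the residuals."""
    slope, intercept = np.polyfit(control, values, deg=1)
    return values - (slope * control + intercept)

raw_r, _ = stats.pearsonr(x, y)
partial_r, _ = stats.pearsonr(residualize(x, confounder), residualize(y, confounder))

print(f"raw r = {raw_r:.2f}; r after controlling for the confounder = {partial_r:.2f}")
```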

Although a correlational study can’t demonstrate causation on its own, it can help you develop a causal hypothesis that’s tested in controlled experiments.


Frequently asked questions about correlational research

A correlation reflects the strength and/or direction of the association between two or more variables.

  • A positive correlation means that both variables change in the same direction.
  • A negative correlation means that the variables change in opposite directions.
  • A zero correlation means there’s no relationship between the variables.

A correlational research design investigates relationships between two variables (or more) without the researcher controlling or manipulating any of them. It’s a non-experimental type of quantitative research .

Controlled experiments establish causality, whereas correlational studies only show associations between variables.

  • In an experimental design , you manipulate an independent variable and measure its effect on a dependent variable. Other variables are controlled so they can’t impact the results.
  • In a correlational design , you measure variables without manipulating any of them. You can test whether your variables change together, but you can’t be sure that one variable caused a change in another.

In general, correlational research is high in external validity while experimental research is high in internal validity .

A correlation is usually tested for two variables at a time, but you can test correlations between three or more variables.

A correlation coefficient is a single number that describes the strength and direction of the relationship between your variables.

Different types of correlation coefficients might be appropriate for your data based on their levels of measurement and distributions . The Pearson product-moment correlation coefficient (Pearson’s r ) is commonly used to assess a linear relationship between two quantitative variables.

Source: Bhandari, P. (2023, June 22). Correlational Research | When & How to Use. Scribbr. Retrieved September 21, 2024, from https://www.scribbr.com/methodology/correlational-research/

  • Open access
  • Published: 20 September 2024

The effect of clinical supervision model on nurses’ self-efficacy and communication skills in the handover process of medical and surgical wards: an experimental study

  • Faezeh Gheisari   ORCID: orcid.org/0009-0008-2699-9023 1 ,
  • Sedigheh Farzi   ORCID: orcid.org/0000-0001-9952-1516 2 ,
  • Mohammad Javad Tarrahi   ORCID: orcid.org/0000-0001-7875-4572 3 &
  • Tahere Momeni-Ghaleghasemi   ORCID: orcid.org/0000-0002-7476-0294 4  

BMC Nursing volume 23, Article number: 672 (2024)

The handover process is a vital part of maintaining continuity and patient safety, particularly when conducted between nurses at the end of shifts. Nurses often face challenges in handover due to a lack of self-efficacy and inadequate communication skills. The clinical supervision model, by providing emotional, educational, and organizational support, aids in skill acquisition and instills confidence.

This study was conducted to investigate the effect of the clinical supervision model on nurses’ self-efficacy and communication skills in the handover process within medical and surgical wards.

This experimental two-group (pre- and post-test) study was conducted in 2024 at a selected hospital affiliated with Isfahan University of Medical Sciences, Isfahan, Iran. Convenience sampling was used, and participants were randomly assigned to either the intervention or control group. Data were collected using the ISBAR communication checklist, the communication clarity checklist, the Sherer General Self-Efficacy Scale (GSES), the Visual Analog Scale (VAS) for handover self-efficacy, and the Manchester Clinical Supervision Scale (MCSS). The clinical supervision model and routine supervision were implemented in six sessions for the intervention and control groups, respectively. Data were analyzed using SPSS version 16, employing independent t-tests, covariance analysis, paired t-tests, chi-square tests, and repeated measures ANOVA with a significance level of p < 0.05.

No significant differences were observed between the intervention and control groups in terms of baseline characteristics. Inter-group analysis indicated that there were no significant differences in the scores of self-efficacy, ISBAR, and communication clarity between the control and intervention groups before the intervention ( P  > 0.05). According to the intra-group analysis, the ISBAR and communication clarity scores in the intervention group significantly increased over time ( p  < 0.001), whereas no such increase was observed in the control group. The intervention group showed a significant increase in general self-efficacy ( p  < 0.001) compared to the control group. Although both groups showed a significant improvement in handover self-efficacy, the mean scores of the intervention group were higher than those of the control group ( p  < 0.001). The mean score of the Manchester Clinical Supervision Scale in the intervention group was 128.98, indicating the high effectiveness of implementing the clinical supervision model.

The findings indicated that the use of the clinical supervision model improves self-efficacy and communication skills in the handover process of nurses in medical and surgical wards. Therefore, it is recommended to use this model in handover training to enhance the quality of care and improve patient safety.


Introduction

The handover process involves the efficient transfer of clinical information to delegate professional responsibility and accountability for patient care to another individual or professional group [ 1 ]. This process is one of the top five priorities for improving patient safety worldwide [ 2 ]. Handover, especially at the end of shifts, occurs at least 2–3 times daily and is an integral part of nursing practice. With the increasing emphasis on interprofessional patient care, the frequency of handovers has also increased [ 3 ].

Inefficient handover leads to incomplete information transfer, resulting in repeated assessments, treatment delays, medication errors, avoidable readmissions, increased complications and patient mortality, and additional financial burdens on the healthcare system [ 4 , 5 , 6 ]. The United States Safety Committee has reported that poor handover is the primary cause of 65% of adverse events and 90% of root causes of errors [ 7 ]. Nurses frequently encounter omissions, inaccuracies, and irrelevant information during handovers [ 5 ]. Essential information is omitted in 43.17% of handovers and nursing documentation [ 8 ], and approximately 22% of adverse events related to nursing care are associated with poor handovers [ 9 ]. Literature reviews have shown that nurses often struggle with handover execution due to a lack of self-efficacy and communication skills [ 4 , 10 , 11 ].

Self-efficacy is the extent of an individual’s belief in their ability to complete a task or achieve a goal [ 12 ]. Self-efficacy increases confidence and motivation to communicate with others [ 13 ] and is an important factor in improving the quality of patient care [ 14 ]. Low self-efficacy among nurses leads to delays in intervention and negatively impacts patient care [ 15 , 16 ]. Also, the World Health Organization (WHO) has identified communication failure as the primary cause of adverse events in healthcare [ 17 ] and has stated that precise and skilled communication should be a high priority in handover [ 18 ].

To improve self-efficacy in handovers, nursing managers should create a positive organizational climate for relationships among nurses so that they feel satisfied with their communication with colleagues. They should also provide opportunities, such as education programs or systems, for nurses to develop their communication skills [ 19 ]. ISBAR describes a structured form of handover that facilitates intra- and interprofessional communication among healthcare providers and has been endorsed by the WHO [ 20 , 21 ] (Table 1).

The clinical supervision model (CSM) is a clinical education model for nurses designed to reduce the gap between theory and practice [ 22 ]. It is a structured program in which nurses receive guidance and support from a trained supervisor, who provides feedback on their performance [ 23 ]. Where 'cutting corners' and 'gaps in care' are regular occurrences in daily nursing practice, often going unnoticed and therefore continuing [ 24 ], the CSM provides an opportunity for reflection on current practice and the development and improvement of future practice [ 25 ]. The CSM aids learning through emotional, educational, and organizational support [ 26 ] and is recommended as a means of enhancing the quality of patient care in healthcare settings [ 27 ]. Education and support that enhance nurses' self-efficacy and communication skills are identified as two influential factors in improving effective handover. Therefore, the present study was conducted to examine the impact of the clinical supervision model on nurses' self-efficacy and communication skills in the handover process within medical and surgical wards.

This experimental two-group study with a pre- and post-test design was conducted in 2024 at a selected hospital affiliated with Isfahan University of Medical Sciences, Isfahan, Iran. The study was single-blinded, with the statistical analyst blinded to group allocation.

Participants

Participants included all nurses working in the medical and surgical departments of the selected hospital. Inclusion criteria were willingness to participate in the study, holding a bachelor’s degree, being a nurse responsible for direct patient care, and not using the ISBAR framework prior to the study. Exclusion criteria were discontinuation of collaboration with the study department and unwillingness to continue participation in the study.

Sample size

The sample size was estimated based on a similar study [ 4 ] with the following parameters: S 1  = 15.11, S 2  = 12.10, µ 1  = 60.94, µ 2  = 51.54, α = 0.05, and β = 0.2, assuming a 15% attrition rate, resulting in an estimated sample size of 80 nurses.
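With the reference study's means and standard deviations, a standard two-sample comparison-of-means formula plus a 15% attrition adjustment yields roughly 40 nurses per group (80 in total). The formula choice below is my assumption; the authors may have used dedicated software, but the arithmetic is as follows:

```python
import math
from scipy.stats import norm

s1, s2 = 15.11, 12.10      # standard deviations reported in the reference study
mu1, mu2 = 60.94, 51.54    # expected group means
alpha, beta = 0.05, 0.20   # type I error and type II error (power = 0.80)
attrition = 0.15

z_alpha = norm.ppf(1 - alpha / 2)
z_beta = norm.ppf(1 - beta)

# Sample size per group for comparing two independent means
n_per_group = ((z_alpha + z_beta) ** 2) * (s1 ** 2 + s2 ** 2) / (mu1 - mu2) ** 2
n_adjusted = math.ceil(n_per_group / (1 - attrition))

print(f"n per group = {math.ceil(n_per_group)}; after 15% attrition = {n_adjusted} (total = {2 * n_adjusted})")
```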

The researcher first visited the hospital, which had two medical departments (medical 1 and medical 2) and two surgical departments (Women’s Surgery and Men’s Surgery). Using a random number table, one medical department and one surgical department were selected as the intervention group, while the other medical and surgical departments were designated as the control group. From medical 1, medical 2, and Women’s Surgery departments, each with 20 nurses, all were included in the study (census sampling). From the Men’s Surgery department, which had 30 nurses, 20 were randomly selected using the random number table. Thus, the number of samples in each control and intervention group was 40 (Fig.  1 ).

Figure 1. CONSORT flowchart.

Study tools

Data were collected using a demographic questionnaire, ISBAR Communication Checklist and Communication Clarity, Visual Analog Scale (VAS), Sherer Self-Efficacy Scale (GSES) and Manchester Clinical Supervision Scale (MCSS).

The demographic questionnaire included individual information (age, gender, marital status) and professional details (work experience, average number of shifts per month, and average number of patients under care).

ISBAR communication checklist

This checklist includes 12 items rated on a 3-point Likert scale (0 = Not Implemented, 1 = Incomplete, 2 = Acceptable), with a total score range from 0 to 24. This scale is used to evaluate nurses’ performance in implementing the ISBAR framework during handovers [ 4 , 28 ]. The checklist was translated into Persian, and its content and face validity were assessed in consultation with 10 nursing faculty experts specializing in handover and shift reports. The Content Validity Index (CVI), Content Validity Ratio (CVR), and face validity were 1, 1, and above 1.5, respectively. External reliability was assessed using the test-retest method; the intraclass correlation coefficient (ICC) was 0.803 (95% CI: 0.628–0.901, p < 0.001). Internal reliability, measured by Cronbach’s alpha, was 0.739.
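Internal consistency (Cronbach's alpha) is a routine check when a checklist is translated and adapted, alongside test-retest reliability. A minimal Cronbach's alpha sketch on invented item scores is shown below; the test-retest ICC would normally come from a dedicated routine (for example, pingouin's intraclass correlation function), which is not reproduced here:

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha; rows are respondents, columns are checklist items."""
    k = items.shape[1]
    sum_item_var = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - sum_item_var / total_var)

# Hypothetical ratings of 6 handovers on a 12-item, 0-2 checklist (not the study data).
scores = np.array([
    [2, 1, 2, 2, 1, 2, 1, 2, 2, 1, 2, 2],
    [1, 1, 1, 2, 1, 1, 1, 1, 2, 1, 1, 1],
    [2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2],
    [0, 1, 0, 1, 0, 1, 0, 1, 1, 0, 1, 0],
    [1, 2, 1, 1, 2, 1, 1, 2, 1, 1, 1, 2],
    [2, 1, 2, 1, 1, 2, 2, 1, 2, 2, 1, 1],
])
print(f"Cronbach's alpha = {cronbach_alpha(scores):.3f}")
```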

The communication clarity checklist

This checklist consists of 7 items, rated on a 5-point Likert scale, with a total score range of 7 to 35. The goal of this scale is to assess participants’ ability to identify important information and convey it accurately and understandably; higher scores indicate greater clarity in their handovers [ 18 ]. The checklist was translated into Persian, and its content and face validity were assessed in consultation with 10 nursing faculty experts specializing in handover and shift reports. After item 8 was removed, the Content Validity Index (CVI), Content Validity Ratio (CVR), and face validity for the remaining items were 0.94, 1, and above 1.5, respectively. External reliability was assessed using the test-retest method; the intraclass correlation coefficient (ICC) was 0.941 (95% CI: 0.880–0.972, p < 0.001). Internal reliability, measured by Cronbach’s alpha, was 0.871. Communication clarity assesses the clarity of communication, complementing the ISBAR checklist in evaluating the effectiveness of communication skills.

The visual analog scale (VAS)

This scale was used to assess participants’ self-efficacy in performing handovers. Participants were asked to indicate on a scale from 0 to 100 how confident they felt about their ability to perform handovers (0 = not confident at all, 100 = very confident). The VAS is a reliable and valid method for measuring subjective feelings with minimal distortion and bias [ 4 ]. Its validity as a measure of self-efficacy has been confirmed by Turner et al. (2008) [ 29 ].

The Sherer Self-Efficacy Scale (GSES) questionnaire

This questionnaire consists of 17 items, rated on a 5-point Likert scale (1 = Strongly Disagree, 2 = Disagree, 3 = Neutral, 4 = Agree, 5 = Strongly Agree), with a total score range from 17 to 85. The questionnaire was originally developed and psychometrically validated by Sherer and colleagues [ 30 ]. The validity and reliability of the Persian version of the questionnaire have been confirmed in Iran [ 31 ].

The Manchester Clinical Supervision Scale (MCSS)

This scale was used to assess the effectiveness of the clinical supervision model. The MCSS was created in 1995 at the University of Manchester, England [ 32 ]. This questionnaire consists of 32 items covering 7 subscales: Trust and Relationships, Supervisor’s Advice and Support, Care and Improved Skills, Importance and Value of Clinical Supervision, Finding Time, Personal Issues, and Feedback. Each item is rated on a 5-point Likert scale: Strongly Disagree (1 point), Disagree (2 points), Neutral (3 points), Agree (4 points), and Strongly Agree (5 points). Scores for each subscale are summed, with higher scores indicating better clinical supervision performance in that area. The validity and reliability of the Persian version of the questionnaire have been confirmed by Khani et al. (2009), and a total score of 122 or more was considered to indicate effective supervision [ 33 ].
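Scoring a multi-subscale Likert instrument such as the MCSS amounts to summing item responses within each subscale and then across subscales. The sketch below is a toy example; the subscale-to-item mapping is invented for illustration and is not the actual MCSS scoring key:

```python
# Hypothetical responses (1-5 Likert) for one nurse, keyed by item number: all "Agree".
responses = {item: 4 for item in range(1, 33)}   # 32 items

# Invented subscale-to-item mapping for illustration (NOT the real MCSS scoring key).
subscales = {
    "trust_and_relationships":    [1, 2, 3, 4, 5],
    "supervisor_advice_support":  [6, 7, 8, 9, 10],
    "care_and_improved_skills":   [11, 12, 13, 14, 15],
    "importance_and_value":       [16, 17, 18, 19, 20],
    "finding_time":               [21, 22, 23, 24],
    "personal_issues":            [25, 26, 27, 28],
    "feedback":                   [29, 30, 31, 32],
}

subscale_scores = {name: sum(responses[i] for i in items) for name, items in subscales.items()}
total = sum(subscale_scores.values())

print(subscale_scores)
print(f"total = {total} (122 or more was taken to indicate effective supervision)")
```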

Initially, the researcher approached nurses during their free time, explained the importance of handover and the negative impacts of incomplete handover, and outlined the study procedure. Informed consent was obtained from the participants, and they were provided with the Sherer General Self-Efficacy Scale (GSES) and the Visual Analog Scale (VAS) to assess their self-efficacy in handovers. Additionally, the ISBAR scores and communication clarity were assessed using the checklist by observing their handover performance in both the intervention and control groups.

Intervention group

In the intervention group, handover based on the ISBAR framework was implemented through the clinical supervision model, which comprised three phases, as follows [ 34 ]:

In the first phase, the nurse educator organized an individual meeting outside the regular shift times for the nurses to avoid any stress related to their clinical duties. During this meeting, the importance of handover, the consequences of incomplete handover for both nurses and patients, and the criteria for effective handover were discussed. The CSM, its benefits, stages, and the roles of the supervisor and nurses were also explained, questions were answered, and ambiguities were addressed. The ISBAR-based handover checklist was then distributed, and each item was discussed. Nurses were asked to apply the ISBAR framework to two clinical cases and provide feedback on the checklist items. The nurses were reminded that in future supervision sessions, they would be expected to use the checklist items during patient handovers.

One week after the first phase, the observer attended the medical and surgical wards to conduct clinical supervision sessions while the supervised nurses were completing their shifts and handing over patients to the next shift nurse at the bedside. In this study, a nurse educator with years of experience in supervision and teaching was selected for the role. She was competent in communication skills, providing feedback, and nursing handovers. The clinical supervision sessions were held at the bedside, and the nurses’ performance was assessed using ISBAR communication and Communication Clarity checklists, also at the bedside. These sessions, conducted over 3 months, occurred 6 times (two morning shifts, two afternoon shifts, and two night shifts per participant) at two-week intervals. During these sessions, nurses brought the ISBAR checklist, followed its items, received feedback from the supervisor if errors were made, and discussed any issues with the supervisor. The nurses’ communication skills scores were calculated according to the checklist in each session. Each clinical supervision session lasted between 40 and 60 min and was conducted individually.

In the third phase, the Manchester Clinical Supervision Scale was used to determine the effectiveness of implementing the clinical supervision model.

Control group

For the control group, the nurse educator organized an individual meeting outside the regular shift times for the nurses to avoid any stress related to their clinical duties. During this meeting, the study objectives and the number of supervision sessions were discussed, and it was mentioned that their handover performance would be evaluated with the ISBAR communication checklist and the communication clarity checklist during the sessions. However, they were not provided with the checklists. The control group also underwent 6 supervision sessions, held at two-week intervals over a period of 3 months. During these sessions, the nurses’ performance during bedside handovers was assessed and recorded by the supervisor using the ISBAR communication and communication clarity checklists. Although feedback on erroneous performance was not provided, any questions from the nurses regarding handovers were addressed.

At the end of the study, the general self-efficacy scores and Visual Analog Scale (VAS) scores for both the control and intervention groups were obtained through self-reports by the nurses.

Data analysis

Data were analyzed using SPSS version 16 (SPSS, Inc., Chicago, IL, USA). Descriptive statistics (frequency, percentage, mean, and standard deviation) were used to describe the data. The normality of quantitative variables was assessed using the Kolmogorov-Smirnov test. To compare qualitative variables between the two groups, the chi-square test was used. To compare means between groups and within groups, independent t-tests, multivariate analysis of covariance (MANCOVA), and paired t-tests were employed. Additionally, repeated measures analysis of variance (ANOVA) was used to compare mean scores at six time points. A significance level of < 0.05 was set.
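As a rough illustration of how parts of this analysis plan could look in code (this is not the authors' script; variable names and the simulated data are placeholders), the sketch below runs a baseline independent t-test and a within-group repeated measures ANOVA across the six supervision sessions:

```python
import numpy as np
import pandas as pd
from scipy import stats
from statsmodels.stats.anova import AnovaRM

rng = np.random.default_rng(1)

# Hypothetical long-format dataset: one ISBAR score per nurse per supervision session.
rows = []
for group in ("intervention", "control"):
    for nurse in range(40):
        for session in range(1, 7):
            gain = 1.5 * session if group == "intervention" else 0.0
            rows.append({"nurse": f"{group}_{nurse}", "group": group,
                         "session": session, "isbar": 10 + gain + rng.normal(0, 2)})
df = pd.DataFrame(rows)

# Between-group comparison at baseline (independent t-test).
base = df[df["session"] == 1]
t, p = stats.ttest_ind(base.loc[base["group"] == "intervention", "isbar"],
                       base.loc[base["group"] == "control", "isbar"])
print(f"baseline comparison: t = {t:.2f}, p = {p:.3f}")

# Within-group change across the six sessions (repeated measures ANOVA, intervention group only).
rm = AnovaRM(df[df["group"] == "intervention"], depvar="isbar",
             subject="nurse", within=["session"]).fit()
print(rm)
```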

There were no significant differences between the intervention and control groups regarding demographic characteristics ( p  > 0.05). Since the p -value for gender was close to 0.05, it could have been a confounding factor; therefore, it was considered as such in the analyses (Table  2 ).

The independent t-test revealed no significant difference in baseline ISBAR scores between the two groups. Repeated measures analysis showed that changes in ISBAR scores depended on the type of group, with the mean ISBAR scores significantly increasing over time in the intervention group ( p  < 0.001), while there was no significant change in the control group ( p  = 0.780). Multivariate analysis of covariance was used to compare scores between the two groups at six time points, accounting for gender and baseline ISBAR scores as confounders. The results indicated significant differences in mean ISBAR scores between the two groups across all measurement points ( p  < 0.001) (Table  3 ).

The independent t-test indicated no significant difference between the two groups in baseline communication clarity scores. Repeated measures analysis revealed that changes in communication clarity scores were dependent on the group type. Specifically, the mean scores for communication clarity significantly improved over time in the intervention group ( p  < 0.001), while no such improvement was observed in the control group ( p  = 0.882). A multivariate analysis of covariance was used to compare scores between the two groups across six time points, considering gender and baseline communication clarity scores as confounders. The results demonstrated significant differences in mean communication clarity scores between the two groups across all measurement points ( p  < 0.001) (Table  4 ).

Finally, a pairwise comparison of the ISBAR and communication clarity scores across the intervention group’s supervision sessions, using the LSD test, revealed a significant increase in scores for each session compared to the other sessions ( p  < 0.001).

The independent t-test revealed no significant difference between the two groups in baseline general self-efficacy scores ( p  = 0.537). The multivariate analysis of covariance indicated that the mean general self-efficacy scores in the intervention group were significantly higher at the end of the intervention (considering gender and baseline self-efficacy scores as confounders) ( p  < 0.001). The paired t-test showed a significant difference in the mean scores of the intervention group before and after the clinical supervision sessions ( p  < 0.001), whereas no significant difference was observed in the control group before and after the intervention ( p  = 0.872) (Table  5 ).

The independent t-test indicated no significant difference between the two groups in baseline scores for delivery and handover self-efficacy ( p  = 0.762). The multivariate analysis of covariance showed that the mean scores for delivery and handover self-efficacy in the intervention group were significantly higher at the end of the intervention (considering gender and baseline scores as confounders) ( p  < 0.001). The paired t-test revealed a significant difference in the mean scores of the intervention group before and after the clinical supervision sessions ( p  < 0.001). Likewise, the change in the mean scores in the control group was significantly different before and after the intervention ( p  = 0.012). However, the mean scores of the intervention group were higher than those of the control group (Table  6 ).

The mean total score for the Manchester Clinical Supervision Scale (MCSS) was 128.98, indicating excellent effectiveness of the Clinical Supervision Model (CSM) from the perspective of the nurses (Table 7).

This study aimed to assess the impact of the Clinical Supervision Model (CSM) on the handover process among nurses in medical and surgical wards, based on the ISBAR framework, to enhance communication skills and self-efficacy, which are essential components of patient care. The results of our study demonstrated the significant impact of the CSM on improving nurses’ communication skills and self-efficacy in the handover process. The CSM plays a crucial role in enhancing skills by providing appropriate feedback and creating a supportive learning environment. In the CSM, the supervisor identifies individual needs through observing performance, plans for improvements, and fosters a supportive and motivating environment that encourages active participation in skill development [ 35 ]. Effective supervision, by providing support and opportunities to identify strengths and weaknesses, reduces anxiety in supervisees and fosters a better sense of overall performance and ability, consequently having a positive effect on self-efficacy [ 36 ]. Supervisors can also significantly enhance self-efficacy by providing feedback on positive behaviors [ 37 ].

The pre-intervention ISBAR scores revealed that despite the incorporation of the ISBAR framework into continuing education programs and the hospital’s requirement for its implementation, including the design of handover documents consistent with this framework, nurses still did not adhere to it during handover, resulting in incomplete information transfer. Furthermore, the mean score of communication clarity before the intervention indicated that the quality of communication during handover was inadequate, highlighting the need for effective communication techniques to convey important issues concisely and clearly.

The results of the present study demonstrate that the CSM significantly improved nurses’ performance in handover. This improvement underscores the model’s effectiveness in addressing the gaps identified in pre-intervention practices and enhancing both adherence to the ISBAR framework and the overall quality of communication during the handover process. The clinical supervision provided not only facilitated adherence to structured communication frameworks but also enhanced nurses’ self-efficacy and communication skills, contributing to more effective and safe patient care transitions.

In the first phase of the CSM, a session was held with nurses to discuss not only the importance of handover but also the CSM, its benefits, stages, and the roles of supervisors and supervisees. Rothwell et al. (2021) identified a significant barrier to effective clinical supervision as a lack of understanding of the role and purpose of supervision. In such conditions, supervisees reported anxiety and sometimes perceived supervision as an intrusion into their work, leading to a negative association with the term “clinical supervision” and consequently decreased participation [ 38 ].

In the first phase, the ISBAR checklist and communication clarity were also agreed upon for use in implementing the model. Terry et al. (2020) demonstrated that a mutually agreed-upon program between the supervisor and supervisee can serve as a basis for periodic reviews, feedback, and a key indicator of successful clinical supervision [ 39 ]. Similarly, Thyness et al. (2022) highlighted that students viewed the use of checklists as a strength in executing clinical supervision due to its role in preventing confusion and increasing orderliness [ 40 ].

In the second stage, six clinical supervision sessions were conducted at two-week intervals over a period of three months. Continuous clinical supervision is essential for establishing a positive relationship between the supervisor and the supervisee, and for achieving success in clinical practice [ 41 ]. Studies have also highlighted the need for prolonged training in handovers and shift reports to improve communication clarity [ 18 ] and self-efficacy [ 42 ]. During the supervision sessions, the supervisor provided comprehensive support to the nurses in addressing issues related to handover execution, offered feedback based on their performance, and discussed any deficiencies. A notable advantage of the clinical supervision model is the shared dialogue between the supervisor and the supervisee and the feedback provided, as it facilitates agreement and collaboration, challenges individuals’ ideologies, and enhances both performance [ 43 ] and nurses’ self-confidence [ 44 ].

In the third stage of the clinical supervision model, the MCSS was used to examine the effectiveness of clinical supervision in the intervention group. The scores from the Manchester Scale indicated a high level of effectiveness of the clinical supervision. Snowdon et al. also examined the effectiveness of the clinical supervision model among healthcare providers. Participants in their study assessed the model as effective and had a positive perception of its implementation [ 45 ].

In the present study, we assessed nurses’ communication skills using the ISBAR checklist and communication clarity. The communication skills scores, based on the ISBAR checklist, significantly improved in the intervention group following the implementation of the clinical supervision model. This finding is consistent with the results of the study by Fahim Yegane et al. (2017) [ 7 ]. The use of the standard ISBAR framework in handover prevents the omission of critical details and reduces the focus on irrelevant and unnecessary information [ 46 ]. Additionally, the communication clarity scores for nurses during handovers also improved in the present study. Uhm et al. (2019) found that using the ISBAR framework and providing feedback to final-year nursing students in real-world settings led to improvements in ISBAR communication and communication clarity [ 42 ]. These results align with our findings in the real-world nursing environment. Ikbal et al. (2019) conducted a study to determine the impact of clinical supervision on nurses’ performance, showing improvements in knowledge, attitudes, and skills [ 47 ]. Similarly, the study by Setiawan et al. demonstrated that implementing the clinical supervision model led to improvements in performance, including technical skills and knowledge [ 48 ]. In our study, which lasted for three months, the average scores for ISBAR and communication clarity showed a consistent upward trend over time, and self-efficacy also showed significant changes after three months. This reinforces the strength of the clinical supervision model in creating a supportive environment for addressing individual issues and ensuring adherence to training. Ultimately, improved communication skills can lead to enhanced patient safety, better quality of care, and increased interprofessional collaboration.

Another finding of our study was the improvement in nurses’ self-efficacy in handover and general self-efficacy. Self-efficacy refers to self-confidence and a belief in one’s ability to perform tasks effectively, which implies ease, reduced anxiety, and a belief in the success of handovers [ 49 ]. Our results indicated a significant increase in handover self-efficacy following the implementation of the clinical supervision model. This finding is consistent with a study on nurses where self-efficacy and adherence to evidence-based handover practices improved after participation in a simulation-based program [ 50 ].

In our study, there was a significant difference in the mean general self-efficacy scores between the two groups. This finding aligns with the study by Lohani and Sharma (2023), which examined the impact of clinical supervision on self-awareness and self-efficacy among psychotherapists and counselors [ 36 ]. Additionally, Abrishami et al. (2024) found that training based on the ISBAR framework was effective in enhancing patient safety and nurse self-efficacy [ 16 ]. Self-efficacy is a crucial aspect of nursing practice and is associated with greater control, motivation, and resilience in challenging situations, such as the COVID-19 pandemic, which can impact patient outcomes and nurse job satisfaction [ 51 ]. Incorporating a long-term ISBAR-based handover training program into ongoing nursing education, rather than a single-session program, is essential for the continuous improvement of communication clarity, self-efficacy, safety, and quality of nursing care.

Conclusions

Communication deficiencies and lack of self-confidence are associated with poor information transfer during handovers, which threatens patient safety and care quality. The clinical supervision model offers a flexible opportunity for nurses to gain knowledge and extensively practice communication skills, while also providing emotional support that enhances their self-efficacy. Participants in the clinical supervision model reported high levels of satisfaction, adherence to the ISBAR framework, and improvements in communication clarity and self-efficacy. Therefore, the clinical supervision model is an effective method for training nurses in handovers and transitions.

Limitations

This study had several limitations. Firstly, it was conducted solely with nurses from a single hospital, which may limit the generalizability of the findings. Additionally, rather than randomizing individual participants, entire wards were randomly assigned. However, baseline variables did not differ between the intervention and control groups, and to ensure accuracy, baseline values of dependent variables were considered in the statistical analyses. Also, we used a single observer, in accordance with the intervention protocol. We suggest that future studies use two observers and assess inter-observer reliability.

Implication

These findings underline the importance of clearly defining the roles and expectations of clinical supervision to increase engagement among supervisees. The successful implementation of the ISBAR checklist and the focus on communication clarity further supported the effective execution of the Clinical Supervision Model, enhancing the overall quality of handover practices.

Data availability

The data supporting the findings of this study are available on request from the corresponding author. The data are not publicly available due to privacy or ethical restrictions.

Brown-Deveaux D, Kaplan S, Gabbe L, Mansfield L. Transformational Leadership meets innovative strategy: how nurse leaders and clinical nurses redesigned Bedside Handover to improve nursing practice. Nurse Lead. 2022;20(3):290–6. https://doi.org/10.1016/j.mnl.2021.10.010


Pakcheshm B, Bagheri I, Kalani Z. The impact of using ISBAR standard checklist on nursing clinical handoff in coronary care units. Nurs Pract Today. 2020;7(4):266–74. https://doi.org/10.18502/npt.v7i4.4036


Thompson JE, Collett LW, Langbart MJ, Purcell NJ, Boyd SM, Yuminaga Y, et al. Using the ISBAR handover tool in junior medical officer handover: a study in an Australian tertiary hospital. Postgrad Med J. 2011;87(1027):340–4. https://doi.org/10.1136/pgmj.2010.105569

Chung JYS, Li WHC, Cheung AT, Ho LLK, Chung JOK. Efficacy of a blended learning programme in enhancing the communication skill competence and self-efficacy of nursing students in conducting clinical handovers: a randomised controlled trial. BMC Med Educ. 2022;22(1):275. https://doi.org/10.1186/s12909-022-03361-3


Chung JYS, Li WHC, Ho LLK, Cheung AT, Chung JOK. Newly graduate nurse perception and experience of clinical handover. Nurse Educ Today. 2021;97:104693. https://doi.org/10.1016/j.nedt.2020.104693

Desmedt M, Ulenaers D, Grosemans J, Hellings J, Bergs J. Clinical handover and handoff in healthcare: a systematic review of systematic reviews. Int J Qual Health Care. 2021;33(1):4. https://doi.org/10.1093/intqhc/mzaa170

Fahim Yegane SA, Shahrami A, Hatamabadi HR, Hosseini-Zijoud SM. Clinical information transfer between EMS Staff and Emergency Medicine assistants during Handover of Trauma patients. Prehosp Disaster Med. 2017;32(5):541–7. https://doi.org/10.1017/S1049023X17006562

Speltri MF, Vaselli M, Baratta S, Lazzarini M, Baroni M, Ripoli A, et al. [Nursing handover: an observational study at the inpatient wards - cardiothoracic department]. Prof Inferm. 2021;74(4):160–5. https://doi.org/10.7429/pi.2021.744230

Tran DT, Johnson M. Classifying nursing errors in clinical management within an Australian hospital. Int Nurs Rev. 2010;57(4):454– 62. https://doi.org/10.1111/j.1466-7657.2010.00846.x


Lavoie P, Clausen C, Purden M, Emed J, Frunchak V, Clarke SP. Nurses’ experience of handoffs on four Canadian medical and surgical units: a shared accountability for knowing and safeguarding the patient. J Adv Nurs. 2021;77(10):4156–69. https://doi.org/10.1111/jan.14997

Sabet Sarvestani R, Moattari M, Nasrabadi AN, Momennasab M, Yektatalab S. Challenges of nursing handover: a qualitative study. Clin Nurs Res. 2015;24(3):234–52. https://doi.org/10.1177/1054773813508134

Croy G, Garvey L, Willetts G, Wheelahan J, Hood K. Anxiety, flipped approach and self-efficacy: exploring nursing student outcomes. Nurse Educ Today. 2020;93:104534. https://doi.org/10.1016/j.nedt.2020.104534

Jun WH. Anger expression, self-efficacy and interpersonal competency of Korean nursing students. Int Nurs Rev. 2016;63(4):539–46. https://doi.org/10.1111/inr.12314

Yada H, Odachi R, Adachi K, Abe H, Yonemoto F, Fujiki T, et al. Validity and reliability of Psychiatric Nurse Self-Efficacy Scales: cross-sectional study. BMJ Open. 2022;12(1):e055922. https://doi.org/10.1136/bmjopen-2021-055922

Takashiki R, Komatsu J, Nowicki M, Moritoki Y, Okazaki M, Ohshima S, et al. Improving performance and self-efficacy of novice nurses using hybrid simulation-based mastery learning. Jpn J Nurs Sci. 2022;e12519. https://doi.org/10.1111/jjns.12519

Abrishami R, Golestani K, Farhang Ranjbar M, Ghasemie Abarghouie MH, Ghadami A. A survey on the effects of patient safety training programs based on SBAR and FMEA techniques on the level of self-efficacy and observance of patient safety culture in Iran hospital, Shiraz in 2022–2023. J Educ Health Promot. 2024;13:66. https://doi.org/10.4103/jehp.jehp_194_23

Martínez-Fernández MC, Castiñeiras-Martín S, Liébana-Presa C, Fernández-Martínez E, Gomes L, Marques-Sanchez P. SBAR Method for improving well-being in the Internal Medicine Unit: Quasi-experimental Research. Int J Environ Res Public Health. 2022;19(24). https://doi.org/10.3390/ijerph192416813

Yu M, Kang KJ. Effectiveness of a role-play simulation program involving the sbar technique: a quasi-experimental study. Nurse Educ Today. 2017;53:41–7. https://doi.org/10.1016/j.nedt.2017.04.002

Lee Y, Kim H, Oh Y. Effects of Communication skills and Organisational Communication satisfaction on self-efficacy for handoffs among nurses in South Korea. Healthc (Basel). 2023;11(24). https://doi.org/10.3390/healthcare11243125

Moore M, Roberts C, Newbury J, Crossley J. Am I getting an accurate picture: a tool to assess clinical handover in remote settings? BMC Med Educ. 2017;17:1–9. https://doi.org/10.1186/s12909-017-1067-0

Andreasen EM, Høigaard R, Berg H, Steinsbekk A, Haraldstad K. Usability evaluation of the Preoperative ISBAR (Identification, Situation, background, Assessment, and recommendation) Desktop virtual reality application: qualitative observational study. JMIR Hum Factors. 2022;9(4):e40400. https://doi.org/10.2196/40400

Crafoord MT, Fagerdahl AM. Clinical supervision in perioperative nursing education in Sweden - A questionnaire study. Nurse Educ Pract. 2017;24:29–33. https://doi.org/10.1016/j.nepr.2017.03.006

Gonge H, Buus N. Is it possible to strengthen psychiatric nursing staff’s clinical supervision? RCT of a meta-supervision intervention. J Adv Nurs. 2015;71(4):909–21. https://doi.org/10.1111/jan.12569

Markey K, Murphy L, O’Donnell C, Turner J, Doody O. Clinical supervision: a panacea for missed care. J Nurs Manag. 2020;28(8):2113–7. https://doi.org/10.1111/jonm.13001

White E, Winstanley J. A randomised controlled trial of clinical supervision: selected findings from a novel Australian attempt to establish the evidence base for causal relationships with quality of care and patient outcomes, as an informed contribution to mental health nursing practice development. J Res Nurs. 2010;15(2):151–67. https://doi.org/10.1177/1744987109357816

Dornan T, Conn R, Monaghan H, Kearney G, Gillespie H, Bennett D. Experience based learning (ExBL): clinical teaching for the twenty-first century. Med Teach. 2019;41(10):1098–105. https://doi.org/10.1080/0142159x.2019.1630730

Yuswanto TJA, Ernawati N, Rajiani I. The effectiveness of clinical supervision model based on proctor theory and interpersonal relationship cycle (PIR-C) toward nurses’ performance in improving the quality of nursing care documentation. Indian J Public Health. 2018;9(10):562. https://doi.org/10.5958/0976-5506.2018.01405.5

World Health Organisation (W.H.O.). Patient safety curriculum guide: multi professional edition. 2011. http://apps.who.int/iris/bitstream/10665/44641/1/9789241501958_eng.pdf Accessed on 03 Sep 2024.

Turner NM, van de Leemput AJ, Draaisma JM, Oosterveld P, ten Cate OT. Validity of the visual analogue scale as an instrument to measure self-efficacy in resuscitation skills. Med Educ. 2008;42(5):503–11. https://doi.org/10.1111/j.1365-2923.2007.02950.x

Sherer M, Maddux JE, Mercandante B, Prentice-Dunn S, Jacobs B, Rogers RW. The self-efficacy scale: construction and validation. Psychol Rep. 1982;51(2):663–71. https://doi.org/10.2466/pr0.1982.51.2.663

Haghayegh A, Ghasemi N, Neshatdoost H, Kajbaf M, Khanbani M. Psychometric properties of Diabetes Management Self-Efficacy Scale (DMSES). Iran J Endocrinol Metabolism. 2010;12(2):111–95.


Winstanley J. Manchester Clinical Supervision Scale. Nurs Stand. 2000;14(19):31–2. https://doi.org/10.7748/ns.14.19.31.s54

Khani A, Jaafarpour M, Jamshidbeigi Y. Translating and validating the Iranian version of the Manchester Clinical Supervision Scale (MCSS). J Clin Diagn Res. 2009;3(2):1402–7.

Mokhtari M, Khalifehzadeh-Esfahani A, Mohamadirizi S. The Effect of the Clinical Supervision Model on nurses’ performance in Atrial Fibrillation Care. Iran J Nurs Midwifery Res. 2022;27(3):216–20. https://doi.org/10.4103/ijnmr.IJNMR_203_20

Keshavarzi MH, Azandehi SK, Koohestani HR, Baradaran HR, Hayat AA, Ghorbani AA. Exploration the role of a clinical supervisor to improve the professional skills of medical students: a content analysis study. BMC Med Educ. 2022;22(1):399. https://doi.org/10.1186/s12909-022-03473-w

Lohani G, Sharma P. Effect of clinical supervision on self-awareness and self-efficacy of psychotherapists and counselors: a systematic review. Psychol Serv. 2023;20(2):291–9. https://doi.org/10.1037/ser0000693

Baylor J. Graduate Student Self-Efficacy during the Psychology Practicum Experience. 2019.

Rothwell C, Kehoe A, Farook SF, Illing J. Enablers and barriers to effective clinical supervision in the workplace: a rapid evidence review. BMJ Open. 2021;11(9):e052929. https://doi.org/10.1136/bmjopen-2021-052929

Terry D, Nguyen H, Perkins AJ, Peck B. Supervision in healthcare: a critical review of the role, function and capacity for training. 2020. https://doi.org/10.13189/ujph.2020.080101

Thyness C, Steinsbekk A, Grimstad H. Learning from clinical supervision–a qualitative study of undergraduate medical students’ experiences. Med Educ Online. 2022;27(1):2048514. https://doi.org/10.1080/10872981.2022.2048514

Bourke-Matas E, Maloney S, Jepson M, Bowles K. Evidence-based practice conversations with clinical supervisors during paramedic placements: an exploratory study of students’ perceptions. J Contemp Med Educ. 2020;10(4):123–30.

Uhm JY, Ko Y, Kim S. Implementation of an SBAR communication program based on experiential learning theory in a pediatric nursing practicum: a quasi-experimental study. Nurse Educ Today. 2019;80:78–84. https://doi.org/10.1016/j.nedt.2019.05.034

Dilworth S, Higgins I, Parker V, Kelly B, Turner J. Finding a way forward: a literature review on the current debates around clinical supervision. Contemp Nurse. 2013;45(1):22–32. https://doi.org/10.5172/conu.2013.45.1.22

Awiagah SK, Dordunu R, Hukporti N, Nukunu PE, Dzando G. Barriers and facilitators to Clinical Supervision in Ghana: a scoping review. SAGE Open Nurs. 2024;10:23779608241255263. https://doi.org/10.1177/23779608241255263

Snowdon DA, Sargent M, Williams CM, Maloney S, Caspers K, Taylor NF. Effective clinical supervision of allied health professionals: a mixed methods study. BMC Health Serv Res. 2019;20(1):2. https://doi.org/10.1186/s12913-019-4873-8

Burgess A, van Diggele C, Roberts C, Mellis C. Teaching clinical handover with ISBAR. BMC Med Educ. 2020;20(Suppl 2):459. https://doi.org/10.1186/s12909-020-02285-0

Ikbal RN, Arif Y. The influence of implementation of the clinical supervision team to monitor the performance of the employees in the hospital room of X Padang 2015. Malaysian J Nurs (MJN). 2019;10(3):9–15. https://doi.org/10.31674/mjn.2019.v10i03.002

Setiawan A, Keliat BA, Rustina Y, Prasetyo S. The effectiveness of educative, supportive, and administrative cycle (ESA-C) clinical supervision model in improving the performance of public hospital nurses. KnE Life Sci. 2019:41–56 https://doi.org/10.18502/kls.v4i15.5734

Kim JH, Lim JM, Kim EM. Patient handover education programme based on situated learning theory for nursing students in clinical practice. Int J Nurs Pract. 2022;28(1):e13005. https://doi.org/10.1111/ijn.13005

Elgin KW, Poston RD. Optimizing registered nurse Bedside Shift Report: innovative application of Simulation methods. J Nurses Prof Dev. 2019;35(2):E6. https://doi.org/10.1097/nnd.0000000000000526

Magon A, Conte G, Dellafiore F, Arrigoni C, Baroni I, Brera AS, et al. Nursing Profession Self-Efficacy Scale-Version 2: a stepwise validation with three cross-sectional data collections. Healthc (Basel). 2023;11(5). https://doi.org/10.3390/healthcare11050754

Acknowledgements

The researchers would like to express their gratitude to the Vice Chancellor for Research of Isfahan University of Medical Sciences for the financial support of this study (project number: 3402282) and to all participants.

Funding

This study was financed by the Vice Chancellor for Research of Isfahan University of Medical Sciences (project number: 3402282).

Author information

Authors and Affiliations

Student Research Committee, Isfahan University of Medical Sciences, Isfahan, Iran

Faezeh Gheisari

Nursing and Midwifery Care Research Center, Department of Adult Health Nursing, Faculty of Nursing and Midwifery, Isfahan University of Medical Sciences, Isfahan, Iran

Sedigheh Farzi

Department of Epidemiology and Biostatistics, School of Health, Isfahan University of Medical Sciences, Isfahan, Iran

Mohammad Javad Tarrahi

Tahere Momeni-Ghaleghasemi

Contributions

FGH, SF, and MJT designed the study. FGH, SF, and TMGh collected the study data. MJT, FGH, and SF performed data analysis and interpretation. FGH and SF prepared the manuscript, and all authors read and approved the final manuscript.

Corresponding author

Correspondence to Sedigheh Farzi.

Ethics declarations

Ethics approval and consent to participate

This study was approved by the Ethics Committee of Isfahan University of Medical Sciences (IR.MUI.NUREMA.REC.1402.100). All participants were informed about the study’s objectives and were assured that their personal information would remain confidential, that participation was voluntary, and that they could withdraw from the study at any time. All participants signed an informed consent form to participate in the study.

Consent for publication

Not applicable.

Competing interests

The authors declare no competing interests.

Additional information

Publisher’s note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License, which permits any non-commercial use, sharing, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if you modified the licensed material. You do not have permission under this licence to share adapted material derived from this article or parts of it. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by-nc-nd/4.0/ .

About this article

Cite this article

Gheisari, F., Farzi, S., Tarrahi, M.J. et al. The effect of clinical supervision model on nurses’ self-efficacy and communication skills in the handover process of medical and surgical wards: an experimental study. BMC Nurs 23, 672 (2024). https://doi.org/10.1186/s12912-024-02350-9

Received: 27 July 2024

Accepted: 16 September 2024

Published: 20 September 2024

DOI: https://doi.org/10.1186/s12912-024-02350-9

Keywords

  • Clinical supervision
  • Self-efficacy
