Research methods--quantitative, qualitative, and more: overview.

  • Quantitative Research
  • Qualitative Research
  • Data Science Methods (Machine Learning, AI, Big Data)
  • Text Mining and Computational Text Analysis
  • Evidence Synthesis/Systematic Reviews
  • Get Data, Get Help!

About Research Methods

This guide provides an overview of research methods, how to choose and use them, and the support and resources available at UC Berkeley.

As Patten and Newhart note in the book Understanding Research Methods, "Research methods are the building blocks of the scientific enterprise. They are the 'how' for building systematic knowledge. The accumulation of knowledge through research is by its nature a collective endeavor. Each well-designed study provides evidence that may support, amend, refute, or deepen the understanding of existing knowledge...Decisions are important throughout the practice of research and are designed to help researchers collect evidence that includes the full spectrum of the phenomenon under study, to maintain logical rules, and to mitigate or account for possible sources of bias. In many ways, learning research methods is learning how to see and make these decisions."

The choice of methods varies by discipline, by the kind of phenomenon being studied and the data being used to study it, by the technology available, and more. This guide is an introduction; if you don't see what you need here, contact your subject librarian and/or check whether there's a library research guide that will answer your question.

Suggestions for changes and additions to this guide are welcome! 

START HERE: SAGE Research Methods

Without question, the most comprehensive resource available from the Library is SAGE Research Methods. An online guide to this one-stop collection is also available, and some helpful links are below:

  • SAGE Research Methods
  • Little Green Books (Quantitative Methods)
  • Little Blue Books (Qualitative Methods)
  • Dictionaries and Encyclopedias
  • Case studies of real research projects
  • Sample datasets for hands-on practice
  • Streaming video: see methods come to life
  • Methodspace: a community for researchers
  • SAGE Research Methods Course Mapping

Library Data Services at UC Berkeley

Library Data Services Program and Digital Scholarship Services

The Library Data Services Program (LDSP) offers a variety of services and tools. Check out its pages on each of the following topics: discovering data, managing data, collecting data, GIS data, text data mining, publishing data, digital scholarship, open science, and the Research Data Management Program.

Be sure also to check out the visual guide to where to seek assistance on campus with any research question you may have!

Library GIS Services

Other Data Services at Berkeley

  • D-Lab: supports Berkeley faculty, staff, and graduate students with research in data-intensive social science, including a wide range of training and workshop offerings.
  • Dryad: a simple self-service tool for researchers to use in publishing their datasets; it provides tools for the effective publication of and access to research data.
  • Geospatial Innovation Facility (GIF): provides leadership and training across a broad array of integrated mapping technologies on campus.
  • Research Data Management: a UC Berkeley guide and consulting service for research data management issues.

General Research Methods Resources

Here are some general resources for assistance:

  • Assistance from ICPSR (must create an account to access): Getting Help with Data, and Resources for Students
  • Wiley Stats Ref for background information on statistics topics
  • Survey Documentation and Analysis (SDA): a program for easy web-based analysis of survey data

Consultants

  • D-Lab/Data Science Discovery Consultants: request help with your research project from peer consultants.
  • Research Data Management (RDM) consulting: meet with RDM consultants before designing the data security, storage, and sharing aspects of your qualitative project.
  • Statistics Department Consulting Services: a service in which advanced graduate students, under faculty supervision, are available to consult during specified hours in the Fall and Spring semesters.

Related Resources

  • IRB / CPHS: qualitative research projects with human subjects often require that you go through an ethics review.
  • OURS (Office of Undergraduate Research and Scholarships): supports undergraduates who want to embark on research projects and assistantships. In particular, check out their "Getting Started in Research" workshops.
  • Sponsored Projects: works with researchers applying for major external grants.
  • Last Updated: Sep 6, 2024 8:59 PM
  • URL: https://guides.lib.berkeley.edu/researchmethods

Research Methods | Definition, Types, Examples

Research methods are specific procedures for collecting and analysing data. Developing your research methods is an integral part of your research design. When planning your methods, there are two key decisions you will make.

First, decide how you will collect data. Your methods depend on what type of data you need to answer your research question:

  • Qualitative vs quantitative: Will your data take the form of words or numbers?
  • Primary vs secondary: Will you collect original data yourself, or will you use data that have already been collected by someone else?
  • Descriptive vs experimental: Will you take measurements of something as it is, or will you perform an experiment?

Second, decide how you will analyse the data.

  • For quantitative data, you can use statistical analysis methods to test relationships between variables.
  • For qualitative data, you can use methods such as thematic analysis to interpret patterns and meanings in the data.

Table of contents

  • Methods for collecting data
  • Examples of data collection methods
  • Methods for analysing data
  • Examples of data analysis methods
  • Frequently asked questions about methodology

Data are the information that you collect for the purposes of answering your research question. The type of data you need depends on the aims of your research.

Qualitative vs quantitative data

Your choice of qualitative or quantitative data collection depends on the type of knowledge you want to develop.

For questions about ideas, experiences and meanings, or to study something that can’t be described numerically, collect qualitative data.

If you want to develop a more mechanistic understanding of a topic, or your research involves hypothesis testing, collect quantitative data.


You can also take a mixed methods approach, where you use both qualitative and quantitative research methods.

Primary vs secondary data

Primary data are any original information that you collect for the purposes of answering your research question (e.g. through surveys, observations and experiments). Secondary data are information that has already been collected by other researchers (e.g. in a government census or previous scientific studies).

If you are exploring a novel research question, you’ll probably need to collect primary data. But if you want to synthesise existing knowledge, analyse historical trends, or identify patterns on a large scale, secondary data might be a better choice.


Descriptive vs experimental data

In descriptive research, you collect data about your study subject without intervening. The validity of your research will depend on your sampling method.

In experimental research, you systematically intervene in a process and measure the outcome. The validity of your research will depend on your experimental design.

To conduct an experiment, you need to be able to vary your independent variable, precisely measure your dependent variable, and control for confounding variables. If it’s practically and ethically possible, this method is the best choice for answering questions about cause and effect.



Research methods for collecting data

  • Experiment (primary, quantitative): to test cause-and-effect relationships.
  • Survey (primary, quantitative): to understand the general characteristics of a population.
  • Interview/focus group (primary, qualitative): to gain more in-depth understanding of a topic.
  • Observation (primary, either): to understand how something occurs in its natural setting.
  • Literature review (secondary, either): to situate your research in an existing body of work, or to evaluate trends within a research topic.
  • Case study (either, either): to gain an in-depth understanding of a specific group or context, or when you don’t have the resources for a large study.

Your data analysis methods will depend on the type of data you collect and how you prepare them for analysis.

Data can often be analysed both quantitatively and qualitatively. For example, survey responses could be analysed qualitatively by studying the meanings of responses or quantitatively by studying the frequencies of responses.
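
To make the quantitative side of this concrete, here is a minimal Python sketch (not part of the original article) that tallies how often each answer appears in a set of invented survey responses; a qualitative analysis of the same responses would instead involve reading and coding their meanings.

    from collections import Counter

    # Hypothetical closed-ended survey responses (invented for illustration).
    responses = ["agree", "disagree", "agree", "neutral", "agree", "disagree"]

    frequencies = Counter(responses)      # how often each answer appears
    total = sum(frequencies.values())

    for answer, count in frequencies.most_common():
        print(f"{answer}: {count} ({count / total:.0%})")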

Qualitative analysis methods

Qualitative analysis is used to understand words, ideas, and experiences. You can use it to interpret data that were collected:

  • From open-ended survey and interview questions, literature reviews, case studies, and other sources that use text rather than numbers.
  • Using non-probability sampling methods.

Qualitative analysis tends to be quite flexible and relies on the researcher’s judgement, so you have to reflect carefully on your choices and assumptions.

Quantitative analysis methods

Quantitative analysis uses numbers and statistics to understand frequencies, averages and correlations (in descriptive studies) or cause-and-effect relationships (in experiments).

You can use quantitative analysis to interpret data that were collected either:

  • During an experiment.
  • Using probability sampling methods.

Because the data are collected and analysed in a statistically valid way, the results of quantitative analysis can be easily standardised and shared among researchers.

Research methods for analysing data

  • Statistical analysis (quantitative): to analyse data collected in a statistically valid manner (e.g. from experiments, surveys, and observations).
  • Meta-analysis (quantitative): to statistically analyse the results of a large collection of studies. Can only be applied to studies that collected data in a statistically valid manner.
  • Thematic analysis (qualitative): to analyse data collected from interviews, focus groups or textual sources, and to understand general themes in the data and how they are communicated.
  • Content analysis (either): to analyse large volumes of textual or visual data collected from surveys, literature reviews, or other sources. Can be quantitative (i.e. frequencies of words) or qualitative (i.e. meanings of words).

Quantitative research deals with numbers and statistics, while qualitative research deals with words and meanings.

Quantitative methods allow you to test a hypothesis by systematically collecting and analysing data, while qualitative methods allow you to explore ideas and experiences in depth.

In mixed methods research, you use both qualitative and quantitative data collection and analysis methods to answer your research question.

A sample is a subset of individuals from a larger population. Sampling means selecting the group that you will actually collect data from in your research.

For example, if you are researching the opinions of students in your university, you could survey a sample of 100 students.

Statistical sampling allows you to test a hypothesis about the characteristics of a population. There are various sampling methods you can use to ensure that your sample is representative of the population as a whole.
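
As a minimal sketch of that example (the roster size and IDs below are invented, not from the article), drawing a simple random sample of 100 students in Python could look like this:

    import random

    # Hypothetical roster of enrolled students, represented by ID numbers.
    population = list(range(1, 20001))

    random.seed(42)                            # fixed seed so the draw is reproducible
    sample = random.sample(population, k=100)  # simple random sample without replacement

    print(len(sample), sample[:5])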

The research methods you use depend on the type of data you need to answer your research question.

  • If you want to measure something or test a hypothesis, use quantitative methods. If you want to explore ideas, thoughts, and meanings, use qualitative methods.
  • If you want to analyse a large amount of readily available data, use secondary data. If you want data specific to your purposes with control over how they are generated, collect primary data.
  • If you want to establish cause-and-effect relationships between variables, use experimental methods. If you want to understand the characteristics of a research subject, use descriptive methods.

Methodology refers to the overarching strategy and rationale of your research project. It involves studying the methods used in your field and the theories or principles behind them, in order to develop an approach that matches your objectives.

Methods are the specific tools and procedures you use to collect and analyse data (e.g. experiments, surveys, and statistical tests).

In shorter scientific papers, where the aim is to report the findings of a specific study, you might simply describe what you did in a methods section.

In a longer or more complex research project, such as a thesis or dissertation, you will probably include a methodology section, where you explain your approach to answering the research questions and cite relevant sources to support your choice of methods.


Research Methods: What are research methods?

  • What are research methods?
  • Searching specific databases

What are research methods?

Research methods are the strategies, processes or techniques utilized in the collection of data or evidence for analysis in order to uncover new information or create better understanding of a topic.

There are different types of research methods which use different tools for data collection.

Types of research

  • Qualitative Research
  • Quantitative Research
  • Mixed Methods Research

Qualitative Research gathers data about lived experiences, emotions or behaviours, and the meanings individuals attach to them. It assists in enabling researchers to gain a better understanding of complex concepts, social interactions or cultural phenomena. This type of research is useful in the exploration of how or why things have occurred, interpreting events and describing actions.

Quantitative Research gathers numerical data which can be ranked, measured or categorised through statistical analysis. It assists with uncovering patterns or relationships, and for making generalisations. This type of research is useful for finding out how many, how much, how often, or to what extent.

Mixed Methods Research integrates both Qualitative and Quantitative Research. It provides a holistic approach, combining and analysing the statistical data with deeper contextualised insights. Using Mixed Methods also enables triangulation, or verification, of the data from two or more sources.

Finding Mixed Methods research in the Databases 

“mixed model*” OR “mixed design*” OR “multiple method*” OR multimethod* OR triangulat*

Data collection tools

Techniques or tools used for gathering research data include:

Qualitative techniques or tools:

  • Interviews: these can be structured, semi-structured or unstructured in-depth sessions with the researcher and a participant.
  • Focus groups: several participants discussing a particular topic or a set of questions. Researchers can be facilitators or observers.
  • Observations: on-site, in-context or role-play options.
  • Document analysis: interrogation of correspondence (letters, diaries, emails etc.) or reports.
  • Oral histories: remembrances or memories of experiences told to the researcher.

Quantitative techniques or tools:

  • Surveys or questionnaires: ask the same questions of large numbers of participants, or use Likert scales which measure opinions as numerical data.
  • Observation: can either involve counting the number of times a specific phenomenon occurs, or the coding of observational data in order to translate it into numbers.
  • Document screening: sourcing numerical data from financial reports or counting word occurrences.
  • Experiments: testing hypotheses in laboratories, testing cause-and-effect relationships through field experiments, or via quasi- or natural experiments.

SAGE research methods

  • SAGE Research Methods Online: a research methods tool to help researchers gather full-text resources, design research projects, understand a particular method, and write up their research. Includes access to collections of video, business cases and eBooks.


  • Last Updated: Aug 19, 2024 3:39 PM
  • URL: https://libguides.newcastle.edu.au/researchmethods

Choosing the Right Research Methodology: A Guide for Researchers


Choosing an optimal research methodology is crucial for the success of any research project. The methodology you select will determine the type of data you collect, how you collect it, and how you analyse it. Understanding the different types of research methods available along with their strengths and weaknesses, is thus imperative to make an informed decision.

Understanding different research methods:

There are several research methods available depending on the type of study you are conducting, i.e., whether it is laboratory-based, clinical, epidemiological, or survey-based. Some common methodologies include qualitative research, quantitative research, experimental research, survey-based research, and action research. Each method can be chosen and adapted, depending on the type of research hypotheses and objectives.

Qualitative vs quantitative research:

When deciding on a research methodology, one of the key factors to consider is whether your research will be qualitative or quantitative. Qualitative research is used to understand people’s experiences, concepts, thoughts, or behaviours. Quantitative research, by contrast, deals with numbers, graphs, and charts, and is used to test or confirm hypotheses, assumptions, and theories.

Qualitative research methodology:

Qualitative research is often used to examine issues that are not well understood, and to gather additional insights on these topics. Qualitative research methods include open-ended survey questions, observations of behaviours described through words, and reviews of literature that has explored similar theories and ideas. These methods are used to understand how language is used in real-world situations, identify common themes or overarching ideas, and describe and interpret various texts. Data analysis for qualitative research typically includes discourse analysis, thematic analysis, and textual analysis. 

Quantitative research methodology:

The goal of quantitative research is to test hypotheses, confirm assumptions and theories, and determine cause-and-effect relationships. Quantitative research methods include experiments, close-ended survey questions, and countable and numbered observations. Data analysis for quantitative research relies heavily on statistical methods.

Analysing qualitative vs quantitative data:

The methods used for data analysis also differ for qualitative and quantitative research. As mentioned earlier, quantitative data is generally analysed using statistical methods and does not leave much room for speculation. It is more structured and follows a predetermined plan. In quantitative research, the researcher starts with a hypothesis and uses statistical methods to test it. In contrast, methods used for qualitative data analysis identify patterns and themes within the data, rather than providing statistical measures of the data. It is an iterative process, where the researcher goes back and forth, trying to gauge the larger implications of the data through different perspectives and revising the analysis if required.

When to use qualitative vs quantitative research:

The choice between qualitative and quantitative research will depend on the gap that the research project aims to address, and specific objectives of the study. If the goal is to establish facts about a subject or topic, quantitative research is an appropriate choice. However, if the goal is to understand people’s experiences or perspectives, qualitative research may be more suitable. 

Conclusion:

In conclusion, an understanding of the different research methods available, their applicability, advantages, and disadvantages is essential for making an informed decision on the best methodology for your project. If you need any additional guidance on which research methodology to opt for, you can head over to Elsevier Author Services (EAS). EAS experts will guide you throughout the process and help you choose the perfect methodology for your research goals.


ResearchMethodology.org


What are Research Methods?

Imagine you’re starting on a journey of discovery, and research methods are your compass, map, and tools. These methods guide us in exploring the vast landscape of knowledge, ensuring our journey is structured, reliable, and fruitful.


Research Methods are systematic strategies, steps, and tools that researchers use to gather, analyze, and interpret data about a particular topic. It’s like cooking a new recipe; you need the right ingredients (data), a good method (research design), and the proper tools (instruments like surveys or experiments) to create a delightful dish (knowledge).

Types of Research Methods

Qualitative Research

This is akin to painting a portrait. It focuses on understanding concepts, thoughts, and experiences through detailed, descriptive data. Imagine sitting down with someone and listening to their story to grasp the depth of their experiences. Tools for this might include interviews, observations, and textual analysis.

Quantitative Research

Now, imagine yourself counting stars in the sky. This method deals with numbers and statistical analysis. It seeks to quantify the problem by generating numerical data or data that can be transformed into usable statistics. Surveys with multiple-choice questions or experiments where you measure and compare are typical tools here.

Mixed Methods

Sometimes, a single perspective isn’t enough. Mixed methods blend the colors of both qualitative and quantitative research, offering a more comprehensive picture. It’s like using both a microscope and a telescope; you get the detail and the big picture.

Steps in the Research Process

Identifying the Problem : Every journey begins with recognizing where you want to go. What’s the question you’re burning to answer? This step involves defining the scope and purpose of your research.

Literature Review : Before you set out, you need to map the terrain by exploring what others have discovered before you. This involves reading and summarizing existing research on your topic.

Designing the Study : Here’s where you plan your route. Will you conduct interviews? Send out surveys? Observe behaviors? This step involves deciding on your research method, participants, and tools.

Collecting Data : Time to hit the road and gather your data. This is the hands-on part of your research, where you implement your chosen methods to collect information.

Analyzing Data : With your treasures in hand, you now sift through your findings, looking for patterns, themes, or statistical relationships. This step often involves software for qualitative or quantitative analysis.

Interpreting Results : What have you discovered? This stage is about making sense of your data, connecting the dots, and understanding what your findings mean in the context of your research question.

Reporting and Sharing Findings : The final step is to share your journey’s story. This could be a research paper, a presentation, or any format that communicates your discoveries to others.

Ethics in Research

Imagine you’re a guest in someone’s home; you must be respectful and considerate. Similarly, ethical considerations are paramount in research. This means ensuring confidentiality, obtaining informed consent, and treating all subjects (people, animals, the environment) with respect and dignity.

Research methods are the compass, map, and tools that guide us through the terrain of knowledge. They enable us to ask important questions, systematically gather and analyze data, and contribute valuable insights to our understanding of the world. As you start on your research journey, embrace the adventure, respect the process, and look forward to the discoveries that await you.


What is Research Methodology? Definition, Types, and Examples


Research methodology 1,2 is a structured and scientific approach used to collect, analyze, and interpret quantitative or qualitative data to answer research questions or test hypotheses. A research methodology is like a plan for carrying out research and helps keep researchers on track by limiting the scope of the research. Several aspects must be considered before selecting an appropriate research methodology, such as research limitations and ethical concerns that may affect your research.

The research methodology section in a scientific paper describes the different methodological choices made, such as the data collection and analysis methods, and why these choices were selected. The reasons should explain why the methods chosen are the most appropriate to answer the research question. A good research methodology also helps ensure the reliability and validity of the research findings. There are three types of research methodology—quantitative, qualitative, and mixed-method, which can be chosen based on the research objectives.

What is research methodology?

A research methodology describes the techniques and procedures used to identify and analyze information regarding a specific research topic. It is a process by which researchers design their study so that they can achieve their objectives using the selected research instruments. It includes all the important aspects of research, including research design, data collection methods, data analysis methods, and the overall framework within which the research is conducted. While these points can help you understand what is research methodology, you also need to know why it is important to pick the right methodology.


Having a good research methodology in place has the following advantages: 3

  • Helps other researchers who may want to replicate your research; the explanations will be of benefit to them.
  • You can easily answer any questions about your research if they arise at a later stage.
  • A research methodology provides a framework and guidelines for researchers to clearly define research questions, hypotheses, and objectives.
  • It helps researchers identify the most appropriate research design, sampling technique, and data collection and analysis methods.
  • A sound research methodology helps researchers ensure that their findings are valid and reliable and free from biases and errors.
  • It also helps ensure that ethical guidelines are followed while conducting research.
  • A good research methodology helps researchers in planning their research efficiently, by ensuring optimum usage of their time and resources.


Types of research methodology

There are three types of research methodology based on the type of research and the data required. 1

  • Quantitative research methodology focuses on measuring and testing numerical data. This approach is good for reaching a large number of people in a short amount of time. This type of research helps in testing the causal relationships between variables, making predictions, and generalizing results to wider populations.
  • Qualitative research methodology examines the opinions, behaviors, and experiences of people. It collects and analyzes words and textual data. This research methodology requires fewer participants but is still more time consuming because the time spent per participant is quite large. This method is used in exploratory research where the research problem being investigated is not clearly defined.
  • Mixed-method research methodology uses the characteristics of both quantitative and qualitative research methodologies in the same study. This method allows researchers to validate their findings, verify if the results observed using both methods are complementary, and explain any unexpected results obtained from one method by using the other method.

What are the types of sampling designs in research methodology?

Sampling 4 is an important part of a research methodology and involves selecting a representative sample of the population to conduct the study, making statistical inferences about them, and estimating the characteristics of the whole population based on these inferences. There are two types of sampling designs in research methodology—probability and nonprobability.

  • Probability sampling

In this type of sampling design, a sample is chosen from a larger population using some form of random selection, that is, every member of the population has an equal chance of being selected. The different types of probability sampling are listed below (a short code sketch after the two lists illustrates two of these designs):

  • Systematic —sample members are chosen at regular intervals. It requires selecting a random starting point and a fixed sampling interval that is repeated through the list; because the selection pattern is predefined, it is the least time consuming.
  • Stratified —researchers divide the population into smaller groups (strata) that don’t overlap but together represent the entire population, and then draw a sample from each group separately.
  • Cluster —the population is divided into clusters based on parameters like age, sex, location, etc., and whole clusters are then sampled.

  • Nonprobability sampling

In this type of sampling design, participants are selected using non-random criteria, so not every member of the population has a chance of being included. The different types of nonprobability sampling are:

  • Convenience —selects participants who are most easily accessible to researchers due to geographical proximity, availability at a particular time, etc.
  • Purposive —participants are selected at the researcher’s discretion. Researchers consider the purpose of the study and their understanding of the target audience.
  • Snowball —already selected participants use their social networks to refer the researcher to other potential participants.
  • Quota —while designing the study, the researchers decide how many people with which characteristics to include as participants. The characteristics help in choosing people most likely to provide insights into the subject.
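
The following is a minimal, hypothetical Python sketch (not from the source article) of two of the probability designs described above, systematic and stratified sampling, applied to an invented population list:

    import random
    from collections import defaultdict

    random.seed(0)  # reproducible draws for the example

    # Hypothetical population: 1,000 people, each tagged with a stratum (class year).
    population = [{"id": i, "year": random.choice(["1st", "2nd", "3rd", "4th"])}
                  for i in range(1000)]

    def systematic_sample(pop, sample_size):
        """Pick every k-th member after a random starting point."""
        k = len(pop) // sample_size      # sampling interval
        start = random.randrange(k)      # random starting point within the first interval
        return pop[start::k][:sample_size]

    def stratified_sample(pop, per_stratum):
        """Split into non-overlapping strata, then draw randomly from each stratum."""
        strata = defaultdict(list)
        for person in pop:
            strata[person["year"]].append(person)
        sample = []
        for members in strata.values():
            sample.extend(random.sample(members, per_stratum))
        return sample

    print(len(systematic_sample(population, 100)))  # 100 members, evenly spaced
    print(len(stratified_sample(population, 25)))   # 25 per class year, 100 in total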

What are data collection methods?

During research, data are collected using various methods depending on the research methodology being followed and the research methods being undertaken. Both qualitative and quantitative research have different data collection methods, as listed below.

Qualitative research 5

  • One-on-one interviews: Helps the interviewers understand a respondent’s subjective opinion and experience pertaining to a specific topic or event
  • Document study/literature review/record keeping: Researchers’ review of already existing written materials such as archives, annual reports, research articles, guidelines, policy documents, etc.
  • Focus groups: Constructive discussions that usually include a small sample of about 6-10 people and a moderator, to understand the participants’ opinion on a given topic.
  • Qualitative observation: Researchers collect data using their five senses (sight, smell, touch, taste, and hearing).

Quantitative research 6

  • Sampling: The most common type is probability sampling.
  • Interviews: Commonly telephonic or done in-person.
  • Observations: Structured observations are most commonly used in quantitative research. In this method, researchers make observations about specific behaviors of individuals in a structured setting.
  • Document review: Reviewing existing research or documents to collect evidence for supporting the research.
  • Surveys and questionnaires: Can be administered both online and offline depending on the requirement and sample size.


What are data analysis methods?

The data collected using the various methods for qualitative and quantitative research need to be analyzed to generate meaningful conclusions. These data analysis methods 7 also differ between quantitative and qualitative research.

Quantitative research involves a deductive method for data analysis where hypotheses are developed at the beginning of the research and precise measurement is required. The methods include statistical analysis applications to analyze numerical data and are grouped into two categories—descriptive and inferential.

Descriptive analysis is used to describe the basic features of different types of data and to present them in a way that makes patterns meaningful. The different types of descriptive analysis methods are listed below (a short code sketch follows the list):

  • Measures of frequency (count, percent, frequency)
  • Measures of central tendency (mean, median, mode)
  • Measures of dispersion or variation (range, variance, standard deviation)
  • Measure of position (percentile ranks, quartile ranks)
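
As a rough illustration (the numbers are invented), the Python sketch below computes each of these descriptive measures for a small dataset using the standard library's statistics module:

    import statistics

    scores = [4, 8, 15, 16, 23, 42, 8, 15]  # invented sample data

    # Measures of frequency
    counts = {value: scores.count(value) for value in set(scores)}

    # Measures of central tendency
    mean = statistics.mean(scores)
    median = statistics.median(scores)
    mode = statistics.mode(scores)

    # Measures of dispersion or variation
    value_range = max(scores) - min(scores)
    variance = statistics.variance(scores)   # sample variance
    std_dev = statistics.stdev(scores)       # sample standard deviation

    # Measure of position: quartiles (25th, 50th, 75th percentiles)
    q1, q2, q3 = statistics.quantiles(scores, n=4)

    print(counts, mean, median, mode, value_range, variance, std_dev, (q1, q2, q3))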

Inferential analysis is used to make inferences about a larger population based on the analysis of data collected from a smaller sample of that population. It is used to study the relationships between different variables. Some commonly used inferential data analysis methods are listed below (a brief code sketch follows the list):

  • Correlation: To understand the relationship between two or more variables.
  • Cross-tabulation: Analyze the relationship between multiple variables.
  • Regression analysis: Study the impact of independent variables on the dependent variable.
  • Frequency tables: To understand the frequency of data.
  • Analysis of variance (ANOVA): To test whether the means of two or more groups differ significantly in an experiment.
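
A brief sketch of a few of these inferential methods, using SciPy on invented values (the variables and data are assumptions for illustration, not results from any study):

    from scipy import stats

    # Invented measurements for illustration only.
    hours_studied = [2, 4, 5, 7, 8, 10, 12]
    exam_scores = [55, 60, 62, 70, 74, 80, 88]

    # Correlation: strength and direction of the relationship between two variables.
    r, r_pvalue = stats.pearsonr(hours_studied, exam_scores)

    # Regression analysis: impact of the independent variable on the dependent variable.
    reg = stats.linregress(hours_studied, exam_scores)  # slope, intercept, rvalue, pvalue, stderr

    # Analysis of variance: do the means of two or more groups differ?
    group_a, group_b, group_c = [55, 60, 62], [70, 74, 71], [80, 88, 85]
    f_stat, anova_p = stats.f_oneway(group_a, group_b, group_c)

    print(f"r={r:.2f} (p={r_pvalue:.3f}), slope={reg.slope:.2f}, F={f_stat:.1f} (p={anova_p:.4f})")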

Qualitative research involves an inductive method for data analysis, in which hypotheses are developed after data collection. The methods include the following (a rough code sketch of content analysis follows the list):

  • Content analysis: For analyzing documented information from text and images by determining the presence of certain words or concepts in texts.
  • Narrative analysis: For analyzing content obtained from sources such as interviews, field observations, and surveys. The stories and opinions shared by people are used to answer research questions.
  • Discourse analysis: For analyzing interactions with people considering the social context, that is, the lifestyle and environment, under which the interaction occurs.
  • Grounded theory: Involves hypothesis creation by data collection and analysis to explain why a phenomenon occurred.
  • Thematic analysis: To identify important themes or patterns in data and use these to address an issue.
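
As a rough sketch of the counting side of content analysis (the documents and concept list below are invented for illustration), a researcher might tally how often chosen concepts appear across a set of texts:

    import re
    from collections import Counter

    # Hypothetical interview excerpts (invented text).
    documents = [
        "Participants described feeling anxious about online privacy and data sharing.",
        "Several respondents said privacy concerns shaped their sharing behaviour.",
    ]

    concepts = ["privacy", "sharing", "anxious"]  # concepts chosen by the researcher

    counts = Counter()
    for doc in documents:
        tokens = re.findall(r"[a-z']+", doc.lower())  # simple lowercase tokenisation
        counts.update(token for token in tokens if token in concepts)

    print(counts)  # e.g. Counter({'privacy': 2, 'sharing': 2, 'anxious': 1})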

How to choose a research methodology?

Here are some important factors to consider when choosing a research methodology: 8

  • Research objectives, aims, and questions —these would help structure the research design.
  • Review existing literature to identify any gaps in knowledge.
  • Check the statistical requirements —if data-driven or statistical results are needed then quantitative research is the best. If the research questions can be answered based on people’s opinions and perceptions, then qualitative research is most suitable.
  • Sample size —sample size can often determine the feasibility of a research methodology. For a large sample, less effort- and time-intensive methods are appropriate.
  • Constraints —constraints of time, geography, and resources can help define the appropriate methodology.


How to write a research methodology?

A research methodology should include the following components: 3,9

  • Research design —should be selected based on the research question and the data required. Common research designs include experimental, quasi-experimental, correlational, descriptive, and exploratory.
  • Research method —this can be quantitative, qualitative, or mixed-method.
  • Reason for selecting a specific methodology —explain why this methodology is the most suitable to answer your research problem.
  • Research instruments —explain the research instruments you plan to use, mainly referring to the data collection methods such as interviews, surveys, etc. Here as well, a reason should be mentioned for selecting the particular instrument.
  • Sampling —this involves selecting a representative subset of the population being studied.
  • Data collection —involves gathering data using several data collection methods, such as surveys, interviews, etc.
  • Data analysis —describe the data analysis methods you will use once you’ve collected the data.
  • Research limitations —mention any limitations you foresee while conducting your research.
  • Validity and reliability —validity helps identify the accuracy and truthfulness of the findings; reliability refers to the consistency and stability of the results over time and across different conditions.
  • Ethical considerations —research should be conducted ethically. The considerations include obtaining consent from participants, maintaining confidentiality, and addressing conflicts of interest.

Streamline Your Research Paper Writing Process with Paperpal  

The methods section is a critical part of a research paper; other researchers use it to understand your findings and to replicate your work when pursuing their own research. However, it is usually also the most difficult section to write. This is where Paperpal can help you overcome writer’s block and create the first draft in minutes with Paperpal Copilot, its secure generative AI feature suite.

With Paperpal you can get research advice, write and refine your work, rephrase and verify the writing, and ensure submission readiness, all in one place. Here’s how you can use Paperpal to develop the first draft of your methods section.  

  • Generate an outline: Input some details about your research to instantly generate an outline for your methods section.
  • Develop the section: Use the outline and suggested sentence templates to expand your ideas and develop the first draft.
  • Paraphrase and trim: Get clear, concise academic text with paraphrasing that conveys your work effectively and word reduction to fix redundancies.
  • Choose the right words: Enhance text by choosing contextual synonyms based on how the words have been used in previously published work.
  • Check and verify text: Make sure the generated text showcases your methods correctly, has all the right citations, and is original and authentic.

You can repeat this process to develop each section of your research manuscript, including the title, abstract and keywords. Ready to write your research papers faster, better, and without the stress? Sign up for Paperpal and start writing today!

Frequently Asked Questions

Q1. What are the key components of research methodology?

A1. A good research methodology has the following key components:

  • Research design
  • Data collection procedures
  • Data analysis methods
  • Ethical considerations

Q2. Why is ethical consideration important in research methodology?

A2. Ethical consideration is important in research methodology to assure readers of the reliability and validity of the study. Researchers must clearly mention the ethical norms and standards followed during the conduct of the research and also mention whether the research has been cleared by an institutional review board. The following 10 points are the important principles related to ethical considerations: 10

  • Participants should not be subjected to harm.
  • Respect for the dignity of participants should be prioritized.
  • Full consent should be obtained from participants before the study.
  • Participants’ privacy should be ensured.
  • Confidentiality of the research data should be ensured.
  • Anonymity of individuals and organizations participating in the research should be maintained.
  • The aims and objectives of the research should not be exaggerated.
  • Affiliations, sources of funding, and any possible conflicts of interest should be declared.
  • Communication in relation to the research should be honest and transparent.
  • Misleading information and biased representation of primary data findings should be avoided.


Q3. What is the difference between methodology and method?

A3. Research methodology is different from a research method, although both terms are often confused. Research methods are the tools used to gather data, while the research methodology provides a framework for how research is planned, conducted, and analyzed. The latter guides researchers in making decisions about the most appropriate methods for their research. Research methods refer to the specific techniques, procedures, and tools used by researchers to collect, analyze, and interpret data, for instance surveys, questionnaires, interviews, etc.

Research methodology is, thus, an integral part of a research study. It helps ensure that you stay on track to meet your research objectives and answer your research questions using the most appropriate data collection and analysis tools based on your research design.


References

  1. Research methodologies. Pfeiffer Library website. Accessed August 15, 2023. https://library.tiffin.edu/researchmethodologies/whatareresearchmethodologies
  2. Types of research methodology. Eduvoice website. Accessed August 16, 2023. https://eduvoice.in/types-research-methodology/
  3. The basics of research methodology: A key to quality research. Voxco. Accessed August 16, 2023. https://www.voxco.com/blog/what-is-research-methodology/
  4. Sampling methods: Types with examples. QuestionPro website. Accessed August 16, 2023. https://www.questionpro.com/blog/types-of-sampling-for-social-research/
  5. What is qualitative research? Methods, types, approaches, examples. Researcher.Life blog. Accessed August 15, 2023. https://researcher.life/blog/article/what-is-qualitative-research-methods-types-examples/
  6. What is quantitative research? Definition, methods, types, and examples. Researcher.Life blog. Accessed August 15, 2023. https://researcher.life/blog/article/what-is-quantitative-research-types-and-examples/
  7. Data analysis in research: Types & methods. QuestionPro website. Accessed August 16, 2023. https://www.questionpro.com/blog/data-analysis-in-research/#Data_analysis_in_qualitative_research
  8. Factors to consider while choosing the right research methodology. PhD Monster website. Accessed August 17, 2023. https://www.phdmonster.com/factors-to-consider-while-choosing-the-right-research-methodology/
  9. What is research methodology? Research and writing guides. Accessed August 14, 2023. https://paperpile.com/g/what-is-research-methodology/
  10. Ethical considerations. Business research methodology website. Accessed August 17, 2023. https://research-methodology.net/research-methodology/ethical-considerations/

Paperpal is a comprehensive AI writing toolkit that helps students and researchers achieve 2x the writing in half the time. It leverages 21+ years of STM experience and insights from millions of research articles to provide in-depth academic writing, language editing, and submission readiness support to help you write better, faster.  

Get accurate academic translations, rewriting support, grammar checks, vocabulary suggestions, and generative AI assistance that delivers human precision at machine speed. Try for free or upgrade to Paperpal Prime starting at US$19 a month to access premium features, including consistency, plagiarism, and 30+ submission readiness checks to help you succeed.  

Experience the future of academic writing – Sign up to Paperpal and start writing for free!  



15 Types of Research Methods

Chris Drew (PhD)

Dr. Chris Drew is the founder of the Helpful Professor. He holds a PhD in education and has published over 20 articles in scholarly journals. He is the former editor of the Journal of Learning Development in Higher Education.



Research methods refer to the strategies, tools, and techniques used to gather and analyze data in a structured way in order to answer a research question or investigate a hypothesis (Hammond & Wellington, 2020).

Generally, we place research methods into two categories: quantitative and qualitative. Each has its own strengths and weaknesses, which we can summarize as:

  • Quantitative research can achieve generalizability through scrupulous statistical analysis applied to large sample sizes.
  • Qualitative research achieves deep, detailed, and nuanced accounts of specific case studies, which are not generalizable.

Some researchers, with the aim of making the most of both quantitative and qualitative research, employ mixed methods, whereby they will apply both types of research methods in the one study, such as by conducting a statistical survey alongside in-depth interviews to add context to the quantitative findings.

Below, I’ll outline 15 common research methods, and include pros, cons, and examples of each.

Types of Research Methods

Research methods can be broadly categorized into two types: quantitative and qualitative.

  • Quantitative methods involve systematic empirical investigation of observable phenomena via statistical, mathematical, or computational techniques, providing an in-depth understanding of a specific concept or phenomenon (Schweigert, 2021). The strengths of this approach include its ability to produce reliable results that can be generalized to a larger population, although it can lack depth and detail.
  • Qualitative methods encompass techniques that are designed to provide a deep understanding of a complex issue, often in a specific context, through collection of non-numerical data (Tracy, 2019). This approach often provides rich, detailed insights but can be time-consuming and its findings may not be generalizable.

These can be further broken down into a range of specific research methods and designs:

Primarily Quantitative Methods:

  • Experimental Research
  • Surveys and Questionnaires
  • Longitudinal Studies
  • Cross-Sectional Studies
  • Correlational Research
  • Causal-Comparative Research
  • Meta-Analysis
  • Quasi-Experimental Design

Primarily Qualitative Methods:

  • Case Study
  • Ethnography
  • Phenomenology
  • Historical research
  • Content analysis
  • Grounded theory
  • Action research
  • Observational research

Combining the two methods above, mixed methods research mixes elements of both qualitative and quantitative research methods, providing a comprehensive understanding of the research problem. We can further break these down into:

  • Sequential Explanatory Design (QUAN→QUAL): This methodology involves conducting quantitative analysis first, then supplementing it with a qualitative study.
  • Sequential Exploratory Design (QUAL→QUAN): This methodology goes in the other direction, starting with qualitative analysis and ending with quantitative analysis.

Let’s explore some methods and designs from both quantitative and qualitative traditions, starting with qualitative research methods.

Qualitative Research Methods

Qualitative research methods allow for the exploration of phenomena in their natural settings, providing detailed, descriptive responses and insights into individuals’ experiences and perceptions (Howitt, 2019).

These methods are useful when a detailed understanding of a phenomenon is sought.

1. Ethnographic Research

Ethnographic research emerged out of anthropological research, where anthropologists would enter into a setting for a sustained period of time, getting to know a cultural group and taking detailed observations.

Ethnographers would sometimes even act as participants in the group or culture, which many scholars argue is a weakness because it is a step away from achieving objectivity (Stokes & Wall, 2017).

In fact, in its most extreme version, ethnographers even conduct research on themselves, in a fascinating methodology called autoethnography.

The purpose is to understand the culture, social structure, and the behaviors of the group under study. It is often useful when researchers seek to understand shared cultural meanings and practices in their natural settings.

However, it can be time-consuming and may reflect researcher biases due to the immersion approach.

Pros of Ethnographic Research:
  • Provides deep cultural insights
  • Contextually relevant findings
  • Explores dynamic social processes

Cons of Ethnographic Research:
  • Time-consuming
  • Potential researcher bias
  • May …

Example of Ethnography

Liquidated: An Ethnography of Wall Street  by Karen Ho involves an anthropologist who embeds herself with Wall Street firms to study the culture of Wall Street bankers and how this culture affects the broader economy and world.

2. Phenomenological Research

Phenomenological research is a qualitative method focused on the study of individual experiences from the participant’s perspective (Tracy, 2019).

It focuses specifically on people’s experiences in relation to a specific social phenomenon.

This method is valuable when the goal is to understand how individuals perceive, experience, and make meaning of particular phenomena. However, because it is subjective and dependent on participants’ self-reports, findings may not be generalizable, and are highly reliant on self-reported ‘thoughts and feelings’.

Pros of Phenomenological Research:
  • Provides rich, detailed data
  • Highlights personal experience and perceptions
  • Allows exploration of complex phenomena

Cons of Phenomenological Research:
  • Limited generalizability
  • Data collection can be time-consuming
  • Requires highly skilled researchers

Example of Phenomenological Research

A phenomenological approach to experiences with technology  by Sebnem Cilesiz represents a good starting-point for formulating a phenomenological study. With its focus on the ‘essence of experience’, this piece presents methodological, reliability, validity, and data analysis techniques that phenomenologists use to explain how people experience technology in their everyday lives.

3. Historical Research

Historical research is a qualitative method involving the examination of past events to draw conclusions about the present or make predictions about the future (Stokes & Wall, 2017).

As you might expect, it’s common in the research branches of history departments in universities.

This approach is useful in studies that seek to understand the past to interpret present events or trends. However, it relies heavily on the availability and reliability of source materials, which may be limited.

Common data sources include cultural artifacts from both material and non-material culture , which are then examined, compared, contrasted, and contextualized to test hypotheses and generate theories.

Pros of Historical Research:
  • Can help understand current events or trends
  • Allows the study of change over time

Cons of Historical Research:
  • Dependent on available sources
  • Potential bias in source materials
  • Difficult to replicate

Example of Historical Research

A historical research example might be a study examining the evolution of gender roles over the last century. This research might involve the analysis of historical newspapers, advertisements, letters, and company documents, as well as sociocultural contexts.

4. Content Analysis

Content analysis is a research method that involves systematic and objective coding and interpreting of text or media to identify patterns, themes, ideologies, or biases (Schweigert, 2021).

A content analysis is useful in analyzing communication patterns, helping to reveal how texts such as newspapers, films, political speeches, and other types of ‘content’ contain narratives and biases.

However, interpretations can be very subjective, which often requires scholars to engage in practices such as cross-comparing their coding with peers or external researchers.

Content analysis can be further broken down into other specific methodologies such as semiotic analysis, multimodal analysis, and discourse analysis.

Pros of Content Analysis:
  • Unobtrusive data collection
  • Allows for large sample analysis
  • Replicable and reliable if done properly

Cons of Content Analysis:
  • Lacks contextual information
  • Potential coder bias
  • May overlook nuances

Example of Content Analysis

How is Islam Portrayed in Western Media? by Poorebrahim and Zarei (2013) employs a type of content analysis called critical discourse analysis (common in poststructuralist and critical theory research). This study combs through a corpus of western media texts to explore the language forms used in relation to Islam and Muslims, finding that portrayals are heavily stereotyped, which may reflect anti-Islam bias or a failure to understand the Islamic world.

5. Grounded Theory Research

Grounded theory involves developing a theory  during and after  data collection rather than beforehand.

This is in contrast to most academic research studies, which start with a hypothesis or theory and then test it through a study, where we might have a null hypothesis (no support for the theory) and an alternative hypothesis (support for the theory).

Grounded theory is useful because it keeps an open mind to what the data might reveal. However, it can be time-consuming and requires rigorous data analysis (Tracy, 2019).

Pros of Grounded Theory Research:
  • Helps with theory development
  • Rigorous data analysis
  • Can fill gaps in existing theories

Cons of Grounded Theory Research:
  • Time-consuming
  • Requires iterative data collection and analysis
  • Requires skilled researchers

Grounded Theory Example

Developing a Leadership Identity by Komives et al. (2005) employs a grounded theory approach to develop a theory based on the data rather than testing a hypothesis. The researchers studied the leadership identity of 13 college students taking on leadership roles. Based on their interviews, the researchers theorized that the students’ leadership identities shifted from a hierarchical view of leadership to one that embraced leadership as a collaborative concept.

6. Action Research

Action research is an approach which aims to solve real-world problems and bring about change within a setting. The study is designed to solve a specific problem – or in other words, to take action (Patten, 2017).

This approach can involve mixed methods, but is generally qualitative because it usually involves the study of a specific case study wherein the researcher works, e.g. a teacher studying their own classroom practice to seek ways they can improve.

Action research is very common in fields like education and nursing, where practitioners identify areas for improvement and then implement a study to find paths forward.

Pros of Action Research:
  • Addresses real-world problems and seeks to find solutions.
  • Integrates research and action in an action-research cycle.
  • Can bring about positive change in isolated instances, such as in a school or nursery setting.

Cons of Action Research:
  • It is time-consuming and often hard to implement into a practitioner’s already busy schedule.
  • Requires collaboration between researcher, practitioner, and research participants.
  • Complexity of managing dual roles (where the researcher is also often the practitioner).

Action Research Example

Using Digital Sandbox Gaming to Improve Creativity Within Boys’ Writing   by Ellison and Drew was a research study one of my research students completed in his own classroom under my supervision. He implemented a digital game-based approach to literacy teaching with boys and interviewed his students to see if the use of games as stimuli for storytelling helped draw them into the learning experience.

7. Natural Observational Research

Observational research can also be quantitative (see: experimental research), but in naturalistic settings for the social sciences, researchers tend to employ qualitative data collection methods like interviews and field notes to observe people in their day-to-day environments.

This approach involves the observation and detailed recording of behaviors in their natural settings (Howitt, 2019). It can provide rich, in-depth information, but the researcher’s presence might influence behavior.

While observational research has some overlaps with ethnography (especially in regard to data collection techniques), it tends not to be as sustained as ethnography, e.g. a researcher might do 5 observations, every second Monday, as opposed to being embedded in an environment.

Pros of Qualitative Observational Research:
  • Captures behavior in natural settings, allowing for interesting insights into authentic behaviors.
  • Can provide rich, detailed data through the researcher’s vignettes.
  • Non-invasive, because researchers want to observe natural activities rather than interfere with research participants.

Cons of Qualitative Observational Research:
  • Researcher’s presence may influence behavior.
  • Can be time-consuming.
  • Requires skilled and trained observers.

Observational Research Example

A researcher might use qualitative observational research to study the behaviors and interactions of children at a playground. The researcher would document the behaviors observed, such as the types of games played, levels of cooperation, and instances of conflict.

8. Case Study Research

Case study research is a qualitative method that involves a deep and thorough investigation of a single individual, group, or event in order to explore facets of that phenomenon that cannot be captured using other methods (Stokes & Wall, 2017).

Case study research is especially valuable in providing contextualized insights into specific issues, facilitating the application of abstract theories to real-world situations (Patten, 2017).

However, findings from a case study may not be generalizable due to the specific context and the limited number of cases studied (Walliman, 2021).

Pros of Case Study Research:
  • Provides detailed insights
  • Facilitates the study of complex phenomena
  • Can test or generate theories

Cons of Case Study Research:
  • Limited generalizability
  • Can be time-consuming
  • Subject to observer bias

See More: Case Study Advantages and Disadvantages

Example of a Case Study

Scholars conduct a detailed exploration of the implementation of a new teaching method within a classroom setting. The study focuses on how the teacher and students adapt to the new method, the challenges encountered, and the outcomes on student performance and engagement. While the study provides specific and detailed insights into the teaching method in that classroom, it cannot be generalized to other classrooms, as statistical significance has not been established through this qualitative approach.

Quantitative Research Methods

Quantitative research methods involve the systematic empirical investigation of observable phenomena via statistical, mathematical, or computational techniques (Pajo, 2022). The focus is on gathering numerical data and generalizing it across groups of people or to explain a particular phenomenon.

9. Experimental Research

Experimental research is a quantitative method where researchers manipulate one variable to determine its effect on another (Walliman, 2021).

This is common, for example, in high-school science labs, where students are asked to introduce a variable into a setting in order to examine its effect.

This type of research is useful in situations where researchers want to determine causal relationships between variables. However, experimental conditions may not reflect real-world conditions.

Pros of Experimental Research:
  • Allows for determination of causality.
  • Allows for the study of phenomena in highly controlled environments to minimize research contamination.
  • Can be replicated so other researchers can test and verify the results.

Cons of Experimental Research:
  • Might not reflect real-world conditions.
  • Can be costly and time-consuming to create a controlled environment.
  • Ethical concerns need to be addressed, as the research directly manipulates variables.

Example of Experimental Research

A researcher may conduct an experiment to determine the effects of a new educational approach on student learning outcomes. Students would be randomly assigned to either the control group (traditional teaching method) or the experimental group (new educational approach).
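As a minimal illustration of the random assignment step described above, the sketch below shuffles an invented roster of 40 students and splits it into two groups; the names and group sizes are hypothetical, not taken from any real study.

```python
# Hypothetical sketch: randomly assigning an invented roster of 40 students to a
# control group (traditional teaching) or an experimental group (new approach).
import random

students = [f"student_{i:02d}" for i in range(1, 41)]
random.shuffle(students)              # randomization step
control_group = students[:20]         # traditional teaching method
experimental_group = students[20:]    # new educational approach

print(len(control_group), len(experimental_group))   # 20 20
```

In a real study, the outcome measure would then be collected for both groups and compared statistically.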

10. Surveys and Questionnaires

Surveys and questionnaires are quantitative methods that involve asking research participants structured and predefined questions to collect data about their attitudes, beliefs, behaviors, or characteristics (Patten, 2017).

Surveys are beneficial for collecting data from large samples, but they depend heavily on the honesty and accuracy of respondents.

They tend to be seen as more authoritative than their qualitative counterparts, semi-structured interviews, because the data is quantifiable (e.g. a questionnaire where information is presented on a scale from 1 to 10 can allow researchers to determine and compare statistical means, averages, and variations across sub-populations in the study).
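To illustrate the kind of comparison this makes possible, here is a rough sketch of computing mean satisfaction per sub-population; the ratings, office names, and column labels are invented, and the pandas library is assumed.

```python
# Hypothetical sketch: comparing mean satisfaction ratings (1-10 scale) across
# sub-populations of survey respondents. Data and column names are invented.
import pandas as pd

responses = pd.DataFrame({
    "office":       ["Berlin", "Berlin", "Tokyo", "Tokyo", "Austin", "Austin"],
    "satisfaction": [7, 8, 5, 6, 9, 8],
})

# Mean and standard deviation of ratings for each sub-population
summary = responses.groupby("office")["satisfaction"].agg(["mean", "std"])
print(summary)
```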

Pros of Surveys and Questionnaires:
  • Data can be gathered from larger samples than is possible in qualitative research.
  • The data is quantifiable, allowing for comparison across subpopulations.
  • Can be cost-effective and time-efficient.

Cons of Surveys and Questionnaires:
  • There is heavy dependence on respondent honesty.
  • There is limited depth of response as opposed to qualitative approaches.
  • Static, with no flexibility to explore responses (unlike semi-structured or unstructured interviewing).

Example of a Survey Study

A company might use a survey to gather data about employee job satisfaction across its offices worldwide. Employees would be asked to rate various aspects of their job satisfaction on a Likert scale. While this method provides a broad overview, it may lack the depth of understanding possible with other methods (Stokes & Wall, 2017).

11. Longitudinal Studies

Longitudinal studies involve repeated observations of the same variables over extended periods (Howitt, 2019). These studies are valuable for tracking development and change but can be costly and time-consuming.

With multiple data points collected over extended periods, it’s possible to examine continuous changes within things like population dynamics or consumer behavior. This makes a detailed analysis of change possible.

(Figure: in a longitudinal study, data is collected from one sample at multiple points in time, so researchers can examine how variables change over time.)

Perhaps the most relatable example of a longitudinal study is a national census, which is taken on the same day every few years, to gather comparative demographic data that can show how a nation is changing over time.

While longitudinal studies are commonly quantitative, there are also instances of qualitative ones as well, such as the famous 7 Up study from the UK, which studies 14 individuals every 7 years to explore their development over their lives.

Pros of Longitudinal Studies:
  • Tracks changes over time, allowing for comparison of past to present events.
  • Can identify sequences of events, though causality is often harder to determine.

Cons of Longitudinal Studies:
  • Almost by definition time-consuming, because time needs to pass between each data collection session.
  • High risk of participant dropout over time as participants move on with their lives.

Example of a Longitudinal Study

A national census, taken every few years, uses surveys to develop longitudinal data, which is then compared and analyzed to present accurate trends over time. Trends a census can reveal include changes in religiosity, values and attitudes on social issues, and much more.

12. Cross-Sectional Studies

Cross-sectional studies are a quantitative research method that involves analyzing data from a population at a specific point in time (Patten, 2017). They provide a snapshot of a situation but cannot determine causality.

This design is used to measure and compare the prevalence of certain characteristics or outcomes in different groups within the sampled population.

(Figure: in a cross-sectional study, data is collected at a single point in time, allowing comparison of groups within the sample.)

The major advantage of cross-sectional design is its ability to measure a wide range of variables simultaneously without needing to follow up with participants over time.

However, cross-sectional studies do have limitations. This design can show whether associations or correlations exist between variables, but it cannot establish cause-and-effect relationships, temporal sequence, or changes and trends over time.

Pros of Cross-Sectional Studies:
  • Quick and inexpensive, with no long-term commitment required.
  • Good for descriptive analyses.

Cons of Cross-Sectional Studies:
  • Cannot determine causality, because it is a simple snapshot with no time delay between data collection points.
  • Does not allow researchers to follow up with research participants.

Example of a Cross-Sectional Study

Our longitudinal study example of a national census also happens to contain cross-sectional design. One census is cross-sectional, displaying only data from one point in time. But when a census is taken once every few years, it becomes longitudinal, and so long as the data collection technique remains unchanged, identification of changes will be achievable, adding another time dimension on top of a basic cross-sectional study.

13. Correlational Research

Correlational research is a quantitative method that seeks to determine if and to what degree a relationship exists between two or more quantifiable variables (Schweigert, 2021).

This approach provides a fast and easy way to make initial hypotheses based on either positive or negative correlation trends that can be observed within a dataset.

While correlational research can reveal relationships between variables, it cannot establish causality.

Methods used for data analysis may include correlation coefficients such as Pearson’s r or Spearman’s rho.

Pros of Correlational Research:
  • Reveals relationships between variables
  • Can use existing data
  • Can guide further experimental research

Cons of Correlational Research:
  • Cannot determine causality
  • May be …
  • Correlation may be coincidental

Example of Correlational Research

A team of researchers is interested in studying the relationship between the amount of time students spend studying and their academic performance. They gather data from a high school, measuring the number of hours each student studies per week and their grade point averages (GPAs) at the end of the semester. Upon analyzing the data, they find a positive correlation, suggesting that students who spend more time studying tend to have higher GPAs.
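A rough sketch of how that correlation might be computed is shown below; the figures are invented and the SciPy library is assumed, so this is illustrative rather than the researchers’ actual analysis.

```python
# Hypothetical sketch: measuring the association between weekly study hours and
# GPA with Pearson's r and Spearman's rho. All numbers are invented.
from scipy import stats

hours_studied = [2, 4, 5, 7, 8, 10, 12, 15]
gpa           = [2.1, 2.6, 2.8, 3.0, 3.1, 3.4, 3.6, 3.8]

r, p_r     = stats.pearsonr(hours_studied, gpa)    # linear association
rho, p_rho = stats.spearmanr(hours_studied, gpa)   # rank-based association

print(f"Pearson r = {r:.2f} (p = {p_r:.3f})")
print(f"Spearman rho = {rho:.2f} (p = {p_rho:.3f})")
# A strong positive coefficient here would still not establish causation.
```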

14. Quasi-Experimental Design Research

Quasi-experimental design research is a quantitative research method that is similar to experimental design but lacks the element of random assignment to treatment or control.

Instead, quasi-experimental designs typically rely on certain other methods to control for extraneous variables.

The term ‘quasi-experimental’ implies that the experiment resembles a true experiment, but it is not exactly the same because it doesn’t meet all the criteria for a ‘true’ experiment, specifically in terms of control and random assignment.

Quasi-experimental design is useful when researchers want to study a causal hypothesis or relationship, but practical or ethical considerations prevent them from manipulating variables and randomly assigning participants to conditions.

Pros of Quasi-Experimental Design:
  • It’s more feasible to implement than true experiments.
  • It can be conducted in real-world settings, making the findings more applicable to the real world.
  • Useful when it’s unethical or impossible to manipulate the independent variable or randomly assign participants.

Cons of Quasi-Experimental Design:
  • Without random assignment, it’s harder to rule out confounding variables.
  • The lack of random assignment may reduce the internal validity of the study.
  • It’s more difficult to establish a cause-effect relationship due to the potential for confounding variables.

Example of Quasi-Experimental Design

A researcher wants to study the impact of a new math tutoring program on student performance. However, ethical and practical constraints prevent random assignment to the “tutoring” and “no tutoring” groups. Instead, the researcher compares students who chose to receive tutoring (experimental group) to similar students who did not choose to receive tutoring (control group), controlling for other variables like grade level and previous math performance.
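One crude way to “control for other variables” in such a design is to match each tutored student with a similar non-tutored student. The sketch below pairs students within the same grade by their closest prior math score; the scores and field names are invented, and this is only one of many possible matching strategies.

```python
# Hypothetical sketch: matching tutored students to non-tutored students in the
# same grade with the closest prior math score, then comparing post-test scores.
tutored = [
    {"grade": 7, "prior": 62, "post": 74},
    {"grade": 8, "prior": 70, "post": 81},
]
not_tutored = [
    {"grade": 7, "prior": 60, "post": 66},
    {"grade": 7, "prior": 75, "post": 78},
    {"grade": 8, "prior": 69, "post": 72},
]

differences = []
for student in tutored:
    same_grade = [c for c in not_tutored if c["grade"] == student["grade"]]
    match = min(same_grade, key=lambda c: abs(c["prior"] - student["prior"]))
    differences.append(student["post"] - match["post"])

print(f"Mean matched difference: {sum(differences) / len(differences):.1f} points")
# Matching reduces, but does not eliminate, confounding from unmeasured variables.
```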

Related: Examples and Types of Random Assignment in Research

15. Meta-Analysis Research

Meta-analysis statistically combines the results of multiple studies on a specific topic to yield a more precise estimate of the effect size. It’s the gold standard of secondary research.

Meta-analysis is particularly useful when there are numerous studies on a topic, and there is a need to integrate the findings to draw more reliable conclusions.

Some meta-analyses can identify flaws or gaps in a corpus of research, which can make them highly influential in academic research despite their lack of primary data collection.

However, they tend only to be feasible when there is a sizable corpus of high-quality and reliable studies into a phenomenon.

Pros of Meta-Analysis:
  • Increased statistical power: by combining data from multiple studies, meta-analysis increases the statistical power to detect effects.
  • Greater precision: it provides more precise estimates of effect sizes by reducing the influence of random error.
  • Resolving discrepancies: meta-analysis can help resolve disagreements between different studies on a topic.

Cons of Meta-Analysis:
  • Publication bias: studies with null or negative findings are less likely to be published, leading to an overestimation of effect sizes.
  • Quality of studies: the validity of a meta-analysis depends on the quality of the studies included.
  • Heterogeneity: differences in study design, sample, or procedures can introduce heterogeneity, complicating interpretation of results.

Example of a Meta-Analysis

The power of feedback revisited (Wisniewski, Zierer & Hattie, 2020) is a meta-analysis that examines 435 empirical studies on the effects of feedback on student learning. The authors use a random-effects model to ascertain whether there is a clear effect size across the literature. They find that feedback tends to impact cognitive and motor skill outcomes but has less of an effect on motivational and behavioral outcomes.
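To make the core calculation concrete, here is a minimal sketch of inverse-variance pooling, the simplest (fixed-effect) version of what a meta-analysis does. The effect sizes and standard errors are invented, and this is not the model used by Wisniewski et al.

```python
# Hypothetical sketch: pooling invented effect sizes from three studies with
# inverse-variance weights (fixed-effect model).
import math

studies = [
    {"effect": 0.40, "se": 0.10},
    {"effect": 0.25, "se": 0.15},
    {"effect": 0.55, "se": 0.20},
]

weights = [1 / s["se"] ** 2 for s in studies]     # weight = 1 / variance
pooled = sum(w * s["effect"] for w, s in zip(weights, studies)) / sum(weights)
pooled_se = math.sqrt(1 / sum(weights))

print(f"Pooled effect = {pooled:.2f}, 95% CI half-width = {1.96 * pooled_se:.2f}")
# A random-effects model, like the one used by Wisniewski et al. (2020), would
# additionally estimate between-study variance and add it to each study's variance.
```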

Choosing a research method requires a lot of consideration regarding what you want to achieve, your research paradigm, and the methodology that is most valuable for what you are studying. There are multiple types of research methods, many of which I haven’t been able to present here. Generally, it’s recommended that you work with an experienced researcher or research supervisor to identify a suitable research method for your study at hand.

Hammond, M., & Wellington, J. (2020). Research methods: The key concepts . New York: Routledge.

Howitt, D. (2019). Introduction to qualitative research methods in psychology . London: Pearson UK.

Pajo, B. (2022). Introduction to research methods: A hands-on approach . New York: Sage Publications.

Patten, M. L. (2017). Understanding research methods: An overview of the essentials. New York: Sage.

Schweigert, W. A. (2021). Research methods in psychology: A handbook . Los Angeles: Waveland Press.

Stokes, P., & Wall, T. (2017). Research methods . New York: Bloomsbury Publishing.

Tracy, S. J. (2019). Qualitative research methods: Collecting evidence, crafting analysis, communicating impact . London: John Wiley & Sons.

Walliman, N. (2021). Research methods: The basics. London: Routledge.



How To Choose Your Research Methodology

Qualitative vs quantitative vs mixed methods.

By: Derek Jansen (MBA). Expert Reviewed By: Dr Eunice Rautenbach | June 2021

Without a doubt, one of the most common questions we receive at Grad Coach is “ How do I choose the right methodology for my research? ”. It’s easy to see why – with so many options on the research design table, it’s easy to get intimidated, especially with all the complex lingo!

In this post, we’ll explain the three overarching types of research – qualitative, quantitative and mixed methods – and how you can go about choosing the best methodological approach for your research.

Overview: Choosing Your Methodology

  • Understanding the options: qualitative research, quantitative research, and mixed methods-based research

  • Choosing a research methodology: the nature of the research, research area norms, and practicalities


1. Understanding the options

Before we jump into the question of how to choose a research methodology, it’s useful to take a step back to understand the three overarching types of research – qualitative , quantitative and mixed methods -based research. Each of these options takes a different methodological approach.

Qualitative research utilises data that is not numbers-based. In other words, qualitative research focuses on words , descriptions , concepts or ideas – while quantitative research makes use of numbers and statistics. Qualitative research investigates the “softer side” of things to explore and describe, while quantitative research focuses on the “hard numbers”, to measure differences between variables and the relationships between them.

Importantly, qualitative research methods are typically used to explore and gain a deeper understanding of the complexity of a situation – to draw a rich picture. In contrast to this, quantitative methods are usually used to confirm or test hypotheses. In other words, they have distinctly different purposes. The comparison below highlights a few of the key differences between qualitative and quantitative research.

Qualitative research:
  • Uses an inductive approach
  • Is used to build theories
  • Takes a subjective approach
  • Adopts an open and flexible approach
  • The researcher is close to the respondents
  • Interviews and focus groups are oftentimes used to collect word-based data
  • Generally draws on small sample sizes
  • Uses qualitative data analysis techniques (e.g. content analysis, thematic analysis, etc.)

Quantitative research:
  • Uses a deductive approach
  • Is used to test theories
  • Takes an objective approach
  • Adopts a closed, highly planned approach
  • The researcher is disconnected from respondents
  • Surveys or laboratory equipment are often used to collect number-based data
  • Generally requires large sample sizes
  • Uses statistical analysis techniques to make sense of the data

Mixed methods -based research, as you’d expect, attempts to bring these two types of research together, drawing on both qualitative and quantitative data. Quite often, mixed methods-based studies will use qualitative research to explore a situation and develop a potential model of understanding (this is called a conceptual framework), and then go on to use quantitative methods to test that model empirically.

In other words, while qualitative and quantitative methods (and the philosophies that underpin them) are completely different, they are not at odds with each other. It’s not a competition of qualitative vs quantitative. On the contrary, they can be used together to develop a high-quality piece of research. Of course, this is easier said than done, so we usually recommend that first-time researchers stick to a single approach , unless the nature of their study truly warrants a mixed-methods approach.

The key takeaway here, and the reason we started by looking at the three options, is that it’s important to understand that each methodological approach has a different purpose – for example, to explore and understand situations (qualitative), to test and measure (quantitative) or to do both. They’re not simply alternative tools for the same job. 

Right – now that we’ve got that out of the way, let’s look at how you can go about choosing the right methodology for your research.


2. How to choose a research methodology

To choose the right research methodology for your dissertation or thesis, you need to consider three important factors . Based on these three factors, you can decide on your overarching approach – qualitative, quantitative or mixed methods. Once you’ve made that decision, you can flesh out the finer details of your methodology, such as the sampling , data collection methods and analysis techniques (we discuss these separately in other posts ).

The three factors you need to consider are:

  • The nature of your research aims, objectives and research questions
  • The methodological approaches taken in the existing literature
  • Practicalities and constraints

Let’s take a look at each of these.

Factor #1: The nature of your research

As I mentioned earlier, each type of research (and therefore, research methodology), whether qualitative, quantitative or mixed, has a different purpose and helps solve a different type of question. So, it’s logical that the key deciding factor in terms of which research methodology you adopt is the nature of your research aims, objectives and research questions .

But, what types of research exist?

Broadly speaking, research can fall into one of three categories:

  • Exploratory – getting a better understanding of an issue and potentially developing a theory regarding it
  • Confirmatory – confirming a potential theory or hypothesis by testing it empirically
  • A mix of both – building a potential theory or hypothesis and then testing it

As a rule of thumb, exploratory research tends to adopt a qualitative approach , whereas confirmatory research tends to use quantitative methods . This isn’t set in stone, but it’s a very useful heuristic. Naturally then, research that combines a mix of both, or is seeking to develop a theory from the ground up and then test that theory, would utilize a mixed-methods approach.


Let’s look at an example in action.

If your research aims were to understand the perspectives of war veterans regarding certain political matters, you’d likely adopt a qualitative methodology, making use of interviews to collect data and one or more qualitative data analysis methods to make sense of the data.

If, on the other hand, your research aims involved testing a set of hypotheses regarding the link between political leaning and income levels, you’d likely adopt a quantitative methodology, using numbers-based data from a survey to measure the links between variables and/or constructs .

So, the first (and most important) thing you need to consider when deciding which methodological approach to use for your research project is the nature of your research aims, objectives and research questions. Specifically, you need to assess whether your research leans in an exploratory or confirmatory direction or involves a mix of both.

The importance of achieving solid alignment between these three factors and your methodology can’t be overstated. If they’re misaligned, you’re going to be forcing a square peg into a round hole. In other words, you’ll be using the wrong tool for the job, and your research will become a disjointed mess.

If your research is a mix of both exploratory and confirmatory, but you have a tight word count limit, you may need to consider trimming down the scope a little and focusing on one or the other. One methodology executed well has a far better chance of earning marks than a poorly executed mixed methods approach. So, don’t try to be a hero, unless there is a very strong underpinning logic.


Factor #2: The disciplinary norms

Choosing the right methodology for your research also involves looking at the approaches used by other researchers in the field, and studies with similar research aims and objectives to yours. Oftentimes, within a discipline, there is a common methodological approach (or set of approaches) used in studies. While this doesn’t mean you should follow the herd “just because”, you should at least consider these approaches and evaluate their merit within your context.

A major benefit of reviewing the research methodologies used by similar studies in your field is that you can often piggyback on the data collection techniques that other (more experienced) researchers have developed. For example, if you’re undertaking a quantitative study, you can often find tried and tested survey scales with high Cronbach’s alphas. These are usually included in the appendices of journal articles, so you don’t even have to contact the original authors. By using these, you’ll save a lot of time and ensure that your study stands on the proverbial “shoulders of giants” by using high-quality measurement instruments .
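For readers curious what “a high Cronbach’s alpha” means in practice, here is a minimal sketch of the calculation for a short scale; the item responses are invented and the NumPy library is assumed.

```python
# Hypothetical sketch: Cronbach's alpha for a four-item survey scale.
# Rows = respondents, columns = items (e.g. 5-point Likert responses). Data invented.
import numpy as np

items = np.array([
    [4, 5, 4, 4],
    [3, 3, 2, 3],
    [5, 5, 4, 5],
    [2, 2, 3, 2],
    [4, 4, 4, 5],
])

k = items.shape[1]
item_variances = items.var(axis=0, ddof=1)          # variance of each item
total_variance = items.sum(axis=1).var(ddof=1)      # variance of respondents' total scores

alpha = (k / (k - 1)) * (1 - item_variances.sum() / total_variance)
print(f"Cronbach's alpha = {alpha:.2f}")   # values around 0.7 or higher are usually considered acceptable
```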

Of course, when reviewing existing literature, keep point #1 front of mind. In other words, your methodology needs to align with your research aims, objectives and questions. Don’t fall into the trap of adopting the methodological “norm” of other studies just because it’s popular. Only adopt that which is relevant to your research.

Factor #3: Practicalities

When choosing a research methodology, there will always be a tension between doing what’s theoretically best (i.e., the most scientifically rigorous research design ) and doing what’s practical , given your constraints . This is the nature of doing research and there are always trade-offs, as with anything else.

But what constraints, you ask?

When you’re evaluating your methodological options, you need to consider the following constraints:

  • Data access
  • Equipment and software
  • Your knowledge and skills

Let’s look at each of these.

Constraint #1: Data access

The first practical constraint you need to consider is your access to data . If you’re going to be undertaking primary research , you need to think critically about the sample of respondents you realistically have access to. For example, if you plan to use in-person interviews , you need to ask yourself how many people you’ll need to interview, whether they’ll be agreeable to being interviewed, where they’re located, and so on.

If you’re wanting to undertake a quantitative approach using surveys to collect data, you’ll need to consider how many responses you’ll require to achieve statistically significant results. For many statistical tests, a sample of a few hundred respondents is typically needed to develop convincing conclusions.
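If you want a more principled figure than “a few hundred”, a power analysis can give a rough estimate. The sketch below assumes the statsmodels package, an independent-samples t-test, and an illustrative medium effect size; your own numbers will differ.

```python
# Hypothetical sketch: sample size needed per group to detect a medium effect
# (Cohen's d = 0.5) with 80% power at alpha = 0.05, using an independent t-test.
from statsmodels.stats.power import TTestIndPower

n_per_group = TTestIndPower().solve_power(effect_size=0.5, alpha=0.05, power=0.8)
print(round(n_per_group))   # roughly 64 respondents per group under these assumptions
```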

So, think carefully about what data you’ll need access to, how much data you’ll need and how you’ll collect it. The last thing you want is to spend a huge amount of time on your research only to find that you can’t get access to the required data.

Constraint #2: Time

The next constraint is time. If you’re undertaking research as part of a PhD, you may have a fairly open-ended time limit, but this is unlikely to be the case for undergrad and Masters-level projects. So, pay attention to your timeline, as the data collection and analysis components of different methodologies have a major impact on time requirements . Also, keep in mind that these stages of the research often take a lot longer than originally anticipated.

Another practical implication of time limits is that it will directly impact which time horizon you can use – i.e. longitudinal vs cross-sectional . For example, if you’ve got a 6-month limit for your entire research project, it’s quite unlikely that you’ll be able to adopt a longitudinal time horizon. 

Constraint #3: Money

As with so many things, money is another important constraint you’ll need to consider when deciding on your research methodology. While some research designs will cost near zero to execute, others may require a substantial budget .

Some of the costs that may arise include:

  • Software costs – e.g. survey hosting services, analysis software, etc.
  • Promotion costs – e.g. advertising a survey to attract respondents
  • Incentive costs – e.g. providing a prize or cash payment incentive to attract respondents
  • Equipment rental costs – e.g. recording equipment, lab equipment, etc.
  • Travel costs
  • Food & beverages

These are just a handful of costs that can creep into your research budget. Like most projects, the actual costs tend to be higher than the estimates, so be sure to err on the conservative side and expect the unexpected. It’s critically important that you’re honest with yourself about these costs, or you could end up getting stuck midway through your project because you’ve run out of money.


Constraint #4: Equipment & software

Another practical consideration is the hardware and/or software you’ll need in order to undertake your research. Of course, this variable will depend on the type of data you’re collecting and analysing. For example, you may need lab equipment to analyse substances, or you may need specific analysis software to analyse statistical data. So, be sure to think about what hardware and/or software you’ll need for each potential methodological approach, and whether you have access to these.

Constraint #5: Your knowledge and skillset

The final practical constraint is a big one. Naturally, the research process involves a lot of learning and development along the way, so you will accrue knowledge and skills as you progress. However, when considering your methodological options, you should still consider your current position on the ladder.

Some of the questions you should ask yourself are:

  • Am I more of a “numbers person” or a “words person”?
  • How much do I know about the analysis methods I’ll potentially use (e.g. statistical analysis)?
  • How much do I know about the software and/or hardware that I’ll potentially use?
  • How excited am I to learn new research skills and gain new knowledge?
  • How much time do I have to learn the things I need to learn?

Answering these questions honestly will provide you with another set of criteria against which you can evaluate the research methodology options you’ve shortlisted.

So, as you can see, there is a wide range of practicalities and constraints that you need to take into account when you’re deciding on a research methodology. These practicalities create a tension between the “ideal” methodology and the methodology that you can realistically pull off. This is perfectly normal, and it’s your job to find the option that presents the best set of trade-offs.

Recap: Choosing a methodology

In this post, we’ve discussed how to go about choosing a research methodology. The three major deciding factors we looked at were:

  • The nature of your research – whether it is exploratory, confirmatory, or a combination of both
  • The methodological norms in your research area
  • Practicalities and constraints – data access, time, money, equipment and software, and your own knowledge and skillset



Pfeiffer Library

Research Methodologies


What are research methodologies?


According to Dawson (2019), a research methodology is the primary principle that will guide your research. It becomes the general approach to conducting research on your topic and determines what research method you will use. A research methodology is different from a research method because research methods are the tools you use to gather your data (Dawson, 2019). You must consider several issues when it comes to selecting the most appropriate methodology for your topic. Issues might include research limitations and ethical dilemmas that might impact the quality of your research. Descriptions of each type of methodology are included below.

Quantitative research methodologies are meant to create numeric statistics by using survey research to gather data (Dawson, 2019).  This approach tends to reach a larger amount of people in a shorter amount of time.  According to Labaree (2020), there are three parts that make up a quantitative research methodology:

  • Sample population
  • How you will collect your data (this is the research method)
  • How you will analyze your data

Once you decide on a methodology, you can consider the method to which you will apply your methodology.

Qualitative research methodologies examine the behaviors, opinions, and experiences of individuals through methods of examination (Dawson, 2019).  This type of approach typically requires less participants, but more time with each participant.  It gives research subjects the opportunity to provide their own opinion on a certain topic.

Examples of Qualitative Research Methodologies

  • Action research:  This is when the researcher works with a group of people to improve something in a certain environment.  It is a common approach for research in organizational management, community development, education, and agriculture (Dawson, 2019).
  • Ethnography:  The process of organizing and describing cultural behaviors (Dawson, 2019).  Researchers may immerse themselves in another culture to receive an "inside look" into the group they are studying.  It is often a time-consuming process because the researcher will do this for a long period of time.  This can also be called "participant observation" (Dawson, 2019).
  • Feminist research:  The goal of this methodology is to study topics that have been dominated by male test subjects.  It aims to study females and compare the results to previous studies that used male participants (Dawson, 2019).
  • Grounded theory:  The process of developing a theory to describe a phenomenon strictly through the data results collected in a study.  It is different from other research methodologies where the researcher attempts to prove a hypothesis that they create before collecting data.  Popular research methods for this approach include focus groups and interviews (Dawson, 2019).

A mixed methodology allows you to implement the strengths of both qualitative and quantitative research methods.  In some cases, you may find that your research project would benefit from this.  This approach is beneficial because it allows each methodology to counteract the weaknesses of the other (Dawson, 2019).  You should consider this option carefully, as it can make your research complicated if not planned correctly.

What should you do to decide on a research methodology?  The most logical way to determine your methodology is to decide whether you plan on conducting qualitative or quantitative research.  You also have the option to implement a mixed methods approach.  Looking back on Dawson's (2019) five "W's" may help you with this process.  You should also look for key words that indicate a specific type of research methodology in your hypothesis or proposal.  Some words may lean more towards one methodology over another.

Quantitative Research Key Words

  • How satisfied

Qualitative Research Key Words

  • Experiences
  • Thoughts/Think
  • Relationship

Research Methods Guide: Research Design & Method



FAQ: Research Design & Method

What is the difference between Research Design and Research Method?

Research design is a plan to answer your research question.  A research method is a strategy used to implement that plan.  Research design and methods are different but closely related, because good research design ensures that the data you obtain will help you answer your research question more effectively.

Which research method should I choose?

It depends on your research goal and on what subjects (and whom) you want to study.  Let's say you are interested in studying what makes people happy, or why some students are more conscious about recycling on campus.  To answer these questions, you need to decide how to collect your data.  The most frequently used methods include:

  • Observation / Participant Observation
  • Focus Groups
  • Experiments
  • Secondary Data Analysis / Archival Study
  • Mixed Methods (combination of some of the above)

One particular method could be better suited to your research goal than others, because the data you collect from different methods will be different in quality and quantity.   For instance, surveys are usually designed to produce relatively short answers, rather than the extensive responses expected in qualitative interviews.

What other factors should I consider when choosing one method over another?

Time for data collection and analysis is something you want to consider.  An observation or interview method (a so-called qualitative approach) helps you collect richer information, but it takes time.  Using a survey helps you collect more data quickly, yet it may lack detail.  So, you will need to consider the time you have for research and the balance between the strengths and weaknesses associated with each method (e.g., qualitative vs. quantitative).


Research Methods In Psychology

By Saul McLeod, PhD (Editor-in-Chief, Simply Psychology) and Olivia Guy-Evans, MSc (Associate Editor, Simply Psychology).

Research methods in psychology are systematic procedures used to observe, describe, predict, and explain behavior and mental processes. They include experiments, surveys, case studies, and naturalistic observations, ensuring data collection is objective and reliable to understand and explain psychological phenomena.


Hypotheses are statements about the prediction of the results, that can be verified or disproved by some investigation.

There are four types of hypotheses :
  • Null Hypotheses (H0) – these predict that no difference will be found in the results between the conditions. Typically these are written ‘There will be no difference…’
  • Alternative Hypotheses (Ha or H1) – these predict that there will be a significant difference in the results between the two conditions. This is also known as the experimental hypothesis.
  • One-tailed (directional) hypotheses – these state the specific direction the researcher expects the results to move in, e.g. higher, lower, more, less. In a correlation study, the predicted direction of the correlation can be either positive or negative.
  • Two-tailed (non-directional) hypotheses – these state that a difference will be found between the conditions of the independent variable but do not state the direction of the difference or relationship. Typically these are written ‘There will be a difference…’

All research has an alternative hypothesis (either a one-tailed or two-tailed) and a corresponding null hypothesis.

Once the research is conducted and results are found, psychologists must accept one hypothesis and reject the other. 

So, if a difference is found, the Psychologist would accept the alternative hypothesis and reject the null.  The opposite applies if no difference is found.
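As a minimal sketch of that accept/reject decision, the example below runs a two-tailed independent-samples t-test on invented scores; the SciPy library and a 0.05 significance level are assumed.

```python
# Hypothetical sketch: testing a two-tailed alternative hypothesis against the
# null hypothesis with an independent-samples t-test. Scores are invented.
from scipy import stats

condition_a = [12, 15, 14, 10, 13, 16, 12, 14]
condition_b = [9, 11, 10, 8, 12, 10, 9, 11]

t_stat, p_two_tailed = stats.ttest_ind(condition_a, condition_b)

alpha = 0.05
if p_two_tailed < alpha:
    print(f"p = {p_two_tailed:.3f}: reject H0 and accept the alternative hypothesis")
else:
    print(f"p = {p_two_tailed:.3f}: retain H0")
# For a one-tailed (directional) hypothesis, the p-value is halved, but only if
# the observed difference lies in the predicted direction.
```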

Sampling techniques

Sampling is the process of selecting a representative group from the population under study.

Sample Target Population

A sample is the participants you select from a target population (the group you are interested in) to make generalizations about.

Representative means the extent to which a sample mirrors a researcher’s target population and reflects its characteristics.

Generalisability means the extent to which their findings can be applied to the larger population of which their sample was a part.

  • Volunteer sample : where participants pick themselves through newspaper adverts, noticeboards or online.
  • Opportunity sampling : also known as convenience sampling , uses people who are available at the time the study is carried out and willing to take part. It is based on convenience.
  • Random sampling : when every person in the target population has an equal chance of being selected. An example of random sampling would be picking names out of a hat (a few of the techniques in this list are sketched in code after the list).
  • Systematic sampling : when a system is used to select participants. Picking every Nth person from all possible participants. N = the number of people in the research population / the number of people needed for the sample.
  • Stratified sampling : when you identify the subgroups and select participants in proportion to their occurrences.
  • Snowball sampling : when researchers find a few participants, and then ask them to find participants themselves and so on.
  • Quota sampling : when researchers will be told to ensure the sample fits certain quotas, for example they might be told to find 90 participants, with 30 of them being unemployed.
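A minimal sketch of random, systematic, and stratified sampling is shown below; the population of 100 people and their subgroup labels are invented purely for illustration.

```python
# Hypothetical sketch: random, systematic, and stratified sampling from an
# invented population. Group labels and sizes are illustrative only.
import random
from collections import defaultdict

population = [{"id": i, "group": "A" if i % 3 else "B"} for i in range(100)]
n = 10

# Random sampling: every member has an equal chance of selection
random_sample = random.sample(population, n)

# Systematic sampling: pick every Nth person (N = population size / sample size)
step = len(population) // n
systematic_sample = population[::step][:n]

# Stratified sampling: sample from each subgroup in proportion to its size
by_group = defaultdict(list)
for person in population:
    by_group[person["group"]].append(person)

stratified_sample = []
for members in by_group.values():
    share = round(n * len(members) / len(population))
    stratified_sample.extend(random.sample(members, share))

print(len(random_sample), len(systematic_sample), len(stratified_sample))
```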

Experiments always have an independent and dependent variable .

  • The independent variable is the one the experimenter manipulates (the thing that changes between the conditions the participants are placed into). It is assumed to have a direct effect on the dependent variable.
  • The dependent variable is the thing being measured, or the results of the experiment.


Operationalization of variables means making them measurable/quantifiable. We must use operationalization to ensure that variables are in a form that can be easily tested.

For instance, we can’t really measure ‘happiness’, but we can measure how many times a person smiles within a two-hour period. 

By operationalizing variables, we make it easy for someone else to replicate our research. Remember, this is important because we can check if our findings are reliable.

Extraneous variables are all variables that are not the independent variable but could affect the results of the experiment.

It can be a natural characteristic of the participant, such as intelligence levels, gender, or age for example, or it could be a situational feature of the environment such as lighting or noise.

Demand characteristics are a type of extraneous variable that occurs when participants work out the aims of the research study and begin to behave in the way they think is expected of them.

For example, in Milgram’s research , critics argued that participants worked out that the shocks were not real and they administered them as they thought this was what was required of them. 

Extraneous variables must be controlled so that they do not affect (confound) the results.

Randomly allocating participants to their conditions or using a matched pairs experimental design can help to reduce participant variables. 

Situational variables are controlled by using standardized procedures, ensuring every participant in a given condition is treated in the same way.

Experimental Design

Experimental design refers to how participants are allocated to each condition of the independent variable, such as a control or experimental group.
  • Independent design (between-groups design): each participant is selected for only one group. With the independent design, the most common way of deciding which participants go into which group is by means of randomization.
  • Matched participants design: each participant is selected for only one group, but the participants in the two groups are matched for some relevant factor or factors (e.g. ability; sex; age).
  • Repeated measures design (within-groups design): each participant appears in both groups, so that there are exactly the same participants in each group.
  • The main problem with the repeated measures design is that there may well be order effects. Their experiences during the experiment may change the participants in various ways.
  • They may perform better when they appear in the second group because they have gained useful information about the experiment or about the task. On the other hand, they may perform less well on the second occasion because of tiredness or boredom.
  • Counterbalancing is the best way of preventing order effects from disrupting the findings of an experiment, and involves ensuring that each condition is equally likely to be used first and second by the participants (see the sketch after this list).
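A minimal sketch of counterbalancing two conditions across an invented group of eight participants; the participant labels and condition names are hypothetical.

```python
# Hypothetical sketch: counterbalancing the order of two conditions (A and B) in a
# repeated measures design so that each order is used by half of the participants.
participants = [f"participant_{i}" for i in range(1, 9)]

orders = {}
for index, person in enumerate(participants):
    # Alternate the order so condition A and condition B each come first equally often
    orders[person] = ["A", "B"] if index % 2 == 0 else ["B", "A"]

for person, order in orders.items():
    print(person, "->", order)
```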

If we wish to compare two groups with respect to a given independent variable, it is essential to make sure that the two groups do not differ in any other important way. 

Experimental Methods

All experimental methods involve an IV (independent variable) and a DV (dependent variable).

  • Lab experiments are conducted in a controlled environment. The researcher decides where the experiment will take place, at what time, with which participants, and in what circumstances, using a standardized procedure.

  • Field experiments are conducted in the everyday (natural) environment of the participants. The experimenter still manipulates the IV, but in a real-life setting. It may be possible to control extraneous variables, though such control is more difficult than in a lab experiment.
  • Natural experiments are when a naturally occurring IV is investigated that isn’t deliberately manipulated, it exists anyway. Participants are not randomly allocated, and the natural event may only occur rarely.

Case studies are in-depth investigations of a person, group, event, or community. It uses information from a range of sources, such as from the person concerned and also from their family and friends.

Many techniques may be used such as interviews, psychological tests, observations and experiments. Case studies are generally longitudinal: in other words, they follow the individual or group over an extended period of time. 

Case studies are widely used in psychology and among the best-known ones carried out were by Sigmund Freud . He conducted very detailed investigations into the private lives of his patients in an attempt to both understand and help them overcome their illnesses.

Case studies provide rich qualitative data and have high levels of ecological validity. However, it is difficult to generalize from individual cases as each one has unique characteristics.

Correlational Studies

Correlation means association; it is a measure of the extent to which two variables are related. One of the variables can be regarded as the predictor variable with the other one as the outcome variable.

Correlational studies typically involve obtaining two different measures from a group of participants, and then assessing the degree of association between the measures. 

The predictor variable can be seen as occurring before the outcome variable in some sense. It is called the predictor variable, because it forms the basis for predicting the value of the outcome variable.

Relationships between variables can be displayed on a graph or as a numerical score called a correlation coefficient.

[Figure: scatter plots illustrating positive, negative, and no correlation]

  • If an increase in one variable tends to be associated with an increase in the other, then this is known as a positive correlation .
  • If an increase in one variable tends to be associated with a decrease in the other, then this is known as a negative correlation .
  • A zero correlation occurs when there is no relationship between variables.

After looking at the scattergraph, if we want to be sure that a significant relationship does exist between the two variables, a statistical test of correlation can be conducted, such as Spearman’s rho.

The test will give us a score called a correlation coefficient. This is a value between -1 and +1, and the closer its absolute value is to 1, the stronger the relationship between the variables. The coefficient can be positive (e.g., 0.63) or negative (e.g., -0.63).
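As an illustration, the snippet below computes Spearman’s rho (and, for comparison, Pearson’s r) for two invented variables using SciPy; the data are hypothetical and only meant to show how a coefficient and its p-value are obtained.

```python
# Illustrative sketch (not from the source): correlation coefficients for two
# hypothetical measures. spearmanr and pearsonr each return (coefficient, p-value).
from scipy.stats import spearmanr, pearsonr

hours_revised = [2, 4, 5, 7, 8, 10, 11, 13]       # hypothetical predictor variable
exam_score = [35, 40, 48, 55, 60, 68, 70, 80]     # hypothetical outcome variable

rho, p_value = spearmanr(hours_revised, exam_score)
r, _ = pearsonr(hours_revised, exam_score)

print(f"Spearman's rho = {rho:.2f} (p = {p_value:.3f})")
print(f"Pearson's r    = {r:.2f}")
```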

[Figure: examples of strong, weak, and perfect positive and negative correlations, and no correlation]

A correlation between variables, however, does not automatically mean that the change in one variable is the cause of the change in the values of the other variable. A correlation only shows if there is a relationship between variables.

Correlation does not prove causation, as a third variable may be involved.


Interview Methods

Interviews are commonly divided into two types: structured and unstructured.

In a structured interview, a fixed, predetermined set of questions is put to every participant in the same order and in the same way.

Responses are recorded on a questionnaire, and the researcher presets the order and wording of questions, and sometimes the range of alternative answers.

The interviewer stays within their role and maintains social distance from the interviewee.

In an unstructured interview, there are no set questions; the participant can raise whatever topics they feel are relevant, and the interviewer poses follow-up questions in response to the participant’s answers.

Unstructured interviews are most useful in qualitative research to analyze attitudes and values.

Though they rarely provide a valid basis for generalization, their main advantage is that they enable the researcher to probe social actors’ subjective point of view. 

Questionnaire Method

Questionnaires can be thought of as a kind of written interview. They can be carried out face to face, by telephone, or post.

The choice of questions is important because of the need to avoid bias or ambiguity in the questions, ‘leading’ the respondent or causing offense.

  • Open questions are designed to encourage a full, meaningful answer using the subject’s own knowledge and feelings. They provide insights into feelings, opinions, and understanding. Example: “How do you feel about that situation?”
  • Closed questions can be answered with a simple “yes” or “no” or specific information, limiting the depth of response. They are useful for gathering specific facts or confirming details. Example: “Do you feel anxious in crowds?”

The questionnaire’s other practical advantages are that it is cheaper than face-to-face interviews and can be used to contact many respondents scattered over a wide area relatively quickly.

Observations

There are different types of observation methods :
  • Covert observation is where the researcher doesn’t tell the participants they are being observed until after the study is complete. This method raises ethical problems around deception and informed consent.
  • Overt observation is where a researcher tells the participants they are being observed and what they are being observed for.
  • Controlled : behavior is observed under controlled laboratory conditions (e.g., Bandura’s Bobo doll study).
  • Natural : Here, spontaneous behavior is recorded in a natural setting.
  • Participant : Here, the observer has direct contact with the group of people they are observing. The researcher becomes a member of the group they are researching.  
  • Non-participant (aka “fly on the wall”): the researcher does not have direct contact with the people being observed; participants’ behavior is observed from a distance.

Pilot Study

A pilot study is a small-scale preliminary study conducted in order to evaluate the feasibility of the key steps in a future, full-scale project.

A pilot study is an initial run-through of the procedures to be used in an investigation; it involves selecting a few people and trying out the study on them. It is possible to save time, and in some cases, money, by identifying any flaws in the procedures designed by the researcher.

A pilot study can help the researcher spot any ambiguities or confusion in the information given to participants, or problems with the task devised.

Sometimes the task is too hard, and the researcher may get a floor effect, because none of the participants can score at all or can complete the task – all performances are low.

The opposite effect is a ceiling effect, when the task is so easy that all achieve virtually full marks or top performances and are “hitting the ceiling”.

Research Design

In cross-sectional research, a researcher compares multiple segments of the population at the same time.

Sometimes, we want to see how people change over time, as in studies of human development and lifespan. Longitudinal research is a research design in which data-gathering is administered repeatedly over an extended period of time.

In cohort studies , the participants must share a common factor or characteristic such as age, demographic, or occupation. A cohort study is a type of longitudinal study in which researchers monitor and observe a chosen population over an extended period.

Triangulation means using more than one research method to improve the study’s validity.

Reliability

Reliability is a measure of consistency: if a particular measurement is repeated and the same result is obtained, then it is described as reliable.

  • Test-retest reliability: assessing the same person on two different occasions, which shows the extent to which the test produces the same answers.
  • Inter-observer reliability: the extent to which there is agreement between two or more observers.
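As a rough sketch of how these two forms of reliability might be checked in practice, the example below estimates test-retest reliability with a Pearson correlation and inter-observer reliability with Cohen’s kappa; the scores and behaviour codes are invented.

```python
# Illustrative sketch (not from the source): two common reliability checks on
# hypothetical data -- a test-retest correlation and Cohen's kappa for agreement.
from scipy.stats import pearsonr
from sklearn.metrics import cohen_kappa_score

# Same test given to the same people on two occasions
scores_time1 = [12, 15, 9, 20, 17, 14]
scores_time2 = [13, 14, 10, 19, 18, 15]
r, _ = pearsonr(scores_time1, scores_time2)
print(f"Test-retest correlation: {r:.2f}")

# Two observers coding the same behaviour into categories
observer_a = ["aggressive", "neutral", "neutral", "aggressive", "neutral"]
observer_b = ["aggressive", "neutral", "aggressive", "aggressive", "neutral"]
kappa = cohen_kappa_score(observer_a, observer_b)
print(f"Cohen's kappa: {kappa:.2f}")
```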

Meta-Analysis

Meta-analysis is a statistical procedure used to combine and synthesize findings from multiple independent studies to estimate the average effect size for a particular research question.

Meta-analysis goes beyond traditional narrative reviews by using statistical methods to integrate the results of several studies, leading to a more objective appraisal of the evidence.

This is done by searching various databases, and then decisions are made about which studies are to be included or excluded.

  • Strengths: increases the validity of the conclusions, as they are based on a wider range of studies.
  • Weaknesses: the research designs of the included studies can vary, so they are not truly comparable.
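One simple way the statistical combination can work is a fixed-effect, inverse-variance weighted average of effect sizes. The sketch below illustrates the idea with invented study results; real meta-analyses typically use dedicated packages and often random-effects models.

```python
# Illustrative sketch (not from the source): pooling hypothetical effect sizes
# with inverse-variance weights (a simple fixed-effect meta-analysis).
import math

# (effect size, standard error) for each hypothetical study
studies = [(0.42, 0.15), (0.31, 0.20), (0.55, 0.10), (0.25, 0.25)]

weights = [1 / se**2 for _, se in studies]            # inverse-variance weights
pooled = sum(w * d for (d, _), w in zip(studies, weights)) / sum(weights)
pooled_se = math.sqrt(1 / sum(weights))

print(f"Pooled effect size: {pooled:.2f} (SE = {pooled_se:.2f})")
```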

Peer Review

A researcher submits an article to a journal. The choice of the journal may be determined by the journal’s audience or prestige.

The journal selects two or more appropriate experts (psychologists working in a similar field) to peer review the article without payment. The peer reviewers assess the methods and design used, the originality of the findings, the validity of the original research findings, and the article’s content, structure, and language.

Feedback from the reviewers determines whether the article is accepted. The article may be accepted as it is, accepted with revisions, sent back to the author to revise and resubmit, or rejected without the possibility of resubmission.

The editor makes the final decision on whether to accept or reject the research report, based on the reviewers’ comments and recommendations.

Peer review is important because it helps prevent faulty data from entering the public domain, provides a way of checking the validity of findings and the quality of the methodology, and is used to assess the research rating of university departments.

Peer review may be an ideal; in practice, there are many problems. For example, it slows publication down and may prevent unusual, new work from being published. Some reviewers might use it as an opportunity to prevent competing researchers from publishing their work.

Some people doubt whether peer review can really prevent the publication of fraudulent research.

The advent of the internet means that more research and academic comment is being published without formal peer review than before, though systems are evolving online in which everyone has a chance to offer opinions and police the quality of research.

Types of Data

  • Quantitative data is numerical data, e.g., reaction time or number of mistakes. It represents how much, how long, or how many there are of something. Tallies of behavioral categories and closed questions in a questionnaire collect quantitative data.
  • Qualitative data is virtually any type of information that can be observed and recorded that is not numerical in nature and can be in the form of written or verbal communication. Open questions in questionnaires and accounts from observational studies collect qualitative data.
  • Primary data is first-hand data collected for the purpose of the investigation.
  • Secondary data is information that has been collected by someone other than the person who is conducting the research e.g. taken from journals, books or articles.

Validity means how well a piece of research actually measures what it sets out to, or how well it reflects the reality it claims to represent.

Validity is whether the observed effect is genuine and represents what is actually out there in the world.

  • Concurrent validity is the extent to which a psychological measure relates to an existing similar measure and obtains close results. For example, a new intelligence test compared to an established test.
  • Face validity: does the test appear to measure what it’s supposed to measure ‘on the face of it’? This is assessed by ‘eyeballing’ the measuring instrument or by passing it to an expert to check.
  • Ecological validity is the extent to which findings from a research study can be generalized to other settings or real life.
  • Temporal validity is the extent to which findings from a research study can be generalized to other historical times.

Features of Science

  • Paradigm – A set of shared assumptions and agreed methods within a scientific discipline.
  • Paradigm shift – The result of the scientific revolution: a significant change in the dominant unifying theory within a scientific discipline.
  • Objectivity – When all sources of personal bias are minimised so as not to distort or influence the research process.
  • Empirical method – Scientific approaches that are based on the gathering of evidence through direct observation and experience.
  • Replicability – The extent to which scientific procedures and findings can be repeated by other researchers.
  • Falsifiability – The principle that a theory cannot be considered scientific unless it admits the possibility of being proved untrue.

Statistical Testing

A significant result is one where there is a low probability that chance factors were responsible for any observed difference, correlation, or association in the variables tested.

If our test is significant, we can reject our null hypothesis and accept our alternative hypothesis.

If our test is not significant, we retain our null hypothesis and cannot accept our alternative hypothesis. A null hypothesis is a statement of no effect.

In psychology, we conventionally use p < 0.05 (as it strikes a balance between the risks of Type I and Type II errors), but p < 0.01 is used in research where errors could cause harm, such as trials of a new drug.

A type I error is when the null hypothesis is rejected when it should have been accepted (happens when a lenient significance level is used, an error of optimism).

A type II error is when the null hypothesis is accepted when it should have been rejected (happens when a stringent significance level is used, an error of pessimism).
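To illustrate the decision rule, the sketch below runs an independent-samples t-test on invented data and compares the resulting p-value with the conventional 0.05 level.

```python
# Illustrative sketch (not from the source): an independent-samples t-test on
# hypothetical scores, with the significance decision described above.
from scipy.stats import ttest_ind

control = [12, 14, 11, 15, 13, 12, 14]        # hypothetical control-group scores
experimental = [16, 18, 15, 17, 19, 16, 18]   # hypothetical experimental-group scores

t_stat, p_value = ttest_ind(experimental, control)
alpha = 0.05   # conventional significance level; 0.01 for higher-stakes research

print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
if p_value < alpha:
    print("Significant: reject the null hypothesis.")
else:
    print("Not significant: retain the null hypothesis.")
```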

Ethical Issues

  • Informed consent means participants are able to make an informed judgment about whether to take part. However, giving full information may cause them to guess the aims of the study and change their behavior.
  • To deal with this, researchers can gain presumptive consent or ask participants to formally indicate their agreement to participate, but this may invalidate the purpose of the study, and it is not guaranteed that participants would fully understand.
  • Deception should only be used when it is approved by an ethics committee, as it involves deliberately misleading or withholding information. Participants should be fully debriefed after the study but debriefing can’t turn the clock back.
  • All participants should be informed at the beginning that they have the right to withdraw if they ever feel distressed or uncomfortable.
  • Losing participants causes bias, as those who stay may be more obedient, and some may not withdraw because they have been given incentives or feel they would be spoiling the study. Researchers can offer the right to withdraw data after participation.
  • Participants should all have protection from harm . The researcher should avoid risks greater than those experienced in everyday life and they should stop the study if any harm is suspected. However, the harm may not be apparent at the time of the study.
  • Confidentiality concerns the communication of personal information. Researchers should not record any names but use numbers or false names, though this may not be enough, as it is sometimes possible to work out who the participants were.


Research Techniques


Definition:

Research techniques refer to the various methods, processes, and tools used to collect, analyze, and interpret data for the purpose of answering research questions or testing hypotheses.

Methods of Research Techniques

The methods of research techniques refer to the overall approaches or frameworks that guide a research study, including the theoretical perspective, research design, sampling strategy, data collection and analysis techniques, and ethical considerations. Some common methods of research techniques are:

  • Quantitative research: This is a research method that focuses on collecting and analyzing numerical data to establish patterns, relationships, and cause-and-effect relationships. Examples of quantitative research techniques are surveys, experiments, and statistical analysis.
  • Qualitative research: This is a research method that focuses on collecting and analyzing non-numerical data, such as text, images, and videos, to gain insights into the subjective experiences and perspectives of the participants. Examples of qualitative research techniques are interviews, focus groups, and content analysis.
  • Mixed-methods research: This is a research method that combines quantitative and qualitative research techniques to provide a more comprehensive understanding of a research question. Examples of mixed-methods research techniques are surveys with open-ended questions and case studies with statistical analysis.
  • Action research: This is a research method that focuses on solving real-world problems by collaborating with stakeholders and using a cyclical process of planning, action, and reflection. Examples of action research techniques are participatory action research and community-based participatory research.
  • Experimental research : This is a research method that involves manipulating one or more variables to observe the effect on an outcome, to establish cause-and-effect relationships. Examples of experimental research techniques are randomized controlled trials and quasi-experimental designs.
  • Observational research: This is a research method that involves observing and recording behavior or phenomena in natural settings to gain insights into the subject of study. Examples of observational research techniques are naturalistic observation and structured observation.

Types of Research Techniques

There are several types of research techniques used in various fields. Some of the most common ones are:

  • Surveys : This is a quantitative research technique that involves collecting data through questionnaires or interviews to gather information from a large group of people.
  • Experiments : This is a scientific research technique that involves manipulating one or more variables to observe the effect on an outcome, to establish cause-and-effect relationships.
  • Case studies: This is a qualitative research technique that involves in-depth analysis of a single case, such as an individual, group, or event, to understand the complexities of the case.
  • Observational studies : This is a research technique that involves observing and recording behavior or phenomena in natural settings to gain insights into the subject of study.
  • Content analysis: This is a research technique used to analyze text or other media content to identify patterns, themes, or meanings.
  • Focus groups: This is a research technique that involves gathering a small group of people to discuss a topic or issue and provide feedback on a product or service.
  • Meta-analysis: This is a statistical research technique that involves combining data from multiple studies to assess the overall effect of a treatment or intervention.
  • Action research: This is a research technique used to solve real-world problems by collaborating with stakeholders and using a cyclical process of planning, action, and reflection.
  • Interviews : Interviews are another technique used in research, and they can be conducted in person or over the phone. They are often used to gather in-depth information about an individual’s experiences or opinions. For example, a researcher might conduct interviews with cancer patients to learn more about their experiences with treatment.

Example of Research Techniques

Here’s an example of how research techniques might be used by a student conducting a research project:

Let’s say a high school student is interested in investigating the impact of social media on mental health. They could use a variety of research techniques to gather data and analyze their findings, including:

  • Literature review : The student could conduct a literature review to gather existing research studies, articles, and books that discuss the relationship between social media and mental health. This will provide a foundation of knowledge on the topic and help the student identify gaps in the research that they could address.
  • Surveys : The student could design and distribute a survey to gather information from a sample of individuals about their social media usage and how it affects their mental health. The survey could include questions about the frequency of social media use, the types of content consumed, and how it makes them feel.
  • Interviews : The student could conduct interviews with individuals who have experienced mental health issues and ask them about their social media use, and how it has impacted their mental health. This could provide a more in-depth understanding of how social media affects people on an individual level.
  • Data analysis : The student could use statistical software to analyze the data collected from the surveys and interviews. This would allow them to identify patterns and relationships between social media usage and mental health outcomes.
  • Report writing : Based on the findings from their research, the student could write a report that summarizes their research methods, findings, and conclusions. They could present their report to their peers or their teacher to share their insights on the topic.

Overall, by using a combination of research techniques, the student can investigate their research question thoroughly and systematically, and make meaningful contributions to the field of social media and mental health research.
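As a purely hypothetical illustration of the data-analysis step, the sketch below relates invented survey responses on daily social media use to an invented wellbeing score; the column names and figures are not from any real study.

```python
# Illustrative sketch (not from the source): a simple analysis of hypothetical
# survey data, summarizing the responses and testing for a correlation.
import pandas as pd
from scipy.stats import pearsonr

survey = pd.DataFrame({
    "hours_social_media": [1.0, 2.5, 3.0, 4.5, 5.0, 6.5, 2.0, 7.0],  # invented
    "wellbeing_score":    [8, 7, 7, 5, 5, 4, 8, 3],                  # invented
})

r, p = pearsonr(survey["hours_social_media"], survey["wellbeing_score"])
print(survey.describe())
print(f"Correlation between use and wellbeing: r = {r:.2f}, p = {p:.3f}")
```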

Purpose of Research Techniques

The Purposes of Research Techniques are as follows:

  • To investigate and gain knowledge about a particular phenomenon or topic
  • To generate new ideas and theories
  • To test existing theories and hypotheses
  • To identify and evaluate potential solutions to problems
  • To gather data and evidence to inform decision-making
  • To identify trends and patterns in data
  • To explore cause-and-effect relationships between variables
  • To develop and refine measurement tools and methodologies
  • To establish the reliability and validity of research findings
  • To communicate research findings to others in a clear and concise manner.

Applications of Research Techniques

Here are some applications of research techniques:

  • Scientific research: to explore, investigate and understand natural phenomena, and to generate new knowledge and theories.
  • Market research: to collect and analyze data about consumer behavior, preferences, and trends, and to help businesses make informed decisions about product development, pricing, and marketing strategies.
  • Medical research : to study diseases and their treatments, and to develop new medicines, therapies, and medical technologies.
  • Social research : to explore and understand human behavior, attitudes, and values, and to inform public policy decisions related to education, health care, social welfare, and other areas.
  • Educational research : to study teaching and learning processes, and to develop effective teaching methods and instructional materials.
  • Environmental research: to investigate the impact of human activities on the environment, and to develop solutions to environmental problems.
  • Engineering Research: to design, develop, and improve products, processes, and systems, and to optimize their performance and efficiency.
  • Criminal justice research : to study crime patterns, causes, and prevention strategies, and to evaluate the effectiveness of criminal justice policies and programs.
  • Psychological research : to investigate human cognition, emotion, and behavior, and to develop interventions to address mental health issues.
  • Historical research: to study past events, societies, and cultures, and to develop an understanding of how they shape our present.



Harnessing the power of poetry in academic research – the author’s use of poetry as a tool for analysis

Holly Bennion, PhD graduate at Durham University 9 Sep 2024

This blog post focuses on my approach to using poetry as an analytical tool in a recent empirical study. There is an exciting body of research highlighting the potential for incorporating poetry into the various stages of the research process. Writing and sharing poems can be an effective data collection method, whereby poems are constructed by/with participants to explore their stories, feelings and memories. Poetry can also be used as an analytical/interpretative lens – for example, Carr (2003) created poems to document the experiences of family members of hospitalised relatives, transforming interview transcripts into poetry. Researchers can also use poetry to disseminate educational research and extend the tone and scope of research communication. The growing emergence of poetry in research, underpinned by arts-based research, is also connected to theoretical insights by postmodern, poststructural and feminist theories, which invites transformative and inclusive possibilities for research that goes beyond hegemonic and traditional forms of knowledge (Cutts & Sankofa Waters, 2019).

My PhD research explored children’s experiences and perspectives of belonging and school inclusion. I explored the interconnectivity in discourses on self-identification, otherness and school inclusion in multilingual and multicultural spaces. The methods included focus groups, children’s artwork, co-analysis with participants, and dance and drama workshops. As part of the data analysis process, I chose to experiment with poems. This process involved going back and forth between the transcriptions, the NVivo coding, and looking closely at the participants’ artwork and what they said about it.

To begin the process, I experimented with free-verse poetry, whereby I attempted to use poetry to identify connections between participants’ comments, further identify themes and keywords, and document my own reflections and feelings as I delved into the data.

Then, I began experimenting with structure and specific words and phrases. I used linguistic devices such as repetition to illustrate aspects that the participants felt strongly about or things they mentioned frequently. I experimented with using short, snappy lines or long, stream-of-consciousness lines to imply the tone of voice and the atmosphere of the workshops. I selected six poems to include in my thesis. Below is one example, which takes verbatim the words of the participants:

Something for you

It belongs to me and

I own it, just mine, not sharing

I may share it sometimes

My life, my bed

The first part of this poem reflects Aasab’s comment: ‘belonging is something for you, it’s like a surprise for you and we have to keep it’. I was interested in her view of belonging as a ‘surprise’. The exclamation mark was used to convey her excited tone of voice. The repetition of ‘my’ – ‘my life, my bed, my things’ – was utilised to highlight how participants often distinguished between what is ‘mine’ and ‘yours’.

‘Through poetry, I was liberated from the structured form of academic writing; I could experiment with themes, form, language, tone and imagery to interpret and represent the children’s comments about belonging and school inclusion.’

The notion of material possessions and human–object relationships was significant in the findings. Furman and colleagues (2007) note that poetry can be a powerful tool for communication through the playfulness of metaphor, alliteration and visual elements. Through poetry, I was liberated from the structured form of academic writing; I could experiment with themes, form, language, tone and imagery to interpret and represent the children’s comments about belonging and school inclusion. I found that poetry as an analysis tool gave me enthusiasm for and confidence in my data.

Reflecting on my research approach, I advocate that poetry can serve as a valuable analysis tool for research, and it can be utilised as part of a multi-level approach. Poetry can be a powerful tool for communicating the researcher’s reflections and interpretations of the data and representing the voices of participants in engaging ways. Importantly, I was not seeking to create a single narrative through the poetry. Poetry is open to interpretation; it is evocative and invites emotional engagement. Like my data collection methods – which invited collaboration, imagination and contradictions among participants – the poetry was an interesting tool that enabled multiple narratives, opinions and clarifications for the researcher and audience.

To conclude, I quote poet and academic Neil McBride (2009, p. 43):

‘[Poetry] questions, it leaves frayed edges and loose writes. It draws out the hidden, the spiritual, the underlying rhythms of life that we swamp with information, noise and news channels.’

Holly will be presenting at the  BERA Conference 2024 and WERA Focal Meeting on Monday 9 September at 12:45pm for a symposium panel on ‘Migration and Education across the Four Nations of the UK’. 

Carr, J. (2003). Poetic expressions of vigilance. Qualitative Health Research, 13(9), 1324–1331. https://doi.org/10.1177/1049732303254018

Cutts, Q., & Sankofa Waters, M. (2019). Poetic approaches to qualitative data analysis. Education Publications, 145. https://doi.org/10.1093/acrefore/9780190264093.013.993

Furman, R., Langer, C., Davis, C. S., Gallardo, H. P., & Kulkarni, S. (2007). Expressive, research and reflective poetry as qualitative inquiry: A study of adolescent identity. Qualitative Research, 7(3), 301–315. https://doi.org/10.1177/1468794107078511

McBride, N. (2009, December 3). Poetry cornered. Times Higher Education, 1(925), 42–44. https://www.timeshighereducation.com/features/poetry-cornered/409334.article


Frequently asked questions

How do I decide which research methods to use?

The research methods you use depend on the type of data you need to answer your research question .

  • If you want to measure something or test a hypothesis , use quantitative methods . If you want to explore ideas, thoughts and meanings, use qualitative methods .
  • If you want to analyze a large amount of readily-available data, use secondary data. If you want data specific to your purposes with control over how it is generated, collect primary data.
  • If you want to establish cause-and-effect relationships between variables , use experimental methods. If you want to understand the characteristics of a research subject, use descriptive methods.

Frequently asked questions: Methodology

Attrition refers to participants leaving a study. It always happens to some extent—for example, in randomized controlled trials for medical research.

Differential attrition occurs when attrition or dropout rates differ systematically between the intervention and the control group . As a result, the characteristics of the participants who drop out differ from the characteristics of those who stay in the study. Because of this, study results may be biased .

Action research is conducted in order to solve a particular issue immediately, while case studies are often conducted over a longer period of time and focus more on observing and analyzing a particular ongoing phenomenon.

Action research is focused on solving a problem or informing individual and community-based knowledge in a way that impacts teaching, learning, and other related processes. It is less focused on contributing theoretical input, instead producing actionable input.

Action research is particularly popular with educators as a form of systematic inquiry because it prioritizes reflection and bridges the gap between theory and practice. Educators are able to simultaneously investigate an issue as they solve it, and the method is very iterative and flexible.

A cycle of inquiry is another name for action research . It is usually visualized in a spiral shape following a series of steps, such as “planning → acting → observing → reflecting.”

To make quantitative observations , you need to use instruments that are capable of measuring the quantity you want to observe. For example, you might use a ruler to measure the length of an object or a thermometer to measure its temperature.

Criterion validity and construct validity are both types of measurement validity . In other words, they both show you how accurately a method measures something.

While construct validity is the degree to which a test or other measurement method measures what it claims to measure, criterion validity is the degree to which a test can predictively (in the future) or concurrently (in the present) measure something.

Construct validity is often considered the overarching type of measurement validity . You need to have face validity , content validity , and criterion validity in order to achieve construct validity.

Convergent validity and discriminant validity are both subtypes of construct validity . Together, they help you evaluate whether a test measures the concept it was designed to measure.

  • Convergent validity indicates whether a test that is designed to measure a particular construct correlates with other tests that assess the same or similar construct.
  • Discriminant validity indicates whether two tests that should not be highly related to each other are indeed not related. This type of validity is also called divergent validity .

You need to assess both in order to demonstrate construct validity. Neither one alone is sufficient for establishing construct validity.

  • Discriminant validity indicates whether two tests that should not be highly related to each other are indeed not related

Content validity shows you how accurately a test or other measurement method taps  into the various aspects of the specific construct you are researching.

In other words, it helps you answer the question: “does the test measure all aspects of the construct I want to measure?” If it does, then the test has high content validity.

The higher the content validity, the more accurate the measurement of the construct.

If the test fails to include parts of the construct, or irrelevant parts are included, the validity of the instrument is threatened, which brings your results into question.

Face validity and content validity are similar in that they both evaluate how suitable the content of a test is. The difference is that face validity is subjective, and assesses content at surface level.

When a test has strong face validity, anyone would agree that the test’s questions appear to measure what they are intended to measure.

For example, looking at a 4th grade math test consisting of problems in which students have to add and multiply, most people would agree that it has strong face validity (i.e., it looks like a math test).

On the other hand, content validity evaluates how well a test represents all the aspects of a topic. Assessing content validity is more systematic and relies on expert evaluation of each question, analyzing whether each one covers the aspects that the test was designed to cover.

A 4th grade math test would have high content validity if it covered all the skills taught in that grade. Experts (in this case, math teachers) would have to evaluate the content validity by comparing the test to the learning objectives.

Snowball sampling is a non-probability sampling method . Unlike probability sampling (which involves some form of random selection ), the initial individuals selected to be studied are the ones who recruit new participants.

Because not every member of the target population has an equal chance of being recruited into the sample, selection in snowball sampling is non-random.

Snowball sampling is a non-probability sampling method , where there is not an equal chance for every member of the population to be included in the sample .

This means that you cannot use inferential statistics and make generalizations —often the goal of quantitative research . As such, a snowball sample is not representative of the target population and is usually a better fit for qualitative research .

Snowball sampling relies on the use of referrals. Here, the researcher recruits one or more initial participants, who then recruit the next ones.

Participants share similar characteristics and/or know each other. Because of this, not every member of the population has an equal chance of being included in the sample, giving rise to sampling bias .

Snowball sampling is best used in the following cases:

  • If there is no sampling frame available (e.g., people with a rare disease)
  • If the population of interest is hard to access or locate (e.g., people experiencing homelessness)
  • If the research focuses on a sensitive topic (e.g., extramarital affairs)
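A toy simulation may help show the referral mechanism described above: starting from one initial participant, each recruit brings in the people they know until a target sample size is reached. The referral network and names below are entirely invented.

```python
# Illustrative sketch (not from the source): a toy snowball-sampling simulation
# over a hypothetical "who refers whom" network.
referrals = {
    "seed1": ["p2", "p3"],
    "p2": ["p4", "p5"],
    "p3": ["p5", "p6"],
    "p4": [],
    "p5": ["p7"],
    "p6": [],
    "p7": [],
}

sample, wave = [], ["seed1"]          # start from one initial participant
while wave and len(sample) < 6:       # stop at a target sample size
    person = wave.pop(0)
    if person not in sample:
        sample.append(person)
        wave.extend(referrals.get(person, []))  # add this person's referrals

print("Snowball sample:", sample)
```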

The reproducibility and replicability of a study can be ensured by writing a transparent, detailed method section and using clear, unambiguous language.

Reproducibility and replicability are related terms.

  • Reproducing research entails reanalyzing the existing data in the same manner.
  • Replicating (or repeating ) the research entails reconducting the entire analysis, including the collection of new data . 
  • A successful reproduction shows that the data analyses were conducted in a fair and honest manner.
  • A successful replication shows that the reliability of the results is high.

Stratified sampling and quota sampling both involve dividing the population into subgroups and selecting units from each subgroup. The purpose in both cases is to select a representative sample and/or to allow comparisons between subgroups.

The main difference is that in stratified sampling, you draw a random sample from each subgroup ( probability sampling ). In quota sampling you select a predetermined number or proportion of units, in a non-random manner ( non-probability sampling ).
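As a minimal sketch of the stratified (probability) approach, the pandas example below draws a random 50% sample from each of three invented strata; the dataset and the "year_group" column are hypothetical.

```python
# Illustrative sketch (not from the source): stratified random sampling by
# drawing a fixed fraction at random within each subgroup.
import pandas as pd

population = pd.DataFrame({
    "student_id": range(1, 13),
    "year_group": ["Y1", "Y1", "Y1", "Y1", "Y2", "Y2", "Y2", "Y2",
                   "Y3", "Y3", "Y3", "Y3"],
})

# A random 50% sample drawn separately from each stratum (probability sampling)
stratified = (
    population.groupby("year_group", group_keys=False)
    .sample(frac=0.5, random_state=42)
)
print(stratified)
```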

Purposive and convenience sampling are both sampling methods that are typically used in qualitative data collection.

A convenience sample is drawn from a source that is conveniently accessible to the researcher. Convenience sampling does not distinguish characteristics among the participants. On the other hand, purposive sampling focuses on selecting participants possessing characteristics associated with the research study.

The findings of studies based on either convenience or purposive sampling can only be generalized to the (sub)population from which the sample is drawn, and not to the entire population.

Random sampling or probability sampling is based on random selection. This means that each unit has an equal chance (i.e., equal probability) of being included in the sample.

On the other hand, convenience sampling involves stopping people at random, which means that not everyone has an equal chance of being selected depending on the place, time, or day you are collecting your data.

Convenience sampling and quota sampling are both non-probability sampling methods. They both use non-random criteria like availability, geographical proximity, or expert knowledge to recruit study participants.

However, in convenience sampling, you continue to sample units or cases until you reach the required sample size.

In quota sampling, you first need to divide your population of interest into subgroups (strata) and estimate their proportions (quota) in the population. Then you can start your data collection, using convenience sampling to recruit participants, until the proportions in each subgroup coincide with the estimated proportions in the population.

A sampling frame is a list of every member in the entire population . It is important that the sampling frame is as complete as possible, so that your sample accurately reflects your population.

Stratified and cluster sampling may look similar, but bear in mind that groups created in cluster sampling are heterogeneous , so the individual characteristics in the cluster vary. In contrast, groups created in stratified sampling are homogeneous , as units share characteristics.

Relatedly, in cluster sampling you randomly select entire groups and include all units of each group in your sample. However, in stratified sampling, you select some units of all groups and include them in your sample. In this way, both methods can ensure that your sample is representative of the target population .

A systematic review is secondary research because it uses existing research. You don’t collect new data yourself.

The key difference between observational studies and experimental designs is that a well-done observational study does not influence the responses of participants, while experiments do have some sort of treatment condition applied to at least some participants by random assignment .

An observational study is a great choice for you if your research question is based purely on observations. If there are ethical, logistical, or practical concerns that prevent you from conducting a traditional experiment , an observational study may be a good choice. In an observational study, there is no interference or manipulation of the research subjects, as well as no control or treatment groups .

It’s often best to ask a variety of people to review your measurements. You can ask experts, such as other researchers, or laypeople, such as potential participants, to judge the face validity of tests.

While experts have a deep understanding of research methods , the people you’re studying can provide you with valuable insights you may have missed otherwise.

Face validity is important because it’s a simple first step to measuring the overall validity of a test or technique. It’s a relatively intuitive, quick, and easy way to start checking whether a new measure seems useful at first glance.

Good face validity means that anyone who reviews your measure says that it seems to be measuring what it’s supposed to. With poor face validity, someone reviewing your measure may be left confused about what you’re measuring and why you’re using this method.

Face validity is about whether a test appears to measure what it’s supposed to measure. This type of validity is concerned with whether a measure seems relevant and appropriate for what it’s assessing only on the surface.

Statistical analyses are often applied to test validity with data from your measures. You test convergent validity and discriminant validity with correlations to see if results from your test are positively or negatively related to those of other established tests.

You can also use regression analyses to assess whether your measure is actually predictive of outcomes that you expect it to predict theoretically. A regression analysis that supports your expectations strengthens your claim of construct validity .

When designing or evaluating a measure, construct validity helps you ensure you’re actually measuring the construct you’re interested in. If you don’t have construct validity, you may inadvertently measure unrelated or distinct constructs and lose precision in your research.

Construct validity is often considered the overarching type of measurement validity ,  because it covers all of the other types. You need to have face validity , content validity , and criterion validity to achieve construct validity.

Construct validity is about how well a test measures the concept it was designed to evaluate. It’s one of four types of measurement validity; the others are face validity, content validity, and criterion validity.

There are two subtypes of construct validity.

  • Convergent validity : The extent to which your measure corresponds to measures of related constructs
  • Discriminant validity : The extent to which your measure is unrelated or negatively related to measures of distinct constructs

Naturalistic observation is a valuable tool because of its flexibility, external validity , and suitability for topics that can’t be studied in a lab setting.

The downsides of naturalistic observation include its lack of scientific control , ethical considerations , and potential for bias from observers and subjects.

Naturalistic observation is a qualitative research method where you record the behaviors of your research subjects in real world settings. You avoid interfering or influencing anything in a naturalistic observation.

You can think of naturalistic observation as “people watching” with a purpose.

A dependent variable is what changes as a result of the independent variable manipulation in experiments . It’s what you’re interested in measuring, and it “depends” on your independent variable.

In statistics, dependent variables are also called:

  • Response variables (they respond to a change in another variable)
  • Outcome variables (they represent the outcome you want to measure)
  • Left-hand-side variables (they appear on the left-hand side of a regression equation)

An independent variable is the variable you manipulate, control, or vary in an experimental study to explore its effects. It’s called “independent” because it’s not influenced by any other variables in the study.

Independent variables are also called:

  • Explanatory variables (they explain an event or outcome)
  • Predictor variables (they can be used to predict the value of a dependent variable)
  • Right-hand-side variables (they appear on the right-hand side of a regression equation).
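To show the left-hand-side/right-hand-side idea concretely, the sketch below fits a simple linear regression with NumPy, predicting an invented dependent variable (reaction time) from an invented independent variable (hours of sleep).

```python
# Illustrative sketch (not from the source): a simple linear regression where
# the dependent variable is predicted from the independent variable.
import numpy as np

hours_sleep = np.array([4, 5, 6, 7, 8, 9])               # independent / predictor variable
reaction_ms = np.array([420, 400, 370, 350, 330, 320])   # dependent / outcome variable

slope, intercept = np.polyfit(hours_sleep, reaction_ms, deg=1)
print(f"slope = {slope:.1f} ms per extra hour of sleep")
print(f"intercept = {intercept:.1f} ms")
```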

As a rule of thumb, questions related to thoughts, beliefs, and feelings work well in focus groups. Take your time formulating strong questions, paying special attention to phrasing. Be careful to avoid leading questions , which can bias your responses.

Overall, your focus group questions should be:

  • Open-ended and flexible
  • Impossible to answer with “yes” or “no” (questions that start with “why” or “how” are often best)
  • Unambiguous, getting straight to the point while still stimulating discussion
  • Unbiased and neutral

A structured interview is a data collection method that relies on asking questions in a set order to collect data on a topic. They are often quantitative in nature. Structured interviews are best used when: 

  • You already have a very clear understanding of your topic. Perhaps significant research has already been conducted, or you have done some prior research yourself, but you already possess a baseline for designing strong structured questions.
  • You are constrained in terms of time or resources and need to analyze your data quickly and efficiently.
  • Your research question depends on strong parity between participants, with environmental conditions held constant.

More flexible interview options include semi-structured interviews , unstructured interviews , and focus groups .

Social desirability bias is the tendency for interview participants to give responses that will be viewed favorably by the interviewer or other participants. It occurs in all types of interviews and surveys , but is most common in semi-structured interviews , unstructured interviews , and focus groups .

Social desirability bias can be mitigated by ensuring participants feel at ease and comfortable sharing their views. Make sure to pay attention to your own body language and any physical or verbal cues, such as nodding or widening your eyes.

This type of bias can also occur in observations if the participants know they’re being observed. They might alter their behavior accordingly.

The interviewer effect is a type of bias that emerges when a characteristic of an interviewer (race, age, gender identity, etc.) influences the responses given by the interviewee.

There is a risk of an interviewer effect in all types of interviews , but it can be mitigated by writing really high-quality interview questions.

A semi-structured interview is a blend of structured and unstructured types of interviews. Semi-structured interviews are best used when:

  • You have prior interview experience. Spontaneous questions are deceptively challenging, and it’s easy to accidentally ask a leading question or make a participant uncomfortable.
  • Your research question is exploratory in nature. Participant answers can guide future research questions and help you develop a more robust knowledge base for future research.

An unstructured interview is the most flexible type of interview, but it is not always the best fit for your research topic.

Unstructured interviews are best used when:

  • You are an experienced interviewer and have a very strong background in your research topic, since it is challenging to ask spontaneous, colloquial questions.
  • Your research question is exploratory in nature. While you may have developed hypotheses, you are open to discovering new or shifting viewpoints through the interview process.
  • You are seeking descriptive data, and are ready to ask questions that will deepen and contextualize your initial thoughts and hypotheses.
  • Your research depends on forming connections with your participants and making them feel comfortable revealing deeper emotions, lived experiences, or thoughts.

The four most common types of interviews are:

  • Structured interviews : The questions are predetermined in both topic and order. 
  • Semi-structured interviews : A few questions are predetermined, but other questions aren’t planned.
  • Unstructured interviews : None of the questions are predetermined.
  • Focus group interviews : The questions are presented to a group instead of one individual.

Deductive reasoning is commonly used in scientific research, and it’s especially associated with quantitative research .

In research, you might have come across something called the hypothetico-deductive method . It’s the scientific method of testing hypotheses to check whether your predictions are substantiated by real-world data.

Deductive reasoning is a logical approach where you progress from general ideas to specific conclusions. It’s often contrasted with inductive reasoning , where you start with specific observations and form general conclusions.

Deductive reasoning is also called deductive logic.

There are many different types of inductive reasoning that people use formally or informally.

Here are a few common types:

  • Inductive generalization : You use observations about a sample to come to a conclusion about the population it came from.
  • Statistical generalization: You use specific numbers about samples to make statements about populations.
  • Causal reasoning: You make cause-and-effect links between different things.
  • Sign reasoning: You make a conclusion about a correlational relationship between different things.
  • Analogical reasoning: You make a conclusion about something based on its similarities to something else.

Inductive reasoning is a bottom-up approach, while deductive reasoning is top-down.

Inductive reasoning takes you from the specific to the general, while in deductive reasoning, you make inferences by going from general premises to specific conclusions.

In inductive research , you start by making observations or gathering data. Then, you take a broad scan of your data and search for patterns. Finally, you make general conclusions that you might incorporate into theories.

Inductive reasoning is a method of drawing conclusions by going from the specific to the general. It’s usually contrasted with deductive reasoning, where you proceed from general information to specific conclusions.

Inductive reasoning is also called inductive logic or bottom-up reasoning.

A hypothesis states your predictions about what your research will find. It is a tentative answer to your research question that has not yet been tested. For some research projects, you might have to write several hypotheses that address different aspects of your research question.

A hypothesis is not just a guess — it should be based on existing theories and knowledge. It also has to be testable, which means you can support or refute it through scientific research methods (such as experiments, observations and statistical analysis of data).

Triangulation can help:

  • Reduce research bias that comes from using a single method, theory, or investigator
  • Enhance validity by approaching the same topic with different tools
  • Establish credibility by giving you a complete picture of the research problem

But triangulation can also pose problems:

  • It’s time-consuming and labor-intensive, often involving an interdisciplinary team.
  • Your results may be inconsistent or even contradictory.

There are four main types of triangulation :

  • Data triangulation : Using data from different times, spaces, and people
  • Investigator triangulation : Involving multiple researchers in collecting or analyzing data
  • Theory triangulation : Using varying theoretical perspectives in your research
  • Methodological triangulation : Using different methodologies to approach the same topic

Many academic fields use peer review , largely to determine whether a manuscript is suitable for publication. Peer review enhances the credibility of the published manuscript.

However, peer review is also common in non-academic settings. The United Nations, the European Union, and many individual nations use peer review to evaluate grant applications. It is also widely used in medical and health-related fields as a teaching or quality-of-care measure. 

Peer assessment is often used in the classroom as a pedagogical tool. Both receiving feedback and providing it are thought to enhance the learning process, helping students think critically and collaboratively.

Peer review can stop obviously problematic, falsified, or otherwise untrustworthy research from being published. It also represents an excellent opportunity to get feedback from renowned experts in your field. It acts as a first defense, helping you ensure your argument is clear and that there are no gaps, vague terms, or unanswered questions for readers who weren’t involved in the research process.

Peer-reviewed articles are considered a highly credible source due to this stringent process they go through before publication.

In general, the peer review process follows these steps: 

  • First, the author submits the manuscript to the editor.
  • The editor then either rejects the manuscript and sends it back to the author, or sends it onward to the selected peer reviewer(s).
  • Next, the peer review process occurs. The reviewer provides feedback, addressing any major or minor issues with the manuscript, and gives their advice regarding what edits should be made. 
  • Lastly, the edited manuscript is sent back to the author. They input the edits and resubmit it to the editor for publication.

Exploratory research is often used when the issue you’re studying is new or when the data collection process is challenging for some reason.

You can use exploratory research if you have a general idea or a specific question that you want to study but there is no preexisting knowledge or paradigm with which to study it.

Exploratory research is a methodology approach that explores research questions that have not previously been studied in depth. It is often used when the issue you’re studying is new, or the data collection process is challenging in some way.

Explanatory research is used to investigate how or why a phenomenon occurs. Therefore, this type of research is often one of the first stages in the research process , serving as a jumping-off point for future research.

Exploratory research aims to explore the main aspects of an under-researched problem, while explanatory research aims to explain the causes and consequences of a well-defined problem.

Explanatory research is a research method used to investigate how or why something occurs when only a small amount of information is available pertaining to that topic. It can help you increase your understanding of a given topic.

Clean data are valid, accurate, complete, consistent, unique, and uniform. Dirty data include inconsistencies and errors.

Dirty data can come from any part of the research process, including poor research design , inappropriate measurement materials, or flawed data entry.

Data cleaning takes place between data collection and data analyses. But you can use some methods even before collecting data.

For clean data, you should start by designing measures that collect valid data. Data validation at the time of data entry or collection helps you minimize the amount of data cleaning you’ll need to do.

After data collection, you can use data standardization and data transformation to clean your data. You’ll also deal with any missing values, outliers, and duplicate values.

Every dataset requires different techniques to clean dirty data , but you need to address these issues in a systematic way. You focus on finding and resolving data points that don’t agree or fit with the rest of your dataset.

These data might be missing values, outliers, duplicate values, incorrectly formatted, or irrelevant. You’ll start with screening and diagnosing your data. Then, you’ll often standardize and accept or remove data to make your dataset consistent and valid.
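As an illustration, here is a minimal Python sketch of these cleaning steps using pandas; the file name "survey.csv" and the column name "weight_kg" are hypothetical placeholders, not references to any particular dataset.

import pandas as pd

df = pd.read_csv("survey.csv")                    # hypothetical raw dataset

df = df.drop_duplicates()                         # remove exact duplicate rows
df["weight_kg"] = df["weight_kg"].fillna(df["weight_kg"].median())   # impute missing values

# Flag and drop potential outliers with the common interquartile-range rule
q1, q3 = df["weight_kg"].quantile([0.25, 0.75])
iqr = q3 - q1
df = df[df["weight_kg"].between(q1 - 1.5 * iqr, q3 + 1.5 * iqr)]

In practice, whether you impute, correct, or remove a value depends on your study design, so treat the rules above as examples rather than defaults.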

Data cleaning is necessary for valid and appropriate analyses. Dirty data contain inconsistencies or errors , but cleaning your data helps you minimize or resolve these.

Without data cleaning, you could end up with a Type I or II error in your conclusion. These erroneous conclusions can have important practical consequences, because they lead to misplaced investments or missed opportunities.

Data cleaning involves spotting and resolving potential data inconsistencies or errors to improve your data quality. An error is any value (e.g., recorded weight) that doesn’t reflect the true value (e.g., actual weight) of something that’s being measured.

In this process, you review, analyze, detect, modify, or remove “dirty” data to make your dataset “clean.” Data cleaning is also called data cleansing or data scrubbing.

Research misconduct means making up or falsifying data, manipulating data analyses, or misrepresenting results in research reports. It’s a form of academic fraud.

These actions are committed intentionally and can have serious consequences; research misconduct is not a simple mistake or a point of disagreement but a serious ethical failure.

Anonymity means you don’t know who the participants are, while confidentiality means you know who they are but remove identifying information from your research report. Both are important ethical considerations .

You can only guarantee anonymity by not collecting any personally identifying information—for example, names, phone numbers, email addresses, IP addresses, physical characteristics, photos, or videos.

You can keep data confidential by using aggregate information in your research report, so that you only refer to groups of participants rather than individuals.

Research ethics matter for scientific integrity, human rights and dignity, and collaboration between science and society. These principles make sure that participation in studies is voluntary, informed, and safe.

Ethical considerations in research are a set of principles that guide your research designs and practices. These principles include voluntary participation, informed consent, anonymity, confidentiality, potential for harm, and results communication.

Scientists and researchers must always adhere to a certain code of conduct when collecting data from others .

These considerations protect the rights of research participants, enhance research validity , and maintain scientific integrity.

In multistage sampling , you can use probability or non-probability sampling methods .

For a probability sample, you have to conduct probability sampling at every stage.

You can mix it up by using simple random sampling , systematic sampling , or stratified sampling to select units at different stages, depending on what is applicable and relevant to your study.

Multistage sampling can simplify data collection when you have large, geographically spread samples, and you can obtain a probability sample without a complete sampling frame.

But multistage sampling may not lead to a representative sample, and larger samples are needed for multistage samples to achieve the statistical properties of simple random samples .

These are four of the most common mixed methods designs :

  • Convergent parallel: Quantitative and qualitative data are collected at the same time and analyzed separately. After both analyses are complete, compare your results to draw overall conclusions. 
  • Embedded: Quantitative and qualitative data are collected at the same time, but within a larger quantitative or qualitative design. One type of data is secondary to the other.
  • Explanatory sequential: Quantitative data is collected and analyzed first, followed by qualitative data. You can use this design if you think your qualitative data will explain and contextualize your quantitative findings.
  • Exploratory sequential: Qualitative data is collected and analyzed first, followed by quantitative data. You can use this design if you think the quantitative data will confirm or validate your qualitative findings.

Triangulation in research means using multiple datasets, methods, theories and/or investigators to address a research question. It’s a research strategy that can help you enhance the validity and credibility of your findings.

Triangulation is mainly used in qualitative research , but it’s also commonly applied in quantitative research . Mixed methods research always uses triangulation.

In multistage sampling , or multistage cluster sampling, you draw a sample from a population using smaller and smaller groups at each stage.

This method is often used to collect data from a large, geographically spread group of people in national surveys, for example. You take advantage of hierarchical groupings (e.g., from state to city to neighborhood) to create a sample that’s less expensive and time-consuming to collect data from.

No, the steepness or slope of the line isn’t related to the correlation coefficient value. The correlation coefficient only tells you how closely your data fit on a line, so two datasets with the same correlation coefficient can have very different slopes.

To find the slope of the line, you’ll need to perform a regression analysis .
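A short Python sketch (using only NumPy, with made-up numbers) illustrates the distinction: the two series below are both perfectly correlated with x, yet their regression slopes differ.

import numpy as np

x = np.arange(1, 11, dtype=float)
y_steep = 2.0 * x        # slope 2
y_shallow = 0.5 * x      # slope 0.5

for y in (y_steep, y_shallow):
    r = np.corrcoef(x, y)[0, 1]              # Pearson's r
    slope, intercept = np.polyfit(x, y, 1)   # simple linear regression fit
    print(f"r = {r:.2f}, slope = {slope:.2f}")
# Both series give r = 1.00, while the slopes are 2.00 and 0.50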

Correlation coefficients always range between -1 and 1.

The sign of the coefficient tells you the direction of the relationship: a positive value means the variables change together in the same direction, while a negative value means they change together in opposite directions.

The absolute value of a number is equal to the number without its sign. The absolute value of a correlation coefficient tells you the magnitude of the correlation: the greater the absolute value, the stronger the correlation.

These are the assumptions your data must meet if you want to use Pearson’s r :

  • Both variables are on an interval or ratio level of measurement
  • Data from both variables follow normal distributions
  • Your data have no outliers
  • Your data is from a random or representative sample
  • You expect a linear relationship between the two variables

Quantitative research designs can be divided into two main categories:

  • Correlational and descriptive designs are used to investigate characteristics, averages, trends, and associations between variables.
  • Experimental and quasi-experimental designs are used to test causal relationships .

Qualitative research designs tend to be more flexible. Common types of qualitative design include case study , ethnography , and grounded theory designs.

A well-planned research design helps ensure that your methods match your research aims, that you collect high-quality data, and that you use the right kind of analysis to answer your questions, utilizing credible sources . This allows you to draw valid , trustworthy conclusions.

The priorities of a research design can vary depending on the field, but you usually have to specify:

  • Your research questions and/or hypotheses
  • Your overall approach (e.g., qualitative or quantitative )
  • The type of design you’re using (e.g., a survey , experiment , or case study )
  • Your sampling methods or criteria for selecting subjects
  • Your data collection methods (e.g., questionnaires , observations)
  • Your data collection procedures (e.g., operationalization , timing and data management)
  • Your data analysis methods (e.g., statistical tests  or thematic analysis )

A research design is a strategy for answering your research question. It defines your overall approach and determines how you will collect and analyze data.

Questionnaires can be self-administered or researcher-administered.

Self-administered questionnaires can be delivered online or in paper-and-pen formats, in person or through mail. All questions are standardized so that all respondents receive the same questions with identical wording.

Researcher-administered questionnaires are interviews that take place by phone, in-person, or online between researchers and respondents. You can gain deeper insights by clarifying questions for respondents or asking follow-up questions.

You can organize the questions logically, with a clear progression from simple to complex, or randomly between respondents. A logical flow helps respondents process the questionnaire more easily and quickly, but it may lead to bias. Randomization can minimize the bias from order effects.

Closed-ended, or restricted-choice, questions offer respondents a fixed set of choices to select from. These questions are easier to answer quickly.

Open-ended or long-form questions allow respondents to answer in their own words. Because there are no restrictions on their choices, respondents can answer in ways that researchers may not have otherwise considered.

A questionnaire is a data collection tool or instrument, while a survey is an overarching research method that involves collecting and analyzing data from people using questionnaires.

The third variable and directionality problems are two main reasons why correlation isn’t causation .

The third variable problem means that a confounding variable affects both variables to make them seem causally related when they are not.

The directionality problem is when two variables correlate and might actually have a causal relationship, but it’s impossible to conclude which variable causes changes in the other.

Correlation describes an association between variables : when one variable changes, so does the other. A correlation is a statistical indicator of the relationship between variables.

Causation means that changes in one variable bring about changes in the other (i.e., there is a cause-and-effect relationship between variables). The two variables are correlated with each other, and there’s also a causal link between them.

While causation and correlation can exist simultaneously, correlation does not imply causation. In other words, correlation is simply a relationship where A relates to B—but A doesn’t necessarily cause B to happen (or vice versa). Mistaking correlation for causation is a common error and can lead to false cause fallacy .

Controlled experiments establish causality, whereas correlational studies only show associations between variables.

  • In an experimental design , you manipulate an independent variable and measure its effect on a dependent variable. Other variables are controlled so they can’t impact the results.
  • In a correlational design , you measure variables without manipulating any of them. You can test whether your variables change together, but you can’t be sure that one variable caused a change in another.

In general, correlational research is high in external validity while experimental research is high in internal validity .

A correlation is usually tested for two variables at a time, but you can test correlations between three or more variables.

A correlation coefficient is a single number that describes the strength and direction of the relationship between your variables.

Different types of correlation coefficients might be appropriate for your data based on their levels of measurement and distributions . The Pearson product-moment correlation coefficient (Pearson’s r ) is commonly used to assess a linear relationship between two quantitative variables.

A correlational research design investigates relationships between two variables (or more) without the researcher controlling or manipulating any of them. It’s a non-experimental type of quantitative research .

A correlation reflects the strength and/or direction of the association between two or more variables.

  • A positive correlation means that both variables change in the same direction.
  • A negative correlation means that the variables change in opposite directions.
  • A zero correlation means there’s no relationship between the variables.

Random error  is almost always present in scientific studies, even in highly controlled settings. While you can’t eradicate it completely, you can reduce random error by taking repeated measurements, using a large sample, and controlling extraneous variables .

You can avoid systematic error through careful design of your sampling , data collection , and analysis procedures. For example, use triangulation to measure your variables using multiple methods; regularly calibrate instruments or procedures; use random sampling and random assignment ; and apply masking (blinding) where possible.

Systematic error is generally a bigger problem in research.

With random error, multiple measurements will tend to cluster around the true value. When you’re collecting data from a large sample , the errors in different directions will cancel each other out.

Systematic errors are much more problematic because they can skew your data away from the true value. This can lead you to false conclusions ( Type I and II errors ) about the relationship between the variables you’re studying.

Random and systematic error are two types of measurement error.

Random error is a chance difference between the observed and true values of something (e.g., a researcher misreading a weighing scale records an incorrect measurement).

Systematic error is a consistent or proportional difference between the observed and true values of something (e.g., a miscalibrated scale consistently records weights as higher than they actually are).

On graphs, the explanatory variable is conventionally placed on the x-axis, while the response variable is placed on the y-axis.

  • If you have quantitative variables , use a scatterplot or a line graph.
  • If your explanatory variable is categorical, use a bar graph.

The term “ explanatory variable ” is sometimes preferred over “ independent variable ” because, in real world contexts, independent variables are often influenced by other variables. This means they aren’t totally independent.

Multiple independent variables may also be correlated with each other, so “explanatory variables” is a more appropriate term.

The difference between explanatory and response variables is simple:

  • An explanatory variable is the expected cause, and it explains the results.
  • A response variable is the expected effect, and it responds to other variables.

In a controlled experiment , all extraneous variables are held constant so that they can’t influence the results. Controlled experiments require:

  • A control group that receives a standard treatment, a fake treatment, or no treatment.
  • Random assignment of participants to ensure the groups are equivalent.

Depending on your study topic, there are various other methods of controlling variables .

There are 4 main types of extraneous variables :

  • Demand characteristics : environmental cues that encourage participants to conform to researchers’ expectations.
  • Experimenter effects : unintentional actions by researchers that influence study outcomes.
  • Situational variables : environmental variables that alter participants’ behaviors.
  • Participant variables : any characteristic or aspect of a participant’s background that could affect study results.

An extraneous variable is any variable that you’re not investigating that can potentially affect the dependent variable of your research study.

A confounding variable is a type of extraneous variable that not only affects the dependent variable, but is also related to the independent variable.

In a factorial design, multiple independent variables are tested.

If you test two variables, each level of one independent variable is combined with each level of the other independent variable to create different conditions.

Within-subjects designs have many potential threats to internal validity , but they are also very statistically powerful .

Advantages:

  • Only requires small samples
  • Statistically powerful
  • Removes the effects of individual differences on the outcomes

Disadvantages:

  • Internal validity threats reduce the likelihood of establishing a direct relationship between variables
  • Time-related effects, such as growth, can influence the outcomes
  • Carryover effects mean that the specific order of different treatments affects the outcomes

While a between-subjects design has fewer threats to internal validity , it also requires more participants for high statistical power than a within-subjects design .

Advantages:

  • Prevents carryover effects of learning and fatigue.
  • Shorter study duration.

Disadvantages:

  • Needs larger samples for high power.
  • Uses more resources to recruit participants, administer sessions, cover costs, etc.
  • Individual differences may be an alternative explanation for results.

Yes. Between-subjects and within-subjects designs can be combined in a single study when you have two or more independent variables (a factorial design). In a mixed factorial design, one variable is altered between subjects and another is altered within subjects.

In a between-subjects design , every participant experiences only one condition, and researchers assess group differences between participants in various conditions.

In a within-subjects design , each participant experiences all conditions, and researchers test the same participants repeatedly for differences between conditions.

The word “between” means that you’re comparing different conditions between groups, while the word “within” means you’re comparing different conditions within the same group.

Random assignment is used in experiments with a between-groups or independent measures design. In this research design, there’s usually a control group and one or more experimental groups. Random assignment helps ensure that the groups are comparable.

In general, you should always use random assignment in this type of experimental design when it is ethically possible and makes sense for your study topic.

To implement random assignment , assign a unique number to every member of your study’s sample .

Then, you can use a random number generator or a lottery method to randomly assign each number to a control or experimental group. You can also do so manually, by flipping a coin or rolling a die to randomly assign participants to groups.
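For example, a minimal Python sketch of random assignment might look like the following; the participant IDs and group sizes are hypothetical.

import random

participants = [f"P{i:03d}" for i in range(1, 21)]   # 20 hypothetical participant IDs

random.seed(42)              # fix the seed so the assignment is reproducible
random.shuffle(participants)

half = len(participants) // 2
groups = {
    "control": participants[:half],
    "experimental": participants[half:],
}
print(groups)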

Random selection, or random sampling , is a way of selecting members of a population for your study’s sample.

In contrast, random assignment is a way of sorting the sample into control and experimental groups.

Random sampling enhances the external validity or generalizability of your results, while random assignment improves the internal validity of your study.

In experimental research, random assignment is a way of placing participants from your sample into different groups using randomization. With this method, every member of the sample has a known or equal chance of being placed in a control group or an experimental group.

“Controlling for a variable” means measuring extraneous variables and accounting for them statistically to remove their effects on other variables.

Researchers often model control variable data along with independent and dependent variable data in regression analyses and ANCOVAs . That way, you can isolate the control variable’s effects from the relationship between the variables of interest.
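As a rough sketch of this approach in Python (using the statsmodels formula interface; the file "study_data.csv" and the column names "outcome", "treatment", and "age" are hypothetical), the control variable is simply added to the regression formula:

import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("study_data.csv")

# The coefficient on "treatment" is now adjusted for the control variable "age"
model = smf.ols("outcome ~ treatment + age", data=df).fit()
print(model.summary())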

Control variables help you establish a correlational or causal relationship between variables by enhancing internal validity .

If you don’t control relevant extraneous variables , they may influence the outcomes of your study, and you may not be able to demonstrate that your results are really an effect of your independent variable .

A control variable is any variable that’s held constant in a research study. It’s not a variable of interest in the study, but it’s controlled because it could influence the outcomes.

Including mediators and moderators in your research helps you go beyond studying a simple relationship between two variables for a fuller picture of the real world. They are important to consider when studying complex correlational or causal relationships.

Mediators are part of the causal pathway of an effect, and they tell you how or why an effect takes place. Moderators usually help you judge the external validity of your study by identifying the limitations of when the relationship between variables holds.

If something is a mediating variable :

  • It’s caused by the independent variable .
  • It influences the dependent variable.
  • When it’s taken into account, the statistical correlation between the independent and dependent variables is weaker than when it isn’t considered.

A confounder is a third variable that affects variables of interest and makes them seem related when they are not. In contrast, a mediator is the mechanism of a relationship between two variables: it explains the process by which they are related.

A mediator variable explains the process through which two variables are related, while a moderator variable affects the strength and direction of that relationship.

There are three key steps in systematic sampling :

  • Define and list your population , ensuring that it is not ordered in a cyclical or periodic pattern.
  • Decide on your sample size and calculate your sampling interval, k , by dividing the population size by the target sample size.
  • Choose every k th member of the population as your sample.

Systematic sampling is a probability sampling method where researchers select members of the population at a regular interval – for example, by selecting every 15th person on a list of the population. If the population is in a random order, this can imitate the benefits of simple random sampling .
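A minimal Python sketch of this procedure, with a hypothetical population list and target sample size, looks like this:

import random

population = [f"person_{i}" for i in range(1, 1001)]   # 1,000 hypothetical members
target_sample_size = 100

k = len(population) // target_sample_size   # sampling interval
start = random.randint(0, k - 1)            # random starting point within the first interval
sample = population[start::k]               # every kth member from the starting point

print(len(sample))   # 100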

Yes, you can create a stratified sample using multiple characteristics, but you must ensure that every participant in your study belongs to one and only one subgroup. In this case, you multiply the numbers of subgroups for each characteristic to get the total number of groups.

For example, if you were stratifying by location with three subgroups (urban, rural, or suburban) and marital status with five subgroups (single, divorced, widowed, married, or partnered), you would have 3 x 5 = 15 subgroups.

You should use stratified sampling when your sample can be divided into mutually exclusive and exhaustive subgroups that you believe will take on different mean values for the variable that you’re studying.

Using stratified sampling will allow you to obtain more precise (with lower variance ) statistical estimates of whatever you are trying to measure.

For example, say you want to investigate how income differs based on educational attainment, but you know that this relationship can vary based on race. Using stratified sampling, you can ensure you obtain a large enough sample from each racial group, allowing you to draw more precise conclusions.

In stratified sampling , researchers divide subjects into subgroups called strata based on characteristics that they share (e.g., race, gender, educational attainment).

Once divided, each subgroup is randomly sampled using another probability sampling method.
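For illustration, here is a minimal pandas sketch of proportionate stratified sampling; the file "population.csv" and the stratifying column "education" are hypothetical placeholders.

import pandas as pd

population = pd.read_csv("population.csv")
sampling_fraction = 0.10   # draw 10% of each stratum

sample = (
    population.groupby("education", group_keys=False)
    .apply(lambda stratum: stratum.sample(frac=sampling_fraction, random_state=1))
)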

Cluster sampling is more time- and cost-efficient than other probability sampling methods , particularly when it comes to large samples spread across a wide geographical area.

However, it provides less statistical certainty than other methods, such as simple random sampling , because it is difficult to ensure that your clusters properly represent the population as a whole.

There are three types of cluster sampling : single-stage, double-stage and multi-stage clustering. In all three types, you first divide the population into clusters, then randomly select clusters for use in your sample.

  • In single-stage sampling , you collect data from every unit within the selected clusters.
  • In double-stage sampling , you select a random sample of units from within the clusters.
  • In multi-stage sampling , you repeat the procedure of randomly sampling elements from within the clusters until you have reached a manageable sample.

Cluster sampling is a probability sampling method in which you divide a population into clusters, such as districts or schools, and then randomly select some of these clusters as your sample.

The clusters should ideally each be mini-representations of the population as a whole.
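A minimal Python sketch of single-stage cluster sampling, with hypothetical schools as clusters, might look like this:

import random

clusters = {
    "school_A": ["s01", "s02", "s03"],
    "school_B": ["s04", "s05"],
    "school_C": ["s06", "s07", "s08", "s09"],
    "school_D": ["s10", "s11"],
}

random.seed(0)
chosen = random.sample(list(clusters), k=2)                    # randomly select 2 whole clusters
sample = [unit for name in chosen for unit in clusters[name]]  # include every unit in those clusters
print(chosen, sample)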

If properly implemented, simple random sampling is usually the best sampling method for ensuring both internal and external validity . However, it can sometimes be impractical and expensive to implement, depending on the size of the population to be studied.

If you have a list of every member of the population and the ability to reach whichever members are selected, you can use simple random sampling.

The American Community Survey  is an example of simple random sampling . In order to collect detailed data on the population of the US, the Census Bureau officials randomly select 3.5 million households per year and use a variety of methods to convince them to fill out the survey.

Simple random sampling is a type of probability sampling in which the researcher randomly selects a subset of participants from a population . Each member of the population has an equal chance of being selected. Data is then collected from as large a percentage as possible of this random subset.
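In code, simple random sampling reduces to drawing members at random without replacement; the population list and sample size below are hypothetical.

import random

population = [f"id_{i}" for i in range(1, 5001)]   # 5,000 hypothetical members
sample = random.sample(population, k=200)          # each member has an equal chance of selection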

Quasi-experimental design is most useful in situations where it would be unethical or impractical to run a true experiment .

Quasi-experiments have lower internal validity than true experiments, but they often have higher external validity  as they can use real-world interventions instead of artificial laboratory settings.

A quasi-experiment is a type of research design that attempts to establish a cause-and-effect relationship. The main difference with a true experiment is that the groups are not randomly assigned.

Blinding is important to reduce research bias (e.g., observer bias , demand characteristics ) and ensure a study’s internal validity .

If participants know whether they are in a control or treatment group , they may adjust their behavior in ways that affect the outcome that researchers are trying to measure. If the people administering the treatment are aware of group assignment, they may treat participants differently and thus directly or indirectly influence the final results.

  • In a single-blind study , only the participants are blinded.
  • In a double-blind study , both participants and experimenters are blinded.
  • In a triple-blind study , the assignment is hidden not only from participants and experimenters, but also from the researchers analyzing the data.

Blinding means hiding who is assigned to the treatment group and who is assigned to the control group in an experiment .

A true experiment (a.k.a. a controlled experiment) always includes at least one control group that doesn’t receive the experimental treatment.

However, some experiments use a within-subjects design to test treatments without a control group. In these designs, you usually compare one group’s outcomes before and after a treatment (instead of comparing outcomes between different groups).

For strong internal validity , it’s usually best to include a control group if possible. Without a control group, it’s harder to be certain that the outcome was caused by the experimental treatment and not by other variables.

An experimental group, also known as a treatment group, receives the treatment whose effect researchers wish to study, whereas a control group does not. They should be identical in all other ways.

Individual Likert-type questions are generally considered ordinal data , because the items have clear rank order, but don’t have an even distribution.

Overall Likert scale scores are sometimes treated as interval data. These scores are considered to have directionality and even spacing between them.

The type of data determines what statistical tests you should use to analyze your data.

A Likert scale is a rating scale that quantitatively assesses opinions, attitudes, or behaviors. It is made up of 4 or more questions that measure a single attitude or trait when response scores are combined.

To use a Likert scale in a survey , you present participants with Likert-type questions or statements, and a continuum of items, usually with 5 or 7 possible responses, to capture their degree of agreement.
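A small pandas sketch (with made-up responses on four hypothetical items scored 1 to 5) shows how item scores are combined into an overall scale score; reverse-worded items would need to be recoded first.

import pandas as pd

responses = pd.DataFrame({
    "item1": [4, 2, 5],
    "item2": [3, 1, 5],
    "item3": [4, 2, 4],
    "item4": [5, 2, 5],
})

responses["scale_score"] = responses.sum(axis=1)   # overall attitude score per respondent
print(responses)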

In scientific research, concepts are the abstract ideas or phenomena that are being studied (e.g., educational achievement). Variables are properties or characteristics of the concept (e.g., performance at school), while indicators are ways of measuring or quantifying variables (e.g., yearly grade reports).

The process of turning abstract concepts into measurable variables and indicators is called operationalization .

There are various approaches to qualitative data analysis , but they all share five steps in common:

  • Prepare and organize your data.
  • Review and explore your data.
  • Develop a data coding system.
  • Assign codes to the data.
  • Identify recurring themes.

The specifics of each step depend on the focus of the analysis. Some common approaches include textual analysis , thematic analysis , and discourse analysis .

There are five common approaches to qualitative research :

  • Grounded theory involves collecting data in order to develop new theories.
  • Ethnography involves immersing yourself in a group or organization to understand its culture.
  • Narrative research involves interpreting stories to understand how people make sense of their experiences and perceptions.
  • Phenomenological research involves investigating phenomena through people’s lived experiences.
  • Action research links theory and practice in several cycles to drive innovative changes.

Hypothesis testing is a formal procedure for investigating our ideas about the world using statistics. It is used by scientists to test specific predictions, called hypotheses , by calculating how likely it is that a pattern or relationship between variables could have arisen by chance.
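For example, a minimal Python sketch of one common hypothesis test, an independent-samples t test comparing two hypothetical groups, uses scipy.stats:

from scipy import stats

control = [5.1, 4.8, 5.5, 5.0, 4.9, 5.2]
treatment = [5.9, 6.1, 5.7, 6.3, 5.8, 6.0]

result = stats.ttest_ind(treatment, control)
print(f"t = {result.statistic:.2f}, p = {result.pvalue:.4f}")
# A small p-value suggests the observed difference is unlikely to have arisen by chance alone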

Operationalization means turning abstract conceptual ideas into measurable observations.

For example, the concept of social anxiety isn’t directly observable, but it can be operationally defined in terms of self-rating scores, behavioral avoidance of crowded places, or physical anxiety symptoms in social situations.

Before collecting data , it’s important to consider how you will operationalize the variables that you want to measure.

When conducting research, collecting original data has significant advantages:

  • You can tailor data collection to your specific research aims (e.g. understanding the needs of your consumers or user testing your website)
  • You can control and standardize the process for high reliability and validity (e.g. choosing appropriate measurements and sampling methods )

However, there are also some drawbacks: data collection can be time-consuming, labor-intensive and expensive. In some cases, it’s more efficient to use secondary data that has already been collected by someone else, but the data might be less reliable.

Data collection is the systematic process by which observations or measurements are gathered in research. It is used in many different contexts by academics, governments, businesses, and other organizations.

There are several methods you can use to decrease the impact of confounding variables on your research: restriction, matching, statistical control and randomization.

In restriction , you restrict your sample by only including certain subjects that have the same values of potential confounding variables.

In matching , you match each of the subjects in your treatment group with a counterpart in the comparison group. The matched subjects have the same values on any potential confounding variables, and only differ in the independent variable .

In statistical control , you include potential confounders as variables in your regression .

In randomization , you randomly assign the treatment (or independent variable) in your study to a sufficiently large number of subjects, which allows you to control for all potential confounding variables.

A confounding variable is closely related to both the independent and dependent variables in a study. An independent variable represents the supposed cause , while the dependent variable is the supposed effect . A confounding variable is a third variable that influences both the independent and dependent variables.

Failing to account for confounding variables can cause you to wrongly estimate the relationship between your independent and dependent variables.

To ensure the internal validity of your research, you must consider the impact of confounding variables. If you fail to account for them, you might over- or underestimate the causal relationship between your independent and dependent variables , or even find a causal relationship where none exists.

Yes, you can include multiple independent or dependent variables in a study, but each additional variable of either type requires its own research question .

For example, if you are interested in the effect of a diet on health, you can use multiple measures of health: blood sugar, blood pressure, weight, pulse, and many more. Each of these is its own dependent variable with its own research question.

You could also choose to look at the effect of exercise levels as well as diet, or even the additional effect of the two combined. Each of these is a separate independent variable .

To ensure the internal validity of an experiment , you should only change one independent variable at a time.

No. The value of a dependent variable depends on an independent variable, so a variable cannot be both independent and dependent at the same time. It must be either the cause or the effect, not both!

You want to find out how blood sugar levels are affected by drinking diet soda and regular soda, so you conduct an experiment .

  • The type of soda – diet or regular – is the independent variable .
  • The level of blood sugar that you measure is the dependent variable – it changes depending on the type of soda.

Determining cause and effect is one of the most important parts of scientific research. It’s essential to know which is the cause – the independent variable – and which is the effect – the dependent variable.

In non-probability sampling , the sample is selected based on non-random criteria, and not every member of the population has a chance of being included.

Common non-probability sampling methods include convenience sampling , voluntary response sampling, purposive sampling , snowball sampling, and quota sampling .

Probability sampling means that every member of the target population has a known chance of being included in the sample.

Probability sampling methods include simple random sampling , systematic sampling , stratified sampling , and cluster sampling .

Using careful research design and sampling procedures can help you avoid sampling bias . Oversampling can be used to correct undercoverage bias .

Some common types of sampling bias include self-selection bias , nonresponse bias , undercoverage bias , survivorship bias , pre-screening or advertising bias, and healthy user bias.

Sampling bias is a threat to external validity – it limits the generalizability of your findings to a broader group of people.

A sampling error is the difference between a population parameter and a sample statistic .

A statistic refers to measures about the sample , while a parameter refers to measures about the population .

Populations are used when a research question requires data from every member of the population. This is usually only feasible when the population is small and easily accessible.

Samples are used to make inferences about populations . Samples are easier to collect data from because they are practical, cost-effective, convenient, and manageable.

There are seven threats to external validity : selection bias , history, experimenter effect, Hawthorne effect , testing effect, aptitude-treatment and situation effect.

The two types of external validity are population validity (whether you can generalize to other groups of people) and ecological validity (whether you can generalize to other situations and settings).

The external validity of a study is the extent to which you can generalize your findings to different groups of people, situations, and measures.

Cross-sectional studies cannot establish a cause-and-effect relationship or analyze behavior over a period of time. To investigate cause and effect, you need to do a longitudinal study or an experimental study .

Cross-sectional studies are less expensive and time-consuming than many other types of study. They can provide useful insights into a population’s characteristics and identify correlations for further research.

Sometimes only cross-sectional data is available for analysis; other times your research question may only require a cross-sectional study to answer it.

Longitudinal studies can last anywhere from weeks to decades, although they tend to be at least a year long.

The 1970 British Cohort Study , which has collected data on the lives of 17,000 Brits since their births in 1970, is one well-known example of a longitudinal study .

Longitudinal studies are better to establish the correct sequence of events, identify changes over time, and provide insight into cause-and-effect relationships, but they also tend to be more expensive and time-consuming than other types of studies.

Longitudinal studies and cross-sectional studies are two different types of research design . In a cross-sectional study you collect data from a population at a specific point in time; in a longitudinal study you repeatedly collect data from the same sample over an extended period of time.

Longitudinal study:

  • Repeated observations
  • Observes the same group multiple times
  • Follows changes in participants over time

Cross-sectional study:

  • Observations at a single point in time
  • Observes different groups (a “cross-section”) in the population
  • Provides a snapshot of society at a given point

There are eight threats to internal validity : history, maturation, instrumentation, testing, selection bias , regression to the mean, social interaction and attrition .

Internal validity is the extent to which you can be confident that a cause-and-effect relationship established in a study cannot be explained by other factors.

In mixed methods research , you use both qualitative and quantitative data collection and analysis methods to answer your research question .

A confounding variable , also called a confounder or confounding factor, is a third variable in a study examining a potential cause-and-effect relationship.

A confounding variable is related to both the supposed cause and the supposed effect of the study. It can be difficult to separate the true effect of the independent variable from the effect of the confounding variable.

In your research design , it’s important to identify potential confounding variables and plan how you will reduce their impact.

Discrete and continuous variables are two types of quantitative variables :

  • Discrete variables represent counts (e.g. the number of objects in a collection).
  • Continuous variables represent measurable amounts (e.g. water volume or weight).

Quantitative variables are any variables where the data represent amounts (e.g. height, weight, or age).

Categorical variables are any variables where the data represent groups. This includes rankings (e.g. finishing places in a race), classifications (e.g. brands of cereal), and binary outcomes (e.g. coin flips).

You need to know what type of variables you are working with to choose the right statistical test for your data and interpret your results .

You can think of independent and dependent variables in terms of cause and effect: an independent variable is the variable you think is the cause , while a dependent variable is the effect .

In an experiment, you manipulate the independent variable and measure the outcome in the dependent variable. For example, in an experiment about the effect of nutrients on crop growth:

  • The  independent variable  is the amount of nutrients added to the crop field.
  • The  dependent variable is the biomass of the crops at harvest time.

Defining your variables, and deciding how you will manipulate and measure them, is an important part of experimental design .

Experimental design means planning a set of procedures to investigate a relationship between variables . To design a controlled experiment, you need:

  • A testable hypothesis
  • At least one independent variable that can be precisely manipulated
  • At least one dependent variable that can be precisely measured

When designing the experiment, you decide:

  • How you will manipulate the variable(s)
  • How you will control for any potential confounding variables
  • How many subjects or samples will be included in the study
  • How subjects will be assigned to treatment levels

Experimental design is essential to the internal and external validity of your experiment.

Internal validity is the degree of confidence that the causal relationship you are testing is not influenced by other factors or variables .

External validity is the extent to which your results can be generalized to other contexts.

The validity of your experiment depends on your experimental design .

Reliability and validity are both about how well a method measures something:

  • Reliability refers to the  consistency of a measure (whether the results can be reproduced under the same conditions).
  • Validity   refers to the  accuracy of a measure (whether the results really do represent what they are supposed to measure).

If you are doing experimental research, you also have to consider the internal and external validity of your experiment.

A sample is a subset of individuals from a larger population . Sampling means selecting the group that you will actually collect data from in your research. For example, if you are researching the opinions of students in your university, you could survey a sample of 100 students.

In statistics, sampling allows you to test a hypothesis about the characteristics of a population.

Quantitative research deals with numbers and statistics, while qualitative research deals with words and meanings.

Quantitative methods allow you to systematically measure variables and test hypotheses . Qualitative methods allow you to explore concepts and experiences in more detail.

Methodology refers to the overarching strategy and rationale of your research project . It involves studying the methods used in your field and the theories or principles behind them, in order to develop an approach that matches your objectives.

Methods are the specific tools and procedures you use to collect and analyze data (for example, experiments, surveys , and statistical tests ).

In shorter scientific papers, where the aim is to report the findings of a specific study, you might simply describe what you did in a methods section .

In a longer or more complex research project, such as a thesis or dissertation , you will probably include a methodology section , where you explain your approach to answering the research questions and cite relevant sources to support your choice of methods.

Statistical methods used in research concerning endangered and threatened animal species of Puerto Rico: A meta-study

A concern about statistics in wildlife studies, particularly of endangered and threatened species, is whether the data collected meet the assumptions necessary for the use of parametric statistics. This study identified published papers on the nine endangered and six threatened species found only on Puerto Rico using five different databases. The results from the Zoological Record database identified the most articles, including all identified by the other databases. Of the 222 identified articles, 108 included some form of statistics, 26 used only descriptive statistics, 34 included only parametric statistics, 26 used only nonparametric statistics, and 22 reported both parametric and nonparametric statistical analyses. This meta-study showed that the percentage of articles with no statistical treatment decreased in the most recent 20 years, and that although parametric statistics continue to be the most commonly used in published wildlife studies of Puerto Rican wildlife, there has been a distinct increase in the use of nonparametric statistics over time.

Citation Information

Publication Year: 2021
Title: Statistical methods used in research concerning endangered and threatened animal species of Puerto Rico: A meta-study
Authors: S.J. Rivera, K.M. Alpi, Jaime A. Collazo, M.K. Stoskopf
Publication Type: Journal Article
Series Title: Caribbean Journal of Science
USGS Organization: Coop Res Unit Atlanta


  • Open access
  • Published: 02 September 2024

Research on prediction method of coal mining surface subsidence based on MMF optimization model

Chunde Piao, Bin Zhu, Jianxin Jiang & Qinghong Dong

Scientific Reports, volume 14, Article number: 20316 (2024)

Subjects: Environmental impact, Natural hazards

Coal seam mining causes fracture and movement of the overlying strata in the goaf and endangers the safety of surface structures and underground pipelines. Based on the engineering geological conditions of the 22,122 working face in Cuncaota No.2 Coal Mine of China Shenhua Shendong Coal Group Co., Ltd., a similar material model test of mining overburden rock was carried out. The subsidence of the overburden rock was obtained from full-section strain data measured with distributed optical fiber technology, and the characteristics of mining-induced surface subsidence were studied. The Weibull model was used to adjust the mathematical form of the first half of the MMF-based surface subsidence curve. On this basis, a prediction model of surface subsidence induced by coal seam mining was established, and the parameters of the surface subsidence prediction model were determined. The test results show that, as coal seam mining advances, the goodness of fit of the surface subsidence prediction curve based on the MMF optimization model reaches 0.987. Compared with the measured values, the relative error of the surface subsidence prediction model is reduced to less than 10%, so the model displays good prediction accuracy. The time required for settlement stability in the prediction model is positively correlated with parameter a and negatively correlated with parameter b. The research results can be further extended to the prediction of subsidence in the overburden “three zones” and provide a scientific basis for evaluating the surface subsidence compression potential of coal mine goafs.

Introduction

The underground mining of coal is accompanied by the initiation, expansion and penetration of internal cracks in the overlying rock and soil mass, which leads to the movement, deformation and failure of the rock strata toward the goaf, causes surface subsidence, and endangers the safety of surface structures and underground pipelines 1. Surface subsidence caused by coal mining is the result of deformation and failure of the overlying strata from bottom to top. By monitoring subsidence during the deformation evolution stage of the overlying strata in coal seam mining, a prediction method for mining-induced surface subsidence can be established, and effective prevention and control measures can be taken in advance to avoid or reduce the safety problems caused by surface subsidence 2.

The settlement of the overlying strata in the goaf caused by underground coal mining is nonlinear and uncertain, and characterizing the deformation and dynamic settlement of rock and soil mass through mathematical models has become a new research direction. Combining field measured data to determine the time influence parameter C and the model order n, Zhang et al. 3,4 established an improved Knothe time function model by using the probability integral method, the two-medium method and the least squares method. Based on the actual geological conditions of a mine in East China, Ma et al. 5,6 substituted the mechanical parameters into the Weibull composite subsidence prediction model (WCSPM) and the probability integral composite subsidence prediction model (PICSPM), and accordingly obtained predictions of the surface subsidence movement parameters. Chi et al. 7 introduced a multi-population genetic algorithm into the Boltzmann function and established a calculation model of mining subsidence parameters. Based on a dynamic model of the improved probability integral method, Jiang et al. 8 established the observation condition equation for monitoring mining subsidence with differential interferometric synthetic aperture radar (D-InSAR), and proposed a three-dimensional deformation prediction method for mining subsidence. Sui et al. 9 established a mining subsidence prediction method combining D-InSAR technology with a support vector machine regression algorithm. Oh et al. 10 used a LogitBoost meta-ensemble machine learning model to predict land subsidence. Pal et al. 11 proposed a correction method for surface subsidence using the FNSE model combined with the excavation parameters of a longwall mining face. The MMF (Morgan-Mercer-Flodin) model, a time function prediction model, can describe the development process of ground subsidence well, and the model shows high consistency with the observed subsidence rate 12,13,14. Surface subsidence is a dynamic process in which the voids in the goaf propagate to the surface through the overlying strata, and the prediction accuracy of the MMF model is reduced when the geological conditions of the rock strata are complex.

The advantage of distributed optical fiber sensing technology lies in using the optical fiber path to obtain continuously distributed information about the measured field in both time and space, and it has been applied to deformation monitoring of mining overburden 15. Cuncaota No.2 Coal Mine of China Shenhua Shendong Coal Group Co., Ltd. is located in Yijinhuoluo Banner, Ordos City, Inner Mongolia Autonomous Region. Photovoltaic panels are installed on the surface above the 22,122 working face for power generation, and coal mining creates a hidden danger of deformation and fracture of these panels. This study took the 22,122 working face of Cuncaota No.2 Mine as the research object. Through an indoor similar material model test of mining overburden rock, distributed optical fiber sensing technology was used to obtain the strain distribution of the overburden rock during coal seam mining, and the surface subsidence caused by mining was calculated. To address the insufficient accuracy of MMF-based predictions of surface subsidence caused by coal mining, the Weibull model was introduced into the MMF-based surface subsidence prediction method, and an MMF optimization model based on measured surface subsidence was established to determine the relevant parameters of the surface subsidence prediction model. This study provides a theoretical basis for real-time control and treatment of surface subsidence above coal mine goafs.

Prediction model of surface subsidence induced by coal seam mining

Settlement prediction method based on MMF model

The MMF model is a growth curve model with an 'S'-shaped characteristic curve, and its functional form conforms to the evolution characteristics of surface subsidence caused by coal seam mining. The prediction expression of overburden subsidence based on the MMF model is:

where S_t denotes the surface settlement at time t; W_0 is the maximum surface settlement; a and b are parameters related to the geological conditions of the overlying strata; and t represents the settlement time.
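The original Eq. (1) is not reproduced in this extract. As a hedged reconstruction consistent with the variable definitions above (an assumption about the standard MMF time-function form, not a quotation of the paper), the model can be written as

\[ S_t = \frac{W_0\, t^{\,b}}{a + t^{\,b}}, \]

which equals zero at t = 0, rises along an S-shaped curve, and approaches the maximum settlement W_0 as t becomes large.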

Taking the first and second derivatives of the MMF model in Eq. (1) gives the expressions for surface subsidence velocity and acceleration.

Letting \({W}_{0}\) = 1000 mm, a = 1 and b = 5, the relationships among surface subsidence, subsidence velocity and subsidence acceleration based on the MMF model are obtained, as shown in Fig. 1.

Figure 1. Curves of surface subsidence, subsidence velocity and subsidence acceleration with mining time.

As can be seen from Fig. 1, the surface subsidence curve of the MMF model is clearly staged, with small settlement values in both the initial stage and the stable stage. Under underground coal mining conditions, however, the strata in the caving zone first move toward the goaf and subsidence develops rapidly, which is inconsistent with the initial-stage behaviour of the MMF model.
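
For readers who want to reproduce curves of the kind shown in Fig. 1, the short script below evaluates the assumed MMF form and its derivatives with the example parameters quoted above (W0 = 1000 mm, a = 1, b = 5). It is a minimal sketch, not the authors' code, and it inherits the assumption about Eq. (1) noted earlier; the time axis is arbitrary.

    # Minimal sketch (Python/NumPy): subsidence, velocity and acceleration of the
    # assumed MMF form with the example parameters W0 = 1000 mm, a = 1, b = 5.
    import numpy as np
    import matplotlib.pyplot as plt

    W0, a, b = 1000.0, 1.0, 5.0
    t = np.linspace(0.01, 5.0, 500)

    S = W0 * t**b / (a + t**b)                                   # subsidence, mm
    v = a * b * W0 * t**(b - 1) / (a + t**b)**2                  # velocity dS/dt
    acc = a * b * W0 * t**(b - 2) * ((b - 1) * a - (b + 1) * t**b) / (a + t**b)**3

    fig, axes = plt.subplots(3, 1, sharex=True)
    for ax, y, label in zip(axes, (S, v, acc), ("subsidence (mm)", "velocity", "acceleration")):
        ax.plot(t, y)
        ax.set_ylabel(label)
    axes[-1].set_xlabel("time")
    plt.show()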

Surface subsidence prediction method of MMF optimization model

Given the difference between the curve predicted by the MMF time function model and the actual subsidence curve in the early stage of coal seam mining, the Weibull model is used to adjust the mathematical form of the first half of the MMF curve, thereby resolving the loss of overall prediction accuracy caused by the small settlement values in the initial stage. A piecewise form of the improved MMF model is established, taking the time \(\tau\) at which the subsidence velocity of the surface monitoring point reaches its maximum as the demarcation point, as shown in Eq. (2).

where u1 and u2 denote combination weights, which are non-negative, satisfy u1 + u2 = 1 and are determined according to Eq. (3); \(\tau\) is the moment of maximum surface subsidence velocity; and t is the total time of surface subsidence deformation.

where \({e}_{i}\) is the settlement prediction error, defined as the predicted settlement value minus the actual settlement value.
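
The rendered forms of Eqs. (2) and (3) did not survive extraction. The fragment below is an assumption-laden sketch of the piecewise idea described above: a Weibull branch before the maximum-velocity time \(\tau\) and a weighted MMF/Weibull combination afterwards, with the weights derived from each model's cumulative absolute prediction error. Both the Weibull time function and the weighting rule are illustrative stand-ins, not the paper's exact formulas.

    # Sketch (Python) of the piecewise "MMF optimization" idea described above.
    # ASSUMPTIONS: Eqs. (2), (3) and the Weibull form of Eq. (8) are not legible in
    # this copy; the Weibull time function and the inverse-error weighting below
    # are illustrative stand-ins, not the authors' exact formulas.
    import numpy as np

    def mmf(t, W0, a, b):
        # assumed MMF time function (see the note under Eq. (1))
        return W0 * t**b / (a + t**b)

    def weibull(t, W0, c, k):
        # commonly used Weibull time function for subsidence (assumed form)
        return W0 * (1.0 - np.exp(-c * t**k))

    def error_weights(pred_mmf, pred_weibull, measured):
        """Weights u1, u2 taken inversely proportional to each model's summed
        absolute prediction error (one plausible reading of Eq. (3))."""
        e1 = np.sum(np.abs(pred_mmf - measured))
        e2 = np.sum(np.abs(pred_weibull - measured))
        u1 = (1.0 / e1) / (1.0 / e1 + 1.0 / e2)
        return u1, 1.0 - u1

    def optimized_subsidence(t, tau, mmf_params, weibull_params, u1, u2):
        # Weibull branch before tau, weighted combination afterwards
        # (piecewise structure assumed).
        t = np.asarray(t, dtype=float)
        early = weibull(t, *weibull_params)
        late = u1 * mmf(t, *mmf_params) + u2 * weibull(t, *weibull_params)
        return np.where(t <= tau, early, late)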

Two methods are used to determine the maximum surface subsidence velocity time \(\tau\) .

First, through real-time monitoring of the whole section of the mining overburden, the moment of maximum settlement velocity \(\tau\) is obtained by setting the second derivative of the equation fitted to the monitoring data equal to zero.

Second, when actual observation data are missing, the total time of surface movement and deformation is calculated according to the “Specification on Building, Water, Railway and Main Roadway Coal Pillar Setting and Coal Mining” using Eq. (4):

where \({H}_{0}\) denotes the average mining depth of coal seam.

The duration of the active stage of surface subsidence deformation is 0.56 times the total movement-deformation time T, as shown in Eq. (5).
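
The rendered Eqs. (4) and (5) are likewise missing here. The cited specification is commonly quoted as giving the total duration of surface movement as proportional to the average mining depth; a frequently used form, offered as an assumption rather than a transcription of the paper's equation, is

\[ T = 2.5\,{H}_{0} \quad (T \text{ in days}, {H}_{0} \text{ in metres}), \qquad {t}_{\text{active}} = 0.56\,T \]

with the second relation restating the 0.56 T active-stage duration given above.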

Similar material model test of mining overburden rock

Model monitoring and excavation scheme

The 22,122 working face of Cuncaota No.2 Coal Mine of China Shenhua Shendong Coal Group Co., Ltd. is mined by the longwall method. The length and width of the working face are 800 m and 340 m, respectively. According to the engineering geological conditions of the 22,122 working face and the log of borehole BK26, the average burial depth of the 2-2 coal seam to be mined is 255 m and the average mining thickness is 3 m. Given the mining conditions, the physical and mechanical properties of the overlying rock and the size of the test model, the geometric similarity ratio was set to 200 and the stress similarity ratio to 333.33. River sand, lime, gypsum and other materials were proportioned according to the mix ratios to prepare the similar materials, and the similar-material test model of the mining area was constructed. In the model test, the uppermost Quaternary clay layer was replaced by counterweights of equivalent load. The physical and mechanical parameters and mix ratios of the prototype and model of Cuncaota No.2 Mine are listed in Table 1 16.
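
As a quick consistency check on the stated scaling (assuming the standard similarity relation for similar-material model tests, in which the stress scale is the product of the geometric scale and the bulk-density scale):

\[ {C}_{\sigma }={C}_{L}\times {C}_{\gamma }\;\Rightarrow\; {C}_{\gamma }=\frac{{C}_{\sigma }}{{C}_{L}}=\frac{333.33}{200}\approx 1.67 \]

i.e. the prototype strata are taken to be about 1.67 times denser than the model materials.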

To capture the deformation and failure characteristics of the overlying strata during coal seam mining, four vertical tight-buffered strain-sensing optical fibers were installed in the similar-material test model, and BOTDR (Brillouin optical time-domain reflectometry) was used to monitor the strain distribution of the overlying strata. The layout of the sensing fibers in the model test is shown in Fig. 2.

Figure 2. Layout of the sensing optical fibers and the coal seam mining area in the model.

As shown, the coal seam working face is advanced from the open-off cut, located 50 cm from the left side of the model. The coal seam is excavated 5 cm at a time, for a total of 30 mining steps. After each excavation, data are acquired and the next step is mined 30 min later, once the rock layers have stabilized.

Analysis of strain distribution characteristics of overburden rock

Owing to space limitations, this paper analyzes the deformation characteristics of the mining overburden only from the overburden strain distribution curves measured by the A2 vertical sensing optical fiber during mining of the 2-2 coal seam. The strain distribution and the deformation and failure characteristics of the overlying strata during coal seam mining are shown in Fig. 3.

Figure 3. Strain distribution and deformation-failure characteristics of the mining overburden: (a) overburden strain distribution curves; (b) deformation and failure characteristics of the overburden.

From the strain distribution curves of the overburden in Fig. 3, it can be seen that before the working face passes the A2 monitoring hole, the compressive strain measured by the sensing fiber increases continuously as the face advances. When the working face has advanced 60 cm from the open-off cut, the compressive strain in the fiber reaches its extreme value and the roof collapses for the first time, with a failure height of 6 cm. When the coal seam has been mined to 70 cm, the working face passes the A2 monitoring hole; the strain measured by the A2 fiber gradually changes from compressive to tensile, the roof collapses for the second time, and a tensile strain concentration appears 12 cm above the coal seam roof. As mining continues, the successive collapse of the roof strata releases stress: the strain in the lower part of the monitored section gradually decreases, and the strain peak shifts toward the upper part of the monitoring hole. When the working face reaches the stop line, a tensile strain concentration appears 38 cm above the coal seam roof, and the tensile strain measured by the A2 fiber is 2200 με. Based on the measured strain and the mining-fracture discrimination method 17, the development height of the caving zone is 12 cm and that of the water-conducting fracture zone is 38 cm. As Fig. 3 shows, there is a good correspondence between the strain changes measured by the sensing fiber and the deformation and failure characteristics of the overburden.

Prediction method of mining surface subsidence model

Calculation of subsidence of mining overburden rock

There is good coupling between the sensing fiber and the rock and soil mass around the monitoring hole, so the settlement of the mined rock and soil mass can be calculated by the strain integral method 18. The calculation formula for mining overburden subsidence is shown in Eq. (6).

where S denotes the settlement of the overlying strata within the interval between \({h}_{1}\) and \({h}_{2}\) above the coal seam roof; the minimum sampling interval of the BOTDR instrument is 0.05 m; and \(\bar{\varepsilon }\) is the average strain over the calculation interval.
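
Eq. (6) itself is not legible in this copy. The description (settlement obtained by integrating the measured strain between h1 and h2 with a 0.05 m sampling interval) suggests a simple numerical integration of the fiber strain profile; the fragment below is a minimal sketch under that assumption, using synthetic data.

    # Sketch (Python): settlement between h1 and h2 obtained by integrating the
    # fiber strain profile, assuming Eq. (6) is a discrete strain integral over
    # the 0.05 m BOTDR sampling interval. Illustrative, with made-up data.
    import numpy as np

    def settlement_from_strain(z, strain, h1, h2):
        """Integrate axial strain (dimensionless) over depth z (m) between h1 and h2."""
        z = np.asarray(z, dtype=float)
        strain = np.asarray(strain, dtype=float)
        mask = (z >= h1) & (z <= h2)
        return np.trapz(strain[mask], z[mask])   # settlement in metres

    z = np.arange(0.0, 40.0, 0.05)               # height above the coal seam roof, m
    eps = 1.0e-3 * np.exp(-z / 10.0)             # made-up strain profile
    print(settlement_from_strain(z, eps, h1=0.0, h2=38.0))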

Mining surface subsidence prediction

From the measured strain distribution and the displacement calculation formula of Eq. (6), the surface subsidence during coal seam mining is obtained. The surface subsidence measured by the A2 fiber is then fitted with the MMF model and the Weibull model to obtain the surface subsidence prediction curves. The surface subsidence curves based on the A2 optical fiber monitoring data and the theoretical prediction models are shown in Fig. 4.

Figure 4. Surface subsidence curves based on the A2 optical fiber monitoring data and the theoretical prediction models.

From the surface subsidence curves in Fig. 4, it can be seen that the subsidence rate accelerates after the 14th mining step and slows after the 24th mining step. The surface deformation is governed mainly by compaction of the caving zone and subsidence of the fracture zone and the bending subsidence zone, and the subsidence of the mining surface follows an 'S'-shaped curve. When the MMF model is used to predict the mining-induced surface subsidence, the error relative to the measured values is large before the subsidence velocity reaches its maximum; after the maximum subsidence velocity is reached, the prediction is highly consistent with the measured values, with a goodness of fit of 0.992, indicating good predictive performance. When the Weibull model is used to predict the surface settlement, the relative error between the predicted and measured values is less than 5% before the settlement reaches its maximum velocity, so it is more accurate than the MMF model in this stage. In the later stage of settlement development, however, the overall prediction accuracy and goodness of fit of the Weibull model decline and the relative error becomes large.

According to the fitted curves of Eqs. (1) and (2) and Fig. 4, the parameters W0, a and b of the MMF model and W0, c and k of the Weibull model are determined, and the theoretical formulas for surface subsidence are obtained, as shown in Eqs. (7) and (8).
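
The fitted Eqs. (7) and (8) are not reproduced in this copy. The fragment below sketches the parameter-fitting step with non-linear least squares, assuming the MMF and Weibull forms used earlier; the functional forms, starting values and the file name "a2_subsidence.csv" are all assumptions, and the paper does not state which fitting tool was used.

    # Sketch (Python/SciPy): fitting the MMF and Weibull time functions to a
    # measured subsidence series (e.g. the A2-fiber data). Forms, starting values
    # and the input file name are assumptions; the actual fitted parameters are
    # the ones reported in Eqs. (7) and (8).
    import numpy as np
    from scipy.optimize import curve_fit

    def mmf(t, W0, a, b):
        return W0 * t**b / (a + t**b)

    def weibull(t, W0, c, k):
        return W0 * (1.0 - np.exp(-c * t**k))

    S_obs = np.loadtxt("a2_subsidence.csv")            # hypothetical measured series, mm
    t_obs = np.arange(1, len(S_obs) + 1, dtype=float)  # mining step number

    p_mmf, _ = curve_fit(mmf, t_obs, S_obs, p0=(S_obs.max(), 1.0, 5.0), maxfev=10000)
    p_wbl, _ = curve_fit(weibull, t_obs, S_obs, p0=(S_obs.max(), 0.01, 2.0), maxfev=10000)
    print("MMF (W0, a, b):", p_mmf)
    print("Weibull (W0, c, k):", p_wbl)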

According to Eqs. (7) and (8), the time \(\tau\) at which the surface subsidence reaches its maximum subsidence velocity for the MMF and Weibull models occurs in the 14th mining step. According to Eq. (3), the error weights u1 and u2 in the MMF optimization model are 0.78 and 0.22, respectively.

The settlement prediction formula of the MMF optimization model is shown in Eq. (9):

The surface subsidence prediction curve of Eq. (9) is compared with the settlement derived from the measured A2 fiber data; the comparison is shown in Fig. 5.

Figure 5. Settlement prediction curve and error analysis of the MMF optimization model.

It can be seen from Fig. 5 that the overall accuracy of the prediction curve is high when the MMF optimization model is used to predict the surface subsidence. At the fourth mining step the relative error is less than 20%; as mining advances, the prediction error falls below 10% and the goodness of fit reaches 0.987, indicating that the overall prediction accuracy of the MMF optimization model meets the requirements.

The parameter evolution characteristics of the prediction model

It can be seen from Eq. (1) and Fig. 1 that the parameters a and b strongly influence the functional form and prediction accuracy of the MMF model. Figure 6 shows the function curves for W0 = 100 mm with varying a and b: with b fixed at 2.5, a takes the values 50, 100, 500, 1000, 5000 and 8000; with a fixed at 500, b takes the values 2, 2.5, 3, 5, 7 and 10.

Figure 6. Influence of the MMF model parameters on the predicted values: (a) effect of parameter a; (b) effect of parameter b.

It can be seen from Fig. 6 that the parameters a and b have opposite effects on the predicted values. As a increases, the settlement velocity decreases markedly and the time needed to reach the stable stage of settlement increases; as b increases, the settlement velocity increases markedly and the time to reach the stable stage decreases. Therefore, the surface subsidence velocity and stabilization time should be analyzed according to the rock and soil properties of the mining overburden and the coal seam mining conditions, and the parameters a and b of the MMF prediction model should then be determined in combination with field monitoring data to improve the prediction accuracy of surface subsidence.
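
A short sketch of the parameter sweep behind Fig. 6, using the same assumed MMF form as above and the parameter values listed in the text (the value 500 in the a-series is inferred from the ordering of the list):

    # Sketch (Python): parameter sweep behind Fig. 6 (assumed MMF form, W0 = 100 mm).
    import numpy as np
    import matplotlib.pyplot as plt

    def mmf(t, W0, a, b):
        return W0 * t**b / (a + t**b)

    t = np.linspace(0.1, 50.0, 400)
    fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(10, 4))
    for a in (50, 100, 500, 1000, 5000, 8000):        # b fixed at 2.5
        ax1.plot(t, mmf(t, 100.0, a, 2.5), label=f"a = {a}")
    for b in (2, 2.5, 3, 5, 7, 10):                   # a fixed at 500
        ax2.plot(t, mmf(t, 100.0, 500.0, b), label=f"b = {b}")
    ax1.set_title("varying a (b = 2.5)"); ax1.legend()
    ax2.set_title("varying b (a = 500)"); ax2.legend()
    plt.show()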

This study has reached the following conclusions.

According to the evolution characteristics of mining-induced surface subsidence in coal mines, the Weibull model is used to adjust the mathematical form of the first half of the MMF function curve, and an MMF optimization model for predicting mining-induced surface subsidence is established. The goodness of fit of the prediction curve based on the MMF optimization model is 0.987, the average residual sum is 0.25, and the prediction error remains below 10% as coal seam mining advances. The model therefore shows good prediction accuracy and is suitable for predicting mining-induced surface subsidence.

In the MMF prediction model, the time required to reach the settlement stability stage increases with parameter a and decreases with parameter b. The values of a and b should be determined according to the rock and soil properties of the mining overburden, the coal seam mining conditions and on-site monitoring data.

The subsidence evolution stages of the overlying strata in a coal mine goaf are closely related to the engineering properties of the 'three zones', namely the caving zone, the fracture zone and the bending subsidence zone. Based on the full-section strain distribution data provided by distributed optical fiber technology, the settlement of the three zones can be calculated. In future work, the MMF optimization model will be used to predict the subsidence evolution of the 'three zones' of the overburden, providing a basis for evaluating the compression potential at each stage of surface subsidence evolution above coal mine goafs.

Data availability

The datasets generated during and/or analysed during the current study are available from the corresponding author on reasonable request.

References

1. Yuan, L. Theory and technology considerations on high-quality development of coal main energy security in China. Bull. Chin. Acad. Sci. 38(1), 11–22 (2023).

2. Loupasakis, C., Angelitsa, V., Rozos, D. & Spanou, N. Mining geohazards—land subsidence caused by the dewatering of opencast coal mines: The case study of the Amyntaio coal mine, Florina, Greece. Nat. Hazards 70, 675–691 (2014).

3. Zhang, L. L., Cheng, H., Yao, Z. S. & Wang, X. J. Application of the improved Knothe time function model in the prediction of ground mining subsidence: A case study from Heze City, Shandong Province, China. Appl. Sci. 10(9), 3147 (2020).

4. Cheng, H., Zhang, L. L., Guo, L. H., Wang, X. J. & Peng, S. L. A new dynamic prediction model for underground mining subsidence based on inverse function of unstable creep. Adv. Civ. Eng. 1, 9922136 (2021).

5. Ma, J. B., Yin, D. W., Jiang, N., Wang, S. & Yao, D. H. Application of a superposition model to evaluate surface asymmetric settlement in a mining area with thick bedrock and thin loose layer. J. Clean. Prod. 314, 128075 (2021).

6. Bai, J. W. et al. Force chains evolution and crack characteristics of multiple coal-rock sandwich composite structure by using particle flow code. Mater. Today Commun. 38, 108220 (2024).

7. Chi, S. S., Wang, L., Yu, X. X., Lv, W. C. & Fang, X. J. Research on prediction model of mining subsidence in mining areas with thick unconsolidated layers. Energy Explor. Exploit. 39(3), 927–943 (2021).

8. Jiang, C. et al. Prediction of 3D deformation due to large gradient mining subsidence based on InSAR and constraints of IDPIM model. Int. J. Remote Sens. 42(1), 208–239 (2021).

9. Sui, L. C., Ma, F. & Chen, N. Mining subsidence prediction by combining support vector machine regression and interferometric synthetic aperture radar data. ISPRS Int. J. Geo-Inf. 9, 391 (2020).

10. Oh, H. J., Syifa, M., Lee, C. W. & Lee, S. Land subsidence susceptibility mapping using Bayesian, functional, and meta-ensemble machine learning models. Appl. Sci. 9, 1248 (2019).

11. Pal, A., Rošer, J. & Vulić, M. Surface subsidence prognosis above an underground longwall excavation and based on 3D point cloud analysis. Minerals 10, 82 (2020).

12. Wang, J. B., Liu, X. R., Li, P. & Guo, J. Q. The application of MMF model in the prediction of surface subsidence in goaf. J. China Coal Soc. 37(3), 411–415 (2012).

13. Zhou, B., Yan, Y. G. & Kang, J. R. Dynamic prediction model for progressive surface subsidence based on MMF time function. Appl. Sci. 13(14), 8066 (2023).

14. Zhao, Y. H., Wang, W. N., Jiang, P. H. & Li, J. H. Improved MMF subsidence prediction model based on Markov chain and its application. Bull. Surv. Mapp. 1, 79–83 (2022).

15. Meng, F. F., Piao, C. D., Shi, B., Sasaoka, T. & Shimada, H. Calculation model of overburden subsidence in mined-out area based on Brillouin optical time-domain reflectometer technology. Int. J. Rock Mech. Min. Sci. 138, 104620 (2021).

16. Li, J., He, Z. H., Piao, C. D., Chi, W. Q. & Lu, Y. Research on subsidence prediction method of water-conducting fracture zone of overlying strata in coal mine based on grey theory model. Water 15(23), 4177 (2023).

17. Piao, C. D., Li, J. J., Wang, D. L. & Qiao, W. A DOFS-based approach to calculate the height of water-flowing fractured zone in overlying strata under mining. Geofluids 1, 8860600 (2021).

18. Liang, Y., Gu, K., Shi, B. & Liu, S. P. Application of prediction model of land subsidence potential based on refined monitoring. J. Eng. Geol. 31(3), 1097–1104 (2023).

This work was funded by the National Natural Science Foundation of China under Grant No. 42277159.

Author information

Authors and affiliations

School of Resources and Geosciences, China University of Mining and Technology, Xuzhou, China

Chunde Piao, Bin Zhu, Jianxin Jiang & Qinghong Dong

Qingdao West Coast New District Comprehensive Administrative Law Enforcement Bureau, Qingdao, China

Sichuan Zhongding Blasting Engineering Co, Ltd., Ya’an, China

Jianxin Jiang

Contributions

Piao CD: conceptualization, methodology, writing of the manuscript. Zhu B: data curation, manuscript revision. Jiang JX: assistance with data acquisition and data analysis. Dong QH: theoretical insights, funding acquisition.

Corresponding author

Correspondence to Chunde Piao.

Ethics declarations

Competing interests.

The authors declare no competing interests.

Additional information

Publisher's note.

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License, which permits any non-commercial use, sharing, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if you modified the licensed material. You do not have permission under this licence to share adapted material derived from this article or parts of it. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by-nc-nd/4.0/ .

About this article

Cite this article.

Piao, C., Zhu, B., Jiang, J. et al. Research on prediction method of coal mining surface subsidence based on MMF optimization model. Sci. Rep. 14, 20316 (2024). https://doi.org/10.1038/s41598-024-71434-y

Received: 06 July 2024

Accepted: 28 August 2024

Published: 02 September 2024

DOI: https://doi.org/10.1038/s41598-024-71434-y

Keywords:

  • Surface subsidence prediction
  • Distributed monitoring
  • MMF optimization model
  • Coal seam mining
  • Sensing optical fiber

Keeping native bees buzzing requires rethinking pest control

Key points:

  • New research shows a strong correlation between pesticide use and declining sightings of wild bees, with pesticide use causing appearances of some species to drop as much as 56%.

  • The loss of wild bees could disrupt ecosystems, affecting plant survival and the wildlife dependent on those plants, while also posing a significant risk to agricultural productivity.

  • Researchers advocate for integrated pest management strategies and more long-term studies to better understand and mitigate the impact of pesticides on wild bees and other pollinators.

Native wild bees play a crucial ecological role, ensuring the survival and reproduction of countless plant species — including many agricultural crops — by spreading pollen as they forage for food. Unfortunately, their numbers seem to be declining, and despite experts suggesting multiple causes, the exact reason remains a mystery.

A new study published in Nature Sustainability sheds light on one potential cause: pesticide use. The research reveals a stark decline in the number of wild bee sightings, with appearances of some species dropping as much as 56% in areas of high pesticide use compared to areas with no pesticide use.

The study points to pesticides as a significant factor in wild bee decline and suggests that alternative pest control methods, such as those proposed by the U.S. Environmental Protection Agency, could reduce the damage.

Pesticide effects on wild bee populations scrutinized

Loss of wild bees could disrupt entire ecosystems, affecting not just plants but also the wildlife that depend on those plants for food and habitat. The multibillion-dollar agricultural industry could also suffer; wild bees, alongside honeybees, play a crucial role in pollinating three-quarters of food crops and nearly 90% of flowering plant species.

Recognizing the urgent threat posed by bee population declines, Laura Melissa Guzman of the USC Dornsife College of Letters, Arts and Sciences, along with an international team of researchers, set out to investigate the impact of pesticides on wild bees. They also examined the effects of agricultural practices and how the presence of honeybee colonies might influence wild bee populations.

Guzman, Gabilan Assistant Professor of Biological Sciences and Quantitative and Computational Biology, and the team inspected museum records, ecological surveys and community science data collected between 1996 and 2015 from across the contiguous United States.

Using advanced computational methods, they sifted through more than 200,000 unique observations of over 1,000 species — representing one-third of all known bee species in the U.S. — to assess how frequently different species were observed in various locations.

In addition, they analyzed data from several government sources, such as the U.S. Geological Survey's National Land Cover Database and Pesticide National Synthesis Project. The former tracks U.S. land cover types (crop, urban, forest, wetland, etc.) with snapshots taken every two to three years from 2001 to 2016, whereas the latter provides detailed data on pesticide use by county from 1992 to 2021.

By integrating these resources, the researchers correlated factors such as land use, pesticide application, honeybee colony presence, and types of agricultural crops with wild bee sightings over the past two to three decades.

Pesticides emerge as a top factor harming wild bees

Image: A cactus chimney bee sits on a flower's stamen, covered in pollen.

The research provides compelling evidence that pesticide use is a major contributor to the declining numbers of wild bees. The study found a strong correlation between pesticide use and fewer wild bee sightings, suggesting a direct link between pesticide exposure and bee population declines.

Some scientists have speculated that certain crops might adversely affect wild bees. However, Guzman and the team uncovered evidence to the contrary. Among crops frequented by pollinators, they found just as many wild bees in counties with a lot of agriculture versus a little.

Interestingly, the study hinted that the presence of colonies of honeybees, an invasive species, had almost no effect on wild bee populations, despite some evidence to the contrary. The researchers caution, however, that they need more detailed data and further study to confirm this conclusion.

“While our calculations are sophisticated, much of the spatial and temporal data is coarse,” Guzman said. “We plan to refine our analysis and fill in the gaps as much as possible.”

Wild bees need alternative pest management methods

The researchers view their findings as compelling evidence that alternative pest control strategies, such as integrated pest management, are essential for conserving these crucial pollinators.

Integrated pest management involves controlling pests by using natural predators, modifying practices to reduce pest establishment, and using traps, barriers and other physical means, with pesticide use reserved as a last resort.

The team also emphasizes the need for more long-term studies that collect data on more localized bee populations over extended periods. “We need to combine these large-scale studies that span continents with field experiments that expose bees to chemicals over longer periods and under natural conditions to get a clearer picture of how these chemicals affect bees,” Guzman said.

Building a case for better pesticide risk assessment

The current study builds on work published earlier this year by Guzman and scientists from Washington State University and Canada’s Université Laval. That study found that ecological risk assessments (ERAs) underestimate pesticide threats to wild bees and other pollinators.

Currently, ERAs measure pesticide effects on honeybees, often in lab studies, then extrapolate those findings to native bee species. However, Guzman and her colleagues revealed that current ERAs vary wildly — as much as a million-fold — when estimating how lethal pesticides are to honeybees. And many wild bees are even more sensitive to pesticides, compounding the problem, the research showed.

“When we only focus on the western honeybee, we’re ignoring the unique responses of other wild bee species to pesticide exposure,” Guzman said, calling for regulatory agencies, scientists and policymakers to rethink ERA methods.

“More data and analysis on the long-term effects of pesticides will help guide these efforts to the benefit of all pollinators, including wild bees,” Guzman said.

About the study

In addition to corresponding author Guzman, study authors include Elizabeth Elle and Leithen M’Gonigle of Simon Fraser University; Lora Morandin of the Pollinator Partnership; Neil Cobb of Biodiversity Outreach Network (BON); Paige Chesshire of BON and Northern Arizona University; Lindsie McCabe of the USDA-ARS Pollinating Insects Research Unit; Alice Hughes of the University of Hong Kong; and Michael Orr of State Museum of Natural History Stuttgart.

Funding came from the National Science Foundation grant number DBI-2216927 (iDigBees); USC Dornsife; Simon Fraser University and the Natural Sciences and Engineering Research Council of Canada; and the Liber Ero Fellowship Program.
