
A weak research question wastes months of work. This guide walks through the six-step formulation process, covers PICO, PEO, SPIDER, and FINER frameworks, and shows exactly how vague topics become focused, researchable questions with real before-and-after examples.
Every successful research project begins with a single, well-crafted question. Whether you’re launching a doctoral dissertation, preparing a systematic review, or tackling a master’s thesis, the formulation of research questions serves as the critical first step that shapes everything that follows.
Getting this step right matters more than most researchers initially realize. A precisely formulated research question determines your study design, sample size, data collection methods, analysis approach, and ultimately, the contribution your work makes to the scientific community. Get it wrong, and you’ll spend months or even years chasing answers to the wrong question.
A research question is a clear, focused statement that identifies the specific problem, target population, variables or phenomena, and context your study will address. It transforms a broad topic of interest into a precise inquiry that can be systematically investigated and answered through empirical evidence.
Think of your research question as the compass for your entire research project. The eventual answer to this question becomes your thesis statement for shorter projects (essays or 3,000–10,000 word assignments) or the primary objective for larger, multi-year studies. Without this compass, even the most ambitious research process loses direction.
Well-formulated research questions guide every subsequent decision you’ll make:
Study design and methodology selection
Sample size calculations and participant recruitment
Measurement tools and instruments
Time frame for data collection
Data analysis plan and statistical approaches
Reporting structure and dissemination strategy
Consider the difference between a vague topic and a focused research question:
Vague topic: “Climate change”
Focused research question: “How did the 2019–2023 Australian bushfires affect household energy-saving behaviours among residents of New South Wales?”
The second formulation immediately tells us who we’re studying (NSW residents), what we’re measuring (energy-saving behaviours), and when the relevant exposure occurred (2019–2023 bushfires). This specificity makes the research feasible, the methods clear, and the contribution identifiable.
Different research traditions require slightly different formulations. Quantitative research typically focuses on measurement and relationships. Qualitative research emphasizes meanings and experiences. Systematic review questions synthesize existing knowledge. Yet all follow the same core logic: focus, feasibility, and relevance to existing literature.

Research questions differ fundamentally by their purpose and the methods needed to answer them. Understanding these distinctions helps you formulate questions that match your goals and resources.
The main question types you’ll encounter include descriptive, comparative, explanatory, evaluative, and exploratory questions. Each serves different research aims and pairs with different methodological approaches.
Descriptive questions seek to characterize what exists or what is happening in a particular context. They often begin with “what” or “how many.” For example: “What proportion of UK university students reported moderate or severe anxiety in 2023–2024?” These questions establish baseline knowledge before more complex investigations can proceed.
Comparative questions examine differences between groups, settings, or time periods. They often use phrases like “compared with” or “differ from.” For example: “Do 12-year-olds in urban London perform differently in mathematics compared with 12-year-olds in rural Devon in 2024 national tests?”
Causal or explanatory questions investigate why something happens or to what extent one factor influences another. They often use “to what extent” or “how does X affect Y.” For example: “To what extent did remote-learning policies during the COVID-19 pandemic (2020–2021) affect reading attainment in French primary schools?”
Exploratory or qualitative questions seek to understand experiences, meanings, and processes from participants’ perspectives. They typically begin with “how do people describe” or “what is the experience of.” For example: “How do nurses in public hospitals in Nairobi describe their workload during night shifts in 2025?” These kinds of questions are best addressed using robust qualitative research methods and implementation practices and, in applied settings, generative research methods that uncover deep user needs and opportunities.
Evaluative or policy questions assess the effectiveness or impact of interventions, programs, or policies. For example: “How effective was the UK ‘Eat Out to Help Out’ scheme (August 2020) in supporting small restaurants compared with direct grant support?”
Quantitative research questions focus on measurement, relationships, and causal effects. They typically require numerical data, statistical analysis, and standardized instruments. Qualitative questions focus on meanings, experiences, and processes, requiring interviews, observations, and interpretive analysis.
Mixed-methods questions combine both approaches. A project might include both a “how much” component (measuring prevalence) and a “how/why” component (understanding experiences) within the same overall study, benefiting from mixed methods research that integrates qualitative and quantitative data and supported by a structured user research plan template for organizing multi-method studies.
In large, multi-year funded studies, it’s common to have one main research question supported by 3–5 sub-questions. Each sub-question addresses a distinct aspect of the overarching aim while contributing to a coherent whole.
Developing research questions isn’t a linear march from idea to final question. It’s an iterative process where you cycle through stages, refining your focus as new information emerges. This section provides a practical sequence suitable for undergraduate essays, master’s theses, and funded projects alike.
The core stages unfold as follows:
Start from a broad area of interest
Conduct preliminary, time-bounded literature scanning
Narrow the topic and identify a concrete problem or gap
Translate the problem into candidate research questions
Evaluate and refine questions using structured criteria (like FINER)
Align the final question with available methods and data
At each stage, make your decisions about time period, geographic scope, and population explicit. Rather than vague references to “recent years,” specify “between 2015 and 2024.” Instead of “in Europe,” specify “in EU member states.” This precision transforms a general topic into a researchable question.
Your research journey begins with identifying a general topic that captures your interest and has sufficient evidence to support investigation. This might emerge from coursework, professional experience, current events, or gaps you’ve noticed in practice.
Strong starting points are typically:
Timely topics with ongoing relevance (renewable energy adoption after 2010)
Issues with practical or policy implications (post-pandemic learning loss)
Emerging phenomena requiring systematic investigation (mental health of remote workers since 2020)
Problems affecting identifiable populations in specific contexts
Use brainstorming and concept mapping to expand your initial idea. List sub-themes, stakeholders, geographic contexts, disciplinary angles, and potential variables. Don’t narrow too quickly; this exploratory phase generates the raw material you’ll refine later.
Consider a student in 2025 interested in “social media and adolescents.” A concept map might include:
Screen time and sleep quality
Cyberbullying and school climate
Body image and eating behaviours
Academic performance and attention span
Parental monitoring strategies
Platform-specific effects (TikTok vs. Instagram vs. Snapchat)
From this web of related issues, specific research questions will eventually emerge.
For multi-year projects (PhD research planned for 2025–2028, for instance), personal interest matters enormously. You’ll spend years with this topic, so choose something that genuinely engages you and has long-term relevance to your career goals.
Before committing to a specific question, you need to understand what’s already known. This preliminary research phase reveals where your contribution might fit.
Key actions for effective literature scanning:
Search academic databases (Scopus, Web of Science, PubMed, ERIC) using your broad topic keywords
Limit initial searches to the last 5–10 years (e.g., 2015–2025) to capture current debates (see the search sketch after this list)
Identify 10–20 highly relevant recent articles that appear repeatedly in your searches
Locate key reports from authoritative bodies (WHO 2022 mental health report, IPCC 2021 climate assessment)
Find landmark older studies that are consistently cited across recent literature
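If you prefer to script this scanning step, the date bounds above can be applied programmatically. The following is a minimal sketch using the public NCBI E-utilities endpoint for PubMed; the query string, date range, and result count are placeholders to adapt to your own topic.

```python
# Minimal sketch of a date-bounded PubMed search via the NCBI E-utilities API.
# The query string and date range are placeholders; adapt them to your topic.
import requests

ESEARCH_URL = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi"

params = {
    "db": "pubmed",
    "term": "remote work AND work-life balance",  # broad topic keywords
    "datetype": "pdat",   # filter by publication date
    "mindate": "2015",
    "maxdate": "2025",
    "retmode": "json",
    "retmax": 20,         # roughly the 10-20 key articles suggested above
}

response = requests.get(ESEARCH_URL, params=params, timeout=30)
response.raise_for_status()
result = response.json()["esearchresult"]

print(f"Records matching the query: {result['count']}")
print("First PubMed IDs:", result["idlist"])
```

The same date-bounding logic applies in any database interface: fixing the publication window up front keeps your preliminary scan focused on current debates rather than the full historical literature.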
As you read, note recurring themes, ongoing debates, and methodological approaches. Pay special attention to “future research” sections where authors explicitly identify unanswered questions. These represent validated gaps that your work might address, much like how systematic user research practices help uncover unmet needs and guide product decisions.
A thorough literature review serves multiple purposes: it clarifies what’s already established, reveals where knowledge remains incomplete, and helps you position your potential contribution relative to existing work, especially when comparing different user research methods across qualitative and quantitative techniques.
Set up Google Scholar alerts or RSS feeds for key search terms. If a major 2025 paper shifts the field while you’re developing your proposal, you’ll want to know about it and adjust accordingly.
Use citation management tools (Zotero, Mendeley, EndNote) from the start. Organizing references early saves significant time when you begin writing and need to cite sources properly, especially when you are planning survey-based data collection and questionnaire design, drawing on advanced survey design practices that leverage AI and inclusive question options, or conducting in-depth user interviews that generate rich qualitative data.
The transition from broad interest to specific research problem is where many researchers struggle. Two main strategies help: gap-spotting and problematization, both of which are covered in depth in methodological guides to research problem formulation and comprehensive user research frameworks that link questions to methods and insights.
Gap-spotting involves identifying where existing literature falls short, and structured approaches to formulating and refining research problems can make this process more systematic, similar to how product teams distinguish between generative vs evaluative research when planning studies and draw on broad market research resources that cover methodologies, tools, and strategy:
Population gaps: Studies exist on smartphone use and sleep among US college students (2016–2023), but few examine vocational trainees in Germany
Temporal gaps: Pre-pandemic findings may not apply to post-2020 contexts
Geographic gaps: Extensive North American research but limited evidence from sub-Saharan Africa
Methodological gaps: Quantitative studies exist but qualitative understanding is missing
Problematization questions dominant assumptions in the field:
Does homework actually improve learning for all age groups, or might it be counterproductive for primary school children aged 7–11?
Is the assumption that more screen time always harms wellbeing supported by nuanced evidence?
A well-formulated research problem statement might read: “Despite extensive research on teacher burnout in North America, there is limited evidence from public secondary schools in sub-Saharan Africa after COVID-19 (2020–2023). This gap matters because educational systems in these contexts face distinct resource constraints and post-pandemic challenges.”
Your problem statement should clearly specify:
Who is affected (population or phenomenon)
In what setting (geographic, institutional, or social context)
During what period (temporal boundaries)
Why it matters (practical importance or theoretical significance)
Now you’re ready to translate your problem statement into actual research questions. This drafting stage should generate multiple options; you’ll refine them later.
Effective research questions typically:
Begin with “how,” “why,” “what,” or “to what extent”
Avoid yes/no phrasing (replace “Does X affect Y?” with “How does X influence Y, and to what extent?”)
Specify population, context, and timeframe
Identify key variables or concepts clearly
Example transformation:
Problem statement: “Remote work increased sharply in the EU after 2020, but its long-term effect on employees’ work–life balance remains unclear.”
Candidate questions:
“How has the shift to remote work since March 2020 affected perceived work–life balance among IT employees in Germany?”
“What factors explain variations in work–life balance outcomes among remote workers in Berlin start-ups between 2020 and 2024?”
“To what extent does remote work duration (partial vs. full-time) influence work–life boundary management among knowledge workers in three EU countries (2021–2025)?”
Each version offers a different scope: single country versus cross-country, single factor versus multiple explanatory factors. Generate 3–6 plausible options with varying breadth and focus.
At this drafting stage, don’t obsess over perfection. Your aim is creating workable options that you’ll evaluate systematically in the next step.
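A quick automated sanity check can still flag drafts that break the criteria listed earlier in this step. The sketch below is illustrative only: the keyword stems, thresholds, and sample drafts are assumptions for demonstration, not a validated rule set.

```python
# Quick heuristic check on draft questions against the drafting criteria above.
# The stems and thresholds are illustrative assumptions, not a validated rule set.
import re

YES_NO_STEMS = ("does ", "is ", "are ", "do ", "can ", "will ")

def review_draft(question: str) -> list[str]:
    """Return a list of warnings for a candidate research question."""
    warnings = []
    q = question.strip().lower()
    if q.startswith(YES_NO_STEMS):
        warnings.append("Starts like a yes/no question; consider 'how' or 'to what extent'.")
    if not re.search(r"\b(19|20)\d{2}\b", question):
        warnings.append("No explicit timeframe (e.g., a year range).")
    if len(question.split()) < 12:
        warnings.append("May be too broad; specify population, context, and variables.")
    return warnings

drafts = [
    "Does remote work affect work-life balance?",
    "How has the shift to remote work since March 2020 affected perceived "
    "work-life balance among IT employees in Germany?",
]

for draft in drafts:
    print(draft)
    for warning in review_draft(draft) or ["Looks reasonably specific."]:
        print("  -", warning)
```

A checker like this is no substitute for FINER, but it catches the most common surface problems (yes/no stems, missing timeframe, overly broad phrasing) before you invest in systematic evaluation.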

The FINER framework provides a structured way to evaluate whether your candidate questions are truly viable for research. Each letter represents an essential criterion.
Feasible: Can the question be answered with available time, data access, budget, expertise, and achievable sample size? Consider your actual constraints: a two-year master’s project can’t accomplish what a five-year funded study might.
Interesting: Does the question interest you personally? Will it engage the relevant academic or professional community? Research requires sustained motivation, and questions that bore you will produce tedious projects.
Novel: Does the question add something new? This might be updated data post-2020, a previously unstudied population, an innovative method, or a fresh theoretical angle. Novelty doesn’t require revolutionizing a field; incremental contributions that extend existing knowledge matter too.
Ethical: Can the research be conducted while respecting participants’ rights and meeting institutional review board standards? Some questions, however interesting, require designs that ethics committees won’t approve.
Relevant: Does the question matter to current science, practice, or policy? Consider alignment with recognized priorities (like Sustainable Development Goals) or urgent practical needs in your field.
Applying FINER to a concrete example:
Question: “How did emergency remote teaching during the COVID-19 school closures of 2020–2021 affect mathematics performance among 10–12-year-old students in public schools in Madrid?”
Feasible? Data on student performance may be accessible through educational authorities; the population is identifiable; timeframe allows retrospective analysis. Probably feasible.
Interesting? Learning loss during COVID-19 remains a pressing concern in educational research and policy. Highly interesting.
Novel? While pandemic learning loss has been studied extensively in the US and UK, Spanish-context evidence, particularly for specific age groups in specific subjects, may be less developed. Potentially novel contribution.
Ethical? Using de-identified educational records raises minimal ethical concerns. Data access permissions would be needed but the design itself is ethically straightforward. Ethically feasible.
Relevant? Findings could inform educational recovery policies and resource allocation in Spain and similar contexts. Clearly relevant.
If any FINER dimension fails (perhaps the data you need became inaccessible after policy changes), the question must be revised, narrowed, or redirected. Experienced researchers often revisit FINER multiple times as proposals develop or reviewer feedback arrives.
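When comparing several candidate questions, it can help to record each FINER review as a small structured checklist so revisions stay traceable. A minimal sketch is below; the ratings simply restate the Madrid assessment above and are judgments, not computed values.

```python
# A small structured record for FINER reviews of candidate questions.
# The ratings restate the Madrid example above; they are judgments, not computed values.
from dataclasses import dataclass, field

@dataclass
class FinerReview:
    question: str
    feasible: bool
    interesting: bool
    novel: bool
    ethical: bool
    relevant: bool
    notes: dict = field(default_factory=dict)

    def passes(self) -> bool:
        """True only if every FINER dimension is judged acceptable."""
        return all([self.feasible, self.interesting, self.novel,
                    self.ethical, self.relevant])

madrid = FinerReview(
    question=("How did emergency remote teaching during the COVID-19 school "
              "closures of 2020-2021 affect mathematics performance among "
              "10-12-year-old students in public schools in Madrid?"),
    feasible=True, interesting=True, novel=True, ethical=True, relevant=True,
    notes={"feasible": "Depends on access to administrative performance data."},
)

print("Proceed to design." if madrid.passes() else "Revise or redirect.")
```

Keeping the notes alongside each rating makes it easy to see, at the next revision, why a question was judged feasible or novel and whether that judgment still holds.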
A well-formulated question naturally suggests appropriate methods. Misalignment between question and design is one of the most common reasons proposals face criticism or rejection.
How question structure suggests methodology:
“To what extent does X affect Y?” → Quantitative designs requiring numerical measurement, often quasi-experimental or longitudinal approaches
“How do participants experience or describe X?” → Qualitative designs using interviews, focus groups, or observations analyzed through thematic or phenomenological approaches
“What is the prevalence of X, and how do affected individuals understand their situation?” → Mixed-methods designs combining surveys with in-depth interviews, often supported by structured user interview question templates that streamline qualitative data collection
Well-structured questions also make it easier to choose UX research tools that align with your data collection and testing needs. Matched examples:
Question focused on effect size: “To what extent did a school breakfast program implemented in 2022 improve attendance rates among primary school students in Birmingham by 2024?” → Quasi-experimental design comparing intervention and comparison schools, using administrative attendance data.
Question about lived experience: “How do patients starting a new cancer immunotherapy introduced in 2022 describe their experience of managing treatment-related side effects?” → Qualitative phenomenological study with in-depth interviews and thematic analysis.
Question combining prevalence and explanation: “What is the prevalence of cyberbullying among secondary school students in three English cities in 2023, and how do affected students understand its impact on their school engagement?” → Convergent mixed-methods design with survey component and follow-up interviews, similar in spirit to qualitative research used in product development to understand user experiences and behaviors.
Before finalizing your question, verify that appropriate datasets, field sites, or archives exist and can be accessed within your project timeline. A brilliant question that can’t be answered with obtainable data is, practically speaking, useless. In applied settings, this also means confirming that you can recruit suitable participants for your product or user research studies and design appropriate qualitative formats, such as focus group discussions that elicit rich perspectives from target users.
Structured frameworks help ensure all essential elements of a question are covered and expressed consistently. They’re particularly valuable when designing clinical trials, systematic reviews, and complex qualitative or mixed-methods studies.
These frameworks are tools, not rigid rules. Researchers adapt them to disciplinary conventions and data availability. The value lies in their systematic approach to ensuring completeness: they force you to specify who, what, compared with what, and with what outcome.
PICO is the most widely used framework in clinical research and healthcare-related systematic review design. Each letter represents a key component:
Patient/Population: Who are you studying?
Intervention: What treatment, exposure, or intervention are you examining?
Comparison: What is the alternative (standard care, placebo, different intervention)?
Outcome: What results are you measuring?
PICOT adds Time or Timeframe, particularly important for studies where the duration of follow-up or the period of intervention matters.
Concrete PICOT example for a 2025 clinical question:
Population: Adults aged 40–65 with newly diagnosed type 2 diabetes in primary care in the UK
Intervention: A 12-week smartphone-based self-management app
Comparison: Standard care without app support
Outcome: Change in HbA1c levels after 6 months
Time: Recruitment in 2025–2026, follow-up at 6 and 12 months
Converted to a full research question: “Among adults aged 40–65 with newly diagnosed type 2 diabetes in UK primary care, how effective is a 12-week smartphone-based self-management app compared with standard care in reducing HbA1c levels at 6 and 12 months?”
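One lightweight way to keep PICOT components explicit while drafting a protocol is to store them as structured fields and assemble the question text from them. The sketch below reuses the example above; the sentence template and class name are illustrative, not a standard.

```python
# Storing PICOT components as structured fields and assembling the question text.
# The sentence template is illustrative; adapt the wording to your discipline.
from dataclasses import dataclass

@dataclass
class Picot:
    population: str
    intervention: str
    comparison: str
    outcome: str
    time: str

    def to_question(self) -> str:
        return (f"Among {self.population}, how effective is {self.intervention} "
                f"compared with {self.comparison} in {self.outcome} ({self.time})?")

diabetes_app = Picot(
    population="adults aged 40-65 with newly diagnosed type 2 diabetes in UK primary care",
    intervention="a 12-week smartphone-based self-management app",
    comparison="standard care without app support",
    outcome="reducing HbA1c levels",
    time="follow-up at 6 and 12 months",
)

print(diabetes_app.to_question())
```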
Specifying each PICO/PICOT component directly helps define:
Inclusion and exclusion criteria for participant recruitment
Sample size calculations based on expected effect sizes (see the sketch after this list)
Data collection schedule and measurement points
Clear criteria for what constitutes the intervention and comparison conditions
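To illustrate how the Outcome and Comparison components feed a sample size calculation, here is a hedged sketch using statsmodels. The assumed minimal important difference in HbA1c (0.5 percentage points) and standard deviation (1.1) are placeholder values for demonstration, not figures drawn from the example.

```python
# Illustrative two-arm sample size calculation for the PICOT example above.
# The effect size inputs are assumptions for demonstration, not study values.
from statsmodels.stats.power import TTestIndPower

assumed_difference = 0.5   # hypothetical minimal important difference in HbA1c (%)
assumed_sd = 1.1           # hypothetical pooled standard deviation of HbA1c (%)
effect_size = assumed_difference / assumed_sd  # Cohen's d

analysis = TTestIndPower()
n_per_group = analysis.solve_power(
    effect_size=effect_size,
    alpha=0.05,             # two-sided significance level
    power=0.80,             # desired statistical power
    alternative="two-sided",
)
print(f"Approximate participants needed per arm: {round(n_per_group)}")
```

Changing the assumed difference or standard deviation shows immediately how sensitive recruitment targets are to the Outcome definition, which is exactly why the PICOT components need to be pinned down before the design is costed.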
Funders and ethics committees now routinely expect PICO/PICOT-based questions in clinical and public health proposals. This framework demonstrates methodological rigor and clarity of purpose, qualities that are equally valued in comprehensive UX research practice for digital products and in many professional product, market, and UX research roles advertised globally.
Qualitative and service evaluation questions require frameworks that emphasize experience, setting, and perspective rather than intervention and outcome alone.
PEO (Population, Exposure, Outcome/Experience) works well for qualitative or observational questions where there’s no controlled intervention:
Population: Who is being studied?
Exposure: What condition, context, or experience are they exposed to?
Outcome/Experience: What experiences or perspectives are being explored?
PEO example: “How do parents (Population) of children diagnosed with autism between 2018 and 2023 (Exposure timeframe) describe their experiences (Experience) with early-intervention services in Ontario?”
SPIDER (Sample, Phenomenon of Interest, Design, Evaluation, Research type) suits qualitative and mixed-methods questions and is particularly helpful when planning qualitative interview question design and execution, drawing on broader guides that cover user research, UX design, and interview best practices, selecting from practical qualitative interview questions for user research, and adapting ready-to-use user interview script templates for different study objectives:
Sample: The specific group being studied
Phenomenon of Interest: The focus of the inquiry
Design: The qualitative approach (interviews, focus groups, observation)
Evaluation: What aspects are being assessed
Research type: Qualitative, quantitative, or mixed-methods
SPIDER example: Nurses working in intensive care units (Sample), burnout and moral distress (Phenomenon), in-depth semi-structured interviews (Design), perceived coping strategies (Evaluation), qualitative (Research type).
SPICE (Setting, Perspective, Intervention/Interest, Comparison, Evaluation) works for service or intervention evaluation questions:
Setting: Context for the research
Perspective: Whose viewpoint is being examined
Intervention/Interest: What is being examined
Comparison: Alternative approaches
Evaluation: How success is being measured
ECLIPSE (Expectation, Client group, Location, Impact, Professionals, Service) is designed for health service and public health evaluation:
ECLIPSE example: Evaluation of a university student mental health helpline service launched in 2021, examining impact on service users (client group), delivered in UK universities (Location), staffed by trained counsellors (Professionals), with evaluation criteria including accessibility, user satisfaction, and referral outcomes.
These frameworks push researchers to clarify who is speaking, from what standpoint, in which setting, and with what kind of evidence. Using PEO or SPIDER helps make qualitative questions sufficiently focused to guide purposive sampling and analytic strategies, which is crucial when later analyzing qualitative research data to inform product decisions, applying a structured research synthesis template to turn raw findings into clear insights, and drawing on UX research methods that translate user feedback into concrete design improvements.
Seeing transformations from weak to strong questions makes abstract principles concrete. The following examples show how revision improves focus, measurability, and researchability, much like practical collections of qualitative research questions for user research do for applied projects, alongside broader guides to UX research methods product managers need to know, end-to-end user research guides tailored for product managers, and detailed usability testing guides that connect research questions to interface evaluations.
Weak: “What are the effects of social media on teenagers?”
Strong: “How does daily TikTok use of more than two hours affect self-reported body image among 13–16-year-old girls in public secondary schools in Manchester in 2024?”
What changed: The revised version specifies the platform (TikTok), a usage threshold (more than two hours daily), the outcome (self-reported body image), the population (girls aged 13–16), the setting (Manchester public secondary schools), and the timeframe (2024).
Weak: “Is climate change bad?”
Strong: “How have average summer temperatures in Southern Spain changed between 1980 and 2020, and what has been the associated trend in heat-related hospital admissions?”
What changed: The opinion-seeking question became an analytical question requiring data and interpretation. Specific variables (summer temperatures, hospital admissions), geographic scope (Southern Spain), and temporal range (1980–2020) transform this into a researchable empirical question.
Weak: “Why do students drop out of university?”
Strong: “What factors do first-generation university students in England identify as most influential in their decision to withdraw during the first year of undergraduate study (2022–2024)?”
What changed: The broad “why” question is refined to focus on a specific population (first-generation students), timeframe (2022–2024), and outcome (factors influencing withdrawal), making it feasible and focused.
Formulating research questions is a foundational step that shapes the entire research process. Strong research questions are clear, focused, and aligned with your study objectives, guiding your research design, data collection, and analysis. Utilizing frameworks such as PICO, PEO, or SPIDER, and applying criteria like FINER, helps ensure your questions are feasible, interesting, novel, ethical, and relevant.
Avoid common pitfalls by conducting thorough preliminary research, narrowing your topic thoughtfully, and iterating your questions based on feedback and reflection. For practitioners, strong questions also underpin how product managers use user research to make better decisions, serve as the backbone of user research–driven product management workflows, guide UX research that improves complex B2B product design, enable research-driven UX design strategies that tie user insights to product outcomes, and support the creation of research-backed user personas that inform product and UX decisions. Remember, a well-crafted research question not only directs your own work but also produces knowledge that contributes meaningfully to the scientific community and beyond.
Keep these key points in mind as you embark on your research journey to develop questions that are both rigorous and impactful, ultimately setting the stage for successful and valuable studies.