Summary - Class notes - Methodology: Operationalization, Design and Analyses

Course
- Methodology: Operationalization, Design and Analyses
- 2020 - 2021
- Universiteit van Amsterdam
- Psychology


  • Lecture 1: Introduction

  • What is the aim of research?
    1. What has occurred?
    2. Why has this occurred? How can we explain this?
    3. What has caused this?
  • When do you use an exploratory research question?
    To find out what is happening in little-understood situations, develop new questions, and understand phenomena
  • When do you use descriptive research questions?
    To portray an accurate profile of persons, events, or situations
  • When do you use explanatory research questions?
    To find an explanation of a situation, problem, or pattern, and to identify relationships between phenomena
  • What does the moderator do?
    It affects the direction or the strength of the relation between an independent and dependent variable.
  • What does the moderator specify?
    It specifies under which condition certain effects take place
  • What does the mediator do?
    A mediator variable represents the mechanism through which the independent variable affects the dependent variable
  • What does the mediator specify?
    It specifies how and why an effect exists (see the regression sketch after this list)
  • A fixed design is driven by
    It is theory-driven
  • A fixed design focuses on outcomes such as
    • Descriptive questions 
    • Explanatory questions
  • What is a conceptual framework?
    It illustrates what you expect to find through your research
  • What kind of questions do you answer with descriptive questions?
    What? How much? To what extent? Who?
  • What kind of questions do you answer with explanatory questions?
    How? Why? Causality?
  • Which design focuses on quantitative data?
    Fixed design
  • A flexible design is aimed at
    Theory development
  • A flexible design focuses on
    The process; exploratory research questions
  • Which design is based on qualitative data?
    Flexible design
  • What kinds of experiments are part of fixed designs?
    • Real experiments 
    • Quasi-experiments 
    • Non-experimental designs 
  • What kinds of research are part of flexible designs?
    • Case studies 
    • Ethnographic research 
    • Grounded theory research 
  • Limitations of experimental designs:
    • Independent variables can't always be manipulated 
    • Manipulation isn't always ethical 
    • Random assignment doesn't always lead to equivalent groups 
    • Sometimes another design is more appropriate for the research question 
  • Hierarchical order for recognizing validity threats: 
    1. Do the statistical conclusions make sense?
    2. Do the operationalizations actually say something about the abstract underlying psychological concepts?
    3. Is there really a causal relationship?
    4. Does what I find only apply to Organization Y at Time X?
  • What kind of question do you ask when you want to assess statistical validity?
    Does the relationship between variables occur because of more than just coincidence?
  • What kind of question do you ask when you want to assess construct validity?
    Can the operationalizations of the constructs be interpreted in a different way?
  • What kind of question do you ask when you want to assess internal validity?
    Can the observed relationship be interpreted as a cause-effect relationship?
  • What kind of question do you ask when you want to assess external validity?
    Can the conclusions of the research be generalized to other people, situations, and times?
  • Threats to statistical validity
    1. Low statistical power 
    2. Fishing 
    3. Unreliable measurement instruments 
    4. Inadequate standardization of the experimental intervention 
    5. Coincidental differences in the experimental situation 
    6. Coincidental differences between the groups 
  • What does the threat of low statistical power mean?
    We usually test for significance, but significance testing only guards against Type I errors; with low power, a true effect can easily be missed (see the error-rate note after this list).
  • What does the threat of fishing mean?
    It means that we do not specify any hypotheses about relationships in advance, but simply look for significant correlations between variables. In such a correlation matrix, some relationships will always be significant purely by coincidence.
  • What is positive about fishing?
    It indicates that further research is needed
  • What is negative about fishing?
    The findings are not statistically valid until the research is replicated
  • What is the solution to the Fishing Problem?
    Bonferroni method; correcting the significance level for the number of statistical tests (a worked example follows this list).
  • What does reliability mean?
    The extent to which a replication of the measure would yield similar results
  • What are solutions for unreliability?
    • Longer tests 
    • Aggregated entities 
    • Corrections for unreliability (see the attenuation formula after this list) 
  • When is inadequate standardization of the experimental procedure a threat?
    When treatment or instructions are not the same for all participants. The result is inflated error variance, which decreases the chance that a true difference will be detected.
  • What is a solution for inadequate standardization of the experimental procedure?
    Try to standardize everything
  • What is convergent validity?
    Different operationalizations of the same construct should strongly correlate
  • What is discriminant validity?
    Operationalizations of different constructs should not correlate, or should correlate only weakly
  • Threats to construct validity
    1. Construct underrepresentation
    2. Surplus construct irrelevancies 
    3. Mono-method bias
    4. Demoralized control group 
    5. Fear of evaluation
    6. Expectations of the researcher 
    7. Hypothesis guessing
  • When does internal validity occur?
    It occurs if a study can plausibly demonstrate a causal relationship between the treatment and the outcome
  • What kind of question do you ask yourself when you want to assess internal validity?
    Is the experimental procedure the cause of the effect, or is the effect caused by something else?
  • Threats to internal validity
    1. History
    2. Maturation
    3. Test effects
    4. Instrumentation
    5. Regression to the mean
    6. Selection
    7. Mortality/attrition
    8. Interactions with selection
    9. Uncertainty over causal influences
  • What are threats to external validity about?
    They are about the generalizability of the findings to other people, situations, and times
  • What are general measures to increase external validity?
    1. Random sampling for representativeness
    2a. Deliberate sampling for heterogeneity 
    2b. Deliberate sampling for maximal differences 
    3. Generalizing to the modal instance 
    4. Replication
  • When can threats to external validity exist?
    If they have a specific character
  • What kind of specific character (of the threats)?
    1. They apply only to the specific group studied 
    2. They apply only to the specific context of the study 
    3. They are affected by specific, unique historical influences 
    4. The measured constructs are specific to the group studied
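
A minimal sketch of how the moderator and mediator flashcards above are usually written as regression models; the symbols (X = independent variable, M = moderator or mediator, Y = dependent variable) and coefficients are textbook notation, not taken from the lecture itself:

```latex
% Moderation: M changes the strength or direction of the X -> Y relation,
% which appears as the interaction term b_3 (X \times M).
Y = b_0 + b_1 X + b_2 M + b_3 (X \times M) + e

% Mediation: X affects Y through the mechanism M.
M = a_0 + a X + e_M
Y = c_0 + c' X + b M + e_Y
% indirect (mediated) effect = a \cdot b; direct effect = c'
```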
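
As a note on the low-power threat above: significance testing fixes the Type I error rate, but by itself says nothing about Type II errors. In standard notation (not from the notes):

```latex
\alpha = P(\text{reject } H_0 \mid H_0 \text{ true})          % Type I error rate
\beta  = P(\text{fail to reject } H_0 \mid H_0 \text{ false}) % Type II error rate
\text{power} = 1 - \beta                                      % chance of detecting a true effect
```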
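
A worked example of the Bonferroni method mentioned as the solution to the fishing problem; the numbers are illustrative only:

```latex
\alpha_{\text{per test}} = \frac{\alpha}{m}
% e.g. with an overall \alpha = .05 and m = 10 correlations tested:
% \alpha_{\text{per test}} = .05 / 10 = .005
```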
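
One common form of the "corrections for unreliability" listed above is Spearman's correction for attenuation; a sketch in standard notation, where r_xx and r_yy are the reliabilities of the two measures (the example values are made up):

```latex
r_{\text{corrected}} = \frac{r_{xy}}{\sqrt{r_{xx}\, r_{yy}}}
% e.g. r_{xy} = .30 with reliabilities r_{xx} = .80 and r_{yy} = .70:
% r_{\text{corrected}} = .30 / \sqrt{.56} \approx .40
```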

Latest added flashcards

When do you use a control group?
  • It is not clear what the 'normal' level of the dependent variable is 
  • You're interested in comparing the treatment to a normal population
When do you use pre- and post-testing?
  • When pre-testing appears to be unlikely to influence the effect of the treatment
  • There are concerns about whether random assignment has produced equivalent groups 
  • Individual differences between participants are likely to mask treatment effects
  • You are interested in investigating change
When do you use a repeated measures design?
  • Order effects appear unlikely 
  • The independent variables lend themselves to repeated measurement 
  • In real life, people would be likely to be exposed to different treatments 
  • Individual differences between subjects are likely to mask treatment effects 
When do you use a matched pairs design?
  • You have a matching variable which correlates highly with the dependent variable 
  • Obtaining the score on the matching variable is unlikely to influence the treatment effects 
  • Individual differences between subjects are likely to mask treatment effects (effects of the independent variable) 
When do you use a parametric design?
  • The independent variable has a range of values or levels of interest
  • You wish to investigate the form or nature of the relationship between independent and dependent variables. 
When do you use a factorial design?
  • You are interested in more than one independent variable
  • Interactions among independent variables may be of concern, or you are specifically interested in these interactions (moderation model) 
When do you use a simple two group design?
  • When order effects are likely 
  • Independent variables are not suitable for repeated measurement 
  • In real life, people wouldn't often receive multiple treatments 
  • People might be expected to be sensitized by pre-testing on a matching variable 
What does a repeated measures design mean?
Designs where the same participant is tested under two or more experimental conditions (an extreme form of a matched pairs design)
What do you do with the participants in a matched pairs design?
Establish pairs of participants with similar scores on a variable known to be related to the dependent variable, then randomly allocate the members of each pair to different experimental groups.
How many levels do you have with a parametric design?
Several 'levels' of the independent variable are covered, with random allocation of participants to groups, to get a view of the effect of the independent variable over a range of values.