Summary: Class notes - Methodology: Operationalization, Design and Analyses

Course
- Methodology: Operationalization, Design and Analyses
- 2020 - 2021
- Universiteit van Amsterdam
- Psychology

  • Lecture 1: Introduction

  • What aims can research fulfill?
    1. What has occurred?
    2. Why has this occurred? How can we explain it? (mediator)
    3. What has caused this? (causal effect)
  • When do you use an exploratory research question?
    To find out what is happening in little-understood situations, develop new questions, and understand phenomena. These questions are usually addressed with qualitative methods (interviews/observations). 
  • When do you use descriptive research questions?
    To portray an accurate profile of persons, events, or situations. 
  • When do you use explanatory research questions?
    To find an explanation of a situation, problem, or pattern, and to identify relationships between phenomena (finding an explanation, uncovering the mechanism).
  • What does the moderator do?
    It affects the direction or the strength of the relation between an independent and dependent variable.
  • What does the moderator specify?
    It specifies under which condition certain effects take place. The when in the research question.
  • What does the mediator do?
    A mediator variable represents the mechanism through which the independent variable affects the dependent variable. It also explains the relation between the independent variable and the dependent variable. 
  • What does the mediator specify?
    It specifies how and why an effect exists
  • A fixed design is driven by
    Theory; there is already a conceptual framework. 
  • A fixed design focuses on outcomes such as
    • Descriptive questions (What? How much?)
    • Explanatory questions (How? Why?)
  • What is a conceptual framework?
    It illustrates what you expect to find through your research
  • What kind of questions do you answer with descriptive questions?
    What? How much? To what extent? Who?
  • What kind of questions do you answer with explanatory questions?
    How? Why? Causality?
  • Which design focuses on quantitative data?
    Fixed design
  • A flexible design is aimed at
    Theory development
  • A flexible design focusses on
    On the process; explorative research questions
  • Which design is based on qualitative data?
    Flexible design
  • If you are focused on outcomes, what kind of design do you choose?
    A fixed design
  • If a research question is descriptive and you focus on the outcomes, which design do you choose?
    Non-experimental fixed design
  • If you focus on the process, what kind of studies can you do?
    Exploratory and explanatory.
  • What kind of experiments are part of fixed designs?
    • True experiments (random assignment)
    • Quasi-experiments (no random assignment)
    • Non-experimental designs 
  • What kinds of research are part of flexible designs?
    • Case study 
    • Ethnographic research 
    • Grounded theory research 
  • Limitations of experimental designs:
    • independent variables can't always be manipulated 
    • manipulation isn't always ethical 
    • random assignment doesn't always lead to equivalent groups 
    • sometimes another design is more appropriate for the research question  
  • Hierarchical order for recognizing validity threats: 
    1. Do the statistical conclusions make sense?
    2. Do the operationalizations actually say something about the abstract underlying psychological concepts?
    3. Is there really a causal relationship?
    4. Does what I find only apply to Organization Y at Time X?
  • What kind of question do you ask when you want to assess statistical validity?
    Does the relationship between variables occur because of more than just coincidence?
  • What kind of question do you ask when you want to assess construct validity?
    Can the operationalizations of the constructs be interpreted in a different way? Can you generalize them to the construct you intended to measure?
  • What kind of question do you ask when you want to assess internal validity
    Can the observed relationship be interpreted as a cause-effect relationship?
  • What kind of question do you ask when you want to assess external validity?
    Can the conclusions of the research be generalized to other populations or people, situations, and times?
  • Threats to statistical validity
    1. Low statistical power 
    2. Fishing 
    3. Unreliable measurement instruments 
    4. Inadequate standardization of the experimental intervention 
    5. Coincidental differences in the experimental situation 
    6. Coincidental differences between the groups 
  • What does power mean?
    The probability that you will observe a treatment effect when there truly is one.
  • What does the threat to low statistical power mean?
    We usually test for significance; however, this only guards against Type I errors. When the power is too low, few true effects will be detected and few hypotheses will be supported.
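As a rough illustration (not part of the course materials), the sketch below uses Python's statsmodels to show how the power of a simple two-group t-test depends on sample size, assuming a medium effect size of d = 0.5; all numbers are illustrative.

```python
# Illustrative sketch: power of an independent-samples t-test for an
# assumed medium effect (Cohen's d = 0.5) at alpha = .05.
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()
for n_per_group in (20, 50, 100):
    power = analysis.power(effect_size=0.5, nobs1=n_per_group, alpha=0.05)
    print(f"n = {n_per_group:3d} per group -> power = {power:.2f}")
# With 20 participants per group the power stays well below the conventional
# .80, so many true effects would go undetected (Type II errors).
```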
  • What does the threat to Fishing mean?
    It means that we do not specify any hypotheses about relationships in advance but simply look for significant correlations between variables. In such a correlation matrix, there will almost always be some significant relationship purely by coincidence.
  • What is positive about fishing?
    It can indicate that further research is needed.
  • What is negative about fishing?
    The findings are not statistically valid until the research has been replicated.
  • What is the solution to the Fishing Problem?
    The Bonferroni method: correcting the significance level for the number of statistical tests.
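A minimal sketch of the Bonferroni idea, using made-up p-values (not course data): divide the significance level by the number of tests, or equivalently multiply the p-values.

```python
# Illustrative sketch: Bonferroni correction for five unplanned tests.
from statsmodels.stats.multitest import multipletests

p_values = [0.003, 0.020, 0.045, 0.300, 0.810]  # hypothetical p-values
alpha = 0.05

print("Corrected alpha:", alpha / len(p_values))  # 0.05 / 5 = 0.01

reject, p_adjusted, _, _ = multipletests(p_values, alpha=alpha, method="bonferroni")
print("Adjusted p-values:", p_adjusted.round(3))
print("Still significant:", reject)  # only the test with p = .003 survives
```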
  • What does reliability mean?
    The extent to which a replication of the measure would yield similar results
  • What are solutions for reliability?
    • Longer tests 
    • Aggregated entities 
    • Corrections for unreliability 
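One of these solutions, correcting for unreliability, can be illustrated with the classic correction for attenuation; the reliabilities below are assumed values, not results from the course.

```python
# Illustrative sketch: Spearman's correction for attenuation,
# r_corrected = r_observed / sqrt(rel_x * rel_y)
from math import sqrt

r_observed = 0.30           # hypothetical observed correlation
rel_x, rel_y = 0.70, 0.80   # hypothetical reliabilities of the two measures

r_corrected = r_observed / sqrt(rel_x * rel_y)
print(f"Correlation corrected for unreliability: {r_corrected:.2f}")  # ~0.40
```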
  • When is invalid standardization of the experimental procedure a threat?
    When the treatment or instructions are not the same for all participants. The result is inflated error variance, which decreases the chance that a true difference will be detected.
  • What is a solution for invalid standardization of the experimental procedure?
    Try to standardize everything
  • What is convergent validity? (construct validity)
    Different operationalizations of the same construct should strongly correlate
  • What is discriminant validity? (construct validity)
    Operationalizations of different constructs should not correlate or only weakly
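A small illustration (simulated data, hypothetical scale names) of what convergent and discriminant validity look like in a correlation matrix.

```python
# Illustrative sketch: two operationalizations of the same construct should
# correlate strongly (convergent), and only weakly with another construct
# (discriminant). Data and variable names are made up.
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
anxiety = rng.normal(size=200)
df = pd.DataFrame({
    "anxiety_questionnaire": anxiety + rng.normal(scale=0.5, size=200),
    "anxiety_physiological": anxiety + rng.normal(scale=0.5, size=200),
    "extraversion_scale":    rng.normal(size=200),
})
print(df.corr().round(2))
```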
  • Threats to construct validity
    1. Construct underrepresentation
    2. Surplus construct irrelevancies 
    3. Mono-method bias
    4. Demoralized control group 
    5. Fear of evaluation
    6. Expectations of the researcher 
    7. Hypothesis guessing
  • When does internal validity occur?
    It occurs if a study can plausibly demonstrate a causal relationship between the treatment and the outcome
  • What kind of question do you ask yourself when you want to assess internal validity?
    Is the experimental procedure the cause of the effect, or is the effect caused by something else?
  • Threats to internal validity
    1. History (events outside the experiment)
    2. Maturation (change over time)
    3. Test effects (people become familiar with the test)
    4. Instrumentation (the measurement instrument changes over the course of the experiment)
    5. Regression to the mean
    6. Selection (group characteristics)
    7. Mortality/attrition (participants who drop out during the experiment)
    8. Interactions with selection (selection can cause differences in scores)
    9. Uncertainty over causal influences (does A cause B or does B cause A?)
  • What are threats to external validity about?
    It's about the generalizability of the findings to other people, situations, and times.
  • What are general measures to increase external validity?
    1. Random sampling for representativeness
    2a. Deliberate sampling for heterogeneity (a broad range of people); we want high heterogeneity. 
    2b. Deliberate sampling for maximal differences
    3. Generalizing to the modal (typical) instance
    4. Replication
  • When can threats to external validity exist?
    If they have a specific character
  • What kind of specific character (of the threats)?
    1. They apply only to the specific group studied 
    2. They apply only to the specific context of the study 
    3. They are affected by specific, unique historical influences 
    4. The measured constructs are specific to the group studied

Latest added flashcards

In multiple regression you check:
You check for the "overlap" between variables and get an indication of the unique contribution of each predictor to the dependent variable.
With correlations you look at:
The variance that two variables share, without taking the shared variance with other variables into account.
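A small simulated example (hypothetical variables) contrasting the two: the correlation matrix shows shared variance only, while the multiple regression shows each predictor's unique contribution.

```python
# Illustrative sketch: zero-order correlations vs. unique contributions
# in multiple regression (simulated data, made-up variable names).
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 200
motivation = rng.normal(size=n)
iq = 0.6 * motivation + rng.normal(scale=0.8, size=n)    # overlapping predictors
grade = 0.5 * iq + 0.3 * motivation + rng.normal(size=n)
df = pd.DataFrame({"grade": grade, "iq": iq, "motivation": motivation})

print(df.corr().round(2))                                  # shared variance only
model = smf.ols("grade ~ iq + motivation", data=df).fit()  # unique contributions
print(model.params.round(2))
```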
How do you estimate the expected effect size?
  • Base this on prior research 
  • Do a pilot study 
  • Otherwise be conservative (small-medium)
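Once an expected effect size has been chosen, it can be fed into a power analysis to find the required sample size. The sketch below is illustrative only, assuming a conservative effect of d = 0.3 and the usual alpha = .05 and power = .80.

```python
# Illustrative sketch: required sample size per group for a two-group t-test,
# assuming a conservative effect size (d = 0.3), alpha = .05, power = .80.
from statsmodels.stats.power import TTestIndPower

n_needed = TTestIndPower().solve_power(effect_size=0.3, alpha=0.05, power=0.80)
print(f"Participants needed per group: {n_needed:.0f}")  # roughly 175
```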
SPSS how to: repeated measures
Analyze > general linear model > repeated measures
What does a repeated-measures ANOVA allow you to do?
To test whether there is a difference within the same persons, for example in mental alertness due to a placebo versus coffee.
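The notes give the SPSS route; as an alternative illustration only, the same kind of analysis can be sketched in Python with statsmodels' AnovaRM. The subjects and alertness scores below are made up.

```python
# Illustrative sketch (not the SPSS procedure from the notes): a one-way
# repeated-measures ANOVA in long format, each subject measured under both
# a placebo and a coffee condition.
import pandas as pd
from statsmodels.stats.anova import AnovaRM

data = pd.DataFrame({
    "subject":   [1, 1, 2, 2, 3, 3, 4, 4],
    "condition": ["placebo", "coffee"] * 4,
    "alertness": [5, 7, 4, 6, 6, 8, 5, 8],
})
result = AnovaRM(data, depvar="alertness", subject="subject",
                 within=["condition"]).fit()
print(result)
```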
What are the 3 regressions you should run with a mediation?
1. Regress the DV on the IV (total effect)
2. Regress the mediator on the IV
3. Regress the DV on the mediator (together with the IV)
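A minimal simulated sketch of these three regressions (a Baron-and-Kenny-style approach; the data and variable names are made up).

```python
# Illustrative sketch: the three mediation regressions on simulated data.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)
n = 300
iv = rng.normal(size=n)
mediator = 0.5 * iv + rng.normal(size=n)
dv = 0.4 * mediator + rng.normal(size=n)
df = pd.DataFrame({"iv": iv, "mediator": mediator, "dv": dv})

step1 = smf.ols("dv ~ iv", data=df).fit()             # total effect of IV on DV
step2 = smf.ols("mediator ~ iv", data=df).fit()       # IV predicts the mediator
step3 = smf.ols("dv ~ mediator + iv", data=df).fit()  # mediator predicts DV; IV effect shrinks
for step in (step1, step2, step3):
    print(step.params.round(2))
```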
SPSS how to: ANCOVA
Analyze > general linear model > univariate
Click:
  • options
    • descriptive statistics 
    • estimates of effect size 
    • homogeneity tests 
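Not the SPSS menus from the notes, but an illustrative Python sketch of the same ANCOVA logic: a categorical IV plus a continuous covariate, with simulated data and made-up variable names.

```python
# Illustrative sketch: ANCOVA as a linear model with a categorical factor
# and a continuous covariate (simulated data).
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(3)
n = 120
group = np.repeat(["control", "treatment"], n // 2)
covariate = rng.normal(size=n)
dv = 0.8 * (group == "treatment") + 0.5 * covariate + rng.normal(size=n)
df = pd.DataFrame({"dv": dv, "group": group, "covariate": covariate})

model = smf.ols("dv ~ C(group) + covariate", data=df).fit()
print(sm.stats.anova_lm(model, typ=2))  # F-tests for the group effect and the covariate
```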
What type of analysis should you do if you are looking at the interaction between a categorical IV and a continuous IV?
Hierarchical regression
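An illustrative sketch of the hierarchical-regression approach: enter the main effects in step 1, add the interaction between the categorical IV and the (centered) continuous IV in step 2, and test the R-squared change. Data and names are made up.

```python
# Illustrative sketch: hierarchical regression with a categorical x continuous
# interaction (simulated data).
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(4)
n = 200
condition = np.repeat(["control", "treatment"], n // 2)
stress = rng.normal(size=n)
performance = (0.5 * (condition == "treatment") - 0.3 * stress
               - 0.4 * stress * (condition == "treatment") + rng.normal(size=n))
df = pd.DataFrame({"performance": performance,
                   "condition": condition,
                   "stress": stress - stress.mean()})  # center the continuous IV

step1 = smf.ols("performance ~ C(condition) + stress", data=df).fit()
step2 = smf.ols("performance ~ C(condition) * stress", data=df).fit()
print(f"R-squared change: {step2.rsquared - step1.rsquared:.3f}")
print(sm.stats.anova_lm(step1, step2))  # F-test for the added interaction term
```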
What type of analysis should you do if you are only controlling for the continuous IV (main effect only)?
ANCOVA
Is the result significant for both males and females?
No, only for males, not for females.