Study Design: FDA Approval
Clinical Trials
- Preclinical Phase - animal or cell-culture studies
- Phase I - treatment safety is tested in a few human volunteers
- Phase II - small randomized, blinded trial that tests a range of doses for side effects, using surrogate measurements of the outcome variable
- Phase III - larger randomized clinical trial used for hypothesis testing and determination of treatment efficacy and safety
- Phase IV - following FDA approval, a large study (randomized or nonrandomized) used to determine the rate of serious side effects and other uses for the drug
Pre-Experimental Design
- weakest of the research designs; subject to many threats to internal and external validity. Characterized by lack of a control group, sensitivity to threats, and poor generalizability. Ex: one-shot case study, one-group pretest/posttest, static-group comparison

Quasi-Experimental Design
- more rigorous than pre-experimental, but less robust than true experimental design. Generally lacks randomization, and multiple measurements make testing effects a problem. Ex: non-equivalent control group, time-series design
Efficacy
- whether the intervention can be successful when it is properly implemented under controlled conditions

Effectiveness
- whether the intervention is typically successful in actual clinical practice
- Drug testing in the US is currently biased toward the minimization of "Type I" error (5%), that is, toward minimizing the chance of approving drugs that are unsafe or ineffective
- This regulatory focus of the FDA ignores the potential for committing the alternative "Type II" error (20%), that is, the error of not approving drugs that are, in fact, safe and effective
Intention to Treat Analysis
- Specifies how to handle noncompliant patients in a randomized controlled trial. Requires that patients be analyzed in the groups to which they were randomized, regardless of whether they complied with the treatment they were given
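The intention-to-treat rule can be sketched with a toy dataset (all patient records below are hypothetical, invented for illustration). Note how excluding the noncomplier changes the estimated event rate:

```python
# Hypothetical trial records: (randomized arm, complied?, had event?)
patients = [
    ("treatment", True,  False),
    ("treatment", True,  False),
    ("treatment", False, True),   # noncomplier: still "treatment" under ITT
    ("treatment", True,  True),
    ("control",   True,  True),
    ("control",   True,  False),
    ("control",   True,  True),
    ("control",   True,  True),
]

def event_rate(rows):
    """Fraction of patients in these rows who had the event."""
    return sum(had_event for _, _, had_event in rows) / len(rows)

# Intention-to-treat: analyze by randomized arm, ignoring compliance
itt = event_rate([p for p in patients if p[0] == "treatment"])

# Per-protocol (compliers only): randomization no longer protects this comparison
per_protocol = event_rate([p for p in patients if p[0] == "treatment" and p[1]])
```

Here the ITT estimate (2/4) differs from the compliers-only estimate (1/3), illustrating why compliance-defined groups can misrepresent the treatment's practical impact.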
What if only compliant patients were included?
- Drawbacks:
1. Groups defined by compliance are no longer randomized and are thus subject to biases.
2. Groups defined by compliance may not represent the practical impact of the treatment.

Reliability
- the external and internal consistency of a measurement. In the abstract, whether a particular technique, applied repeatedly to the same object, would yield the same result each time (precision)
What are three forms of Reliability?
- 1. Instrument reliability
2. Intra-rater reliability
3. Inter-rater reliability

Instrument reliability
- consistency of a measurement by a particular instrument
Intra-rater reliability
- consistency with which an individual takes measurements (protocols are helpful)
Inter-rater reliability
- consistency of measurements between or among more than one individual
** 2 P rule (protocol and practice)

Validity
- the degree to which a scale in fact consistently measures the variable that it is designed to measure (accuracy); appropriateness of a given measure
2 Forms of Validity
- 1. Measurement (test)
2. Design (experimental)

4 Types of Measurement Validity
- 1. Face
2. Construct
3. Content
4. Criterion
Measurement Validity - Face Validity
- Does the particular measurement or method appear to be appropriate?
- often just expert opinion
- weakest form
Measurement Validity - Construct Validity
- Is the measurement based on theory?
- the degree to which a measure relates to other variables as expected within a system of theoretical relationships (based on logical relationships)
Measurement Validity - Content Validity
- Is the test broad enough to address the scope of content?
Measurement Validity - Criterion Validity
- How well does the test perform, and is it useful, when judged against a standard?
Measurement Validity - Criterion Validity - Predictive Validity
- Can the test predict a specific outcome?
Measurement Validity - Criterion Validity - Concurrent Validity
- Does the test perform as well as an accepted test?
Design Validity - Internal Validity
- the degree to which changes in the DV can be attributed to the IV rather than to other factors and events
Design Validity - External Validity
- generalizability of the conclusions drawn from the study; the degree to which the results of a study generalize to the population
Threats to Internal Validity
- 1. Temporal or time-based effects: history, maturation, attrition.
2. Measurement effects: testing, instrumentation, sampling, statistical regression to the mean

Threats to External Validity
- 1. Threats related to the population used: subject accessibility to the study and subject-treatment interaction.
2. Threats related to the environment in which the study takes place: description of the variables, multiple treatments, the Hawthorne effect, the Rosenthal effect

Design Shorthand
- R = randomly assigned or selected
X = treatment (tx)
Xo = no treatment or control condition
O = measurement

When does correlation imply causation?
- When the data from which the correlation was computed were obtained by experimental means with appropriate care to avoid confounding and other threats to the internal validity of the experiment
Correlation
- relationships between two or more variables, examined to explain the nature of relationships in the world rather than to establish cause and effect
Causation
- one variable causes another variable
1. Cause must precede effect
2. The two variables are correlated with one another
3. The correlation between the two variables cannot be explained away as being the result of the influence of a third variable that causes both of them

What is experimental uncertainty caused by?
- Random errors or systematic errors
Random errors
- statistical fluctuation (in either direction) in the measured data due to the precision limitations of the measurement device; caused by the experimenter's inability to take the same measurement in exactly the same way and get exactly the same number
Systematic Errors
- reproducible inaccuracies that are consistently in the same direction; due to a problem that persists throughout the entire experiment
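The two error types can be illustrated with a small simulation (the true value, noise level, and +3.0 miscalibration below are invented for illustration): averaging many readings cancels random error but leaves systematic bias untouched.

```python
import random

random.seed(0)
true_value = 100.0  # hypothetical quantity being measured

# Random error: zero-mean fluctuation in either direction around the truth
random_readings = [true_value + random.gauss(0, 2) for _ in range(10_000)]

# Systematic error: a constant +3.0 miscalibration on top of the same noise
biased_readings = [true_value + 3.0 + random.gauss(0, 2) for _ in range(10_000)]

mean_random = sum(random_readings) / len(random_readings)
mean_biased = sum(biased_readings) / len(biased_readings)
# The mean of the random-error readings converges to the true value;
# the systematic offset persists no matter how many readings are averaged.
```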
Survey Research - Census
- a survey of the entire population

Survey Research - Poll
- a survey for political information or opinion

Survey Research - Survey
- data collected from a sample of the population
Strategies to Increase Response Rate of a Survey
- 1. Advance notification
2. Cover letters
3. Multiple mailings, reminders
4. Stamped return envelopes
5. Separate postcard to request results
6. Incentives
7. Anonymity and confidentiality

3 Factors Needed to Determine Sample Size
- 1. The "effect size" (a measure of the magnitude of the expected difference or association)
2. Level of significance (0.05 is generally used)
3. Statistical power - to prevent Type II errors

Type I error
- rejecting a null hypothesis when it should have been retained (alpha, 5%)
Type II error
- retaining a null hypothesis when it should have been rejected (beta, 20%)
Statistical Power
- the probability of rejecting a null hypothesis that is, in fact, false; power = 1 - beta (needs to be at least 80%)
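The three sample-size factors can be combined in the standard normal-approximation formula for comparing two proportions. A minimal stdlib-only sketch (the 20% vs 10% event rates are made-up numbers; real planning software uses refinements such as continuity corrections):

```python
import math
from statistics import NormalDist

def n_per_group(p1, p2, alpha=0.05, power=0.80):
    """Sample size per group for a two-sided test of two proportions
    (normal approximation, unpooled variance)."""
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # ~1.96 for alpha = 0.05
    z_beta = NormalDist().inv_cdf(power)           # ~0.84 for 80% power
    variance = p1 * (1 - p1) + p2 * (1 - p2)
    return math.ceil((z_alpha + z_beta) ** 2 * variance / (p1 - p2) ** 2)

n = n_per_group(0.20, 0.10)  # detect a drop in event rate from 20% to 10%
```

Raising the requested power (or shrinking the effect size) drives the required sample size up, which is the trade-off the three factors describe.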
Confidence Interval
- a range of values for a variable of interest, constructed so that the range has a specified probability of including the true value of the variable. The endpoints of the confidence interval are called confidence limits (usually constructed at the 95% level)
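A 95% confidence interval for a mean can be computed as point estimate ± 1.96 standard errors (the blood-pressure readings below are invented; for a sample this small a t-multiplier would strictly be more appropriate than 1.96):

```python
from math import sqrt
from statistics import mean, stdev

def mean_ci(data, z=1.96):
    """Confidence limits for the mean: estimate ± z * standard error."""
    m = mean(data)
    se = stdev(data) / sqrt(len(data))  # standard error of the mean
    return m - z * se, m + z * se

systolic = [118, 122, 125, 119, 130, 127, 121, 124, 126, 123]
lower, upper = mean_ci(systolic)  # the two confidence limits
```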
Absolute Risk
- an individual's risk of developing a disease over a given time period

Relative Risk
- used to compare the risk in two different groups of people; the ratio of the risk in the treated or exposed group to the risk in the control group

Number Needed to Treat
- how many people need to receive a treatment in order to prevent one additional death (or other adverse outcome) over a given amount of time; equal to 1 / absolute risk reduction

Odds Ratio
- obtained by dividing the odds in the treated or exposed group by the odds in the control group
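The risk measures above can all be computed from a 2x2 table of counts (the numbers below are made up: 10/100 events in the treated group vs 20/100 in the control group):

```python
def risk_measures(a, b, c, d):
    """2x2 table: a = treated with event, b = treated without event,
    c = control with event, d = control without event."""
    risk_treated = a / (a + b)          # absolute risk in treated group
    risk_control = c / (c + d)          # absolute risk in control group
    relative_risk = risk_treated / risk_control
    odds_ratio = (a / b) / (c / d)      # odds of event, treated vs control
    arr = risk_control - risk_treated   # absolute risk reduction
    nnt = 1 / arr                       # number needed to treat
    return relative_risk, odds_ratio, nnt

rr, odds, nnt = risk_measures(10, 90, 20, 80)
```

With these counts the treatment halves the risk (RR = 0.5), the odds ratio is somewhat smaller than the relative risk (as it is whenever events are common), and 10 people must be treated to prevent one additional event.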