Wednesday 22 January 2014

Research Design: Quantitative Research Design and Methods

Resources
Course Text: Research Design: Qualitative, Quantitative, and Mixed Methods Approaches

Chapter 8, "Quantitative Methods": Creswell guides the reader through quantitative methods and plans. Use this chapter for the Discussion.

Media: "Quantitative Methods: An Example" (12:51). Dr. George Smeaton discusses a representative quantitative research design.

Measurement of Variables
One of the important aspects of conducting quantitative research is deciding how you will measure your variables. If you are not clear about what a variable is (e.g., gender) as opposed to a value of a variable (e.g., female, male), or about the difference between an operational definition of a variable (i.e., how you will measure and code it) and a conceptual definition of a variable (i.e., an explanation of the construct in plain English), please see Chapters 5 and 6 in Babbie (2002). If you do not know the different types of measurement reliability (e.g., test-retest, internal consistency), measurement validity (e.g., predictive, construct), or levels of measurement (i.e., nominal, ordinal, interval, and ratio), please see Chapters 1-3 in Walsh and Betz (2001) or Chapters 1-5 in Thorndike (2005).
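If it helps to make these ideas concrete, here is a minimal sketch (hypothetical data; Python with NumPy assumed) of how two of the reliability types mentioned above are commonly estimated: test-retest reliability as the correlation between two administrations of the same measure, and internal consistency as Cronbach's alpha computed from item scores.

import numpy as np

# Hypothetical item scores for 5 participants on a 4-item scale,
# administered twice (time 1 and time 2); rows are participants.
time1 = np.array([[4, 3, 5, 4],
                  [2, 2, 3, 2],
                  [5, 4, 4, 5],
                  [3, 3, 2, 3],
                  [1, 2, 1, 2]])
time2 = np.array([[4, 4, 5, 4],
                  [2, 1, 3, 2],
                  [5, 5, 4, 4],
                  [3, 2, 3, 3],
                  [2, 2, 1, 1]])

# Test-retest reliability: Pearson correlation between total scores
# from the two administrations.
totals1, totals2 = time1.sum(axis=1), time2.sum(axis=1)
test_retest_r = np.corrcoef(totals1, totals2)[0, 1]

# Internal consistency (Cronbach's alpha) for the time-1 administration:
# alpha = (k / (k - 1)) * (1 - sum of item variances / variance of total)
k = time1.shape[1]
alpha = (k / (k - 1)) * (1 - time1.var(axis=0, ddof=1).sum() / totals1.var(ddof=1))

print("Test-retest r:", round(test_retest_r, 2))
print("Cronbach's alpha:", round(alpha, 2))

The numbers themselves mean nothing here (the data are made up); the point is simply that "reliability" is not one number but a family of estimates, each answering a different question about consistency.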
 

The first three chapters in Walsh and Betz (2001) cover 144 pages and do a good job of covering the basics of measurement, including levels of measurement (nominal, ordinal, interval, and ratio), reliability (e.g., test-retest, internal consistency), validity (e.g., predictive, concurrent), types of scores (e.g., standard scores, percentiles), norms, and cultural issues. Chapters 1-5 of Thorndike (2005) cover 217 pages and go into more detail than Walsh and Betz. In addition, Thorndike includes chapters on measuring attitudes and using rating scales (Chapter 12) and on instrument development (Chapter 15).

You do not have to become an expert on these topics for the assignments in this course; however, you will need to know the basics when you work on the assignments that involve quantitative research.
References
Babbie, E. (2002). The basics of social research (2nd ed.). Belmont, CA: Wadsworth.
Thorndike, R. M. (2005). Measurement and evaluation in psychology and education. Upper Saddle River, NJ: Pearson.
Walsh, W. B., & Betz, N. E. (2001). Tests and assessment (4th ed.). Upper Saddle River, NJ: Prentice-Hall.
Threats to Internal Validity (Shadish, Cook, & Campbell, 2002)


1. Ambiguous temporal precedence.
Based on the design, one cannot determine with certainty which variable occurred first or which variable caused the other. Thus, a cause-and-effect relationship cannot be concluded with certainty; correlation between two variables does not prove causation.

2. Selection.
The procedures for selecting participants (e.g., self-selection, or the researcher's sampling and assignment procedures) result in systematic differences across conditions (e.g., experimental vs. control). Thus, one cannot conclude with certainty that the intervention caused the effect; it could be due to the way in which participants were selected.

3. History. 
Other events that occur during the course of the treatment can interfere with treatment effects and could account for the outcomes. Thus, one cannot conclude with certainty that the intervention caused the effect; it could be due to some other event to which the participants were exposed.

4. Maturation.
Natural changes that participants experience (e.g., growing older, getting tired) during the course of the intervention could account for the outcomes. Thus, one cannot conclude with certainty that the "intervention" caused the effect; it could be due to the natural change or maturation of the participants.



5. Regression artifacts. 
Participants who score at the extremes of a measure (higher or lower than average) are likely to regress toward the mean (score lower or higher, respectively) on other measures or on a retest of the same measure. Thus, regression can be confused with a treatment effect (see the simulation sketch after this list).

6. Attrition (mortality).
Refers to dropout from, or failure to complete, the treatment or study activities. If differential dropout occurs across groups (e.g., experimental vs. control), it can confound the results. Thus, effects may be due to dropout rather than to the treatment.

7. Testing.
Experience with a test or measure influences scores on a retest. For example, familiarity with testing procedures, practice effects, or reactivity can influence subsequent performance on the same test.

8. Instrumentation. The measure changes over time (e.g., from pretest to posttest), making it difficult to determine whether effects or outcomes are due to the instrument rather than the treatment. For example, observers change the definitions of the behaviors they are tracking, or the researcher alters the administration of test items from pretest to posttest.


9. Additive and interactive effects of threats to validity. Single threats can interact, such that the occurrence of multiple threats has an additive or interactive effect. For example, selection can interact with history, maturation, or instrumentation.
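Because regression artifacts (threat 5) are often the least intuitive of these threats, the following sketch may help; it is a hypothetical simulation (Python with NumPy assumed), not part of Shadish, Cook, and Campbell's text. Participants are selected because they scored in the bottom of the distribution at pretest; with no treatment at all, their average posttest score moves toward the mean simply because part of their extreme pretest score was measurement error.

import numpy as np

rng = np.random.default_rng(42)
n = 10_000

# Each person has a stable "true" level plus independent measurement error
# at pretest and posttest. There is no treatment in this simulation.
true_score = rng.normal(loc=100, scale=10, size=n)
pretest = true_score + rng.normal(scale=10, size=n)
posttest = true_score + rng.normal(scale=10, size=n)

# Select only the lowest-scoring 10% at pretest (an "extreme group").
cutoff = np.percentile(pretest, 10)
selected = pretest <= cutoff

print("Selected group, pretest mean: ", round(pretest[selected].mean(), 1))
print("Selected group, posttest mean:", round(posttest[selected].mean(), 1))
print("Whole sample mean:            ", round(pretest.mean(), 1))

The selected group's posttest mean rises toward the overall mean even though nothing was done to them; a program evaluated only on such an extreme group could mistake this regression for a treatment effect.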

Reference

Shadish, W. R., Cook, T. D., & Campbell, D. T. (2002). Experimental and quasi-experimental designs for generalized causal inference. Boston: Houghton Mifflin.

The assignment
Quantitative Research Design and Methods
To prepare for this Discussion: Review Chapter 8 in the course text, Research Design, and the "Quantitative Methods: An Example" media segment.

Explain how survey and experimental methods (including components, terminology, elements, statistics, etc.) are similar and different.

Determine which kinds of research questions would be served by a survey or an experimental method.

Examine the reasons why reliability and validity are important in research.

Generalize about how popular quantitative methods are in your discipline.

With these thoughts in mind:
Post 2-3 paragraphs that compare survey strategies of inquiry with experimental strategies of inquiry.
