Statistical Analysis

- 2. Statistical analysis means investigating trends, patterns, and relationships using quantitative data. It is an important research tool used by scientists, governments, businesses, and other organizations. To draw valid conclusions, statistical analysis requires careful planning from the very start of the research process. You need to specify your hypotheses and make decisions about your research design, sample size, and sampling procedure. After collecting data from your sample, you can organize and summarize the data using descriptive statistics. Then, you can use inferential statistics to formally test hypotheses and make estimates about the population. Finally, you can interpret and generalize your findings.
- 3. STEP 1: WRITE YOUR HYPOTHESES AND PLAN YOUR RESEARCH DESIGN The goal of research is often to investigate a relationship between variables within a population. You start with a prediction, and use statistical analysis to test that prediction. A statistical hypothesis is a formal way of writing a prediction about a population. Every research prediction is rephrased into null and alternative hypotheses that can be tested using sample data. While the null hypothesis always predicts no effect or no relationship between variables, the alternative hypothesis states your research prediction of an effect or relationship.
- 4. Example: Statistical hypotheses to test an effect. Null hypothesis: A 5-minute meditation exercise will have no effect on math test scores in teenagers. Alternative hypothesis: A 5-minute meditation exercise will improve math test scores in teenagers.
- 5. PLANNING YOUR RESEARCH DESIGN A research design is your overall strategy for data collection and analysis. It determines the statistical tests you can use to test your hypothesis later on. In an experimental design, you can assess a cause-and-effect relationship (e.g., the effect of meditation on test scores) using statistical tests of comparison or regression. In a correlational design, you can explore relationships between variables (e.g., parental income and GPA) without any assumption of causality using correlation coefficients and significance tests. In a descriptive design, you can study the characteristics of a population or phenomenon (e.g., the prevalence of anxiety in U.S. college students) using statistical tests to draw inferences from sample data.
- 6. Your research design also concerns whether you’ll compare participants at the group level or individual level, or both. In a between-subjects design, you compare the group-level outcomes of participants who have been exposed to different treatments (e.g., those who performed a meditation exercise vs those who didn’t). In a within-subjects design, you compare repeated measures from participants who have participated in all treatments of a study (e.g., scores from before and after performing a meditation exercise).
- 7. MEASURING VARIABLES When planning a research design, you should operationalize your variables and decide exactly how you will measure them. For statistical analysis, it’s important to consider the level of measurement of your variables, which tells you what kind of data they contain: Categorical data represents groupings. These may be nominal (e.g., gender) or ordinal (e.g. level of language ability). Quantitative data represents amounts. These may be on an interval scale (e.g. test score) or a ratio scale (e.g. age).
- 8. Many variables can be measured at different levels of precision. For example, age data can be quantitative (8 years old) or categorical (young). If a variable is coded numerically (e.g., level of agreement from 1–5), it doesn’t automatically mean that it’s quantitative instead of categorical. Identifying the measurement level is important for choosing appropriate statistics and hypothesis tests. For example, you can calculate a mean score with quantitative data, but not with categorical data. In a research study, along with measures of your variables of interest, you’ll often collect data on relevant participant characteristics.
- 9. STEP 2: COLLECT DATA FROM A SAMPLE Sampling for statistical analysis There are two main approaches to selecting a sample. Probability sampling: every member of the population has a chance of being selected for the study through random selection. Non-probability sampling: some members of the population are more likely than others to be selected for the study because of criteria such as convenience or voluntary self-selection.
- 10. Create an appropriate sampling procedure Based on the resources available for your research, decide on how you’ll recruit participants. Will you have resources to advertise your study widely, including outside of your university setting? Will you have the means to recruit a diverse sample that represents a broad population? Do you have time to contact and follow up with members of hard-to-reach groups?
- 11. Calculate sufficient sample size Significance level (alpha): the risk of rejecting a true null hypothesis that you are willing to take, usually set at 5%. Statistical power: the probability of your study detecting an effect of a certain size if there is one, usually 80% or higher. Expected effect size: a standardized indication of how large the expected result of your study will be, usually based on other similar studies. Population standard deviation: an estimate of the population parameter based on a previous study or a pilot study of your own.
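As a rough sketch, the four ingredients above can be combined into a per-group sample size using the normal (z) approximation for comparing two means. The effect size of 0.5 (Cohen's d) is an assumed "medium" effect for illustration, not a value from the slides, and exact t-based calculations give slightly larger answers:

```python
from math import ceil
from statistics import NormalDist

def sample_size_per_group(effect_size, alpha=0.05, power=0.80):
    """Approximate n per group for comparing two means, using the
    normal (z) approximation with Cohen's d as the effect size."""
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # two-sided significance level
    z_beta = NormalDist().inv_cdf(power)           # desired statistical power
    return ceil(2 * ((z_alpha + z_beta) / effect_size) ** 2)

# Assumed medium effect (d = 0.5), alpha = 0.05, power = 0.80
n = sample_size_per_group(0.5)
print(n)  # 63 per group under the z approximation
```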
- 12. STEP 3: SUMMARIZE YOUR DATA WITH DESCRIPTIVE STATISTICS Inspect your data There are various ways to inspect your data, including the following: Organizing data from each variable in frequency distribution tables. Displaying data from a key variable in a bar chart to view the distribution of responses. Visualizing the relationship between two variables using a scatter plot.
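A frequency distribution table for one categorical variable can be built with nothing more than the standard library; the survey responses below are hypothetical:

```python
from collections import Counter

# Hypothetical responses to one categorical survey item
responses = ["agree", "neutral", "agree", "disagree", "agree", "neutral"]

freq = Counter(responses)
total = sum(freq.values())

print(f"{'Response':<10}{'Frequency':>10}{'Percent':>10}")
for value, count in freq.most_common():
    print(f"{value:<10}{count:>10}{count / total:>10.1%}")
```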
- 13. By visualizing your data in tables and graphs, you can assess whether your data follow a skewed or normal distribution and whether there are any outliers or missing data. A normal distribution means that your data are symmetrically distributed around a center where most values lie, with the values tapering off at the tail ends.
- 15. In contrast, a skewed distribution is asymmetric and has more values on one end than the other. The shape of the distribution is important to keep in mind because only some descriptive statistics should be used with skewed distributions. Extreme outliers can also produce misleading statistics, so you may need a systematic approach to dealing with these values.
- 16. CALCULATE MEASURES OF CENTRAL TENDENCY Measures of central tendency describe where most of the values in a data set lie. Three main measures of central tendency are often reported: Mode: the most popular response or value in the data set. Median: the value in the exact middle of the data set when ordered from low to high. Mean: the sum of all values divided by the number of values.
- 17. CALCULATE MEASURES OF VARIABILITY Measures of variability tell you how spread out the values in a data set are. Four main measures of variability are often reported: Range: the highest value minus the lowest value of the data set. Interquartile range: the range of the middle half of the data set. Standard deviation: the average distance between each value in your data set and the mean. Variance: the square of the standard deviation.
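All of the measures of central tendency and variability above are available in Python's `statistics` module; the small data set here is made up for illustration:

```python
import statistics as st

data = [2, 4, 4, 4, 5, 5, 7, 9]  # hypothetical illustrative data set

# Central tendency
mode = st.mode(data)      # most frequent value -> 4
median = st.median(data)  # middle of the ordered data -> 4.5
mean = st.mean(data)      # sum of values / number of values -> 5

# Variability
data_range = max(data) - min(data)    # highest minus lowest value
q1, q2, q3 = st.quantiles(data, n=4)  # quartiles
iqr = q3 - q1                         # range of the middle half
sd = st.stdev(data)                   # sample standard deviation
var = st.variance(data)               # sample variance (sd squared)
```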
- 18. STEP 4: TEST HYPOTHESES OR MAKE ESTIMATES WITH INFERENTIAL STATISTICS A number that describes a sample is called a statistic, while a number describing a population is called a parameter. Using inferential statistics, you can make conclusions about population parameters based on sample statistics.
- 19. Researchers use two main methods, often in combination, to make inferences in statistics. Estimation: calculating population parameters based on sample statistics. Hypothesis testing: a formal process for testing research predictions about the population using samples.
- 20. ESTIMATION You can make two types of estimates of population parameters from sample statistics: A point estimate: a value that represents your best guess of the exact parameter. An interval estimate: a range of values that represent your best guess of where the parameter lies.
- 21. If your aim is to infer and report population characteristics from sample data, it’s best to use both point and interval estimates in your paper. You can consider a sample statistic a point estimate for the population parameter when you have a representative sample (e.g., in a wide public opinion poll, the proportion of a sample that supports the current government is taken as the population proportion of government supporters). There’s always error involved in estimation, so you should also provide a confidence interval as an interval estimate to show the variability around a point estimate. A confidence interval uses the standard error and the z score from the standard normal distribution to convey where you’d generally expect to find the population parameter most of the time.
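A minimal sketch of such an interval estimate, using the normal approximation (point estimate ± z × standard error); the test scores are hypothetical, and this approximation assumes a reasonably large sample:

```python
from math import sqrt
from statistics import NormalDist, mean, stdev

def confidence_interval(sample, confidence=0.95):
    """Normal-approximation confidence interval for a population mean:
    point estimate +/- z * standard error."""
    n = len(sample)
    point = mean(sample)                            # point estimate
    se = stdev(sample) / sqrt(n)                    # standard error of the mean
    z = NormalDist().inv_cdf((1 + confidence) / 2)  # ~1.96 for 95%
    return point - z * se, point + z * se

# Hypothetical math test scores
scores = [72, 75, 78, 71, 74, 77, 73, 76, 79, 75]
low, high = confidence_interval(scores)
```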
- 22. HYPOTHESIS TESTING Using data from a sample, you can test hypotheses about relationships between variables in the population. Hypothesis testing starts with the assumption that the null hypothesis is true in the population, and you use statistical tests to assess whether the null hypothesis can be rejected or not.
- 23. Statistical tests determine where your sample data would lie on an expected distribution of sample data if the null hypothesis were true. These tests give two main outputs: A test statistic tells you how much your data differs from the null hypothesis of the test. A p value tells you the likelihood of obtaining your results if the null hypothesis is actually true in the population.
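To make the two outputs concrete, here is a one-sample z test sketched from scratch (it assumes a known population standard deviation, which is why no t distribution is needed); the scores and the null mean of 70 are hypothetical:

```python
from math import sqrt
from statistics import NormalDist, mean

def one_sample_z_test(sample, mu0, sigma):
    """One-sample z test: how far the sample mean lies from the
    null-hypothesis mean mu0, given a known population sd sigma."""
    n = len(sample)
    z = (mean(sample) - mu0) / (sigma / sqrt(n))  # test statistic
    p = 2 * (1 - NormalDist().cdf(abs(z)))        # two-sided p value
    return z, p

# Hypothetical post-meditation scores tested against a null mean of 70
scores = [72, 75, 78, 71, 74, 77, 73, 76, 79, 75]
z, p = one_sample_z_test(scores, mu0=70, sigma=10)
```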
- 24. Statistical tests come in three main varieties: Comparison tests assess group differences in outcomes. Regression tests assess cause-and-effect relationships between variables. Correlation tests assess relationships between variables without assuming causation.
- 25. Parametric tests Parametric tests make powerful inferences about the population based on sample data. But to use them, some assumptions must be met, and only some types of variables can be used. If your data violate these assumptions, you can perform appropriate data transformations or use alternative non-parametric tests instead.
- 26. A regression models the extent to which changes in a predictor variable result in changes in the outcome variable(s). A simple linear regression includes one predictor variable and one outcome variable. A multiple linear regression includes two or more predictor variables and one outcome variable.
- 27. REGRESSION MODELS Regression models describe the relationship between variables by fitting a line to the observed data. Linear regression models use a straight line, while logistic and nonlinear regression models use a curved line. Regression allows you to estimate how a dependent variable changes as the independent variable(s) change.
- 28. Simple linear regression is used to estimate the relationship between two quantitative variables. You can use simple linear regression when you want to know: How strong the relationship is between two variables (e.g. the relationship between rainfall and soil erosion). The value of the dependent variable at a certain value of the independent variable (e.g. the amount of soil erosion at a certain level of rainfall).
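The ordinary least-squares fit behind simple linear regression reduces to two formulas (slope = covariance of x and y divided by variance of x; intercept = mean of y minus slope × mean of x). A sketch with hypothetical rainfall and soil erosion readings:

```python
def simple_linear_regression(x, y):
    """Ordinary least-squares fit of y = intercept + slope * x."""
    n = len(x)
    mean_x = sum(x) / n
    mean_y = sum(y) / n
    sxy = sum((xi - mean_x) * (yi - mean_y) for xi, yi in zip(x, y))
    sxx = sum((xi - mean_x) ** 2 for xi in x)
    slope = sxy / sxx                       # covariance / variance
    intercept = mean_y - slope * mean_x
    return slope, intercept

# Hypothetical rainfall (mm) vs soil erosion (kg) readings
rainfall = [10, 20, 30, 40, 50]
erosion = [3.1, 4.9, 7.2, 8.8, 11.0]
slope, intercept = simple_linear_regression(rainfall, erosion)
```

With the fitted line, the predicted erosion at a given rainfall level is simply `intercept + slope * rainfall_level`.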
- 29. ASSUMPTIONS OF SIMPLE LINEAR REGRESSION Simple linear regression is a parametric test, meaning that it makes certain assumptions about the data. These assumptions are: Homogeneity of variance (homoscedasticity): the size of the error in our prediction doesn’t change significantly across the values of the independent variable. Independence of observations: the observations in the dataset were collected using statistically valid sampling methods, and there are no hidden relationships among observations. Normality: The data follows a normal distribution.
- 30. Comparison tests usually compare the means of groups. These may be the means of different groups within a sample (e.g., a treatment and control group), the means of one sample group taken at different times (e.g., pretest and posttest scores), or a sample mean and a population mean. A t test is for exactly 1 or 2 groups when the sample is small (roughly 30 or fewer). A z test is for exactly 1 or 2 groups when the sample is large. An ANOVA is for 3 or more groups.
- 31. STEP 5: INTERPRET YOUR RESULTS Statistical significance In hypothesis testing, statistical significance is the main criterion for forming conclusions. You compare your p value to a set significance level (usually 0.05) to decide whether your results are statistically significant or non-significant. Statistically significant results are considered unlikely to have arisen solely due to chance. There is only a very low chance of such a result occurring if the null hypothesis is true in the population.
- 32. Effect size A statistically significant result doesn’t necessarily mean that a finding has important real-life applications or clinical outcomes. In contrast, the effect size indicates the practical significance of your results. It’s important to report effect sizes along with your inferential statistics for a complete picture of your results. You should also report interval estimates of effect sizes if you’re writing an APA style paper.
- 33. Decision errors Type I and Type II errors are mistakes made in research conclusions. A Type I error means rejecting the null hypothesis when it’s actually true, while a Type II error means failing to reject the null hypothesis when it’s false. You can aim to minimize the risk of these errors by selecting an optimal significance level and ensuring high power. However, there’s a trade-off between the two errors, so a fine balance is necessary.
- 34. FREQUENTIST VERSUS BAYESIAN STATISTICS Traditionally, frequentist statistics emphasizes null hypothesis significance testing and always starts with the assumption of a true null hypothesis. However, Bayesian statistics has grown in popularity as an alternative approach in the last few decades. In this approach, you use previous research to continually update your hypotheses based on your expectations and observations. A Bayes factor compares the relative strength of evidence for the null versus the alternative hypothesis rather than making a conclusion about rejecting the null hypothesis or not.
- 35. Linear regression makes one additional assumption: The relationship between the independent and dependent variable is linear: the line of best fit through the data points is a straight line (rather than a curve or some sort of grouping factor).
- 36. Multiple linear regression is used to estimate the relationship between two or more independent variables and one dependent variable. You can use multiple linear regression when you want to know: How strong the relationship is between two or more independent variables and one dependent variable (e.g. how rainfall, temperature, and amount of fertilizer added affect crop growth). The value of the dependent variable at a certain value of the independent variables (e.g. the expected yield of a crop at certain levels of rainfall, temperature, and fertilizer addition).
- 37. ASSUMPTIONS OF MULTIPLE LINEAR REGRESSION Multiple linear regression makes all of the same assumptions as simple linear regression: Homogeneity of variance (homoscedasticity): the size of the error in our prediction doesn’t change significantly across the values of the independent variable. Independence of observations: the observations in the dataset were collected using statistically valid methods, and there are no hidden relationships among variables.
- 38. Normality: The data follows a normal distribution. Linearity: the line of best fit through the data points is a straight line, rather than a curve or some sort of grouping factor. In multiple linear regression, it is also possible that some of the independent variables are correlated with one another, so it is important to check for this before developing the regression model. If two independent variables are too highly correlated (r² > ~0.6), then only one of them should be used in the regression model.
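The multicollinearity check above amounts to computing the Pearson correlation between each pair of predictors and comparing r² to the ~0.6 threshold. A sketch with hypothetical predictor data:

```python
from math import sqrt

def pearson_r(x, y):
    """Pearson correlation coefficient between two variables."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return sxy / sqrt(sxx * syy)

# Hypothetical predictors: check pairwise r**2 before fitting the model
rainfall = [10, 20, 30, 40, 50]
temperature = [15, 18, 22, 25, 30]  # strongly tracks rainfall here
r = pearson_r(rainfall, temperature)
if r ** 2 > 0.6:
    print("Highly correlated: consider using only one of these predictors")
```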