Marinstatslectures

Univariate Analysis

Univariate analysis describes the distribution of a single variable in one sample. It is an important first step in analyzing data from a study such as a clinical trial, and it is the simplest form of data analysis. It doesn't deal with causes or relationships (unlike regression); its major purpose is to describe: it takes data, summarizes that data, and finds patterns in the data.

The Univariate Analysis series of video tutorials uses simple, easy-to-follow examples to explain these concepts in statistics. These videos are produced in a real-life lecture format so students can watch and learn as the teacher works through the examples at the pace of a real classroom.


Concepts in Statistics: Univariate Analysis Video Tutorials 

t-distribution in Statistics and Probability: In this statistics video, we learn what the t-distribution is and why we use it. The short version is that the t-distribution is similar to the normal distribution, except it accounts for the extra uncertainty that comes with having to use an estimate of the standard deviation (the sample standard deviation) in the calculations. There is little practical difference between the two (except when sample sizes are extremely small), and software can be used to work with the t-distribution and get us the exact t-value to use (instead of a Z-value). Here, we will focus on the concepts, and not on looking up values in a t-table. Here is the accompanying R tutorial for the t-distribution (Link Here)
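
The idea that the t-distribution is a heavier-tailed version of the normal that approaches it as the degrees of freedom grow can be illustrated numerically. The sketch below (in Python rather than R, using only the standard library, with made-up degrees of freedom) approximates the two-sided 95% critical value by integrating the t density; software normally does this lookup for you.

```python
import math

def t_pdf(x, df):
    # Density of Student's t-distribution with df degrees of freedom
    # (lgamma avoids overflow for large df)
    c = math.exp(math.lgamma((df + 1) / 2) - math.lgamma(df / 2))
    return c / math.sqrt(df * math.pi) * (1 + x * x / df) ** (-(df + 1) / 2)

def t_cdf(x, df, steps=4000):
    # Approximate P(T <= x) by trapezoid integration from a far-left bound
    lo = -40.0
    h = (x - lo) / steps
    area = 0.5 * (t_pdf(lo, df) + t_pdf(x, df))
    for i in range(1, steps):
        area += t_pdf(lo + i * h, df)
    return area * h

def t_crit(df, conf=0.95):
    # Two-sided critical value: solve P(T <= t) = 1 - alpha/2 by bisection
    target = 1 - (1 - conf) / 2
    lo, hi = 0.0, 50.0
    for _ in range(50):
        mid = (lo + hi) / 2
        if t_cdf(mid, df) < target:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

print(round(t_crit(5), 3))     # noticeably wider than z = 1.96
print(round(t_crit(1000), 3))  # very close to z = 1.96
```

With only 5 degrees of freedom the critical value is well above 1.96, while with 1000 it is essentially the Z-value, which is why the practical difference only matters for small samples.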

Confidence Interval for Mean with Example: In this statistics video we learn to construct a confidence interval for the mean. Here we will learn all of the relevant details that go along with constructing and interpreting a confidence interval in statistics: details such as one-sided vs. two-sided confidence intervals, changing the level of confidence, controlling the margin of error, and Z-value vs. t-value are all discussed with examples. We will do this in the context of building a confidence interval for a mean, although the concepts are transferable to most confidence intervals we will construct. We will focus on the concepts, and not the calculations. To learn how to conduct the one-sample t-test and calculate the confidence interval in R (using RStudio) watch this video (Link Here)
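
The calculation behind such an interval can be sketched in a few lines. This is a Python illustration (the video's tutorial uses R) with a small hypothetical sample; the t critical value is hardcoded from a t-table rather than looked up in software.

```python
import math
import statistics

# A small hypothetical sample (n = 5)
sample = [10, 12, 14, 16, 18]
n = len(sample)

xbar = statistics.mean(sample)  # sample mean
s = statistics.stdev(sample)    # sample standard deviation
se = s / math.sqrt(n)           # standard error of the mean

# t critical value for 95% confidence with n - 1 = 4 degrees of freedom
# (from a t-table; software would look this up exactly)
t_star = 2.776

margin = t_star * se
lower, upper = xbar - margin, xbar + margin
print(f"{xbar} +/- {round(margin, 3)} -> ({round(lower, 3)}, {round(upper, 3)})")
```

Raising the level of confidence increases t_star and so widens the interval; increasing n shrinks the standard error and narrows it.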

Margin of Error & Sample Size for Confidence Interval: In this statistics lecture we learn what affects the margin of error for a confidence interval, and how to select an appropriate sample size in order to achieve the desired margin of error in statistics and research. Attaching a margin of error to estimates is something that we do naturally in everyday life. For example, we tend to say things like "I will be home between 5:00-5:30 pm", or that a person "looks about 24-27 years old". While we have an estimate, we prefer to provide an interval for our estimate, to increase the likelihood of being correct. We will do this in the context of building a confidence interval for a mean, although the concepts are transferable to most confidence intervals we will construct. We will focus on the concepts, and not the calculations.
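
The sample-size side of this can be shown by solving the margin-of-error formula E = z * sigma / sqrt(n) for n. Below is a Python sketch with made-up planning numbers (a guessed standard deviation and a desired margin), rounding up since n must be a whole number of subjects.

```python
import math

def sample_size_for_margin(sigma, margin, z=1.96):
    # Solve E = z * sigma / sqrt(n) for n, rounding up to a whole subject
    return math.ceil((z * sigma / margin) ** 2)

# Hypothetical planning numbers: sd guessed at 15, desired margin of 2
print(sample_size_for_margin(15, 2))  # -> 217
```

Note that halving the desired margin roughly quadruples the required sample size, since n grows with the square of 1/E.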

Bootstrapping and Resampling in Statistics with Example: In this statistics lecture, we learn the bootstrap method (a brute-force method), along with why one may want to use such an approach. The bootstrap is a resampling-based approach, useful for estimating the sampling distribution and the standard error of an estimate. Bootstrapping provides an alternative to approaches based on large-sample theory (you may recall that many methods rely on having a large n in order to be valid). Bootstrapping becomes particularly useful when dealing with more complicated estimates, whose standard error may not be easily calculated. This video takes a simple example and explains the principle of the bootstrap approach and the concept it is based on. The intention is to clarify exactly what this method involves. The video uses the sample mean for simplicity of explanation, although the bootstrap's real value comes when dealing with more complicated estimates. Here, we simply aim to provide a conceptual understanding of this approach.
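
The resampling idea can be sketched directly: draw many samples with replacement from the observed data, compute the statistic each time, and take the spread of those statistics as the standard error. This Python sketch (hypothetical data, seeded for reproducibility) uses the sample mean, as the video does, so we can compare against the textbook formula.

```python
import math
import random
import statistics

random.seed(1)  # reproducible resampling

# Hypothetical observed sample
sample = [10, 12, 14, 16, 18]
n = len(sample)

# Resample with replacement many times, recording the statistic each time
boot_means = [
    statistics.mean(random.choices(sample, k=n))
    for _ in range(2000)
]

# The spread of the bootstrap statistics estimates the standard error
boot_se = statistics.stdev(boot_means)
formula_se = statistics.stdev(sample) / math.sqrt(n)
print(round(boot_se, 3), round(formula_se, 3))  # the two should be close
```

For the mean the formula is easy, so bootstrapping buys little; for a statistic like a median or a ratio, the resampling loop stays the same while the formula may not exist.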

Hypothesis Testing: Calculations and Interpretations: In this statistics lecture, we discuss the process of testing a hypothesis, using a one-sample t-test (Student's t-test). We will define the null and alternative hypotheses, the alpha (significance) level, and p-values. We will work through calculating a hypothesis test for the mean and interpreting the results (including significant results). We also stress the importance of providing a confidence interval along with a hypothesis test. Here, we will focus on the concepts, and not the calculations. To learn how to conduct a one-sample t-test in R watch this video (Link Here), and to learn how to conduct a paired t-test in R watch this video (Link Here)
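
The arithmetic of the one-sample t-test is short enough to sketch. This Python illustration (the video's tutorial uses R) tests a made-up null value against a small hypothetical sample, with the critical value hardcoded from a t-table.

```python
import math
import statistics

# Hypothetical sample; null hypothesis: the population mean is 10
sample = [10, 12, 14, 16, 18]
mu0 = 10
n = len(sample)

xbar = statistics.mean(sample)
se = statistics.stdev(sample) / math.sqrt(n)

# Test statistic: how many standard errors the sample mean is from mu0
t_stat = (xbar - mu0) / se

# Two-sided critical value for alpha = 0.05 with n - 1 = 4 df (from a t-table)
t_star = 2.776
reject = abs(t_stat) > t_star
print(round(t_stat, 3), reject)
```

Here the sample mean sits about 2.8 standard errors above the null value, just past the cutoff, so the test rejects at the 5% level; reporting the accompanying confidence interval alongside this decision tells the reader how large the difference plausibly is.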

Hypothesis Testing: One Sided vs Two Sided Alternative: In this statistics video, we learn the difference between a one-sided alternative hypothesis (or one-sided test, or one-tailed test) and a two-sided alternative hypothesis (or two-sided test, or two-tailed test), working through an example. We also see how p-values, significance levels, and effect sizes differ for one-sided vs. two-sided hypothesis tests. There is little practical difference between a one-sided and a two-sided hypothesis test, although it is important to understand the difference between the two. In this statistics video, we spend more time on the statistical concepts and explanations and less time on calculations. To learn how to conduct a one-sample t-test in R watch this video (Link Here), and to learn how to conduct a paired t-test in R watch this video (Link Here)
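
The relationship between the two p-values is easy to show with a large-sample (z) approximation, where the normal CDF is available through the error function. The test statistic below is made up for illustration.

```python
import math

def phi(z):
    # Standard normal CDF via the error function
    return 0.5 * (1 + math.erf(z / math.sqrt(2)))

z = 2.0  # hypothetical observed test statistic

p_one_sided = 1 - phi(z)             # Ha: mean is greater than mu0
p_two_sided = 2 * (1 - phi(abs(z)))  # Ha: mean differs from mu0

print(round(p_one_sided, 4), round(p_two_sided, 4))
```

For the same statistic, the two-sided p-value is exactly twice the one-sided one, since the two-sided test splits alpha across both tails.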

Hypothesis Test vs. Confidence Interval: In this lecture, we compare and contrast a hypothesis test with a confidence interval, working through an example. The two can be viewed as complementary: a two-sided hypothesis test with an alpha of 5% and a 95% confidence interval always come to the same conclusion, so it can be beneficial to use both in research and statistics. For a hypothesis test, the point of reference is the null hypothesized value, while for a confidence interval the point of reference is the sample estimate (or sample statistic). In this statistics video, we spend more time on the statistical concepts and explanations and less time on calculations.
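
The agreement between the two procedures can be checked directly: a two-sided z test at alpha = 0.05 rejects exactly when the matching 95% confidence interval excludes the null value. A Python sketch with hypothetical numbers:

```python
import math

def z_test_and_ci(xbar, se, mu0, z_star=1.96):
    # Two-sided z test at alpha = 0.05, and the matching 95% CI
    reject = abs((xbar - mu0) / se) > z_star
    lower, upper = xbar - z_star * se, xbar + z_star * se
    ci_excludes_mu0 = mu0 < lower or mu0 > upper
    return reject, ci_excludes_mu0

# The two decisions always agree (hypothetical numbers below)
print(z_test_and_ci(14.0, 1.4, 10.0))   # far from the null: both True
print(z_test_and_ci(14.0, 1.4, 13.5))   # close to the null: both False
```

Both decisions compare the same distance |xbar - mu0| against the same cutoff z_star * se; the test measures it from the null value, the interval from the sample estimate.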

Errors and Power in Hypothesis Testing: In this statistics video we learn about Type I errors (false positives), Type II errors (false negatives), and power (sensitivity) in hypothesis testing. We work through multiple examples to explore the concept of errors in statistics, using a 2-by-2 table to understand these concepts better. Here we will also learn how these types of errors relate to each other and how the probability of making them can change. The terminology can be confusing at first: "rejecting a null hypothesis" is referred to as a "positive test result", while "failing to reject the null" is a "negative test result" (much like disease testing: the null is that you don't have the disease, the alternative is that you do, and testing positive means we reject the null and conclude that you have the disease, and vice versa). A Type I error is when we reject the null when in reality it is true; since rejecting the null is a "positive test result", if in reality this is incorrect, it is a "false positive". In this statistics video, we spend more time on the statistical concepts and explanations and less time on calculations. To learn more about Sensitivity, Specificity, Positive and Negative Predictive Values watch this video (Link Here)
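
The 2-by-2 table of truth vs. decision can be written out as a small lookup, which makes the naming convention above concrete (a Python sketch of the terminology only, not of any calculation):

```python
def classify(null_is_true, null_rejected):
    # Map the truth/decision combination onto the usual 2-by-2 table:
    # rejecting the null is a "positive" result, failing to reject is "negative"
    if null_is_true and null_rejected:
        return "Type I error (false positive)"
    if not null_is_true and not null_rejected:
        return "Type II error (false negative)"
    if not null_is_true and null_rejected:
        return "correct rejection (true positive)"
    return "correct non-rejection (true negative)"

print(classify(null_is_true=True, null_rejected=True))    # Type I error
print(classify(null_is_true=False, null_rejected=False))  # Type II error
```

Alpha is the probability of landing in the Type I cell when the null is true, and power is the probability of landing in the correct-rejection cell when it is false.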

Power Calculations in Hypothesis Testing: In this statistics lecture we learn about the statistical power of a hypothesis test and Type II error in statistics and in research. This tutorial covers the concept of power in statistics, how statistical power can be calculated, and the factors that affect power. Here, we explore in more detail how power is related to alpha, the sample size (n), and the difference we wish to detect. The goal is to use this as a foundation for understanding the concept of power and the factors that affect it. While power calculations can become quite complicated very quickly, the underlying concept is always the same. This video should lay a foundation for understanding power as a concept. The same "positive test result" and "negative test result" terminology from the previous video applies here. To learn more about Sensitivity, Specificity, Positive and Negative Predictive Values watch this video (Link Here)
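
For the simplest case, a one-sided z test, the power calculation fits in one formula, which makes the role of n and the effect size visible. A Python sketch with made-up design numbers (the one-sided 5% critical value 1.645 is hardcoded):

```python
import math

def phi(z):
    # Standard normal CDF via the error function
    return 0.5 * (1 + math.erf(z / math.sqrt(2)))

def power_one_sided_z(effect, sigma, n, z_alpha=1.645):
    # Power = P(reject | true mean is mu0 + effect) for a one-sided z test:
    # the test statistic is then centered at effect * sqrt(n) / sigma
    shift = effect * math.sqrt(n) / sigma
    return 1 - phi(z_alpha - shift)

# Hypothetical design: detect a shift of half a standard deviation
print(round(power_one_sided_z(0.5, 1.0, 30), 3))
print(round(power_one_sided_z(0.5, 1.0, 50), 3))  # larger n -> more power
```

The same structure shows the trade-offs: a larger n or a larger difference to detect increases the shift term and hence power, while a smaller alpha pushes the critical value up and lowers power.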

Statistical Inference Definition with Example: In this statistics lecture we summarize what we have learned about sampling distributions, hypothesis testing (or significance testing), and confidence intervals. Hypothesis testing and confidence intervals fall under the heading of "statistical inference", the act by which we use a sample to try to generalize back to a population. We will use all we have learned here as the foundation for understanding other methods of statistical inference. Following this, we will begin to analyze the relationship between two variables (bivariate analysis). For these, we will calculate different estimates, although we will use the foundation we've built to construct confidence intervals and test hypotheses about those other estimates. In this statistics video, we spend more time on the statistical concepts and explanations and less time on calculations.
