I always get these two confused, despite many years of statistics background :(
In short,
Type I error: incorrectly rejecting $H_0$ (when it is actually true)
Type II error: incorrectly failing to reject $H_0$ (when it is actually false)
The $\alpha$ is sometimes called the significance level of the test, and it represents the maximum probability of a Type I error that will be tolerated. Typical values of $\alpha$ are 0.01, 0.05, and 0.10.
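A quick way to see what "tolerated probability of a Type I error" means is a simulation. The minimal sketch below (assuming `numpy` and `scipy` are available; the settings are arbitrary) runs many one-sample t-tests on data where $H_0$ is true by construction, so every rejection is a Type I error, and the rejection fraction should hover near $\alpha$:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
alpha = 0.05            # significance level: tolerated Type I error probability
n_sims, n = 10_000, 30  # number of simulated experiments, sample size each

# H0 ("the mean is 0") is true by construction: samples are drawn with mean 0
rejections = 0
for _ in range(n_sims):
    sample = rng.normal(loc=0.0, scale=1.0, size=n)
    _, p_value = stats.ttest_1samp(sample, popmean=0.0)
    rejections += p_value < alpha

# Every rejection here is incorrect, so this fraction estimates
# the Type I error rate; it should come out close to alpha (~0.05)
print(f"empirical Type I error rate: {rejections / n_sims:.4f}")
```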
I am getting a better understanding of Type I and Type II errors now: they can be linked to true positives (TP), false positives (FP), false negatives (FN), and true negatives (TN) as follows.
Actually, the main confusion comes from the word "positive". Here, "positive" means "rejecting $H_0$" and "negative" means "failing to reject $H_0$".
Therefore, the 2x2 matrix of test decision vs. actual condition is:

- reject $H_0$ when $H_0$ is true: False Positive (Type I error)
- reject $H_0$ when $H_0$ is false: True Positive
- fail to reject $H_0$ when $H_0$ is true: True Negative
- fail to reject $H_0$ when $H_0$ is false: False Negative (Type II error)
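To make that mapping concrete, here is a tiny hypothetical Python helper (`classify` is my own name for it, nothing standard) that sends one test decision plus the ground truth to its cell in the matrix:

```python
def classify(reject_h0: bool, h0_is_true: bool) -> str:
    """Map one test decision plus the (in practice unknown) truth
    to its cell in the 2x2 matrix. "Positive" = rejecting H0."""
    if reject_h0:
        return "False Positive (Type I error)" if h0_is_true else "True Positive"
    return "True Negative" if h0_is_true else "False Negative (Type II error)"

print(classify(reject_h0=True,  h0_is_true=True))   # False Positive (Type I error)
print(classify(reject_h0=False, h0_is_true=False))  # False Negative (Type II error)
```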
Moreover, while a Type I error is a false positive (FP), $\alpha$ is not the same as the "false positive" count, as many references suggest. It should be the false positive rate, $\mathrm{FP}/(\mathrm{FP}+\mathrm{TN})$. Seen this way, it makes more sense for $1-\alpha$ to be called the specificity, $\mathrm{TN}/(\mathrm{FP}+\mathrm{TN})$.
On the other hand, a Type II error is a false negative (FN), and $\beta$ is the false negative rate, $\mathrm{FN}/(\mathrm{TP}+\mathrm{FN})$. Now the power, $1-\beta$, which is also called the sensitivity, is $\mathrm{TP}/(\mathrm{TP}+\mathrm{FN})$.
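Putting all four rates together in code makes the $1-\alpha$ and $1-\beta$ identities easy to check. The counts below are hypothetical, chosen only to give round numbers:

```python
# Hypothetical counts, for illustration only (not from any real study)
TP, FP, FN, TN = 80, 5, 20, 95

fpr = FP / (FP + TN)          # false positive rate: the realized alpha
specificity = TN / (FP + TN)  # = 1 - alpha
fnr = FN / (TP + FN)          # false negative rate: the realized beta
power = TP / (TP + FN)        # sensitivity = 1 - beta

print(f"alpha ~ {fpr:.2f}, specificity = {specificity:.2f}")  # 0.05, 0.95
print(f"beta  ~ {fnr:.2f}, power       = {power:.2f}")        # 0.20, 0.80
```

Note that $\alpha$ and the false positive rate share a denominator trick: both are conditioned on the column of the matrix where $H_0$ is true, just as $\beta$ and the false negative rate are conditioned on the column where $H_0$ is false. That is why the pairs (specificity, $1-\alpha$) and (power, $1-\beta$) line up.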