If an estimator is "weakly consistent", then:
- (a) It will also be mean-square consistent
- (b) Its asymptotic distribution will be the same as that of the corresponding MLE
- (c) The probability limit of the estimator is equal to the true value of the parameter being estimated
- (d) It will also be mean-square consistent if it is unbiased in finite samples
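Note (for reference): the standard definition of weak consistency is convergence in probability. Writing $\hat{\theta}_n$ for the estimator from a sample of size $n$ and $\theta_0$ for the true parameter value (notation introduced here):

$$
\operatorname{plim}_{n\to\infty}\hat{\theta}_n=\theta_0
\;\Longleftrightarrow\;
\lim_{n\to\infty}\Pr\!\left(\,|\hat{\theta}_n-\theta_0|>\varepsilon\,\right)=0
\quad\text{for every }\varepsilon>0 .
$$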
If an estimator is "strongly consistent" then:
- It is also "weakly consistent"
- The estimator converges "almost surely" to the true value of the parameter
- Both A and B
- It must also be asymptotically efficient
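A reference note: strong consistency means almost-sure convergence, which implies convergence in probability, so a strongly consistent estimator is also weakly consistent (same notation as in the note above):

$$
\Pr\!\left(\lim_{n\to\infty}\hat{\theta}_n=\theta_0\right)=1
\;\Longrightarrow\;
\operatorname{plim}_{n\to\infty}\hat{\theta}_n=\theta_0 .
$$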
If an estimator is "Fisher consistent", then:
- (a) It will also be mean-square consistent
- (b) It will satisfy Slutsky's Theorem
- (c) It will also be weakly consistent
- (d) Both (b) and (c)
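For reference: Fisher consistency is a finite-sample, "plug-in" property rather than an asymptotic one. If the estimator can be written as a functional $T$ of the empirical distribution function $F_n$ (notation introduced here), it is Fisher consistent when applying $T$ to the true distribution $F_\theta$ recovers the parameter itself:

$$
\hat{\theta}_n=T(F_n), \qquad T(F_\theta)=\theta \ \text{ for all } \theta .
$$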
The "regularity conditions" associated with MLE are:
- (a) Conditions that the Hessian matrix of the log-likelihood function should satisfy
- (b) Conditions that the underlying population density function must satisfy
- (c) Conditions that MLEs generally satisfy
- (d) Conditions that ensure that the "likelihood equations" have a unique solution
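For reference: typical regularity conditions require that the support of the density $f(y;\theta)$ not depend on $\theta$, that $f$ be sufficiently smooth in $\theta$, and that differentiation and integration can be interchanged; the last of these yields the zero-mean property of the score:

$$
\operatorname{E}\!\left[\frac{\partial \ln f(y;\theta)}{\partial\theta}\right]=0 .
$$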
If the usual "regularity conditions" are satisfied, then MLE's are:
- Weakly consistent & asymptotically unbiased
- A hypothetical, but not always attainable, minimum value for the variance of any estimator for this parameter
- The smallest value for the that can be achieved by any unbiased estimator for this parameter
- The value of Fisher's "Information Measure"
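A reference note: a standard textbook statement of the MLE's limiting behaviour under the regularity conditions, with $i(\theta_0)$ denoting Fisher's information for a single observation (notation introduced here):

$$
\sqrt{n}\left(\hat{\theta}_{ML}-\theta_0\right)\ \xrightarrow{\,d\,}\ N\!\left(0,\ i(\theta_0)^{-1}\right).
$$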
When estimating a scalar parameter, the Cramer-Rao lower bound is:
- (a) The smallest variance that any MLE can achieve
- (b) A hypothetical, but not always attainable, minimum value for the variance of any estimator for this parameter
- (c) The smallest variance that can be achieved by any unbiased estimator for this parameter
- (d) The value of Fisher's "Information Measure"
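For reference: for a scalar parameter and an unbiased estimator $\hat{\theta}$, the Cramer-Rao lower bound is the reciprocal of Fisher's information based on the full-sample likelihood $L(\theta)$:

$$
\operatorname{Var}(\hat{\theta})\ \ge\ \frac{1}{I_n(\theta)},
\qquad
I_n(\theta)=\operatorname{E}\!\left[\left(\frac{\partial \ln L(\theta)}{\partial\theta}\right)^{\!2}\right]
=-\operatorname{E}\!\left[\frac{\partial^2 \ln L(\theta)}{\partial\theta^2}\right].
$$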
Fisher's Information Matrix:
- (a) Is obtained by taking the negative of the expectation of the Hessian matrix of the log-likelihood function
- (b) Is the Cramer-Rao lower bound for the covariance matrix of an unbiased estimator
- (c) Has an inverse that is the Cramer-Rao lower bound for the covariance matrix of an unbiased estimator
- (d) Both (a) and (c) above
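A reference note: in the vector-parameter case the information matrix is the negative of the expected Hessian of the log-likelihood, and its inverse bounds the covariance matrix of any unbiased estimator, in the positive semi-definite sense:

$$
I(\theta)=-\operatorname{E}\!\left[\frac{\partial^2 \ln L(\theta)}{\partial\theta\,\partial\theta'}\right],
\qquad
\operatorname{Cov}(\hat{\theta})-I(\theta)^{-1}\ \succeq\ 0 .
$$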
When testing restrictions on the parameters, the Wald, LM and LRT tests are "asymptotically equivalent" in the sense that:
- (a) The asymptotic distributions of the associated test statistics are the same if the null hypothesis is false
- (b) The asymptotic distributions of the associated test statistics are all standard normal if the null hypothesis is true
- (c) The test statistics will all take the same value if the model is correctly specified
- (d) The asymptotic distributions of the associated test statistics are the same if the null hypothesis is true
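For reference: with $r$ well-behaved restrictions under test, each of the three statistics converges in distribution to the same chi-square limit when the null is true:

$$
W,\ LM,\ LR \ \xrightarrow{\,d\,}\ \chi^2_{(r)} \quad\text{under } H_0 .
$$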
The LM test might be preferred over the Wald or LR tests if:
- (a) It is easier to estimate the model by MLE once the restrictions under test are imposed on the model
- (b) It is easier to estimate the model by MLE if the restrictions under test are NOT imposed on the model
- (c) The restrictions under test are non-linear functions of the parameters
- (d) Both (a) and (c)
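A reference note: the LM (score) statistic requires only the restricted MLE $\tilde{\theta}$, since both the score $s(\cdot)$ and the information matrix are evaluated at the restricted estimates:

$$
LM = s(\tilde{\theta})'\,I(\tilde{\theta})^{-1}\,s(\tilde{\theta}),
\qquad
s(\theta)=\frac{\partial \ln L(\theta)}{\partial\theta} .
$$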
The main motivation for the LRT is that:
- In the case of a "point" null hypothesis and a "point" alternative hypothesis, the test is consistent
- In the case of a "composite" null hypothesis and a "composite" alternative hypothesis, the test is UMP
- In the case of a "point" null hypothesis and a "point" alternative hypothesis, the test is UMP
- It is easy to compute, even for complex hypotheses
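For reference: the LR statistic compares the restricted and unrestricted maximized likelihoods, evaluated at $\tilde{\theta}$ and $\hat{\theta}$ respectively; by the Neyman-Pearson Lemma, for a point null against a point alternative the test based on $\lambda$ is most powerful:

$$
\lambda=\frac{L(\tilde{\theta})}{L(\hat{\theta})},
\qquad
LR=-2\ln\lambda .
$$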