Exercises 3: Asymptotic Properties of Various Regression Estimators
If an estimator is "mean square consistent", then:
A. Its mean squared error vanishes as the sample size becomes arbitrarily large.
B. Its mean squared error will be zero.
C. Its probability limit will equal the true value of the parameter.
D. Both A and C.
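A quick simulation sketch of the idea behind this question (my own illustration, assuming numpy): the sample mean is mean square consistent for the population mean, so its simulated MSE vanishes as n grows, and this in turn forces its probability limit to equal the true value.

```python
# Sketch: mean square consistency of the sample mean (illustrative, assumes numpy).
import numpy as np

rng = np.random.default_rng(0)
mu, sigma = 3.0, 2.0
for n in (10, 100, 1000):
    # 5000 replications of a sample mean based on n observations
    means = rng.normal(mu, sigma, size=(5000, n)).mean(axis=1)
    mse = np.mean((means - mu) ** 2)
    print(n, mse)   # MSE ~ sigma^2 / n, vanishing as n grows
```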
Consider two possible estimators of the (constant) variance of the error term in a standard linear regression model, for which all of the usual assumptions are satisfied: vhat1 = (e'e) / (n-k), and vhat2 = (e'e) / (n-k+2).
A. Both of these estimators are weakly consistent and mean square consistent.
B. Both of these estimators are weakly consistent and unbiased.
C. The first estimator is unbiased and mean square consistent, while the second estimator is biased and only weakly consistent.
D. Both of these estimators are weakly consistent, but neither of them is mean square consistent.
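A rough simulation sketch of this comparison (my own, assuming numpy): both estimators converge to the true error variance as n grows, but only the first is unbiased in finite samples, while the bias of the second shrinks toward zero.

```python
# Sketch: comparing e'e/(n-k) and e'e/(n-k+2) by simulation (assumes numpy).
import numpy as np

rng = np.random.default_rng(1)
k, true_var, reps = 3, 4.0, 2000
beta = np.ones(k)
for n in (20, 200, 2000):
    v1, v2 = [], []
    for _ in range(reps):
        X = np.column_stack([np.ones(n), rng.normal(size=(n, k - 1))])
        y = X @ beta + rng.normal(scale=np.sqrt(true_var), size=n)
        e = y - X @ np.linalg.lstsq(X, y, rcond=None)[0]   # OLS residuals
        v1.append(e @ e / (n - k))
        v2.append(e @ e / (n - k + 2))
    # bias of vhat1 is ~0 at every n; bias of vhat2 is negative but -> 0
    print(n, np.mean(v1) - true_var, np.mean(v2) - true_var)
```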
Suppose that plim(β*) = β and plim(α*) = α, where α and β are unknown parameters. Then:
A. α* is mean square consistent for α, and β* is mean square consistent for β.
B. α* and β* are unbiased estimators of α and β respectively.
C. plim(log(β*)) = log(β), and plim(α* / β*) = (α / β), provided that β > 0.
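A minimal sketch of the relevant result, Slutsky's theorem (my own illustration, assuming numpy): continuous functions of consistent estimators are consistent for the same functions of the parameters, even though consistency implies nothing about unbiasedness or mean square convergence.

```python
# Sketch: plim(log(b*)) = log(beta) and plim(a*/b*) = alpha/beta (assumes numpy).
import numpy as np

rng = np.random.default_rng(2)
alpha, beta = 2.0, 5.0            # hypothetical true parameter values
for n in (100, 10_000, 1_000_000):
    a_star = rng.normal(alpha, 1.0, size=n).mean()   # consistent for alpha
    b_star = rng.normal(beta, 1.0, size=n).mean()    # consistent for beta
    print(n, np.log(b_star) - np.log(beta), a_star / b_star - alpha / beta)
# Both discrepancies shrink toward zero as n grows.
```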
The Lindeberg-Lévy Central Limit Theorem tells us that, under certain conditions:
A. The sum of independent drawings from a normal distribution will also be normally distributed.
B. The sum of n independent drawings from any distribution will be normally distributed if n is greater than about 20.
C. The Student-t distribution will approach the standard normal distribution as the degrees of freedom tends to infinity.
D. The arithmetic average of independent random variables from any distribution will be approximately normally distributed, provided the number of terms used in constructing this average is sufficiently large.
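A quick sketch of the theorem at work (my own, assuming numpy and scipy): standardized sample means from a strongly skewed Exp(1) distribution move steadily closer to the standard normal as n grows.

```python
# Sketch: Lindeberg-Levy CLT for means of Exp(1) draws (assumes numpy, scipy).
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
for n in (5, 50, 500):
    means = rng.exponential(scale=1.0, size=(10_000, n)).mean(axis=1)
    z = (means - 1.0) * np.sqrt(n)                # Exp(1): mean 1, s.d. 1
    print(n, stats.kstest(z, "norm").statistic)   # KS distance to N(0,1) shrinks
```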
Suppose that we have 2 statistics. One of them is 't', which follows a Student-t distribution with p degrees of freedom. The other is 'F', and it follows an F distribution with p1 and p2 degrees of freedom.
A. As p → ∞ and p2 → ∞, the distribution for 't' approaches the standard normal distribution, and the distribution for 'F' approaches the chi-square distribution with p1 degrees of freedom.
B. As p and p2 → ∞, the distribution for 't' approaches the standard normal distribution, and the distribution for (p1F) approaches the chi-square distribution with p1 degrees of freedom.
C. As p and p2 → ∞, the distribution for 't' approaches a chi-square distribution with p1 degrees of freedom, and the distribution for (p1F) also approaches the chi-square distribution with p1 degrees of freedom.
D. As p and p2 → ∞, the distributions for 't' and 'F' approach the same limiting (asymptotic) distribution.
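A simulation sketch of the limiting results involved here (my own, assuming numpy and scipy): as the degrees of freedom grow, draws of t look standard normal, and draws of p1·F look chi-square with p1 degrees of freedom.

```python
# Sketch: t_p -> N(0,1) and p1*F(p1,p2) -> chi-square(p1) (assumes numpy, scipy).
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)
p1 = 4
for dof in (5, 50, 5000):
    t_draws = rng.standard_t(dof, size=100_000)
    f_draws = p1 * rng.f(p1, dof, size=100_000)
    print(dof,
          stats.kstest(t_draws, "norm").statistic,              # -> 0
          stats.kstest(f_draws, "chi2", args=(p1,)).statistic)  # -> 0
```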
Suppose that β1 and β2 are 2 weakly consistent estimators of a parameter, β. Suppose also that the asymptotic variance of β1 is (v1 / n), and that of β2 is (v2 / n). Then:
A. We can't say anything about the relative asymptotic efficiencies of these 2 estimators, as both of the asymptotic variances converge to zero as n becomes infinitely large.
B. β1 will be asymptotically more efficient than β2 if v1 < v2.
C. Both estimators must have the same asymptotic efficiency, as they are both consistent estimators.
D. Both estimators are asymptotically unbiased, but we can't tell anything about their asymptotic efficiency.
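A concrete sketch of the comparison being made here (my own, assuming numpy): for normal data both the sample mean and the sample median are consistent for the centre, but their scaled variances settle at v1 = σ² and v2 = (π/2)σ² respectively, so the mean is the asymptotically more efficient of the two.

```python
# Sketch: mean vs. median -- both consistent, different asymptotic variances.
import numpy as np

rng = np.random.default_rng(5)
n, reps = 1000, 10_000
x = rng.normal(0.0, 1.0, size=(reps, n))
print(n * np.var(x.mean(axis=1)))         # ~ 1.0         (v1 = sigma^2)
print(n * np.var(np.median(x, axis=1)))   # ~ pi/2 ~ 1.57 (v2 = (pi/2) sigma^2)
```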
The "generalized" Instrumental Variables (I.V.) estimator for the coefficient vector in a linear regression model collapses to the "simple" I.V. estimator if:
A. The number of instruments is greater than or equal to the number of regressors.
B. The instruments are non-random.
C. The number of instruments is less than or equal to the number of regressors.
D. The number of instruments exactly equals the number of regressors.
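A numerical sketch of the collapse (my own, assuming numpy): when the instrument matrix Z has exactly as many columns as X, the matrix Z'X is square and invertible, and the generalized (2SLS) formula reduces algebraically to the simple I.V. formula (Z'X)⁻¹Z'y.

```python
# Sketch: generalized I.V. equals simple I.V. in the just-identified case.
import numpy as np

rng = np.random.default_rng(6)
n, k = 500, 3
Z = rng.normal(size=(n, k))                    # as many instruments as regressors
X = Z @ rng.normal(size=(k, k)) + rng.normal(size=(n, k))
y = X @ np.ones(k) + rng.normal(size=n)

b_simple = np.linalg.solve(Z.T @ X, Z.T @ y)   # (Z'X)^(-1) Z'y
P = Z @ np.linalg.solve(Z.T @ Z, Z.T)          # projection onto the columns of Z
b_general = np.linalg.solve(X.T @ P @ X, X.T @ P @ y)   # 2SLS / generalized I.V.
print(np.max(np.abs(b_simple - b_general)))    # ~ 0 (numerical noise only)
```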
If we apply a Hausman test of the hypothesis that the errors in a regression model are asymptotically uncorrelated with the regressors:
A. We would use I.V. estimation if the p-value for the test is large enough (say, greater than 10% or 20%).
B. We would use OLS estimation if the null hypothesis is rejected.
C. We would use I.V. estimation if the null hypothesis is rejected.
D. The test statistic will follow a chi-square distribution if the null hypothesis is true.
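A rough sketch of a Hausman-type contrast in the simplest one-regressor case (my own construction, assuming numpy and scipy): under the null that OLS is consistent, H = (b_IV − b_OLS)² / (Var(b_IV) − Var(b_OLS)) is asymptotically chi-square with one degree of freedom, and a rejection points toward I.V. estimation.

```python
# Sketch: simple Hausman contrast with one regressor and one instrument.
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
n = 2000
z = rng.normal(size=n)                   # instrument
u = rng.normal(size=n)                   # structural error
x = z + 0.8 * u + rng.normal(size=n)     # regressor correlated with the error
y = 1.0 * x + u

b_ols = (x @ y) / (x @ x)
b_iv = (z @ y) / (z @ x)
e_iv = y - b_iv * x
s2 = (e_iv @ e_iv) / n                   # error variance, consistent either way
var_ols = s2 / (x @ x)
var_iv = s2 * (z @ z) / (z @ x) ** 2
H = (b_iv - b_ols) ** 2 / (var_iv - var_ols)
print(H, stats.chi2.sf(H, df=1))         # big H, tiny p-value: reject, use I.V.
```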
The Wu test is used:
A. Because it is (asymptotically) more powerful than the Hausman test.
B. To determine if a model's regressors are asymptotically uncorrelated with the error term, or not.
C. To determine whether OLS or IV estimation is more appropriate.
D. Both B and C.
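A sketch of the regression-based Wu variant (my own, assuming numpy and scipy): regress the suspect regressor on the instrument, add the first-stage residuals to the original equation, and t-test their coefficient; a significant coefficient signals that the regressor and error are correlated, so I.V. estimation is the appropriate choice.

```python
# Sketch: regression-based Wu (Durbin-Wu-Hausman) test for endogeneity.
import numpy as np
from scipy import stats

rng = np.random.default_rng(8)
n = 2000
z = rng.normal(size=n)
u = rng.normal(size=n)
x = z + 0.8 * u + rng.normal(size=n)     # endogenous regressor
y = 1.0 * x + u

v = x - z * (z @ x) / (z @ z)            # first-stage residuals
W = np.column_stack([x, v])
b = np.linalg.lstsq(W, y, rcond=None)[0]
e = y - W @ b
s2 = (e @ e) / (n - 2)
se_v = np.sqrt(s2 * np.linalg.inv(W.T @ W)[1, 1])
t_stat = b[1] / se_v
print(t_stat, 2 * stats.t.sf(abs(t_stat), df=n - 2))   # significant: use I.V.
```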
The usual OLS estimator of the coefficient vector in the linear regression model is also an I.V. estimator, with the instrument matrix chosen to be:
A. Non-singular.
B. The regressor matrix, X.
C. Non-random.
D. The (X'X) matrix.
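A one-look sketch (my own, assuming numpy): setting the instrument matrix Z equal to X turns the I.V. formula (Z'X)⁻¹Z'y into the familiar OLS formula (X'X)⁻¹X'y.

```python
# Sketch: OLS as an I.V. estimator with Z = X.
import numpy as np

rng = np.random.default_rng(9)
X = np.column_stack([np.ones(100), rng.normal(size=(100, 2))])
y = X @ np.array([1.0, 2.0, 3.0]) + rng.normal(size=100)

Z = X                                      # choose the regressors as instruments
b_iv = np.linalg.solve(Z.T @ X, Z.T @ y)   # (Z'X)^(-1) Z'y
b_ols = np.linalg.solve(X.T @ X, X.T @ y)  # (X'X)^(-1) X'y
print(np.allclose(b_iv, b_ols))            # True
```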
If we are constructing several I.V. estimators with "valid" choices of instruments (i.e., in each case the instruments are asymptotically uncorrelated with the error term in the model), then:
A. All of these estimators will be equivalent asymptotically, although their finite-sample properties may differ.
B. The choice of instruments will affect the relative asymptotic efficiencies of the estimators, but not their finite-sample bias.
C. The choice of instruments will affect the consistency of the estimators, but not their relative asymptotic efficiencies.
D. The choice of instruments will affect the relative asymptotic efficiencies of the estimators, but not their consistency.
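A simulation sketch of the distinction (my own, assuming numpy): two instrument choices, both valid but one only weakly correlated with the regressor. Both I.V. estimators centre on the true coefficient (consistency is unaffected), but the weak instrument produces a much larger sampling variance (efficiency is affected).

```python
# Sketch: instrument choice affects efficiency, not consistency.
import numpy as np

rng = np.random.default_rng(10)
n, reps = 2000, 2000
b_strong, b_weak = [], []
for _ in range(reps):
    z1 = rng.normal(size=n)                 # strongly correlated with x
    z2 = 0.2 * z1 + rng.normal(size=n)      # weakly correlated, still valid
    u = rng.normal(size=n)
    x = z1 + 0.5 * u + rng.normal(size=n)
    y = 1.0 * x + u
    b_strong.append((z1 @ y) / (z1 @ x))
    b_weak.append((z2 @ y) / (z2 @ x))
print(np.mean(b_strong), np.var(b_strong))  # ~1, small variance
print(np.mean(b_weak), np.var(b_weak))      # ~1, noticeably larger variance
```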
Check the EViews regression output located here. The following is true:
A. The estimator used in Equation 1 will be inconsistent, while that used in Equation 2 will be consistent - this is why there are some large differences in the results.
B. The larger standard errors in Equation 2, as compared with Equation 1, reflect the inefficiency of OLS relative to I.V. estimation (at least asymptotically).
C. Equation 2 has been estimated with an allowance for the possibility that the income variable, Y, may be correlated with the error term, even in very large samples.
D. The use of I.V. estimation (in this particular example) reduces the estimates of both the short-run and long-run marginal propensities to consume.