When considering exact linear restrictions on the regression model's coefficient vector, the number of these restrictions must be:
Greater than the number of regressors in the model.
Less than the number of regressors in the model.
Exactly equal to the number of regressors in the model.
Less than 'n', the sample size.
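For concreteness, a small (purely illustrative) example of the usual set-up Rβ = q: with k = 4 regressors, the J = 2 restrictions β1 + β2 = 1 and β3 = 0 correspond to

\[
R=\begin{pmatrix}1&1&0&0\\0&0&1&0\end{pmatrix},\qquad
q=\begin{pmatrix}1\\0\end{pmatrix},\qquad J=2<k=4 .
\]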
If we wish to test the validity of J independent linear restrictions on the coefficients of the usual linear model, and if all of the usual assumptions about that model hold, then:
We can use an F-test. The distribution of the test statistic is F with J and (n-k) degrees of freedom if the restrictions are valid, but it is Chi-square with J degrees of freedom if they are false.
We can use a Wald test.
The Wald test will be uniformly most powerful.
We can use an F-test, and this test will be valid even if the sample size is small.
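For reference, a minimal numpy sketch of how the F-statistic for J linear restrictions is built from the restricted and unrestricted sums of squared residuals; the simulated data and the particular restrictions imposed here (the last two coefficients are zero, so J = 2) are assumed purely for illustration:

```python
import numpy as np

# Simulated data; design, coefficients, and restrictions are assumptions.
rng = np.random.default_rng(42)
n, k = 200, 4
X = np.column_stack([np.ones(n), rng.normal(size=(n, k - 1))])
y = X @ np.array([1.0, 0.5, 0.0, 0.0]) + rng.normal(size=n)

def ssr(X, y):
    # sum of squared OLS residuals
    b = np.linalg.lstsq(X, y, rcond=None)[0]
    e = y - X @ b
    return e @ e

J = 2
ssr_u = ssr(X, y)            # unrestricted: all k regressors
ssr_r = ssr(X[:, :2], y)     # restricted: last two coefficients set to zero

F = ((ssr_r - ssr_u) / J) / (ssr_u / (n - k))
print(F)                     # F(J, n-k) under H0 and the usual assumptions
```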
Suppose we have a regression model that is non-linear in the parameters, and we want to test the hypothesis that the sum of two of the parameters is unity. Then:
Using a Wald test would be foolish, as the results will depend on how we write the restriction.
A Wald test would be a sensible choice as it will be optimal for any sample size.
A Wald test would be a sensible choice and the test statistic will have a known null distribution if the sample size is sufficiently large.
Because the restriction is linear, we can use an F-test, which will be exact and UMP even if the sample size is relatively small.
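For a restriction written as g(θ) = 0 (here g(θ) = θ1 + θ2 − 1, so J = 1), the Wald statistic built from a consistent estimator θ̂ and an estimate V̂ of its asymptotic covariance matrix is

\[
W \;=\; g(\hat\theta)'\,\bigl[\hat G\,\hat V\,\hat G'\bigr]^{-1} g(\hat\theta),
\qquad \hat G=\left.\frac{\partial g}{\partial \theta'}\right|_{\hat\theta},
\]

and W is asymptotically χ²(J) under the null; this is the known large-sample null distribution referred to in the options above.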
Check the EViews regression output located here. The following is true:
The F-statistic that is reported there would increase if any other regressors were added to the model.
The null distribution for the F-statistic that is reported there is "F", with 2 and 208 degrees of freedom.
Both A and B are true.
Because of the relatively large sample size, the F-statistic that is reported there has a distribution that is approximately Chi-Square with 3 degrees of freedom, if the null hypothesis is true.
Suppose that we have a standard linear regression model, y = Xβ + ε, and we want to test the null hypothesis that β1 = (β2 / β3). Then:
The F-test is a better choice than the Wald test, as the results for the former test will be invariant to how we write the null hypothesis.
The Wald test is asymptotically valid, but the F-test is not appropriate.
The Wald test is not a good choice as the results will depend on how we write the null hypothesis.
Both B and C.
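For reference, a small delta-method sketch of the non-invariance at issue: the algebraically equivalent forms β1 − β2/β3 = 0 and β1β3 − β2 = 0 generally produce different Wald statistics. The point estimates and covariance matrix below are made up purely for illustration:

```python
import numpy as np

# Hypothetical point estimates and covariance matrix, assumed purely
# for illustration of the delta-method Wald statistic.
b = np.array([0.48, 1.10, 2.05])           # (b1, b2, b3)
V = np.diag([0.04, 0.09, 0.16])            # assumed covariance estimate

def wald(g, G, V):
    # W = g' (G V G')^{-1} g; asymptotically chi-square under the null
    return float(g @ np.linalg.inv(G @ V @ G.T) @ g)

# Form 1 of the null: g(b) = b1 - b2/b3 = 0
g1 = np.array([b[0] - b[1] / b[2]])
G1 = np.array([[1.0, -1.0 / b[2], b[1] / b[2] ** 2]])   # gradient of g

# Form 2, algebraically identical: g(b) = b1*b3 - b2 = 0
g2 = np.array([b[0] * b[2] - b[1]])
G2 = np.array([[b[2], -1.0, b[0]]])

print(wald(g1, G1, V), wald(g2, G2, V))    # the two statistics differ
```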
Check the EViews regression output located here. The following is true:
The hypothesis being tested is that β2 = -β3, and we would reject this hypothesis at the 10% significance level.
The hypothesis being tested is that β2 = β3, and we would not reject this hypothesis at the 10% significance level.
The hypothesis being tested is that β2 = β3, and we would reject this hypothesis at the 10% significance level.
The hypothesis being tested is that β2 = -β3, and the alternative hypothesis is that β2 + β3 ≠ 0.
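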
Check the EViews regression output located here. The following is true:
In the Wald test output box, the F-statistic should not be used for an exact test because OLS estimation has not been used.
In the Wald test output box, the F-statistic should not be used for an exact test because a lagged value of the dependent variable is used as a regressor.
In the Wald test output box, the Chi-Square and F-statistics are the same because v1*F(v1,v2) → Chi-Square with v1 degrees of freedom.
All of the above are correct.
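A quick check of the limiting relationship cited above, using scipy (assumed available): as the denominator degrees of freedom v2 grow, the 95% point of v1*F(v1, v2) approaches the χ²(v1) one:

```python
from scipy.stats import chi2, f

# As v2 grows, v1 times the F(v1, v2) critical value approaches
# the chi-square(v1) critical value.
v1 = 2
for v2 in (10, 50, 200, 5000):
    print(v2, v1 * f.ppf(0.95, v1, v2), chi2.ppf(0.95, v1))
```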
The Restricted Least Squares estimator collapses to the OLS estimator if:
The sample data are such that the OLS estimator happens to exactly satisfy the restrictions.
The restrictions are actually valid in the population.
The regressors are non-random.
The error term in the model has a zero mean, so that the OLS estimator is unbiased.
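For reference, a numpy sketch of the textbook RLS formula; the simulated data and the restriction imposed (β2 = β3) are assumed purely for illustration. The correction term is proportional to (q − Rb), so b* = b exactly when the OLS estimates happen to satisfy the restrictions:

```python
import numpy as np

# Simulated data; design, coefficients, and the restriction are assumptions.
rng = np.random.default_rng(0)
n = 100
X = rng.normal(size=(n, 3))
y = X @ np.array([1.0, 2.0, 3.0]) + rng.normal(size=n)

R = np.array([[0.0, 1.0, -1.0]])   # one restriction: beta2 - beta3 = 0
q = np.array([0.0])

XtX_inv = np.linalg.inv(X.T @ X)
b = XtX_inv @ X.T @ y              # ordinary least squares

# RLS: b* = b + (X'X)^{-1} R' [R (X'X)^{-1} R']^{-1} (q - R b).
# The correction vanishes exactly when R b = q.
b_star = b + XtX_inv @ R.T @ np.linalg.inv(R @ XtX_inv @ R.T) @ (q - R @ b)
print(b, b_star)
```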
Suppose we apply the Restricted Least Squares estimator, and all of the usual assumptions about our regression model hold. However, suppose the restrictions themselves are false. Then:
The RLS estimator will be biased.
The RLS estimator will be biased and inconsistent, but the OLS estimator will be unbiased and consistent.
The RLS estimator will be inconsistent.
The RLS estimator will be biased but weakly consistent.
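Relatedly, taking expectations in the RLS formula (non-random X, E[ε] = 0) gives

\[
E[b^{*}]-\beta=(X'X)^{-1}R'\bigl[R(X'X)^{-1}R'\bigr]^{-1}(q-R\beta),
\]

which is non-zero whenever Rβ ≠ q; and because the factor multiplying (q − Rβ) is O(1), the discrepancy does not die out as n grows.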
If we construct the Restricted Least Squares (RLS) estimator and the restrictions are actually false, then:
The RLS estimator will be inefficient relative to the OLS estimator.
The RLS estimator will be biased, and it will exhibit greater variability than the OLS estimator.
The RLS estimator may be inefficient, or efficient, relative to the OLS estimator, depending on the bias/variance trade-off.
The RLS estimator will be less biased than if the restrictions were true.
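A small Monte Carlo sketch of the bias/variance trade-off being described; the design, coefficients, and the (false) restriction β2 = β3 are all assumed purely for illustration:

```python
import numpy as np

# Compare OLS and RLS when the imposed restriction is false in the population.
rng = np.random.default_rng(1)
n, reps = 50, 2000
beta = np.array([1.0, 2.0, 3.0])             # violates beta2 = beta3
R, q = np.array([[0.0, 1.0, -1.0]]), np.array([0.0])

X = rng.normal(size=(n, 3))
XtX_inv = np.linalg.inv(X.T @ X)
A = XtX_inv @ R.T @ np.linalg.inv(R @ XtX_inv @ R.T)

ols, rls = [], []
for _ in range(reps):
    y = X @ beta + rng.normal(size=n)
    b = XtX_inv @ X.T @ y
    ols.append(b)
    rls.append(b + A @ (q - R @ b))          # RLS correction

for name, est in (("OLS", np.array(ols)), ("RLS", np.array(rls))):
    bias = est.mean(axis=0) - beta
    mse = ((est - beta) ** 2).sum(axis=1).mean()
    print(name, "bias:", bias.round(3), "total MSE:", round(mse, 3))
```

Typically the RLS estimates vary less across replications but are biased, so which estimator "wins" on MSE depends on how badly the restriction is violated.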
If the rank of the restrictions matrix, 'R', in the set-up for exact linear restrictions on the regression model's coefficients, is less than J (the number of restrictions), then:
There will be some restrictions that are either redundant, or in conflict with other restrictions.
The Wald test statistic will not be defined.
The Restricted Least Squares estimator will not be defined.
All of the above.
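For reference, a numpy sketch of what rank deficiency in R does to the usual formulas; the matrices here are assumed purely for illustration:

```python
import numpy as np

# If one row of R is a linear combination of the others, rank(R) < J and
# R (X'X)^{-1} R' is singular, so neither the Wald statistic nor the RLS
# estimator can be computed.
R = np.array([[0.0, 1.0, -1.0],
              [0.0, 2.0, -2.0]])        # second row = 2 x first: redundant
print(np.linalg.matrix_rank(R))         # 1, even though J = 2

S = np.eye(3)                           # stand-in for (X'X)^{-1}
print(np.linalg.det(R @ S @ R.T))       # 0: the matrix cannot be inverted
```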
If we wanted to estimate the model y = Xβ + ε by Generalized Instrumental Variables, and we wanted to impose the restrictions Rβ = q on the estimates, then we would:
Be wasting our time, because we can only incorporate such restrictions if we are using OLS as the basis for estimation.
Derive the estimator by finding the b* that solves the problem: Min. (y-Xb*)'Mz(y-Xb*) subject to Rb* = q; where Mz is the same idempotent matrix used to construct the usual Generalized I.V. estimator.
End up with an estimator that would be consistent (but probably biased) if the restrictions were valid, but inconsistent if the restrictions were false.
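For reference, solving the constrained minimization described in option B (a sketch, taking Mz as given and X'MzX non-singular) gives, by exactly the same Lagrangian algebra as in the RLS case,

\[
b^{*}=b_{IV}+(X'M_Z X)^{-1}R'\bigl[R(X'M_Z X)^{-1}R'\bigr]^{-1}(q-R\,b_{IV}),
\]

where \(b_{IV}=(X'M_Z X)^{-1}X'M_Z y\) is the usual (unrestricted) Generalized I.V. estimator.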