Exercises 5: Model Stability & Specification Analysis
In the case of the Chow test, the null hypothesis being tested is:
A. There is a single structural break at a known point in the sample.
B. The complete coefficient vector in the regression model is the same in each regime.
C. All of the slope coefficients in the regression model are the same in each regime.
D. The variance of the error term is the same before and after a possible break-point.
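The mechanics behind the standard Chow test can be sketched as follows: fit the model on the pooled sample and on each sub-sample, then compare the restricted and unrestricted residual sums of squares. The data, break-point, and coefficient values below are purely illustrative.

```python
# Sketch of the standard Chow test with a known break-point.
# All data and parameter values here are simulated for illustration.
import numpy as np

def rss(y, X):
    """Residual sum of squares from an OLS fit of y on X."""
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    e = y - X @ beta
    return e @ e

rng = np.random.default_rng(42)
n, k, break_at = 80, 2, 40                 # k regressors, incl. the intercept
x = rng.normal(size=n)
X = np.column_stack([np.ones(n), x])
beta1 = np.array([1.0, 2.0])               # regime 1 coefficients
beta2 = np.array([1.0, 3.5])               # regime 2: the slope changes
y = np.where(np.arange(n) < break_at, X @ beta1, X @ beta2) + rng.normal(size=n)

rss_pooled = rss(y, X)                     # restricted: one beta for all t
rss_1 = rss(y[:break_at], X[:break_at])    # unrestricted, regime 1
rss_2 = rss(y[break_at:], X[break_at:])    # unrestricted, regime 2

# F = [(RSS_R - RSS_1 - RSS_2) / k] / [(RSS_1 + RSS_2) / (n - 2k)],
# F(k, n - 2k) under the null of no break.
F = ((rss_pooled - rss_1 - rss_2) / k) / ((rss_1 + rss_2) / (n - 2 * k))
print(round(F, 2))
```

Because the simulated slope genuinely changes at the break-point, the resulting F statistic comfortably exceeds the usual critical values.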
If we are applying the Chow test in the context of a regression model that has a lagged value of the dependent variable as a regressor, then:
A. The test statistic is F-distributed if the null hypothesis is true.
B. The test statistic will be asymptotically F-distributed if the null hypothesis is true.
C. The test statistic will be asymptotically Chi-Square distributed with (n-k) degrees of freedom if the null hypothesis is true.
D. k times the test statistic will be asymptotically Chi-Square distributed with k degrees of freedom if the null hypothesis is true.
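When a lagged dependent variable appears among the regressors, the exact F distribution no longer applies, and the test is conducted by treating k times the F statistic as asymptotically Chi-Square with k degrees of freedom. A minimal sketch of that conversion, using hypothetical values for the F statistic and k:

```python
# Asymptotic version of the Chow test: treat k*F as Chi-Square(k).
# The F value and k below are hypothetical, for illustration only.
from scipy import stats

F, k = 2.9, 3              # hypothetical Chow F statistic and no. of restrictions
W = k * F                  # Wald-type statistic, asymptotically Chi-Square(k)
p_value = stats.chi2.sf(W, df=k)
print(round(W, 2), round(p_value, 4))
```

With these illustrative numbers the p-value falls between 1% and 5%, so the null of parameter constancy would be rejected at the 5% level but not at the 1% level.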
The "forecast period" version of the Chow test is used when:
A. The location of the potential break-point is such that there are insufficient degrees of freedom to allow the model to be estimated separately over either of the sub-samples.
B. The location of the potential break-point is such that there are insufficient degrees of freedom to allow the model to be estimated separately over one of the sub-samples.
C. The location of the potential break-point is unknown.
D. The location of the potential break-point is close to one end of the sample.
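The "forecast period" (predictive) version of the test compares the full-sample RSS with the RSS from fitting the model on the first sub-sample only, which is exactly what is needed when the second sub-sample has fewer observations than regressors. A sketch under the null of no break, with simulated data:

```python
# Sketch of the "forecast period" Chow test, used when the second
# sub-sample is too short to estimate the model separately.
# All data here are simulated for illustration.
import numpy as np

def rss(y, X):
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    e = y - X @ beta
    return e @ e

rng = np.random.default_rng(0)
n1, n2, k = 60, 1, 2                  # n2 = 1 < k: regime 2 can't be fitted alone
n = n1 + n2
x = rng.normal(size=n)
X = np.column_stack([np.ones(n), x])
y = 1.0 + 2.0 * x + rng.normal(size=n)   # no break under the null

rss_full = rss(y, X)                  # restricted: whole sample
rss_1 = rss(y[:n1], X[:n1])           # estimated on the first sub-sample only

# F = [(RSS_R - RSS_1) / n2] / [RSS_1 / (n1 - k)], F(n2, n1 - k) under the null
F = ((rss_full - rss_1) / n2) / (rss_1 / (n1 - k))
print(round(F, 2))
```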
The correct application of the Chow test requires the following, with respect to the variance of the error term in the regression model:
A. The value of this variance must be known.
B. This variance is the same for both of the regimes.
C. This variance must be estimated consistently.
D. None of the above.
Check the EViews regression output located here. The following is true:
A. The null hypothesis for the Chow test is that the β vector is the same over the periods 1960 to 1967, and 1968 to 1983, and we would reject this hypothesis at the 5% significance level.
B. The null hypothesis for the Chow test is that the β vector is the same over the periods 1960 to 1968, and 1969 to 1983, and we would not reject this hypothesis at the 5% significance level.
C. The null hypothesis for the Chow test is that the β vector is the same over the periods 1960 to 1968, and 1969 to 1983, and we would treat (3*1.596454) as being asymptotically Chi-Square with 3 degrees of freedom in order to apply the test.
D. The null hypothesis for the Chow test is that the β vector is the same over the periods 1960 to 1967, and 1968 to 1983, and we would not reject this hypothesis at the 5% significance level.
When we wrongly include an extra regressor in a regression model that is otherwise properly specified:
A. The OLS estimator of β is unbiased and more efficient than if we had not included this extra variable.
B. The OLS estimator of β is biased and inefficient.
C. The OLS estimator of β is unbiased but inefficient.
D. The OLS estimator of β is biased but has lower mean squared error than if we had not included this extra variable.
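The effect of an irrelevant regressor can be checked with a short Monte Carlo sketch: the slope estimator stays unbiased in both specifications, but its sampling variance is inflated when the extra (irrelevant) variable is correlated with the included regressor. All data and coefficient values are simulated assumptions.

```python
# Monte Carlo sketch: an irrelevant regressor leaves OLS unbiased
# but inefficient. Simulated data throughout.
import numpy as np

rng = np.random.default_rng(1)
n, reps, beta1 = 50, 2000, 2.0
short, long_ = [], []
for _ in range(reps):
    x1 = rng.normal(size=n)
    x2 = 0.8 * x1 + rng.normal(size=n)   # irrelevant, but correlated with x1
    y = 1.0 + beta1 * x1 + rng.normal(size=n)   # x2 has a zero coefficient
    Xs = np.column_stack([np.ones(n), x1])       # correct specification
    Xl = np.column_stack([np.ones(n), x1, x2])   # extra regressor included
    short.append(np.linalg.lstsq(Xs, y, rcond=None)[0][1])
    long_.append(np.linalg.lstsq(Xl, y, rcond=None)[0][1])

print(round(np.mean(short), 2), round(np.mean(long_), 2))  # both near 2.0
print(np.var(short) < np.var(long_))   # variance is larger with the extra regressor
```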
Suppose that we have a standard regression model that satisfies all of the usual assumptions. However, we wrongly include an extra regressor, and at the same time we wrongly omit a relevant regressor. In this case:
A. The OLS estimator of β will be both biased and inefficient.
B. The OLS estimator of β will be both inefficient and inconsistent.
C. The OLS estimator of β will be both biased and inconsistent.
D. The OLS estimator of β will be biased but consistent.
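The key feature of omitted-variable bias is that it does not vanish as the sample grows: the probability limit of the slope estimator is β₁ plus β₂ times the auxiliary-regression coefficient of the omitted variable on the included one. A simulated sketch (all values here are illustrative assumptions):

```python
# Sketch: omitting a relevant regressor that is correlated with the
# included one biases OLS, and the bias persists as n grows (inconsistency).
import numpy as np

rng = np.random.default_rng(2)
beta1, beta2 = 2.0, 1.5
for n in (100, 10_000):
    x1 = rng.normal(size=n)
    x2 = 0.6 * x1 + rng.normal(size=n)     # relevant, correlated with x1
    y = 1.0 + beta1 * x1 + beta2 * x2 + rng.normal(size=n)
    X = np.column_stack([np.ones(n), x1])  # x2 wrongly omitted
    b1 = np.linalg.lstsq(X, y, rcond=None)[0][1]
    # plim(b1) = beta1 + beta2 * 0.6 = 2.9, not the true beta1 = 2.0
    print(n, round(b1, 2))
```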
Suppose that we have a standard regression model that satisfies all of the usual assumptions, but has a lagged value of the dependent variable as one of the regressors. If we wrongly omit a relevant regressor, then:
A. The OLS estimator for β will be more biased than if we had included this omitted variable.
B. The OLS estimator for β will be more biased than if we had included this omitted variable, and not included the lagged dependent variable in this model.
C. The OLS estimator for β will be less biased than if we had included this omitted variable.
D. The OLS estimator for β will be biased, but will still be consistent as long as the errors are serially independent.
If we wrongly omit a regressor from an otherwise well-specified regression model then:
A. The (usually unbiased) estimator for the variance of the error term will be biased, unless the omitted regressor is uncorrelated with the included regressors.
B. The (usually unbiased) estimator for the variance of the error term will be negatively biased, unless the omitted regressor is uncorrelated with the included regressors.
C. The (usually unbiased) estimator for the variance of the error term will be positively biased, unless the omitted regressor is uncorrelated with the included regressors.
D. The (usually unbiased) estimator for the variance of the error term will be positively biased.
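The upward bias in the usual error-variance estimator s² = RSS/(n-k) can be illustrated by simulation. Note that in this sketch the omitted regressor is independent of the included one, yet s² still over-estimates the true error variance, because the omitted variable's contribution ends up in the residuals either way. All data are simulated assumptions.

```python
# Monte Carlo sketch: with a relevant regressor omitted, s^2 = RSS/(n-k)
# over-estimates the true error variance (sigma^2 = 1 here), even though
# the omitted x2 is independent of the included x1.
import numpy as np

rng = np.random.default_rng(3)
n, reps = 40, 2000
s2 = []
for _ in range(reps):
    x1 = rng.normal(size=n)
    x2 = rng.normal(size=n)                 # omitted regressor, uncorrelated with x1
    y = 1.0 + 2.0 * x1 + 1.0 * x2 + rng.normal(size=n)
    X = np.column_stack([np.ones(n), x1])   # x2 wrongly omitted
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    e = y - X @ beta
    s2.append(e @ e / (n - 2))

print(round(np.mean(s2), 2))                # well above the true value of 1.0
```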
Check the EViews regression output located here. The following is true:
A. The two models are "nested" and we would prefer Equation 2 over Equation 1 on the basis of the Akaike and Schwarz criteria.
B. The two models are "non-nested" and we would prefer Equation 2 over Equation 1 on the basis of the Akaike and Schwarz criteria.
C. The two models are "non-nested" and we would prefer Equation 1 over Equation 2 on the basis of the Akaike and Schwarz criteria.
D. The two models are "non-nested", but we can't choose between them on the basis of the Akaike and Schwarz criteria because Equation 2 uses a different sample from Equation 1.
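Comparing two non-nested models with information criteria can be sketched as below, using a common textbook form of the Akaike and Schwarz criteria, AIC = ln(RSS/n) + 2k/n and SIC = ln(RSS/n) + k·ln(n)/n (smaller is better); software such as EViews uses a likelihood-based version that differs only by an additive constant, so the ranking is the same. Crucially, both models must be estimated over the same sample for the comparison to be valid. The data and variable names here are simulated assumptions.

```python
# Sketch: ranking two non-nested regression models by AIC and Schwarz (SIC).
# Simulated data; "model A" uses the regressor that actually generated y.
import numpy as np

def info_criteria(y, X):
    n, k = X.shape
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    e = y - X @ beta
    rss = e @ e
    aic = np.log(rss / n) + 2 * k / n
    sic = np.log(rss / n) + k * np.log(n) / n
    return aic, sic

rng = np.random.default_rng(4)
n = 100
x1, x2 = rng.normal(size=n), rng.normal(size=n)
y = 1.0 + 2.0 * x1 + rng.normal(size=n)

# Non-nested rivals: neither regressor set contains the other
X_a = np.column_stack([np.ones(n), x1])
X_b = np.column_stack([np.ones(n), x2])
aic_a, sic_a = info_criteria(y, X_a)
aic_b, sic_b = info_criteria(y, X_b)
print(aic_a < aic_b, sic_a < sic_b)   # model A preferred on both criteria
```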
If we wrongly omit one or more regressors from our regression model then:
A. The usual unbiased estimator of the variance of the error term will be inconsistent.
B. The usual unbiased estimator of the variance of the error term will be biased, but consistent.
C. The usual t-statistics will no longer be t-distributed under the null.
D. Both A and C are correct.
If the correct regression model is one that is a non-linear function of the parameters, but instead we fit a model that is linear in the parameters, then:
A. The OLS estimator for β will be biased and inconsistent.
B. The OLS estimator for β will be biased but consistent.
C. The OLS estimator for β will be unbiased but inconsistent.