The relationship between the Generalized Least Squares (GLS) estimator & the Weighted Least Squares (WLS) estimator is that:
A. They are the same.
B. They are the same if the errors have a known and observable form of heteroskedasticity.
C. They are the same if the errors follow an AR(1) process.
D. They are the same as long as the GLS estimator is "feasible".
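A minimal sketch of the GLS/WLS relationship in this question, using Python's statsmodels on simulated data (all variable names and numbers below are illustrative assumptions, not part of the quiz): when the error covariance matrix is diagonal with a known form, GLS with that diagonal Σ and WLS with the inverse variances as weights give identical estimates.

```python
import numpy as np
import statsmodels.api as sm

# Simulated design: y = 1 + 2x + e, with Var(e_i) = x_i^2 (known form).
rng = np.random.default_rng(42)
n = 200
x = rng.uniform(1, 5, n)
X = sm.add_constant(x)
y = 1.0 + 2.0 * x + rng.normal(0, x)     # heteroskedastic errors, sd = x

# WLS with weights 1/x^2 (the inverse error variances) ...
wls = sm.WLS(y, X, weights=1.0 / x**2).fit()

# ... is numerically identical to GLS with diagonal Sigma = diag(x^2).
gls = sm.GLS(y, X, sigma=x**2).fit()

print(np.allclose(wls.params, gls.params))   # True: WLS is GLS with diagonal Sigma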
The GLS estimator is BLU (by the Gauss-Markov Theorem) if:
A. The errors are normally distributed.
B. The covariance matrix of the errors is non-singular.
C. The covariance matrix of the errors is known and observable.
D. The regressors are non-random and the errors have a zero mean and a covariance matrix that is known and observable.
Consider the linear regression model, y = Xβ + ε, where E(ε) = 0 and E(εε') = Σ, and X is non-random.
A. The OLS estimator of β is biased and inefficient, unless Σ is a "scalar" matrix.
B. The GLS estimator of β is biased, but efficient.
C. The OLS estimator of β is unbiased, but inefficient, unless Σ is a "scalar" matrix.
D. The "feasible" GLS estimator of β is unbiased and consistent.
White's estimator of the covariance matrix of the OLS estimator of β in the linear regression model is used:
A. To ensure that the standard errors of the estimated coefficients are consistent, even if the error term is heteroskedastic of some unknown form.
B. To ensure that the estimated coefficients are consistent, even if the error term is heteroskedastic of some unknown form.
C. To ensure that the standard errors of the estimated coefficients are consistent, even if the error term is autocorrelated of some unknown form.
D. To ensure that the standard errors of the estimated coefficients are unbiased, even if the error term is heteroskedastic of some unknown form.
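A sketch of White's heteroskedasticity-consistent covariance estimator via statsmodels' cov_type option (HC0 is White's original form); the data-generating process below is an assumption for illustration only.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
x = rng.uniform(1, 4, 200)
X = sm.add_constant(x)
y = 1.0 + 2.0 * x + rng.normal(0, x)        # heteroskedastic errors

naive = sm.OLS(y, X).fit()                   # conventional OLS standard errors
robust = sm.OLS(y, X).fit(cov_type='HC0')    # White (HC0) standard errors

print(naive.bse)    # inconsistent under heteroskedasticity
print(robust.bse)   # consistent, whatever the (unknown) form of heteroskedasticity
```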
The difference between the GLS estimator and the "feasible" GLS estimator is:
A. The GLS estimator uses the known non-scalar covariance matrix of the errors, while the "feasible" GLS estimator uses an unbiased estimator of this covariance matrix because it is unobservable.
B. The GLS estimator uses the known non-scalar covariance matrix of the errors, while the "feasible" GLS estimator uses a consistent estimator of this covariance matrix because it is unobservable.
C. None - if the error term has a scalar covariance matrix.
D. Both B & C.
White's test for homoskedasticity in the linear regression model is:
A. An asymptotically valid test in which the null hypothesis is that the errors follow some arbitrary form of heteroskedasticity, and the alternative hypothesis is that the errors are homoskedastic.
B. An asymptotically valid test in which the null hypothesis is that the errors are homoskedastic, and the alternative hypothesis is any arbitrary form of heteroskedasticity.
C. The UMP test against any arbitrary form of heteroskedasticity.
D. Valid in finite samples, but more powerful if the sample size is very large.
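A sketch of White's test in statsmodels, on simulated data (an assumed design, for illustration): the null is homoskedasticity, the auxiliary regression uses levels, squares, and cross-products of the regressors, and the resulting LM statistic is asymptotically chi-squared.

```python
import numpy as np
import statsmodels.api as sm
from statsmodels.stats.diagnostic import het_white

rng = np.random.default_rng(3)
x = rng.uniform(1, 4, 200)
X = sm.add_constant(x)
y = 1.0 + 2.0 * x + rng.normal(0, x)         # heteroskedastic errors

res = sm.OLS(y, X).fit()
lm_stat, lm_pvalue, f_stat, f_pvalue = het_white(res.resid, X)
print(lm_stat, lm_pvalue)    # small p-value: reject homoskedasticity
```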
The Goldfeld-Quandt test is used:
A. To test if the variance of the regression model's error term is constant over two sub-samples, when it is known that the coefficient vector is constant.
B. To test if the coefficient vector of the regression model is constant over two sub-samples, when it is known that the model's error variance is constant.
C. To test if the variance of the regression model's error term is constant over two sub-samples.
D. To test if the variance of the regression model's error term is homoskedastic.
The Goldfeld-Quandt test for homoskedasticity is:
A. An asymptotically valid F-test if the model satisfies ALL of the usual assumptions, including normally-distributed errors.
B. An exact F-test if the model satisfies ALL of the usual assumptions, including normally-distributed errors.
C. An exact F-test if the model satisfies all of the usual assumptions, except for normally-distributed errors.
D. Usually applied as a two-sided test, as we don't know if σ₁² > σ₂² or if σ₁² < σ₂².
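A sketch of the Goldfeld-Quandt test in statsmodels, on simulated data ordered by the suspect variable (the design is an illustrative assumption); under normal errors the statistic has an exact F distribution.

```python
import numpy as np
import statsmodels.api as sm
from statsmodels.stats.diagnostic import het_goldfeldquandt

# The sample is split in two and the ratio of sub-sample residual variances
# is compared with an F distribution.
rng = np.random.default_rng(4)
x = np.sort(rng.uniform(1, 4, 200))          # order by the suspect variable
X = sm.add_constant(x)
y = 1.0 + 2.0 * x + rng.normal(0, x)         # variance increases with x

f_stat, p_value, ordering = het_goldfeldquandt(y, X, alternative='increasing')
print(f_stat, p_value)
```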
The Breusch-Pagan test is used:
To test the null of homoskedastic errors against the alternative of heteroskedastic errors, when the heteroskedasticity is of a specified form, and the errors are normally distributed.
To test the null of homoskedastic errors against the alternative of heteroskedastic errors, when the heteroskedasticity is of an arbitrary form, and the sample is relatively large.
To test the null of homoskedastic errors against the alternative of heteroskedastic errors, when the heteroskedasticity is of a specified form, and the sample is relatively large.
To test the null of homoskedastic errors against the alternative of heteroskedastic errors, when the heteroskedasticity is of a specified form, and the regressors are non-random.
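A sketch of the Breusch-Pagan LM test in statsmodels, on simulated data (an assumed design): the variables supplied to the test are the specified drivers of the error variance under the alternative, and the LM statistic is asymptotically chi-squared.

```python
import numpy as np
import statsmodels.api as sm
from statsmodels.stats.diagnostic import het_breuschpagan

rng = np.random.default_rng(5)
x = rng.uniform(1, 4, 200)
X = sm.add_constant(x)
y = 1.0 + 2.0 * x + rng.normal(0, x)

res = sm.OLS(y, X).fit()
lm_stat, lm_pvalue, f_stat, f_pvalue = het_breuschpagan(res.resid, X)
print(lm_stat, lm_pvalue)    # small p-value: reject homoskedasticity
```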
The connection between the Breusch-Pagan test for homoskedasticity and Harvey's test for homoskedasticity is:
A. They are both examples of Wald tests.
B. Harvey's test is just a special case of the Breusch-Pagan test, with the error variance being a particular function of some variables, under the alternative hypothesis.
C. The Breusch-Pagan test is just a special case of Harvey's test, with the error variance being an exponential function of some variables, under the alternative hypothesis.
D. They are both Lagrange Multiplier (LM) tests.
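A hand-rolled sketch of Harvey's test (statsmodels has no canned version, so this n·R² formulation is an assumption of one common textbook variant): under the alternative, Var(εᵢ) = exp(zᵢ'α), so the auxiliary regression explains log(eᵢ²).

```python
import numpy as np
import statsmodels.api as sm
from scipy import stats

rng = np.random.default_rng(6)
x = rng.uniform(1, 4, 200)
X = sm.add_constant(x)
y = 1.0 + 2.0 * x + rng.normal(0, np.exp(0.5 * x))   # multiplicative heteroskedasticity

e = sm.OLS(y, X).fit().resid
aux = sm.OLS(np.log(e**2), sm.add_constant(x)).fit()
lm = len(y) * aux.rsquared                  # LM = n * R^2 of the auxiliary regression
print(lm, stats.chi2.sf(lm, df=1))          # compare with chi-squared(1)
```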
If we have a linear regression model that is "standard", except that the errors follow the process εₜ = ρεₜ₋₁ + uₜ, then:
A. The OLS estimator of β will be unbiased and consistent, but inefficient.
B. The OLS estimator of β will be biased and inefficient, but consistent.
C. The OLS estimator of β will be inefficient but unbiased and consistent, as long as the regressors do not include any lagged values of y.
D. The OLS estimator of β will be inefficient, biased, and inconsistent, if the regressors include any lagged values of y.
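An illustrative simulation of the claims in options C and D (all numbers are assumptions): with AR(1) errors, OLS is consistent when the regressors are exogenous, but becomes inconsistent once a lagged y appears on the right-hand side, because yₜ₋₁ is correlated with εₜ.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(7)
T, rho = 5000, 0.8
u = rng.normal(size=T)
e = np.zeros(T)
for t in range(1, T):
    e[t] = rho * e[t - 1] + u[t]             # AR(1) errors

x = rng.normal(size=T)
y = 1.0 + 2.0 * x + e                         # exogenous regressor: OLS consistent
print(sm.OLS(y, sm.add_constant(x)).fit().params)

y2 = np.zeros(T)
for t in range(1, T):
    y2[t] = 1.0 + 0.5 * y2[t - 1] + e[t]      # lagged y as a regressor
Z = sm.add_constant(y2[:-1])
print(sm.OLS(y2[1:], Z).fit().params)         # slope estimate biased away from 0.5
```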
The LM test for serial independence of the regression model's error term is:
A. An asymptotically valid test.
B. Appropriate even if the model includes lagged values of y as regressors.
C. Appropriate even if the errors are non-normal.
D. All of A, B, & C.
The Cochrane-Orcutt estimator for the coefficients of a regression model with AR(1) errors is:
A. Just a convenient way of implementing the Maximum Likelihood Estimator (MLE) of β.
B. An approximation to the MLE of β - the approximation arises because a term is actually omitted from the full likelihood function.
C. Slightly different from the MLE in finite samples, but asymptotically equivalent to the MLE.
D. Used to obtain unbiased estimates of these coefficients.
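A sketch of an iterated Cochrane-Orcutt-style fit via statsmodels' GLSAR, on simulated data (an assumed design). GLSAR alternates between estimating β and the AR coefficient ρ, and its quasi-differencing drops the first observation, which relates to the omitted-term issue in option B.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(9)
T = 300
u = rng.normal(size=T)
e = np.zeros(T)
for t in range(1, T):
    e[t] = 0.7 * e[t - 1] + u[t]             # AR(1) errors, rho = 0.7
x = rng.normal(size=T)
y = 1.0 + 2.0 * x + e

model = sm.GLSAR(y, sm.add_constant(x), rho=1)   # AR(1) error structure
res = model.iterative_fit(maxiter=50)            # iterate between beta and rho
print(res.params, model.rho)                     # beta estimates and estimated rho
```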
Check the EViews regression output located here. The following is true:
A. We reject the hypothesis that the errors for the model estimated on page 1 are serially independent, against the alternative hypothesis that they follow a first-order moving average process, at least at the 5% significance level, & this has been dealt with adequately on page 2 by re-estimating the model allowing for errors that follow this process.
B. The residuals for the model estimated on page 1 exhibit first-order autocorrelation, at least at the 5% significance level, & this has been dealt with adequately on page 2 by re-estimating the model allowing for errors that follow a first-order autoregressive process.
C. The residuals for the model estimated on page 1 exhibit first-order autocorrelation, at least at the 5% significance level, & this has been dealt with adequately on page 2 by re-estimating the model allowing for errors that follow a first-order moving average process.
D. Answer C would be correct if we had a larger sample size, but we really can't conclude much at all in the case of only 15 observations.
Check the EViews regression output located here. The following is true:
A. The residuals for the model estimated on page 1 exhibit fourth-order autocorrelation, at least at the 5% significance level, & this is still a problem on page 2 after re-estimating the model allowing for errors that follow a (restricted) ARMA(4,4) process.
B. The residuals for the model estimated on page 1 exhibit fourth-order autocorrelation, at least at the 5% significance level, but this problem has been resolved on page 2 by re-estimating the model allowing for errors that follow a (restricted) ARMA(4,4) process.
C. The residuals for the model estimated on page 1 exhibit fourth-order autocorrelation, at least at the 5% significance level, & this is still a problem on page 2 after re-estimating the model allowing for errors that follow a fourth-order autoregressive process.