We say that the linear regression model is "linear" because:
The dependent variable is a linear function of the unknown coefficients, although it may be a non-linear function of the regressors.
The dependent variable is a linear function of the unknown coefficients, and it is also a linear function of the regressors.
Although the dependent variable may be a non-linear function of the unknown coefficients, it is a linear function of the regressors.
The model is a linear function of the regressors and the error term.
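A minimal numpy sketch of the idea behind this question (illustrative, made-up data): the model below is non-linear in the regressor x, but linear in the coefficients, so OLS still applies directly.

```python
import numpy as np

# Illustrative data: y = b0 + b1*x + b2*x^2 + e is non-linear in the
# regressor x, but linear in the coefficients b0, b1, b2, so it is still
# a "linear" regression model and can be estimated by OLS.
rng = np.random.default_rng(0)
x = rng.uniform(0, 10, size=100)
y = 2.0 + 1.5 * x - 0.3 * x**2 + rng.normal(scale=0.5, size=100)

X = np.column_stack([np.ones_like(x), x, x**2])   # design matrix
beta_hat, *_ = np.linalg.lstsq(X, y, rcond=None)  # OLS estimates
print(beta_hat)                                   # close to (2.0, 1.5, -0.3)
```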
If the regressor matrix, X, in the linear regression model has less than full rank, then
One of the regressors must appear twice in the model
The OLS estimator for the full coefficient vector is not defined
There must be an exact (non-random) linear relationship between the dependent variable in the model and the regressors
This matrix is singular, and cannot be inverted
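An illustrative sketch (made-up data) of what less-than-full rank does: an exact linear dependence among the columns of X makes X'X singular, so the usual OLS formula cannot be applied.

```python
import numpy as np

# One regressor is an exact multiple of another, so X has less than full
# column rank; X'X is then singular and (X'X)^{-1} X'y is not defined.
x1 = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
x2 = 2.0 * x1                       # exact linear relationship among regressors
X = np.column_stack([np.ones(5), x1, x2])

print(np.linalg.matrix_rank(X))     # 2, not 3: less than full column rank
XtX = X.T @ X
print(np.linalg.det(XtX))           # ~0 (up to rounding): X'X is singular
# np.linalg.inv(XtX) would either raise LinAlgError or be numerically meaningless
```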
Under the usual set of assumptions associated with the linear regression model, the covariance matrix of the error vector is "scalar". This arises because
The errors are assumed to be pair-wise uncorrelated, and to come from a distribution with a constant variance
The regressor matrix, X, has full rank
Both A and B
The errors are homoskedastic and Normally distributed
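For reference, the "scalar" covariance structure being described is

$$\operatorname{Cov}(\varepsilon) \;=\; E[\varepsilon \varepsilon'] \;=\; \sigma^{2} I_n ,$$

i.e. a constant variance $\sigma^{2}$ down the diagonal (homoskedasticity) and zeros off the diagonal (pairwise uncorrelated errors).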
Suppose that we estimate an OLS regression model with Y as the dependent variable, and X as the only regressor. Suppose that the estimated coefficient of X is 10. Then we re-estimate the model with X as the dependent variable and Y as the regressor. The estimated coefficient of Y will be:
0.1
0.1, as long as there is no intercept in the model
0.1, as long as there is an intercept in the model
Less than or equal to 0.1
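A quick numerical check of the relationship at work here (made-up data): with an intercept in both regressions, the product of the two slopes equals r-squared, which is at most one.

```python
import numpy as np

# slope(Y on X) * slope(X on Y) = r^2 <= 1, so if the first slope is 10,
# the reverse slope is r^2 / 10, i.e. less than or equal to 0.1.
rng = np.random.default_rng(1)
x = rng.normal(size=200)
y = 10.0 * x + rng.normal(scale=3.0, size=200)

b_yx = np.polyfit(x, y, 1)[0]          # slope from regressing Y on X
b_xy = np.polyfit(y, x, 1)[0]          # slope from regressing X on Y
r = np.corrcoef(x, y)[0, 1]

print(b_yx, b_xy, b_yx * b_xy, r**2)   # product of slopes equals r^2
```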
Suppose we fit a linear regression model by OLS. The regressors all have units of $'s. Then we scale the regressors so that they are measured in $'000, leaving the dependent variable unchanged. When we re-estimate the model by OLS:
Each estimated slope coefficient, and the intercept coefficient, will be 1,000 times larger than before
Each estimated slope coefficient will be 1,000 times smaller than before, but the predicted values for the dependent variable will be unaltered
Each estimated slope coefficient will be 1,000 times larger than before, but the predicted values for the dependent variable will be unaltered
Each estimated slope coefficient will be 1,000 times larger than before, but the intercept coefficient and the predicted values for the dependent variable will be unaltered
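An illustrative sketch of the rescaling result (made-up data; converting dollars to $'000 divides the regressor by 1,000):

```python
import numpy as np

# Dividing a regressor by 1,000 multiplies its estimated slope by 1,000,
# while the intercept and the fitted values of the dependent variable are
# unchanged.
rng = np.random.default_rng(2)
x_dollars = rng.uniform(1_000, 50_000, size=100)
y = 5.0 + 0.002 * x_dollars + rng.normal(size=100)

X1 = np.column_stack([np.ones(100), x_dollars])
X2 = np.column_stack([np.ones(100), x_dollars / 1_000])   # rescaled regressor

b1, *_ = np.linalg.lstsq(X1, y, rcond=None)
b2, *_ = np.linalg.lstsq(X2, y, rcond=None)

print(b2[1] / b1[1])                  # ~1,000: slope scales up
print(np.allclose(b1[0], b2[0]))      # True: intercept unchanged
print(np.allclose(X1 @ b1, X2 @ b2))  # True: fitted values unchanged
```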
When we estimate a linear regression model by OLS, the fitted "line" (actually, hyperplane) has the property that:
It passes through the arithmetic mean of the data for all of the variables in the model
It passes through the arithmetic mean of the data for all of the regressors in the model
It passes through the arithmetic mean of the data for all of the variables, provided that an intercept is included in the model
It passes through the arithmetic mean of the data for all of the variables in the model, and the number of positive residuals equals the number of negative residuals
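A quick numerical check of the "passes through the means" property (made-up data, intercept included):

```python
import numpy as np

# With an intercept, the OLS fitted hyperplane passes through the point of
# sample means (x1_bar, x2_bar, y_bar).
rng = np.random.default_rng(3)
x1 = rng.normal(size=100)
x2 = rng.normal(size=100)
y = 1.0 + 2.0 * x1 - 1.0 * x2 + rng.normal(size=100)

X = np.column_stack([np.ones(100), x1, x2])
b, *_ = np.linalg.lstsq(X, y, rcond=None)

mean_point = np.array([1.0, x1.mean(), x2.mean()])
print(np.isclose(mean_point @ b, y.mean()))   # True: plane passes through the means
```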
Suppose we fit a simple OLS regression model (with an intercept), explaining Y in terms of X. Then we fit a linear model, using the same data, but the estimator that we use minimizes the sum of the squared "residuals" in the HORIZONTAL direction, rather than in the usual VERTICAL direction. The ratio of the first slope coefficient estimator to the second slope coefficient estimator will be:
The ratio of the sum of the squared deviations (about the mean) for the Y data, to the sum of the squared deviations (about the mean) of the X data
Unity
Positive in value
Possibly positive, but it could also be negative, depending on the data values
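An illustrative sketch (made-up data): the vertical slope is Sxy/Sxx, while minimizing squared horizontal residuals amounts to regressing X on Y and inverting that slope, giving Syy/Sxy; the ratio of the two slopes is therefore r-squared.

```python
import numpy as np

# The ratio of the "vertical" OLS slope to the "horizontal" least-squares
# slope equals r^2, which is positive.
rng = np.random.default_rng(4)
x = rng.normal(size=200)
y = 3.0 - 2.0 * x + rng.normal(size=200)

b_vertical = np.polyfit(x, y, 1)[0]          # slope of Y on X
b_horizontal = 1.0 / np.polyfit(y, x, 1)[0]  # invert the slope of X on Y
r2 = np.corrcoef(x, y)[0, 1] ** 2

print(b_vertical / b_horizontal, r2)         # the two numbers agree
```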
If we estimate a linear multiple regression model by OLS then:
The model's error vector will be orthogonal to the regressor matrix.
The OLS residuals will sum to zero.
The residual vector will be orthogonal to the regressor matrix if the model includes an intercept.
The residual vector will always be orthogonal to the regressor matrix.
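A quick numerical check of the orthogonality property (made-up data):

```python
import numpy as np

# The OLS residual vector is orthogonal to every column of the regressor
# matrix (X'e = 0); when one column is an intercept, this also implies the
# residuals sum to zero.
rng = np.random.default_rng(5)
X = np.column_stack([np.ones(50), rng.normal(size=(50, 2))])
y = X @ np.array([1.0, 0.5, -0.5]) + rng.normal(size=50)

b, *_ = np.linalg.lstsq(X, y, rcond=None)
e = y - X @ b

print(np.allclose(X.T @ e, 0))   # True: residuals orthogonal to the regressors
print(np.isclose(e.sum(), 0))    # True: residuals sum to zero (intercept present)
```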
Suppose we estimate a linear regression model with regressors X1 & X2, by OLS. Then we re-estimate the model with a third regressor, X3, added into the original model. The estimated coefficients of X1 and X2 will be unaltered if:
The sample correlation between X1 and X2 is zero
The sample correlations between X3 and X1, and between X3 and X2 are both zero
The sample correlation between X3 and X1 is zero
X1 is an intercept variable - i.e., a vector of "ones"
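An illustrative sketch (made-up data) in which X3 is constructed to be exactly uncorrelated with X1 and X2 in the sample, so that adding it leaves their estimated coefficients unchanged:

```python
import numpy as np

# Build X3 as the residuals from regressing a random variable on [1, X1, X2],
# so its sample correlations with X1 and X2 are exactly zero.
rng = np.random.default_rng(6)
n = 100
x1, x2, z = rng.normal(size=(3, n))
W = np.column_stack([np.ones(n), x1, x2])
x3 = z - W @ np.linalg.lstsq(W, z, rcond=None)[0]   # uncorrelated with x1, x2

y = 1.0 + 0.8 * x1 - 0.4 * x2 + rng.normal(size=n)

b_short = np.linalg.lstsq(W, y, rcond=None)[0]
b_long = np.linalg.lstsq(np.column_stack([W, x3]), y, rcond=None)[0]

print(np.allclose(b_short[1:3], b_long[1:3]))   # True: X1, X2 coefficients unchanged
```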
Suppose we regress X1 on X3 using OLS, and obtain the residuals, E1. Then we regress X2 on X3 using OLS, and get the residuals E2. Finally, we regress Y on X3 using OLS, and get the residuals EY. Then, the following statement is correct (where all of the regressions in question use OLS):
If we regress EY on E1 and E2, we get the same coefficient estimates as if we had regressed Y on X1 and X2
If we regress EY on E1 and E2, we get the same coefficient estimates as if we had regressed Y on X1, X2 and X3
If we regress EY on E1, E2 and X3, we get the same coefficient estimates as if we had regressed Y on X1, X2 and X3
If we regress Y on E1 and E2 we get the same coefficient estimates as if we had regressed Y on X1, X2 and X3
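An illustrative numpy sketch of this Frisch-Waugh result (made-up data):

```python
import numpy as np

# Partial X3 out of Y, X1 and X2, then regress the Y-residuals on the X1-
# and X2-residuals; the coefficients match those on X1 and X2 from the full
# regression of Y on X1, X2 and X3.
rng = np.random.default_rng(7)
n = 100
x1, x2, x3 = rng.normal(size=(3, n))
y = 2.0 * x1 - 1.0 * x2 + 0.5 * x3 + rng.normal(size=n)

X3 = x3.reshape(-1, 1)
def resid(v):                        # residuals from an OLS regression on X3
    return v - X3 @ np.linalg.lstsq(X3, v, rcond=None)[0]

e1, e2, ey = resid(x1), resid(x2), resid(y)

b_fwl = np.linalg.lstsq(np.column_stack([e1, e2]), ey, rcond=None)[0]
b_full = np.linalg.lstsq(np.column_stack([x1, x2, x3]), y, rcond=None)[0]

print(np.allclose(b_fwl, b_full[:2]))   # True: same coefficients on X1 and X2
```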
Which of the following assumptions about the standard regression model are needed for the Frisch-Waugh Theorem to be satisfied?
The errors are uncorrelated and homoskedastic
The model is linear in the parameters
The model is linear in the parameters and the regressor matrix has full column rank
The errors are normally distributed
For the relationship, Total Sum of Squares = (Explained Sum of Squares + Residual Sum of Squares) to hold, we require that:
The regression model is linear and is estimated by OLS
The regression model is estimated by OLS and an intercept is included in the model
The regression model includes an intercept and the errors are normally distributed
The regression model is linear, it includes an intercept, and it is estimated by OLS
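An illustrative check (made-up data): the decomposition holds under OLS with an intercept, and typically fails once the intercept is dropped.

```python
import numpy as np

rng = np.random.default_rng(8)
n = 100
x = rng.normal(size=n)
y = 3.0 + 2.0 * x + rng.normal(size=n)

for X in (np.column_stack([np.ones(n), x]),   # OLS with an intercept
          x.reshape(-1, 1)):                  # OLS without an intercept
    b = np.linalg.lstsq(X, y, rcond=None)[0]
    yhat = X @ b
    tss = np.sum((y - y.mean())**2)
    ess = np.sum((yhat - y.mean())**2)
    rss = np.sum((y - yhat)**2)
    print(np.isclose(tss, ess + rss))   # True with the intercept; False here without it
```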
If we add one or more regressors to a linear regression model that contains an intercept, then:
The coefficient of determination MUST increase, but the "adjusted" coefficient of determination will not necessarily increase.
The coefficient of determination cannot decline in value, and the "adjusted" coefficient of determination will increase if the extra regressors are correlated with the original regressors.
The coefficient of determination cannot decline in value, and the "adjusted" coefficient of determination cannot increase in value.
None of the above
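An illustrative sketch (made-up data): adding an irrelevant regressor cannot lower R-squared, but the adjusted R-squared can fall.

```python
import numpy as np

rng = np.random.default_rng(9)
n = 50
x = rng.normal(size=n)
junk = rng.normal(size=n)            # regressor unrelated to y
y = 1.0 + 2.0 * x + rng.normal(size=n)

def r2_stats(X, y):
    b = np.linalg.lstsq(X, y, rcond=None)[0]
    e = y - X @ b
    r2 = 1.0 - e @ e / np.sum((y - y.mean())**2)
    k = X.shape[1]                   # number of coefficients, incl. intercept
    adj = 1.0 - (1.0 - r2) * (n - 1) / (n - k)
    return r2, adj

X_small = np.column_stack([np.ones(n), x])
X_big = np.column_stack([np.ones(n), x, junk])
print(r2_stats(X_small, y))
print(r2_stats(X_big, y))   # R^2 is no smaller; adjusted R^2 is usually smaller
```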
Check the EViews regression output located here. The value of the R-squared is missing. What is the value of this statistic?
There is no way of telling from the information supplied
At least as great as 0.171
0.176006
0.174621
Check the EViews regression output located here. If the regressor, W, is removed from the model, the values of the R-Squared and "adjusted" R-Squared statistics will change in the following way:
It's impossible to tell what will happen to the "adjusted" R-Squared value, but the R-Squared value will definitely increase
The R-Squared value will decrease, and the "adjusted" R-Squared value will increase
Anything could happen to both values - we can't tell, because we haven't been told what the variables are
The R-Squared value cannot increase, and the "adjusted" R-Squared value will definitely decrease