If we use Non-linear Least Squares to estimate a model that is a non-linear function of the parameters, then:
A. The estimator will generally be consistent and asymptotically unbiased, but typically it will be biased in finite samples.
B. The estimator will generally be consistent and Best Linear Unbiased.
C. The estimator will generally be consistent and unbiased, but it is likely to be inefficient in finite samples.
D. The estimator will generally be consistent and have an asymptotic sampling distribution that is normal, and it will be unbiased if the errors are normally distributed.
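A small Monte Carlo sketch can make the finite-sample behaviour concrete. It assumes a hypothetical model y = exp(beta * x) + e with true beta = 0.5, and uses scipy's curve_fit as a stand-in for a generic NLLS routine; the average bias is visible in small samples and shrinks as n grows:

```python
# Minimal sketch: finite-sample bias of NLLS, assuming the hypothetical
# model y = exp(beta * x) + e with true beta = 0.5; scipy's curve_fit
# stands in for a generic NLLS routine.
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(42)
beta_true = 0.5

def f(x, beta):
    return np.exp(beta * x)

for n in (15, 50, 500):                       # growing sample sizes
    estimates = []
    for _ in range(1000):                     # Monte Carlo replications
        x = rng.uniform(0.0, 2.0, size=n)
        y = f(x, beta_true) + rng.normal(0.0, 0.5, size=n)
        beta_hat, _ = curve_fit(f, x, y, p0=[0.1])
        estimates.append(beta_hat[0])
    print(f"n = {n:3d}: average bias = {np.mean(estimates) - beta_true:+.4f}")
```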
When applying the Non-Linear Least Squares estimator, we need to find the "global" minimum of the objective function, rather than a "local" minimum, because:
A. Otherwise the estimator will not usually be Best Linear Unbiased.
B. Otherwise the estimator will not usually be consistent.
C. Otherwise the estimator will not usually have a normal sampling distribution.
D. Otherwise the estimator will not usually be efficient in finite samples.
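The consistency theory for NLLS concerns the global minimizer of the sum-of-squares function, and a common practical safeguard against settling at a local minimum is a multi-start search. The sketch below assumes a hypothetical model y = sin(beta * x) + e, whose objective has many local minima in beta, and uses scipy.optimize.minimize as the optimizer:

```python
# Minimal sketch: multi-start search for the global minimum of an NLLS
# objective, assuming the hypothetical model y = sin(beta * x) + e,
# whose sum-of-squares surface has many local minima in beta.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
beta_true = 2.0
x = rng.uniform(0.0, 5.0, size=200)
y = np.sin(beta_true * x) + rng.normal(0.0, 0.3, size=200)

def ssr(beta):                                # sum of squared residuals
    return np.sum((y - np.sin(beta[0] * x)) ** 2)

# Restart the optimizer from several values; keep the lowest SSR found.
fits = [minimize(ssr, x0=[b0]) for b0 in np.linspace(0.5, 6.0, 6)]
for r in fits:
    print(f"local solution: beta = {r.x[0]:6.3f}, SSR = {r.fun:7.2f}")
best = min(fits, key=lambda r: r.fun)
print(f"best candidate for the global minimum: beta = {best.x[0]:.3f}")
```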
The Newton-Raphson algorithm for obtaining the Non-Linear Least Squares estimator has the following property:
A. It works effectively if there are just a few parameters in the model, but not if there are many parameters.
B. It may give different results if different starting values are chosen for the algorithm.
C. It works extremely effectively if the objective function is close to being a quadratic function of the parameters.
D. Both B and C.
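For concreteness, here is a minimal sketch of the Newton-Raphson update b_{t+1} = b_t - S'(b_t)/S''(b_t) for a one-parameter sum-of-squares function S(b), using the hypothetical model y = exp(beta * x) + e with analytic derivatives. On a multi-modal objective, different starting values could leave the iterations at different stationary points:

```python
# Minimal sketch of Newton-Raphson for one-parameter NLLS, assuming the
# hypothetical model y = exp(beta * x) + e, with analytic derivatives of
# the sum-of-squares function S(b) = sum_i (y_i - exp(b * x_i))^2.
import numpy as np

rng = np.random.default_rng(1)
beta_true = 0.5
x = rng.uniform(0.0, 2.0, size=100)
y = np.exp(beta_true * x) + rng.normal(0.0, 0.5, size=100)

def grad_hess(b):
    """First and second derivatives of S(b)."""
    f = np.exp(b * x)
    r = y - f                                 # residuals
    grad = -2.0 * np.sum(r * x * f)
    hess = 2.0 * np.sum((x * f) ** 2 - r * x ** 2 * f)
    return grad, hess

for b0 in (0.2, 1.5):                         # two different starting values
    b = b0
    for _ in range(50):                       # Newton-Raphson updates
        g, h = grad_hess(b)
        b -= g / h                            # b_{t+1} = b_t - S'(b_t)/S''(b_t)
    print(f"start at {b0:.1f}: converged to beta_hat = {b:.4f}")
```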
When we are using Non-Linear Least Squares estimation, the usual "t-statistics" will be:
A. Chi-square distributed in large samples, but not necessarily t-distributed in finite samples.
B. Standard normally distributed in large samples.
C. Standard normally distributed in large samples, but not necessarily t-distributed in finite samples.
D. Student-t distributed in finite samples, and standard normally distributed if the sample size is very large.
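For reference, the large-sample argument behind this question can be sketched as follows (a standard asymptotic result, stated under the usual regularity conditions; the shorthand Z for the matrix of first derivatives of the regression function is our own notation):

```latex
% Asymptotics for NLLS, with Z(\theta) = \partial f(X,\theta)/\partial\theta':
\sqrt{n}\,\bigl(\hat{\theta} - \theta_0\bigr)
  \;\xrightarrow{d}\;
  N\!\Bigl(0,\; \sigma^{2}\bigl[\operatorname{plim}\tfrac{1}{n}Z'Z\bigr]^{-1}\Bigr),
\qquad
t_j \;=\; \frac{\hat{\theta}_j - \theta_{j}^{0}}{\widehat{\mathrm{s.e.}}(\hat{\theta}_j)}
  \;\xrightarrow{d}\; N(0,1).
```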
Check the EViews regression output located here. The following is true:
A. Non-Linear Least Squares estimation has been used, and a global minimum of the objective function appears to have been found.
B. Non-Linear Least Squares estimation has been used, and a local minimum of the objective function appears to have been found.
C. Non-Linear Least Squares estimation has been used, but the algorithm has not really converged to a true minimum of the objective function.
D. None of the above.
If we estimate a non-linear regression model using the Non-Linear Least Squares (NLLS) estimator, and we wrongly omit one or more variables from the model, then:
A. The NLLS estimator will be biased, but consistent.
B. The NLLS estimator will be both biased and inconsistent.
C. The NLLS estimator will be unbiased, but inconsistent.
D. The NLLS estimator will be unbiased and inconsistent, as long as the error-term has a zero mean.
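A quick simulation suggests why omitting a relevant variable is more than a finite-sample problem: the distortion does not fade as n grows. The sketch assumes a hypothetical true model y = exp(b1 * x1) + b2 * x2 + e and fits the misspecified model that drops x2 (which is correlated with x1), again using curve_fit as a stand-in NLLS routine:

```python
# Minimal sketch: omitted-variable bias in NLLS does not vanish with n,
# assuming the hypothetical true model y = exp(b1*x1) + b2*x2 + e while
# the fitted model wrongly omits x2 (correlated with x1).
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(7)
b1, b2 = 0.5, 1.0

def fitted(x1, beta):                         # misspecified model
    return np.exp(beta * x1)

for n in (100, 10_000):
    x1 = rng.uniform(0.0, 2.0, size=n)
    x2 = x1 + rng.normal(0.0, 0.5, size=n)    # omitted, correlated with x1
    y = np.exp(b1 * x1) + b2 * x2 + rng.normal(0.0, 0.5, size=n)
    beta_hat, _ = curve_fit(fitted, x1, y, p0=[0.5])
    print(f"n = {n:6d}: beta_hat = {beta_hat[0]:.3f} (true b1 = {b1})")
```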
If we have a non-linear regression model with additive and normally distributed errors, then:
A. The NLLS estimator of the coefficient vector will be asymptotically normally distributed.
B. The usual t-statistics will be asymptotically normally distributed.
C. All of A, B and D are correct.
D. The NLLS estimator of the coefficient vector is the same as the Maximum Likelihood estimator for this vector.
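The equivalence between NLLS and Maximum Likelihood under additive normal errors follows from one line of algebra on the normal log-likelihood (a standard derivation, sketched here with f(x_i, theta) denoting the regression function):

```latex
% With additive errors \varepsilon_i \sim N(0,\sigma^2) in
% y_i = f(x_i,\theta) + \varepsilon_i, the log-likelihood is
\ell(\theta,\sigma^{2})
  = -\frac{n}{2}\ln\bigl(2\pi\sigma^{2}\bigr)
    - \frac{1}{2\sigma^{2}}\sum_{i=1}^{n}\bigl[y_i - f(x_i,\theta)\bigr]^{2},
% so, for any fixed \sigma^2, maximizing \ell over \theta is exactly
% minimizing \sum_i [y_i - f(x_i,\theta)]^2: the MLE and NLLS
% estimators of \theta coincide.
```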
Check the EViews regression output located here. The following is true:
A. The usual F-statistic does not appear in the output because there is no intercept in the model.
B. The usual F-statistic does not appear in the output because this statistic is for testing the hypothesis that there is no linear relationship between the dependent variable and the (non-constant) regressors, and here the relationship is non-linear.
C. The reported coefficient of determination (R-squared) could not fall if we added another regressor to the model.