Consider the linear regression model [katex] y_i = \beta_0 + \beta_1 x_i + \epsilon_i, \quad i = 1,2,\dots, 6 [/katex] where [katex] \beta_0 [/katex] and [katex] \beta_1 [/katex] are unknown parameters and the [katex] \epsilon_i [/katex]'s are independent and identically distributed random variables having the N(0,1) distribution. The data on [katex] (x_i, y_i) [/katex] are given in the following table:
| [katex] x_i [/katex] | 1.0 | 2.0 | 2.5 | 3.0 | 3.5 | 4.5 |
| --- | --- | --- | --- | --- | --- | --- |
| [katex] y_i [/katex] | 2.0 | 3.0 | 3.5 | 4.2 | 5.0 | 5.4 |
If [katex] \hat{\beta_0} [/katex] and [katex] \hat{\beta_1} [/katex] are the least squares estimates of [katex] \beta_0 [/katex] and [katex] \beta_1 [/katex], respectively, based on the above data, then [katex] \hat{\beta_0} + \hat{\beta_1} [/katex] equals ______________________ (round off to 2 decimal places).
The following can be calculated from the table:
[katex] n = 6, \quad \sum x_i = 16.5, \quad \sum y_i = 23.1, \quad \sum x_i^2 = 52.75, \quad \sum y_i^2 = 97.05, \quad \sum x_i y_i = 71.15 [/katex]

Using the sample variance and covariance (with divisor [katex] n-1 [/katex]):

[katex] \operatorname{var}(x) = 1.475, \quad \operatorname{var}(y) = 1.623, \quad \operatorname{cov}(x,y) = 1.525 [/katex]
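These summary statistics can be checked with a short script (a minimal sketch in plain Python; the variable names are illustrative, and the n−1 divisor is assumed for the sample variance and covariance):

```python
# Summary statistics for the regression data, using the n-1 divisor
# for the sample variance and covariance.
x = [1.0, 2.0, 2.5, 3.0, 3.5, 4.5]
y = [2.0, 3.0, 3.5, 4.2, 5.0, 5.4]
n = len(x)

sum_x, sum_y = sum(x), sum(y)
sum_x2 = sum(v * v for v in x)
sum_y2 = sum(v * v for v in y)
sum_xy = sum(a * b for a, b in zip(x, y))

# Sample variance and covariance via the computational formulas.
var_x = (sum_x2 - sum_x**2 / n) / (n - 1)
var_y = (sum_y2 - sum_y**2 / n) / (n - 1)
cov_xy = (sum_xy - sum_x * sum_y / n) / (n - 1)

print(sum_x, sum_y, sum_x2, sum_y2, sum_xy)   # 16.5 23.1 52.75 97.05 71.15
print(round(var_x, 3), round(var_y, 3), round(cov_xy, 3))  # 1.475 1.623 1.525
```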
Noting that the regression model may be rewritten as

[katex] (y_i - \bar{y}) = \beta_1 (x_i - \bar{x}) + (\epsilon_i - \bar{\epsilon}), \quad i = 1,2,\dots, 6 [/katex]

since [katex] \bar{y} = \beta_0 + \beta_1 \bar{x} + \bar{\epsilon} [/katex], the least squares principle gives
[katex] \hat{\beta_1} = \frac{\operatorname{cov}(x,y)}{\operatorname{var}(x)} = \frac{1.525}{1.475} = 1.034 [/katex]
[katex] \hat{\beta_0} = \bar{y} - \hat{\beta_1}\bar{x} = 3.85 - 1.034 \times 2.75 = 1.007 [/katex]
and

[katex] \hat{\beta_0} + \hat{\beta_1} = 1.007 + 1.034 \approx 2.04 [/katex]

(rounded to 2 decimal places).
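The closed-form least squares solution above can be verified numerically (a minimal sketch in plain Python; `sxx` and `sxy` are illustrative names for the centered sums of squares and cross-products):

```python
# Least squares estimates from the closed-form solution:
#   beta1 = S_xy / S_xx,  beta0 = ybar - beta1 * xbar
x = [1.0, 2.0, 2.5, 3.0, 3.5, 4.5]
y = [2.0, 3.0, 3.5, 4.2, 5.0, 5.4]
n = len(x)

xbar, ybar = sum(x) / n, sum(y) / n
sxx = sum((v - xbar) ** 2 for v in x)                   # centered sum of squares
sxy = sum((a - xbar) * (b - ybar) for a, b in zip(x, y))  # centered cross-products

beta1 = sxy / sxx
beta0 = ybar - beta1 * xbar
print(round(beta1, 3), round(beta0, 3), round(beta0 + beta1, 2))  # 1.034 1.007 2.04
```

Note that the n−1 divisors cancel in the ratio, so [katex] \hat{\beta_1} [/katex] is the same whether one uses the raw centered sums or the sample variance and covariance.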