deriviation_of_a_and_b_in_a_simple_regression
derivation of a and b in regression
derivative for a
derivative for b
to understand [[:gradient descent]]
| [{{: | [{{: | ||
\begin{eqnarray*}
\sum{(Y_i - \hat{Y_i})^2} 
& = & \sum{(Y_i - (a + bX_i))^2} \\
& = & \text{SSE or SS.residual} \;\;\; \text{(and this should be the least value.)}
\end{eqnarray*}
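A quick way to see that SSE (SS.residual) is a quantity that changes with the choice of a and b is to compute it for a few candidate lines. The R sketch below uses made-up toy data; the data, the seed, and the sse() helper are illustrative assumptions, not something from this page.

<code r>
# toy data (illustrative assumption, not from this page)
set.seed(101)
x <- rnorm(30, mean = 5, sd = 2)
y <- 2 + 0.7 * x + rnorm(30, sd = 1)

# SS.residual for any candidate line a + b * X
sse <- function(a, b) sum((y - (a + b * x))^2)

sse(a = 0, b = 0)          # an arbitrary guess
sse(a = mean(y), b = 0)    # a flat line at the mean of Y (this equals SS.total)
sse(a = 2, b = 0.7)        # near the values used to generate y, so much smaller
</code>

Whichever pair (a, b) makes this sum smallest is the least squares line; the derivation below finds that pair analytically.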
\text{for a (constant)} \\
\\
\dfrac{\text{d}}{\text{da}} \sum{(Y_i - (a + bX_i))^2} 
& = & \sum \dfrac{\text{d}}{\text{da}} {(Y_i - (a + bX_i))^2} \\
& & \because \;\; Y_i - (a + bX_i) = \text{residual} \\
& & \therefore \;\; (Y_i - (a + bX_i))^2 = \text{residual}^2 \\
& = & \sum \dfrac{\text{d residual}^2}{\text{da}} \\
& = & \sum \dfrac{\text{d residual}^2}{\text{d residual}} * \dfrac{\text{d residual}}{\text{da}} \\
& = & \sum{2 * \text{residual}} * {\dfrac{\text{d residual}}{\text{da}}} \\
& = & \sum{2 * \text{residual}} * {\dfrac{\text{d}{(Y_i - (a + bX_i))}}{\text{da}}} \\
& = & \sum{2 * \text{residual}} * (0 - 1 - 0) \\
& & \because \;\; \dfrac{\text{d}Y_i}{\text{da}} = 0; \;\;\; \dfrac{\text{da}}{\text{da}} = 1; \;\;\; \dfrac{\text{d}(bX_i)}{\text{da}} = 0 \\
& = & \sum{2 * \text{residual}} * (-1) \\
& = & -2 \sum{(Y_i - (a + bX_i))} \\
\\
\text{for b (coefficient)} \\
\\
\dfrac{\text{d}}{\text{db}} \sum{(Y_i - (a + bX_i))^2} 
& = & \sum{2 (Y_i - (a + bX_i))} * (-X_i) \\
& & \because \;\; \dfrac{\text{d}}{\text{db}} (Y_i - (a+bX_i)) = -X_i \\
\text{setting each derivative to zero and solving:} \\
-2 \sum{(Y_i - (a + bX_i))} & = & 0 
\;\;\; \rightarrow \;\;\; a = \overline{Y} - b\overline{X} \\
-2 \sum{(Y_i - (a + bX_i)) X_i} & = & 0 \\
\text{substituting } a = \overline{Y} - b\overline{X} \text{:} \\
\sum{(Y_i - \overline{Y} - b(X_i - \overline{X})) X_i} & = & 0 \\
b & = & \dfrac{ \sum{(Y_i - \overline{Y})(X_i - \overline{X})} } {\sum{(X_i - \overline{X})(X_i - \overline{X})}} \\
b & = & \dfrac{ \text{SP} } {\text{SS}_\text{x}} = \dfrac{\text{Cov(X,Y)}}{\text{Var(X)}} \\
\end{eqnarray*}
</
The slope and intercept (b and a) of the line that makes the sum of the squared errors left over after predicting with the regression line (ss.res) as small as possible are as derived above.
The above obtains a and b through a proof; is there not also [[:gradient descent|a way to find a and b in an application such as R]]?
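One answer, sketched below, is [[:gradient descent]]: start from arbitrary values of a and b and repeatedly step against the two derivatives derived above, -2 * sum(residual) and -2 * sum(residual * X). The learning rate, iteration count, and starting values are arbitrary illustrative choices, the data are the same toy x and y as before, and this is only a sketch of the idea, not how R's lm() actually computes its estimates.

<code r>
# gradient descent with the two derivatives derived above
# (learning rate, iteration count, starting values: arbitrary illustrative choices)
a.hat <- 0
b.hat <- 0
lr <- 1e-4                      # learning rate (step size)

for (i in 1:50000) {
  res    <- y - (a.hat + b.hat * x)
  grad.a <- -2 * sum(res)       # d SSE / da
  grad.b <- -2 * sum(res * x)   # d SSE / db
  a.hat  <- a.hat - lr * grad.a # step against the gradient
  b.hat  <- b.hat - lr * grad.b
}

c(a.hat = a.hat, b.hat = b.hat) # should be close to the closed-form a and b
</code>

After enough iterations a.hat and b.hat settle at essentially the same values as the closed-form a and b, which is the sense in which gradient descent "finds" the least squares line.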