multiple_regression
multiple_regression [2020/12/03 15:20] – [in R] hkimscil → [2023/10/19 08:39] (current) – [Determining IVs' role] hkimscil
====== e.g. ======
Data set again.
<code>
datavar <- read.csv("
</code>
^ DATA for regression analysis ^
===== in R =====
<code>
mod <- lm(api00 ~ ell + acs_k3 + avg_ed + meals, data=dvar)
summary(mod)
...
</code>
$$ \hat{Y} = 709.6388 - 0.8434 \text{ell} + 3.3884 \text{acs_k3} + 29.0724 \text{avg_ed} - 2.9374 \text{meals} $$
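The coefficients in the equation above come straight from the fitted model object. A minimal, self-contained sketch (simulated data with the same variable names, since the original data file is not reproduced here) shows that a predicted value is just the linear combination of the coefficients:

<code>
# Sketch with simulated data; variable names mirror the model above.
set.seed(1)
d <- data.frame(ell = rnorm(50), acs_k3 = rnorm(50),
                avg_ed = rnorm(50), meals = rnorm(50))
d$api00 <- 700 - 0.8*d$ell + 3.4*d$acs_k3 + 29*d$avg_ed - 2.9*d$meals +
           rnorm(50, 0, 10)
m <- lm(api00 ~ ell + acs_k3 + avg_ed + meals, data = d)
coef(m)   # intercept and the four slopes, as in the equation

# predict() for case 1 equals b0 + b1*ell + b2*acs_k3 + b3*avg_ed + b4*meals:
all.equal(unname(predict(m)[1]),
          sum(coef(m) * c(1, unlist(d[1, c("ell","acs_k3","avg_ed","meals")]))))
</code>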
Then, what is the unique explanatory power of each independent variable?
====== Why is the overall model significant while the IVs are not? ======
see https://

<code>
RSS = 3:10                  # Right shoe size
LSS = rnorm(RSS, RSS, 0.1)  # Left shoe size - similar to RSS
cor(LSS, RSS)               # very high correlation

weights = 120 + rnorm(RSS, 10*RSS, 10)

## Fit a joint model
m = lm(weights ~ LSS + RSS)

## The overall F-test is significant, but neither LSS nor RSS is
summary(m)
</code>
<code>
> RSS = 3:10                  # Right shoe size
> LSS = rnorm(RSS, RSS, 0.1)  # Left shoe size - similar to RSS
> cor(LSS, RSS)               # very high correlation
[1] 0.9994836
>
> weights = 120 + rnorm(RSS, 10*RSS, 10)
>
> ## Fit a joint model
> m = lm(weights ~ LSS + RSS)
>
> ## The overall F-test is significant, but neither LSS nor RSS is
> summary(m)

Call:
lm(formula = weights ~ LSS + RSS)

Residuals:
     ...

Coefficients:
            Estimate Std. Error t value Pr(>|t|)
(Intercept)      ...        ...     ...      ...
LSS          -14.162        ...     ...      ...
RSS              ...        ...     ...      ...
---
Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1

Residual standard error: 7.296 on 5 degrees of freedom
Multiple R-squared:  ...
F-statistic:  ...

>
> ## Fitting RSS or LSS separately gives a significant result.
> summary(lm(weights ~ LSS))

Call:
lm(formula = weights ~ LSS)

Residuals:
     ...
-6.055 -4.930 -2.925    ...

Coefficients:
            Estimate Std. Error t value Pr(>|t|)
(Intercept)      ...        ...     ...      ...
LSS              ...        ...     ...      ...
---
Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1

Residual standard error: 7.026 on 6 degrees of freedom
Multiple R-squared:  ...
F-statistic:  ...
>
</code>
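The pattern above is multicollinearity: LSS and RSS carry almost the same information, so the joint model cannot attribute the effect to either one, even though together they clearly predict weight. One way to quantify this is the variance inflation factor, VIF = 1/(1 − R²), where R² comes from regressing one IV on the other(s). A minimal sketch without extra packages (with two IVs, this is the same number `car::vif` would report):

<code>
# Sketch: VIF for LSS computed by hand from the shoe-size simulation.
set.seed(7)
RSS <- 3:10                          # Right shoe size
LSS <- rnorm(length(RSS), RSS, 0.1)  # Left shoe size - nearly identical
r2  <- summary(lm(LSS ~ RSS))$r.squared
vif <- 1 / (1 - r2)
r2    # close to 1
vif   # very large: SEs in the joint model are inflated accordingly
</code>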
^ Standard Multiple ^^
| r<sup>2</sup> | |
| ::: | IV<sub>...</sub> |
| sr<sup>2</sup> | |
| ::: | IV<sub>...</sub> |
| pr<sup>2</sup> | |
| ::: | IV<sub>...</sub> |
| IV<sub>...</sub> | |
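The r<sup>2</sup>, sr<sup>2</sup>, and pr<sup>2</sup> entries in the table can all be computed from residuals, which makes their meaning concrete. A minimal sketch with two simulated IVs (packages such as ppcor automate this, but the residual route shows where each number comes from):

<code>
# Sketch: zero-order (r), semipartial (sr), and partial (pr) correlations
# for iv1, with iv2 controlled, using simulated data.
set.seed(42)
iv2 <- rnorm(100)
iv1 <- 0.5*iv2 + rnorm(100)
dv  <- iv1 + iv2 + rnorm(100)

r  <- cor(dv, iv1)                 # zero-order correlation
e1 <- resid(lm(iv1 ~ iv2))         # iv1 with iv2's influence removed
sr <- cor(dv, e1)                  # semipartial: iv2 removed from iv1 only
pr <- cor(resid(lm(dv ~ iv2)), e1) # partial: iv2 removed from both
c(r = r^2, sr = sr^2, pr = pr^2)   # the squared versions in the table
</code>

Note that sr<sup>2</sup> can never exceed pr<sup>2</sup>, since the semipartial divides the shared variance by all of the DV's variance rather than only the part left after controlling for the other IV.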
</code>
[[:Multiple Regression Exercise]]
====== Resources ======