multiple_regression

Differences

This shows you the differences between two versions of the page.

multiple_regression [2020/12/01 19:08] – [in R] hkimscil
multiple_regression [2023/10/19 08:39] (current) – [Determining IVs' role] hkimscil
Line 44: Line 44:
====== e.g. ======
Data set again.
+<code>
+datavar <- read.csv("http://commres.net/wiki/_media/regression01-bankaccount.csv")
+</code>
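To confirm the file loaded as expected before going further, a quick check in R; a minimal sketch that assumes nothing beyond the datavar object created above:
<code>
str(datavar)      # column names and types as read from the CSV
head(datavar)     # first few rows
summary(datavar)  # quick range check for each column
</code>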
  
^  DATA for regression analysis   ^^^
Line 332: Line 334:
  
===== in R =====
-<code>dvar <- read.csv("http://commres.net/wiki/_media/elemapi2_.csv", fileEncoding="UTF-8-BOM")
+<code>dvar <- read.csv("http://commres.net/wiki/_media/elemapi2_.csv", sep = "\t", fileEncoding="UTF-8-BOM")
mod <- lm(api00 ~ ell + acs_k3 + avg_ed + meals, data=dvar)
summary(mod)
Line 391: Line 393:
></code>
$$ \hat{Y} = 709.6388 - 0.8434 \text{ell} + 3.3884 \text{acs\_k3} + 29.0724 \text{avg\_ed} - 2.9374 \text{meals} $$
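As a check on the equation above, the coefficients can be pulled straight from the fitted model and used to rebuild a fitted value by hand; a minimal sketch, assuming mod and dvar from the code block above and that case 1 has no missing values:
<code>
b <- coef(mod)   # (Intercept), ell, acs_k3, avg_ed, meals
b
# rebuild the first fitted value by hand: b0 + b1*ell + b2*acs_k3 + b3*avg_ed + b4*meals
x1 <- unlist(dvar[1, c("ell", "acs_k3", "avg_ed", "meals")])
sum(b * c(1, x1))
fitted(mod)[1]   # should agree with the hand calculation (case 1 must have no NAs)
</code>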
-====== Why is the overall model significant while the IVs are not? ======
-see https://www.researchgate.net/post/Why_is_the_Multiple_regression_model_not_significant_while_simple_regression_for_the_same_variables_is_significant 
  
-<code>
+Then, what is each independent variable's own (unique) explanatory power? --> see [[:partial and semipartial correlation]]
-RSS = 3:10 #Right shoe size
-LSS = rnorm(RSS, RSS, 0.1) #Left shoe size similar to RSS
-cor(LSS, RSS) #correlation ~ 0.99
-
-weights = 120 + rnorm(RSS, 10*RSS, 10)
-
-##Fit a joint model
-m = lm(weights ~ LSS + RSS)
-
-##Overall F test is highly significant (p-value is very small), but neither LSS nor RSS is significant
-summary(m)
-</code>
-
-
-<code>> RSS = 3:10 #Right shoe size
-> LSS = rnorm(RSS, RSS, 0.1) #Left shoe size - similar to RSS
-> cor(LSS, RSS) #correlation ~ 0.99
-[1] 0.9994836
->
-> weights = 120 + rnorm(RSS, 10*RSS, 10)
->
-> ##Fit a joint model
-> m = lm(weights ~ LSS + RSS)
->
-> ##Overall F test is highly significant (p-value is very small), but neither LSS nor RSS is significant
-> summary(m)
-
-Call:
-lm(formula = weights ~ LSS + RSS)
-
-Residuals:
-      1       2       3       4       5       6       7       8
- 4.8544  4.5254 -3.6333 -7.6402 -0.2467 -3.1997 -5.2665 10.6066
-
-Coefficients:
-            Estimate Std. Error t value Pr(>|t|)
-(Intercept)  104.842      8.169  12.834 5.11e-05 ***
-LSS          -14.162     35.447  -0.400    0.706
-RSS           26.305     35.034   0.751    0.487
----
-Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
-
-Residual standard error: 7.296 on 5 degrees of freedom
-Multiple R-squared:  0.9599, Adjusted R-squared:  0.9439
-F-statistic: 59.92 on 2 and 5 DF,  p-value: 0.000321
-
->
-> ##Fitting RSS or LSS separately gives a significant result.
-> summary(lm(weights ~ LSS))
-
-Call:
-lm(formula = weights ~ LSS)
-
-Residuals:
-   Min     1Q Median     3Q    Max
--6.055 -4.930 -2.925  4.886 11.854
-
-Coefficients:
-            Estimate Std. Error t value Pr(>|t|)
-(Intercept)  103.099      7.543   13.67 9.53e-06 ***
-LSS           12.440      1.097   11.34 2.81e-05 ***
----
-Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
-
-Residual standard error: 7.026 on 6 degrees of freedom
-Multiple R-squared:  0.9554, Adjusted R-squared:  0.948
-F-statistic: 128.6 on 1 and 6 DF,  p-value: 2.814e-05
-
->
-</code>
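The pattern in the removed example above (a significant overall F test but no significant individual coefficients) is the classic symptom of multicollinearity. A minimal base-R sketch that quantifies it with the variance inflation factor (VIF), reusing RSS and LSS from that example; car::vif(m) should report the same figure if the car package is installed:
<code>
# VIF of a predictor = 1 / (1 - R^2) from regressing it on the other predictor(s);
# with only two predictors the two VIFs are identical
r2.rss <- summary(lm(RSS ~ LSS))$r.squared
1 / (1 - r2.rss)   # typically in the hundreds here, far beyond the common rule-of-thumb cutoff of 10
</code>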
  
  
Line 490: Line 420:
  
|  | Standard Multiple   | Sequential   | comments   |
-| r<sub>i</sub><sup>2</sup>  \\ squared correlation \\ **zero-order** correlation   | IV<sub>1</sub> : (a+b) / (a+b+c+d)   | IV<sub>1</sub> : (a+b) / (a+b+c+d)   | overlapped effects   |
+| r<sub>i</sub><sup>2</sup>  \\ squared correlation \\ squared **zero-order** \\ correlation in spss  | IV<sub>1</sub> : (a+b) / (a+b+c+d)   | IV<sub>1</sub> : (a+b) / (a+b+c+d)   | overlapped effects   |
| ::: | IV<sub>2</sub> : (c+b) / (a+b+c+d)   | IV<sub>2</sub> : (c+b) / (a+b+c+d)   | ::: |
-| sr<sub>i</sub><sup>2</sup>  \\ squared **semipartial** correlation \\ **part in spss**   | IV<sub>1</sub> : (a) / (a+b+c+d)   | IV<sub>1</sub> : (a+b) / (a+b+c+d)   | Usual setting \\ Unique contribution to Y   |
+| sr<sub>i</sub><sup>2</sup>  \\ squared \\ **semipartial** correlation \\ **part in spss**   | IV<sub>1</sub> : (a) / (a+b+c+d)   | IV<sub>1</sub> : (a+b) / (a+b+c+d)   | Usual setting \\ Unique contribution to Y   |
| ::: | IV<sub>2</sub> : %%(c%%) / (a+b+c+d)   | IV<sub>2</sub> : %%(c%%) / (a+b+c+d)   | ::: |
-| pr<sub>i</sub><sup>2</sup>  \\ squared **partial** correlation \\ **partial in spss**   | IV<sub>1</sub> : (a) / (a+d)   | IV<sub>1</sub> : (a+b) / (a+b+d)   | Like adjusted r<sup>2</sup>  \\ Unique contribution to Y   |
+| pr<sub>i</sub><sup>2</sup>  \\ squared \\ **partial** correlation \\ **partial in spss**   | IV<sub>1</sub> : (a) / (a+d)   | IV<sub>1</sub> : (a+b) / (a+b+d)   | Like adjusted r<sup>2</sup>  \\ Unique contribution to Y   |
| ::: | IV<sub>2</sub> : %%(c%%) / (c+d)   | IV<sub>2</sub> : %%(c%%) / (c+d)   | ::: |
| Assuming IV<sub>1</sub> is entered before IV<sub>2</sub> (see the R sketch below)   ||||
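The quantities in the table can be reproduced numerically. A minimal base-R sketch for the standard (simultaneous) case, assuming dvar from the elemapi2 example above and taking ell as IV<sub>1</sub> and meals as IV<sub>2</sub>; the ppcor package's pcor() and spcor() report the same values, if installed:
<code>
# zero-order, squared semipartial (part), and squared partial correlation of ell with api00,
# controlling for meals; a, b, c, d refer to the areas named in the table above
d   <- na.omit(dvar[, c("api00", "ell", "meals")])
y   <- d$api00; iv1 <- d$ell; iv2 <- d$meals

r1  <- cor(y, iv1)                                    # zero-order r
sr1 <- cor(y, resid(lm(iv1 ~ iv2)))                   # semipartial: iv1 residualized on iv2
pr1 <- cor(resid(lm(y ~ iv2)), resid(lm(iv1 ~ iv2)))  # partial: both residualized on iv2
c(r.sq = r1^2, sr.sq = sr1^2, pr.sq = pr1^2)
</code>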
Line 651: Line 581:
</code>
  
 +[[:Multiple Regression Exercise]]
  
====== Resources ======