b:head_first_statistics:using_the_normal_distribution (last modified 2025/10/08 12:11 by hkimscil)
</code>
  
<WRAP box>
pnorm in R: the cumulative <fc #ff0000>**P**</fc>ercentage corresponding to a given standard score
<code>
rnorm(n, mean = 0, sd = 1)
</code>
</WRAP>
{{  :b:head_first_statistics:pasted:20201204-175705.png?500}}
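A quick console check of these functions may help (a sketch added here; the z-scores and outputs are standard-normal facts, not values from the text):

<code>
> pnorm(1.96)    # cumulative percentage below the standard score 1.96
[1] 0.9750021
> qnorm(0.975)   # the standard score with 97.5% of the area below it (inverse of pnorm)
[1] 1.959964
> rnorm(3)       # three random draws from N(0, 1); results vary
</code>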
  
  
Therefore,
{{:b:head_first_statistics:pasted:20191114-080220.png}}
  
<WRAP box>
Q: So what’s the difference between linear transforms and independent observations?

A: Linear transforms affect the underlying values in your probability distribution. As an example, if you have a rope of a particular length, then applying a linear transform affects the length of the rope. Independent observations have to do with the quantity of things you’re dealing with. As an example, if you have n independent observations of a piece of rope, then you’re talking about n pieces of rope. In general, __if the quantity changes__, you’re dealing with **independent observations**. __If the underlying values change__, then you’re dealing with a **transform**.
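The distinction can be sketched with a short simulation (a hypothetical illustration; the rope-length numbers below are mine, not the text's):

<code>
# simulated 'rope lengths': X ~ N(10, sd = 2), so mean 10 and variance 4
> x <- rnorm(10000, mean = 10, sd = 2)
# linear transform 3X + 1: mean -> 3*10 + 1 = 31, variance -> 9*4 = 36
> mean(3 * x + 1); var(3 * x + 1)
# sum of 3 independent observations: mean -> 3*10 = 30, variance -> 3*4 = 12
> y <- replicate(10000, sum(rnorm(3, mean = 10, sd = 2)))
> mean(y); var(y)
</code>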
[1] 0.9452007
# or
> pnorm(800, 720, sqrt(2500),
+       lower.tail = TRUE)
[1] 0.9452007
</code>
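Because qnorm inverts pnorm, the same result can be checked in reverse (a quick sketch, not in the original transcript; note that sqrt(2500) = 50):

<code>
> qnorm(0.9452007, 720, sqrt(2500))   # which score has 94.52% of the area below it? ≈ 800
</code>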
Before going further:
  
<WRAP info>
So what’s the probability of getting 30 or more questions right out of 40? That will help us determine whether to keep playing, or walk away.
</WRAP>
  
  
<WRAP box>
There are 40 questions, which means there are 40 trials.
  
  
</WRAP>
<WRAP box>
<code>
> pbinom(29, 40, 1/4, lower.tail = F)
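# equivalently (a sanity check, not in the original transcript):
# sum the individual binomial probabilities for 30 through 40 correct answers
> sum(dbinom(30:40, 40, 1/4))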
  
  
<WRAP help>
Before we use the normal distribution for the full 40 questions for Who Wants To Win A Swivel Chair, let’s tackle a simpler problem to make sure it works. Let’s try finding the probability that we get 5 or fewer questions correct out of 12, where there are only two possible choices for each question.
  
{{:b:head_first_statistics:pasted:20191118-095652.png}}
  
<WRAP box>
Computing this with R:
<code>
This value is close to the 0.387 obtained above.
  
<WRAP box>
  * In particular circumstances you can **use the normal distribution to approximate the binomial**: if X ~ B(n, p), np > 5, and nq > 5, then you can approximate X using X ~ N(np, npq).
  * If you’re approximating the binomial distribution with the normal distribution, then you need to **<fc #ff0000>apply a continuity correction</fc>** to make sure your results are accurate.
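For the 12-question example above, both sides of this approximation can be computed directly (a sketch; here np = 6 and npq = 3):

<code>
> pbinom(5, 12, 1/2)        # exact binomial P(X <= 5), ≈ 0.3872
> pnorm(5.5, 6, sqrt(3))    # normal approximation with continuity correction, ≈ 0.3864
</code>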
{{:b:head_first_statistics:pasted:20191118-103328.png}}
  
<WRAP box>
Q: Does it really save time to approximate the binomial distribution with the normal?
  
===== Pool Puzzle =====
<wrap #continuity_correction_egs />
<WRAP box>
X < 3  ----  <wrap spoiler> X < 2.5 </wrap>

X > 3  ----  <wrap spoiler> X > 3.5 </wrap>
</WRAP>
b/head_first_statistics/using_the_normal_distribution.1759846833.txt.gz · Last modified: by hkimscil
