In this practical you’ll do basic statistics in R. By the end of this practical you will know how to:

- Calculate descriptive statistics with mean(), median(), and table()
- Conduct hypothesis tests with t.test() and cor.test()
- Conduct regression analyses with glm() and lm()
- Explore statistical objects with names(), summary(), print(), and predict()
- Use sampling functions such as rnorm() to conduct simulations

Package | Installation
---|---
tidyverse | install.packages("tidyverse")
lubridate | install.packages("lubridate")
broom | install.packages("broom")
rsq | install.packages("rsq")
library(tidyverse)
house <- read_csv("../_data/day_1/kc_house_data.csv")
File | Rows | Columns | Description | Source
---|---|---|---|---
kc_house_data.csv | 21613 | 21 | House sale prices for King County between May 2014 and May 2015. | Kaggle: House Sales Prediction
Descriptive Statistics
Function | Description
---|---
table() | Frequency table
mean(), median() | Measures of central tendency (note: base R's mode() returns an object's storage mode, not the statistical mode)
sd(), range(), var() | Measures of variability
max(), min() | Extreme values
summary() | Several summary statistics
Statistical Tests
Function | Hypothesis Test
---|---
t.test() | One- and two-sample t-test
cor.test() | Correlation test
glm(), lm() | Generalized linear model and linear model
Sampling Functions
Function | Description | Additional Help
---|---|---
sample() | Draw a random sample of values from a vector | ?sample
rnorm() | Draw random values from a Normal distribution | ?rnorm
runif() | Draw random values from a Uniform distribution | ?runif
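As a quick illustration of these three functions (a sketch; your exact values will differ because the draws are random unless you set a seed):

```r
set.seed(123)                    # make the draws reproducible
sample(x = 1:10, size = 3)       # 3 values from 1:10, without replacement
rnorm(n = 3, mean = 0, sd = 1)   # 3 draws from a standard Normal
runif(n = 3, min = 0, max = 10)  # 3 draws from a Uniform between 0 and 10
```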
# ------------------------------------------------
# Examples of hypothesis tests on the diamonds data
# ------------------------------------------------
library(tidyverse)
library(broom)
library(rsq)
# First few rows of the diamonds data
diamonds
# -----
# Descriptive statistics
# -----
mean(diamonds$carat) # What is the mean carat?
median(diamonds$price) # What is the median price?
max(diamonds$depth) # What is the maximum depth?
table(diamonds$color) # How many observations for each color?
# -----
# 1-sample hypothesis test
# -----
# Q: Is the mean carat of diamonds different from .50?
htest_A <- t.test(x = diamonds$carat,        # The data
                  alternative = "two.sided", # Two-sided test
                  mu = 0.5)                  # The null hypothesis
htest_A # Print result
names(htest_A) # See all attributes in object
htest_A$statistic # Get just the test statistic
htest_A$p.value # Get the p-value
htest_A$conf.int # Get a confidence interval
# -----
# 2-sample hypothesis test
# -----
# Q: Is there a difference in the carats of color = E and color = I diamonds?
htest_B <- t.test(formula = carat ~ color,         # DV ~ IV
                  alternative = "two.sided",       # Two-sided test
                  data = diamonds,                 # The data
                  subset = color %in% c("E", "I")) # Compare colors E and I
htest_B # Print result
# -----
# Correlation test
# ------
# Q: Is there a correlation between carat and price?
htest_C <- cor.test(formula = ~ carat + price,
                    data = diamonds)
htest_C
# A: Yes. r = 0.92, t(53938) = 551.51, p < .001
# -----
# Regression
# ------
# Q: Create regression equation predicting price by carat, depth, table, and x
price_glm <- glm(formula = price ~ carat + depth + table + x,
                 data = diamonds)
# Print coefficients
price_glm$coefficients
# Tidy version
tidy(price_glm)
# Extract R-Squared
rsq(price_glm)
# -----
# Simulation
# ------
# 100 random samples from a normal distribution with mean = 0, sd = 1
samp_A <- rnorm(n = 100, mean = 0, sd = 1)
# 100 random samples from a Uniform distribution with bounds at 0, 10
samp_B <- runif(n = 100, min = 0, max = 10)
# Calculate descriptives
mean(samp_A)
sd(samp_A)
mean(samp_B)
sd(samp_B)
# Combine samples (plus two new ones) in a tibble
my_sim <- tibble(A = samp_A,
                 B = samp_B,
                 C = rnorm(n = 100, mean = 0, sd = 1),
                 error = rnorm(n = 100, mean = 5, sd = 10))
# Add y, a linear function of A and B to my_sim
my_sim <- my_sim %>%
  mutate(y = 3 * A - 8 * B + error)
# Regress y on A, B and C
my_glm <- glm(y ~ A + B + C,
              data = my_sim)
# Look at results!
tidy(my_glm)
A. Open your R project. It should already have the folders 1_Data and 2_Code. Make sure that all of the data files listed above are contained in the 1_Data folder.
# Done!
B. Open a new R script and save it as a new file called statistics_practical.R in the 2_Code folder. At the top of the script, using comments, write your name and the date. Then, load the set of packages listed above with library().
library(tidyverse)
library(broom)
library(rsq)
C. For this practical, we’ll use the kc_house_data.csv data. This dataset contains house sale prices for King County, Washington. It includes homes sold between May 2014 and May 2015. Using the following template, load the data into R and store it as a new object called kc_house.
kc_house <- read_csv(file = "1_Data/kc_house_data.csv")
D. Using print(), summary(), head() and skim(), explore the data to make sure it was loaded correctly.
kc_house
# A tibble: 21,613 x 21
id date price bedrooms bathrooms sqft_living
<chr> <dttm> <dbl> <int> <dbl> <int>
1 7129300520 2014-10-13 00:00:00 221900 3 1 1180
2 6414100192 2014-12-09 00:00:00 538000 3 2.25 2570
3 5631500400 2015-02-25 00:00:00 180000 2 1 770
4 2487200875 2014-12-09 00:00:00 604000 4 3 1960
5 1954400510 2015-02-18 00:00:00 510000 3 2 1680
6 7237550310 2014-05-12 00:00:00 1225000 4 4.5 5420
7 1321400060 2014-06-27 00:00:00 257500 3 2.25 1715
8 2008000270 2015-01-15 00:00:00 291850 3 1.5 1060
9 2414600126 2015-04-15 00:00:00 229500 3 1 1780
10 3793500160 2015-03-12 00:00:00 323000 3 2.5 1890
# ... with 21,603 more rows, and 15 more variables: sqft_lot <int>,
# floors <dbl>, waterfront <int>, view <int>, condition <int>,
# grade <int>, sqft_above <int>, sqft_basement <int>, yr_built <int>,
# yr_renovated <int>, zipcode <int>, lat <dbl>, long <dbl>,
# sqft_living15 <int>, sqft_lot15 <int>
What is the mean price (price) of all houses?
mean(kc_house$price)
[1] 540088.1
What is the median price (price) of all houses?
median(kc_house$price)
[1] 450000
What is the standard deviation of the prices (price) of all houses?
sd(kc_house$price)
[1] 367127.2
What were the minimum and maximum house prices? Use min(), max(), and range().
range(kc_house$price)
[1] 75000 7700000
How many houses were on the waterfront (waterfront) and how many were not? (Hint: use table())
table(kc_house$waterfront)
0 1
21450 163
Do houses on the waterfront tend to sell at different prices than houses not on the waterfront? Create an object called water_htest which contains the result of the appropriate t-test using t.test(). Use the following template:
water_htest <- t.test(formula = XX ~ XX,
                      data = XX)
water_htest <- t.test(formula = price ~ waterfront,
                      data = kc_house)
Print the water_htest object to see summary results.
water_htest
Welch Two Sample t-test
data: price by waterfront
t = -12.876, df = 162.23, p-value < 2.2e-16
alternative hypothesis: true difference in means is not equal to 0
95 percent confidence interval:
-1303661.6 -956963.3
sample estimates:
mean in group 0 mean in group 1
531563.6 1661876.0
Apply the summary() function to water_htest. Do you see anything new or important?
summary(water_htest)
Length Class Mode
statistic 1 -none- numeric
parameter 1 -none- numeric
p.value 1 -none- numeric
conf.int 2 -none- numeric
estimate 2 -none- numeric
null.value 1 -none- numeric
alternative 1 -none- character
method 1 -none- character
data.name 1 -none- character
Explore the water_htest object with names(). What are the named elements of the object?
names(water_htest)
[1] "statistic" "parameter" "p.value" "conf.int" "estimate"
[6] "null.value" "alternative" "method" "data.name"
Using the $ operator, print the exact test statistic of the t-test.
water_htest$statistic
t
-12.87588
Using the $ operator, print the exact p-value of the test.
water_htest$p.value
[1] 1.379112e-26
Use the tidy() function to return a dataframe containing the main results of the test.
tidy(water_htest)
estimate estimate1 estimate2 statistic p.value parameter conf.low
1 -1130312 531563.6 1661876 -12.87588 1.379112e-26 162.229 -1303662
conf.high method alternative
1 -956963.3 Welch Two Sample t-test two.sided
Do houses on the waterfront also differ in their living space (sqft_living)? Answer this by conducting the appropriate t-test and going through the same steps you did in the previous question.
water2_htest <- t.test(formula = sqft_living ~ waterfront,
                       data = kc_house)
water2_htest
Welch Two Sample t-test
data: sqft_living by waterfront
t = -8.7506, df = 162.78, p-value = 2.569e-15
alternative hypothesis: true difference in means is not equal to 0
95 percent confidence interval:
-1350.7971 -853.4012
sample estimates:
mean in group 0 mean in group 1
2071.588 3173.687
# ...
Is the year in which a house was built (yr_built) related to its selling price? Create an object called time_htest which contains the result of the appropriate correlation test using cor.test(). Use the following template:
# Note: cor.test() is the only function (we know of)
#  that uses formulas in the strange format
#  formula = ~ X + Y instead of formula = Y ~ X
time_htest <- cor.test(formula = ~ XX + XX,
                       data = XX)
time_htest <- cor.test(formula = ~ yr_built + price,
                       data = kc_house)
Print the time_htest object to see summary results.
time_htest
Pearson's product-moment correlation
data: yr_built and price
t = 7.9517, df = 21611, p-value = 1.93e-15
alternative hypothesis: true correlation is not equal to 0
95 percent confidence interval:
0.04070886 0.06729506
sample estimates:
cor
0.05401153
Apply the summary() function to time_htest. Do you see anything new or important?
summary(time_htest)
Length Class Mode
statistic 1 -none- numeric
parameter 1 -none- numeric
p.value 1 -none- numeric
estimate 1 -none- numeric
null.value 1 -none- numeric
alternative 1 -none- character
method 1 -none- character
data.name 1 -none- character
conf.int 2 -none- numeric
Explore the time_htest object with names(). What are the named elements of the object?
names(time_htest)
[1] "statistic" "parameter" "p.value" "estimate" "null.value"
[6] "alternative" "method" "data.name" "conf.int"
Using the $ operator, print the exact correlation of the test.
time_htest$estimate
cor
0.05401153
Using the $ operator, print the exact p-value of the test.
time_htest$p.value
[1] 1.929873e-15
Use the tidy() function to return a dataframe containing the main results of the test.
tidy(time_htest)
estimate statistic p.value parameter conf.low conf.high
1 0.05401153 7.95167 1.929873e-15 21611 0.04070886 0.06729506
method alternative
1 Pearson's product-moment correlation two.sided
Try to use the rsq() function to obtain the R-squared value.
# Actually it doesn't work!
# rsq(time_htest)
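Since rsq() expects a fitted regression object rather than the htest object returned by cor.test(), one workaround (a sketch, not part of the original solution) is to square the estimated correlation directly:

```r
# R-squared of a correlation is just the squared correlation coefficient
time_htest$estimate ^ 2
```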
Do houses in better condition (condition) tend to sell at higher prices than those in worse condition? Answer this by conducting the appropriate correlation test and going through the same steps you did in the previous question.
condition_htest <- cor.test(formula = ~ condition + price,
                            data = kc_house)
condition_htest
Pearson's product-moment correlation
data: condition and price
t = 5.349, df = 21611, p-value = 8.936e-08
alternative hypothesis: true correlation is not equal to 0
95 percent confidence interval:
0.02304097 0.04966970
sample estimates:
cor
0.03636179
# ...
Now repeat the analysis, this time predicting price from yr_built in a regression using lm(). To do this, use the following template:
time_lm <- lm(formula = XX ~ XX,
              data = XX)
time_lm <- lm(formula = price ~ yr_built,
              data = kc_house)
Print the time_lm object to see the main results.
time_lm
Call:
lm(formula = price ~ yr_built, data = kc_house)
Coefficients:
(Intercept) yr_built
-790477.9 675.1
Use the tidy() function (from the broom package) to return a dataframe containing the main results of the test.
tidy(time_lm)
term estimate std.error statistic p.value
1 (Intercept) -790477.8729 167350.23419 -4.723494 2.332828e-06
2 yr_built 675.0698 84.89661 7.951670 1.929873e-15
How does the p-value of the regression compare to the one you got from cor.test()?
# The same!
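You can check this directly (a quick sketch using the objects created above); for a simple regression with one predictor, the slope's p-value is mathematically identical to the correlation test's:

```r
tidy(time_lm)$p.value[2]  # p-value for the yr_built slope
time_htest$p.value        # p-value from cor.test()
```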
How well do bedrooms, bathrooms, and sqft_living predict housing price? Answer this by conducting the appropriate regression analysis using lm() and assigning the result to an object price_lm. Use the following template:
price_lm <- lm(formula = XX ~ XX + XX + XX,
               data = XX)
price_lm <- lm(formula = price ~ bedrooms + bathrooms + sqft_living,
               data = kc_house)
Print the price_lm object to see the main results.
price_lm
Call:
lm(formula = price ~ bedrooms + bathrooms + sqft_living, data = kc_house)
Coefficients:
(Intercept) bedrooms bathrooms sqft_living
74847.1 -57860.9 7932.7 309.4
Apply the summary() function to price_lm. Do you see anything new or important?
summary(price_lm)
Call:
lm(formula = price ~ bedrooms + bathrooms + sqft_living, data = kc_house)
Residuals:
Min 1Q Median 3Q Max
-1642456 -144268 -22821 102461 4180678
Coefficients:
Estimate Std. Error t value Pr(>|t|)
(Intercept) 74847.141 6913.667 10.83 <2e-16 ***
bedrooms -57860.894 2334.607 -24.78 <2e-16 ***
bathrooms 7932.712 3510.556 2.26 0.0239 *
sqft_living 309.392 3.087 100.23 <2e-16 ***
---
Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
Residual standard error: 257800 on 21609 degrees of freedom
Multiple R-squared: 0.5069, Adjusted R-squared: 0.5069
F-statistic: 7405 on 3 and 21609 DF, p-value: < 2.2e-16
Explore the price_lm object with names(). What are the named elements of the object?
names(price_lm)
[1] "coefficients" "residuals" "effects" "rank"
[5] "fitted.values" "assign" "qr" "df.residual"
[9] "xlevels" "call" "terms" "model"
Using the $ operator, print a vector of the estimated coefficients of the analysis.
price_lm$coefficients
(Intercept) bedrooms bathrooms sqft_living
74847.1408 -57860.8943 7932.7122 309.3924
Use the tidy() function (from the broom package) to return a dataframe containing the main results of the test.
tidy(price_lm)
term estimate std.error statistic p.value
1 (Intercept) 74847.1408 6913.666912 10.825969 3.046344e-27
2 bedrooms -57860.8943 2334.607078 -24.783997 9.812099e-134
3 bathrooms 7932.7122 3510.555931 2.259674 2.385140e-02
4 sqft_living 309.3924 3.086779 100.231460 0.000000e+00
Use the rsq() function (from the rsq package) to obtain the R-squared value. Does this match what you saw in your previous outputs?
rsq(price_lm)
[1] 0.5069198
Create a regression object called everything_lm that predicts housing prices based on all predictors in kc_house except for id and date (id is meaningless and date shouldn’t matter). Use the following template. Note that to include all predictors, we’ll use the formula = y ~ . shortcut. We’ll also remove id and date from the dataset using select() before running the analysis.
everything_lm <- lm(formula = XX ~ .,
                    data = kc_house %>%
                      select(-id, -date))
everything_lm <- lm(formula = price ~ .,
                    data = kc_house %>%
                      select(-id, -date))
Print the everything_lm object to see the main results.
everything_lm
Call:
lm(formula = price ~ ., data = kc_house %>% select(-id, -date))
Coefficients:
(Intercept) bedrooms bathrooms sqft_living sqft_lot
6.690e+06 -3.577e+04 4.114e+04 1.501e+02 1.286e-01
floors waterfront view condition grade
6.690e+03 5.830e+05 5.287e+04 2.639e+04 9.589e+04
sqft_above sqft_basement yr_built yr_renovated zipcode
3.113e+01 NA -2.620e+03 1.981e+01 -5.824e+02
lat long sqft_living15 sqft_lot15
6.027e+05 -2.147e+05 2.168e+01 -3.826e-01
Apply the summary() function to everything_lm. Do you see anything new or important?
summary(everything_lm)
Call:
lm(formula = price ~ ., data = kc_house %>% select(-id, -date))
Residuals:
Min 1Q Median 3Q Max
-1291725 -99229 -9739 77583 4333222
Coefficients: (1 not defined because of singularities)
Estimate Std. Error t value Pr(>|t|)
(Intercept) 6.690e+06 2.931e+06 2.282 0.02249 *
bedrooms -3.577e+04 1.892e+03 -18.906 < 2e-16 ***
bathrooms 4.114e+04 3.254e+03 12.645 < 2e-16 ***
sqft_living 1.501e+02 4.385e+00 34.227 < 2e-16 ***
sqft_lot 1.286e-01 4.792e-02 2.683 0.00729 **
floors 6.690e+03 3.596e+03 1.860 0.06285 .
waterfront 5.830e+05 1.736e+04 33.580 < 2e-16 ***
view 5.287e+04 2.140e+03 24.705 < 2e-16 ***
condition 2.639e+04 2.351e+03 11.221 < 2e-16 ***
grade 9.589e+04 2.153e+03 44.542 < 2e-16 ***
sqft_above 3.113e+01 4.360e+00 7.139 9.71e-13 ***
sqft_basement NA NA NA NA
yr_built -2.620e+03 7.266e+01 -36.062 < 2e-16 ***
yr_renovated 1.981e+01 3.656e+00 5.420 6.03e-08 ***
zipcode -5.824e+02 3.299e+01 -17.657 < 2e-16 ***
lat 6.027e+05 1.073e+04 56.149 < 2e-16 ***
long -2.147e+05 1.313e+04 -16.349 < 2e-16 ***
sqft_living15 2.168e+01 3.448e+00 6.289 3.26e-10 ***
sqft_lot15 -3.826e-01 7.327e-02 -5.222 1.78e-07 ***
---
Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
Residual standard error: 201200 on 21595 degrees of freedom
Multiple R-squared: 0.6997, Adjusted R-squared: 0.6995
F-statistic: 2960 on 17 and 21595 DF, p-value: < 2.2e-16
Using the $ operator, print a vector of the estimated coefficients of the analysis. Are the beta values for bedrooms, bathrooms, and sqft_living different from what you got in your previous analysis price_lm?
everything_lm$coefficients
(Intercept) bedrooms bathrooms sqft_living sqft_lot
6.690325e+06 -3.576654e+04 4.114428e+04 1.501005e+02 1.285979e-01
floors waterfront view condition grade
6.689550e+03 5.829605e+05 5.287094e+04 2.638565e+04 9.589045e+04
sqft_above sqft_basement yr_built yr_renovated zipcode
3.112758e+01 NA -2.620223e+03 1.981258e+01 -5.824199e+02
lat long sqft_living15 sqft_lot15
6.027482e+05 -2.147298e+05 2.168140e+01 -3.826418e-01
Use the tidy() function to return a dataframe containing the main results of the test.
tidy(everything_lm)
term estimate std.error statistic p.value
1 (Intercept) 6.690325e+06 2.931485e+06 2.282231 2.248541e-02
2 bedrooms -3.576654e+04 1.891843e+03 -18.905663 4.460096e-79
3 bathrooms 4.114428e+04 3.253678e+03 12.645469 1.596998e-36
4 sqft_living 1.501005e+02 4.385409e+00 34.227254 4.466373e-250
5 sqft_lot 1.285979e-01 4.792237e-02 2.683462 7.291971e-03
6 floors 6.689550e+03 3.595859e+03 1.860348 6.284986e-02
7 waterfront 5.829605e+05 1.736010e+04 33.580488 5.007658e-241
8 view 5.287094e+04 2.140055e+03 24.705418 6.538058e-133
9 condition 2.638565e+04 2.351461e+03 11.220959 3.874277e-29
10 grade 9.589045e+04 2.152789e+03 44.542422 0.000000e+00
11 sqft_above 3.112758e+01 4.360319e+00 7.138831 9.710987e-13
12 yr_built -2.620223e+03 7.265914e+01 -36.061853 1.389078e-276
13 yr_renovated 1.981258e+01 3.655592e+00 5.419802 6.030379e-08
14 zipcode -5.824199e+02 3.298581e+01 -17.656677 2.775354e-69
15 lat 6.027482e+05 1.073472e+04 56.149395 0.000000e+00
16 long -2.147298e+05 1.313390e+04 -16.349280 1.005905e-59
17 sqft_living15 2.168140e+01 3.447721e+00 6.288618 3.264445e-10
18 sqft_lot15 -3.826418e-01 7.326917e-02 -5.222413 1.782435e-07
Use the rsq() function (from the rsq package) to obtain the R-squared value. How does this R-squared value compare to what you got in your previous regression price_lm?
rsq(everything_lm)
[1] 0.6997472
How well does the everything_lm model fit the actual housing prices? We can answer this by calculating the average difference between the fitted values and the true values directly. Using the following template, make this calculation. What do you find is the mean absolute difference between the fitted housing prices and the true prices?
# True housing prices
prices_true <- kc_house$XX
# Model fits (fitted.values)
prices_fitted <- everything_lm$XX
# Calculate absolute error between fitted and true values
abs_error <- abs(XX - XX)
# Calculate mean absolute error
mae <- mean(XX)
# Print the result!
mae
# True housing prices
prices_true <- kc_house$price
# Model fits (fitted.values)
prices_fitted <- everything_lm$fitted.values
# Calculate absolute error between fitted and true values
abs_error <- abs(prices_true - prices_fitted)
# Calculate mean absolute error
mae <- mean(abs_error)
# Print the result!
mae
[1] 125922.6
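For comparison (a sketch using the price_lm object fitted earlier), the same calculation for the smaller three-predictor model should yield a larger mean absolute error, since that model explained less variance:

```r
# Mean absolute error of the three-predictor model
mean(abs(kc_house$price - price_lm$fitted.values))
```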
Using the following template, create a scatterplot showing the relationship between the fitted values from the everything_lm object and the true prices.
# Create dataframe containing fitted and true prices
prices_results <- tibble(truth = XX,
                         fitted = XX)
# Create scatterplot
ggplot(data = prices_results,
       aes(x = fitted, y = truth)) +
  geom_point(alpha = .1) +
  geom_smooth(method = 'lm') +
  labs(title = "Fitted versus true housing prices",
       subtitle = paste0("Mean Absolute Error = ", mae),
       caption = "Source: King County housing price Kaggle dataset",
       x = "Fitted housing price values",
       y = 'True values') +
  theme_minimal()
# Create dataframe containing fitted and true prices
prices_results <- tibble(truth = prices_true,
                         fitted = prices_fitted)
# Create scatterplot
ggplot(data = prices_results,
       aes(x = fitted, y = truth)) +
  geom_point(alpha = .1) +
  geom_smooth(method = 'lm') +
  labs(title = "Fitted versus true housing prices",
       subtitle = paste0("Mean Absolute Error = ", mae),
       caption = "Source: King County housing price Kaggle dataset",
       x = "Fitted housing price values",
       y = 'True values') +
  theme_minimal()
Add a new variable to kc_house called million that is 1 when the price is over 1 million, and 0 when it is not. Run the following code to create this new variable.
# Create a new binary variable called million that
# indicates when houses sell for more than 1 million
# Note: 1e6 is a shortcut for 1000000
kc_house <- kc_house %>%
  mutate(million = price > 1e6)
Use the glm() function to conduct a logistic regression to see which of the variables bedrooms, bathrooms, floors, waterfront, yr_built predict whether or not a house will sell for over 1 Million. Be sure to include the argument family = 'binomial' to tell glm() that we are conducting a logistic regression analysis.
# Logistic regression analysis predicting which houses will sell for
# more than 1 Million
million_glm <- glm(formula = XX ~ XX + XX + XX + XX + XX,
                   data = kc_house,
                   family = "XX")
million_glm <- glm(formula = million ~ bedrooms + bathrooms + floors + waterfront + yr_built,
                   family = "binomial", # Logistic regression
                   data = kc_house)
Print the million_glm object to see the main results.
million_glm
Call: glm(formula = million ~ bedrooms + bathrooms + floors + waterfront +
yr_built, family = "binomial", data = kc_house)
Coefficients:
(Intercept) bedrooms bathrooms floors waterfront
27.92982 0.09524 2.04122 0.42525 3.27013
yr_built
-0.01867
Degrees of Freedom: 21612 Total (i.e. Null); 21607 Residual
Null Deviance: 10710
Residual Deviance: 7391 AIC: 7403
Apply the summary() function to million_glm.
summary(million_glm)
Call:
glm(formula = million ~ bedrooms + bathrooms + floors + waterfront +
yr_built, family = "binomial", data = kc_house)
Deviance Residuals:
Min 1Q Median 3Q Max
-3.9182 -0.3076 -0.2182 -0.1143 4.0936
Coefficients:
Estimate Std. Error z value Pr(>|z|)
(Intercept) 27.929817 2.152211 12.977 < 2e-16 ***
bedrooms 0.095237 0.034519 2.759 0.0058 **
bathrooms 2.041219 0.054958 37.142 < 2e-16 ***
floors 0.425255 0.068982 6.165 7.06e-10 ***
waterfront 3.270126 0.211849 15.436 < 2e-16 ***
yr_built -0.018673 0.001116 -16.736 < 2e-16 ***
---
Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
(Dispersion parameter for binomial family taken to be 1)
Null deviance: 10714.3 on 21612 degrees of freedom
Residual deviance: 7391.1 on 21607 degrees of freedom
AIC: 7403.1
Number of Fisher Scoring iterations: 7
Using the $ operator, print a vector of the estimated beta values (coefficients) of the analysis.
million_glm$coefficients
(Intercept) bedrooms bathrooms floors waterfront yr_built
27.92981675 0.09523674 2.04121853 0.42525455 3.27012640 -0.01867290
Use the tidy() function to return a dataframe containing the main results of the test.
tidy(million_glm)
term estimate std.error statistic p.value
1 (Intercept) 27.92981675 2.152211183 12.977266 1.646540e-38
2 bedrooms 0.09523674 0.034518893 2.758974 5.798308e-03
3 bathrooms 2.04121853 0.054957806 37.141558 6.000104e-302
4 floors 0.42525455 0.068982018 6.164716 7.060979e-10
5 waterfront 3.27012640 0.211848542 15.436152 9.351496e-54
6 yr_built -0.01867290 0.001115733 -16.735985 7.166189e-63
The fitted probabilities from the model are stored in million_glm$fitted.values. Using this vector, calculate the average probability that houses will sell for more than 1 Million (hint: just take the mean!)
mean(million_glm$fitted.values)
[1] 0.06778328
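A useful property of logistic regression with an intercept is that the mean of the fitted probabilities equals the observed base rate, so this value should match the proportion of million-dollar sales in the data (a quick sketch):

```r
# Observed proportion of houses selling for more than 1 Million
mean(kc_house$million)
```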
# Just run it!
million_fit <- tibble(pred_million = million_glm$fitted.values,
                      true_million = kc_house$million) %>%
  mutate(fitted_cut = cut(pred_million, breaks = seq(0, 1, .1))) %>%
  group_by(fitted_cut) %>%
  summarise(true_prob = mean(true_million))
million_fit
# A tibble: 10 x 2
fitted_cut true_prob
<fct> <dbl>
1 (0,0.1] 0.0231
2 (0.1,0.2] 0.192
3 (0.2,0.3] 0.292
4 (0.3,0.4] 0.387
5 (0.4,0.5] 0.562
6 (0.5,0.6] 0.518
7 (0.6,0.7] 0.550
8 (0.7,0.8] 0.673
9 (0.8,0.9] 0.574
10 (0.9,1] 0.837
# Just run it!
ggplot(million_fit,
       aes(x = fitted_cut, y = true_prob, col = as.numeric(fitted_cut))) +
  geom_point(size = 2) +
  labs(x = "Fitted Probability",
       y = "True Probability",
       title = "Predicting the probability of a 1 Million house",
       subtitle = "Using logistic regression with glm(family = 'binomial')") +
  scale_y_continuous(limits = c(0, 1)) +
  guides(col = FALSE)
How are housing prices distributed in the kc_house dataset? Find out by creating a histogram of price with the following code:
# Just run it!
# Create a histogram of housing prices
ggplot(data = kc_house,
       aes(x = price, y = ..density..)) +
  geom_histogram(col = "white") +
  geom_density(col = "red") +
  theme_minimal() +
  labs(title = "Housing prices (original)")
`stat_bin()` using `bins = 30`. Pick better value with `binwidth`.
The distribution of prices is heavily skewed. Try plotting the distribution of log-transformed prices using the log() function.
# Just run it!
# Create a histogram of log-transformed housing prices
ggplot(data = kc_house,
       aes(x = log(price), y = ..density..)) +
  geom_histogram(col = "white") +
  geom_density(col = "red") +
  theme_minimal() +
  labs(title = "Housing prices (log-transformed)")
`stat_bin()` using `bins = 30`. Pick better value with `binwidth`.
Add a new variable called price_l to kc_house that contains log-transformed prices.
# Add price_l, log-transformed price, to kc_house
kc_house <- kc_house %>%
  mutate(price_l = log(price))
Now that you have the log-transformed prices in price_l, try repeating your previous regression analyses. Do you get the same results? If not, which results do you trust more?
# on your own!
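One possible sketch (assuming price_l was added as above; using yr_built as the predictor is just an example):

```r
# Regress log-transformed price on year built
time_l_lm <- lm(formula = price_l ~ yr_built,
                data = kc_house)
tidy(time_l_lm)
```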
Can you predict the year in which Donald’s house was built using the predict() function? You know that his house is on the waterfront, has 3 bedrooms, 2 floors and has a condition of 4. The following block of code may help you!
# Create regression model predicting year (yr_built)
year_lm <- lm(formula = XX ~ XX + XX + XX + XX,
              data = XX)
# Define Donald's House
DonaldsHouse <- tibble(waterfront = X,
                       bedrooms = X,
                       floors = X,
                       condition = X)
# Predict the year of Donald's house
predict(object = year_lm,
        newdata = DonaldsHouse)
# Create regression model predicting year (yr_built)
year_lm <- lm(formula = yr_built ~ waterfront + bedrooms + floors + condition,
              data = kc_house)
# Define Donald's House
DonaldsHouse <- tibble(waterfront = 1,
                       bedrooms = 3,
                       floors = 2,
                       condition = 4)
# Predict the year of Donald's house
predict(object = year_lm,
        newdata = DonaldsHouse)
1
1963.885
You can easily generate random samples from statistical distributions in R. To see all of them, run ?distributions
. For example, to generate samples from the well known Normal distribution, you can use rnorm()
. Look at the help menu for rnorm()
to see its arguments.
Let’s explore the rnorm() function. Using rnorm(), create a new object samp_100 which is 100 samples from a Normal distribution with mean 10 and standard deviation 5. Print the object to see what the elements look like. What should the mean and standard deviation of this sample be? Test it by evaluating its mean and standard deviation directly using the appropriate functions. Then, do a one-sample t-test on this sample against the null hypothesis that the true mean is 10. What are the results? Use the following code template to help!
# Generate 100 samples from a Normal distribution with mean = 10 and sd = 5
samp_100 <- rnorm(n = XX, mean = XX, sd = XX)
# Print result
samp_100
# Calculate sample mean and standard deviation.
mean(XX)
sd(XX)
t.test(x = XX,  # Vector of values
       mu = XX) # Mean under null hypothesis
# Generate 100 samples from a Normal distribution with mean = 10 and sd = 5
samp_100 <- rnorm(n = 100, mean = 10, sd = 5)
# Print result
samp_100
[1] 7.7123149 0.0295536 15.4177207 1.8091445 4.8427877 9.7708274
[7] 10.1343561 11.1991115 11.7673196 10.8825861 7.9837963 18.3583695
[13] 22.0864481 15.7664408 14.4855380 10.8448450 9.8525495 3.1577809
[19] 4.1783465 6.0854516 7.8272127 5.8079508 11.8415534 6.1164426
[25] 10.4629226 9.2174843 12.7622893 13.4996237 15.8221147 12.8715114
[31] 15.9768745 7.8619657 18.0703823 9.2978981 1.3741191 6.6422572
[37] 3.1675480 7.1475393 1.7928087 6.7128352 12.2255039 12.0311948
[43] 16.6516502 21.2151224 8.0239223 5.1164304 10.9439008 12.9135575
[49] 13.4913153 6.6957224 6.8585887 11.5052704 10.9082755 8.0249644
[55] 6.6295324 4.6254969 5.0244020 8.0361374 7.7478228 8.7272640
[61] 14.9102524 18.1796592 10.3165647 13.5585428 4.8796974 9.3854295
[67] 1.6344352 7.1909232 12.6420497 11.4103614 17.0717663 11.6628556
[73] 8.1236099 13.0811984 17.3548255 6.5946889 10.9775800 15.8909203
[79] 8.0364854 15.3454795 12.5222672 10.0226927 8.7717356 13.4319201
[85] 3.1891194 15.2602636 11.2638745 12.0048706 10.1891494 9.4931506
[91] 17.0967707 11.4358932 10.1487704 4.7057640 13.7424784 4.0489639
[97] 16.9999941 10.8806999 4.4738260 10.1907790
# Calculate sample mean and standard deviation.
mean(samp_100)
[1] 10.16155
sd(samp_100)
[1] 4.59164
t.test(x = samp_100, # Vector of values
       mu = 10)      # Mean under null hypothesis
One Sample t-test
data: samp_100
t = 0.35184, df = 99, p-value = 0.7257
alternative hypothesis: true mean is not equal to 10
95 percent confidence interval:
9.250469 11.072631
sample estimates:
mean of x
10.16155
# on your own!
# on your own!
Can you predict what the coefficients of the regression equation will be? Test your prediction by running the code and exploring the my_lm object.
# Generate independent variables
x1 <- rnorm(n = 100, mean = 10, sd = 1)
x2 <- rnorm(n = 100, mean = 20, sd = 10)
x3 <- rnorm(n = 100, mean = -5, sd = 5)
# Generate noise
noise <- rnorm(n = 100, mean = 0, sd = 1)
# Create dependent variable
y <- 3 * x1 + 2 * x2 - 5 * x3 + 100 + noise
# Combine all into a tibble
my_data <- tibble(x1, x2, x3, y)
# Calculate my_lm
my_lm <- lm(formula = y ~ x1 + x2 + x3,
            data = my_data)
Now repeat the simulation, but change the code so that the true coefficients are (Intercept) = -50, x1 = -3, x2 = 10, x3 = 15.
# Generate independent variables
x1 <- rnorm(n = 100, mean = 10, sd = 1)
x2 <- rnorm(n = 100, mean = 20, sd = 10)
x3 <- rnorm(n = 100, mean = -5, sd = 5)
# Generate noise
noise <- rnorm(n = 100, mean = 0, sd = 1)
# Create dependent variable
y <- -3 * x1 + 10 * x2 + 15 * x3 - 50 + noise
# Combine all into a tibble
my_data <- tibble(x1, x2, x3, y)
# Calculate my_lm
my_lm <- lm(formula = y ~ x1 + x2 + x3,
            data = my_data)
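To check your prediction (a sketch; the estimates will deviate slightly from the true values because of the random noise):

```r
# The recovered coefficients should be close to
# (Intercept) = -50, x1 = -3, x2 = 10, x3 = 15
my_lm$coefficients
```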
For more advanced mixed level ANOVAs with random effects, consult the afex and lme4 packages.
To do Bayesian versions of common hypothesis tests, try using the BayesFactor
package. BayesFactor Guide Link