Here is a link to the lecture slides for this session: LINK
In this practical you’ll learn how to do natural language processing in R. By the end of this practical you will know how to:
Here are some tools:
Function | Description
---|---
paste(), paste0() | Base functions for combining strings
str_c() | stringr function for combining strings
str_extract_all() | stringr function for extracting strings using regular expressions
Here are the main read-in functions:
Function | Description
---|---
read_file() | Read an entire (flat) text file into a single string
readRDS() | Read from R's RDS format
file(..., 'r'), readLines() | Read from a file connection
Here are the main web-scraping functions (package rvest):
Function | Description
---|---
read_html() | Read HTML from the web
html_nodes() | Access tagged elements within the HTML document using, e.g., XPath
html_table() | Extract a table from an HTML document
Here are the main tidytext (& dplyr) functions:
Function | Description
---|---
unnest_tokens() | Split text into words (tokens)
bind_tf_idf() | Compute tf-idf weighting
get_sentiments() | Access sentiment data sets
inner_join() | Join words with, e.g., sentiments
anti_join() | Eliminate words, e.g., stop_words
This tutorial begins with an optional task for users who feel confident programming in R and want to deal with the often complicated but necessary step of bringing the raw text data into the right shape for your analysis tools. As this involves programming elements that we have not covered yet, I generally recommend jumping ahead to Read in processed text data and returning to this section later if you have the time.
Read in the subtitles for each episode of Game of Thrones using read_file() from the readr package. To do this, first extract the file names of all of the files using list.files(path, full.names = TRUE). One way to achieve this quickly is to first create a vector containing the subtitle folders using paste() and then to use the lapply() function to iterate over the folders. The lapply() function, like any other apply() function (this will be covered in the Programming with R session), iterates over the object provided as its first argument and applies the function provided as its second. Thus, you want to run a command similar to files <- lapply(folder_paths, list.files, full.names = TRUE). Note that any additional arguments are passed on to the function specified in the second argument. One possible way to put these pieces together is sketched below.
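The following sketch is only one possibility; the folder names are placeholders, since the exact layout of the subtitle files depends on where you stored today's data sets.

```r
library(readr)

# hypothetical folder layout: one folder per season (adjust to your data)
folder_paths <- paste0("subtitles/season_", 1:7)

# one character vector of file names per season
files <- lapply(folder_paths, list.files, full.names = TRUE)

# flatten to a single vector and read each subtitle file into one long string
files <- unlist(files)
got   <- lapply(files, read_file)
```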
Extract the text lines from the subtitles. Begin by inspecting the text. Use str_sub() to print the first few hundred characters. Try to identify which characters precede the spoken lines and which follow them. Think about how to build a regular expression that captures the start and end points of a spoken line and that also handles the many lines containing comments rather than speech. Evaluate the code below (find more info here). Try to understand why the regular expression looks the way it does. Use it to extract the text.
```r
# inspect
str_sub(got[[1]], 1, 1000)

# extract data
got = str_extract_all(got, '(?<=\n)[^(][<i>]*[:alpha:]+[:control:]*[:print:]+(?=\r*\n)')
```
Scrape the episode names from the Wikipedia list of Game of Thrones episodes using read_html(), html_nodes(), and html_table() from the rvest package. Evaluate the code below and try to understand what it does.

```r
# define XPath locations of episode tables
paths = paste0('//*[@id="mw-content-text"]/div/table[', 2:8, ']')

# extract episode names
names = unlist(lapply(paths, function(x) {
  read_html('https://en.wikipedia.org/wiki/List_of_Game_of_Thrones_episodes') %>%
    html_nodes(xpath = x) %>%
    html_table() %>%
    `[[`(1) %>%
    `[[`(3) %>%
    str_replace_all('"', '')
}))
```
4. Combine the extracted text, the episode names, their index in the season, and the season's index in a single tibble(). Use the code below. Try to understand what the code does.
```r
# create tibbles
got = lapply(1:length(got), function(i){
  season  = ceiling(i / 10)
  episode = i - ((season - 1) * 10)
  tibble(season, episode, title = names[i], text = got[[i]])
})

# combine data frames
got = do.call(rbind, got)
```
Read in the processed text data using readRDS() from today's data sets.

Extract the words from the subtitles using the amazingly fast and convenient unnest_tokens() from the tidytext package (install and load it first). Try using the pipe %>%. Evaluate the effect on the object. How much larger have its dimensions become?
Count the words using the count() function from the dplyr package. Don't forget to ungroup(). A sketch of both steps is shown below.
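A minimal sketch of the tokenization and counting steps, assuming the processed data lives in a tibble called got with a text column (as created above):

```r
library(tidytext)
library(dplyr)

# one row per word: the text column is split into tokens
got_words <- got %>%
  unnest_tokens(word, text)

# frequency of each word across the whole show
word_counts <- got_words %>%
  count(word, sort = TRUE) %>%
  ungroup()
```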
One of the most fundamental findings of Psycholinguistics is that there are few words that occur a lot and many that occur rarely. To be precise, the relationship between a word's rank in the frequency distribution (with rank 1 indicating the highest frequency) and its actual frequency is commonly found to follow a power law, roughly proportional to \(rank^{-\alpha}\), where \(\alpha\) is some value around 1. This relationship is known as Zipf's law. Let's see if Game of Thrones also shows Zipf's law. Create a new variable that contains the rank of the frequency counts (remember that rank 1 must be associated with the highest frequency) using rank(). Then plot the relationship between the log() of the rank and the log() of the frequency. If the plot looks roughly linear, then the relationship follows a power law (because we are plotting in log-log space).
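A sketch of the Zipf check, reusing the word_counts object assumed above:

```r
library(ggplot2)

# rank 1 = most frequent word, hence the minus sign
zipf <- word_counts %>%
  mutate(rank = rank(-n, ties.method = "first"))

# an approximately straight line in log-log space suggests a power law
ggplot(zipf, aes(x = log(rank), y = log(n))) +
  geom_point()
```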
Another fundamental finding of Psycholinguistics, related to information science, is that frequent words should have fewer characters in order to maximize the efficiency of communication. Analyze this relationship by adding a variable containing each word's number of characters using nchar() and plotting it against the words' frequencies. Does Game of Thrones communicate efficiently?
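One way this could look, again assuming the word_counts object from above:

```r
# word length versus (log) frequency
word_counts %>%
  mutate(length = nchar(word)) %>%
  ggplot(aes(x = length, y = log(n))) +
  geom_point(alpha = .3)
```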
In this subsection we will be looking at the term-document matrix. Because of the way the tidyverse works, we will not actually need to transform our data into a matrix. However, if you wanted to, you could cast the table with counts to a term-document or document-term matrix using cast_tdm() or cast_dtm(). This will create an object of class TermDocumentMatrix or DocumentTermMatrix and allow you to continue working with the tools included in the tm package.
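If you do want such a matrix, a sketch could look like this, assuming the got_words object from above and treating each episode (title) as a document:

```r
library(tm)  # provides the DocumentTermMatrix class

dtm <- got_words %>%
  count(title, word) %>%
  cast_dtm(document = title, term = word, value = n)
```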
Determine the 3 most important words for each episode based on raw word frequencies (using the title variable and top_n()).
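A possible sketch, assuming the got_words object from above:

```r
# three most frequent words per episode, based on raw counts
got_words %>%
  count(title, word) %>%
  group_by(title) %>%
  top_n(3, n) %>%
  ungroup()
```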
Remove words without specific meaning, which are known as stop words. A stop word list is already available through the tidytext package; the object is called stop_words. Remove the stop words using anti_join(), then again determine the top 3 most frequent words and plot them.
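A sketch of the stop word removal plus one possible plot:

```r
# drop stop words, then recompute the top 3 words per episode
top_clean <- got_words %>%
  anti_join(stop_words, by = "word") %>%
  count(title, word) %>%
  group_by(title) %>%
  top_n(3, n) %>%
  ungroup()

# faceted bar plot: one small panel per episode
ggplot(top_clean, aes(x = reorder(word, n), y = n)) +
  geom_col() +
  coord_flip() +
  facet_wrap(~ title, scales = "free_y")
```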
Transform the counts to tf-idf using bind_tf_idf(). Repeat the same analysis for the newly created tf_idf variable.
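A sketch of the tf-idf version, again treating each episode (title) as a document:

```r
got_tfidf <- got_words %>%
  count(title, word) %>%
  bind_tf_idf(term = word, document = title, n = n)

# top 3 words per episode, now ranked by tf-idf instead of raw frequency
got_tfidf %>%
  group_by(title) %>%
  top_n(3, tf_idf) %>%
  ungroup()
```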
Aggregate the tf-idf values at the level of the seasons by computing the sum of all tf-idf values of a word across all episodes of a season. What are the 10 most diagnostic words of each season (i.e., the ten words with the highest aggregate tf-idf values)? Adjust the plot from before to visualize the ten most diagnostic words per season and their tf-idf values.
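One possible way to do the aggregation, assuming the season column created earlier:

```r
# sum the per-episode tf-idf values of each word within a season
season_tfidf <- got_words %>%
  count(season, title, word) %>%
  bind_tf_idf(term = word, document = title, n = n) %>%
  group_by(season, word) %>%
  summarise(tf_idf = sum(tf_idf), .groups = "drop") %>%
  group_by(season) %>%
  top_n(10, tf_idf) %>%
  ungroup()

# ten most diagnostic words per season
ggplot(season_tfidf, aes(x = reorder(word, tf_idf), y = tf_idf)) +
  geom_col() +
  coord_flip() +
  facet_wrap(~ season, scales = "free_y")
```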
Load the AFINN sentiment data set using get_sentiments("afinn")
and assign it to an object.
Read back in the processed data (see above), tokenize the sentences using unnest_tokens()
, and join the data with the AFINN data set using inner_join()
.
Aggregate the sentiment score by episode and plot the result.
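A sketch of the sentiment steps; note that in current versions of tidytext the AFINN score column is called value (it was score in older versions):

```r
# load AFINN: each word gets an integer score between -5 and +5
afinn <- get_sentiments("afinn")

# tokenize the processed data and keep only words with a sentiment score
got_sentiment <- got %>%
  unnest_tokens(word, text) %>%
  inner_join(afinn, by = "word")

# aggregate the score per episode and plot the sentiment trajectory
got_sentiment %>%
  group_by(season, episode) %>%
  summarise(sentiment = sum(value), .groups = "drop") %>%
  ggplot(aes(x = episode, y = sentiment)) +
  geom_col() +
  facet_wrap(~ season)
```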
Play around with the other sentiment data sets, which can be accessed using get_sentiments("bing")
and get_sentiments("nrc")
.