Using Word Similarity Graphs to Explore Themes in Text: A Tutorial

One of the first questions people ask about text data is, “What is the text about?” This search for topics or themes involves reducing the complexity of the text down to a handful of meaningful categories. I have found that a lot of common approaches for this are not as useful as advertised. In this post, I’m going to explain and demonstrate how I use word similarity graphs to quickly explore and make sense of topics in text data.


Other Approaches

I have used a number of platforms that advertise “using AI” to examine themes in text data. I am generally underwhelmed by these platforms: the themes rarely make much sense; the underlying algorithm is not explained, so it is hard to figure out why the results don’t make sense; and the interfaces are rigid, not letting the user tune hyperparameters or change how the text is vectorized.

I have also experimented with a number of common unsupervised learning techniques. I appreciate the idea of Latent Dirichlet allocation (Blei, Ng, & Jordan, 2003), because it is a probabilistic model that is explicit about its parameters, how it is estimated, the assumptions on which it relies, and so on. However, I have rarely seen results that make much sense outside of the toy examples that text mining books use. Selecting an appropriate k has been difficult, too, as the four metrics programmed into the ldatuning package (Nikita, 2016) tend to disagree with one another, with some frequently recommending the smallest k considered and others the largest.

I also looked at vectorizing the documents, generally using tf-idf (Silge & Robinson, 2017, Ch. 3), and applying my go-to clustering algorithms, like k-NN or DBSCAN (Ester, Kriegel, Sander, & Xu, 1996). I generally wasn’t satisfied with the results here, either. For example, difficulties might arise due to Zipf’s law (Silge & Robinson, 2017, Ch. 3), as the p-dimensional space we are projecting documents into is sparse, which causes problems for k-NN (e.g., Grcar, Fortuna, Mladenic, & Grobelnik, 2006).

There are undoubtedly use-cases for each of these approaches—and text mining is not my expertise—but my experiences led me to use word similarity graphs for exploring topics.


Word Similarity Graphs

I have found word similarity graphs (or networks) to be useful. There are three primary steps involved:

  1. Calculate the similarities between words. This makes the word—not the document—the unit of analysis. Instead of looking for documents close to one another in p-dimensional space, we will be looking for groups of words that co-occur often. I generally focus on words that appear in at least J% of documents—but less than L% of documents—and I flatten any similarity scores between two words that fall below the Ith percentile to 0.

  2. Format these similarity scores into a symmetric matrix, where the diagonal contains 0s and the off-diagonal cells are similarities between words. Use this as an adjacency matrix to plot a network graph.

  3. Cluster nodes in this graph using a community detection algorithm. I use the Walktrap algorithm, which is based on random walks that take T steps in the network.

I like treating words as the unit of analysis; it makes sense to me to think of topics as being made up of words often used in conjunction with one another. This is also a naturally visual method, and plotting the network helps me understand topics better than columns of weights corresponding to latent topics. However, this approach is exploratory. I generally choose the values of the J, L, I, and T hyperparameters based on subjective decisions—not by minimizing a loss function; nonetheless, it helps me understand what is being talked about in text.

I will show how to use a few simple functions I wrote as I describe this technique in more detail. The full code for each of the functions is found at the end of this post. The data I am using are headlines from news stories and blog posts in the last 6 months that mention “Star Wars” (if the sequel trilogy makes you angry—you’re wrong, but I hope you still read the rest of this post). The data and all code necessary to reproduce this post can be found at my GitHub.


Preparation

The functions rely on the tidyverse being loaded into the current session, and they require the lsa and igraph packages to be installed. Before running any of the similarity or clustering functions, I run:

library(tidyverse)
library(tidytext)
library(igraph)
library(ggraph)
data(stop_words)
dat <- read_csv("starwars.csv") %>% 
  transmute(
    id = 1:nrow(.), # headline identification number for reference
    text = gsub("[-/]", " ", title),
    text = tolower(gsub("[^A-Za-z ]", "", text))
  ) %>% 
  unnest_tokens(word, text) %>% 
  anti_join(stop_words, by = "word") %>% 
  filter(word != "cnet") # corpus-specific stop word


Calculating Similarity

Many metrics exist to quantify how similar two units are to one another; distance measures can also be inverted to measure similarity (see Cha, 2007; Choi, Cha, & Tappert, 2010; Lesot, Rifqi, & Benhadda, 2009 for reviews). I started with an assumption that words belonging to the same topic will be used in the same document (which can be a story, chapter, song, sentence, headline, and so on), so I decided the foundation of word similarities here should be co-occurrences. If the words “Darth” and “Vader” appear together in 161 headlines (see below), then their similarity score would be 161.

dat %>% 
  group_by(id) %>% 
  summarise(vader = all(c("darth", "vader") %in% word)) %>% 
  with(sum(vader))
## [1] 161

A problem arises, however, in considering how frequently each word is used. The words “Princess” and “Leia” occur 31 times together, but “Princess” is used far less frequently than “Darth” in general (see below). Does that mean the words “Princess” and “Leia” are less similar to one another than “Darth” and “Vader”? Not necessarily.

dat %>% 
  group_by(id) %>% 
  summarise(leia = all(c("princess", "leia") %in% word)) %>% 
  with(sum(leia))
## [1] 31
dat %>% 
  group_by(id) %>% 
  summarise(
    darth = "darth" %in% word, 
    vader = "vader" %in% word, 
    princess = "princess" %in% word, 
    leia = "leia" %in% word
  ) %>% 
  select(-id) %>% 
  colSums()
##    darth    vader princess     leia 
##      236      232       39      116

We can overcome this difference in base rates by normalizing (standardizing) the co-occurrences. I use the cosine similarity for this, which is identical to the Ochiai coefficient in this situation (Zhou & Leydesdorff, 2016). The cosine similarity gets its name from being the cosine of the angle between two vectors. In our case, each vector is a word, and the length of these vectors is the number of documents: if the word appears in a document, it is scored as “1”; if it does not, it is “0.” For simplicity’s sake, let’s imagine we have two documents: “Darth” appears in the first, but not the second; “Vader” appears in both. Plotted in two-dimensional space, the vectors look like:

data.frame(word = c("darth", "vader"), d1 = 1, d2 = 0:1) %>% 
  ggplot(aes(x = d1, y = d2)) +
  geom_point() +
  coord_cartesian(xlim = 0:1, ylim = 0:1) +
  geom_segment(aes(x = 0, y = 0, xend = d1, yend = d2)) +
  theme_minimal() +
  theme(text = element_text(size = 18))

This is a 45 degree angle. We can verify that the cosine of 45 degrees is the same as the cosine similarity between those two vectors:

cos(45 * pi / 180) # this function takes radians, not degrees
## [1] 0.7071068
lsa::cosine(c(1, 1), c(0, 1))
##           [,1]
## [1,] 0.7071068

The binary (1 or 0) scoring means that word vectors never extend into negative space, so negative similarities cannot occur. In the two-dimensional example above, the largest angle possible is 90 degrees, which has a cosine of 0; the smallest angle possible is 0 degrees, which has a cosine of 1. Similarities are thus bounded between 0 (words are never used together) and 1 (words are always used together).
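
As a quick check on those boundaries, using the same lsa::cosine() function as above:

lsa::cosine(c(1, 0), c(0, 1)) # words that never share a document: cosine of 0
lsa::cosine(c(1, 1), c(1, 1)) # words that appear in exactly the same documents: cosine of 1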

I wrote a function that takes a tokenized data frame—where one column is named word and another is named id—and returns a symmetric cosine similarity matrix. There are three other arguments. First, what proportion of documents must a word appear in to be considered? This makes sure that words only used in one or two documents are not included. I generally tune this so that it takes the top 85 to 120 words. Second, what proportion of documents is too many to be considered? In the present example, the words “Star” and “Wars” appear in every headline, so they would not tell us differentiating information about topics. I usually set this to be about .80. Third, how large must the similarity be to be included in the word similarity graph? I define this as a percentile. If it is set at .50, for example, then the function will shrink the similarities that are below the median to 0. This is to cut down on spurious or inconsequential relationships in the graph. I generally set this to be somewhere between .65 and .90. There is a lot of debate in the literature about how to filter these graphs (e.g., Christensen, Kenett, Aste, Silvia, & Kwapil, 2018), and I still need to experiment with these different filtering methods to come to a more principled approach than the arbitrary one I currently use.
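
To make that third argument concrete, here is a toy version of the flattening step, using made-up similarity values and the same quantile() logic as the function at the end of the post:

sims <- c(.05, .10, .20, .40, .80) # hypothetical off-diagonal similarities
cutoff <- quantile(sims, .50)      # with filt = .50, the cutoff is the median
ifelse(sims < cutoff, 0, sims)     # similarities below the cutoff are shrunk to 0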

Using the function shown at the end of this post, I compute the cosine similarity matrix using the following code:

cos_mat <- cosine_matrix(dat, lower = .01, upper = .80, filt = .80)

Since there are 8,570 documents (headlines) in this corpus, a word must appear in more than 85.7 documents and fewer than 6,856 to be included in the graph. I only graph the similarities that are in the uppermost quintile (i.e., above the 80th percentile). This leaves 83 words:

dim(cos_mat)
## [1] 83 83
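
Those document thresholds come straight from the arguments:

n_docs <- length(unique(dat$id)) # 8,570 headlines
n_docs * .01                     # lower bound: words must appear in more than 85.7 documents
n_docs * .80                     # upper bound: and in fewer than 6,856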


Making the Graph

A background on network theory and analysis is outside the scope of the post—but see Baggio, Scott, and Cooper (2010); Borgatti and Halgin (2011); Borgatti, Mehra, Brass, and Labianca (2009); and Telesford, Simpson, Burdette, Hayasaka, and Laurienti (2011) for introductions. We can build the network from our similarity matrix using igraph’s graph_from_adjacency_matrix() function and then plot it with the ggraph package, which I like because it employs the same grammar as ggplot2. A random seed is set so that the layout of the graph is reproducible.

g <- graph_from_adjacency_matrix(cos_mat, mode = "undirected", weighted = TRUE)

set.seed(1839)
ggraph(g, layout = "nicely") +
  geom_edge_link(aes(alpha = weight), show.legend = FALSE) + 
  geom_node_label(aes(label = name)) +
  theme_void()

Words used in the same headlines appear closer to one another and have darker lines connecting them. This is obvious when looking at entities described by two words: J.J. Abrams, Episode IX, Blu-Ray, Darth Vader, Millennium Falcon. Topics should already be appearing to readers familiar with Star Wars, but we can make this clearer by finding words that cluster together.


Clustering the Graph

“Community detection” in social networks refers to finding groups of nodes (in our case, words are the nodes, but they can represent people, proteins, webpages, and so on) that have edges (the lines between the nodes) connecting to one another more than to other nodes in the network (Fortunato & Hric, 2016).

For example, imagine looking at emails in an organization. Each node is a person, and the number of emails sent between two people is the edge between them. If this organization has a marketing department and an accounting department, we would expect people inside of the same department to email one another more frequently than they email those outside of the department. If this were the case, a community detection algorithm applied to this email network would cluster people within the same department together. The case with words is analogous: We are looking to find “communities” of words used together more often than with other words, creating topics or themes.
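
Here is a minimal toy version of that idea in igraph (a made-up six-node graph, not the email example): two tight triangles joined by a single bridge should be split into two communities.

toy <- igraph::graph_from_edgelist(
  rbind(c("A", "B"), c("B", "C"), c("A", "C"), # one tight triangle
        c("D", "E"), c("E", "F"), c("D", "F"), # another tight triangle
        c("C", "D")),                          # a single bridge between them
  directed = FALSE
)
igraph::membership(igraph::cluster_walktrap(toy)) # A, B, and C should land in one community; D, E, and F in the other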

The algorithm I use here is called Walktrap (Pons & Latapy, 2006; also this YouTube video). It uses random walks to operationalize distance between nodes. A random walk refers to starting at a node and following edges to other nodes; each “step” moves from the current node to one of its neighbors, so a walk of T steps visits T nodes after the starting point. The direction in which a step is taken is based on the strength of the edge between two nodes: The higher the strength (weight), the higher the probability that the step is taken to that node.

The Walktrap algorithm is based on the idea that two nodes within the same community will “visit” all other nodes in the graph in similar ways via random walks. For example, we want to know the distance between “battlefront” and “games.” Do these words visit, for example, “lightsaber” in the same way? To get at this, the algorithm finds the probability that a random walk starting at “battlefront” leads to “lightsaber” in T steps. Then, it finds the probability that a random walk starting at “games” leads to “lightsaber” in T steps. The algorithm will subtract these probabilities to get the distance: If these probabilities are similar, the distance between “battlefront” and “games” will be small. It doesn’t just look at walks to “lightsaber,” though: The Walktrap will calculate this distance based on random walks to all other nodes in the graph.
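
For the formally inclined, Pons and Latapy define the probability of stepping from node i to node j as the weight of the edge between them divided by the total weight of i's edges, and (paraphrasing their paper) the distance between nodes i and j after walks of T steps as

$$ r_{ij} = \sqrt{\sum_{k = 1}^{n} \frac{\left(P^{T}_{ik} - P^{T}_{jk}\right)^2}{d(k)}}, $$

where $P^{T}_{ik}$ is the probability that a T-step random walk starting at i ends at k, and $d(k)$ is the degree of node k. The "subtraction" described above is thus a degree-weighted sum of squared differences taken over every node k in the graph.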

This distance is calculated for each pair of nodes in the graph, and then hierarchical clustering is applied to these distances. I wrote a wrapper—seen at the end of this post—for the igraph package’s Walktrap function to run this algorithm. It returns a list of objects useful for finding topics in word similarity graphs. You can run the function by:

topics <- walktrap_topics(g)

This returns a list of two objects. One is a dendrogram describing the hierarchical clustering process on the distances calculated from the random walks:

par(cex = .6)
plot(topics$dendrogram)


Examining Themes

The second object in the list returned by walktrap_topics() is a data.frame of two columns: word, which contains each word in the graph, and group, which contains the community to which each node belongs. I do a little bit of re-arranging of it to present the topics here:

topics$membership %>% 
  group_by(group) %>% 
  summarise(words = paste(word, collapse = ", "))
## # A tibble: 8 x 2
##   group words                                                              
##   <dbl> <chr>                                                              
## 1     1 abrams, carrie, cast, episode, fans, fisher, footage, hamill, ix, …
## 2     2 awakens, black, darth, fan, figure, film, force, jedi, lightsaber,…
## 3     3 disneyland, edge, falcon, galaxys, john, millennium                
## 4     4 animated, clone, comic, coming, empire, galaxy, marvel, movie, new…
## 5     5 battlefront, ea, game, games, ii, world                            
## 6     6 action, details, disney, disneys, jon, live, mandalorian, movies, …
## 7     7 advent, amazon, bb, free, lego, shipping, store, toys, wing        
## 8     8 blu, han, ray, solo, story

We could label the 8 themes in this corpus:

  1. Episode IX news

  2. Darth Vader: fan film and Black Series action figure

  3. Star Wars theme park

  4. Comics and animated series

  5. Battlefront II video game

  6. Live-action television series (Mandalorian, Cassian Andor)

  7. Toy shopping

  8. Solo: A Star Wars Story Blu-Ray release
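
If it helps to keep those labels alongside the data, one option is to join them onto the membership table (the label text is just my shorthand for the list above):

theme_labels <- tibble(
  group = 1:8,
  label = c("Episode IX news", "Darth Vader", "Theme park", "Comics and animated series",
            "Battlefront II", "Live-action TV series", "Toy shopping", "Solo Blu-Ray")
)

labeled_words <- left_join(topics$membership, theme_labels, by = "group")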

We can then assign the vector of group labels as a feature called “cluster” to the nodes of our graph object, g, and plot the same network graph above, but with the themes showing up as different colors:

V(g)$cluster <- arrange(topics$membership, word)$group

set.seed(1839)
ggraph(g, layout = "nicely") +
  geom_edge_link(aes(alpha = weight), show.legend = FALSE) + 
  geom_node_label(
    aes(label = name, color = factor(cluster)), 
    show.legend = FALSE
  ) +
  theme_void()


Conclusion

Word similarity graphs have been useful for me as a first step in exploring topics in text data. One can then follow up on what these networks show by coding documents for given topics based on the inclusion or exclusion of words in the text (e.g., any document that contains the words “live” and “action” refers to the television series topic). This is a flexible approach, as well. Instead of cosine similarity, one could use a different metric; instead of the Walktrap, one could use a different community detection algorithm (the igraph package alone offers quite a few; see this StackOverflow answer).
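
As a sketch of that follow-up step, reusing the tokenized dat from earlier and the same pattern as the co-occurrence counts above, we could flag every headline that mentions both “live” and “action”:

live_action <- dat %>% 
  group_by(id) %>% 
  summarise(tv_series = all(c("live", "action") %in% word))

sum(live_action$tv_series) # number of headlines coded into the live-action TV topic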


R Functions

cosine_matrix <- function(tokenized_data, lower = 0, upper = 1, filt = 0) {
  
  if (!all(c("word", "id") %in% names(tokenized_data))) {
    stop("tokenized_data must contain variables named word and id")
  }
  
  if (lower < 0 | lower > 1 | upper < 0 | upper > 1 | filt < 0 | filt > 1) {
    stop("lower, upper, and filt must be 0 <= x <= 1")
  }
  
  docs <- length(unique(tokenized_data$id))
  
  out <- tokenized_data %>%
    count(id, word) %>%                              # one row per document-word pair
    group_by(word) %>%
    mutate(n_docs = n()) %>%                         # number of documents containing each word
    ungroup() %>%
    filter(n_docs < (docs * upper) & n_docs > (docs * lower)) %>%
    select(-n_docs) %>%
    mutate(n = 1) %>%                                # binary scoring: the word is present in the document
    spread(word, n, fill = 0) %>%                    # document-by-word matrix of 1s and 0s
    select(-id) %>%
    as.matrix() %>%
    lsa::cosine()                                    # cosine similarities between word columns
  
  filt <- quantile(out[lower.tri(out)], filt)        # similarity cutoff at the filt percentile
  out[out < filt] <- diag(out) <- 0                  # zero out the diagonal and weak similarities
  out <- out[rowSums(out) != 0, colSums(out) != 0]   # drop words left with no connections
  
  return(out)
}

walktrap_topics <- function(g, ...) {
  wt <- igraph::cluster_walktrap(g, ...)
  
  membership <- wt %>%                 # reuse the fitted Walktrap object from above
      igraph::membership() %>% 
      as.matrix() %>% 
      as.data.frame() %>% 
      rownames_to_column("word") %>% 
      arrange(V1) %>% 
      rename(group = V1)
  
  dendrogram <- stats::as.dendrogram(wt)
  
  return(list(membership = membership, dendrogram = dendrogram))
}


References

Baggio, Scott, & Cooper (2010). Network science: A review focused on tourism. Annals of Tourism Research. Retrieved from: https://arxiv.org/pdf/1002.4766.pdf

Blei, Ng, & Jordan (2003). Latent Dirichlet allocation. Journal of Machine Learning Research. Retrieved from: http://www.jmlr.org/papers/volume3/blei03a/blei03a.pdf

Borgatti & Halgin (2011). On network theory. Organization Science. doi: 10.1287/orsc.1100.0641

Borgatti, Mehra, Brass, & Labianca (2009). Network analysis in the social sciences. Science. doi: 10.1126/science.1165821

Cha (2007). Comprehensive survey on distance/similarity measures between probability density functions. International Journal of Mathematical Models and Methods in Applied Sciences. Retrieved from: http://users.uom.gr/~kouiruki/sung.pdf

Choi, Cha, & Tappert (2010). A survey of binary similarity and distance measures. Systemics, Cybernetics and Informatics. Retrieved from: http://www.iiisci.org/journal/CV$/sci/pdfs/GS315JG.pdf

Christensen, Kenett, Aste, Silvia, & Kwapil (2018). Network structure of the Wisconsin Schizotypy Scales-Short Forms: Examining psychometric network filtering approaches. Behavior Research Methods. doi: 10.3758/s13428-018-1032-9

Ester, Kriegel, Sander, & Xu (1996). A density-based algorithm for discovering clusters a density-based algorithm for discovering clusters in large spatial databases with noise. Proceedings of the Second International Conference on Knowledge Discovery and Data Mining. Retrieved from: https://www.aaai.org/Papers/KDD/1996/KDD96-037.pdf

Fortunato & Hric (2016). Community detection in networks: A user guide. Physics Reports. Retrieved from: https://arxiv.org/pdf/1608.00163.pdf

Grcar, Fortuna, Mladenic, & Grobelnik (2006). Data sparsity issues in the collaborative filtering framework. International Workshop on Knowledge Discovery on the Web: Advances in Web Mining and Web Usage Analysis. doi: 10.1007/11891321_4

Lesot, Rifqi, & Benhadda (2009). Similarity measures for binary and numerical data: A survey. International Journal of Knowledge Engineering and Soft Data Paradigms. Retrieved from: http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.212.6533&rep=rep1&type=pdf

Nikita (2016). ldatuning: Tuning of the Latent Dirichlet allocation models parameters. R Package version 0.2.0

Pons & Latapy (2006). Computing communities in large networks using random walks. Journal of Graph Algorithms and Applications. Retrieved from: http://jgaa.info/accepted/2006/PonsLatapy2006.10.2.pdf

Silge & Robinson (2017). Text mining with R: A tidy approach. Retrieved from: https://www.tidytextmining.com/

Telesford, Simpson, Burdette, Hayasaka, & Laurienti (2011). The brain as a complex system: Using network science as a tool for understanding the brain. Brain Connectivity. doi: 10.1089/brain.2011.0055

Zhou & Leydesdorff (2016). The normalization of occurrence and co-occurrence matrices in bibliometrics using cosine similarities and Ochiai coefficients. Journal of the Association for Information Science and Technology. Retrieved from: https://arxiv.org/pdf/1503.08944.pdf

Sentiment Analysis of Kanye West's Discography

Introduction

Last time, I did some basic frequency analyses of Kanye West lyrics. I've been using the tidytext package at work a lot recently, and I thought I would apply some of the package's sentiment dictionaries to the Kanye West corpus I have already made (discussed here). The corpus is not big enough to do the analyses by song or by a very specific emotion (such as joy), so I will stick to tracking positive and negative sentiment of Kanye's lyrical content over the course of his career.

Method

For each song, I removed duplicate words (so that, for example, the song “Amazing” doesn’t have an undue influence on the analyses, given that he says “amazing”—a positive word—about 50 times). I allowed duplicate words within the same album, but not within the same song.

The tidytext package includes three different sentiment analysis dictionaries. One of the dictionaries (AFINN) assigns each word a score from -5 (very negative) to +5 (very positive). Using this dictionary, I simply took the average score for each album:

afinn <- kanye %>% # starting with full data set
  unnest_tokens(word, lyrics) %>% # what was long string of text, now becomes one word per row
  inner_join(get_sentiments("afinn"), by="word") %>% # joining sentiments with these words
  group_by(song) %>%  # grouping dataset by song
  filter(!duplicated(word)) %>% # dropping duplicate words within each song (filter respects the grouping; subset would not)
  ungroup() %>% # ungrouping real quick...
  group_by(album) %>% # and now grouping by album
  mutate(sentiment=mean(score)) %>%  # getting mean by album
  slice(1) %>% # taking one row per every album
  ungroup() %>% # ungrouping
  subset(select=c(album,sentiment)) # only including the album and sentiment columns

The other two dictionaries (Bing and NRC) tag each word as negative or positive. For these, I tallied up how many positive and negative words were used per album and subtracted the negative count from the positive count:

bing <- kanye %>% # taking data set
  unnest_tokens(word, lyrics) %>% # making long list of lyrics into one word per row
  inner_join(get_sentiments("bing"), by="word") %>% # joining words with sentiment
  group_by(song) %>% # grouping by song
  filter(!duplicated(word)) %>% # getting rid of duplicated words (within each song, since we are grouped by song)
  ungroup() %>% # ungrouping
  count(album, sentiment) %>% # counting up negative and positive sentiments per album
  spread(sentiment, n, fill=0) %>% # putting negative and positive into different columns
  mutate(sentiment=positive-negative) # subtracting negative from positive

# all of this is same as above, but with a different dictionary
nrc <- kanye %>% 
  unnest_tokens(word, lyrics) %>% 
  inner_join(get_sentiments("nrc"), by="word") %>%
  group_by(song) %>% 
  filter(!duplicated(word)) %>% 
  ungroup() %>% 
  count(album, sentiment) %>%
  spread(sentiment, n, fill=0) %>% 
  mutate(sentiment=positive-negative)

I then merged all these data frames together, standardized the sentiment scores, and averaged the three z-scores together to get an overall sentiment rating for each album:

colnames(afinn)[2] <- "sentiment_afinn" # renaming column will make it easier upon joining

bing <- ungroup(subset(bing, select=c(album,sentiment))) # getting rid of unnecessary columns
colnames(bing)[2] <- "sentiment_bing"

nrc <- ungroup(subset(nrc, select=c(album,sentiment)))
colnames(nrc)[2] <- "sentiment_nrc"

suppressMessages(library(plyr)) # getting plyr temporarily because I like join_all()
album_sent <- join_all(list(afinn, bing, nrc), by="album") # joining all three datasets by album
suppressMessages(detach("package:plyr")) # getting rid of plyr, because it gets in the way of dplyr

# creating composite sentiment score:
album_sent$sent <- (scale(album_sent$sentiment_afinn) + 
                    scale(album_sent$sentiment_bing) + 
                    scale(album_sent$sentiment_nrc))/3
album_sent <- album_sent[-7,c(1,5)] # subsetting data to not include watch throne and only include composite
# reordering the albums in chronological order
album_sent$album <- factor(album_sent$album, levels=levels(album_sent$album)[c(2,4,3,1,5,7,8,6)])

Results

And then I plotted the sentiment scores by album, in chronological order. Higher scores represent more positive lyrics:

ggplot(album_sent, aes(x=album, y=sent))+
  geom_point()+
  geom_line(group=1, linetype="longdash", size=.8, alpha=.5)+
  labs(x="Album", y="Sentiment")+
  theme_fivethirtyeight()+ # this theme comes from the ggthemes package
  scale_x_discrete(labels=c("The College Dropout", "Late Registration", "Graduation",
                            "808s & Heartbreak", "MBDTF", "Yeezus", "The Life of Pablo"))+
  theme(axis.text.x=element_text(angle=45, hjust=1),
        text = element_text(size=12))

[Plot: composite sentiment score by album, in chronological order]

Looks like a drop in happiness after his mother passed as well as after starting to date Kim Kardashian…

This plot jibes with what we know: 808s is the “sad” album. Graduation—with all the talk about achievement, being strong, making hit records, and having a big ego—comes out as the album with the most positive words. The plot seems to lend some validity to the method of analyzing sentiment using all three dictionaries from the tidytext package.

One issue with this “bag-of-words” approach, however, is that it is prone to error at finer levels of analysis, like the song. “Street Lights,” perhaps one of Kanye’s saddest songs, came out with one of the most positive scores. Why? The approach was reading words like “fair” as positive, neglecting to realize that a negation word (i.e., “not”) always preceded it. One could get around this with n-grams or more involved natural language processing.
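
Here is a rough sketch of the n-gram idea (not an analysis I ran for this post): tokenize the lyrics into bigrams and flag sentiment words that are preceded by “not.”

negated <- kanye %>% 
  unnest_tokens(bigram, lyrics, token="ngrams", n=2) %>% # one bigram per row
  separate(bigram, into=c("word1", "word2"), sep=" ") %>% # split the bigram into two columns
  filter(word1 == "not") %>% # keep only negated pairs
  inner_join(get_sentiments("bing"), by=c("word2"="word")) %>% # sentiment of the second word
  count(word2, sentiment, sort=TRUE) # which sentiment words get negated most often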

Nevertheless, there's the trajectory of Kanye's lyrical positivity over time!

Text Mining Kanye's Vocabulary

Introduction

I have been teaching myself how to do some text mining in R, and I thought a fun corpus to look at would be Kanye West lyrics. I'm still learning sentiment analysis, so this post will focus on basic frequencies for words in Kanye West songs. I'm going to look at the lyrics from Kanye's discography (his solo albums and Watch the Throne) to see (a) the words he uses most and (b) which albums have the biggest vocabularies.

Making the corpus

I wanted to scrape a website like Genius.com to automate the data collection, but sadly there were too many issues: How should I handle inconsistent formatting? What about when it says “Verse 1”? How do I edit out lyrics that Kanye didn't rap or sing? I just did a lot of copy-and-pasting into .txt files.

I edited down the files so that only verses by Kanye were included—I excluded features. I also took out the lyrics that were samples, choruses sung by other people, all spoken word introductions and outroductions, and skits. As is typical of text mining, I took out “stop words,” which are basically just boring words that I don't want to muck up my analyses (e.g., the, and, or). I also made it so that words like “change,” “changing,” and “changed” all counted as one word.
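
One way to do both of those steps (dropping stop words and collapsing word variants, i.e., stemming) is sketched below; this isn't necessarily the exact code I used.

library(tidytext)   # provides the stop_words data frame
library(SnowballC)  # provides the Porter stemmer

data(stop_words)
tokens <- c("the", "change", "changing", "changed", "and")
tokens <- tokens[!tokens %in% stop_words$word] # drops "the" and "and"
wordStem(tokens)                               # "chang" "chang" "chang"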

Like I said above, I included all of Kanye West's solo albums. My friends and I generally consider Watch the Throne to be primarily a Kanye album (American Gangster aside, did Jay-Z ever recover from collaborating with Linkin Park in 2004?), so I included the lyrics Kanye West performed on this record.

Frequently used words

The first thing I did was look at the words Kanye uses most frequently throughout his discography. He uses 3,294 unique words; 1,741 of these appear only once, 479 twice, and 242 three times. Here is a figure showing all the words he has used 99 or more times:

[Plot: words Kanye West has used 99 or more times across his discography]

Kanye West coined the term “hashtag rap,” which, in Kanye's words, involves taking “the ‘like’ or ‘as’ out of the metaphor” (think: “Here's another hit—Barry Bonds” instead of “Here's another hit, like Barry Bonds”). Despite this, “like” is the word Kanye uses most throughout his discography. “Know,” “get,” “now,” “just,” “don't,” and “got” follow after. To my eye, these look like words that deal with achieving or demanding things.

Which album has the biggest vocabulary?

The next thing I did was create a “uniqueness score” for each album. I started by calculating a score for each song: the number of unique words in the song divided by the total number of words in the song. For example, if a song had 25 unique words and 50 total words, it would have a score of 0.5. The phrase “Hello, hello, everybody” has a score of about 0.67, as there are two unique words (“hello” and “everybody”) out of three total words.
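
A quick toy check of that calculation:

phrase <- c("hello", "hello", "everybody")
length(unique(phrase)) / length(phrase) # 2 unique words out of 3 total, about 0.67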

Then I averaged these song-level scores within each album. So if an album had four songs with scores of .20, .40, .60, and .80, the album's “uniqueness score” would be 0.50. Here is a plot of the albums' “uniqueness scores” (i.e., average vocabulary size), in chronological order:

[Plot: album “uniqueness scores” in chronological order]

He's been remarkably consistent throughout his discography. The only notable difference is 808s & Heartbreak, which makes sense—he does not rap the lyrics, but sings them (mostly). My next post on this dataset will look at the content of his vocabulary: How has the content of his lyrics changed over time?