A look at some Tweets from Thanksgiving 2015

Overview


Twitter is great. Behind all those tweets lies an enormous amount of data that holds interesting facts and opinions on almost any subject, from people all around the world.

During Thanksgiving 2015 (November 26), while everyone was eating turkey, I fired up Spark to capture tweets containing the keyword ‘thanksgiving’.

The reason I did this work was that I was interested in exploring the tweets generated during that period of time, principally the top hashtags, mentions and retweets. Moreover, I wanted to try Apache Zeppelin, a web-based notebook (similar to IPython or Jupyter) for interactive data analytics.

Note: because this report contains several images, I changed the layout of this page to make them more visible.

The data


The dataset used consists of 177,955 tweets obtained on November 26, 2015.

Platforms used


Apache Spark (with Spark Streaming and PySpark), Apache Pig, and Apache Zeppelin.

Report


Obtaining the data

The data used for this work was obtained using Spark Streaming and its Twitter library. The script captured only the text component of each tweet; in other words, just the tweet itself. After an hour or so of capturing tweets, I ended up with a directory made of many subdirectories containing the tweets. Because of this, a Pig script was written to merge the content of all these files into a single one. Both scripts are in the GitHub repository of this project.
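The actual capture and merge scripts live in the repo; purely as an illustration, the merge step could also be sketched in PySpark like this (the paths are assumptions, and the project itself used Pig for this):

```python
from pyspark import SparkContext

sc = SparkContext(appName="merge-tweets")

# Spark Streaming writes one subdirectory per batch, so a glob pattern
# reads every part file at once; coalesce(1) writes a single output file.
tweets = sc.textFile("tweets/*/part-*")
tweets.coalesce(1).saveAsTextFile("tweets-merged")
```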

This is an example of a single tweet: "Macy's Thanksgiving day parade never gets old #HappyThanksgiving"

The result

Once the data was in the desired format, it was loaded into Zeppelin. Keep in mind that I used the Spark interpreter, meaning that the syntax you will see in the following images is Spark code, or PySpark to be more specific.

The following screenshot (taken from the Zeppelin notebook) shows how the data was loaded, and the number of tweets available in the dataset: 177,955.

[Screenshot: loading the data and counting the tweets]
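In PySpark, that loading step amounts to something like this (the file name is an assumption; in Zeppelin's Spark interpreter, sc is predefined):

```python
# Load the merged tweet file into an RDD and count its elements.
tweets = sc.textFile("thanksgiving_tweets.txt")
tweets.count()  # returns 177955 for this dataset
```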

After counting the tweets, I executed the flatMap transformation on my data structure (an RDD, to be precise) to get every single word of the corpus as an element of the RDD, followed by reduceByKey to count them; the total number of words is 2,145,540. Then a new dataframe, made of the words and their frequencies, was created. The next images display the code written to achieve this, plus a bar chart and table that present the top 10 words. The bar chart and table are merely the output of an SQL query (also shown in the images).
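A rough sketch of that code (variable and table names are my assumptions, not the exact notebook code):

```python
from pyspark.sql import Row

# Split each tweet into words, then count occurrences with reduceByKey.
words = tweets.flatMap(lambda t: t.split(" "))
word_counts = (words.map(lambda w: (w.lower(), 1))
                    .reduceByKey(lambda a, b: a + b))

# Turn the counts into a DataFrame and register it so Zeppelin's
# %sql paragraphs can query it (sqlContext is predefined in Zeppelin).
rows = word_counts.map(lambda wc: Row(word=wc[0], frequency=wc[1]))
df = sqlContext.createDataFrame(rows)
df.registerTempTable("words")

# In a separate Zeppelin paragraph:
# %sql SELECT word, frequency FROM words ORDER BY frequency DESC LIMIT 10
```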

[Screenshots: word-count code, and the top 10 words as a bar chart and table]

These are the top 10 words and their frequencies.

It is not surprising that the most common words are "thanksgiving" and "happy".

Something really cool about Zeppelin is that you can change the view of the output of an SQL query just by clicking one of the small icons below the query editor. Some of these views include a regular table (as seen in the previous image), a bar chart, a pie chart and others.

Now that we know what the most common words are, let's do the same with the hashtags.

The next image shows the code used to get the hashtags. Most of the actions performed at this step are similar to those used to find the most common words; the exception is that I used a regex to remove the special characters that follow a hashtag. For example, someone on Twitter might write #thanksgiving! as one word, but Twitter does not allow special characters in hashtags, so the hashtag is just #thanksgiving; the ! is simply a normal character of the tweet.
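In spirit, the step looks like this (a sketch; the exact regex is an assumption):

```python
import re

# Keep tokens that start with '#', then strip anything that is not a
# word character or '#' (e.g. '#thanksgiving!' -> '#thanksgiving').
hashtags = (words.filter(lambda w: w.startswith("#"))
                 .map(lambda w: re.sub(r"[^#\w]", "", w.lower()))
                 .map(lambda h: (h, 1))
                 .reduceByKey(lambda a, b: a + b))
hashtags.takeOrdered(10, key=lambda x: -x[1])
```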

[Screenshot: hashtag extraction code]

These are the top 10 hashtags:

[Screenshot: top 10 hashtags]

There is one hashtag on this list that feels out of place. Do you agree? It is the hashtag #revealed. This hashtag belongs to a tweet from an account called @Drudge_Report that starts like this: "#REVEALED: What Your #Thanksgiving Feast Does to Your Organs... Avg #American Will Consume 4,500 Cals...". Mystery solved.

[Screenshot: the @Drudge_Report tweet]

So far we have discovered the top words and the top hashtags; now it is time to find the most common mentions.

[Screenshot: most common mentions]
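This step presumably mirrors the hashtag one, this time filtering for tokens that start with @; a quick sketch:

```python
# Keep '@' tokens (reusing the `words` RDD from earlier) and count them.
mentions = (words.filter(lambda w: w.startswith("@"))
                 .map(lambda m: (m, 1))
                 .reduceByKey(lambda a, b: a + b))
mentions.takeOrdered(10, key=lambda x: -x[1])
```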

The last section of this report is related to the retweets. As I did before with the hashtags and mentions, I looked for the five most common retweets, and the number of times they were reposted.

The process behind this was a bit different than what I did before, mostly because I had to check each tweet to verify that it was, in fact, a retweet. Thus I called the map function on my original dataframe, and checked whether the first characters of the tweet are RT @. The result of this map is a new RDD made of 3-element tuples: the tweet, its retweet status, and its length (you will see why soon).
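In code, that check is roughly this (a sketch of the tuple layout described above):

```python
# Tag every tweet with whether it is a retweet and with its length,
# producing (tweet, is_retweet, length) tuples.
tagged = tweets.map(lambda t: (t, t.startswith("RT @"), len(t)))
```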

[Screenshot: tagging each tweet with its retweet status and length]

A second dataframe was created with just those tweets that are retweets. They were then counted using reduceByKey.
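Sketched out, the filtering and counting could look like this (names are assumptions):

```python
# Keep only the retweets, then count how many times each one appears.
retweets = tagged.filter(lambda x: x[1]).map(lambda x: (x[0], 1))
retweet_counts = retweets.reduceByKey(lambda a, b: a + b)

# The five most reposted tweets.
retweet_counts.takeOrdered(5, key=lambda x: -x[1])
```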

[Screenshot: counting and ranking the retweets]

Note: some of these retweets had URLs in them, which I removed.

BONUS! This is why I added the length of the tweets.

[Screenshot: summary statistics of the tweet lengths]
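Those summary numbers come essentially for free from Spark; a sketch using the lengths stored in the tuples:

```python
# Pull out the third tuple element (the length) and summarize it.
lengths = tagged.map(lambda x: x[2])
print(lengths.stats())  # count, mean, stdev, min, max
```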

Conclusion

Turkey, Macy's Parade, One Direction and good times: this is what was revealed by the tweets I showed in this report. Moreover, I had the chance to try Zeppelin, which I found really intuitive and easy to use. The scripts, an export of the Zeppelin notebook, and the dataset are available at the repository of this project (link is at the footer of this page).