Critical Code Project: IML 203

Emily Billow
12 min read · May 6, 2019

#FleetingSupport: An Intervention into Twitter’s API

THE PROJECT

For this project, my team and I wanted to compare the prevalence of tweets and hashtags from various movements and events during specific time periods.

The goal of this intervention was to represent the “fleeting” changes and fluctuations in trends within Twitter’s fast-paced online culture. We aimed to look not only at the rapid, widespread support that develops around news and movements, but also at how quickly that coverage and support can die out. We wanted to see the disconnect between stories that quickly go viral with a popular hashtag and stay there, those that blow up quickly but fade soon after, and those that gain little online support in the first place.

Our goal was to extract this data from Twitter and visualize fluctuations and hashtag trends that wouldn’t normally be visible just by browsing Twitter’s platform. We therefore set out to use Twitter’s API to create unique artistic visualizations and expressions of this #FleetingSupport. One hope for the end product was to make people feel like they are playing a part in these changes and to reinforce the empowerment of the movement. Part of online culture is exactly that: online. For people who can’t attend events or marches in person, these visualizations are additional portals, both informational and aesthetic, into that world.

IMPORTANCE

In our current society, social media plays a large and ever-increasing role in the lives of just about everyone. Twitter specifically is an integral platform for the exposure of stories, the popularity of news, and the support of social movements. Its hashtag feature is one of the biggest and fastest ways that information and voices of support spread from person to person and across all corners of the globe.

This project relies on those hashtags, as we aimed to visualize the online support for different events as well as to see how quickly support for one thing dies out with the emergence of another. In just one day, or even a few hours, the most popular hashtag of the past three weeks can be overshadowed by something new. Topics and news are dropped and picked up in the blink of an eye, and we thought it important to represent that.

Due to the fast pace and limited interface of social media, and of Twitter in particular, these shifts are difficult to notice. That is why it was important for us to show audiences these drastic changes in an aesthetically consumable way. We hope that the visualizations we created show just how powerful social media is in determining where our attention is directed, and for how long.

INSPIRATION

Jer Thorp, a data artist working primarily in visualizations, was our main influence and inspiration in this process. Not only are his creations stunningly beautiful and intricate, but his work has real meaning behind it. We wanted to work in his realm because he focuses on the boundaries between data, art, and culture. His work carries a human, cultural element, and that is a big part of our own project. We aren’t just looking at random numbers from Twitter; we are analyzing popularity trends driven by individual humans actually showing interest in a hashtag or event. The culture of the Twittersphere and of the real world is imbued in our project.

Jer Thorp’s Data Visualizations

PROCESS

Our workflow involved collecting actual data from Twitter’s Application Programming Interface, gathering that data into a usable file, importing the data into a digital art coding platform, and using that language to create these tweet visualizations.

Above is our diagram of the step-by-step process. First we used Tweepy with Twitter’s API to scrape tweets containing specific hashtags. Then we used a Python library called Pandas to gather the number and text of the tweets, subsequently exporting that data into a CSV file. Once in a readable CSV format, the file can be imported into Processing (it does not have to be in TSV format). We chose Processing because the majority of our team was familiar with its language and it is one of the main programs used to create data visualizations.
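Our exact scraping script isn’t shown here, but the Pandas step can be sketched roughly as follows. The column names, sample rows, and aggregation are my own illustrative assumptions; the only real constraint, coming from the Processing sketch later on, is that the exported CSV has a header row with a "Tweets" count column.

```python
import pandas as pd

# Hypothetical results from the Tweepy scrape: one (hashtag, text) pair per tweet
scraped = [
    ("#SriLanka", "Thoughts with everyone affected..."),
    ("#SriLanka", "Breaking news from Colombo..."),
    ("#Flint", "Still no clean water..."),
]

df = pd.DataFrame(scraped, columns=["Hashtag", "Text"])

# Count tweets per hashtag and export the CSV that Processing will import
counts = df.groupby("Hashtag").size().reset_index(name="Tweets")
counts.to_csv("data.csv", index=False)
```

The same idea scales to the full scrape: however the tweets are collected, they end up as rows in a DataFrame, and `to_csv` produces the header-plus-rows format that Processing’s `loadTable(..., "header")` expects.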

Although I did not spearhead the data scraping, I do know that we had many problems at the start. The scraping program (Tweepy) was returning lots of ‘null’ data, meaning some important fields were missing or not collected. We worked on this problem for the first few days but ended up changing our method slightly: by changing the hashtags we searched for and limiting the time frame the scrape gathered from, the queries returned complete, usable data. Below is a list of the hashtags we used along with their respective collection windows.
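One common way to weed out such null rows, assuming the scrape lands in a Pandas DataFrame (the column names and sample data below are hypothetical), is `dropna`:

```python
import pandas as pd

# Hypothetical raw scrape with missing ("null") fields
raw = pd.DataFrame({
    "Hashtag": ["#Flint", "#Flint", None],
    "Text": ["tweet one", None, "tweet three"],
})

# Keep only rows where every field was actually collected
clean = raw.dropna()
print(len(clean))  # number of rows with complete data
```

Filtering like this only treats the symptom, of course; narrowing the search terms and time window, as we did, addresses why the incomplete records come back in the first place.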

▪︎ #SriLanka — April 21, 2019 to April 22, 2019.
▪︎ #Flint — April 15, 2019 to April 21, 2019.
▪︎ #WomensMarch — April 11, 2019 to April 21, 2019.
▪︎ #TheyAreUs — April 11, 2019 to April 21, 2019.
▪︎ #NotreDame — April 15, 2019 to April 21, 2019.

We chose hashtags and movements that were a mix of recent events and older trends. Regardless of when a hashtag originated, we chose things that were big in the media and popular on Twitter at some point. At the time of the scraping, Notre Dame, They Are Us (the hashtag for the Christchurch shooting in New Zealand), and Sri Lanka were the most recent events.

THE CODE

Each team member had the responsibility of creating their own data visualization from the same data we had all collected. While we used the same data set and had talked about the same inspirations, like Jer Thorp, our final products came out quite differently. Due to our differences in coding ability and knowledge of the Processing language itself, the visualizations ended up taking different forms than we expected.

My own work definitely differed from the beautiful lines and shapes that we admired from Jer Thorp. Since I had extremely limited knowledge of Processing, I decided to take a slightly different route. One of my goals articulated in the group presentation was to create the CSV and learn how to import that into Processing, which was successful. After that, however, my individual goals morphed a bit. I set out to learn more basics with Processing and I also wanted to incorporate images. I knew my piece was no longer going to aim to be a beautiful expression of data lines and shapes because it just wasn’t realistic for me.

void setup() {
  size(1100, 650);
  background(41, 42, 51);
}

void draw() {
  // Load the file into a table; "header" means the first line is a header row
  Table tabela = loadTable("data.csv", "header");
  // Loop over every row in the table (one row per hashtag)
  for (int i = 0; i < tabela.getRowCount(); i = i + 1) {
    // Select the row corresponding to the current count (i)
    TableRow linha = tabela.getRow(i);
    // Position each box 210 pixels apart along the top of the window
    int posX = 30 + 210*i;
    int posY = 35;

    // Variable for size
    float sz = 200;
    stroke(0, 38, 77);
    rect(posX, posY, sz, sz);
    // Draw one point for every five tweets, scattered inside the box
    for (int j = 0; j < linha.getInt("Tweets"); j = j + 5) {
      point(random(posX, posX + sz), random(posY, posY + sz));
    }
  }
}

After the initial importing and setup of the basic code, which can be seen above, my work within the program was challenging and frustrating. I ran into countless problems as I coded, some of them limitations of my own knowledge and some apparent Processing bugs. The biggest problem was my attempt to separate the rows from the data file. Once the CSV was imported, the data set behaved as if it were all one row, and any change made applied to every row. This should have been relatively easy to solve, but despite repeated efforts it would not work. The code below should have solved the problem by reading from rows 0, 1, 2, etc. in turn, but once the for loop and the getRow line were added, the tweet data all changed to match each other.

for (int i = 0; i < tabela.getRowCount(); i = i + 1) {
  TableRow linha = tabela.getRow(0);
  // code for rectangles and point positions omitted
}
for (int i = 0; i < tabela.getRowCount(); i = i + 1) {
  TableRow linha = tabela.getRow(1);
  // ...
}

In other words, the tweet data that should have looked different for each hashtag looked exactly the same, despite its inherent variation. I tried changing the row index for each getRow, making new CSV files and importing them again, and even starting a new sketch entirely, all to no avail. My team members were able to implement getRow with no problem, and upon investigation they could not tell why mine was not working either. Below is how it looked.

The data conformed to match each other: no distinct tweets, the same number of dots in every box.
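For reference, the behavior I was after, where each pass of the loop reads a distinct row, can be sketched in plain Python with the standard csv module (the inline data is a stand-in for our data.csv):

```python
import csv
import io

# A stand-in for data.csv: a header line plus one count per hashtag
data = "Hashtag,Tweets\n#SriLanka,65139\n#Flint,1773\n"

# DictReader yields one dictionary per row, keyed by the header line
rows = list(csv.DictReader(io.StringIO(data)))

# Each iteration sees a distinct row, analogous to tabela.getRow(i)
counts = [int(row["Tweets"]) for row in rows]
print(counts)  # distinct per-row values, not duplicates
```

This is exactly what `tabela.getRow(i)` inside the for loop should have given me in Processing: a different TableRow, and therefore a different tweet count, on every pass.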

After running into many problems and hours and hours of troubleshooting, I decided to trade the vision in my head for a working product. I had spent too much time making too little progress, so I morphed my visualization into something else that interested me. First I positioned my tweet data in separate boxes along the top of my Processing window. I labeled each box with its hashtag, the number of tweets, and the time window the data was gathered from. I then manipulated int j, the variable for the tweets themselves, in its for loop, and randomized the point positions (individual tweets) so that they continually appear in different places within their respective boxes. Subsequently I created semi-transparent rectangles over each tweet data box, which creates a nice fade-in and fade-out effect when moused on and off. I also made the text appear only when the mouse moves over that tweet box. These effects were done with if statements.

for (int j = 0; j < linha.getInt("Tweets"); j = j + 5) {
  point(random(posX, posX + sz), random(posY, posY + sz));
}
// Semi-transparent rectangles for the fade effect
fill(240, 240, 240, 10);
rect(30, 35, 200, 200);
fill(240, 240, 240, 10);
rect(240, 35, 200, 200);
fill(240, 240, 240, 10);
rect(450, 35, 200, 200);
fill(240, 240, 240, 10);
rect(660, 35, 200, 200);
fill(240, 240, 240, 10);
rect(870, 35, 200, 200);
// Mouse-over labels for each box
if (mouseX < 250 && mouseX > 0 && mouseY < 270) {
  noStroke();
  fill(218, 135, 24);
  textSize(12);
  text("April 21, 2019 - April 22, 2019", 35, 250);
  text("65,139 tweets", 35, 266);
}
if (mouseX > 250 && mouseX < 450 && mouseY < 270) {
  fill(218, 221, 255);
  textSize(12);
  text("April 15, 2019 - April 21, 2019", 245, 250);
  text("1,773 tweets", 245, 266);
}

I mentioned before that I wanted to incorporate images, and with my morphed vision I was now able to do so. This was a new concept for me, so I taught myself the code and used it to complement the data. Looking at the common hashtags we took from Twitter, I chose a widely used photo to go along with each event or movement. After manipulating the data into the format and position I wanted within my Processing window, I used mousePressed to show a corresponding image below when the data for that movement is clicked. I had some problems with this code as well, but this time I was able to work around the challenge. The approach I first wanted, where the image stays hidden until the user presses the mouse on the tweet box, was not working, so I wrote it a different way: instead of hiding the image until mousePressed, I used image position to create the same effect. The images start positioned below the Processing window, and so are essentially hidden; when the mouse is pressed, their y position changes so they move up into frame.

// Images start below the window, essentially hidden
image(img1, -30, 1385, width/3, height/3);
image(img2, 225, 1280, width/3.5, height/3.5);
image(img3, 385, 1410, width/3, height/3);
image(img4, 630, 1270, width/3.5, height/3.5);
image(img5, 790, 1418, width/3, height/3);
// When a box is clicked, redraw the corresponding image up in frame
if (mouseX < 250 && mouseX > 0 && mouseY < 270 && mousePressed) {
  image(img1, -30, 385, width/3, height/3);
}
if (mouseX > 250 && mouseX < 450 && mouseY < 270 && mousePressed) {
  image(img2, 225, 280, width/3.5, height/3.5);
}
if (mouseX > 450 && mouseX < 660 && mouseY < 270 && mousePressed) {
  image(img3, 385, 410, width/3, height/3);
}
if (mouseX > 660 && mouseX < 880 && mouseY < 270 && mousePressed) {
  image(img4, 630, 285, width/3.5, height/3.5);
}
if (mouseX < 1100 && mouseX > 880 && mouseY < 270 && mousePressed) {
  image(img5, 790, 418, width/3, height/3);
}

After this solution I was satisfied with the functioning code and changed only a few more things. I added a color to each tweet box corresponding to the main color in the scheme of the picture I had chosen for it. When moused over, the tweet boxes change color, but unfortunately they all change together due to my initial problem separating rows. If the getRow solution had worked, each box would change color individually.

FINAL PRODUCT

Here is a video of the final code working.

#FleetingSupport

This exhibits the mouse-over functions, color changes, and mousePressed images I was able to create. My visualization works: it shows the prevalence of different movements tweeted about over certain time frames. It is clear from the abundance of dots in the Sri Lanka and Notre Dame boxes that they had the most data at the time of the scraping. Women’s March and Flint are relatively low because they are not as current, but They Are Us brought different results than expected. Though it was still recent, it has the lowest tweet count of all the hashtags. This raises interesting questions about the media’s portrayal of news and even our desensitization to mass shootings. Whatever the reason for #TheyAreUs’s low tweet count, the project meets its primary goals. While I am happy with my final product, I only wish I could have gotten the getRow function to work. There were already very few ways to manipulate the code, and even fewer when I couldn’t change individual data sets. All in all, the project is successful, but there is much room for further exploration.

Here is the full functioning code for the visualization.

PImage img1;
PImage img2;
PImage img3;
PImage img4;
PImage img5;
Table tabela;
PFont mono;

void setup() {
  size(1100, 650);
  background(41, 42, 51);
  img1 = loadImage("srilanka.png");
  img2 = loadImage("flint.jpg");
  img3 = loadImage("womensmarch.jpg");
  img4 = loadImage("newzealand.jpg");
  img5 = loadImage("notredame.jpg");
  // Load the data file and font once here, rather than on every frame
  tabela = loadTable("data.csv", "header");
  mono = loadFont("DamascusBold-24.vlw");
}

void draw() {
  // Loop over every row in the table (one row per hashtag)
  for (int i = 0; i < tabela.getRowCount(); i = i + 1) {
    // Select the row corresponding to the current count (i)
    TableRow linha = tabela.getRow(i);
    // Position each box 210 pixels apart along the top of the window
    int posX = 30 + 210*i;
    int posY = 35;

    // Variable for size
    float sz = 200;
    stroke(0, 38, 77);
    rect(posX, posY, sz, sz);
    // Draw one point for every five tweets, scattered inside the box
    for (int j = 0; j < linha.getInt("Tweets"); j = j + 5) {
      point(random(posX, posX + sz), random(posY, posY + sz));
    }
    // Semi-transparent rectangles create the fade effect as points accumulate
    fill(240, 240, 240, 10);
    rect(30, 35, 200, 200);
    fill(240, 240, 240, 10);
    rect(240, 35, 200, 200);
    fill(240, 240, 240, 10);
    rect(450, 35, 200, 200);
    fill(240, 240, 240, 10);
    rect(660, 35, 200, 200);
    fill(240, 240, 240, 10);
    rect(870, 35, 200, 200);
    // Mouse-over labels for each box
    if (mouseX < 250 && mouseX > 0 && mouseY < 270) {
      noStroke();
      fill(218, 135, 24);
      textSize(12);
      text("April 21, 2019 - April 22, 2019", 35, 250);
      text("65,139 tweets", 35, 266);
    }
    if (mouseX > 250 && mouseX < 450 && mouseY < 270) {
      fill(218, 221, 255);
      textSize(12);
      text("April 15, 2019 - April 21, 2019", 245, 250);
      text("1,773 tweets", 245, 266);
    }
    if (mouseX > 450 && mouseX < 660 && mouseY < 270) {
      fill(241, 132, 206);
      textSize(12);
      text("April 11, 2019 - April 21, 2019", 452, 250);
      text("308 tweets", 452, 266);
    }
    if (mouseX > 660 && mouseX < 880 && mouseY < 270) {
      fill(110, 179, 109);
      textSize(12);
      text("April 11, 2019 - April 21, 2019", 662, 250);
      text("77 tweets", 662, 266);
    }
    if (mouseX < 1100 && mouseX > 880 && mouseY < 270) {
      fill(244, 90, 51);
      textSize(12);
      text("April 15, 2019 - April 21, 2019", 876, 250);
      text("30,642 tweets", 876, 266);
    }
    // Hashtag labels above each box
    textFont(mono);
    textSize(16);
    text("#SriLanka", 93, 30);
    text("#Flint", 319, 30);
    text("#WomensMarch", 486, 30);
    text("#TheyAreUs", 708, 30);
    text("#NotreDame", 917, 30);
  }

  // Images start below the window, essentially hidden
  image(img1, -30, 1385, width/3, height/3);
  image(img2, 225, 1280, width/3.5, height/3.5);
  image(img3, 385, 1410, width/3, height/3);
  image(img4, 630, 1270, width/3.5, height/3.5);
  image(img5, 790, 1418, width/3, height/3);
  // When a box is clicked, redraw the corresponding image up in frame
  if (mouseX < 250 && mouseX > 0 && mouseY < 270 && mousePressed) {
    image(img1, -30, 385, width/3, height/3);
  }
  if (mouseX > 250 && mouseX < 450 && mouseY < 270 && mousePressed) {
    image(img2, 225, 280, width/3.5, height/3.5);
  }
  if (mouseX > 450 && mouseX < 660 && mouseY < 270 && mousePressed) {
    image(img3, 385, 410, width/3, height/3);
  }
  if (mouseX > 660 && mouseX < 880 && mouseY < 270 && mousePressed) {
    image(img4, 630, 285, width/3.5, height/3.5);
  }
  if (mouseX < 1100 && mouseX > 880 && mouseY < 270 && mousePressed) {
    image(img5, 790, 418, width/3, height/3);
  }
}
