The main technical component of my research was going to be a piece of Python code that could be run to scrape tweets mentioning the cafe I was researching, over the course of the month.
Then Twitter changed its API and its authorisation system, and those changes were fundamental enough that my code would have to be substantially rewritten to work. You can read more about the changes on Twitter’s developer blog, along with some FAQs about the API changes. In practice, they meant I would have to sign up to Twitter as a developer (instead of just using my log-in details) and change my Python code to authenticate accordingly.
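To give a sense of what the authorisation change involved, here is a minimal sketch of how a search request would have to be built under the new regime. It assumes app-only (bearer token) authorisation against the v1.1 search endpoint; the token value and the cafe handle are placeholders, not details from the actual project.

```python
import urllib.parse

# Placeholder credential -- in a real script this would come from the
# Twitter developer account registered for the project.
BEARER_TOKEN = "YOUR_BEARER_TOKEN"
SEARCH_URL = "https://api.twitter.com/1.1/search/tweets.json"

def build_search_request(query, count=100):
    """Build the URL and headers for an authenticated tweet search.

    Under the changed API every request must carry developer credentials;
    here that is an app-only bearer token rather than a personal log-in.
    """
    params = urllib.parse.urlencode({"q": query, "count": count})
    url = f"{SEARCH_URL}?{params}"
    headers = {"Authorization": f"Bearer {BEARER_TOKEN}"}
    return url, headers

# "@ExampleCafe" is a hypothetical handle standing in for the real cafe.
url, headers = build_search_request("@ExampleCafe")
```

Even this small sketch shows why the change mattered: the authorisation step has to be woven into every request, which is the kind of extra plumbing that is hard to explain concisely in a short project write-up.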
Because the research project was only three months long, with one month of that dedicated to analysing and writing up my findings, I wanted to find another solution that would still work but whose technical elements could be easily explained.
Around the time that Twitter made its API changes, another scraping-related website made significant changes of its own. ScraperWiki was previously a website hosting libraries of code contributed by users wanting to scrape digital content. But ScraperWiki noticed that a lot of their users were writing code specifically to scrape Twitter for search terms, so they built a bespoke web tool to do just that, accumulating tweets for the user in either online or downloadable spreadsheets. The solution was found. I made contact with ScraperWiki’s product manager to make sure the tool worked how I thought it did, so I could write it up in my technical specification.
It’s not as technical a solution as I would have liked. I would love to have written a piece of code on my own and had it work. But the end result was more important – I needed data to support my theory of Twitter’s role in independent cafes – and that’s how I achieved it.