Researching Quantized Social Interaction

The House That Fox Built: Anonymous, Spectacle and Cycles of Amplification

My article “The House That Fox Built: Anonymous, Spectacle and Cycles of Amplification” was accepted for publication by the peer-reviewed journal Television & New Media. A pre-proof version is available for download here. Abstract:

This article focuses on 4chan’s /b/ board, a—if not the—pillar of online trolling activity. In addition to chronicling the history of the site, as well as the emergence of the nebulous collective known as Anonymous, the article considers the ways in which early media representations of and subsequent reactions to trolling behaviors on /b/ helped create and sustain an increasingly influential subculture. Echoing Stanley Cohen’s analysis of moral panics, the article goes on to postulate that trolls and mainstream media outlets, specifically Fox News, are locked in a cybernetic feedback loop predicated upon spectacle; each camp amplifies and builds upon the other’s reactions, thus entering into an unintended but highly synergistic congress.

Ilya Zhitomirskiy (1989 – 2011)

Spending enough time around technology, it becomes easy to slip into a habit of ascribing a certain volition to the Internet. “The Internet,” we say to each other, “interprets censorship as damage and routes around it.” Or, more boldly, “The Internet promotes innovation.” We even ask ourselves more abstractly, “What does technology want?”

But the “Internet” is not some willful thing in and of itself. The Internet is the aggregate of countless acts of human volition at every scale, from protocol, to platform, to user. Insofar as “the Internet” speaks to us through anything, it speaks through the words and deeds and values of people, not machines.

The ecosystem of our technology, and the ever shifting arena of freedoms within it, depends on those human choices, and not some magical inherent quality of the devices we use. The only question is: what kind of Internet do we want?

For us, Ilya and his work embodied those choices, or more accurately those values, upon which the Internet draws in its finest moments. Open. Dynamic. Collaborative.

We mourn the loss of an enormously generous friend, a brilliant collaborator, and an irreplaceable, wonderful force in strengthening the creative life of the web. We’ll miss you.

The Revolutions Were Tweeted:

Information Flows During the 2011 Tunisian and Egyptian Revolutions

By Gilad Lotan, Erhardt Graeff, Mike Ananny, Devin Gaffney, Ian Pearce, and danah boyd

Web Ecology goes peer-review! In a new International Journal of Communication article, Web Ecologists Erhardt Graeff, Devin Gaffney, and Ian Pearce collaborated with friends of Web Ecology Gilad Lotan, Mike Ananny, and danah boyd on an analysis of Twitter data from the Arab Spring. Here is the abstract:

This article details the networked production and dissemination of news on Twitter during snapshots of the 2011 Tunisian and Egyptian Revolutions as seen through information flows—sets of near-duplicate tweets—across activists, bloggers, journalists, mainstream media outlets, and other engaged participants. We differentiate between these user types and analyze patterns of sourcing and routing information among them. We describe the symbiotic relationship between media outlets and individuals and the distinct roles particular user types appear to play. Using this analysis, we discuss how Twitter plays a key role in amplifying and spreading timely information across the globe.
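The “information flow” unit the abstract describes, a set of near-duplicate tweets, can be approximated in a few lines. Here is a minimal sketch; the normalization rules and the sample tweets are illustrative, not the authors’ actual pipeline:

```python
import re
from collections import defaultdict

def normalize(tweet):
    """Strip RT markers, @-mentions, URLs, and punctuation so that
    near-duplicate tweets collapse onto the same key."""
    text = tweet.lower()
    text = re.sub(r"\brt\b", "", text)         # retweet marker
    text = re.sub(r"@\w+", "", text)           # mentions
    text = re.sub(r"https?://\S+", "", text)   # links
    text = re.sub(r"[^\w\s#]", "", text)       # punctuation (keep hashtags)
    return " ".join(text.split())

def group_flows(tweets):
    """Map each normalized form to the list of (user, tweet) that share it.
    An information flow = two or more near-duplicate tweets."""
    flows = defaultdict(list)
    for user, tweet in tweets:
        flows[normalize(tweet)].append((user, tweet))
    return {k: v for k, v in flows.items() if len(v) >= 2}

sample = [
    ("alice", "RT @bob: Protests growing in Tunis http://t.co/x1"),
    ("bob",   "Protests growing in Tunis"),
    ("carol", "Totally unrelated tweet"),
]
flows = group_flows(sample)
```

A real pipeline would need fuzzier matching (the paper’s flows tolerate small edits), but exact-match-after-normalization already captures retweet chains.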

You can download and read the full article (open access) in PDF format from the IJOC website:

Lead author Gilad Lotan also produced an online data navigator to accompany the article:

Archiving Internet Subculture: Encyclopedia Dramatica

EDIT: As pointed out in the comments, we originally wrote 4901 articles instead of the actual total of 9401 articles.

The Web Ecology Project is dedicated to the preservation of digital culture and folklore. In a recent talk about the Archive Team, Jason Scott elucidated the usual strategy that companies employ for dealing with digital artifacts, platforms, and communities:

Disenfranchise. Cut off any amount of support or awareness by users of their environment and what they are putting their lives into.

Demean. When a site falls out of favor, act like it’s an electronic ghetto, not worth consideration as a valid entity. Think Friendster, orkut, myspace, geocities and a dozen others. Say their name in the company of people who understand the technical issues, and they snort. For a lot of people, these sites are parties, and the party is over.

Delete. Give a random amount of warning, and I mean, it really is completely arbitrary and made up, and then delete, with no recourse, nobody to ask for a copy, nobody to contact to retrieve your lost data, your husband’s history, your child’s photos. I’ve seen periods as long as a year and as short as 48 hours. There’s nothing, no standardization, no agreed upon procedure for decommissioning these sites. It’s all just being made up as it goes along.

Recently, Encyclopedia Dramatica (ED) — a wiki dedicated to the archiving of -chan subculture, celebrity, and the lulz — was removed from its servers with no effort to preserve the information contained within. While it has been replaced with a new wiki, we at the Web Ecology Project remain disheartened that no opportunity for the preservation of ED was offered, nor any warning given.

Luckily, during a recent Web Ecology Camp in mid-February 2011, researchers Seth Woodworth and Alex Leavitt — during a scoping session for a project on Anonymous and Operation Payback — scraped ED and downloaded the textual elements of the wiki. We currently possess .txt files detailing the wiki markup used in the 9401 pages of ED (total at the time of collection), including links and records of images (though we do not possess the actual image files; we also do not have the edit histories, discussion pages, or user pages).
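Extracting the link and image records mentioned above comes down to parsing the raw wiki markup. A minimal sketch, where the regexes and the sample page are illustrative rather than the actual collection code:

```python
import re

def extract_refs(wikitext):
    """Pull internal links and image/file references out of raw
    MediaWiki markup of the kind stored in the .txt archive."""
    # [[Target]] or [[Target|label]], excluding file/image inclusions
    links = re.findall(r"\[\[(?!File:|Image:)([^|\]]+)", wikitext)
    # [[File:name.jpg|...]] or [[Image:name.jpg|...]]
    images = re.findall(r"\[\[(?:File|Image):([^|\]]+)", wikitext)
    return links, images

page = "The [[lulz]] were had. [[Image:Epic_win.jpg|thumb]] See also [[Anonymous|anon]]."
links, images = extract_refs(page)
```

Running something like this over all 9401 files yields the link graph and the image inventory, even though the image binaries themselves were not captured.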

Taking a cue from Archive Team, “we are going to rescue your shit.” For the betterment of culture and research, you can find a link to a .zip that contains all 9401 .txt files, the archive of Encyclopedia Dramatica, below.

Complete Source Code From Socialbots 2011

As promised, we’ve spent the last week or so getting the code from all the Socialbots 2011 teams cleaned up and ready for public release. So, for all you programming-inclined folks out there, we’re happy to say that you can now play from the comfort of your own home and rig up your own automated social-influencing bot. Fun! Links to tarballs follow:

Team C’s bot works on a clever interlocking system of follows and a pre-stored database of generic responses and questions to stimulate target activity. Given its performance in the game, we’ve clocked the social horsepower of this bot at an average of ~8 follows/day (f/d) and ~14 responses/day (r/d).

Team B’s bot uses a copy-and-parrot structure to fill its content, and also has some neat attack countermeasures built into its codebase. Our measured performance shows about ~7 f/d, ~2 r/d on this one.

Team A’s bot has an alternative and very interesting following structure built into its logic. Over the course of the game, Team A was the winner in raw automated connection building. ~9 f/d, ~1 r/d.
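The follow-and-respond pattern common to all three bots can be sketched compactly. This is a toy illustration of the architecture, not any team’s actual code; the `DummyClient`, the canned reply lines, and the daily rate limit are all made-up stand-ins for a real Twitter API wrapper and tuned values:

```python
import random

CANNED_REPLIES = [
    "Interesting - what makes you say that?",
    "Ha, totally agree. What else are you into?",
]

class DummyClient:
    """Records API calls instead of hitting Twitter, so the loop is testable."""
    def __init__(self, mentions):
        self._mentions = mentions
        self.followed, self.replies = [], []
    def follow(self, user):
        self.followed.append(user)
    def mentions(self):
        return self._mentions
    def reply(self, mention, text):
        self.replies.append((mention, text))

def run_daily_cycle(client, targets, follows_per_day=8):
    """Follow a small daily batch of targets (to stay under rate limits and
    look organic), then answer any mentions with a generic prompt."""
    batch, remaining = targets[:follows_per_day], targets[follows_per_day:]
    for user in batch:
        client.follow(user)
    for mention in client.mentions():
        client.reply(mention, random.choice(CANNED_REPLIES))
    return remaining

client = DummyClient(mentions=["@target3 nice weather today"])
remaining = run_daily_cycle(client, [f"target{i}" for i in range(20)])
```

Swapping `DummyClient` for a real API client is where the released tarballs differ: Team C invests in the response database, Team B in parroted content and countermeasures, Team A in the following structure.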

Enjoy! All code from the competition has been licensed under the terms of the permissive MIT License. So feel free to tweak and share as you wish – give a holler if we can be of help on anything!

Socialbots: The End-Game

With the stroke of midnight on Sunday, the first Socialbots competition has officially ended. It’s been a crazy last 48 hours. At the last count, the final scores (and how they broke down) were:

  • Team C: 701 Points (107 Mutuals, 198 Responses)
  • Team B: 183 Points (99 Mutuals, 28 Responses)
  • Team A: 170 Points (119 Mutuals, 17 Responses)

That makes Team C the winner of the first-ever Socialbots Cup. Congratulations!

You also read those stats right. In under a week, Team C’s bot was able to generate close to 200 responses from the target network, with conversations ranging from a few back-and-forth tweets to lengthy interchanges between the bot and the targets. Interestingly, mutual followbacks, which were such a strong source of points in Round One, figured less prominently in Round Two as teams optimized to drive interactions.
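As a back-of-the-envelope check, the final totals are consistent with a simple weighting of 1 point per mutual follow-back and 3 points per response. That weighting is our inference from the published numbers, not an announced rubric:

```python
# Inferred weighting: the published totals fit 1 point per mutual
# follow-back and 3 points per response (an assumption, not a rulebook).
def score(mutuals, responses, mutual_pts=1, response_pts=3):
    return mutuals * mutual_pts + responses * response_pts

finals = {
    "Team C": (107, 198),   # published total: 701
    "Team B": (99, 28),     # published total: 183
    "Team A": (119, 17),    # published total: 170
}
scores = {team: score(m, r) for team, (m, r) in finals.items()}
```

All three reported totals fall out exactly, which also explains why optimizing for responses rather than followbacks paid off in Round Two.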

In any case, beyond the raw counts of mutual follows and responses, the proof is really in the pudding. The network graph shows the enormous change in the configuration of the target network since we first got started many moons ago. The bots have increasingly been able to carve out their own independent community, as seen in the clustering of targets away from the established tightly-knit networks and toward the bots themselves.
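One crude way to quantify that drift, for readers replaying the data dump, is to measure what share of each target’s ties point at bots. A toy sketch with a made-up edge list and node names:

```python
from collections import defaultdict

def bot_affinity(edges, bots):
    """For each non-bot node, the fraction of its ties that go to bots:
    a rough proxy for how far it has drifted into the bot community."""
    neighbors = defaultdict(set)
    for a, b in edges:
        neighbors[a].add(b)
        neighbors[b].add(a)
    return {
        node: len(nbrs & bots) / len(nbrs)
        for node, nbrs in neighbors.items()
        if node not in bots
    }

# Hypothetical mini-network: one bot, three targets.
edges = [("bot1", "t1"), ("bot1", "t2"), ("t1", "t2"), ("t2", "t3")]
affinity = bot_affinity(edges, {"bot1"})
```

Targets clustering toward the bots, as in the graph described above, would show up as affinity scores climbing over the course of the game.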

If you’re around in San Francisco, I’ll be giving a brief talk @ Ignite Bay Area tomorrow night about what we’ve learned so far, and where we’re looking to take this into the future.

There will be two exciting things coming out of this in the coming week or so, so you should stay tuned:

  • The bot code from all the teams, so you can tweak and play at home.
  • And, we’ve prepared a full data dump of all the target and bot activity: every single tweet from the two-week competition. We’ll be distributing this on a researcher-by-researcher basis, so drop us a line if you’re interested.

In the end, I’ve been thrilled by how this turned out — enormous public thanks to Ian Pearce for lending some tech firepower on this one, and congratulations to all the teams who put in such hard work competing this game! Hope to see you on the social battlegrounds soon enough.

Socialbots: Round Two, Day Four

Hey all, here’s the latest skinny on what’s going on in the world of Socialbots:

  • Team A: 148 (+4)
  • Team B: 122 (+26)
  • Team C: 480 (+119)

And the obligatory network graph. It appears that Team B’s dastardly counterstrikes to undermine the other teams are having some effect, as Team A’s growth has slowed somewhat in the last 24-48 hours. However, these counter-strategies seem to have had less of an effect on Team C, as they have enjoyed the largest single-day growth seen yet in the course of the game — which has been driven almost entirely by responses from the targets.

Socialbots: Round Two So Far

Hey readers! Things have been absolutely nuts here at socialbots mission control, but wanted to take a second to drop the latest update on how the competition is going. As we wrote earlier, Monday officially commenced the second round of the competition. Over the weekend, the teams had a chance to revise their bots based on their performance in Round One and launch additional bots. Teams A and C updated their code, and Team B additionally became a swarm bot (a lead bot with supporting bots).

We’re currently at the third day of Round Two — which will end on Day Seven (Sunday). The current scores as they stand are:

  • Team A: 144
  • Team B: 96
  • Team C: 361

Both Teams B and C have deployed new architectures for their bots, trying to maximize the social response from the target network and make their bots more proactive in engaging the targets. This has led to gigantic growth in points: although Team A now outpaces Team C in raw follow-backs (117 v. 107), a huge point lead is opening up for bots optimized to elicit responses (Team C has logged 86 responses to Team A’s 9).

Team B’s approach of appropriating other users’ content to fuel interactions between bot and target has been relatively less successful, with only 9 responses racked up so far. However, optimizations to its connection-building mechanism have fared significantly better, showing a 500% increase in points over Round One in the three days since Round Two began.

It should also be reported that Team B has deployed some counter-measures against Team C using its supporting swarm, with bots attempting to erode Team C’s credibility among its followers. While this hasn’t slowed Team C’s point growth to date, there are still a few more days left in Round Two…

As per usual — the full network graph, now color coded so it’s easier to tell what’s going on. We’re playing around with some more visualizations to make it easier to see how much social “territory” the teams have captured.

Socialbots: Day Five

Phew — so, this competition just got a helluva lot tighter:

  • Team A: 84 (+17)
  • Team B: 12 (+3)
  • Team C: 127 (+10)

Team A continues its strong rate of growth late into Round One, edging in on the early lead Team C has enjoyed for the majority of the game. At this point, the point difference between Team A and Team C turns largely on the number of @-replies the bots have received, with Team C holding a slight edge at the moment. Here’s the latest network graph showing the state of the battlefield.

As mentioned in our last post, this Sunday is “Tweak Day” — essentially a 24-hour period where teams will get a chance to freely update their bots, launch new bots, and (potentially) switch their lead bot. No points will be scored during this time, though we’ll be reporting on the latest score going into Round 2 next week.

Socialbots: Day Three and Four

Hey readers! Sorry about the delay in posting, been nuts here at Socialbots 2011 mission control. Here’s what’s been happening for the past two days.

Day Three:

  • Team A – 36 (+23 over Day 2)
  • Team B – 8 (+3 over Day 2)
  • Team C – 107 (+7 over Day 2)

Day Four:

  • Team A — 67 (+31)
  • Team B — 9 (+1)
  • Team C — 117 (+10)

Generally, we’re seeing a strong surge in points from Team A, driven almost entirely by points gained through follower acquisition. As per usual, you can see the current network graph here, showing all three teams’ bots becoming pretty deeply embedded within the target battlefield.

At this point in the game we’re running towards the end of Round One. This Sunday, teams will have a chance to tweak the code running on their bots, launch/decommission bots, and implement strategies to aid themselves (or hinder their opponents) before the start of Round Two on Monday next week.

This is significant: prior to the launch of the competition, teams did not know the identities of the other bots on the battlefield. Having had a week to observe the performance and behavior of the other teams’ lead bots, they’ll be able to adapt their strategies accordingly. We’re hoping to see some sparks fly. Keep your eyes on this space!