Internet Bots Fight Each Other Because They're All Too Human
No one saw the crisis coming: a coordinated vandalistic effort to insert Squidward references into articles totally unrelated to Squidward. In 2006, Wikipedia was really starting to get going, and it couldn't afford to have any SpongeBob SquarePants-related high jinks sullying the site's growing reputation. It was an embarrassment. Someone had to stop Squidward.
The Wikipedia community knew it couldn't possibly mobilize human editors to face down the trolls; the onslaught was too great, the work too tedious. So instead an admin cobbled together a bot that automatically flagged errant insertions of the Cephalopod Who Shall Not Be Named. And it worked. Wikipedia beat back the Squidward threat, and in so doing fell into a powerful alliance with the bots. Today, hundreds of algorithmic assistants fight all manner of vandals, fix typos, and even create articles on their own. Wikipedia would be a mess without them.
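The article doesn't show the bot's actual code, but the core trick, scanning each incoming edit for a banned string while skipping pages where the term legitimately belongs, fits in a few lines. Here's a minimal Python sketch; the function name, pattern, and behavior are illustrative assumptions, not the real bot:

```python
import re

# A banned-term rule in the spirit of the anti-Squidward bot. The pattern
# and function name are illustrative assumptions, not the actual bot's code.
BANNED_TERM = re.compile(r"\bsquidward\b", re.IGNORECASE)

def flag_edit(article_title: str, added_text: str) -> bool:
    """Flag an edit that inserts the banned term into an unrelated article."""
    if BANNED_TERM.search(article_title):
        return False  # the term belongs on its own article; let it through
    return bool(BANNED_TERM.search(added_text))

print(flag_edit("Ricotta", "Squidward would love this cheese."))   # True: flag it
print(flag_edit("Squidward Tentacles", "He plays the clarinet."))  # False: fine
```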
But a funny thing happens when you lock a bunch of bots in a virtual room: Sometimes they don't get along. Sometimes a pair of bots will descend into a slapfight, overwriting each other's decisions thousands of times for years on end. According to a new study in PLOS ONE, it happens a lot. Why? Because no matter how cold and calculating bots may seem, they tend to act all too human. And these are the internet's nice, not-at-all racist bots. Imagine AI-powered personal digital assistants in the same room yelling at each other all day. Google Home versus Alexa, anyone?
On Wikipedia, bots handle the excruciatingly dull and monotonous work that would drive an army of human editors mad, if an army of editors could even keep up with it all. A bot does not tire. It does not get angry (well, at least not at humans). It's programmed for a task, and it sees to that task with a consistency and devotion humans can't match.
While disagreements between human Wikipedia editors tend to fizzle, fights between bots can drag on for months or years. The study found that bots are far more likely to argue than human editors on the English version of Wikipedia: Over the course of a decade, each bot overrode another bot an average of 105 times, compared to an average of three overrides per human editor. Bots get carried away because they simply don't know any better; they're just bits of code, after all.
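At bottom, the study's conflict metric is bookkeeping: for each ordered pair of editors, count how often one undid the other's work. A toy version of that tally in Python, assuming a simplified edit log (the record layout is hypothetical, not the study's actual data format):

```python
from collections import Counter

# Toy edit log: (article, reverting editor, reverted editor).
# This layout is a simplifying assumption for illustration.
edit_log = [
    ("Ricotta al forno", "Scepbot", "RussBot"),
    ("Ricotta al forno", "RussBot", "Scepbot"),
    ("UK", "Scepbot", "RussBot"),
]

# Tally how often each editor undid each other editor.
revert_counts = Counter(
    (reverter, reverted) for _, reverter, reverted in edit_log
)

for (reverter, reverted), n in revert_counts.most_common():
    print(f"{reverter} reverted {reverted}: {n} time(s)")
```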
But that doesn't mean they aren't trustworthy. Bots are handling relatively simple tasks like spellchecking, not making larger editorial decisions. Indeed, it's only because of the bots' work that human editors can concentrate on those big-picture problems at all. Still, when bots disagree, they don't rationally debate the way humans might. They're servants to their code. And their sheer reach, continuously scanning more than 5 million articles in the English Wikipedia alone, means they find plenty of problems to correct and potentially disagree on.
And bots do far more than their fair share of work. The number of human editors on the English Wikipedia may dwarf the number of bots, some 30,000 active meatspace editors versus about 300 active editors made purely out of code, but the bots are insanely productive contributors. "They're not even quite visible if you put them on a map among other editors," says the University of Oxford's Taha Yasseri, a co-author of the study. "But they do a lot. The proportion of all the edits done by robots in different languages would vary from 10 percent, up to 40, even 50 percent in certain language editions." Yet Wikipedia hasn't descended into a bloody bot battlefield. That's because humans closely monitor the bots, which do far more good than harm.
But bots inevitably collide, Yasseri contends. For example, the study found that over the course of three years, two bots that monitor for double redirects on Wikipedia had themselves quite the tiff. (A redirect happens when, for instance, a search for "UK" forwards you to the article for "United Kingdom." A double redirect is a redirect that forwards to another redirect, a big Wikipedia no-no.) Across some 1,800 articles, Scepbot reverted RussBot's edits a total of 1,031 times, while RussBot returned the favor 906 times. This happens because of discrepancies in naming conventions: RussBot, for instance, made "Ricotta al forno" redirect to "Ricotta cheese," when previously it redirected to "Ricotta." Then Scepbot came in and reverted that change.
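Fixing a double redirect is mechanical: follow the chain of redirects to its final target and repoint every intermediate page directly at it. A sketch of that idea, assuming redirects are modeled as a simple dict (real bots do this through the MediaWiki API; this is just the underlying logic):

```python
def collapse_redirects(redirects: dict) -> dict:
    """Repoint every redirect at the end of its chain, fixing double redirects."""
    fixed = {}
    for page, target in redirects.items():
        seen = {page}
        # Follow the chain until it leaves the redirect table or loops.
        while target in redirects and target not in seen:
            seen.add(target)
            target = redirects[target]
        fixed[page] = target
    return fixed

# "Ricotta al forno" -> "Ricotta" -> "Ricotta cheese" is a double redirect;
# after cleanup, both pages point straight at "Ricotta cheese".
redirects = {"Ricotta al forno": "Ricotta", "Ricotta": "Ricotta cheese"}
print(collapse_redirects(redirects))
# {'Ricotta al forno': 'Ricotta cheese', 'Ricotta': 'Ricotta cheese'}
```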
For its part, Wikipedia disputes that these bots are really fighting at all.
"If, for example, Scepbot had performed the original double-redirect cleanup and RussBot performed the second double-redirect cleanup, then it would appear that they are reverting each other," says Aaron Halfaker, principal research scientist at the Wikimedia Foundation. "But in reality, the bots are collaborating together to keep the redirect graph of the wiki clean."
Still, Halfaker acknowledges that bots reverting each other can look like conflict. "Say for example you might have an editor that wants to make sure that all the English language lists on Wikipedia use the Oxford comma, and another editor believes that we should not use the Oxford comma," he says. (Full disclosure: This writer believes the Oxford comma is essential and that anyone who doesn't use it is a barbarian.) But Wikipedia has a bot approval process to catch these sorts of things. "We're perfectly aware of which bots are running right now," he says.
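Two such rules undo each other mechanically, which is exactly the loop the study measured. A contrived sketch of that standoff, with each hypothetical bot reduced to a single regex substitution (both patterns handle only simple, single-word list items):

```python
import re

def add_oxford_comma(text: str) -> str:
    # "a, b and c" -> "a, b, and c"
    return re.sub(r"(, \w+) and ", r"\1, and ", text)

def drop_oxford_comma(text: str) -> str:
    # "a, b, and c" -> "a, b and c"
    return re.sub(r"(, \w+), and ", r"\1 and ", text)

text = "Bots fix typos, redirects and vandalism on Wikipedia."
for _ in range(3):
    text = add_oxford_comma(text)   # bot A "corrects" the page
    text = drop_oxford_comma(text)  # bot B "corrects" it right back
print(text)  # identical to the original: three pointless edit cycles
```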
Also, Wikipedians are at all times monitoring their bots. "People often imagine them as fully autonomous Terminator AI that are kind of floating through the Wikipedia ether and making all these autonomous decisions," says R. Stuart Geiger, a UC Berkeley data scientist who's worked with Wikipedia bots. "But for the most part a lot of these bots are relatively simple scripts that a human writes."
A human. Always a human. A bot expresses human ingenuity and human mistakes. The bot and its creator are, in an intimate sense, a hybrid organism. "Whenever you read about a bot in Wikipedia, think of that as a human," says Geiger. "A human who's got a computer that they never turn off, and they've got a power tool running on that computer. They can tweak the knobs, they can fiddle the words, they can say they want to replace X with Y."
On the all-too-human front, Yasseri's study also found cultural differences among the bot communities of different Wikipedia languages. "That was really interesting, because this is the same technology being used just in different environments, and being used by different people," says Yasseri. "Why should that lead to a big difference?" Bots in the German Wikipedia, for instance, argue relatively infrequently, while the Portuguese Wikipedia's bots took the prize for most contentious.
Those differences may seem trivial, but such insight has profound implications as AI burrows deeper and deeper into human society. Imagine how a self-driving car that's adapted to the insanity of the German Autobahn might interact with a self-driving car that's adapted to the relative calm of Portugal's roadways. The AI inside each has to make nice or risk killing the occupants. So the different ways bots interact on different versions of Wikipedia could foretell how AI-powered machines get along, or don't, in the near future.
And imagine AI from elsewhere on the internet, like Twitter, making its way into machines. Bots that spew fake news, that imitate Donald Trump, that harass Trump supporters. Unlike the benevolent bots of Wikipedia, these fool humans into thinking they're actually people. If you think Wikipedia bots squabbling is problematic, imagine machines with heads full of malevolent AI doing battle.
For now, though, the many bots of Wikipedia collaborate, clash, and keep Squidward in his place.