Building the bots that keep Wikipedia fresh

While we can all learn from Wikipedia's 40 million articles, government bot builders specifically can get a significant education by studying the creation, vetting and roles of the 1,601 bots that help maintain the site and interact with its more than 137,000 human editors.

Researchers at Stevens Institute of Technology classified the Wikipedia bots into nine roles and 25 associated functions with the goal of understanding what bots do now and what they might do in the future. Jeffrey Nickerson, professor and associate dean of research at Stevens School of Business, and an author of "The Roles Bots Play in Wikipedia," published in November 2019, likened the classification to the way humans talk about occupations and professions, the skills required to do them and the tasks that must be performed.

Each bot performs a unique job: some generate articles based on templates; some fix typos, spelling mistakes and errors in links; some identify spam, vandals or policy violations; some interact with humans by greeting newcomers, sending notifications or providing suggestions.
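Most of these jobs follow the same basic loop: load a page, apply one narrow, well-defined change, and save it with an edit summary that explains the change to human editors. Below is a minimal sketch of a typo-fixing bot in that style, using the Pywikibot library; the misspelling list and the page title are illustrative placeholders, not drawn from the report.

    import re
    import pywikibot

    # Illustrative misspelling map; a real fixer bot would run from a vetted list
    TYPOS = {"recieve": "receive", "seperate": "separate"}

    def fix_typos(page_title):
        site = pywikibot.Site("en", "wikipedia")  # English Wikipedia
        page = pywikibot.Page(site, page_title)
        text = page.text
        for wrong, right in TYPOS.items():
            # \b keeps the fix from touching longer words that contain the typo
            text = re.sub(rf"\b{wrong}\b", right, text)
        if text != page.text:
            page.text = text
            # The edit summary explains the change to human editors
            page.save(summary="Bot: fixing common misspellings")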

The nine main roles account for about 10% of all activity on the site and up to 88% of activity on subsections, such as the Wikidata platform, where more than 1,200 fixer bots have made a total of more than 80 million edits, according to the report.

Anyone can build a bot -- an automated, artificial intelligence-powered software tool -- for use in Wikipedia, but before it's deployed, it needs the blessing of the Bot Approval Group. Members determine what the bot will do and which pages it will touch, and they review a trial run of the bot on sample data. That may be all that's required, or the group may also ask to check the source code, Nickerson said. That entire process is public.

"It's a good place to start [for bot builders] because you can actually see it," Nickerson said. "You can see the bots that are successful, and you can see the conversations take place there, and you can see the way the developers of the bots actually talk to the end users."

Builders consider the risks and advantages of their bots, which functions to start with and which features to add later, and how their bot might interact with others that perform similar functions, for example, he said.

"There's this vetting of the bot," Nickerson said. "If the bot is going to do something fairly minor and not on very many pages, there may be less vetting than if the bot is going to create a whole bunch of new pages or is going to do a lot of edits."

Another feature of the Wikipedia bots is how they work with human editors. Often, editors create a bot to automate some of their editing processes, Nickerson said. Once they build it, they set it loose and check on it periodically. That frees the editors to do the work that most interests them, but they also become bot maintainers.

The subsection of Wikipedia called Wikidata, a collaboratively edited knowledge base of open-source data, is especially bot-intensive. The platform is a knowledge graph, meaning that "every piece of knowledge has a little fact involved and because of the way these are hooked together, the value of it can be a link to another fact, and essentially it forms a very, very large graph," Nickerson said.
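In graph terms, each Wikidata statement is a triple: an item, a property and a value, and because the value is often itself another item, the triples chain together into the larger graph Nickerson describes. A rough sketch of that structure in Python follows; Q42 ("Douglas Adams"), P31 ("instance of") and Q5 ("human") are real Wikidata identifiers, while the second triple and the traversal helper are just for illustration.

    # Each Wikidata statement is an (item, property, value) triple; when the
    # value is itself another item, the statements chain into one large graph.
    statements = [
        ("Q42", "P31", "Q5"),       # Douglas Adams -> instance of -> human
        ("Q5", "P279", "Q215627"),  # human -> subclass of -> person
    ]

    def neighbors(item, triples):
        """Items reachable from this item in one hop."""
        return [value for subj, _prop, value in triples if subj == item]

    print(neighbors("Q42", statements))  # ['Q5']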

Wikidata's factual information is used in knowledge production in Wikipedia articles, thanks to adviser and fixer bots. For example, when there's an election, the results will populate in Wikidata, and pages about a city's government will automatically update the name of the mayor by extracting election information from Wikidata.
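One way a bot can pull that kind of fact is through Wikidata's public SPARQL endpoint. The minimal sketch below, using the requests library, asks for a city's current head of government via the real property P6 ("head of government"); Q84 (London) stands in as an example city.

    import requests

    ENDPOINT = "https://query.wikidata.org/sparql"  # Wikidata's public query service

    # P6 is Wikidata's "head of government" property; Q84 is London (example)
    QUERY = """
    SELECT ?headLabel WHERE {
      wd:Q84 wdt:P6 ?head .
      SERVICE wikibase:label { bd:serviceParam wikibase:language "en". }
    }
    """

    resp = requests.get(ENDPOINT, params={"query": QUERY, "format": "json"})
    for row in resp.json()["results"]["bindings"]:
        print(row["headLabel"]["value"])  # e.g., the current mayor's name

A page-updating bot would compare that value against the name in the article text and rewrite it only when the two disagree.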

Bots' interactions with human editors are critical to the success of a website based on knowledge production. On Wikipedia, if someone makes an incorrect edit, a bot may reverse that change and explain what was wrong. Being corrected by a machine can be unpleasant, Nickerson said, but bots can also be diplomatic.

The researchers call these first- and second-order effects. The former are the knowledge artifacts the bots help protect or create, while the latter are the reactions they bring out in humans.

"They can actually pay attention to what people are interested in," he said. "They can be patient. They can direct somebody toward a page that they know with high probability is going to be the kind of page where that person can actually make an important contribution. The instinct of some people is to go to the pages that are actually very highly edited and very mature and try to make changes to those pages, and that's actually not the right place to start. The place to start is with a page that is newer and needs a particular kind of expertise."

When human editors have a positive interaction with bots right out of the gate, that helps with the cultural aspect of bot building. It also provides insight into what makes a bot successful -- a topic Nickerson plans to study more in the future.

Researchers at MIT, meanwhile, have developed a system to further automate the work done by Wikipedia's human editors. Rather than editors crafting updates, a text-generating system would take unstructured information and rewrite the entry in a humanlike fashion.

Unlike the rules-based bots on the site, MIT's bot takes as input "an outdated sentence from a Wikipedia article, plus a separate claim sentence that contains the updated and conflicting information," according to a report in MIT News. The system updates the facts but maintains the existing style and grammar. "That's an easy task for humans, but a novel one in machine learning," it added.

About the Author

Stephanie Kanowitz is a freelance writer based in northern Virginia.
