Archive for the ‘Wikipedia’ Category

An old Encyclopaedia Britannica is a work to cherish – The Spectator

All the Knowledge in the World: The Extraordinary History of the Encyclopaedia

Simon Garfield

Weidenfeld & Nicolson, pp. 352, £18.99

Two thousand years ago, a young Cilician named Oppian, wanting to rehabilitate his disgraced father, decided to write Halieutica, an account of the world of fishes, as a gift for Marcus Aurelius. It was a mixture of possible fact and definite fiction – if only there were octopuses that climb trees and fishes that fancy goats – and it was a success. His father was forgiven, and the son's written work accepted as authoritative knowledge. In short, although Wikipedia, the free encyclopaedia, calls Halieutica a "didactic epic", it was an early encyclopaedia – a word taken from the Greek enkyklios paideia, meaning "knowledge in the round", and which has come to denote a set of books that contains articles that can be cross-referenced, is in alphabetical order and is the author's view of what knowledge needs to be known and what unknowns need to remain unknown.

Simon Garfield does not write about Oppian (I mention him not for one-upmanship but because more people should know about a man who wrote that deer sailed the sea using their horns as sails). But this history of the encyclopaedia (and its future) does not lack for learned gentlemen and their learned books. This is definitely a man's world: in the first Encyclopaedia Britannica the definition of woman was "the female of man. See Homo", and things did not much improve until the 20th century.

And so many men: British, German, French, Chinese. Britannica was by no means the first. Garfield makes a convincing case for the encyclopaedic status of works by Pliny (who believed menstruating women can expel insects from the trees), Gervase of Tilbury, Isidore of Seville and (delightfully) a Herr Franckenstein, whose detailed medical entries instructed any prospective amputators of arms that the time needed for sawing through forearm bones was about the same needed to say the Lord's Prayer. All had ambition to encircle knowledge and transmit it to others, for the common good and for profit. All had elements of what Garfield calls the vast commitment required to make those volumes an "astonishing energy force" and the belief that such a thing will be worthwhile. Those who bought them did so in the hope of purchasing "perennial value".

And what value, sometimes. Britannica was founded in 1768 in Edinburgh, and its first compilers were not necessarily experts. Andrew Bell was an engraver with an unfeasibly large nose and William Smellie an ex-priest and polymath. The entries were in alphabetical order, a controversial decision that became the standard. The first volume covered Aa to Bzo, "a town of Africa, in the kingdom of Morocco". Expert contributions came from filleting published books, a common practice. See "Plagiary" in Ephraim Chambers's encyclopaedia of 1728, in which Chambers wrote that he could not be accused of author theft because "what they take from others they do it avowedly, and in the open sun. In effect, their quality gives them a title to everything that may be for their purpose, wherever they find it."

Denis Diderot's Encyclopédie, published between 1751 and 1772, instead had original writing from Voltaire ("Elegance", "History", "Taste" and others) and Jean-Jacques Rousseau, whose entry on political economy should be required reading for our current government: "It is one of the most important concerns of government to prevent the extreme inequality of fortunes... not by building hospitals for the poor but by guaranteeing that the citizens will not become poor."

An encyclopaedia was meant as reference, but also to be savoured. The 14th edition of Britannica (1929) featured Cecil B. DeMille on motion pictures and J.B. Priestley on English literature. It was, wrote Denis Boyles, "plausible, reasonable, unruffled, often reserved and completely authoritative". And sometimes plain wrong. Garfield reaps many pages out of the unsavoury views of the past, from awful entries on negroes and Hitler and homosexuality, even while believing that scholarship of any era is still scholarship. It is valuable to know what 1819 knew about Egypt, and what 1824 understood about James Watt.

Sometimes the book drags, weighed down by the encyclopaedic bounty. Turn the page and my heart sinks to find yet another set of learned gentlemen compiling yet another set of clever books. I think back to the entry on "Abridgement" in the first Britannica, written by the polymath Smellie, who attended many lectures. He wrote: "The art of conveying much sentiment in a few words is the happiest talent an author can be possessed of; and abridging is particularly useful in taking the substance of what is delivered by professors." Or authors attempting to be encyclopaedic about encyclopaedias. (Garfield states early on that this is not his intention; he will write only about those he judges "most significant or interesting, or indicative of a turning point in how we view the world".) Perhaps, then, this is a book to be used like an encyclopaedia: to be put down but always picked up again. To be read with pleasure, but not all at once.

Because it is a pleasure. Garfield writes fluidly, cheerily and charmingly, even while the breeziness does not detract from the scale of his ambition: to understand nothing less than humans' need for knowledge and how to convey and preserve it. When is knowledge a factoid? Who gets to be the gatekeeper? Who, in the words of Arthur Mee, the editor of the Children's Encyclopaedia, is "holding up the stars"?

Garfield's love for Wikipedia, dismissed by snobs but used by us all, is surprising but heartfelt. He believes in the democracy of input, and that errors are usually righted and that Wikipedia's gatekeeping works. (He also believes that people can't edit their own entries, but I corrected mine with no trouble, as it said I was American and I can't have that.)

Wikipedia is now the way of all knowledge and the printed encyclopaedia is doomed by its very structure. It can never know it all or show enough of what it knows. It can't hope to keep up with important developments in the world, nor take back what it said about Hitler or slavery. Endless editions, salesmen crisscrossing America selling expensive sets – none can compare with the speed of the click. Even so, Garfield concludes, there is still a place for Slow Books. A fine encyclopaedia will stand you in good stead like an old wristwatch: its timing may be out, and sometimes it may not work at all, but its mechanics will always intrigue.

Link:
An old Encyclopaedia Britannica is a work to cherish - The Spectator

2022 – Combatting Misinformation – The Seattle U Newsroom

Lydia Bello, science and engineering librarian, and Jennifer Bodley, adjunct librarian, collaborated on a blog article, "SIFT-ing Through Information Online," to provide the campus community with guidance on media literacy: engaging critically when receiving, finding, evaluating, using, creating and sharing media, particularly in an online environment. While some question the legitimacy of Wikipedia, learn how it's the librarians' first trusted line of defense in avoiding misinformation.

Q: What prompted you both to write a blog article that addresses media literacy and misinformation?

JB: We were asked to write about misinformation, mostly prompted by events happening in the world. I think at the time the Russia-Ukraine news cycle had started and ... COVID-19 has obviously been in the news cycle for a long time and there's a lot of other issues where misinformation is very problematic with information getting to the public.

LB: In the early days of the invasion of Ukraine, there was information flowing fast and furious, a lot of it misinformation or unverified, and a lot of people here [at SU] are directly impacted by those current events. Along with that, Jennifer had just finished teaching a workshop on these skills and it all sort of fell into place that we thought this might be a good time to offer a reminder.

JB: News is a fire hose, so whenever you have that fire hose obviously you don't have checks and balances and controlled dissemination of factual reporting.

Q: What are some of the sources where misinformation is the highest or spreads fastest (i.e., social media/specific platforms, news media, etc.)?

LB: Some of the really obvious places include social media designed to spread information quickly. We've all seen articles and research about the addictive nature of social media, about how it's designed to engage you in order to create advertising dollars. Because of that, it's a core place where information moves.

Misinformation and disinformation also move quickly when there's a strong sense of emotion attached to it. Emotions like fear or anger are ones that come to mind, but also vindication, satisfaction and a really strong desire to help. When those emotions are attached or are involved, they help move the flow of misinformation on these platforms really quickly as well.

JB: We're talking about social media being behind a lot of misinformation and disinformation, but the mainstream media also reports on social media and we also know that governing bodies and other institutions of power disseminate a lot of information through social media.

It's about figuring out what's okay. I can use the social media from this organization because it's a, quote, "good" organization. But then I'm supposed to be able to spot the bad information from this other social media channel.

We're in a flat environment (lacking indicators around credibility). Years ago, when you went to the checkout stand, you could tell what was a tabloid like The National Enquirer by the paper it was printed on, the colors used and the sensational headlines. There was a tactile or concrete way that you could process and evaluate. And right now, everything is just in this flat environment, so it's just that much harder to process.

Q: As librarians, how do you view your roles when it comes to combatting misinformation?

LB: One of the key parts of our jobs is helping our students, faculty and staff build skills to navigate the information environment (through courses, research services, etc.). The first thing you think of when you think of librarians is that we help students navigate the library and navigate the information we have in the library, which is its own type of complex information environment.

We see those skills transferring to teaching students how to navigate the world and the information environment outside of their assignments as well. Helping students build those skills and then also helping them understand that they need to be engaged with the information they see on a day-to-day basis, not necessarily as passive consumers.

We [as humans] don't innately know how to navigate information, and there's a lot of talk about people who have grown up around technology, but even young people don't innately know. It depends on who has access to what sort of technology growing up, and that's very financially based. It also depends on if you're actually taught those skills or not.

A good part of our job is explicitly teaching those skills and teaching them in such a way that they fit with their day-to-day lives.

JB: As librarians, we're teaching students particularly in content-related classes. When we teach students in an introductory chemistry or psychology class, we aren't working with domain experts [in those subjects]. Domain experts already know seminal works and know prominent, authoritative researchers and organizations within their domain who are disseminating information. Domain experts can go to these sources directly or see them quickly in search results. Domain novices don't have that head start when evaluating information. They have to evaluate a lot of unfamiliar and complex information with no specialized knowledge.

Take for example health information. A student could say, "I know the CDC, I understand the government structures, so I know that the CDC would potentially be a good source." Somebody else could say, "Oh, you know, doctor so-and-so has this blog, I think that would be a good source." This directly ties into what we teach them in the classroom and how they apply that in their lives outside the classroom.

Q: Anything you would like to highlight or expand on regarding Michael Caulfield's work/approach (SIFT Method, etc.)?

JB: Caulfield's approach is kind of simplistic, but he specifically created it so that you could use it in that flat environment. His method helps you recontextualize information.

LB: It's grounded in a Stanford Graduate School of Education study on how students navigate the credibility of information online. There have been updates to this research recently, but one of the original studies was from 2016.

Also, I want to emphasize the SIFT method isn't like a checklist or a long, arduous process.

It's supposed to be a quick fact-checking habit. It's designed to help you decide whether you want to spend more time on a source. It's supposed to be something that you can just build into your daily practice of consuming information on a day-to-day basis. A lot of times I'll investigate sources on Wikipedia for the original source if I've never heard of it before. Caulfield calls it the "Wikipedia Trick": checking to see what somebody says about a source and figuring out if it's a known site for misinformation.

Q: What are some ways people can spot and/or avoid misinformation?

LB: Because so many things around this flat environment are on the Internet, we're losing all these contextual clues and it's really easy to convince someone that something is true or something is fake. Known misinformation sites can look really well polished, with great web design and a really specific tone, and can mimic a well-known and respected scholarly article or source.

All of this is why having SIFT as a habit – knowing that it takes 30 seconds or less – is really helpful, so you don't waste time looking for clues or hints. Sometimes misinformation is not designed to be actively harmful – it's satire or something else that's been moved to a completely different context.

Q: Are there any additional points, resources or intersections of media literacy/misinformation research you would like to mention?

JB: This isn't going away anytime soon, or ever, so there's no way we can legislate our way out of this. Corporate responsibility is not going to get rid of this. Everything from the Australian wildfires to the war in Ukraine to school board meetings – I mean, there's absolutely nothing that's immune to misinformation.

To view the full Lemieux Library blog article, visit https://libguides.seattleu.edu/blog/SIFT-ing-Through-Information-Online.

See the rest here:
2022 - Combatting Misinformation - The Seattle U Newsroom - The Seattle U Newsroom - News, stories and more

Meta Is Building an AI to Fact-Check Wikipedia – All 6.5 Million Articles – Singularity Hub

Most people older than 30 probably remember doing research with good old-fashioned encyclopedias. You'd pull a heavy volume from the shelf, check the index for your topic of interest, then flip to the appropriate page and start reading. It wasn't as easy as typing a few words into the Google search bar, but on the plus side, you knew that the information you found in the pages of the Britannica or the World Book was accurate and true.

Not so with internet research today. The overwhelming multitude of sources was confusing enough, but add the proliferation of misinformation and it's a wonder any of us believe a word we read online.

Wikipedia is a case in point. As of early 2020, the site's English version was averaging about 255 million page views per day, making it the eighth-most-visited website on the internet. As of last month, it had moved up to spot number seven, and the English version currently has over 6.5 million articles.

But as high-traffic as this go-to information source may be, its accuracy leaves something to be desired; the page about the site's own reliability states, "The online encyclopedia does not consider itself to be reliable as a source and discourages readers from using it in academic or research settings."

Meta – formerly Facebook – wants to change this. In a blog post published last month, the company's employees describe how AI could help make Wikipedia more accurate.

Though tens of thousands of people participate in editing the site, the facts they add aren't necessarily correct; even when citations are present, they're not always accurate or even relevant.

Meta is developing a machine learning model that scans these citations and cross-references their content with Wikipedia articles to verify not only that the topics line up, but that the specific figures cited are accurate.

This isn't just a matter of picking out numbers and making sure they match; Meta's AI will need to "understand" the content of cited sources (though "understand" is a misnomer, as complexity theory researcher Melanie Mitchell would tell you, because AI is still in the "narrow" phase, meaning it's a tool for highly sophisticated pattern recognition, while "understanding" is a word used for human cognition, which is still a very different thing).

Meta's model will "understand" content not by comparing text strings and making sure they contain the same words, but by comparing mathematical representations of blocks of text, which it arrives at using natural language understanding (NLU) techniques.

"What we have done is to build an index of all these web pages by chunking them into passages and providing an accurate representation for each passage," Fabio Petroni, Meta's Fundamental AI Research tech lead manager, told Digital Trends. "That is not representing word-by-word the passage, but the meaning of the passage. That means that two chunks of text with similar meanings will be represented in a very close position in the resulting n-dimensional space where all these passages are stored."
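To make that idea concrete, here is a minimal sketch of the chunk-and-embed approach in Python, assuming an off-the-shelf sentence-embedding library (sentence-transformers) and a generic model; the passages are toy examples, and none of this is Meta's actual code:

from sentence_transformers import SentenceTransformer, util

# A generic sentence encoder (an assumption; Meta's actual encoder differs).
model = SentenceTransformer("all-MiniLM-L6-v2")

# Source pages would be chunked into passages; these are toy examples.
passages = [
    "Joe Hipp was the first Native American to fight for the WBA heavyweight title.",
    "The Blackfoot Confederacy is a collective of Indigenous nations.",
    "Pringles is a brand of stackable potato-based crisps.",
]

# Each passage becomes a vector; meaning, not exact wording, drives proximity.
passage_embeddings = model.encode(passages, convert_to_tensor=True)

claim = "Hipp was the first Native American boxer to challenge for a world heavyweight championship."
claim_embedding = model.encode(claim, convert_to_tensor=True)

# Similar meanings sit close together in the embedding space, so cosine
# similarity ranks which stored passage best supports the claim.
scores = util.cos_sim(claim_embedding, passage_embeddings)[0]
best = int(scores.argmax())
print(f"Best supporting passage ({scores[best]:.2f}): {passages[best]}")

Despite the different wording, the first passage should score highest, which is exactly the "meaning of the passage, not word-by-word" behavior Petroni describes.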

The AI is being trained on a set of four million Wikipedia citations, and besides picking out faulty citations on the site, its creators would like it to eventually be able to suggest accurate sources to take their place, pulling from a massive index of data that's continuously updating.

One big issue left to work out is a grading system for source reliability. A paper from a scientific journal, for example, would receive a higher grade than a blog post. The amount of content online is so vast and varied that you can find sources to support just about any claim, but parsing the misinformation from the disinformation (the former means incorrect, while the latter means deliberately deceiving), and the peer-reviewed from the non-peer-reviewed, the fact-checked from the hastily-slapped-together, is no small task – but a very important one when it comes to trust.
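In its simplest reading, such a grading system could weight a candidate source's relevance by a prior on its source type. The sketch below is purely illustrative (the weights, source types, and URLs are invented, not part of Meta's system):

# Hypothetical reliability priors: a journal outranks news, which outranks a blog.
SOURCE_PRIORS = {"journal": 1.0, "news": 0.7, "blog": 0.4}

def rank_sources(candidates):
    """candidates: list of (url, source_type, relevance in [0, 1]) tuples."""
    scored = [
        (relevance * SOURCE_PRIORS.get(source_type, 0.2), url)
        for url, source_type, relevance in candidates
    ]
    return sorted(scored, reverse=True)

candidates = [
    ("https://example.org/journal-paper", "journal", 0.80),
    ("https://example.com/blog-post", "blog", 0.95),
]
# The journal wins (0.80 weighted) despite the blog's higher raw relevance (0.38 weighted).
print(rank_sources(candidates))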

Meta has open-sourced its model, and those who are curious can see a demo of the verification tool. Meta's blog post noted that the company isn't partnering with Wikimedia on this project, and that it's still in the research phase and not currently being used to update content on Wikipedia.

If you imagine a not-too-distant future where everything you read on Wikipedia is accurate and reliable, wouldn't that make doing any sort of research a bit too easy? There's something valuable about checking and comparing various sources ourselves, is there not? It was a big leap to go from paging through heavy books to typing a few words into a search engine and hitting Enter; do we really want Wikipedia to move from a research jumping-off point to a gets-the-last-word source?

In any case, Meta's AI research team will continue working toward a tool to improve the online encyclopedia. "I think we were driven by curiosity at the end of the day," Petroni said. "We wanted to see what was the limit of this technology. We were absolutely not sure if [this AI] could do anything meaningful in this context. No one had ever tried to do something similar."

Image Credit: Gerd Altmann from Pixabay

Read more here:
Meta Is Building an AI to Fact-Check Wikipedia – All 6.5 Million Articles - Singularity Hub

Meta Wants to Fix Wikipedia's Biggest Problem Using AI – Review Geek


Despite the efforts of over 30 million editors, Wikipedia sure ain't perfect. Some information on Wikipedia lacks a genuine source or citation – as we learned with the Pringle Man hoax, this can have a wide-ranging impact on culture or facts. But Meta, formerly Facebook, hopes to solve Wikipedia's big problem with AI.

As detailed in a blog post and research paper, the Meta AI team created a dataset of over 134 million web pages to build a citation-checker AI – called SIDE. Using natural language technology, SIDE can analyze a Wikipedia citation and determine whether it's appropriate. It can also find new sources for information already published on Wikipedia.

Meta AI highlights the Blackfoot Confederacy Wikipedia article as an example of how SIDE can improve citations. If you scroll to the bottom of this article, you'll learn that Joe Hipp was the first Native American to compete for the WBA World Heavyweight Title – a cool fact that is 100% true. But here's the problem: whoever wrote this factoid cited a source that has nothing to do with Joe Hipp or the Blackfeet Tribe.

In this case, Wikipedia editors failed to check the veracity of a citation (the problem has since been fixed). But if the editors had SIDE, they could have caught the bad citation early. And they wouldn't need to look for a new citation, as SIDE would automatically suggest one.
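As a rough sketch of that flag-then-suggest workflow, the toy checker below uses TF-IDF cosine similarity as a crude stand-in for SIDE's neural passage representations; the threshold and texts are invented for illustration:

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def check_citation(claim, cited_text, candidate_sources, threshold=0.2):
    # Vectorize the claim, the currently cited text, and candidate replacements.
    texts = [claim, cited_text] + candidate_sources
    tfidf = TfidfVectorizer().fit_transform(texts)
    sims = cosine_similarity(tfidf[0], tfidf[1:])[0]
    if sims[0] >= threshold:
        return "citation looks relevant"
    # Flag the bad citation and suggest the closest candidate source instead.
    best = sims[1:].argmax()
    return f"flagged; suggested replacement #{best} (score {sims[1:][best]:.2f})"

claim = "Joe Hipp was the first Native American to compete for the WBA World Heavyweight Title."
cited = "A page about an unrelated topic with no mention of boxing."
candidates = ["Boxer Joe Hipp of the Blackfeet Nation challenged for the WBA heavyweight title."]
print(check_citation(claim, cited, candidates))  # flags the citation, suggests the boxing source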

At least, this is the hypothesis put forth by Meta AI researchers. While SIDE is certainly an interesting tool, we still can't trust AI to understand language, context, or the veracity of anything published online. (To be fair, Meta AI's research paper describes SIDE as more of a demonstration than a working tool.)

Wikipedia editors can now test SIDE and assess its usefulness. The project is also available on GitHub. For what it's worth, SIDE looks like a super-powered version of the tools that Wikipedia editors already use to improve their workflow. It's easy to see how such a tool could flag citations for humans to review, at the very least.

Source: Meta AI

Read this article:
Meta Wants to Fix Wikipedia's Biggest Problem Using AI - Review Geek

BIOGRAPHY AND WIKIPEDIA: Capitol Records just signed an "Artificial Intelligence virtual rapper" FN meka becoming the world’s first A.I….

Information reaching Kossyderrickent has it that Capitol Records just signed an "Artificial Intelligence virtual rapper," FN Meka, making it the world's first A.I. artist to sign with a major label. It has 10 million followers on TikTok. (Rappers are pissed!)

The deal was signed following Meka's continued success on TikTok, with its singles "Moonwalkin," "Speed Demon" and "Internet" earning it over a billion views and accruing 10 million followers. The new deal comes with a first single for Capitol Records titled "Florida Water," featuring Gunna and Fortnite streamer Clix.

The artificial intelligence rapper also announced it will star in a new commercial for Apple Music this week.

On the Turbo-produced song "Florida Water," Meka delivers flossy lines like, "Oh, just put it on my tab/I don't see the prices, throw it in my bag/Always in a foreign when I dash/Clean water VVS diamonds bust down/Make it splash."

Ryan Ruden, Capitol Music Group's Executive Vice President of Experiential Marketing & Business Development, views the partnership with FN Meka as the future of music merging with technology. "[It] meets at the intersection of music, technology and gaming culture," he told MBW. "It's just a preview of what's to come."

Go here to see the original:
BIOGRAPHY AND WIKIPEDIA: Capitol Records just signed an "Artificial Intelligence virtual rapper" FN meka becoming the world's first A.I....