Archive for the ‘Wikipedia’ Category

2022 – Combatting Misinformation – The Seattle U Newsroom – News, stories and more

Lydia Bello, science and engineering librarian, and Jennifer Bodley, adjunct librarian, collaborated on a blog article, "SIFT-ing Through Information Online," to provide the campus community with guidance on media literacy: critically engaging with media when receiving, finding, evaluating, using, creating and sharing it, particularly in an online environment. While some question the legitimacy of Wikipedia, learn how it's their first trusted line of defense in avoiding misinformation.

Q: What prompted you both to write a blog article that addresses media literacy and misinformation?

JB: We were asked to write about misinformation, prompted mostly by events happening in the world. I think at the time the Russia-Ukraine news cycle had started and ... COVID-19 has obviously been in the news cycle for a long time, and there's a lot of other issues where misinformation is very problematic with information getting to the public.

LB: In the early days of the invasion of Ukraine, there was information flowing fast and furious. A lot of it was misinformation and unverified, and a lot of people here [at SU] are directly impacted by those current events. Along with that, Jennifer had just finished teaching a workshop on these skills, and it all sort of fell into place that we thought this might be a good time to offer a reminder.

JB: News is a fire hose, so whenever you have that fire hose obviously you don't have checks and balances and controlled dissemination of factual reporting.

Q: What are some of the sources where misinformation is the highest or spreads fastest (i.e., social media/specific platforms, news media, etc.)?

LB: Some of the really obvious places include social media, which is designed to spread information quickly. We've all seen articles and research about the addictive nature of social media, about how it's designed to engage you in order to create advertising dollars. Because of that, it's a core place where information moves.

Misinformation and disinformation also move quickly when there's a strong sense of emotion attached to them. Emotions like fear or anger are ones that come to mind, but also vindication, satisfaction and a really strong desire to help. When those emotions are involved, they help move the flow of misinformation on these platforms really quickly as well.

JB: We're talking about social media being behind a lot of misinformation and disinformation, but the mainstream media also reports on social media, and we also know that governing bodies and other institutions of power disseminate a lot of information through social media.

It's about figuring out what's okay. I can use the social media from this organization because it's a, quote, "good" organization. But then I'm supposed to be able to spot this bad information from this other social media channel.

We're in a flat environment (lacking indicators around credibility). Years ago, when you went to the checkout stand, you could tell what was a tabloid like The National Enquirer by the paper it was printed on, the colors used and the sensational headlines. There was a tactile or concrete way that you could process and evaluate. And right now, everything is just in this flat environment, so it's just that much harder to process.

Q: As librarians, how do you view your roles when it comes to combatting misinformation?

LB: One of the key parts of our jobs is helping our students, faculty and staff build skills to navigate the information environment (through courses, research services, etc.). The first thing you think of when you think of librarians is that we help students navigate the library and the information we have in the library, which is its own type of complex information environment.

We see those skills transferring to teaching students how to navigate the world and the information environment outside of their assignments as well. Helping students build those skills and then also helping them understand that they need to be engaged with the information they see on a day-to-day basis, not necessarily as passive consumers.

We [as humans] don't innately know how to navigate information, and there's a lot of talk about someone who has grown up around technology, but even young people don't innately know. It depends on who has access to what sort of technology growing up, and that's very financially based. It also depends on whether you're actually taught those skills or not.

A good part of our job is explicitly teaching those skills and teaching them in such a way that they fit with their day-to-day lives.

JB: As librarians, we're teaching students particularly in content-related classes. When we teach students in an introductory chemistry or psychology class, we aren't working with domain experts [in those subjects]. Domain experts already know seminal works and know prominent, authoritative researchers and organizations within their domain who are disseminating information. Domain experts can go to these sources directly or see them quickly in search results. Domain novices don't have that head start when evaluating information. They have to evaluate a lot of unfamiliar and complex information with no specialized knowledge.

Take, for example, health information. A student could say, "I know the CDC, I understand the government structures, so I know that the CDC would potentially be a good source." Somebody else could say, "Oh, you know, doctor so-and-so has this blog, I think that would be a good source." This directly ties into what we teach them in the classroom and how they apply that in their lives outside the classroom.

Q: Anything you would like to highlight or expand on regarding Michael Caulfield's work/approach (SIFT Method, etc.)?

JB: Caulfield's approach is kind of simplistic, but he specifically created it so that you could use it in that flat environment. His method helps you recontextualize information.

LB: It's grounded in a Stanford Graduate School of Education study on how students navigate the credibility of information online. There have been updates to this research recently, but one of the original studies was from 2016.

Also, I want to emphasize the SIFT method isn't like a checklist or a long, arduous process.

It's supposed to be a quick fact-checking habit. It's designed to help you decide whether you want to spend more time on a source, something you can just build into your daily practice of consuming information. A lot of times I'll investigate sources on Wikipedia, looking for the original source if I've never heard of it before. Caulfield calls it the "Wikipedia Trick": checking to see what somebody says about a source and figuring out if it's a known site for misinformation.

Q: What are some ways people can spot and/or avoid misinformation?

LB: Because so many things in this flat environment are on the Internet, we're losing all these contextual clues, and it's really easy to convince someone that something is true or something is fake. Known misinformation sites can look really well polished, with great web design and a really specific tone, and can even cite a well-known and respected scholarly article or source.

All of this is why having SIFT as a habit, knowing that it takes 30 seconds or less, is really helpful, so you don't waste time looking for clues or hints. Sometimes misinformation is not designed to be actively harmful; it's satire or something else that's been moved to a completely different context.

Q: Are there any additional points, resources or intersections of media literacy/misinformation research you would like to mention?

JB: This isn't going away anytime soon, or ever, so there's no way we can legislate our way out of this. Corporate responsibility is not going to get rid of this. Everything from the Australian wildfires to the war in Ukraine to school board meetings, I mean, there's absolutely nothing that's immune to misinformation.

To view the full Lemieux Library blog article, visit https://libguides.seattleu.edu/blog/SIFT-ing-Through-Information-Online.


Meta Is Building an AI to Fact-Check Wikipedia – All 6.5 Million Articles – Singularity Hub

Most people older than 30 probably remember doing research with good old-fashioned encyclopedias. You'd pull a heavy volume from the shelf, check the index for your topic of interest, then flip to the appropriate page and start reading. It wasn't as easy as typing a few words into the Google search bar, but on the plus side, you knew that the information you found in the pages of the Britannica or the World Book was accurate and true.

Not so with internet research today. The overwhelming multitude of sources is confusing enough, but add the proliferation of misinformation and it's a wonder any of us believe a word we read online.

Wikipedia is a case in point. As of early 2020, the sites English version was averaging about 255 million page views per day, making it the eighth-most-visited website on the internet. As of last month, it had moved up to spot number seven, and the English version currently has over 6.5 million articles.

But as high-traffic as this go-to information source may be, its accuracy leaves something to be desired; the page about the site's own reliability states, "The online encyclopedia does not consider itself to be reliable as a source and discourages readers from using it in academic or research settings."

Meta, the company formerly known as Facebook, wants to change this. In a blog post published last month, the company's employees describe how AI could help make Wikipedia more accurate.

Though tens of thousands of people participate in editing the site, the facts they add aren't necessarily correct; even when citations are present, they're not always accurate or even relevant.

Meta is developing a machine learning model that scans these citations and cross-references their content to Wikipedia articles to verify that not only the topics line up, but specific figures cited are accurate.

This isn't just a matter of picking out numbers and making sure they match; Meta's AI will need to understand the content of cited sources (though "understand" is a misnomer, as complexity researcher Melanie Mitchell would tell you, because AI is still in the "narrow" phase, meaning it's a tool for highly sophisticated pattern recognition, while "understanding" is a word used for human cognition, which is still a very different thing).

Meta's model will "understand" content not by comparing text strings and making sure they contain the same words, but by comparing mathematical representations of blocks of text, which it arrives at using natural language understanding (NLU) techniques.

"What we have done is to build an index of all these web pages by chunking them into passages and providing an accurate representation for each passage," Fabio Petroni, Meta's Fundamental AI Research tech lead manager, told Digital Trends. "That is not representing word-by-word the passage, but the meaning of the passage. That means that two chunks of text with similar meanings will be represented in a very close position in the resulting n-dimensional space where all these passages are stored."
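The idea Petroni describes, chunking pages into passages, embedding each passage as a vector, and judging relevance by vector proximity rather than exact word matches, can be sketched with a toy example. This is not Meta's actual model: the hashed bag-of-words `embed` function below is a deliberately crude stand-in for a learned neural encoder, used only to show how "similar meaning ends up nearby in vector space" works mechanically.

```python
import math
import re
from collections import Counter

def embed(text, dim=64):
    """Toy passage embedding: a hashed bag-of-words vector.
    Real systems use learned neural encoders; this only illustrates
    the principle that similar passages map to nearby vectors."""
    vec = [0.0] * dim
    for word, count in Counter(re.findall(r"\w+", text.lower())).items():
        vec[hash(word) % dim] += count
    return vec

def cosine(a, b):
    """Cosine similarity: 1.0 for identical directions, ~0 for unrelated."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

# A claim from a Wikipedia article, plus one relevant and one
# irrelevant candidate citation (text invented for illustration).
claim = "Joe Hipp was the first Native American to compete for the heavyweight title"
good_source = ("Hipp, a member of the Blackfeet Tribe, was the first Native "
               "American to compete for the WBA heavyweight title")
bad_source = "The 1645 portrait in the museum was painted by Diego Velazquez"

# The relevant source scores much closer to the claim.
relevant = cosine(embed(claim), embed(good_source))
irrelevant = cosine(embed(claim), embed(bad_source))
```

A production verifier would replace `embed` with a trained encoder and store the passage vectors in an approximate-nearest-neighbor index so that the closest passages to a claim can be retrieved from millions of candidates quickly.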

The AI is being trained on a set of four million Wikipedia citations, and besides picking out faulty citations on the site, its creators would like it to eventually be able to suggest accurate sources to take their place, pulling from a massive index of data that's continuously updated.

One big issue left to work out is a grading system for sources' reliability. A paper from a scientific journal, for example, would receive a higher grade than a blog post. The amount of content online is so vast and varied that you can find sources to support just about any claim, but parsing the misinformation from the disinformation (the former means incorrect, while the latter means deliberately deceiving), the peer-reviewed from the non-peer-reviewed, and the fact-checked from the hastily slapped together is no small task, but a very important one when it comes to trust.
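A reliability grade like the one described could, at its simplest, be a ranking weight keyed to source type. The categories and scores below are invented for illustration; Meta has not published such a rubric, and a real system would learn these weights rather than hard-code them.

```python
# Toy source-reliability grading: rank candidate citations so that
# higher-quality source types surface first. All categories and
# scores here are illustrative assumptions, not a published scheme.
SOURCE_SCORES = {
    "peer_reviewed_journal": 1.0,
    "government_site": 0.7,
    "major_news_outlet": 0.6,
    "personal_blog": 0.2,
    "unknown": 0.1,
}

def grade(source_type: str) -> float:
    """Return a reliability weight, defaulting to 'unknown'."""
    return SOURCE_SCORES.get(source_type, SOURCE_SCORES["unknown"])

# Candidate citations for the same claim, tagged by source type.
candidates = [
    ("doctor-so-and-so's blog", "personal_blog"),
    ("journal article", "peer_reviewed_journal"),
    ("newspaper report", "major_news_outlet"),
]
ranked = sorted(candidates, key=lambda c: grade(c[1]), reverse=True)
```

In practice this weight would be combined with the semantic-similarity score, so a highly relevant blog post could still outrank a barely relevant journal paper.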

Meta has open-sourced its model, and those who are curious can see a demo of the verification tool. Meta's blog post noted that the company isn't partnering with Wikimedia on this project, and that it's still in the research phase and not currently being used to update content on Wikipedia.

If you imagine a not-too-distant future where everything you read on Wikipedia is accurate and reliable, wouldn't that make doing any sort of research a bit too easy? There's something valuable about checking and comparing various sources ourselves, is there not? It was a big leap to go from paging through heavy books to typing a few words into a search engine and hitting Enter; do we really want Wikipedia to move from a research jumping-off point to a gets-the-last-word source?

In any case, Meta's AI research team will continue working toward a tool to improve the online encyclopedia. "I think we were driven by curiosity at the end of the day," Petroni said. "We wanted to see what was the limit of this technology. We were absolutely not sure if [this AI] could do anything meaningful in this context. No one had ever tried to do something similar."

Image Credit: Gerd Altmann from Pixabay


Meta Wants to Fix Wikipedia's Biggest Problem Using AI – Review Geek


Despite the efforts of over 30 million editors, Wikipedia sure ain't perfect. Some information on Wikipedia lacks a genuine source or citation; as we learned with the Pringle Man hoax, this can have a wide-ranging impact on culture or facts. But Meta, formerly Facebook, hopes to solve Wikipedia's big problem with AI.

As detailed in a blog post and research paper, the Meta AI team created a dataset of over 134 million web pages to build a citation-checker AI called SIDE. Using natural language technology, SIDE can analyze a Wikipedia citation and determine whether it's appropriate. It can also find new sources for information already published on Wikipedia.

Meta AI highlights the Blackfoot Confederacy Wikipedia article as an example of how SIDE can improve citations. If you scroll to the bottom of this article, you'll learn that Joe Hipp was the first Native American to compete for the WBA World Heavyweight Title, a cool fact that is 100% true. But here's the problem: whoever wrote this factoid cited a source that has nothing to do with Joe Hipp or the Blackfeet Tribe.

In this case, Wikipedia editors failed to check the veracity of a citation (the problem has since been fixed). But if the editors had SIDE, they could have caught the bad citation early. And they wouldn't need to look for a new citation, as SIDE would automatically suggest one.

At least, this is the hypothesis put forth by Meta AI researchers. While SIDE is certainly an interesting tool, we still can't trust AI to understand language, context, or the veracity of anything published online. (To be fair, Meta AI's research paper describes SIDE as more of a demonstration than a working tool.)

Wikipedia editors can now test SIDE and assess its usefulness. The project is also available on GitHub. For what it's worth, SIDE looks like a super-powered version of the tools that Wikipedia editors already use to improve their workflow. It's easy to see how such a tool could flag citations for humans to review, at the very least.

Source: Meta AI


BIOGRAPHY AND WIKIPEDIA: Capitol Records just signed an "Artificial Intelligence virtual rapper" FN meka becoming the world’s first A.I….

Information reaching Kossyderrickent has it that Capitol Records just signed an "Artificial Intelligence virtual rapper," FN Meka, making it the world's first A.I. artist to sign with a major label. It has 10 million followers on TikTok. (Rappers are pissed!)

The deal was signed following Meka's continued success on TikTok, with its singles "Moonwalkin," "Speed Demon" and "Internet" earning it over a billion views. The new deal comes boasting a first single with Capitol Records titled "Florida Water," featuring Gunna and Fortnite streamer Clix.

The artificial intelligence rapper also announced it will star in a new commercial for Apple Music this week.

On the Turbo-produced song "Florida Water," Meka delivers flossy lines like, "Oh, just put it on my tab/I don't see the prices, throw it in my bag/Always in a foreign when I dash/Clean water VVS diamonds bust down/Make it splash."

Ryan Ruden, Capitol Music Group's Executive Vice President of Experiential Marketing & Business Development, views the partnership with FN Meka as the future of music merging with technology. "[It] meets at the intersection of music, technology and gaming culture," he told MBW. "It's just a preview of what's to come."


Spanish art museum has painting that looks exactly like Connor McDavid | Offside – Daily Hive

It turns out Edmonton Oilers captain Connor McDavid is causing a stir in Spain.

A painting in the Museo del Prado in central Madrid bears a striking resemblance to McDavid, and Twitter has just discovered it.

"Went to El Museo del Prado in Madrid yesterday to get a little culture and this was [sic] my favourite painting because its Connor McDavid," user @Mariia19 tweeted Tuesday.

The painting is actually "The Portrait of Francisco Lezcano," also known as "El Niño de Vallecas," a 1645 work by Diego Velázquez depicting Francisco Lezcano, also called "Lezcanillo" or "el Vizcaíno," a jester at the court of Philip IV of Spain, according to Wikipedia.

The tweet has prompted plenty of reaction.

And more than a few photoshops, too.

McDavid netted NHL career highs in goals (44) and points (123) in 80 games this season to win the Art Ross Trophy as the league's leading scorer. The 25-year-old also paced the Stanley Cup Playoffs in scoring with 33 points (10 goals, 23 assists) despite being swept out of the Western Conference Final by the eventual Stanley Cup-winning Colorado Avalanche.

He leads all NHLers in scoring since entering the league in 2015-16 with 697 points (239 goals, 458 assists) in 487 games.

McDavid has Hart Memorial Trophy wins in 2021 and 2017; Art Ross Trophy wins in 2021, 2018, and 2017; Ted Lindsay Awards in 2021, 2018, and 2017; and has earned NHL First All-Star Team nods in 2021, 2019, 2018, and 2017.
