Archive for the ‘Artificial Intelligence’ Category

Revisiting the rise of A.I.: How far has artificial intelligence come since 2010? – Digital Trends

2010 doesn't seem all that long ago. Facebook was already a giant, time-consuming leviathan; smartphones and the iPad were a daily part of people's lives; The Walking Dead was a big hit on televisions across America; and the most talked-about popular musical artists were the likes of Taylor Swift and Justin Bieber. So pretty much like life as we enter 2020, then? Perhaps in some ways.

One place where things most definitely have moved on in leaps and bounds, however, is the artificial intelligence front. Over the past decade, A.I. has made some huge advances, both technically and in the public consciousness, that mark this out as one of the most important ten-year stretches in the field's history. What have been the biggest advances? Funny you should ask; I've just written a list on exactly that topic.

To most people, few things say "A.I. is here" quite like seeing an artificial intelligence defeat two champion Jeopardy! players on prime-time television. That's exactly what happened in 2011, when IBM's Watson computer trounced Brad Rutter and Ken Jennings, the two highest-earning American game show contestants of all time, at the popular quiz show.

It's easy to dismiss attention-grabbing public displays of machine intelligence as hype-driven spectacle rather than serious, objective demonstration. What IBM had developed was seriously impressive, though. Unlike a game such as chess, with its rigid rules and limited board, Jeopardy! is far less predictable. Questions can be about anything and often involve complex wordplay, such as puns.

"I had been in A.I. classes and knew that the kind of technology that could beat a human at Jeopardy! was still decades away," Jennings told me when I was writing my book Thinking Machines. "Or at least I thought that it was." At the end of the game, Jennings scribbled a sentence on his answer board and held it up for the cameras. It read: "I for one welcome our new robot overlords."

October 2011 is most widely remembered by Apple fans as the month in which company co-founder Steve Jobs passed away at the age of 56. However, it was also the month in which Apple unveiled its A.I. assistant Siri with the iPhone 4S.

The concept of an A.I. you could communicate with via spoken words had been dreamed about for decades. Former Apple CEO John Sculley had, remarkably, predicted a Siri-style assistant back in the 1980s, getting the date of Siri's arrival right almost to the month. But Siri was still a remarkable achievement. True, its initial implementation had some glaring weaknesses, and Apple arguably has never managed to offer a flawless smart assistant. Nonetheless, it introduced a new type of technology that was quickly pounced on by rivals, from Google Assistant to Microsoft's Cortana to Samsung's Bixby.

Of all the tech giants, Amazon has arguably done the most to advance the A.I. assistant in the years since. Its Alexa-powered Echo speakers have not only shown the potential of these A.I. assistants; they've demonstrated that they're compelling enough to exist as standalone pieces of hardware. Today, voice-based assistants are so commonplace they barely even register. Ten years ago, most people had never used one.

Deep learning neural networks are not wholly an invention of the 2010s. The basis for today's artificial neural networks traces back to a 1943 paper by researchers Warren McCulloch and Walter Pitts. Much of the theoretical work underpinning neural nets, such as the breakthrough backpropagation algorithm, was pioneered in the 1980s. Some of the advances that led directly to modern deep learning were carried out in the first years of the 2000s, with work like Geoff Hinton's advances in unsupervised learning.

But the 2010s are the decade the technology went mainstream. In 2010, researchers George Dahl and Abdel-rahman Mohamed demonstrated that deep learning speech recognition tools could beat what were then the state-of-the-art industry approaches. After that, the floodgates opened. From image recognition (for example, Jeff Dean and Andrew Ng's famous paper on identifying cats) to machine translation, barely a week went by without the world being reminded just how powerful deep learning could be.

It wasn't just a good PR campaign either, the way an unknown artist might finally stumble into fame and fortune after doing the same work in obscurity for decades. The 2010s are the decade in which the quantity of data exploded, making it possible to leverage deep learning in a way that simply wouldn't have been possible at any previous point in history.

Of all the companies doing amazing A.I. work, DeepMind deserves its own entry on this list. Although it was founded in September 2010, most people hadn't heard of the deep learning company until it was bought by Google for a seemingly bonkers $500 million in January 2014. DeepMind has made up for it in the years since, though.

Much of DeepMind's most public-facing work has involved the development of game-playing A.I.s capable of mastering computer games, ranging from classic Atari titles like Breakout and Space Invaders (with the help of some handy reinforcement learning algorithms) to, more recently, StarCraft II and Quake III Arena.

Demonstrating the core tenet of machine learning, these game-playing A.I.s got better the more they played. In the process, they were able to form new strategies that, in some cases, even their human creators weren't familiar with. All of this work helped set the stage for DeepMind's biggest success of all.

As this list has already shown, there is no shortage of examples of A.I. beating human players at a variety of games. But Go, a Chinese board game in which the aim is to surround more territory than your opponent, was different. Unlike games in which players can be beaten simply by crunching numbers faster than humans are capable of, in Go the total number of allowable board positions is mind-bogglingly large: far more than the total number of atoms in the observable universe. That makes brute-force attempts to calculate answers virtually impossible, even on a supercomputer.
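The scale gap described above is easy to verify with a quick back-of-envelope check: an upper bound on Go board configurations is 3^361 (each of the 361 points is empty, black, or white), while a common estimate for atoms in the observable universe is around 10^80. A short Python check:

```python
# Upper bound on Go board configurations: each of the 19 x 19 = 361
# points is either empty, black, or white.
go_positions_upper_bound = 3 ** 361

# Common estimate for the number of atoms in the observable universe.
atoms_in_universe = 10 ** 80

print(len(str(go_positions_upper_bound)))  # 173 digits, i.e. roughly 1.7e172
print(go_positions_upper_bound > atoms_in_universe)  # True
```

The true count of *legal* positions is smaller (many of those configurations violate Go's capture rules), but even the exact figure, computed in 2016, is about 2.1e170, still vastly larger than the atom estimate.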

Nonetheless, DeepMind managed it. In October 2015, AlphaGo became the first computer Go program to beat a human professional Go player without handicap on a full-sized 19×19 board. The next year, 60 million people tuned in live to watch the world's greatest Go player, Lee Sedol, take on AlphaGo. By the end of the series, AlphaGo had beaten Sedol four games to one.

In November 2019, Sedol announced his intention to retire as a professional Go player. He cited A.I. as the reason. "Even if I become the number one, there is an entity that cannot be defeated," he said. Imagine if LeBron James announced he was quitting basketball because a robot was better at shooting hoops than he was. That's the equivalent!

In the first years of the twenty-first century, the idea of an autonomous car seemed as if it would never move beyond science fiction. In MIT and Harvard economists Frank Levy and Richard Murnane's 2004 book The New Division of Labor, driving a vehicle was described as a task too complex for machines to carry out. "Executing a left turn against oncoming traffic involves so many factors that it is hard to imagine discovering the set of rules that can replicate a driver's behavior," they wrote.

In 2010, Google officially unveiled its autonomous car program, now called Waymo. Over the decade that followed, dozens of other companies (including tech heavy hitters like Apple) started developing their own self-driving vehicles. Collectively, these cars have driven millions of miles on public roads, apparently proving less accident-prone than humans in the process.

Foolproof full autonomy is still a work in progress, but this was nonetheless one of the most visible demonstrations of A.I. in action during the 2010s.

The dirty secret of much of today's A.I. is that its core algorithms, the technologies that make it tick, were actually developed several decades ago. What's changed is the processing power available to run those algorithms and the massive amounts of data they have to train on. Hearing about a wholly original approach to building A.I. tools is therefore surprisingly rare.

Generative adversarial networks certainly qualify. Often abbreviated to GANs, this class of machine learning system was invented by Ian Goodfellow and colleagues in 2014. No less an authority than A.I. expert Yann LeCun has described it as the coolest idea in machine learning in the last twenty years.

At least conceptually, the theory behind GANs is pretty straightforward: take two cutting-edge artificial neural networks and pit them against each other. One network creates something, such as a generated image. The other network then attempts to work out which images are computer-generated and which are not. Over time, the adversarial process allows the generator network to become good enough at creating images that it can consistently fool the discriminator network.
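The two-network contest described above is usually written as a minimax game; this is the standard objective from Goodfellow and colleagues' 2014 paper, with D the discriminator, G the generator, and z the random noise the generator turns into samples:

```latex
\min_G \max_D \; V(D, G) =
\mathbb{E}_{x \sim p_{\text{data}}(x)}\big[\log D(x)\big]
+ \mathbb{E}_{z \sim p_z(z)}\big[\log\big(1 - D(G(z))\big)\big]
```

The discriminator is trained to push V up (score real samples near 1 and generated ones near 0), while the generator is trained to push it down; at the theoretical optimum the discriminator can do no better than outputting 1/2 everywhere.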

The power of generative adversarial networks was seen most widely when a collective of artists used them to create an original painting developed by A.I. The result sold for a shockingly large amount of money ($432,500) at a Christie's auction in 2018.


Artificial Intelligence Is Rushing Into Patient Care – And Could Raise Risks – Scientific American

Health products powered by artificial intelligence, or AI, are streaming into our lives, from virtual doctor apps to wearable sensors and drugstore chatbots.

IBM boasted that its AI could "outthink cancer." Others say computer systems that read X-rays will make radiologists obsolete.

"There's nothing that I've seen in my 30-plus years studying medicine that could be as impactful and transformative" as AI, said Eric Topol, a cardiologist and executive vice president of Scripps Research in La Jolla, Calif. AI can help doctors interpret MRIs of the heart, CT scans of the head and photographs of the back of the eye, and could potentially take over many mundane medical chores, freeing doctors to spend more time talking to patients, Topol said.

Even the U.S. Food and Drug Administration, which has approved more than 40 AI products in the past five years, says "the potential of digital health is nothing short of revolutionary."

Yet many health industry experts fear AI-based products won't be able to match the hype. Many doctors and consumer advocates fear that the tech industry, which lives by the mantra "fail fast and fix it later," is putting patients at risk, and that regulators aren't doing enough to keep consumers safe.

Early experiments in AI provide reason for caution, said Mildred Cho, a professor of pediatrics at Stanford's Center for Biomedical Ethics.

Systems developed in one hospital often flop when deployed in a different facility, Cho said. Software used in the care of millions of Americans has been shown to discriminate against minorities. And AI systems sometimes learn to make predictions based on factors that have less to do with disease than with the brand of MRI machine used, the time a blood test is taken or whether a patient was visited by a chaplain. In one case, AI software incorrectly concluded that people with pneumonia were less likely to die if they had asthma, an error that could have led doctors to deprive asthma patients of the extra care they need.

"It's only a matter of time before something like this leads to a serious health problem," said Steven Nissen, chairman of cardiology at the Cleveland Clinic.

Medical AI, which pulled in $1.6 billion in venture capital funding in the third quarter alone, is "nearly at the peak of inflated expectations," concluded a July report from the research company Gartner. "As the reality gets tested, there will likely be a rough slide into the trough of disillusionment."

That reality check could come in the form of disappointing results when AI products are ushered into the real world. Even Topol, the author of Deep Medicine: How Artificial Intelligence Can Make Healthcare Human Again, acknowledges that many AI products are little more than hot air. "It's a mixed bag," he said.

Experts such as Bob Kocher, a partner at the venture capital firm Venrock, are more blunt. "Most AI products have little evidence to support them," Kocher said. Some risks won't become apparent until an AI system has been used by large numbers of patients. "We're going to keep discovering a whole bunch of risks and unintended consequences of using AI on medical data," Kocher said.

None of the AI products sold in the U.S. have been tested in randomized clinical trials, the strongest source of medical evidence, Topol said. The first and only randomized trial of an AI system, which found that colonoscopy with computer-aided diagnosis detected more small polyps than standard colonoscopy, was published online in October.

Few tech startups publish their research in peer-reviewed journals, which allow other scientists to scrutinize their work, according to a January article in the European Journal of Clinical Investigation. Such "stealth research," described only in press releases or promotional events, often overstates a company's accomplishments.

And although software developers may boast about the accuracy of their AI devices, experts note that AI models are mostly tested on computers, not in hospitals or other medical facilities. Using unproven software may make patients into unwitting guinea pigs, said Ron Li, medical informatics director for AI clinical integration at Stanford Health Care.

AI systems that learn to recognize patterns in data are often described as "black boxes" because even their developers don't know how they reached their conclusions. Given that AI is so new, and many of its risks unknown, the field needs careful oversight, said Pilar Ossorio, a professor of law and bioethics at the University of Wisconsin-Madison.

Yet the majority of AI devices don't require FDA approval.

"None of the companies that I have invested in are covered by the FDA regulations," Kocher said.

Legislation passed by Congress in 2016 and championed by the tech industry exempts many types of medical software from federal review, including certain fitness apps, electronic health records and tools that help doctors make medical decisions.

There's been little research on whether the 320,000 medical apps now in use actually improve health, according to a report on AI published Dec. 17 by the National Academy of Medicine.

"Almost none of the [AI] stuff marketed to patients really works," said Ezekiel Emanuel, professor of medical ethics and health policy in the Perelman School of Medicine at the University of Pennsylvania.

The FDA has long focused its attention on devices that pose the greatest threat to patients. And consumer advocates acknowledge that some devices such as ones that help people count their daily steps need less scrutiny than ones that diagnose or treat disease.

Some software developers dont bother to apply for FDA clearance or authorization, even when legally required, according to a 2018 study in Annals of Internal Medicine.

Industry analysts say that AI developers have little interest in conducting expensive and time-consuming trials. "It's not the main concern of these firms to submit themselves to rigorous evaluation that would be published in a peer-reviewed journal," said Joachim Roski, a principal at Booz Allen Hamilton, a technology consulting firm, and co-author of the National Academy's report. "That's not how the U.S. economy works."

But Oren Etzioni, chief executive officer at the Allen Institute for AI in Seattle, said AI developers have a financial incentive to make sure their medical products are safe.

"If failing fast means a whole bunch of people will die, I don't think we want to fail fast," Etzioni said. "Nobody is going to be happy, including investors, if people die or are severely hurt."

Relaxed AI Standards At The FDA

The FDA has come under fire in recent years for allowing the sale of dangerous medical devices, which have been linked by the International Consortium of Investigative Journalists to 80,000 deaths and 1.7 million injuries over the past decade.

Many of these devices were cleared for use through a controversial process called the 510(k) pathway, which allows companies to market moderate-risk products with no clinical testing as long as they're deemed similar to existing devices. In 2011, a committee of the National Academy of Medicine concluded the 510(k) process is so fundamentally flawed that the FDA should throw it out and start over.

Instead, the FDA is using the process to greenlight AI devices.

Of the 14 AI products authorized by the FDA in 2017 and 2018, 11 were cleared through the 510(k) process, according to a November article in JAMA. None of these appear to have had new clinical testing, the study said. The FDA cleared an AI device designed to help diagnose liver and lung cancer in 2018 based on its similarity to imaging software approved 20 years earlier. That software had itself been cleared because it was deemed "substantially equivalent" to products marketed before 1976.

AI products cleared by the FDA today are largely "locked," so that their calculations and results will not change after they enter the market, said Bakul Patel, director for digital health at the FDA's Center for Devices and Radiological Health. The FDA has not yet authorized "unlocked" AI devices, whose results could vary from month to month in ways that developers cannot predict.

To deal with the flood of AI products, the FDA is testing a radically different approach to digital device regulation, focusing on evaluating companies, not products.

The FDA's pilot pre-certification program, launched in 2017, is designed to reduce the time and cost of market entry for software developers, imposing the "least burdensome" system possible. FDA officials say they want to keep pace with AI software developers, who update their products much more frequently than makers of traditional devices, such as X-ray machines.

Scott Gottlieb said in 2017, while he was FDA commissioner, that government regulators need to make sure their approach to innovative products "is efficient and that it fosters, not impedes, innovation."

Under the plan, the FDA would pre-certify companies that "demonstrate a culture of quality and organizational excellence," which would allow them to provide less upfront data about devices.

Pre-certified companies could then release devices with a streamlined review or no FDA review at all. Once products are on the market, companies will be responsible for monitoring their own products' safety and reporting back to the FDA. Nine companies have been selected for the pilot: Apple, FitBit, Samsung, Johnson & Johnson, Pear Therapeutics, Phosphorus, Roche, Tidepool and Verily Life Sciences.

High-risk products, such as software used in pacemakers, will still get a comprehensive FDA evaluation. "We definitely don't want patients to be hurt," said Patel, who noted that devices cleared through pre-certification can be recalled if needed. "There are a lot of guardrails still in place."

But research shows that even low- and moderate-risk devices have been recalled due to serious risks to patients, said Diana Zuckerman, president of the National Center for Health Research. "People could be harmed because something wasn't required to be proven accurate or safe before it is widely used."

Johnson & Johnson, for example, has recalled hip implants and surgical mesh.

In a series of letters to the FDA, the American Medical Association and others have questioned the wisdom of allowing companies to monitor their own performance and product safety.

"The honor system is not a regulatory regime," said Jesse Ehrenfeld, who chairs the physician group's board of trustees. In an October letter to the FDA, Sens. Elizabeth Warren (D-Mass.), Tina Smith (D-Minn.) and Patty Murray (D-Wash.) questioned the agency's ability to ensure company safety reports are "accurate, timely and based on all available information."

When Good Algorithms Go Bad

Some AI devices are more carefully tested than others.

An AI-powered screening tool for diabetic eye disease was studied in 900 patients at 10 primary care offices before being approved in 2018. The manufacturer, IDx Technologies, worked with the FDA for eight years to get the product right, said Michael Abramoff, the company's founder and executive chairman.

The test, sold as IDx-DR, screens patients for diabetic retinopathy, a leading cause of blindness, and refers high-risk patients to eye specialists, who make a definitive diagnosis.

IDx-DR is the first "autonomous" AI product, one that can make a screening decision without a doctor. The company is now installing it in primary care clinics and grocery stores, where it can be operated by employees with a high school diploma. Abramoff's company has taken the unusual step of buying liability insurance to cover any patient injuries.

Yet some AI-based innovations intended to improve care have had the opposite effect.

A Canadian company, for example, developed AI software to predict a person's risk of Alzheimer's based on their speech. Predictions were more accurate for some patients than others. "Difficulty finding the right word may be due to unfamiliarity with English, rather than to cognitive impairment," said co-author Frank Rudzicz, an associate professor of computer science at the University of Toronto.

Doctors at New York's Mount Sinai Hospital hoped AI could help them use chest X-rays to predict which patients were at high risk of pneumonia. Although the system made accurate predictions from X-rays shot at Mount Sinai, the technology flopped when tested on images taken at other hospitals. Eventually, researchers realized the computer had merely learned to tell the difference between that hospital's portable chest X-rays, taken at a patient's bedside, and those taken in the radiology department. Doctors tend to use portable chest X-rays for patients too sick to leave their room, so it's not surprising that these patients had a greater risk of lung infection.
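The Mount Sinai failure is a classic confounder: a feature that happens to predict the label at the training hospital (portable vs. departmental X-ray) carries no signal elsewhere. A deliberately tiny, hypothetical illustration in Python, with made-up patient records:

```python
# Hypothetical toy records: (is_portable_xray, has_pneumonia).
# At Hospital A, very sick patients get portable bedside X-rays,
# so the "portable" flag correlates strongly with pneumonia.
hospital_a = [(1, 1)] * 40 + [(1, 0)] * 10 + [(0, 1)] * 10 + [(0, 0)] * 40

# Hospital B images everyone in the radiology department,
# so the flag carries no signal at all.
hospital_b = [(0, 1)] * 25 + [(0, 0)] * 25

def shortcut_model(is_portable):
    # A "model" that has latched onto the scanner type instead of the lungs.
    return 1 if is_portable else 0

def accuracy(records):
    return sum(shortcut_model(p) == y for p, y in records) / len(records)

print(accuracy(hospital_a))  # 0.8  -- looks impressive in-house
print(accuracy(hospital_b))  # 0.5  -- chance-level at the new hospital
```

The numbers are invented, but the failure mode is exactly the one the researchers described: high in-house accuracy, collapse on out-of-distribution data.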

DeepMind, a company owned by Google, has created an AI-based mobile app that can predict which hospitalized patients will develop acute kidney failure up to 48 hours in advance. A blog post on the DeepMind website described the system, used at a London hospital, as a "game changer." But the AI system also produced two false alarms for every correct result, according to a July study in Nature. That may explain why patients' kidney function didn't improve, said Saurabh Jha, associate professor of radiology at the Hospital of the University of Pennsylvania. Any benefit from early detection of serious kidney problems may have been diluted by a high rate of overdiagnosis, in which the AI system flagged borderline kidney issues that didn't need treatment, Jha said. Google had no comment in response to Jha's conclusions.
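"Two false alarms for every correct result" pins down the alert system's positive predictive value, the fraction of alerts that flag a real case. A quick worked check in Python:

```python
# "Two false alarms for every correct result": out of every 3 alerts,
# 1 is a true positive and 2 are false positives.
true_positives = 1
false_positives = 2

# Positive predictive value (precision) of the alerts.
ppv = true_positives / (true_positives + false_positives)
print(ppv)  # 0.333...: only about one in three alerts flags a real case
```

At that precision, clinicians chasing down alerts spend two-thirds of their follow-up effort on patients who did not need it, which is the dilution effect Jha describes.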

False positives can harm patients by prompting doctors to order unnecessary tests or withhold recommended treatments, Jha said. For example, a doctor worried about a patient's kidneys might stop prescribing ibuprofen, a generally safe pain reliever that poses a small risk to kidney function, in favor of an opioid, which carries a serious risk of addiction.

As these studies show, software with impressive results in a computer lab can founder when tested in real time, Stanford's Cho said. That's because diseases are more complex, and the health care system far more dysfunctional, than many computer scientists anticipate.

Many AI developers cull electronic health records because they hold huge amounts of detailed data, Cho said. But those developers often aren't aware that they're building atop a deeply broken system. Electronic health records were developed for billing, not patient care, and are filled with mistakes or missing data.

A KHN investigation published in March found sometimes life-threatening errors in patients' medication lists, lab tests and allergies.

In view of the risks involved, doctors need to step in to protect their patients' interests, said Vikas Saini, a cardiologist and president of the nonprofit Lown Institute, which advocates for wider access to health care.

"While it is the job of entrepreneurs to think big and take risks," Saini said, "it is the job of doctors to protect their patients."

Kaiser Health News (KHN) is a nonprofit news service covering health issues. It is an editorially independent program of the Kaiser Family Foundation that is not affiliated with Kaiser Permanente.


The Power Of Purpose: How We Counter Hate Used Artificial Intelligence To Battle Hate Speech Online – Forbes

We Counter Hate

One of the most fascinating examples of social innovation I've been tracking recently was the We Counter Hate platform, by Seattle-based agency POSSIBLE (now part of Wunderman Thompson Seattle), which sought to reduce hate speech on Twitter by turning retweets of hateful messages into donations to a good cause.

Here's how it worked: Using machine learning, the platform first identified hateful speech on Twitter. A human moderator then selected the most offensive and dangerous tweets and attached an undeletable reply, which informed recipients that if they retweeted the message, a donation would be committed to an anti-hate group. In a beautiful twist, that non-profit was Life After Hate, a group that helps members of extremist groups leave and transition to mainstream life.
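The workflow just described is a three-stage pipeline: automated flagging, human review, then the counter-reply. A minimal, hypothetical sketch in Python; the real platform used trained ML/NLP models whose details aren't public, so the keyword scorer, term list, and reply text below are stand-ins to keep the example self-contained:

```python
# Stage 1: cheap automated screen (stand-in for the real ML classifier).
HATE_TERMS = {"slur_a", "slur_b"}  # placeholder vocabulary, not real data

def flag_candidates(tweets):
    """Return tweets whose text contains any flagged term."""
    return [t for t in tweets if HATE_TERMS & set(t["text"].lower().split())]

# Stage 2: a human moderator confirms or rejects each flagged tweet.
def moderate(candidates, approve):
    return [t for t in candidates if approve(t)]

# Stage 3: attach the reply that turns any retweet into a donation.
def counter(tweet):
    return {**tweet, "reply": "Retweeting this commits a donation to an anti-hate group."}

tweets = [
    {"id": 1, "text": "lovely weather today"},
    {"id": 2, "text": "slur_a something hateful"},
]
confirmed = moderate(flag_candidates(tweets), approve=lambda t: True)
countered = [counter(t) for t in confirmed]
print([t["id"] for t in countered])  # [2]
```

The key design point, which the interview returns to later, is that stage 2 is never skipped: the classifier only proposes, and a human always validates before anything is countered.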

Unfortunately (and ironically), on the very day I reached out to the team, Twitter decided to allow users to hide replies in their feeds in an effort to empower people faced with bullying and harassment. That eliminated the reply function, the main mechanism that gave #WeCounterHate its power and enabled it to remove more than 20 million potential hate-speech impressions.

Undeterred, I caught up with some members of the core team, Shawn Herron, Jason Carmel and Matt Gilmore, to find out more about their journey.

(From left to right) Shawn Herron, Experience Technology Director; Matt Gilmore, Creative Director; and Jason Carmel, Chief Data Officer, all at Wunderman Thompson

Afdhel Aziz: Gentlemen, welcome. How did the idea for WeCounterHate come about?

Shawn Herron: It started when we caught wind of what the citizens of the town of Wunsiedel, Germany, were doing to combat the extremists who descend on their town every year to hold a rally and march through the streets. The townspeople had devised a peaceful way to upend the extremists' efforts by turning their hateful march into an involuntary walk-a-thon benefiting EXIT Deutschland, an organization that helps people escape extremist groups. For every meter the neo-Nazis marched, 10 euros would be donated to EXIT Deutschland. The question became: How can we scale something like that, so anyone, anywhere, could fight against hate in a meaningful way?

Jason Carmel: We knew that, to create scale, it had to be digital in nature, and Twitter seemed like the perfect problem in need of a solution. We figured that if we could reduce hate on a platform of that magnitude, even by a small percentage, it could have a big impact. We started by developing an innovative machine-learning and natural-language processing technology that could identify and classify hate speech.

Matt Gilmore: But we still needed the mechanic, a catch-22, that would present those looking to spread hate on the platform with a no-win decision. That's when we stumbled onto the fact that Twitter didn't allow people to delete comments on their tweets. The only way to remove a comment was to delete the post entirely. That mechanic gave us a way to put a permanent marker, in the form of an image and message, on tweets containing hate speech. It's that permanent marker that let those looking to retweet, and spread hate, know that doing so would benefit an organization they're opposed to, Life After Hate. No matter what they chose to do, love wins.

Aziz: Fascinating. So, what led you to the partnership with Life After Hate and how did that work?

Carmel: Staffed and founded by former hate group members and violent extremists, Life After Hate is a non-profit that helps people in extremist groups break from that hate-filled lifestyle. They offer a welcoming way out that's free of judgement. We collaborated with them in training the AI that's used to identify hate speech in near real time on Twitter. With the benefit of their knowledge, our AI can even find hidden forms of hate speech (coded language, secret emoji combinations) in a vast sea of tweets. Their expertise was crucial in aligning the language we used when countering hate, making it more compassionate and matter-of-fact, rather than confrontational.

Herron: Additionally, their partnership made perfect sense on a conceptual level, as the beneficiary of the effort. If you're one of those people looking to spread hate on Twitter, you're much less likely to hit retweet knowing that you'll be benefiting an organization you're opposed to.

Aziz: Was it hard to wade through that much hate speech? What surprised you?

Herron: Being exposed to all the hate-filled tweets was easily the most difficult part of the whole thing. The human brain is not wired to read and see the kinds of messages we encountered for long periods of time. At the end of the countering process, after the AI identified hate, we always relied on a human moderator to validate it before countering/tagging it. We broke up the shifts between many volunteers, but it was always quite difficult when it was your shift.

Carmel: We learned that identifying hate speech was much easier than categorizing it. Our initial understanding of hate speech, especially before Life After Hate helped us, was really just the movie version of hate speech, and it missed a lot of hidden context. We were also surprised at how much the language would evolve relative to current events. It was definitely something we had to stay on top of.

We were surprised by how broad a spectrum of people the hate was coming from. We went in thinking we'd just encounter a bunch of thugs, but many of these people held themselves out as academics, comedians, or historians. The brands of hate some of them shared were nuanced and, in an insidious way, very compelling.

We were caught off guard by the amount of time and effort those who disliked our platform would take to slam or discredit it. A lot of these people are quite savvy and would go to great lengths to attempt to undermine our efforts. Outside of the things we dealt with on Twitter, one YouTube hate-fluencer made a video, close to an hour long, that wove all sorts of intricate theories and conspiracies about our platform.

Gilmore: We were also surprised by how wrong our instincts were. When we first started, the things we were seeing made us angry and frustrated. We wanted to come after these hateful people in an aggressive way. We wanted to fight back. Life After Hate was essential in helping course-correct our tone and message. They helped us understand (and we'd like more people to know) the power of empathy combined with education, and its ability to remove walls between people rather than build them. It can be difficult to take this approach, but it ultimately gets everyone to a better place.

Aziz: I love that idea, empathy with education. What were the results of the work you've done so far? How did you measure success?

Carmel: The WeCounterHate platform radically outperformed expectations in identifying hate speech (91% success) relative to a human moderator, and we continued to improve the model over the course of the project.

When @WeCounterHate replied to a tweet containing hate, it reduced the spread of that hate by an average of 54%. Furthermore, 19% of the "hatefluencers" deleted their original tweet outright once it had been countered.

By our estimates, the Hate Tweets we countered were shared roughly 20 million fewer times compared to similar Hate Tweets by the same authors that weren't countered.
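The spread-reduction figure above rests on a comparison between countered Hate Tweets and similar uncountered tweets by the same authors. A minimal sketch of that comparison, using invented retweet counts (the article does not publish its raw data), might look like:

```python
# Hypothetical sketch of the spread-reduction metric: compare average
# engagement on countered Hate Tweets vs. comparable uncountered tweets
# by the same authors. All numbers here are invented toy data.

def percent_reduction(countered_retweets, uncountered_retweets):
    """Percent drop in average retweets for countered vs. uncountered tweets."""
    avg_countered = sum(countered_retweets) / len(countered_retweets)
    avg_uncountered = sum(uncountered_retweets) / len(uncountered_retweets)
    return (avg_uncountered - avg_countered) / avg_uncountered * 100

countered = [12, 8, 20, 6]      # retweet counts after a counter-reply (toy)
uncountered = [30, 18, 40, 12]  # comparable tweets left uncountered (toy)

print(f"Average spread reduction: {percent_reduction(countered, uncountered):.0f}%")
```

In practice the matching of "similar" tweets (same author, comparable topic and timing) is the hard part; the arithmetic itself is straightforward.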

Matt: It was a pretty mind-bending exercise for people working in an ad agency, who have spent our entire careers trying to gain exposure for the work we do on behalf of clients, to suddenly be trying to reduce impressions. We even began referring to WCH as the world's first reverse-media plan, designed to reduce impressions by stopping retweets.

Aziz: So now that the project has ended, how do you hope to take this idea forward in an open source way?

Herron: Our hope was to counter hate speech online while collecting insightful data about how hate speech online propagates. Going forward, hopefully this data will allow experts in the field to address the hate speech problem at a more systemic level. Our goal is to publicly open-source the archived data that has been gathered, hopefully next quarter (Q1 2020).

I love this idea on so many different levels. The ingenuity of finding a way to counteract hate speech without resorting to censorship. The partnership with Life After Hate to improve the sophistication of the detection. And the potential for this same model to be applied to so many different problems in the world (anyone want to build a version for climate change deniers?). It proves that the creativity of the advertising world can truly be turned into a force for good, and for that I salute the team at Possible for showing what's, well, possible.

See the article here:

The Power Of Purpose: How We Counter Hate Used Artificial Intelligence To Battle Hate Speech Online - Forbes

One key to artificial intelligence on the battlefield: trust – C4ISRNet

To understand how humans might better marshal autonomous forces during battle in the near future, it helps to first consider the nature of mission command in the past.

Derived from a Prussian school of battle, mission command is a form of decentralized command and control. Think of a commander who is given an objective and then trusted to meet that goal to the best of their ability, without conferring with higher-ups before taking further action. It is a style of operating with its own advantages and hurdles, and those obstacles map closely onto the autonomous battlefield.

"At one level, mission command really is a management of trust," said Ben Jensen, a professor of strategic studies at the Marine Corps University. Jensen spoke as part of a panel on multidomain operations at the Association of the United States Army AI and Autonomy symposium in November. "We're continually moving choice and agency from the individual because of optimized algorithms helping [decision-making]. Is this fundamentally irreconcilable with the concept of mission command?"

The problem for military leaders, then, is twofold: can humans trust the information and advice they receive from artificial intelligence? And, relatedly, can those humans also trust that any autonomous machines they are directing are pursuing objectives the same way people would?

To the first point, Robert Brown, director of the Pentagon's multidomain task force, emphasized that using AI tools means trusting commanders to act on that information in a timely manner.

"Mission command is saying: you're going to provide your subordinates the best data you can get them, and you're going to need AI to get that quality data. But then that's balanced with their own ground and then the art of what's happening," Brown said. "We have to be careful. You certainly can lose that speed and velocity of decision."

Before the tools ever get to the battlefield, before the algorithms are ever bent toward war, military leaders must ensure the tools as designed actually do what service members need.

"How do we create the right type of decision aids that still empower people to make the call, but give them the information content to move faster?" said Tony Frazier, an executive at Maxar Technologies.


An intelligence product, using AI to provide analysis and information to combatants, will have to fall in the sweet spot of offering actionable intelligence, without bogging the recipient down in details or leaving them uninformed.

"One thing that's remained consistent is folks will do one of three things with overwhelming information," Brown said. "They will wait for perfect information. They'll just wait, wait, wait; they'll never have perfect information and adversaries [will have] done 10 other things, by the way. Or they'll be overwhelmed and disregard the information."

The third path users will take, Brown said, is the very task commanders want them to follow: find golden needles in eight stacks of information to help them make a decision in a timely manner.

Getting there, however, where information is empowering instead of paralyzing or disheartening, is the work of training. Adapting for the future means practicing in the future environment, and that means getting new practitioners familiar with the kinds of information they can expect on the battlefield.

"Our adversaries are going to bring a lot of dilemmas our way, and so our ability to comprehend those challenges and then hopefully not just react but proactively do something to prevent those actions is absolutely critical," said Brig. Gen. David Kumashiro, the director of Joint Force Integration for the Air Force.

When a battle has thousands of kill chains, and analysis that stretches over hundreds of hours, humans have a difficult time comprehending what is happening. In the future, it will be the job of artificial intelligence to filter these threats. Meanwhile, it will be the role of the human in the loop to take that filtered information and respond as best they can to the threats arrayed against them.

"What does it mean to articulate mission command in that environment, the understanding, the intent, and the trust?" said Kumashiro, referring to the fast pace of AI filtering. "When the highly contested environment disrupts those connections, when we are disconnected from the hive, those authorities need to be understood so that our war fighters at the farthest reaches of the tactical edge can still perform what they need to do."

Planning not just for how these AI tools work in ideal conditions, but for how they will hold up under the degradation of a modern battlefield, is essential for making technology an aid, and not a hindrance, to the forces of the future.

"If the data goes away and you've still got the mission, you've got to attend to it," said Brown. "That's a huge factor as well for practice. If you're relying only on the data, you'll fail miserably in degraded mode."

More:

One key to artificial intelligence on the battlefield: trust - C4ISRNet

In the 2020s, human-level A.I. will arrive, and finally ace the Turing test – Inverse

The past decade has seen the rise of remarkably human personal assistants, increasing automation in transportation and industrial environments, and even the alleged passing of Alan Turing's famous robot consciousness test. Such innovations have taken artificial intelligence out of labs and into our hands.

A.I. programs have become painters, drivers, doctors' assistants, and even friends. But with these new benefits have also come increasing dangers. The decade now ending saw the first, and likely not the last, death caused by a self-driving car.

This is #20 on Inverse's 20 predictions for the 2020s.

And as we head toward another decade of machine learning and robotics research, questions surrounding the moral programming of A.I. and the limits of its autonomy will no longer be mere thought experiments but time-sensitive problems.

One such area to keep an eye on going forward into the new decade will be partially defined by this question: what kind of legal status will A.I.s be granted as their capabilities and intelligence continue to scale closer to those of humans? This is a conversation the archipelago nation of Malta started in 2018, when its leaders proposed that it should prepare to grant or deny citizenship to A.I.s just as it would humans.

The logic is that A.I.s of the future could have just as much agency and potential to cause disruption as any non-robotic being. Francois Piccione, a policy advisor for the Maltese government, told Inverse in 2019 that not taking such measures would be irresponsible.

"Artificial Intelligence is being seen in many quarters as the most transformative technology since the invention of electricity," said Piccione. "To realize that such a revolution is taking place and not do one's best to prepare for it would be irresponsible."

While the 2020s might not see fully fledged citizenship for A.I.s, Inverse predicts that there will be increasing legal scrutiny in coming years over who is legally responsible for the actions of A.I., whether it be the owners or the companies designing them. Instead of citizenship or visas for A.I., this could lead to further restrictions on the humans who travel with them and on the ways in which A.I. can be used in different settings.

Another critical point of increasing scrutiny in the coming years will be how to ensure A.I. programmers continue to think critically about the algorithms they design.

This past decade saw racism and death result from poorly designed algorithms and even poorer introspection. Inverse predicts that as A.I. continues to scale, labs will increasingly call upon outside experts, such as ethicists and moral psychologists, to make sure these human-like machines are not doomed to repeat our same dehumanizing mistakes.

As 2019 draws to a close, Inverse is looking to the future. These are our 20 predictions for science and technology for the 2020s. Some are terrifying, some are fascinating, and others we can barely wait for. This has been #20. Read a related story here.

More here:

In the 2020s, human-level A.I. will arrive, and finally ace the Turing test - Inverse