Archive for the ‘Artificial Intelligence’ Category

Using A.I. to Find Bias in A.I. – The New York Times

In 2018, Liz O'Sullivan and her colleagues at a prominent artificial intelligence start-up began work on a system that could automatically remove nudity and other explicit images from the internet.

They sent millions of online photos to workers in India, who spent weeks adding tags to explicit material. The data paired with the photos would be used to teach A.I. software how to recognize indecent images. But once the photos were tagged, Ms. O'Sullivan and her team noticed a problem: The Indian workers had classified all images of same-sex couples as indecent.

For Ms. O'Sullivan, the moment showed how easily and often bias could creep into artificial intelligence. "It was a cruel game of Whac-a-Mole," she said.

This month, Ms. O'Sullivan, a 36-year-old New Yorker, was named chief executive of a new company, Parity. The start-up is one of many organizations, including more than a dozen start-ups and some of the biggest names in tech, offering tools and services designed to identify and remove bias from A.I. systems.

Soon, businesses may need that help. In April, the Federal Trade Commission warned against the sale of A.I. systems that were racially biased or could prevent individuals from receiving employment, housing, insurance or other benefits. A week later, the European Union unveiled draft regulations that could punish companies for offering such technology.

It is unclear how regulators might police bias. This past week, the National Institute of Standards and Technology, a government research lab whose work often informs policy, released a proposal detailing how businesses can fight bias in A.I., including changes in the way technology is conceived and built.

Many in the tech industry believe businesses must start preparing for a crackdown. "Some sort of legislation or regulation is inevitable," said Christian Troncoso, the senior director of legal policy for the Software Alliance, a trade group that represents some of the biggest and oldest software companies. "Every time there is one of these terrible stories about A.I., it chips away at public trust and faith."

Over the past several years, studies have shown that facial recognition services, health care systems and even talking digital assistants can be biased against women, people of color and other marginalized groups. Amid a growing chorus of complaints over the issue, some local regulators have already taken action.

In late 2019, state regulators in New York opened an investigation of UnitedHealth Group after a study found that an algorithm used by a hospital prioritized care for white patients over Black patients, even when the white patients were healthier. Last year, the state investigated the Apple Card credit service after claims it was discriminating against women. Regulators ruled that Goldman Sachs, which operated the card, did not discriminate, while the status of the UnitedHealth investigation is unclear.

A spokesman for UnitedHealth, Tyler Mason, said the company's algorithm had been misused by one of its partners and was not racially biased. Apple declined to comment.

More than $100 million has been invested over the past six months in companies exploring ethical issues involving artificial intelligence, after $186 million last year, according to PitchBook, a research firm that tracks financial activity.

But efforts to address the problem reached a tipping point this month when the Software Alliance offered a detailed framework for fighting bias in A.I., including the recognition that some automated technologies require regular oversight from humans. The trade group believes the document can help companies change their behavior and can show regulators and lawmakers how to control the problem.

Though they have been criticized for bias in their own systems, Amazon, IBM, Google and Microsoft also offer tools for fighting it.

Ms. O'Sullivan said there was no simple solution to bias in A.I. A thornier issue is that some in the industry question whether the problem is as widespread or as harmful as she believes it is.

"Changing mentalities does not happen overnight, and that is even more true when you're talking about large companies," she said. "You are trying to change not just one person's mind but many minds."

When she started advising businesses on A.I. bias more than two years ago, Ms. O'Sullivan was often met with skepticism. Many executives and engineers espoused what they called "fairness through unawareness," arguing that the best way to build equitable technology was to ignore issues like race and gender.

Increasingly, companies were building systems that learned tasks by analyzing vast amounts of data, including photos, sounds, text and stats. The belief was that if a system learned from as much data as possible, fairness would follow.

But as Ms. O'Sullivan saw after the tagging done in India, bias can creep into a system when designers choose the wrong data or sort through it in the wrong way. Studies show that face-recognition services can be biased against women and people of color when they are trained on photo collections dominated by white men.
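
To see why "fairness through unawareness" fails, consider a minimal sketch (my illustration, not from the article): even when the protected attribute is left out of the training data, a correlated feature can act as a proxy for it. The `neighborhood` variable and all numbers below are invented for demonstration.

```python
# Minimal synthetic sketch: dropping a protected attribute does not remove
# bias when another feature acts as a proxy for it.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

# Hypothetical setup: 'group' is the protected attribute; 'neighborhood'
# is a correlated feature that leaks group membership.
group = rng.integers(0, 2, n)
neighborhood = group + rng.normal(0.0, 0.3, n)
skill = rng.normal(0.0, 1.0, n)

# Historical labels carry a bias against group 1.
label = (skill - 0.8 * group + rng.normal(0.0, 0.5, n) > 0).astype(int)

# "Fairness through unawareness": the group column is deliberately left out.
X = np.column_stack([neighborhood, skill])
pred = LogisticRegression().fit(X, label).predict(X)

# The disparity survives, because the model reads group off the proxy.
for g in (0, 1):
    print(f"group {g}: positive-prediction rate = {pred[group == g].mean():.2f}")
```

Run as written, the two groups receive noticeably different positive-prediction rates even though the model never saw the group column, which is exactly the kind of leakage the article's sources warn about.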

Designers can be blind to these problems. The workers in India, where gay relationships were still illegal at the time and where attitudes toward gays and lesbians were very different from those in the United States, were classifying the photos as they saw fit.

Ms. O'Sullivan saw the flaws and pitfalls of artificial intelligence while working for Clarifai, the company that ran the tagging project. She said she had left the company after realizing it was building systems for the military that she believed could eventually be used to kill. Clarifai did not respond to a request for comment.

She now believes that after years of public complaints over bias in A.I., not to mention the threat of regulation, attitudes are changing. In its new framework for curbing harmful bias, the Software Alliance warned against "fairness through unawareness," saying the argument did not hold up.

"They are acknowledging that you need to turn over the rocks and see what is underneath," Ms. O'Sullivan said.

Still, there is resistance. She said a recent clash at Google, where two ethics researchers were pushed out, was indicative of the situation at many companies. Efforts to fight bias often clash with corporate culture and the unceasing push to build new technology, get it out the door and start making money.

It is also still difficult to know just how serious the problem is. "We have very little data needed to model the broader societal safety issues with these systems, including bias," said Jack Clark, one of the authors of the A.I. Index, an effort to track A.I. technology and policy across the globe. "Many of the things that the average person cares about, such as fairness, are not yet being measured in a disciplined or a large-scale way."

Ms. O'Sullivan, a philosophy major in college and a member of the American Civil Liberties Union, is building Parity around a tool designed by and licensed from Rumman Chowdhury, a well-known A.I. ethics researcher who spent years at the business consultancy Accenture before becoming an executive at Twitter. Dr. Chowdhury founded an earlier version of Parity and built it around the same tool.

While other start-ups, like Fiddler A.I. and Weights and Biases, offer tools for monitoring A.I. services and identifying potentially biased behavior, Parity's technology aims to analyze the data, technologies and methods a business uses to build its services and then pinpoint areas of risk and suggest changes.
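
As a hedged illustration of the kind of measurement such monitoring tools report, here is one widely used fairness metric, the demographic parity difference. This is a generic textbook example, not the actual method of Parity, Fiddler A.I., or Weights and Biases.

```python
# Demographic parity difference: the gap in positive-prediction rates
# between groups (0 means parity). Illustrative only.
import numpy as np

def demographic_parity_difference(predictions, groups):
    """Max gap in positive-prediction rate across groups."""
    rates = [predictions[groups == g].mean() for g in np.unique(groups)]
    return max(rates) - min(rates)

# Hypothetical example: a model approves 60% of group 0 but 40% of group 1.
preds = np.array([1, 1, 1, 0, 0, 1, 0, 0, 1, 0])
grps  = np.array([0, 0, 0, 0, 0, 1, 1, 1, 1, 1])
print(demographic_parity_difference(preds, grps))  # 0.2
```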

The tool uses artificial intelligence technology that can be biased in its own right, showing the double-edged nature of A.I. and the difficulty of Ms. O'Sullivan's task.

Tools that can identify bias in A.I. are imperfect, just as A.I. is imperfect. But the power of such a tool, she said, is to pinpoint potential problems, to get people looking closely at the issue.

Ultimately, she explained, the goal is to create a wider dialogue among people with a broad range of views. The trouble comes when the problem is ignored or when those discussing the issues carry the same point of view.

"You need diverse perspectives. But can you get truly diverse perspectives at one company?" Ms. O'Sullivan asked. "It is a very important question I am not sure I can answer."

See the original post:
Using A.I. to Find Bias in A.I. - The New York Times

Four ways artificial intelligence is helping us learn about the universe – The Conversation UK

Astronomy is all about data. The universe is getting bigger and so too is the amount of information we have about it. But some of the biggest challenges of the next generation of astronomy lie in just how we're going to study all the data we're collecting.

To take on these challenges, astronomers are turning to machine learning and artificial intelligence (AI) to build new tools to rapidly search for the next big breakthroughs. Here are four ways AI is helping astronomers.

There are a few ways to find a planet, but the most successful has been by studying transits. When an exoplanet passes in front of its parent star, it blocks some of the light we can see.

By observing many orbits of an exoplanet, astronomers build a picture of the dips in the light, which they can use to identify the planet's properties, such as its mass, size and distance from its star. Nasa's Kepler space telescope employed this technique to great success by watching thousands of stars at once, keeping an eye out for the telltale dips caused by planets.

Humans are pretty good at seeing these dips, but it's a skill that takes time to develop. With more missions devoted to finding new exoplanets, such as Nasa's TESS (Transiting Exoplanet Survey Satellite), humans just can't keep up. This is where AI comes in.

Time-series analysis techniques, which analyse data as a sequence ordered in time, have been combined with a type of AI to successfully identify the signals of exoplanets with up to 96% accuracy.
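
The snippet below is a toy sketch of this idea, not the method behind the 96% figure: it generates synthetic light curves with and without periodic box-shaped transit dips and trains an off-the-shelf classifier to tell them apart. All transit parameters are invented for illustration.

```python
# Toy sketch: classify light curves as transit vs. no-transit with ML.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)

def light_curve(has_transit, length=200):
    """Synthetic stellar flux: noise, plus periodic box-shaped dips."""
    flux = 1.0 + rng.normal(0, 0.001, length)
    if has_transit:
        period, width, depth = 50, 5, 0.01   # assumed toy parameters
        for start in range(10, length, period):
            flux[start:start + width] -= depth
    return flux

X = np.array([light_curve(i % 2 == 1) for i in range(2000)])
y = np.array([i % 2 for i in range(2000)])

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)
print(f"test accuracy: {clf.score(X_te, y_te):.2f}")
```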

Time-series models aren't just great for finding exoplanets; they are also perfect for finding the signals of the most catastrophic events in the universe: mergers between black holes and neutron stars.

When these incredibly dense bodies fall inwards, they send out ripples in space-time that can be detected by measuring faint signals here on Earth. Gravitational wave detector collaborations Ligo and Virgo have identified the signals of dozens of these events, all with the help of machine learning.

By training models on simulated data of black hole mergers, the teams at Ligo and Virgo can identify potential events within moments of them happening and send out alerts to astronomers around the world to turn their telescopes in the right direction.
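
A toy way to see the value of simulated signals is template matching: cross-correlating noisy data with a simulated waveform to find where a buried signal sits. Real Ligo and Virgo pipelines combine far more sophisticated matched filtering and machine learning; the chirp shape and numbers below are made up for illustration.

```python
# Toy template matching: locate a simulated "chirp" buried in noise.
import numpy as np

rng = np.random.default_rng(7)
fs, duration = 1024, 4.0                 # sample rate (Hz), seconds
t = np.arange(0, duration, 1 / fs)

# Simulated chirp: frequency and amplitude rising toward merger.
chirp_t = t[:fs]
template = np.sin(2 * np.pi * (30 + 40 * chirp_t**2) * chirp_t) * chirp_t

# Bury the template in noise at a known offset.
data = rng.normal(0, 1.0, t.size)
offset = 2 * fs
data[offset:offset + template.size] += 0.5 * template

# Cross-correlation peaks where the template best aligns with the data.
corr = np.correlate(data, template, mode="valid")
print(f"injected at sample {offset}, recovered at {np.argmax(np.abs(corr))}")
```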

Read more: What happens when black holes collide with the most dense stars in the universe

When the Vera Rubin Observatory, currently being built in Chile, comes online, it will survey the entire night sky every night, collecting over 80 terabytes of images in one go, to see how the stars and galaxies in the universe vary with time. One terabyte is 8,000,000,000,000 bits.

Over the course of the planned operations, the Legacy Survey of Space and Time being undertaken by Rubin will collect and process hundreds of petabytes of data. To put it in context, 100 petabytes is about the space it takes to store every photo on Facebook, or about 700 years of full high-definition video.
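
A quick back-of-envelope check, assuming roughly 300 observing nights a year for ten years (both assumptions mine, not figures from the article), shows how 80 terabytes a night adds up to hundreds of petabytes:

```python
# Back-of-envelope check of the quoted data volumes.
TB_PER_NIGHT = 80
NIGHTS_PER_YEAR = 300        # assumption: weather and maintenance downtime
YEARS = 10                   # assumption: planned survey duration

total_tb = TB_PER_NIGHT * NIGHTS_PER_YEAR * YEARS
print(f"raw images: ~{total_tb / 1000:.0f} PB")    # ~240 PB: hundreds of petabytes
print(f"one terabyte = {10**12 * 8:,} bits")       # 8,000,000,000,000 bits
```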

You won't be able to just log onto the servers and download that data, and even if you did, you wouldn't be able to find what you're looking for.

Machine learning techniques will be used to search these next-generation surveys and highlight the important data. For example, one algorithm might be searching the images for rare events such as supernovae (dramatic explosions at the end of a star's life), and another might be on the lookout for quasars. By training computers to recognise the signals of particular astronomical phenomena, the team will be able to get the right data to the right people.

As we collect more and more data on the universe, we sometimes even have to curate and throw away data that isn't useful. So how can we find the rarest objects in these swathes of data?

One celestial phenomenon that excites many astronomers is strong gravitational lenses. This is what happens when two galaxies line up along our line of sight and the closest galaxy's gravity acts as a lens and magnifies the more distant object, creating rings, crosses and double images.

Finding these lenses is like finding a needle in a haystack, a haystack the size of the observable universe. It's a search that's only going to get harder as we collect more and more images of galaxies.

In 2018, astronomers from around the world took part in the Strong Gravitational Lens Finding Challenge where they competed to see who could make the best algorithm for finding these lenses automatically.

The winner of this challenge used a model called a convolutional neural network, which learns to break down images using different filters until it can classify them as containing a lens or not. Surprisingly, these models were even better than people, finding subtle differences in the images that we humans have trouble noticing.
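
For a sense of what such a model looks like, here is a minimal convolutional network in PyTorch that maps a galaxy cutout to a lens/no-lens score. It is a bare-bones sketch, not the winning entry; the single-channel 64x64 input size is an assumption for illustration.

```python
# Minimal sketch of a convolutional lens/no-lens classifier.
import torch
import torch.nn as nn

class LensFinder(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(          # stacked learned filters
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classify = nn.Sequential(
            nn.Flatten(),
            nn.Linear(32 * 16 * 16, 2),         # assumes 64x64 input cutouts
        )

    def forward(self, x):
        return self.classify(self.features(x))

model = LensFinder()
fake_cutouts = torch.randn(8, 1, 64, 64)        # stand-in for survey images
logits = model(fake_cutouts)
print(logits.shape)                             # torch.Size([8, 2])
```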

Over the next decade, using new instruments like the Vera Rubin Observatory, astronomers will collect petabytes of data (that's thousands of terabytes). As we peer deeper into the universe, astronomers' research will increasingly rely on machine-learning techniques.

Continued here:
Four ways artificial intelligence is helping us learn about the universe - The Conversation UK

Astronomers Use Artificial Intelligence to Reveal the Actual Shape of the Universe – SciTechDaily

Using AI-driven data analysis to peel back the noise and find the actual shape of the Universe. Credit: The Institute of Statistical Mathematics

Japanese astronomers have developed a new artificial intelligence (AI) technique to remove noise in astronomical data due to random variations in galaxy shapes. After extensive training and testing on large mock data created by supercomputer simulations, they then applied this new tool to actual data from Japan's Subaru Telescope and found that the mass distribution derived from using this method is consistent with the currently accepted models of the Universe. This is a powerful new tool for analyzing big data from current and planned astronomy surveys.

Wide area survey data can be used to study the large-scale structure of the Universe through measurements of gravitational lensing patterns. In gravitational lensing, the gravity of a foreground object, like a cluster of galaxies, can distort the image of a background object, such as a more distant galaxy. Some examples of gravitational lensing are obvious, such as the Eye of Horus. The large-scale structure, consisting mostly of mysterious dark matter, can distort the shapes of distant galaxies as well, but the expected lensing effect is subtle. Averaging over many galaxies in an area is required to create a map of foreground dark matter distributions.

But this technique of looking at many galaxy images runs into a problem: some galaxies are just innately a little funny looking. It is difficult to distinguish between a galaxy image distorted by gravitational lensing and a galaxy that is actually distorted. This is referred to as "shape noise" and is one of the limiting factors in research studying the large-scale structure of the Universe.

To compensate for shape noise, a team of Japanese astronomers first used ATERUI II, the world's most powerful supercomputer dedicated to astronomy, to generate 25,000 mock galaxy catalogs based on real data from the Subaru Telescope. They then added realistic noise to these perfectly known artificial data sets, and trained an AI to statistically recover the lensing dark matter from the mock data.

After training, the AI was able to recover previously unobservable fine details, helping to improve our understanding of the cosmic dark matter. Then using this AI on real data covering 21 square degrees of the sky, the team found a distribution of foreground mass consistent with the standard cosmological model.
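
The published method trains a generative adversarial network on the mock catalogs; as a much simpler stand-in that mirrors the same train-on-simulations workflow, the sketch below fits a small convolutional denoiser to mock noisy/clean map pairs. Shapes, noise level and training settings are all invented.

```python
# Simplified stand-in for the train-on-simulations denoising workflow
# (the paper itself uses generative adversarial networks).
import torch
import torch.nn as nn
import torch.nn.functional as F

denoiser = nn.Sequential(
    nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 16, 3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 1, 3, padding=1),
)
opt = torch.optim.Adam(denoiser.parameters(), lr=1e-3)

for step in range(200):
    # Mock "clean" maps: smoothed random fields standing in for
    # simulated lensing mass maps.
    clean = F.avg_pool2d(torch.randn(16, 1, 32, 32), 5, stride=1, padding=2)
    noisy = clean + 0.5 * torch.randn_like(clean)   # "shape noise" analogue
    opt.zero_grad()
    loss = F.mse_loss(denoiser(noisy), clean)
    loss.backward()
    opt.step()

print(f"final training loss: {loss.item():.4f}")
```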

"This research shows the benefits of combining different types of research: observations, simulations, and AI data analysis," comments Masato Shirasaki, the leader of the team. "In this era of big data, we need to step across traditional boundaries between specialties and use all available tools to understand the data. If we can do this, it will open new fields in astronomy and other sciences."

Reference: "Noise reduction for weak lensing mass mapping: an application of generative adversarial networks to Subaru Hyper Suprime-Cam first-year data" by Masato Shirasaki, Kana Moriwaki, Taira Oogi, Naoki Yoshida, Shiro Ikeda and Takahiro Nishimichi, 9 April 2021, Monthly Notices of the Royal Astronomical Society. DOI: 10.1093/mnras/stab982

View post:
Astronomers Use Artificial Intelligence to Reveal the Actual Shape of the Universe - SciTechDaily

Artificial Intelligence used on Army operation for the first time – GOV.UK

Soldiers from the 20th Armoured Infantry Brigade used an AI engine which provides information on the surrounding environment and terrain.

Through the development of significant automation and smart analytics, the engine is able to rapidly cut through masses of complex data. By providing timely information about the environment and terrain, it enables the Army to plan its activity and outputs appropriately.

The deployment was a first of its kind for the Army. It built on close collaboration between the MOD and industry partners that developed AI specifically designed for the way the Army is trained to operate.

"The lessons this has provided are considerable, not just in terms of our support to deployed forces, but more broadly in how we inform Defence's digital transformation agenda and the best practices we must adopt to integrate and exploit leading-edge technologies."

This AI capability, which can be hosted in the cloud or operate in independent mode, saved significant time and effort, providing soldiers with instant planning support and enhancing command and control processes.

Announced by the Prime Minister last November, Defence has received an increase in funding of over £24 billion across the next four years, focusing on the ability to adapt to meet future threats. Further outlined in the Defence Command Paper, the MOD intends to invest £6.6 billion over the next four years in defence research and development, focusing on emerging technologies in artificial intelligence, AI-enabled autonomous systems, cyber, space and directed energy systems.

"This was a fantastic opportunity to use a new and innovative piece of technology in a deployed environment. The kit was shown to outperform our expectations and has clear applications for improving our level of analysis and the speed at which we conduct our planning. I'm greatly looking forward to further opportunities to work with this."

In future, the UK armed forces will increasingly use AI to predict adversaries' behaviour, perform reconnaissance and relay real-time intelligence from the battlefield.

During the annual large-scale NATO exercise, soldiers from France, Denmark, Belgium, Estonia and the UK used the technology whilst carrying out live-fire drills.

Operation Cabrit is the British Army's deployment to Estonia, where British troops are leading a multinational battlegroup as part of the enhanced Forward Presence.

Artificial Intelligence has already been incorporated in a number of key military initiatives, including the Future Combat Air System, and is the focus of several innovative funding programmes through the Defence and Security Accelerator.

Read more here:
Artificial Intelligence used on Army operation for the first time - GOV.UK

Here’s how artificial intelligence helping astronomers learn about the universe – Hindustan Times

Some of the biggest challenges of the next generation of astronomy lie in studying all the data. To take on the challenges, astronomers are turning to machine learning and artificial intelligence (AI) to build new tools to rapidly search for the next big breakthroughs.

Research by Ashley Spindler of the Department of Astrophysics, University of Hertfordshire, has thrown light on this, as reported by news agency PTI.

Here are the four ways in which AI is helping astronomers

1. Planet hunting: There are a few ways to find a planet, but the most successful has been by studying transits. When an exoplanet passes in front of its parent star, it blocks some of the light that we can see.

By observing many orbits of an exoplanet, astronomers build a picture of the dips in the light, which they can use to identify the planet's properties, such as its mass, size and distance from its star.

Time-series analysis techniques, which analyse data as a sequence ordered in time, have been combined with a type of AI to successfully identify the signals of exoplanets with up to 96 per cent accuracy.

2. Gravitational waves: Time-series models aren't just great for finding exoplanets; they are also perfect for finding the signals of the most catastrophic events in the universe: mergers between black holes and neutron stars.

When these dense bodies fall inwards, they send out ripples in space-time that can be detected by measuring faint signals here on Earth. Gravitational wave detector collaborations - Ligo and Virgo - have identified the signals of dozens of these events, all with the help of machine learning.

By training models on simulated data of black hole mergers, the teams at Ligo and Virgo can identify potential events within moments of them happening and send out alerts to astronomers around the world to turn their telescopes in the right direction.

3. The changing sky: When the Vera Rubin Observatory, currently being built in Chile, comes online, it will survey the entire night sky every night - collecting over 80 terabytes of images in one go - to see how the stars and galaxies in the universe vary with time. One terabyte is 8,000,000,000,000 bits.

Over the course of the planned operations, the Legacy Survey of Space and Time being undertaken by Rubin will collect and process hundreds of petabytes of data. To put it in context, 100 petabytes is about the space it takes to store every photo on Facebook, or about 700 years of full high-definition video.

4. Gravitational lenses: One celestial phenomenon that excites many astronomers is strong gravitational lenses. This is what happens when two galaxies line up along our line of sight and the closest galaxy's gravity acts as a lens and magnifies the more distant object, creating rings, crosses and double images.

In 2018, astronomers from around the world took part in the Strong Gravitational Lens Finding Challenge where they competed to see who could make the best algorithm for finding these lenses automatically.

The winner of this challenge used a model called a convolutional neural network, which learns to break down images using different filters until it can classify them as containing a lens or not.

Go here to read the rest:
Here's how artificial intelligence helping astronomers learn about the universe - Hindustan Times