Archive for the ‘Artificial Intelligence’ Category

COVID: Artificial intelligence in the pandemic – DW (English)

If artificial intelligence is the future, then the future is now. This pandemic has shown us just how fast artificial intelligence, or AI, works and what it can do in so many different ways.

Right from the start, AI has helped us learn about SARS-CoV-2, the virus that causes COVID-19 infections.

It's helped scientists analyse the virus's genetic information, its RNA genome, at speed. That genetic code is the stuff that makes the virus, indeed any living thing, what it is. And if you want to defend yourself, you had better know your enemy.

AI has also helped scientists understand how fast the virus mutates and helped them develop and test vaccines.

We won't be able to get into all of it; this is just an overview. But let's start by recapping the basics about AI.

An AI is a set of instructions that tells a computer what to do, from recognizing faces in the photo albums on our phones to sifting through huge dumps of data for that proverbial needle in a haystack.

People often call them algorithms. It sounds fancy, but an algorithm is nothing more than a static list of rules that tells a computer: "If this, then that."

A machine learning (ML) algorithm, meanwhile, is the kind of AI that many of us like to fear. It's an AI that can learn from the things it reads and analyzes and teach itself to do new things. And we humans often feel like we can't control or even know what ML algorithms learn. But actually, we can, because we write the original code. So you can afford to relax. A bit.
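The distinction can be made concrete with a small sketch. Everything below is hypothetical and invented for illustration (it is not any real triage tool): a static "if this, then that" rule list next to a minimal "learning" step that picks its own rule from labelled examples instead of having it hard-coded.

```python
def static_triage(temperature_c: float, has_cough: bool) -> str:
    # A fixed "if this, then that" rule: it never changes on its own.
    if temperature_c >= 38.0 and has_cough:
        return "refer to hospital"
    return "remote advice"

def learned_threshold(samples):
    # A minimal "learning" step: instead of hard-coding 38.0, pick the
    # temperature cutoff that best separates labelled (temperature, sick)
    # examples. Real ML generalises this idea to millions of parameters.
    best_cutoff, best_correct = None, -1
    for cutoff in [36.5, 37.0, 37.5, 38.0, 38.5]:
        correct = sum((t >= cutoff) == sick for t, sick in samples)
        if correct > best_correct:
            best_cutoff, best_correct = cutoff, correct
    return best_cutoff

print(static_triage(38.5, True))   # refer to hospital
print(learned_threshold([(36.6, False), (37.2, False),
                         (38.2, True), (39.0, True)]))   # 37.5
```

The point of the contrast: the first function's behaviour is fully fixed by its author, while the second one's behaviour depends on the data it is shown, which is exactly why ML systems can feel opaque even though the surrounding code is ours.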

In summary, AI and ML systems are programs that let us process lots and lots of information, a lot of it "raw" data, very fast. They are not all evil monsters out to kill us or steal our jobs; not necessarily, anyway.

With COVID-19, AI and ML may have helped save a few lives. They have been used in diagnostic tools that read vast numbers of chest X-rays faster than any radiologist. That's helped doctors identify and monitor COVID patients.

In Nigeria, the technology has been used at a very basic but practical level to help people assess their risk of getting infected. People answer a series of questions online and, depending on their answers, are offered remote medical advice or redirected to a hospital.

The makers, a company called Wellvis, say it has reduced the number of people calling disease control hotlines unnecessarily.

One of the most important things we've had to handle is finding out who is infected, fast. And in South Korea, artificial intelligence gave doctors a head start.

Way back when the rest of the world was still wondering whether it was time to go into the first lockdown, a company in Seoul used AI to develop a COVID-19 test in mere weeks. It would have taken them months without AI.

It was "unheard of," said Youngsahng "Jerry" Suh, head of data science and AI development at the company, Seegene, in an interview with DW.

Seegene's scientists ordered raw materials for the kits on January 24 and by February 5, the first version of the test was ready.

It was only the third time the company had used its supercomputer and Big Data analysis to design a test.

But they must have done something right because by mid-March 2020, international reports suggested that South Korea had tested 230,000 people.

And, at least for a while, the country was able to keep the number of new infections per day relatively flat.

"And we're constantly updating that as new variants and mutations come to light. So, that allows our machine learning algorithm to detect those new variants as well," says Suh.

One of the other major issues we've had to handle is tracking how the disease, especially new variants and their mutations, spreads through a community and from country to country.

In South Africa, researchers used an AI-based algorithm to predict future daily confirmed cases of COVID-19.

It was based on South Africa's past infection data and other information, such as the way people move from one community to another.

They say they showed the country had a low risk of a third wave of the pandemic.

"People thought the beta variant was going to spread around the continent and overwhelm our health systems, but with AI we were able to control that," says Jude Kong, who leads the Africa-Canada Artificial Intelligence and Data Innovation Consortium.

The project is a collaboration between Wits University and the Provincial Government of Gauteng in South Africa and York University in Canada, where Kong, who comes from Cameroon, is an assistant professor.

Kong says "data is very sparse in Africa" and one of the problems is getting over the stigma attached to any kind of illness, whether it's COVID, HIV, Ebola or malaria.

But AI has helped them "reveal hidden realities" specific to each area, and that's informed local health policies, he says.

They have deployed their AI modelling in Botswana, Cameroon, Eswatini, Mozambique, Namibia, Nigeria, Rwanda, South Africa, and Zimbabwe.

"A lot of information is one-dimensional," Kong says. "You know the number of people entering a hospital and those that get out. But hidden below that is their age, comorbidities, and the community where they live. We reveal that with AI to determine how vulnerable they are and inform policy makers."

Other types of AI, similar to facial recognition algorithms, can be used to detect infected people, or those with elevated temperatures, in crowds. And AI-driven robots can clean hospitals and other public spaces.

But, beyond that, there are experts who say AI's potential has been overstated.

They include Neil Lawrence, a professor of machine learning at the University of Cambridge who was quoted in April 2020, calling out AI as "hyped."

It was not surprising, he said, that in a pandemic, researchers fell back on tried and tested techniques, like simple mathematical modelling. But one day, he said, AI might be useful.

That was only 15 months ago. And look how far we've come.

That's how to do it: if humans have COVID-19, dogs had better cuddle with their stuffed animals. Researchers from Utrecht in the Netherlands took nasal swabs and blood samples from 48 cats and 54 dogs whose owners had contracted COVID-19 in the previous 200 days. Lo and behold, they found the virus in 17.4% of cases; 4.2% of the animals also showed symptoms.

About a quarter of the infected animals were also sick. Although the course of the illness was mild in most of them, three cases were considered severe. Nevertheless, medical experts are not very concerned: they say pets do not play an important role in the pandemic. The biggest risk is human-to-human transmission.

The fact that cats can become infected with coronaviruses has been known since March 2020. At that time, the Veterinary Research Institute in Harbin, China, showed for the first time that the novel coronavirus can replicate in cats. Cats can also pass the virus on to other felines, but not very easily, said veterinarian Hualan Chen at the time.

But cat owners shouldn't panic. Felines quickly form antibodies to the virus, so they aren't contagious for very long. Anyone who is acutely ill with COVID-19 should temporarily restrict outdoor access for domestic cats. Healthy people should wash their hands thoroughly after petting strange animals.

Should this pet pig keep a safe distance from the dog when walking in Rome? That question may now have to be reassessed. Pigs are unlikely candidates as carriers of the coronavirus, the Harbin veterinarians argued in 2020. But at that time they had also cleared dogs of suspicion. Does that still apply?

Nadia, a four-year-old Malayan tiger, was one of the first big cats found to have the virus, in 2020 at a New York zoo. "It is, to our knowledge, the first time a wild animal has contracted COVID-19 from a human," the zoo's chief veterinarian told National Geographic magazine.

It is thought that the virus originated in the wild. So far, bats are considered the most likely first carriers of SARS-CoV-2. However, veterinarians assume there must have been another species as an intermediate host between them and humans in Wuhan, China, in December 2019. Only which species this could be is unclear.

This raccoon dog is a known carrier of SARS viruses. German virologist Christian Drosten has spoken about the species being a potential virus carrier. "Raccoon dogs are trapped on a large scale in China or bred on farms for their fur," he said. For Drosten, the raccoon dog is clearly the prime suspect.

Pangolins are also under suspicion of transmitting the virus. Researchers from Hong Kong, mainland China and Australia have detected a virus in a Malayan pangolin that shows stunning similarities to SARS-CoV-2.

Hualan Chen also experimented with ferrets. The result: SARS-CoV-2 can multiply in these mustelids just as it does in cats. Transmission between animals occurs via droplet infection. At the end of 2020, tens of thousands of minks had to be culled on fur farms worldwide because the animals had become infected with SARS-CoV-2.

Experts have given the all-clear for people who handle poultry, such as this trader in Wuhan, China, where scientists believe the first case of the virus emerged in 2019. Humans have nothing to worry about, as chickens are practically immune to the SARS-CoV-2 virus, as are ducks and other bird species.

Author: Fabian Schmidt


Astronomers use artificial intelligence to reveal the true shape of universe – WION

The universe comes off as a vast and immeasurable entity whose depths are imperceptible to Earthlings. But in the pursuit of simplifying all that surrounds us, scientists have made great strides in understanding the space we inhabit.

Now, Japanese astronomers have developed an astounding technique to measure the universe. Using artificial intelligence, scientists were able to remove noise in astronomical data which is caused by random variations in the shapes of galaxies.

What did the scientists do?

Scientists used supercomputer simulations and tested the technique on large mock datasets before applying it to real data from space. After extensive testing, scientists used the tool on data from Japan's Subaru Telescope.

To their surprise, it worked! The results that followed remained largely in sync with the currently accepted models of the universe. If employed on a bigger scale, the tool could help scientists analyse expansive data from astronomical surveys.

Current methods cannot effectively get rid of the noise that pervades all data from space. To avoid interference from noisy data, the team used the world's most advanced astronomy supercomputer, called ATERUI II.

Using real data from the Subaru Telescope, they generated 25,000 mock galaxy catalogues.


What's causing data distortion?

All data from space can be distorted by the gravity of what's in the foreground eclipsing its background. This is called gravitational lensing. Measurements of such lensing are used to better understand the universe. Essentially, a galaxy directly visible to us could be distorting data about what lies behind it.

But it's difficult to tell apart galaxies that simply look odd from those whose shapes are distorted by lensing. This is called shape noise, and it regularly gets in the way of understanding the universe.

Based on these understandings, scientists added noise to the artificial data sets and trained AI to recover lensing data from the mock data. The AI was able to highlight previously unobservable details from this data.
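The training recipe described above can be sketched in miniature. This is a toy illustration under invented assumptions, not the team's actual pipeline: generate mock "signals", corrupt them with noise, and select the denoiser setting (here, just a moving-average window width) that best recovers the known clean signal, before trusting that setting on real data.

```python
import random

def smooth(xs, w):
    # Moving average with half-width w (w=0 leaves the data unchanged).
    n = len(xs)
    return [sum(xs[max(0, i - w):i + w + 1]) / (min(n, i + w + 1) - max(0, i - w))
            for i in range(n)]

def make_mocks(n_mocks=100, length=50, noise=0.5, seed=1):
    # A toy slowly-varying "signal" plus Gaussian noise, standing in for
    # mock galaxy catalogues with shape noise added.
    rng = random.Random(seed)
    clean = [i / length for i in range(length)]
    mocks = [[c + rng.gauss(0, noise) for c in clean] for _ in range(n_mocks)]
    return clean, mocks

def train_denoiser(clean, mocks, candidates=range(6)):
    # "Training": choose the smoothing window that minimises squared error
    # against the known truth across all mock catalogues.
    def err(w):
        return sum((r - c) ** 2
                   for noisy in mocks
                   for r, c in zip(smooth(noisy, w), clean))
    return min(candidates, key=err)

clean, mocks = make_mocks()
print(train_denoiser(clean, mocks))  # prints the best-performing window width
```

The key idea carried over from the article is that because the mock data's true signal is known, the denoiser can be tuned and validated there first; only then is it applied to real survey data, where the truth is unknown.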

Building on this, scientists used the AI model on real-world data covering 21 square degrees of the sky. They found that the details registered about the foreground were consistent with existing knowledge about the cosmos.


The research was published in the April issue of Monthly Notices of the Royal Astronomical Society.


US government watchdog finds federal use of artificial intelligence poses threat to federal agencies and public – JURIST

The US Government Accountability Office (GAO) released a public report Tuesday stating that most federal agencies that use facial recognition technology systems are unaware of the privacy and accuracy-related risks that such systems pose to federal agencies and the American public.

After holding a forum on AI oversight, the GAO developed an artificial intelligence (AI) accountability framework focused on governance, data, performance, and monitoring to help federal agencies and others use AI responsibly.

Of the 42 federal agencies that the GAO surveyed, 20 reported owning or using facial recognition technology systems. The GAO confirmed that most federal agencies that use facial recognition technology are unaware of which AI systems their employees use; hence, the GAO remarked that these agencies have not fully assessed the potential risks of using these systems, such as risks related to privacy and accuracy. Consequently, the GAO also noted that the use of these AI systems can pose "[n]umerous risks" to federal agencies and the public.

The GAO, which has provided objective, non-partisan information on government operations for a century, said:

AI is a transformative technology with applications in medicine, agriculture, manufacturing, transportation, defense, and many other areas. It also holds substantial promise for improving government operations. Federal guidance has focused on ensuring AI is responsible, equitable, traceable, reliable, and governable. Third-party assessments and audits are important to achieving these goals. However, AI systems pose unique challenges to such oversight because their inputs and operations are not always visible.

In March, the American Civil Liberties Union (ACLU) requested information on how intelligence agencies use AI for national security. In its request, the ACLU warned that AI systems can be biased against marginalized communities and may pose a risk to civil rights.


Artificial intelligence and algorithms in the workplace – Lexology

Is removing subjective human choice from HR decisions going to create more problems than it solves?

We are all very aware of human failings when it comes to people management in the workplace, everything from unconscious bias through to wholly intentional discrimination. To that extent, handing over some management decisions to algorithms and AI (a term with no single agreed definition, but one that can cover scenarios where many algorithms work together and improve their own function) may seem like a no-brainer. The technology is certainly out there and being aggressively marketed.

The rise of the gig economy is tied into the increase in the use of algorithms and AI, as the software began to be used on platforms such as Uber in an attempt to optimise the deployment of workers. It has also been adopted in many other sectors and workplaces - including many global brands such as Amazon and Unilever. Common uses include recruitment, workforce management (eg task or shift allocation) and performance review. The benefits to business include faster decision making, more efficient workforce planning, improved speed of recruitment and the obvious reduction in opportunity for human bias.

However, the very nature of "algorithmic management" means increases in monitoring and collection of data upon which the automated, or semi-automated, decisions are made. This is particularly so for performance monitoring and brings with it the risk of monitoring and processing data without appropriate consent. Removing humans from the decision making process entirely also creates the potential for lack of accountability. Additionally, if bias is embedded into an algorithm this will increase rather than decrease the risk of discrimination.

In May 2021, the TUC and the AI Consultancy published a report - Technology Managing People: the legal implications - highlighting exactly these sorts of issues and calling for legal reform. One focus of the report is the lack of transparency that comes with AI-driven decision making: the basis of a decision is often unknown to those about whom it is made. The report points out that where it is difficult to identify when, how and by whom discrimination is introduced, it becomes more difficult for workers and employees to enforce their rights to protection from discrimination.

Other issues identified by the report include a lack of guidance for employers explaining when workers' privacy rights under the ECHR may be infringed by AI, and the risks posed by the lack of clarity in the application of the UK GDPR to the use of AI within the employment relationship. Although unfair dismissal rights provide some protection from dismissals that are factually inaccurate or opaque, and this could be applied to AI-based decision-making processes, the need for qualifying service means this protection is not universal. The UK GDPR also requires, amongst other things, that all personal data processed by AI be accurate, but a complaint arising from such a breach cannot, in itself, be brought within the employment tribunal system.

The TUC report makes a number of recommendations on how these issues can be overcome. Among them are: the provision of statutory guidance on how to avoid discrimination in the use of AI and on the interplay between AI and workers' rights to privacy; the introduction of a statutory right not to be subjected to detrimental treatment (including dismissal) due to the processing of inaccurate data; a right to "explainability" in relation to high-risk AI systems; and a change to the UK's data protection regime to state that discriminatory data processing is always unlawful. However, even if any of these proposals are acted upon by the UK government, they will take time to implement.

For employers looking for ideas on good practice in this area, the policy paper published by ACAS - My boss the algorithm: an ethical look at algorithms in the workplace - is a good starting point, although it should be noted this is not ACAS guidance. The recommendations look at what can be done at a human level within a business. Key to those recommendations is the need for human input - algorithms being used alongside human management rather than replacing it. This is something that the TUC report also picks up on, albeit more formally suggesting that there should be a comprehensive and universal right to human review of AI decisions made in the workplace that are "high risk". Both reports also highlight the need for good communication between employers and employees (or their representatives) to ensure technology is effectively used to improve workplace outcomes.

Given the growth in this area, further regulation to manage the use of algorithms and AI in the workplace seems inevitable. In the meantime, businesses making use of this technology need to fully understand exactly what it does, where there are risks to its use and the importance of transparency in its use.


A History of Regular Expressions and Artificial Intelligence – kottke.org

I have an unusually good memory, especially for symbols, words, and text, but since I don't use regular expressions (ahem) regularly, they're one of those parts of computer programming and HTML/EPUB editing that I find myself relearning over and over each time I need them. How did something this arcane but powerful even get started? Naturally, its creators were trying to discover (or model) artificial intelligence.

That's the crux of this short history of regex by Buzz Andersen over at Why is this interesting?

The term itself originated with mathematician Stephen Kleene. In 1943, neuroscientist Warren McCulloch and logician Walter Pitts had just described the first mathematical model of an artificial neuron, and Kleene, who specialized in theories of computation, wanted to investigate what networks of these artificial neurons could, well, theoretically compute.

In a 1951 paper for the RAND Corporation, Kleene reasoned about the types of patterns neural networks were able to detect by applying them to very simple toy languages: so-called "regular languages." For example: given a language whose grammar allows only the letters A and B, is there a neural network that can detect whether an arbitrary string of letters is valid within the A/B grammar or not? Kleene developed an algebraic notation for encapsulating these regular grammars (for example, a*b* in the case of our A/B language), and the regular expression was born.
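Kleene's a*b* notation survives almost unchanged in modern regex engines, so the A/B grammar question above can be asked directly in a few lines of Python (a sketch; the helper name is mine):

```python
import re

# Kleene's toy "A/B" language: any number of a's followed by any number of b's.
AB_GRAMMAR = re.compile(r"a*b*")

def is_valid(s: str) -> bool:
    # fullmatch requires the entire string to fit the pattern, which mirrors
    # the original question: "is this whole string valid in the grammar?"
    return AB_GRAMMAR.fullmatch(s) is not None

print(is_valid("aaabb"))  # True: a's, then b's
print(is_valid("aba"))    # False: an 'a' appears after a 'b'
print(is_valid(""))       # True: zero a's and zero b's is allowed by a*b*
```

Note the use of `fullmatch` rather than `search`: `search` would report "aba" as valid because it contains a substring ("ab") that matches the pattern.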

Kleene's work was later expanded upon by such luminaries as linguist Noam Chomsky and AI researcher Marvin Minsky, who formally established the relationship between regular expressions, neural networks, and a class of theoretical computing abstractions called finite state machines.

This whole line of inquiry soon falls apart, for reasons both structural and interpersonal: Pitts, McCulloch, and Jerome Lettvin (another early AI researcher) have a big falling out with Norbert Wiener (of cybernetics fame), Minsky writes a book (Perceptrons) that throws cold water on the whole "simple neural network as a model of the human mind" thing, and Pitts drinks himself to death. Minsky later gets mixed up with Jeffrey Epstein's philanthropy/sex trafficking ring. The world of early theoretical AI is just weird.

But! Ken Thompson, one of the creators of UNIX at Bell Labs, comes along and starts using regexes for text-editor searches in 1968. And renewed takes on neural networks come along in the 21st century, giving some of that older research new life in machine learning and other algorithms. So, until Skynet/global warming kills us all, it all kind of works out? At least, intellectually speaking.

(Via Jim Ray)

