Archive for the ‘Artificial Intelligence’ Category

"Promising development in the fight with cancer using artificial intelligence" with a new privacy-preserving approach named MEDomics -…

"We reported results of the first longitudinal approach to natural language processing of unstructured medical notes and demonstrated its ability to update and improve a prognostic model over time, as a patient's oncologic illness course unfolds," said Olivier Morin, leader of the project and head of the physics division at UCSF. The work was pioneered at UCSF and is the result of a collaboration among an international consortium comprising U.S. (UCSF, San Francisco), Canadian (McGill University, Université de Sherbrooke, Montreal) and European centers (OncoRay in Dresden and Maastricht University).

Catherine Park, M.D., co-senior author and chair of the Department of Radiation Oncology at UCSF, added: "With this data we were able to validate findings of published clinical trials using real-world data, e.g. the positive impact of immunotherapy in lung cancer. In addition, there are exciting opportunities to generate hypotheses based on associations from patients' individual health profiles and risk factors."

Professor Philippe Lambin, senior author and chair of the Department of Precision Medicine at Maastricht University, adds: "The MEDomics infrastructure allowed us to validate several new clinical hypotheses, like the importance of cardiovascular morbidity for the outcome of cancer treatment."

The consortium has created a secure, dynamic, continuously learning and expandable infrastructure, termed MEDomics, designed to constantly capture multimodal electronic health information, including imaging, across a large and multicentric healthcare system (watch the animation: https://youtu.be/2030Pdgm3_4).

Dr. Morin and collaborators added that part of the code is now open source and that they would like to expand the international consortium (visit www.medomics.ai). "Our vision is to create an open-source computation platform integrating all MEDomics developments, from which both clinical staff and research scientists could tackle a diverse range of oncological problems using AI."

Contact: Dr. Olivier Morin PhD, [emailprotected], +14153089257

SOURCE University of California - San Francisco

ucsf.edu


Research shows AI is often biased. Here’s how to make algorithms work for all of us – World Economic Forum

Can you imagine a just and equitable world where everyone, regardless of age, gender or class, has access to excellent healthcare, nutritious food and other basic human needs? Are data-driven technologies such as artificial intelligence and data science capable of achieving this or will the bias that already drives real-world outcomes eventually overtake the digital world, too?

Bias represents injustice against a person or a group. A lot of existing human bias can be transferred to machines because technologies are not neutral; they are only as good, or bad, as the people who develop them. To explain how bias can lead to prejudices, injustices and inequality in corporate organizations around the world, I will highlight two real-world examples where bias in artificial intelligence was identified and the ethical risk mitigated.

In 2014, a team of software engineers at Amazon was building a program to review the resumes of job applicants. In 2015, they realized that the system discriminated against women for technical roles. Amazon recruiters did not use the software to evaluate candidates because of these discrimination and fairness issues. Meanwhile, in 2019, San Francisco legislators voted against the use of facial-recognition technology, believing it was prone to errors when used on people with dark skin or on women.

The National Institute of Standards and Technology (NIST) conducted research that evaluated 189 facial-recognition algorithms from around 100 developers, including Toshiba, Intel and Microsoft. Speaking about the alarming conclusions, one of the authors, Patrick Grother, says: "While it is usually incorrect to make statements across algorithms, we found empirical evidence for the existence of demographic differentials in the majority of the algorithms we studied."

Ridding AI and machine learning of bias involves taking their many uses into consideration

Image: British Medical Journal

Sources of fairness and non-discrimination risk in the use of artificial intelligence include: implicit bias, sampling bias, temporal bias, over-fitting to training data, and edge cases and outliers.

Implicit bias is discrimination or prejudice against a person or group that is unconscious to the person with the bias. It is dangerous because the person is unaware of the bias whether it be on grounds of gender, race, disability, sexuality or class.

Sampling bias is a statistical problem where the random data selected from the population do not reflect the population's distribution. The sample data may be skewed towards some subset of the group.

Temporal bias is based on our perception of time. We can build a machine-learning model that works well now but fails in the future because we didn't factor in possible future changes when building the model.

Over-fitting happens when an AI model can accurately predict values from the training dataset but cannot accurately predict new data. The model adheres too closely to the training dataset and does not generalize to a larger population.
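To make the idea concrete, here is a minimal, deliberately extreme sketch in Python (the helper and all numbers are invented for illustration, not a real training pipeline): a "model" that simply memorizes its training pairs looks perfect on the training set but fails badly on inputs it has never seen.

```python
def fit_memorizer(xs, ys):
    """An extreme over-fitter: memorize every training pair exactly."""
    table = dict(zip(xs, ys))

    def predict(x):
        # Perfect recall on training points; nearest neighbour otherwise.
        if x in table:
            return table[x]
        nearest = min(table, key=lambda t: abs(t - x))
        return table[nearest]

    return predict

# Underlying rule is roughly y = 2x, but the training labels carry noise.
train_x = [1, 2, 3, 4]
train_y = [2.1, 3.9, 6.2, 7.8]
model = fit_memorizer(train_x, train_y)

train_error = sum(abs(model(x) - y) for x, y in zip(train_x, train_y))
new_error = sum(abs(model(x) - 2 * x) for x in [10, 20])
print(train_error)  # 0.0 -- looks perfect on the training data
print(new_error)    # large -- fails to generalize to new inputs
```

The memorizer scores a zero error on its own training set yet is wildly wrong away from it, which is exactly the gap between training accuracy and real-world accuracy described above.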

Edge cases are data outside the boundaries of the training dataset, and outliers are data points outside the normal distribution of the data. Errors and noise are also classified as edge cases: errors are missing or incorrect values in the dataset, while noise is data that negatively impacts the machine-learning process.

Analytical techniques require meticulous assessment of the training data for sampling bias and unequal representations of groups in the training data. You can investigate the source and characteristics of the dataset. Check the data for balance. For instance, is one gender or race represented more than the other? Is the size of the data large enough for training? Are some groups ignored?
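Checks like these can be scripted. The sketch below is a hypothetical illustration (the dataset and the `gender` attribute are invented): it computes each group's share of the data and flags any group whose representation falls below a chosen threshold.

```python
from collections import Counter

def representation_report(records, attribute, threshold=0.25):
    """Return each group's share of the dataset and a list of groups
    whose share falls below `threshold`."""
    counts = Counter(r[attribute] for r in records)
    total = sum(counts.values())
    shares = {group: n / total for group, n in counts.items()}
    flagged = [g for g, share in shares.items() if share < threshold]
    return shares, flagged

# Toy dataset: one gender is heavily over-represented.
data = [{"gender": "male"}] * 80 + [{"gender": "female"}] * 20
shares, flagged = representation_report(data, "gender")
print(shares)   # {'male': 0.8, 'female': 0.2}
print(flagged)  # ['female'] -- under-represented at the 0.25 threshold
```

The same report can be run per attribute (gender, race, age band) before training, answering the questions above about balance and ignored groups.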

A recent study on mortgage loans revealed that the predictive models used for granting or rejecting loans are not accurate for minorities. Scott Nelson, a researcher at the University of Chicago, and Laura Blattner, a Stanford University economist, found that approval rates vary between majority and minority groups because low-income and minority applicants have less data documented in their credit histories. Without a strong analytical study of the data, the cause of such bias remains undetected and unknown.

What if the environment in which you trained the model is not representative of the wider population? Expose your model to varying environments and contexts for new insights. You want to be sure that your model can generalize to a wider set of scenarios.

A review of a healthcare-based risk-prediction algorithm used on about 200 million American citizens showed racial bias. The algorithm predicts which patients should be given extra medical care, and it was found that the system favoured white patients over black patients. The problem with the algorithm's development was that it wasn't properly tested across all major racial groups before deployment.

Inclusive design emphasizes inclusion in the design process. The AI product should be designed with consideration for diverse groups such as gender, race, class, and culture. Foreseeability is about predicting the impact the AI system will have right now and over time.

Recent research published in the Journal of the American Medical Association (JAMA) reviewed more than 70 academic publications comparing the diagnostic prowess of doctors against their digital doppelgangers across several areas of clinical medicine. A lot of the data used to train the algorithms came from only three states: Massachusetts, California and New York. Will the algorithms generalize well to a wider population?

A lot of researchers are worried about algorithms for skin-cancer detection. Most of them do not perform well in detecting skin cancer on darker skin because they were trained primarily on light-skinned individuals. The developers of these skin-cancer detection models didn't apply principles of inclusive design when building them.

Testing is an important part of building a new product or service. User testing in this case refers to getting representatives from the diverse groups that will be using your AI product to test it before it is released.

This is a method of performing strategic analysis of external environments. STEEPV is an acronym for social (i.e. societal attitudes, culture and demographics), technological, economic (i.e. interest, growth and inflation rates), environmental, political and values factors. Performing a STEEPV analysis will help you detect fairness and non-discrimination risks in practice.

The COVID-19 pandemic and recent social and political unrest have created a profound sense of urgency for companies to actively work to tackle inequity.

The Forum's work on Diversity, Equality, Inclusion and Social Justice is driven by the New Economy and Society Platform, which is focused on building prosperous, inclusive and just economies and societies. In addition to its work on economic growth, revival and transformation, work, wages and job creation, and education, skills and learning, the Platform takes an integrated and holistic approach to diversity, equity, inclusion and social justice, and aims to tackle exclusion, bias and discrimination related to race, gender, ability, sexual orientation and all other forms of human diversity.

The Platform produces data, standards and insights, such as the Global Gender Gap Report and the Diversity, Equity and Inclusion 4.0 Toolkit, and drives or supports action initiatives, such as Partnering for Racial Justice in Business, The Valuable 500 Closing the Disability Inclusion Gap, Hardwiring Gender Parity in the Future of Work, Closing the Gender Gap Country Accelerators, the Partnership for Global LGBTI Equality, the Community of Chief Diversity and Inclusion Officers and the Global Future Council on Equity and Social Justice.

It is very easy for the existing bias in our society to be transferred to algorithms. We see discrimination based on race and gender easily perpetuated in machine learning. There is an urgent need for corporate organizations to be more proactive in ensuring fairness and non-discrimination as they leverage AI to improve productivity and performance. One possible solution is to have an AI ethicist on your development team to detect and mitigate ethical risks early in your project, before investing lots of time and money.

The views expressed in this article are those of the author alone and not the World Economic Forum.


The Global Artificial Intelligence Market is expected to grow by $ 13.26 billion during 2021-2025, progressing at a CAGR of almost 47% during the…

Global Artificial Intelligence Market in the Industrial Sector 2021-2025: The analyst has been monitoring the artificial intelligence market in the industrial sector, and it is poised to grow by $13.26 billion during 2021-2025, progressing at a CAGR of almost 47% during the forecast period.

New York, July 22, 2021 (GLOBE NEWSWIRE) -- Reportlinker.com announces the release of the report "Global Artificial Intelligence Market in the Industrial Sector 2021-2025" - https://www.reportlinker.com/p04647367/?utm_source=GNW

Our report on the artificial intelligence market in the industrial sector provides a holistic analysis, market size and forecast, trends, growth drivers, and challenges, as well as vendor analysis covering around 25 vendors. The report offers an up-to-date analysis of the current global market scenario, the latest trends and drivers, and the overall market environment. The market is driven by an increase in the adoption of data-guided decision making and the evolving integration of industrial IoT and big data. The analysis includes the end-user segment and the geographic landscape.

The artificial intelligence market in the industrial sector is segmented as below:

By End-user: Process industries, Discrete industries

By Geography: North America, Europe, APAC, MEA, South America

This study identifies the rise in demand for cloud-based AI solutions as one of the prime reasons driving the growth of the artificial intelligence market in the industrial sector during the next few years.

The analyst presents a detailed picture of the market through the study, synthesis, and summation of data from multiple sources, analyzed against key parameters. Our report on the artificial intelligence market in the industrial sector covers the following areas: market sizing, market forecast, and industry analysis.

This robust vendor analysis is designed to help clients improve their market position, and in line with this, this report provides a detailed analysis of several leading artificial intelligence market vendors in the industrial sector that include Alphabet Inc., Amazon.com Inc., General Electric Co., International Business Machines Corp., Intel Corp., Landing AI, Microsoft Corp., Oracle Corp., SAP SE, and Siemens AG. Also, the artificial intelligence market in the industrial sector analysis report includes information on upcoming trends and challenges that will influence market growth. This is to help companies strategize and leverage all forthcoming growth opportunities. The study was conducted using an objective combination of primary and secondary information, including inputs from key participants in the industry. The report contains a comprehensive market and vendor landscape in addition to an analysis of the key vendors.

The analyst presents a detailed picture of the market through the study, synthesis, and summation of data from multiple sources, analyzed against key parameters such as profit, pricing, competition, and promotions. It presents various market facets by identifying the key industry influencers. The data presented is comprehensive, reliable, and a result of extensive research - both primary and secondary. Technavio's market research reports provide a complete competitive landscape and an in-depth vendor selection methodology and analysis using qualitative and quantitative research to forecast accurate market growth. Read the full report: https://www.reportlinker.com/p04647367/?utm_source=GNW

About Reportlinker: ReportLinker is an award-winning market research solution. ReportLinker finds and organizes the latest industry data so you get all the market research you need - instantly, in one place.



Artificial intelligence and warfare – an introduction (podcast) – Shephard News

Introducing Shephard Studio's Artificial Intelligence on the Battlefield podcast, sponsored by our partner Systel.

Welcome to the first episode of the Artificial Intelligence on the Battlefield podcast. Listen on Apple Podcasts, Google Podcasts, Spotify and more.

Artificial Intelligence (AI) is no longer the stuff of science fiction but part of everyday reality.

The promise and massive advantages of AI have led to doctrinal shifts and reimagining of systems and approaches for militaries around the world.

AI can now be found in a vast range of military technology, from autonomous systems to data-gathering sensors.

AI and machine learning applications are much quicker than humans, reducing user workload and enhancing productivity. This poses significant challenges and opportunities for militaries.

Welcome to Shephard Studio's Artificial Intelligence on the Battlefield podcast, sponsored by our partner Systel.

Over the course of three episodes, we are looking at the evolution of AI and machine learning in modern warfare.

We discuss the changing capabilities of the US and its allies, as well as the growing challenge from their peer competitors.

And we'll consider areas such as information processing and human-machine teaming, asking what this will mean for warfighters in the coming decades.


COVID: Artificial intelligence in the pandemic – DW (English)

If artificial intelligence is the future, then the future is now. This pandemic has shown us just how fast artificial intelligence, or AI, works and what it can do in so many different ways.

Right from the start, AI has helped us learn about SARS-CoV-2, the virus that causes COVID-19 infections.

It's helped scientists analyse the virus's genetic information (its RNA genome) at speed. Genetic material is the stuff that makes the virus, indeed any living thing, what it is. And if you want to defend yourself, you had better know your enemy.

AI has also helped scientists understand how fast the virus mutates and helped them develop and test vaccines.

We won't be able to get into all of it; this is just an overview. But let's start by recapping the basics about AI.

An AI is a set of instructions that tells a computer what to do, from recognizing faces in the photo albums on our phones to sifting through huge dumps of data for that proverbial needle in a haystack.

People often call them algorithms. It sounds fancy, but an algorithm is nothing more than a static list of rules that tells a computer: "If this, then that."
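As a toy illustration of such a static rule list (the thresholds below are invented for the example, not medical advice):

```python
def triage(temperature_c):
    """A static 'if this, then that' rule list: the same input
    always produces the same output."""
    if temperature_c >= 39.0:
        return "seek medical advice"
    elif temperature_c >= 37.5:
        return "rest and monitor"
    else:
        return "no action needed"

print(triage(38.0))  # rest and monitor
print(triage(36.6))  # no action needed
```

The rules never change unless a programmer rewrites them, which is exactly what separates a plain algorithm from the machine-learning kind.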

A machine-learning (ML) algorithm, meanwhile, is the kind of AI that many of us like to fear. It's an AI that can learn from the things it reads and analyzes and teach itself to do new things. And we humans often feel like we can't control or even know what ML algorithms learn. But actually, we can, because we write the original code. So you can afford to relax. A bit.
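The difference is that a machine-learning program fits its rule from examples instead of having it written by hand. A minimal sketch (one weight fitted by plain gradient descent; the function and all numbers are illustrative):

```python
def learn_slope(pairs, lr=0.01, steps=1000):
    """Fit y ~ w * x by repeatedly nudging w to shrink the
    squared error on the example pairs."""
    w = 0.0
    for _ in range(steps):
        for x, y in pairs:
            error = w * x - y
            w -= lr * error * x  # gradient step on (w*x - y)^2
    return w

# The rule y = 3x is never written down; it is learned from the data.
examples = [(1, 3), (2, 6), (3, 9)]
w = learn_slope(examples)
print(round(w, 2))  # 3.0
```

Change the examples and the learned rule changes with them, with no programmer edit required; that is the behaviour people find hard to predict, even though the update logic itself is ordinary code we wrote.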

In summary, AIs and MLs are programs that let us process lots and lots of information, much of it "raw" data, very fast. They are not all evil monsters out to kill us or steal our jobs (not necessarily, anyway).

With COVID-19, AI and ML may have helped save a few lives. They have been used in diagnostic tools that read vast numbers of chest X-rays faster than any radiologist. That's helped doctors identify and monitor COVID patients.

In Nigeria, the technology has been used at a very basic but practical level to help people assess their risk of getting infected. People answer a series of questions online and, depending on their answers, are offered remote medical advice or redirected to a hospital.

The makers, a company called Wellvis, say it has reduced the number of people calling disease control hotlines unnecessarily.

One of the most important things we've had to handle is finding out who is infected, fast. And in South Korea, artificial intelligence gave doctors a head start.

Way back when the rest of the world was still wondering whether it was time to go into the first lockdown, a company in Seoul used AI to develop a COVID-19 test in mere weeks. It would have taken them months without AI.

It was "unheard of," said Youngsahng "Jerry" Suh, head of data science and AI development at the company, Seegene, in an interview with DW.

Seegene's scientists ordered raw materials for the kits on January 24 and by February 5, the first version of the test was ready.

It was only the third time the company had used its supercomputer and Big Data analysis to design a test.

But they must have done something right because by mid-March 2020, international reports suggested that South Korea had tested 230,000 people.

And, at least for a while, the country was able to keep the number of new infections per day relatively flat.

"And we're constantly updating that as new variants and mutations come to light. So, that allows our machine learning algorithm to detect those new variants as well," says Suh.

One of the other major issues we've had to handle is tracking how the disease especially new variants and their mutations spread through a community and from country to country.

In South Africa, researchers used an AI-based algorithm to predict future daily confirmed cases of COVID-19.

It was based on historical data from South Africa's past infection history and other information, such as the way people move from one community to another.

They say they showed the country had a low risk of a third wave of the pandemic.

"People thought the beta variant was going to spread around the continent and overwhelm our health systems, but with AI we were able to control that," says Jude Kong, who leads the Africa-Canada Artificial Intelligence and Data Innovation Consortium.

The project is a collaboration between Wits University and the Provincial Government of Gauteng in South Africa and York University in Canada, where Kong, who comes from Cameroon, is an assistant professor.

Kong says "data is very sparse in Africa" and one of the problems is getting over the stigma attached to any kind of illness, whether it's COVID, HIV, Ebola or malaria.

But AI has helped them "reveal hidden realities" specific to each area, and that's informed local health policies, he says.

They have deployed their AI modelling in Botswana, Cameroon, Eswatini, Mozambique, Namibia, Nigeria, Rwanda, South Africa, and Zimbabwe.

"A lot of information is one-dimensional," Kong says. "You know the number of people entering a hospital and those that get out. But hidden below that is their age, comorbidities, and the community where they live. We reveal that with AI to determine how vulnerable they are and inform policy makers."

Other types of AI, similar to facial recognition algorithms, can be used to detect infected people, or those with elevated temperatures, in crowds. And AI-driven robots can clean hospitals and other public spaces.

But, beyond that, there are experts who say AI's potential has been overstated.

They include Neil Lawrence, a professor of machine learning at the University of Cambridge who was quoted in April 2020, calling out AI as "hyped."

It was not surprising, he said, that in a pandemic, researchers fell back on tried and tested techniques, like simple mathematical modelling. But one day, he said, AI might be useful.

That was only 15 months ago. And look how far we've come.

That's how to do it: If humans have COVID-19, dogs had better cuddle with their stuffed animals. Researchers from Utrecht in the Netherlands took nasal swabs and blood samples from 48 cats and 54 dogs whose owners had contracted COVID-19 in the last 200 days. Lo and behold, they found the virus in 17.4% of cases. Of the animals, 4.2% also showed symptoms.

About a quarter of the animals that had been infected were also sick. Although the course of the illness was mild in most of the animals, three were considered to be severe. Nevertheless, medical experts are not very concerned. They say pets do not play an important role in the pandemic. The biggest risk is human-to-human transmission.

The fact that cats can become infected with coronaviruses has been known since March 2020. At that time, the Veterinary Research Institute in Harbin, China, had shown for the first time that the novel coronavirus can replicate in cats. The house tigers can also pass on the virus to other felines, but not very easily, said veterinarian Hualan Chen at the time.

But cat owners shouldn't panic. Felines quickly form antibodies to the virus, so they aren't contagious for very long. Anyone who is acutely ill with COVID-19 should temporarily restrict outdoor access for domestic cats. Healthy people should wash their hands thoroughly after petting strange animals.

Should this pet pig keep a safe distance from the dog when walking in Rome? That question may now also have to be reassessed. Pigs hardly come into question as carriers of the coronavirus, the Harbin veterinarians argued in 2020. But at that time they had also cleared dogs of suspicion. Does that still apply?

Nadia, a four-year-old Malayan tiger, was one of the first big cats detected with the virus in 2020 at a New York zoo. "It is, to our knowledge, the first time a wild animal has contracted COVID-19 from a human," the zoo's chief veterinarian told National Geographic magazine.

It is thought that the virus originated in the wild. So far, bats are considered the most likely first carriers of SARS-CoV-2. However, veterinarians assume there must have been another species as an intermediate host between them and humans in Wuhan, China, in December 2019. Only which species this could be is unclear.

This raccoon dog is a known carrier of SARS viruses. German virologist Christian Drosten has spoken about the species being a potential virus carrier. "Raccoon dogs are trapped on a large scale in China or bred on farms for their fur," he said. For Drosten, the raccoon dog is clearly the prime suspect.

Pangolins are also under suspicion of transmitting the virus. Researchers from Hong Kong, mainland China and Australia have detected a virus in a Malaysian pangolin that shows stunning similarities to SARS-CoV-2.

Hualan Chen also experimented with ferrets. The result: SARS-CoV-2 can multiply in these mustelids in the same way as in cats, with transmission between animals occurring via droplet infection. At the end of 2020, tens of thousands of minks had to be killed at fur farms worldwide because the animals had become infected with SARS-CoV-2.

Experts have given the all-clear for people who handle poultry, such as this trader in Wuhan, China, where scientists believe the first case of the virus emerged in 2019. Humans have nothing to worry about, as chickens are practically immune to the SARS-CoV-2 virus, as are ducks and other bird species.

Author: Fabian Schmidt
