Archive for the ‘Artificial Intelligence’ Category

Getting the nitty-gritty of artificial intelligence right – BL on Campus

If you are a non-technical person learning a technical subject or trying to understand a technical field such as Artificial Intelligence (AI), let me share a simple tip that will help you a lot. Don't let the complex-sounding technical terms confuse you. In due course, you will get comfortable with the terminology if you focus on first principles and try to get an intuitive understanding of the concepts.

So, let's start with the basics and some working definitions. I've noticed the phrase "artificial intelligence and machine learning" and its shorthand "AI/ML" being used quite often. They even have more than 15 million and 5 million Google search results respectively. Clearly, their usage is quite common.

But let's examine this phrase "AI and ML". Using both AI and ML in a single phrase is actually a marketing practice rather than a technical distinction. Artificial Intelligence is a broad field and Machine Learning is one of its branches. You can think of AI as the superset and ML as its subset. You wouldn't say "chai and masala chai" when serving a single cup of tea, or sell "cars and red cars". But hey, using "AI and ML" makes one sound like an expert and is also good for search engine optimisation.

Next, let me provide working definitions of AI and ML and draw a distinction between them.

What's intelligence, anyway?

Intelligence is the ability to understand, reason, and generalise. Artificial Intelligence is machines or software having this capability. Intelligence involves the ability for abstraction or generalisation (or, in layman's terms, common sense). Hence, this kind of AI is also known as Artificial General Intelligence (AGI). In 2020, it may come as a surprise to you, but AGI is not on the table at all. We are nowhere close to AGI, nor is it clear whether we will ever achieve it. Machines with malice, emotions, or consciousness presuppose AGI and are limited to science fiction and movies.

What we instead have is artificial narrow intelligence. Narrow intelligence is a machine's ability to perform a single task very well. Examples of such tasks include deciphering handwriting, identifying images, and recognising speech. Early approaches to mastering such tasks, dating from the 1950s, involved codifying human expertise as rules for computers to follow. It wasn't possible to codify all rules, and such rules-based expert systems worked well only in limited scenarios.

Machine learning, a pattern recognition tool

A different approach is machine learning, where such rules are not explicitly programmed by humans; instead, the software is fed large amounts of data to identify patterns and arrive at decision rules. In machine learning, the software learns from the examples it has been provided, and "learning" refers to the software becoming better with experience (that is, with more data). In other words, machine learning is a great pattern recognition tool.
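To make the contrast concrete, here is a minimal sketch in Python of a hand-written expert rule next to a rule learned from labelled examples. The fraud-flagging scenario, the threshold, and the tiny dataset are all invented for illustration; they are not from the column.

```python
# A hand-coded "expert system" rule versus a rule learned from data.
# The scenario, threshold, and data are synthetic and purely illustrative.
from sklearn.tree import DecisionTreeClassifier

# Hand-coded rule: a human expert decides the threshold.
def expert_rule(transaction_amount):
    return "flag" if transaction_amount > 10_000 else "ok"

# Learned rule: the model infers its own threshold from labelled examples.
X = [[500], [2_000], [9_000], [12_000], [15_000], [40_000]]   # amounts
y = ["ok", "ok", "ok", "flag", "flag", "flag"]                # labels
model = DecisionTreeClassifier().fit(X, y)

print(expert_rule(11_000))          # rule written by a person
print(model.predict([[11_000]]))    # rule discovered from data
```

The toy numbers are beside the point; what changes is where the rule comes from. In the first case a human encodes it, in the second the software derives it from examples and would improve as more examples are supplied.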

There are different types of machine learning methods, which draw upon mathematics, probability, statistics, and computer science to detect these patterns. One particular set of machine learning techniques, popularly called deep learning, has made rapid strides in recent years (we will discuss deep learning in later columns) and is behind several modern machine learning applications.

These days, when you see headlines such as "AI solves X", "AI-powered software", "AI-enabled solution", or my favourite, "AI/ML", it almost always refers to machine learning. Let me make two things clear. First, we have made spectacular advances in machine learning in the last ten years. Second, it may not be AGI, but machine learning has a wide variety of uses for consumers, businesses, and governments.

When to use AI/ML

So, what are the takeaways from our discussion of AGI vs ML as you try to utilise ML in your organisation or business?

Machine learning is simply a pattern recognition powerhouse. It seems intelligent but does not have what we consider common sense. Take the example of an AI-powered TV camera that mistook a football referee's bald head for the ball and kept focusing on it instead of the actual match play: an amusing illustration of a mismatched pattern. No serious damage was done, and everyone got a good chuckle out of it.

But in some situations, the mistakes are costly, even fatal. Take the case of a self-driving car being tested in Arizona in 2018. The algorithm had been trained to identify pedestrians and cyclists, but the data it was fed did not include a person pushing a cycle and walking alongside it. Arguably, a human driver would not have had difficulty recognising the pedestrian. The algorithm's failure to recognise the scenario contributed to an accident resulting in the pedestrian's death.

As a manager looking to leverage AI, you should have a good grasp of the nature of machine intelligence and its narrow scope of application, and be able to draw the boundaries beyond which AI will break down. Based on these, you can decide when to rely on AI and when not to. Good managers are expected to make decisions with imperfect or limited data. AI can't do that!


The Breadth Of Healthcare Applications Of Artificial Intelligence Even Includes Physical Therapy – Forbes


This column keeps returning to the healthcare industry because it is so much more complex and varied than most others. Artificial intelligence (AI) coverage has focused on radiology, has moved to the operating theater, and has been discussed in the back office. Insurance and pharma fraud are arenas where AI risk analysis is useful. Now along comes another area that is amenable to AI solutions. It's something many people think of as secondary, but is really a critical part of healthcare: physical therapy.

As someone who, many years ago, had an intriguing car crash, and who, not as many years ago, proved he wasn't as young as he thought he was by blowing out a knee, I'm very aware of the need for physical therapy (PT). The basics of PT seem very simple: design therapies that cause repeated motions of damaged body parts, analyze that motion, then provide feedback to the patient and the medical community in order to help both improve. It's the capture and analysis of the impact (yes, pun intended) of that motion that can prove complex.

Human physical therapists can see a lot of movement, but it's impossible for them to capture all the necessary information. SWORD Health is a company focused on this unique healthcare segment. As a young company, they are focusing on a few key therapy areas. "The hip, knee, lower back, shoulder, wrist and neck comprise more than 90 percent of all musculoskeletal issues in the U.S.," said Virgilio Bento, CEO, SWORD Health. "Rehabilitating them remotely requires a technology that can learn and expand."

One intriguing area that warrants a separate call-out is the oft-problematic issue of bias in testing. We know that visual neural networks have had problems identifying women of color. We know that, outside of AI, many drug trials don't include children, pregnant women, and other demographics who will need those drugs. Physical therapy is a healthcare sector that can avoid those problems.

There is already a body of PT information on the wide variety of demographics who receive PT. The ability to track far more information and to analyze it alongside demographic information (even anonymized for privacy) means that treatments can start with far more segmentation based on available information and then be quickly tuned on an individual basis based on direct, specific results. Starting with patterns based on more detailed segmentation and then adapting treatment on a case-by-case basis removes the bias issues that may be inherent in other areas of medicine, or even in the minds of some medical personnel.

As has been regularly mentioned, AI is a tool, not a solution. The company isn't only working with machine learning. They make sensors to capture the information, with the kinematics sent to the system via wireless communications. Multiple techniques can then be used to address the data. A mixture of deep learning and statistical linear regression is used to understand the progress of the therapies. Changing the therapy can then also be semi-automated, with the system suggesting changes. That doesn't need deep learning, as choosing the therapies is a rules-based process.
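As a rough sketch of the split described above, progress can be estimated with plain linear regression over sensor-derived session metrics, while the therapy adjustment itself stays a rules-based step. The metrics, thresholds, and suggestion logic below are assumptions for illustration, not SWORD Health's actual system.

```python
# Illustrative sketch only: kinematic readings -> progress estimate -> rules-based suggestion.
import numpy as np
from sklearn.linear_model import LinearRegression

# Hypothetical per-session metric derived from the motion sensors
# (e.g., mean knee flexion angle in degrees over five sessions).
sessions = np.array([[1], [2], [3], [4], [5]])
flexion = np.array([62.0, 65.5, 67.0, 70.5, 72.0])

# Statistical linear regression to understand the trend of the therapy.
trend = LinearRegression().fit(sessions, flexion)
gain_per_session = trend.coef_[0]

# Rules-based step: choosing the next therapy does not need deep learning.
def suggest_adjustment(gain, target_gain=2.0):
    if gain >= target_gain:
        return "progress on track: increase range-of-motion difficulty"
    if gain > 0:
        return "slow progress: keep current exercises, add repetitions"
    return "no progress: flag for review by the physical therapist"

print(f"estimated gain per session: {gain_per_session:.2f} degrees")
print(suggest_adjustment(gain_per_session))
```

The design point is the one the article makes: the learned models interpret the raw motion data, while the decision about what to change next can remain a transparent, rules-based process.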

As with all areas of healthcare dealing with patients, in the United States the FDA requires clearance of both new and updated appliances. The difference between hardware and AI is readily apparent in how each part is handled on change. When a hardware component is changed, detailed specifications can be sent to the FDA for fairly quick analysis and approval. The regulatory agency is still early in its analysis of how to manage AI, especially neural networks, so the process can be slower than with hardware.

AI is still a grey area, primarily through the fault of AI companies. While they like to talk about the "black box" that is a neural network, for instance, they know their layers, their nodes, their code, and their weightings. While some of the inference is still not easily explicable, there is far more that companies could provide to regulatory agencies if it were mandated.

In the absence of such transparency, expect at least near-term job security for humans. They must remain in the loop, both as oversight for the AI and as legal cover, so that the AI is not making a prognosis but is providing the humans with options.

Deep learning and other machine learning techniques have an important place in healthcare, but they must be incorporated into the full patient treatment process, along with other technology. Unlike a deep learning system cranking along on its own in a research facility, investigating potential new drugs, AI must play well with other technology and processes the closer to patients it resides. Physical therapy is an excellent example of the needed growth, as it is a regular and visible part of patient treatment that includes humans, hardware, and software interacting within a regulatory framework to improve patient outcomes.


MIT Researchers Explore New Advancements In Asymptomatic COVID-19 Detection Using Artificial Intelligence Through Cough Recordings – MarkTechPost

It has been established that asymptomatic people infected with Covid-19 do not exhibit the disease's visible physical symptoms. Thus, they are less likely to get tested for the virus and may unknowingly spread the infection to others around them.

Recently, researchers at MIT discovered that asymptomatic people differ from healthy people in the way they cough. Although the differences are not decipherable to the human ear, the researchers state that artificial intelligence can be employed to discover them. Takeda Pharmaceutical Company Limited supported the research.

Several healthy individuals have voluntarily submitted forced-cough recordings, and the researchers at MIT trained the model on a large number of such samples of coughs and spoken words. When a new cough sample is fed in, the AI model distinguishes asymptomatic people from healthy individuals. The team is now incorporating the model into a user-friendly app that will serve as a free, easy, and non-invasive pre-screening tool to identify people who may be asymptomatically infected with Covid-19. If FDA-approved, it could be adopted on a large scale. A user could then log in to the app daily, cough into their phone, and immediately get an indication of whether they might be infected.

Vocal sentiments

Before the pandemic's onset, research groups had already been training algorithms on cellphone recordings of coughs to accurately diagnose conditions such as pneumonia and asthma. Similarly, the MIT team was working on developing AI models that analyze forced-cough recordings to detect signs of Alzheimer's disease, which is associated with neuromuscular degradation, such as weakened vocal cords, along with memory decline.

First, a neural network known as ResNet50 was trained to discriminate sounds associated with different degrees of vocal cord strength. The research showed that the quality of the sound "mmmm" could indicate how weak or strong a person's vocal cords are. The researchers then developed a sentiment speech classifier model trained on a large dataset of actors intonating emotional states such as neutral, calm, sad, and happy. A third neural network was trained on a cough database to discern changes in lung and respiratory performance. Lastly, all three models were combined, overlaid with an algorithm to detect muscular degradation.
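A minimal sketch of how three such models might be fused is shown below. The spectrogram input shape, the layer sizes, and the final overlay classifier are assumptions invented for illustration; this is not the MIT team's published architecture.

```python
# Illustrative sketch of fusing three cough/speech models into one screener.
import torch
import torch.nn as nn
from torchvision.models import resnet50

def make_branch():
    # Following the article's pattern: a ResNet50 backbone repurposed to
    # embed an audio spectrogram into a 2048-dimensional feature vector.
    backbone = resnet50(weights=None)
    backbone.fc = nn.Identity()
    return backbone

class CoughScreener(nn.Module):
    """Overlay model that fuses the three biomarker embeddings."""
    def __init__(self):
        super().__init__()
        self.vocal_cord_branch = make_branch()   # trained on vocal-cord-strength sounds
        self.sentiment_branch = make_branch()    # trained on emotional speech
        self.respiratory_branch = make_branch()  # trained on a cough database
        self.head = nn.Sequential(
            nn.Linear(3 * 2048, 256), nn.ReLU(),
            nn.Linear(256, 1), nn.Sigmoid(),     # probability of a Covid-like cough
        )

    def forward(self, spectrogram):
        feats = torch.cat([
            self.vocal_cord_branch(spectrogram),
            self.sentiment_branch(spectrogram),
            self.respiratory_branch(spectrogram),
        ], dim=1)
        return self.head(feats)

# A forced-cough recording would first be converted to a 3-channel spectrogram.
dummy_spectrogram = torch.randn(1, 3, 224, 224)
print(CoughScreener()(dummy_spectrogram))
```

In a setup like this, each branch would be pre-trained on its own dataset (vocal cord sounds, emotional speech, coughs) before the fused head is trained on Covid-labelled cough recordings.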

A remarkable relationship

The team found growing evidence that patients infected with coronavirus experience neurological symptoms similar to those of Alzheimer's patients, such as temporary neuromuscular impairment. So they asked whether their AI framework for Alzheimer's would work for diagnosing Covid-19 as well.

The sounds of talking and coughing are affected by the vocal cords and the organs surrounding them. When an individual talks, a part of their talking is like coughing, and vice versa. AI can pick up things from a cough that we normally derive from speech, such as a person's gender, mother tongue, age, or even emotional well-being. The team says that there is sentiment embedded in how an individual coughs. Seeing the similarity between the two, they verified that the Alzheimer's biomarkers carry over to Covid-19.

The team discovered that the AI framework originally meant for Alzheimer's found patterns in the four biomarkers (vocal cord strength, sentiment, lung and respiratory performance, and muscular degradation) that are specific to Covid-19. The team stated that the model accurately detected 98.5 percent of coughs from people confirmed to have Covid-19, including asymptomatic coughs.

Asymptomatic symptoms

The AI model is not intended to diagnose symptomatic people, that is, to determine whether their symptoms are due to Covid-19 or other conditions such as flu or asthma. The model's strength lies in its ability to distinguish asymptomatic coughs from healthy coughs.

The team is now working with a company to develop a free pre-screening app based on their AI model. They are also partnering with several hospitals worldwide to collect a larger and more diverse set of cough recordings, which will improve training and strengthen the model's accuracy.

The team says that the pandemic could become a thing of the past if pre-screening tools were running constantly as a background check. They also suggest that these AI models could be incorporated into smart speakers and other listening devices so that people can quickly get an initial assessment of their disease risk.

Paper: https://ieeexplore.ieee.org/document/9208795

Source: https://news.mit.edu/2020/covid-19-cough-cellphone-detection-1029



IBM and AMD will work together on security, artificial intelligence – MarketWatch

International Business Machines Corp. (IBM) and Advanced Micro Devices Inc. (AMD) announced Wednesday morning that they have entered a multi-year agreement focused on enhancing their security and artificial-intelligence offerings. "The joint development agreement will expand this vision by building upon open-source software, open standards, and open system architectures to drive Confidential Computing in hybrid cloud environments and support a broad range of accelerators across high-performance computing (HPC), and enterprise critical capabilities such as virtualization and encryption," the companies said in a release. Confidential Computing is a technology that allows for the encryption of data used to run virtual machines, helping to protect sensitive information. "Confidential Computing for hybrid cloud unlocks new potential for enterprise adoption of hybrid cloud computing, especially in regulated industries such as finance, healthcare and insurance," the companies said in their release. IBM shares are up 0.3% in premarket trading Wednesday, while AMD shares are up 1.4%. IBM shares have lost 12% so far this year as AMD's have risen 70%. The S&P 500 is up 10% in that span, and the Dow Jones Industrial Average, of which IBM is a component, is up 3%.


Artificial Intelligence: engaging communities and stakeholders in the AI’s development improves ethics and performance – Lexology

In a recent Harvard Business Review (HBR) article, two Google employees have emphasised the importance of a collaborative approach to AI development: AI developers and data scientists should partner with the communities, stakeholders, and experts who understand how those AI systems will interact in practice.

As the authors note, "AI has the power to amplify unfair biases, making innate biases exponentially more harmful." There is a particular risk that data scientists and developers make "causation mistakes", where a correlation is wrongly thought to signal cause and effect. "This lack of understanding can lead to designs based on oversimplified, incorrect causal assumptions that exclude critical societal factors and can lead to unintended and harmful outcomes."

To address this risk, the authors suggest that the societal context needs to be factored into the AI system: the "community-based system dynamics". However, no individual person or algorithm can see society's complexity in its entirety or fully understand it. "So, to account for these inevitable blindspots and innovate responsibly, technologists must collaborate with stakeholders (representatives from sociology, behavioral science, and the humanities, as well as from vulnerable communities) to form a shared hypothesis of how they work."

The article is of particular interest because there are calls for an Accountability for Algorithms Act in the UK, which include "a right for workers to be involved to a reasonable level in the development and application of systems". Such a right is motivated by the need to ensure transparency, but the HBR article shows that such stakeholder involvement can also improve an AI system's performance.

There have been many calls for AI to be developed "ethically" (see the EU's proposals for an ethical framework here); perhaps such calls will carry greater weight if the ethical principles can be shown to simultaneously improve technical performance. As the authors say, AI engineers need to think beyond engineering.

"AI system developers, who usually do not have social science backgrounds, typically do not understand the underlying societal systems and structures that generate the problems their systems are intended to solve. This lack of understanding can lead to designs based on oversimplified, incorrect causal assumptions that exclude critical societal factors and can lead to unintended and harmful outcomes."

https://hbr.org/2020/10/ai-engineers-need-to-think-beyond-engineering
