Archive for the ‘Artificial Intelligence’ Category

Reducing the carbon footprint of artificial intelligence – MIT News

Artificial intelligence has become a focus of certain ethical concerns, but it also has some major sustainability issues.

Last June, researchers at the University of Massachusetts at Amherst released a startling report estimating that training and searching a certain neural network architecture emits roughly 626,000 pounds of carbon dioxide. That's equivalent to nearly five times the lifetime emissions of the average U.S. car, including its manufacturing.

This issue gets even more severe in the model deployment phase, where deep neural networks need to be deployed on diverse hardware platforms, each with different properties and computational resources.

MIT researchers have developed a new automated AI system for training and running certain neural networks. Results indicate that, by improving the computational efficiency of the system in some key ways, the system can cut the associated carbon emissions in some cases to the low triple digits of pounds.

The researchers' system, which they call a once-for-all network, trains one large neural network comprising many pretrained subnetworks of different sizes that can be tailored to diverse hardware platforms without retraining. This dramatically reduces the energy usually required to train each specialized neural network for new platforms, which can include billions of internet of things (IoT) devices. Using the system to train a computer-vision model, they estimated that the process required roughly 1/1,300 the carbon emissions of today's state-of-the-art neural architecture search approaches, while reducing the inference time by 1.5 to 2.6 times.

"The aim is smaller, greener neural networks," says Song Han, an assistant professor in the Department of Electrical Engineering and Computer Science. "Searching efficient neural network architectures has until now had a huge carbon footprint. But we reduced that footprint by orders of magnitude with these new methods."

The work was carried out on Satori, an efficient computing cluster donated to MIT by IBM that is capable of performing 2 quadrillion calculations per second. The paper is being presented next week at the International Conference on Learning Representations. Joining Han on the paper are four undergraduate and graduate students from EECS, MIT-IBM Watson AI Lab, and Shanghai Jiao Tong University.

Creating a once-for-all network

The researchers built the system on a recent AI advance called AutoML (for automatic machine learning), which eliminates manual network design. Neural networks automatically search massive design spaces for network architectures tailored, for instance, to specific hardware platforms. But there's still a training efficiency issue: Each model has to be selected, then trained from scratch for its platform architecture.

"How do we train all those networks efficiently for such a broad spectrum of devices, from a $10 IoT device to a $600 smartphone? Given the diversity of IoT devices, the computation cost of neural architecture search will explode," Han says.

The researchers invented an AutoML system that trains only a single, large once-for-all (OFA) network that serves as a mother network, nesting an extremely high number of subnetworks that are sparsely activated from the mother network. OFA shares all its learned weights with all subnetworks, meaning they come essentially pretrained. Thus, each subnetwork can operate independently at inference time without retraining.

The team trained an OFA convolutional neural network (CNN), commonly used for image-processing tasks, with versatile architectural configurations, including different numbers of layers and neurons, diverse filter sizes, and diverse input image resolutions. Given a specific platform, the system uses the OFA as the search space to find the best subnetwork based on the accuracy and latency tradeoffs that correlate to the platform's power and speed limits. For an IoT device, for instance, the system will find a smaller subnetwork. For smartphones, it will select larger subnetworks, but with different structures depending on individual battery lifetimes and computation resources. OFA decouples model training and architecture search, and spreads the one-time training cost across many inference hardware platforms and resource constraints.

This relies on a progressive shrinking algorithm that efficiently trains the OFA network to support all of the subnetworks simultaneously. It starts by training the full network at its maximum size, then progressively shrinks the network to include smaller subnetworks. Smaller subnetworks are trained with the help of larger subnetworks, so they grow together. In the end, all of the subnetworks, at their different sizes, are supported, allowing fast specialization based on a platform's power and speed limits, with zero additional training cost when a new device is added.

In total, the researchers found, one OFA can comprise more than 10 quintillion (that's a 1 followed by 19 zeroes) architectural settings, covering probably all platforms ever needed. But training the OFA and searching it ends up being far more efficient than spending hours training each neural network per platform. Moreover, OFA does not compromise accuracy or inference efficiency. Instead, it provides state-of-the-art ImageNet accuracy on mobile devices. And, compared with state-of-the-art industry-leading CNN models, the researchers say OFA provides a 1.5 to 2.6 times speedup, with superior accuracy.

"That's a breakthrough technology," Han says. "If we want to run powerful AI on consumer devices, we have to figure out how to shrink AI down to size."
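The specialization step described above can be sketched as a constrained search over architectural settings. Everything below is a toy stand-in: the predictor functions and the depth/width/resolution ranges are illustrative assumptions, whereas the real OFA system learns accuracy and latency predictors from measured data.

```python
import itertools

# Hypothetical, simplified stand-ins for OFA's learned predictors.
def predicted_score(depth, width, resolution):
    # Larger subnetworks score higher in this toy model.
    return 60 + 10 * depth + 0.05 * width + 0.02 * resolution

def predicted_latency_ms(depth, width, resolution):
    # Cost grows with every architectural dimension (toy model).
    return depth * width * (resolution / 224) ** 2 * 0.01

def specialize(latency_budget_ms):
    """Pick the best subnetwork that fits a device's latency budget.
    No retraining is needed: weights are shared with the mother network."""
    depths = [2, 3, 4]             # layers per block
    widths = [128, 256, 512]       # channel counts
    resolutions = [160, 192, 224]  # input image sizes
    best = None
    for d, w, r in itertools.product(depths, widths, resolutions):
        if predicted_latency_ms(d, w, r) > latency_budget_ms:
            continue  # too slow for this device
        score = predicted_score(d, w, r)
        if best is None or score > best[0]:
            best = (score, (d, w, r))
    return best

# A tight budget (an IoT device) selects a smaller subnetwork than a
# generous one (a smartphone).
print(specialize(3.0))
print(specialize(25.0))
```

The point of the sketch is the decoupling the article describes: the expensive step (training the mother network) happens once, while per-device specialization is just a cheap search like this one.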

"The model is really compact. I am very excited to see OFA can keep pushing the boundary of efficient deep learning on edge devices," says Chuang Gan, a researcher at the MIT-IBM Watson AI Lab and co-author of the paper.

"If rapid progress in AI is to continue, we need to reduce its environmental impact," says John Cohn, an IBM fellow and member of the MIT-IBM Watson AI Lab. "The upside of developing methods to make AI models smaller and more efficient is that the models may also perform better."

More here:
Reducing the carbon footprint of artificial intelligence - MIT News

Artificial intelligence can take banks to the next level – TechRepublic

Banking has the potential to improve its customer service, loan applications, and billing with the help of AI and natural language processing.

Image: Kubkoo, Getty Images/iStockPhoto

When I was an executive in banking, we struggled with how to transform tellers at our branches into customer service specialists instead of the "order takers" that they were. This struggle with customer service is ongoing for financial institutions. But it's an area in which artificial intelligence (AI), and its ability to work with unstructured data like voice and images, can help.

"There are two things that artificial intelligence does really well," said Ameek Singh, vice president of IBM's Watson applications and solutions. "It's really good with analyzing images and it also performs uniquely well with natural language processing (NLP)."


AI's ability to process natural language helps behind the scenes as banks interact with their customers. In call center banking transactions, the ability to analyze language can detect emotional nuances from the speaker, and understand linguistic differences such as the difference between American and British English. AI works with other languages as well, understanding the emotional nuances and slang terms that different groups use.

Collectively, real-time feedback from AI aids bank customer service reps in call centers, because if they know the sentiments of their customers, it's easier for them to relate to customers and to understand customer concerns that might not have been expressed directly.
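As a rough illustration of the kind of real-time sentiment signal such a system surfaces, here is a minimal, lexicon-based sketch. The word lists and scoring are illustrative assumptions only; production systems like the one described use trained language models, not word lists.

```python
# Tiny illustrative lexicons -- not real model vocabulary.
NEGATIVE = {"frustrated", "angry", "unacceptable", "overcharged", "upset"}
POSITIVE = {"thanks", "great", "helpful", "resolved", "appreciate"}

def sentiment_score(utterance: str) -> float:
    """Return a score in [-1, 1]; negative values suggest an unhappy caller."""
    words = [w.strip(",.!?") for w in utterance.lower().split()]
    hits = sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)
    # Clamp so a long rant doesn't run off the scale.
    return max(-1.0, min(1.0, hits / 3))

print(sentiment_score("I am frustrated, this charge is unacceptable"))
print(sentiment_score("Thanks, that was really helpful"))
```

Even this crude score shows how a running sentiment estimate could be displayed to a rep during a call; the real value comes from models that also catch nuance, slang, and dialect differences.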

"We've developed AI models for natural language processing in a multitude of languages, and the AI continues to learn and refine these linguistics models with the help of machine learning (ML)," Singh said.


The result is higher quality NLP that enables better relationships between customers and the call center front line employees who are trying to help them.

But the use of AI in banking doesn't stop there. Singh explained how AI engines like Watson were also helping on the loans and billing side.

"The (mortgage) loan underwriter looks at items like pay stubs and credit card statements. He or she might even make a billing inquiry," Singh said.

Without AI, these document reviews are time consuming and manual. AI changes that because the AI can "read" the document. It understands what the salient information is and also where irrelevant items, like a company logo, are likely to be located. The AI extracts the relevant information, places the information into a loan evaluation model, and can make a loan recommendation that the underwriter reviews, with the underwriter making a final decision.
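A hedged sketch of that "read the document" step: pull salient fields out of OCR'd pay-stub text while ignoring noise such as a company logo. The field names, patterns, and sample text are invented for illustration; real underwriting systems use trained document-understanding models rather than hand-written patterns.

```python
import re

def extract_fields(ocr_text: str) -> dict:
    """Extract salient pay-stub fields from OCR output (illustrative patterns)."""
    patterns = {
        "employer": r"Employer:\s*(.+)",
        "gross_pay": r"Gross Pay:\s*\$([\d,]+\.\d{2})",
        "pay_period": r"Pay Period:\s*([\d/]+\s*-\s*[\d/]+)",
    }
    fields = {}
    for name, pat in patterns.items():
        m = re.search(pat, ocr_text)
        if m:
            fields[name] = m.group(1).strip()
    return fields

# Hypothetical OCR output: the logo line is irrelevant and is skipped.
stub = """ACME LOGO ****
Employer: Acme Corp
Pay Period: 03/01/2020 - 03/15/2020
Gross Pay: $2,450.00"""

print(extract_fields(stub))
```

The extracted dictionary is what would feed the loan evaluation model, leaving the final decision to the human underwriter.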

Of course, banks have had software for years that has performed loan evaluations. However, they haven't had an easy way to process foundational documents such as bills and pay stubs, that go into the loan decisioning process and that AI can now provide.


The best news of all for financial institutions is that they aren't shut out of AI modeling and execution.

"The AI is designed to be informed by bank subject matter experts so it can 'learn' the business rules that the bank wants to apply," Singh said. "The benefit is that real subject matter experts get involved, not just the data scientists."

Singh advises banks looking at expanding their use of AI to carefully select their business use cases, without trying to do too much at once.

"Start small instead of using a 'big bang' approach," he said. "In this way, you can continue to refine your AI model and gain success with it that immediately benefits the business."


Continue reading here:
Artificial intelligence can take banks to the next level - TechRepublic

Health care of tomorrow, today: How artificial intelligence is fighting the current, and future, COVID-19 pandemic | TheHill – The Hill

SARS-COV-2 has upended modern health care, leaving health systems struggling to cope. Addressing a fast-moving and uncontrolled disease requires an equally efficient method of discovery, development and administration. Artificial intelligence (AI)- and machine learning-driven health care solutions provide such an answer. AI-enabled health care is not the medicine of the future, nor does it mean robot doctors rolling room to room in hospitals treating patients. Instead of a hospital from some future Jetsons-like fantasy, AI is poised to make impactful and urgent contributions to the current health care ecosystem. Already, AI-based systems are helping to alleviate the strain on health care providers overwhelmed by a crushing patient load, accelerate diagnostic and reporting systems, and enable rapid development of new drugs and existing drug combinations that better match a patient's unique genetic profile and specific symptoms.

For the thousands of patients fighting for their lives against this deadly disease and the health care providers who incur a constant risk of infection, AI provides an accelerated route to understanding the biology of COVID-19. Leveraging AI to assist in prediction, correlation and reporting allows health care providers to make informed decisions quickly. With the current standard of PCR-based testing requiring up to 48 hours to return a result, New York-based Envisagenics has developed an AI platform that analyzes 1,000 patient samples in parallel in just two hours. Time saves lives, and the company hopes to release the platform for commercial use in the coming weeks.

AI-powered wearables, such as a smart shirt developed by Montreal-based Hexoskin to continuously measure biometrics including respiration effort, cardiac activity, and a host of other metrics, provide options for hospital staff to minimize exposure by limiting the required visits to infected patients. This real-time data provides an opportunity for remote monitoring and creates a unique dataset to inform our understanding of disease progression to fuel innovation and enable the creation of predictive metrics, alleviating strain on clinical staff. Hexoskin has already begun to assist hospitals in New York City with monitoring programs for their COVID-19 patients, and they are developing an AI/ML platform to better assess the risk profile of COVID-19 patients recovering at home. Such novel platforms would offer a chance for providers and researchers to get ahead of the disease and develop more effective treatment plans.

AI also accelerates discovery and enables efficient and effective interrogation of the necessary chemistry to address COVID-19. An increasing number of companies are leveraging AI/ML to identify new treatment paths, whether from a list of existing molecules or de novo discovery. San Francisco-based Auransa is using AI to map the gene sequence of SARS-COV-2 to its effect on the host to generate a short-list of already approved drugs that have a high likelihood to alleviate symptoms of COVID-19. Similarly, UK-based Healx has set its AI platform to discover combination therapies, identifying multi-drug approaches to simultaneously treat different aspects of the disease pathology to improve patient outcomes. The company analyzed a library of 4,000 approved drugs to map eight million possible pairs and 10.5 billion triplets to generate combination therapy candidates. Preclinical testing will begin in May 2020.

Developers cannot always act alone - realizing the potential of AI often requires the resources of a collaboration to succeed. Generally, the best data sets and the most advanced algorithms do not exist within the same organization, and it is often the case that multiple data sources and algorithms need to be combined for maximum efficacy. Over the last month, we have seen the rise of several collaborations to encourage information sharing and hasten potential outcomes to patients.

Medopad, a UK-based AI developer, has partnered with Johns Hopkins University to mine existing datasets on COVID-19 and relevant respiratory diseases captured by the UK Biobank and similar databases to identify a biomarker associated with a higher risk for COVID-19. A biomarker database is essential in executing long-term population health measures, and can most effectively be generated by an AI system. In the U.S., over 500 leading companies and organizations, including Mayo Clinic, Amazon Web Services and Microsoft, have formed the COVID-19 Healthcare Coalition to assist in coordinating on all COVID-19 related matters. As part of this effort, LabCorp and HD1, among others, have come together to use AI to make testing and diagnostic data available to researchers to help build disease models, including predictions of future hotspots and at-risk populations. On the international stage, the recently launched COAI, a consortium of AI companies being assembled by the French-US company OWKIN, aims to increase collaborative research, to accelerate the development of effective treatments, and to share COVID-19 findings with the global medical and scientific community.

Leveraging the potential of AI and machine learning capabilities provides a potent tool to the global community in tackling the pandemic. AI presents novel ways to address old problems and opens doors to solving newly developing population health concerns. The work of our health care system, from the research scientists to the nurses and physicians, should be celebrated, and we should embrace the new tools which are already providing tremendous value. With the rapid deployment and integration of AI solutions into the COVID-19 response, the health care of tomorrow is already addressing the challenges we face today.

Brandon Allgood, PhD, is vice chair of the Alliance for Artificial Intelligence in Healthcare, a global advocacy organization dedicated to the discovery, development and delivery of better solutions to improve patient lives. Allgood is an SVP of DS&AI at Integral Health, a computationally driven biotechnology company in Boston.

More:
Health care of tomorrow, today: How artificial intelligence is fighting the current, and future, COVID-19 pandemic | TheHill - The Hill

When the coronavirus hit, California turned to artificial intelligence to help map the spread – 60 Minutes – CBS News

California was the first state to shut down in response to the COVID-19 pandemic. It also enlisted help from the tech sector, harnessing the computing power of artificial intelligence to help map the spread of the disease, Bill Whitaker reports. Whitaker's story will be broadcast on the next edition of 60 Minutes, Sunday, April 26 at 7 p.m. ET/PT on CBS.

One of the companies California turned to was a small Canadian start-up called BlueDot that uses anonymized cell phone data to determine if social distancing is working. Comparing location data from cell phone users over a recent 24-hour period to a week earlier in Los Angeles, BlueDot's algorithm maps where people are still gathering. It could be a hospital, or it could be a problem. "We can see on a moment-by-moment basis, if necessary, whether or not our stay-at-home orders were working," says California Governor Gavin Newsom.

The data allows public health officials to predict which hospitals might face the greatest number of patients. "We are literally looking into the future and predicting in real time, based on constant updates of information, where patterns are starting to occur," Newsom tells Whitaker. "So the gap between the words and people's actions is often anecdotal. But not with this technology."

California is just one client of BlueDot. The firm was among the first to warn of the outbreak in Wuhan on December 31. Public officials in ten Asian countries, airlines and hospitals were alerted to the potential danger of the virus by BlueDot.

BlueDot also uses anonymized global air ticket data to predict how an outbreak of infectious disease might spread. BlueDot founder Dr. Kamran Khan tells Whitaker, "We can analyze and visualize all this information across the globe in just a few seconds." The computing power of artificial intelligence lets BlueDot sort through billions of pieces of raw data, offering the critical speed needed to map a pandemic.

"Our surveillance system that picked up the outbreak of Wuhan automatically talks to the system that is looking at how travelers might go to various airports around Wuhan," says Dr. Khan.


See the original post here:
When the coronavirus hit, California turned to artificial intelligence to help map the spread - 60 Minutes - CBS News

Artificial Intelligence Gives Researchers the Scoop on Ancient Poop – Smithsonian.com

Everybody poops, and after a few thousand years underground, these droppings often start to look the same. That stool-based similarity poses something of a puzzle for archaeologists investigating sites where dogs and humans once cohabited, as it isn't always easy to deduce which species left behind specific feces.

But as a team of researchers writes in the journal PeerJ, a newly developed artificial intelligence system may end these troubles once and for all. Called coproID (an homage to coprolite, the formal term for fossilized feces), the program is able to distinguish the subtle differences between ancient samples of human and canine excrement based on DNA data alone, reports David Grimm for Science magazine.

Applied to feces unearthed from sites around the world, the new method could help researchers unveil a trove of valuable information about a defecator's diet, health, and perhaps (if the excretion contains enough usable DNA) identity. But in places where domesticated dogs once roamed, canine and human DNA often end up mixed in the same fecal samples: Dogs are known to snack on people's poop, and some humans have historically dined on canine meat.

Still, differences in the defecations do exist, especially when considering the genetic information left behind by the microbiome, or the microbes that inhabit the guts of all animals. Because microbiomes vary from species to species (and even from individual to individual within a species), they can be useful tools in telling droppings apart.

To capitalize on these genetic differences, a team led by Maxime Borry of Germany's Max Planck Institute for the Science of Human History trained a computer to analyze the DNA in fossilized feces, comparing it to known samples of modern human and canine stool. The researchers then tested the program's performance on a set of 20 samples with known (or at least strongly suspected) species origins, including seven that only contained sediments.
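The core idea (compare a sample's microbial makeup to reference human and dog gut profiles, and refuse to classify when neither fits) can be sketched very simply. The taxa, proportions, and threshold below are invented for illustration; coproID itself works on sequenced DNA with far richer reference data.

```python
# Illustrative reference gut profiles: taxon -> proportion (toy numbers).
REFERENCE = {
    "human": {"Bacteroides": 0.40, "Prevotella": 0.30, "Lactobacillus": 0.30},
    "dog":   {"Fusobacterium": 0.45, "Bacteroides": 0.25, "Clostridium": 0.30},
}

def similarity(sample, reference):
    """Shared proportion mass between two taxon-composition profiles."""
    taxa = set(sample) | set(reference)
    return sum(min(sample.get(t, 0), reference.get(t, 0)) for t in taxa)

def classify(sample, threshold=0.5):
    """Label a sample 'human', 'dog', or 'uncertain' (e.g. plain sediment)."""
    scores = {species: similarity(sample, ref)
              for species, ref in REFERENCE.items()}
    best = max(scores, key=scores.get)
    return best if scores[best] >= threshold else "uncertain"

print(classify({"Bacteroides": 0.45, "Prevotella": 0.35, "Escherichia": 0.20}))
print(classify({"Fusobacterium": 0.50, "Clostridium": 0.30, "Bacteroides": 0.20}))
print(classify({"Silica": 0.9, "Escherichia": 0.1}))  # sediment-like sample
```

The "uncertain" fallback mirrors the behavior reported in the study: samples that match no reference profile well, such as plain sediment, should not be forced into either class.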

The system was able to identify all of the sediments as "uncertain," and it correctly classified seven other samples as either dog or human. But the final six appeared to stump the program.

Writing in the study, Borry and his colleagues suggest that the system may have struggled to identify microbiomes that didnt fall in line with modern human and canine samples. People who had recently eaten large quantities of dog meat, for instance, might have thrown the program for a loop. Alternatively, ancient dogs with unusual diets could have harbored gut microbes that differed vastly from their peers, or from modern samples.

"There is not so much known about the microbiome of dogs," Borry tells Vice's Becky Ferreira.

With more information on how diverse canine gut microbes can get, he says, the team's machine learning program may have a shot at performing better.

Ainara Sistiaga, a molecular geoarchaeologist at the University of Copenhagen who wasn't involved in the study, echoes this sentiment in an interview with Science, pointing out that the data used to train coproID came exclusively from dogs living in the modern Western world. It therefore represented just a small sliver of the riches found in canine feces.

CoproID also failed to determine the origins of highly degraded samples that contained only minimal microbial DNA. Given these and other limitations, "there are definite issues that need to be resolved before the method can be used widely," Lisa-Marie Shillito, an archaeologist at Newcastle University who wasn't involved in the study, tells Michael Le Page of New Scientist.

With more tinkering, the method could reveal a great deal about the history of humans and dogs alike, including details about how the two species first became close companions, Melinda Zeder, an archaeozoologist at the Smithsonian Institution's National Museum of Natural History who wasn't involved in the study, tells Science.

As dogs swapped the fleshy, protein-heavy diets of their wolfish ancestors for starchy human fare, their gut microbes were almost certainly taken along for the ride. Even thousands of years after the fact, feces could benchmark this transition.

Says Zeder, "The ability to track this through time is really exciting."

Read the rest here:
Artificial Intelligence Gives Researchers the Scoop on Ancient Poop - Smithsonian.com