Archive for the ‘Machine Learning’ Category

Increasing the Accessibility of Machine Learning at the Edge – Industry Articles – All About Circuits

In recent years, connected devices and the Internet of Things (IoT) have become omnipresent in our everyday lives, be it in our homes and cars or at our workplace. Many of these small devices are connected to a cloud service; nearly everyone with a smartphone or laptop uses cloud-based services today, whether actively or through an automated backup service, for example.

However, a new paradigm known as "edge intelligence" is quickly gaining traction in technology's fast-changing landscape. This article introduces cloud-based intelligence, edge intelligence, and possible use cases that help professional users make machine learning accessible to all.

Cloud computing, simply put, is the availability of remote computational resources whenever a client needs them.

For public cloud services, the cloud service provider is responsible for managing the hardware and ensuring that the service's availability meets a defined standard and customer expectations. Customers pay for what they use, and the use of such services is generally only viable for large-scale operations.

On the other hand, edge computing happens somewhere between the cloud and the client's network.

While the definition of where exactly edge nodes sit may vary from application to application, they are generally close to the local network. These computational nodes provide services such as filtering and buffering data, and they help increase privacy, provide increased reliability, and reduce cloud-service costs and latency.

Recently, it's become more common for AI and machine learning to complement edge-computing nodes and help decide which data is relevant and should be uploaded to the cloud for deeper analysis.

Machine learning (ML) is a broad scientific field, but in recent times, neural networks (often abbreviated to NN) have gained the most attention when discussing machine learning algorithms.

Multiclass or complex ML applications, such as object tracking and surveillance, automatic speech recognition, and multi-face detection, typically require NNs. Scientists have worked hard over the last decade to improve and optimize NN algorithms so they can run on devices with limited computational resources, which has helped accelerate the edge-computing paradigm's popularity and practicality.

One such model is MobileNet, an image classification network developed by Google. The project demonstrates that highly accurate neural networks can indeed run on devices with significantly restricted computational power.
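
To give a sense of how lightweight such models have become, here is a minimal sketch of running a quantized MobileNet classifier with TensorFlow Lite, a runtime commonly used on constrained edge hardware. The article names no specific toolchain, so the tflite-runtime package and the model, label, and image file paths below are illustrative assumptions:

```python
# Minimal sketch: image classification on an edge device with TensorFlow Lite.
# The article names no toolchain; the tflite-runtime package and the model,
# label, and image files below are assumptions, not from the source.
import numpy as np
from PIL import Image
import tflite_runtime.interpreter as tflite

interpreter = tflite.Interpreter(model_path="mobilenet_v1_1.0_224_quant.tflite")
interpreter.allocate_tensors()
inp = interpreter.get_input_details()[0]
out = interpreter.get_output_details()[0]

# Resize the input frame to the 224x224 resolution this MobileNet expects.
img = Image.open("frame.jpg").convert("RGB").resize((224, 224))
x = np.expand_dims(np.asarray(img, dtype=np.uint8), axis=0)

interpreter.set_tensor(inp["index"], x)
interpreter.invoke()
scores = interpreter.get_tensor(out["index"])[0]

labels = [line.strip() for line in open("labels.txt")]
print("top prediction:", labels[int(np.argmax(scores))])
```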

Until recently, machine learning was primarily meant for data-science experts with a deep understanding of ML and deep learning applications. Typically, the development tools and software suites were immature and challenging to use.

Machine learning and edge computing are expanding rapidly, and interest in these fields grows steadily every year. According to one industry forecast, 98% of edge devices will use machine learning by 2025, which translates to roughly 18 to 25 billion devices that researchers expect to have machine learning capabilities.

In general, machine learning at the edge opens doors for a broad spectrum of applications ranging from computer vision, speech analysis, and video processing to sequence analysis.

One concrete example of a possible application is an intelligent door lock combined with a camera. Such a device could automatically detect a person wanting access to a room and allow the person entry when appropriate.

Due to the previously discussed optimizations and performance improvements of neural network algorithms, many ML applications can now run on embedded devices powered by crossover MCUs such as the i.MX RT1170. With its two processing cores (a 1 GHz Arm Cortex-M7 and a 400 MHz Arm Cortex-M4), developers can choose to run compatible NN implementations with real-time constraints in mind.

Due to its dual-core design, the i.MX RT1170 also allows the execution of multiple ML models in parallel. The additional built-in crypto engines, advanced security features, and graphics and multimedia capabilities make the i.MX RT1170 suitable for a wide range of applications. Some examples include driver distraction detection, smart light switches, intelligent locks, fleet management, and many more.

The i.MX 8M Plus is a family of applications processors that focuses on ML, computer vision, advanced multimedia applications, and industrial automation with high reliability. These devices were designed with the needs of smart devices and Industry 4.0 applications in mind and come equipped with a dedicated NPU (neural processing unit) operating at up to 2.3 TOPS and up to four Arm Cortex A53 processor cores.

Built-in image signal processors allow developers to utilize either two HD camera sensors or a single 4K camera. These features make the i.MX 8M Plus family of devices viable for applications such as facial recognition, object detection, and other ML tasks. In addition, devices of the i.MX 8M Plus family come with advanced 2D and 3D graphics acceleration capabilities, multimedia features such as video encode and decode support (including H.265), and eight PDM microphone inputs.

An additional low-power 800 MHz Arm Cortex-M7 core complements the package. This dedicated core serves real-time industrial applications that require robust networking features such as CAN FD support and Gigabit Ethernet communication with TSN capabilities.

With new devices comes the need for an easy-to-use, efficient, and capable development ecosystem that enables developers to build modern ML systems. NXP's comprehensive eIQ ML software development environment is designed to assist developers in creating ML-based applications.

The eIQ tools environment includes inference engines, neural network compilers, and optimized libraries to enable working with ML algorithms on NXP microcontrollers, i.MX RT crossover MCUs, and the i.MX family of SoCs. The needed ML technologies are accessible to developers through NXP's SDKs for the MCUXpresso IDE and Yocto BSP.

The upcoming eIQ Toolkit adds an accessible GUI, the eIQ Portal, and a workflow that enables developers of all experience levels to create ML applications.

Developers can choose to follow a process called BYOM (bring your own model), in which they build trained models using cloud-based tools and then import them into the eIQ Toolkit software environment; all that's left to do is select the appropriate inference engine in eIQ. Alternatively, developers can use the eIQ Portal's GUI-based tools or command-line interface to import and curate datasets, following the BYOD (bring your own data) workflow to train their model within the eIQ Toolkit.
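
The article doesn't show the BYOM handoff in code, but because eIQ's inference engines support TensorFlow Lite models, a typical export step might look like the sketch below. The tiny Keras model stands in for "your own model" trained with cloud-based tools:

```python
# Sketch of the BYOM handoff (the tiny Keras model is a placeholder for
# "your own model" trained with cloud tooling): export to a .tflite file,
# a format the eIQ inference engines can consume.
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(96, 96, 1)),
    tf.keras.layers.Conv2D(8, 3, activation="relu"),
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(2, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
# ... model.fit(train_images, train_labels) would run here ...

# Convert and quantize for MCU-class targets, then import the resulting
# file in the eIQ Toolkit and pick an inference engine there.
converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
with open("my_model.tflite", "wb") as f:
    f.write(converter.convert())
```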

Most modern-day consumers are familiar with cloud computing. However, in recent years a new paradigm known as edge computing has seen a rise in interest.

With this paradigm, not all data gets uploaded to the cloud. Instead, edge nodes, located somewhere between the end-user and the cloud, provide additional processing power. This paradigm has many benefits, such as increased security and privacy, reduced data transfer to the cloud, and lower latency.

More recently, developers often enhance these edge nodes with machine learning capabilities. Doing so helps to categorize collected data and filter out unwanted results and irrelevant information. Adding ML to the edge enables many applications such as driver distraction detection, smart light switches, intelligent locks, fleet management, surveillance and categorization, and many more.
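
As a rough illustration of that filtering role (not taken from the article), an ML-enhanced edge node might score incoming readings locally and forward only the relevant ones. The classify() function and CLOUD_URL endpoint below are hypothetical stand-ins:

```python
# Toy sketch of an ML-enhanced edge node (not from the article): score each
# reading with a local model and upload only the relevant ones, reducing
# cloud traffic and latency. classify() and CLOUD_URL are hypothetical.
import json
import urllib.request

CLOUD_URL = "https://example.com/ingest"  # placeholder endpoint

def classify(reading: dict) -> float:
    """Stand-in for a local ML model; returns a relevance score in [0, 1]."""
    return 1.0 if reading["value"] > 0.8 else 0.1

def process(readings: list) -> None:
    for reading in readings:
        if classify(reading) > 0.5:  # only interesting data leaves the edge
            req = urllib.request.Request(
                CLOUD_URL,
                data=json.dumps(reading).encode(),
                headers={"Content-Type": "application/json"},
            )
            urllib.request.urlopen(req)

# process([{"sensor": "door-cam", "value": 0.93}])  # example invocation
```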

ML applications have traditionally been designed exclusively by data-science experts with a deep understanding of ML and deep learning. NXP provides a range of inexpensive yet powerful devices, such as the i.MX RT1170 and the i.MX 8M Plus, plus the eIQ ML software development environment, to help open ML up to any designer. Together, this hardware and software aims to let developers at any level of experience build future-proof ML applications, regardless of how small or large the project.


See more here:
Increasing the Accessibility of Machine Learning at the Edge - Industry Articles - All About Circuits

PS5 Capable Of Machine Learning, AI Upscaling According To Insomniac Games – PlayStation Universe

Spider-Man: Miles Morales developer Insomniac Games has revealed that the PS5 is capable of Machine Learning and AI Upscaling.

The comments come via Insomniac Games' Josh DiCarlo during a series of tweets about the performance of the PS5 and its work on the Spider-Man franchise. DiCarlo revealed that its innards are ML-based, and that the studio is only just scratching the surface of what Sony's new console is capable of achieving.

"Not sure how specific I can get with specs for now, but you are correct in the assumption that all final deformations are resulting via ML inference at runtime on the PS5 hardware. There are no blend shapes, skin decomp, or traditional tricks of the trade (nothing against them!)"

Insomniac Games has been pretty busy with PS5 hardware, having released a remastered version of Marvel's Spider-Man and a dedicated version of Spider-Man: Miles Morales alongside the PS4 edition, and it is currently working on the upcoming Ratchet & Clank: Rift Apart.

Related Content: Sony PS5 Complete Guide – A Total Resource On PlayStation 5

[Source: Joe Miller on Twitter via NeoGAF]

Read the original:
PS5 Capable Of Machine Learning, AI Upscaling According To Insomniac Games - PlayStation Universe

Is Machine Learning The Future Of Coffee Health Research? – Sprudge

If you've been a reader of Sprudge for any reasonable amount of time, you've no doubt by now read multiple articles about how coffee is potentially beneficial for some particular facet of your health. The stories generally go like this: a study finds drinking coffee is associated with an X% decrease in [bad health outcome], followed shortly by the caveat that the study is observational and does not prove causation.

In a new study in the American Heart Association's journal Circulation: Heart Failure, researchers found a link between drinking three or more cups of coffee a day and a decreased risk of heart failure. But there's something different about this observational study: it used machine learning to reach its conclusion, and that may significantly alter the utility of this sort of study in the future.

As reported by the New York Times, the new study isn't exactly new at all. Led by David Kao, a cardiologist at the University of Colorado School of Medicine, researchers re-examined the Framingham Heart Study (FHS), a long-term, ongoing cardiovascular cohort study of residents of Framingham, Massachusetts, that began in 1948 and has grown to include over 14,000 participants.

Whereas most research starts with a hypothesis that it then seeks to prove or disprove (an approach that can produce false relationships through the variables researchers choose to include or exclude in their analysis), Kao et al. instead approached the FHS with no intended outcome. They used a powerful and increasingly popular data-analysis technique known as machine learning to find any potential links between patient characteristics captured in the FHS and the odds of participants experiencing heart failure.

Able to analyze massive amounts of data in a short amount of time, and to be programmed to handle uncertainties in the data (such as whether a reported cup of coffee is six ounces or eight), machine learning can ascertain and rank which variables are most associated with incidents of heart failure, giving even observational studies more explanatory power in their findings. And indeed, when the results of the FHS machine learning analysis were compared to two other well-known studies, the Cardiovascular Heart Study (CHS) and the Atherosclerosis Risk in Communities study (ARIC), the algorithm correctly predicted the relationship between coffee intake and heart failure.
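
The study's exact pipeline isn't described in this article, but the general idea of ranking which variables are most associated with an outcome can be illustrated with a tree ensemble's feature importances. The sketch below uses entirely synthetic data and stand-in variable names:

```python
# Generic illustration, NOT the study's actual pipeline: rank which patient
# variables are most associated with an outcome using a tree ensemble's
# feature importances. All data below is synthetic.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
n = 5000
X = np.column_stack([
    rng.normal(3, 1.5, n),   # cups of coffee per day
    rng.normal(55, 10, n),   # age in years
    rng.normal(120, 15, n),  # systolic blood pressure
])
# Synthetic outcome: risk falls with coffee intake, rises with age and BP.
logit = -0.3 * X[:, 0] + 0.05 * X[:, 1] + 0.02 * X[:, 2] - 6
y = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(int)

model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)
for name, imp in zip(["coffee", "age", "systolic_bp"],
                     model.feature_importances_):
    print(f"{name}: {imp:.3f}")
```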

But, of course, there are caveats. Machine learning algorithms are only as good as the data being fed to them. If the scope of that data is too narrow, the results may not translate more broadly, and their real-world predictive utility is significantly decreased. The New York Times offers facial recognition software as an example: trained primarily on white male subjects, such algorithms have been much less accurate in identifying women and people of color.

Still, the new study shows promise, not just for the health benefits the algorithm uncovered, but for how we undertake and interpret this sort of analysis-driven research.

Zac Cadwalader is the managing editor at Sprudge Media Network and a staff writer based in Dallas. Read more Zac Cadwalader on Sprudge.

See the rest here:
Is Machine Learning The Future Of Coffee Health Research? - Sprudge

Machine learning tool sets out to find new antimicrobial peptides – Chemistry World

By combining machine learning, molecular dynamics simulations, and experiments, it has been possible to design antimicrobial peptides from scratch.1 The approach, by researchers at IBM, is an important advance in a field where data is scarce and trial-and-error design is expensive and slow.

Antimicrobial peptides, small molecules consisting of 12 to 50 amino acids, are promising drug candidates for tackling antibiotic resistance. "The co-evolution of antimicrobial peptides and bacterial phyla over millions of years suggests that resistance development against antimicrobial peptides is unlikely, but that should be taken with caution," comments Håvard Jenssen at Roskilde University in Denmark, who was not involved in the study.

Artificial intelligence (AI) tools are helpful in discovering new drugs. Payel Das from the IBM Thomas J Watson Research Centre in the US says that such methods can be broadly divided into two classes: forward design involves screening peptide candidates using sequence–activity or structure–activity models, whereas the inverse approach considers targeted and de novo molecule design. IBM's AI framework, which is formulated for the inverse design problem, outperforms other de novo strategies by almost 10%, she adds.

"Within 48 days, this approach enabled us to identify, synthesise and experimentally test 20 novel AI-generated antimicrobial peptide candidates, two of which displayed high potency against diverse Gram-positive and Gram-negative pathogens, including multidrug-resistant Klebsiella pneumoniae, as well as a low propensity to induce drug resistance in Escherichia coli," explains Das.

The team first used a machine learning system called a deep generative autoencoder to capture information about different peptide sequences, and then applied controlled latent attribute space sampling, a new computational method for generating peptide molecules with custom properties. This created a pool of 90,000 possible sequences. "We further screened those molecules using deep learning classifiers for additional key attributes such as toxicity and broad-spectrum activity," Das says. The researchers then carried out peptide–membrane binding simulations on the pre-screened candidates and finally selected 20 peptides, which were tested in lab experiments and in mice. Their studies indicated that the new peptides work by disrupting pathogen membranes.
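
IBM's model itself isn't reproduced in this article. As a schematic of the general idea only, the sketch below trains a toy autoencoder on one-hot peptide sequences and perturbs points in its latent space to decode new candidate sequences; a real pipeline would add attribute-controlled sampling and the downstream classifier and simulation screens described above:

```python
# Schematic only, NOT IBM's model: a toy autoencoder over one-hot peptide
# sequences, with new candidates proposed by perturbing points in latent
# space and decoding them back to sequences. Training data is random.
import torch
import torch.nn as nn

AA = "ACDEFGHIKLMNPQRSTVWY"  # the 20 standard amino acids
SEQ_LEN, LATENT = 12, 8

# Toy dataset: 256 random sequences, one-hot encoded and flattened.
idx = torch.randint(0, len(AA), (256, SEQ_LEN))
data = nn.functional.one_hot(idx, len(AA)).float().view(256, -1)

encoder = nn.Sequential(nn.Linear(SEQ_LEN * len(AA), 64), nn.ReLU(),
                        nn.Linear(64, LATENT))
decoder = nn.Sequential(nn.Linear(LATENT, 64), nn.ReLU(),
                        nn.Linear(64, SEQ_LEN * len(AA)))
opt = torch.optim.Adam(list(encoder.parameters()) +
                       list(decoder.parameters()), lr=1e-3)

# Train the autoencoder to reconstruct the known sequences.
for _ in range(200):
    recon = decoder(encoder(data))
    loss = nn.functional.mse_loss(recon, data)
    opt.zero_grad()
    loss.backward()
    opt.step()

# Propose a novel candidate: perturb a known sequence's latent code, decode,
# and take the most likely amino acid at each position.
with torch.no_grad():
    z = encoder(data[:1]) + 0.5 * torch.randn(1, LATENT)
    logits = decoder(z).view(SEQ_LEN, len(AA))
    print("candidate:", "".join(AA[int(i)] for i in logits.argmax(dim=1)))
```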

"The authors created an exciting way of producing new lead compounds, but they're not the best compounds that have ever been made," says Robert Hancock from the University of British Columbia in Canada, who discovered other peptides with antimicrobial activity in 2009.2 Jenssen participated in that study too and agrees: "The identified sequences are novel and cover a new avenue of the classical chemical space, but to flag them as interesting from a drug development point of view, the activities need to be optimised."

Das points out that IBM's tool looks for new peptides from scratch and doesn't depend on engineered input features. "This line of earlier work relies on the forward design problem, that is, screening of pre-defined peptide libraries designed using an existing antimicrobial sequence," she says.

Hancock agrees that this makes the new approach challenging. "The problem they were trying to solve was much more complex, because we narrowed down to a modest number of amino acids whereas they just took anything that came up in nature," he says. That could represent a significant advance, but the output at this stage isn't optimal. Hancock adds that the strategy does find some good sequences to start with, so he thinks it could be combined with other methods to improve on those leads and come up with really good molecules.

Visit link:
Machine learning tool sets out to find new antimicrobial peptides - Chemistry World

Machine learning methods to predict mechanical ventilation and mortality in patients with COVID-19 – DocWire News

This article was originally published here

PLoS One. 2021 Apr 1;16(4):e0249285. doi: 10.1371/journal.pone.0249285. eCollection 2021.

ABSTRACT

BACKGROUND: The Coronavirus disease 2019 (COVID-19) pandemic has affected millions of people across the globe. It is associated with a high mortality rate and has created a global crisis by straining medical resources worldwide.

OBJECTIVES: To develop and validate machine-learning models for predicting mechanical ventilation (MV) in patients presenting to the emergency room and for predicting in-hospital mortality once a patient is admitted.

METHODS: Two cohorts were used for the two different aims. For the prediction of MV, 1980 COVID-19 patients were enrolled: data from 1036 patients, including demographics, past smoking and drinking history, past medical history, vital signs at the emergency room (ER), laboratory values, and treatments, were collected for training, and 674 patients were enrolled for validation, using the XGBoost algorithm. For the second aim, predicting in-hospital mortality, 3491 patients hospitalized via the ER were enrolled. CatBoost, a new gradient-boosting algorithm, was applied for training and validation on this cohort.

RESULTS: Among the 1980 ER patients, older age, higher temperature, increased respiratory rate (RR), and lower oxygen saturation (SpO2) in the first set of vital signs were associated with an increased risk of MV. The model had a high accuracy of 86.2% and a negative predictive value (NPV) of 87.8%. For patients who required MV, a higher RR, a higher body mass index (BMI), and a longer length of stay in the hospital were the major features associated with in-hospital mortality. The second model had a high accuracy of 80% with an NPV of 81.6%.

CONCLUSION: Machine learning models using the XGBoost and CatBoost algorithms can predict the need for mechanical ventilation and mortality with very high accuracy in COVID-19 patients.

PMID:33793600 | DOI:10.1371/journal.pone.0249285
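
The cohort data isn't public here, but the described setup, training a gradient-boosted classifier and reporting accuracy and negative predictive value (NPV = TN / (TN + FN)), can be sketched on synthetic placeholder data:

```python
# Sketch of the described setup on synthetic placeholder data, NOT the
# study's cohort: train an XGBoost classifier to predict mechanical
# ventilation, then report accuracy and NPV on a held-out split.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score, confusion_matrix
from xgboost import XGBClassifier

rng = np.random.default_rng(0)
n = 1980  # cohort size from the abstract
# Columns: age, temperature (C), respiratory rate, SpO2 (%).
X = np.column_stack([
    rng.normal(60, 15, n),
    rng.normal(37.5, 0.8, n),
    rng.normal(20, 5, n),
    rng.normal(94, 4, n),
])
# Synthetic label: MV risk rises with age, temperature, RR; falls with SpO2.
logit = 0.03 * X[:, 0] + 0.5 * X[:, 1] + 0.1 * X[:, 2] - 0.2 * X[:, 3] - 6
y = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
model = XGBClassifier(n_estimators=200, max_depth=4).fit(X_tr, y_tr)

pred = model.predict(X_te)
tn, fp, fn, tp = confusion_matrix(y_te, pred).ravel()
print(f"accuracy: {accuracy_score(y_te, pred):.3f}")
print(f"NPV: {tn / (tn + fn):.3f}")  # NPV = TN / (TN + FN)
```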

See the rest here:
Machine learning methods to predict mechanical ventilation and mortality in patients with COVID-19 - DocWire News