Archive for the ‘Machine Learning’ Category

The rise of Machine Learning Robots: Explore machine learning in … – Robotics Tomorrow


Among the latest technological advances, artificial intelligence (AI) and machine learning have become increasingly significant. Their transformative capacity is evident across many fields, and robotics is a prime example. Machine learning robots are changing the way machines interact with their environment, acquiring knowledge and adapting to new situations. This article explores several trending topics in the technology sector: machine learning robots, their relationship with deep learning, the intersection of robotics and machine learning, and the differences between artificial intelligence and machine learning.

To see how machine learning works in a simple example, consider streaming platforms: they use a viewer's past behaviour to recommend future audiovisual content. The platform's recommendations are not static but adapt as the user's preferences change.
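As a much-simplified sketch of that idea (the titles, genres and scoring rule below are invented for illustration, and real platforms use far richer models), a recommender can rank unseen titles by how often their genre appears in the user's watch history:

```python
from collections import Counter

# Hypothetical catalogue of (title, genre) pairs -- invented for illustration.
catalogue = [("Nebula", "sci-fi"), ("Hearts", "drama"), ("Orbit", "sci-fi"),
             ("Gags", "comedy"), ("Quasar", "sci-fi"), ("Tears", "drama")]

def recommend(watch_history, catalogue, k=2):
    """Rank unseen titles by how often their genre appears in the
    user's watch history and return the top k."""
    genre_counts = Counter(genre for _, genre in watch_history)
    seen = {title for title, _ in watch_history}
    unseen = [(t, g) for t, g in catalogue if t not in seen]
    ranked = sorted(unseen, key=lambda tg: genre_counts[tg[1]], reverse=True)
    return [t for t, _ in ranked[:k]]

history = [("Nebula", "sci-fi"), ("Orbit", "sci-fi")]
print(recommend(history, catalogue))  # sci-fi titles surface first
```

Because the genre counts are recomputed from the latest history, the output shifts as viewing habits shift, which is the adaptive behaviour described above.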

A machine learning robot is a robot that incorporates machine learning techniques to acquire knowledge and improve its responsiveness based on what it learns. These robots are designed to collect data from their environment through a variety of sensors, process that information and adjust their behaviour accordingly, greatly extending their autonomy.

The machine learning process allows robots to recognise patterns that help them understand their environment and perform specific tasks more efficiently by applying what they learn. By using machine learning algorithms, robots can learn autonomously without requiring specific programming for each task.

DEEP LEARNING AND MACHINE LEARNING ROBOTS

In technical terms, deep learning is a model within machine learning that is of particular interest to the robotics sector. This model is based on layered algorithms known as artificial neural networks, which imitate the way the human brain processes data.

These neural networks allow deep learning robots to process complex data, extract meaningful features, assess whether their predictions are accurate and, as a result, make better decisions.

In short, the development of deep learning algorithms aims to make them increasingly efficient with less human supervision.

Thus, the ability of robots to identify objects, recognise speech and understand natural language is driven by deep learning techniques.

MACHINE LEARNING AND ROBOTICS

The intersection of robotics and machine learning introduces new possibilities for the autonomy of mobile robots and the intelligence of their task execution. Machine learning robots are being used in a wide range of applications, from inspection, maintenance and surveillance to manufacturing and healthcare.

Surveillance functions that a mobile robot can already perform efficiently, such as maintenance rounds in an infrastructure, reach higher levels of accuracy and anticipation thanks to machine learning algorithms.

In the manufacturing industry, machine learning robots can improve the efficiency and accuracy of production processes by learning to perform complex tasks more quickly and accurately. In healthcare, we can already see the value they bring by assisting in surgeries, making accurate diagnoses or providing personalised patient care.

WHAT IS THE DIFFERENCE BETWEEN AI AND MACHINE LEARNING?

The main difference between artificial intelligence and machine learning lies in their focus and application.

Artificial intelligence seeks to develop systems capable of performing tasks that require human intelligence, such as speech recognition, decision making and natural language understanding. Moreover, AI works with structured data as well as semi-structured and unstructured data.

Machine learning, on the other hand, focuses on teaching machines to learn from data, improving their performance as they acquire more information. Instead of explicitly programming each step, machine learning allows robots to adapt and improve their behaviour autonomously. Unlike AI in general, machine learning typically works only with structured or semi-structured data.

In summary, artificial intelligence is a broad field that involves a variety of techniques and approaches, while machine learning is a specific technique used to train machines to learn and improve their accuracy from experience.

WHICH IS BETTER, ARTIFICIAL INTELLIGENCE OR MACHINE LEARNING?

The question of which is better, artificial intelligence or machine learning, is not a simple one to answer. Artificial intelligence is a broader field that covers a variety of techniques, including machine learning. While artificial intelligence focuses on creating systems that mimic human intelligence in a general way, machine learning focuses on teaching machines to learn from experience and improve from data.

Artificial intelligence is the broader, more aspirational concept, while machine learning is a specific technique within artificial intelligence that has proven to be very effective in a variety of applications. In short, machine learning is a powerful tool used in the field of artificial intelligence.

CONCLUSION

Machine learning robots are changing the way humans interact with technology and also the way technology interacts with the world around it. These robots use machine learning skills to acquire knowledge and improve their performance over time. The field of Artificial Intelligence includes deep learning, as a branch of machine learning, which further boosts the capabilities of these robots by enabling them to process complex data and recognise meaningful patterns.

While artificial intelligence and machine learning are related concepts, machine learning is a specific technique within the broader field of artificial intelligence. Finally, machine learning robots demonstrate the power of combining robotics and machine learning to create machines that are more intelligent, adaptive and ultimately useful to humans.

The rest is here:
The rise of Machine Learning Robots: Explore machine learning in ... - Robotics Tomorrow

UW-Madison: Cancer diagnosis and treatment could get a boost … – University of Wisconsin System

Thanks to machine learning algorithms, short pieces of DNA floating in the bloodstream of cancer patients can help doctors diagnose specific types of cancer and choose the most effective treatment for a patient.

The new analysis technique, created by University of Wisconsin-Madison researchers and published recently in Annals of Oncology, is compatible with liquid biopsy testing equipment already approved in the United States and in use in cancer clinics. This could speed the new method's path to helping patients.

Liquid biopsies rely on simple blood draws instead of taking a piece of cancerous tissue from a tumor with a needle.


"Liquid biopsies are much less invasive than a tissue biopsy, which may even be impossible to do in some cases, depending on where a patient's tumor is," says Marina Sharifi, a professor of medicine and oncologist in UW-Madison's School of Medicine and Public Health. "It's much easier to do them multiple times over the course of a patient's disease to monitor the status of cancer and its response to treatment."

Cancerous tumors shed genetic material, called cell-free DNA, into the bloodstream as they grow. But not all parts of a cancer cell's DNA are likely to tumble away. Cells store some of their DNA by coiling it up in protective balls called histones. They unwrap sections to access parts of the genetic code as needed.

Kyle Helzer, a UW-Madison bioinformatics scientist, says that parts of the DNA containing the genes cancer cells use most often are uncoiled more frequently and thus are more likely to fragment.

"We're exploiting that larger distribution of those regions among cell-free DNA to identify cancer types," adds Helzer, who is also a co-lead author of the study along with Sharifi and scientist Jamie Sperger.
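A toy sketch of that idea (the coordinates, region names and overlap rule are all invented here; real fragmentomic pipelines are far more involved): count how many cell-free DNA fragments overlap each targeted gene region, then normalise the counts into a per-region profile that a classifier could be trained on:

```python
from collections import Counter

def fragment_profile(fragments, regions):
    """Count cell-free DNA fragments overlapping each targeted gene region
    and normalise into a profile a classifier could be trained on."""
    counts = Counter()
    for start, end in fragments:
        for name, (r_start, r_end) in regions.items():
            if start < r_end and end > r_start:   # interval overlap test
                counts[name] += 1
    total = sum(counts.values()) or 1
    return {name: counts[name] / total for name in regions}

# Invented coordinates: a frequently unwrapped gene sheds more fragments.
regions = {"geneA": (0, 100), "geneB": (200, 300)}
fragments = [(10, 50), (60, 90), (95, 120), (210, 260)]
print(fragment_profile(fragments, regions))  # geneA dominates the profile
```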


The research team, led by UW-Madison senior authors Shuang (George) Zhao, professor of human oncology, and Joshua Lang, professor of medicine, used DNA fragments found in blood samples from a past study of nearly 200 patients (some with cancer, some without), and new samples collected from more than 300 patients treated for breast, lung, prostate or bladder cancers at UW-Madison and other research hospitals in the Big Ten Cancer Research Consortium.

The scientists divided each group of samples into two. One portion was used to train a machine-learning algorithm to identify patterns among the fragments of cell-free DNA, relatively unique fingerprints specific to different types of cancers. They used the other portion to test the trained algorithm. The algorithm topped 80 percent accuracy in translating the results of a liquid biopsy into both a cancer diagnosis and the specific type of cancer afflicting a patient.
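The split-and-evaluate procedure described here can be sketched in miniature. Everything below is a synthetic stand-in: two invented "cancer types" separated by a single made-up feature, classified with a deliberately simple nearest-centroid model rather than the team's actual algorithm:

```python
import random

def nearest_centroid_train(samples):
    """samples: list of (feature_vector, label) pairs. Returns per-class centroids."""
    sums, counts = {}, {}
    for x, y in samples:
        sums.setdefault(y, [0.0] * len(x))
        counts[y] = counts.get(y, 0) + 1
        sums[y] = [s + v for s, v in zip(sums[y], x)]
    return {y: [s / counts[y] for s in sums[y]] for y in sums}

def predict(centroids, x):
    dist = lambda a, b: sum((u - v) ** 2 for u, v in zip(a, b))
    return min(centroids, key=lambda y: dist(centroids[y], x))

random.seed(0)
# Synthetic stand-in for fragment-derived features: two made-up cancer
# types whose single feature is drawn from different distributions.
data = [([random.gauss(150, 5)], "typeA") for _ in range(50)] + \
       [([random.gauss(180, 5)], "typeB") for _ in range(50)]
random.shuffle(data)
train, test = data[:50], data[50:]   # divide the samples into two portions
model = nearest_centroid_train(train)
accuracy = sum(predict(model, x) == y for x, y in test) / len(test)
print(f"held-out accuracy: {accuracy:.2f}")
```

Training on one portion and scoring on the held-out portion, as the researchers did, is what makes the reported accuracy figure meaningful rather than a measure of memorisation.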

In addition, the machine learning approach was able to tell apart two subtypes of prostate cancer: the most common version, adenocarcinoma, and a swift-progressing variant called neuroendocrine prostate cancer (NEPC) that is resistant to standard treatment approaches. Because NEPC is often difficult to distinguish from adenocarcinoma, but requires aggressive action, it puts oncologists like Lang and Sharifi in a bind.


"Currently, the only way to diagnose NEPC is via a needle biopsy of a tumor site, and it can be difficult to get a conclusive answer from this approach, even if we have a high clinical suspicion for NEPC," Sharifi says.

Liquid biopsies have advantages, Sperger adds, in that "you don't have to know which tumor site to biopsy, and it is much easier for the patient to get a standard blood draw."

The blood samples were processed using cell-free DNA sequencing technology marketed by Iowa-based Integrated DNA Technologies. Using standard panels like those currently in the clinic is a departure from other methods of fragmentomic analysis of cancer DNA in blood samples, one that can reduce the time and cost of testing.

"Most commercial panels have been developed around the most important cancer genes that indicate certain drugs for treatment, and they sequence those select genes," says Zhao. "What we've shown is that we can use those same panels and same targeted genes to look at the fragmentomics of the cell-free DNA in a blood sample and identify the type of cancer a patient has."

The UW Carbone Cancer Center's Circulating Biomarker Core and Biospecimen Disease-Oriented Team contributed to the collection of the study's hundreds of patient samples.

This research was funded in part by grants from the National Institutes of Health (DP2 OD030734, 1UH2CA260389 and R01CA247479) and the Department of Defense (PC190039, PC200334 and PC180469).

Written by Chris Barncard

Link to original story: https://news.wisc.edu/algorithmic-blood-test-analysis-will-ease-diagnosis-of-cancer-types-guide-treatment/

The rest is here:
UW-Madison: Cancer diagnosis and treatment could get a boost ... - University of Wisconsin System

Department of Energy Grant will Fund EECS Professor Lu’s … – University of California, Merced

UC Merced Computer Science and Engineering Professor Xiaoyi Lu is leading a collaboration that secured a $4.35 million grant from the Department of Energy (DOE) to improve federated machine learning systems.

Lu is partnering with the University of Iowa and Argonne National Laboratory near Chicago to improve the understanding of scalable, federated, privacy-preserving machine learning. This project is one of five initiatives centered on distributed resilient systems in science that have collectively received $40 million in funding from the DOE.

"Scientific research is getting more complex and will need next-generation workflows as we move forward with larger data sets and new tools spread across the U.S.," Ceren Susut, DOE acting Associate Director of Science for Advanced Scientific Computing Research, said in a news release announcing the awards. "This program will explore how science can be conducted in this new environment - where tools and data are in multiple places but must be integrated in a high-performance fashion."

According to his abstract, Lu's proposal "aims to address the critical need for a scalable and resilient Federated Learning simulation and modeling system in the context of edge computing-related scientific research and exploration."

Federated Learning embodies a decentralized approach to training machine learning models, placing a strong emphasis on enhancing data privacy. In contrast to the traditional method, which requires data to be transferred from client devices to central servers, Federated Learning harnesses raw data residing on edge devices to facilitate local model training. These edge devices, which collect data close to its source and handle local network traffic, assume a pivotal role in this process.
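A minimal sketch of that decentralized loop, assuming a one-parameter linear model and two invented clients (a real system of the kind proposed would train neural networks and add privacy-preserving aggregation): each device fits the shared weight on its own private data, and the server sees and averages only the resulting weights:

```python
def local_update(w, data, lr=0.1, epochs=5):
    """One client's local training: fit y = w * x by SGD on private data.
    The raw data never leaves the device -- only the updated weight does."""
    for _ in range(epochs):
        for x, y in data:
            grad = 2 * (w * x - y) * x   # d/dw of (w*x - y)^2
            w -= lr * grad
    return w

def federated_average(client_weights):
    """Server step of federated averaging: combine the client models."""
    return sum(client_weights) / len(client_weights)

# Two hypothetical edge devices, each holding private samples of y = 3x.
clients = [[(1.0, 3.0), (2.0, 6.0)], [(0.5, 1.5), (3.0, 9.0)]]
w_global = 0.0
for _ in range(10):                      # 10 communication rounds
    updates = [local_update(w_global, data) for data in clients]
    w_global = federated_average(updates)
print(round(w_global, 2))                # converges toward 3.0
```

Only model weights cross the network in each round, which is the privacy property that makes the approach attractive for edge devices holding sensitive raw data.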

"Federated learning is becoming an essential technique for machine learning on edge devices as the sheer amount of raw data generated by these devices requires real-time, effective data processing at the edge device ends," Lu wrote in his abstract. "The processed data carrying intelligent information must be encrypted for privacy protection, making federated learning the best solution for building a well-trained model across decentralized smart edge devices with secure and efficient data-sharing policies."

Lu and his partners propose a scalable and resilient federated learning simulation and modeling system. This system will empower users to harness privacy-preserving algorithms, introduce novel algorithms, and simulate as well as deploy a wide range of federated learning algorithms with privacy-preserving techniques.

"The proposed system brings forth substantial advantages for researchers and developers engaged in real-world federated learning systems," Lu explained. "It furnishes them with a valuable platform for conducting proof-of-concept implementations and performance validation, which are essential prerequisites before deploying and testing their machine learning models in real-world contexts. Additionally, the proposed system is poised to make a significant scientific impact on DOE-mission-based applications, including scientific machine learning and critical infrastructure, where concerns regarding data privacy hold significant weight."

See the original post here:
Department of Energy Grant will Fund EECS Professor Lu's ... - University of California, Merced

Can AI help in climate change? CSU researchers have the answer. – Source

A machine learning model created at CSU has improved forecasters' confidence in storm predictions and is now used daily by the National Weather Service's Storm Prediction Center and Weather Prediction Center.

The model, developed in the Department of Atmospheric Science by a team led by Schumacher, can accurately predict excessive rainfall, hail and tornadoes four to eight days in advance. It is called CSU-MLP, for Colorado State University-Machine Learning Probabilities.

Schumacher's team worked with NWS forecasters over six years to test and refine the model for their purposes. The CSU code is now running on the Storm Prediction Center's and Weather Prediction Center's operational computer systems, helping forecasters predict hazardous weather so that people in harm's way have enough lead time to prepare.

The atmospheric scientists trained the model on historical records of severe weather and NOAA "reforecasts," retrospective forecasts run with today's improved numerical models.

Team member Allie Mazurek, a Ph.D. student, is working on explainable AI for the CSU-MLP forecasts. She's trying to figure out which atmospheric data inputs are most important to the model's predictions, so the model will be more transparent for forecasters.

"These new tools that use AI for weather prediction are developing quickly and showing some really promising and exciting results," Schumacher said. "But they also have limitations, just like traditional weather prediction models and human forecasters have strengths and limitations. The best way to advance the field and improve forecasts will be to take advantage of each of their strengths: the AI for what it's good at, which is identifying patterns in massive datasets; numerical weather prediction models for being grounded in the physics; and humans for synthesizing, understanding and communicating."

Schumacher discusses the promise and limitations of AI for weather prediction in more detail in this piece in The Conversation, co-authored by Aaron Hill, a former CSU research scientist who is now a faculty member at the University of Oklahoma.

Read more from the original source:
Can AI help in climate change? CSU researchers have the answer. - Source

Unlocking the potential of IoT systems: The role of Deep Learning … – Innovation News Network

The Internet of Things (IoT), a network of interconnected devices equipped with sensors and software, has revolutionised how we interact with the world around us, empowering us to collect and analyse data like never before.

As technology advances and becomes more accessible, more objects are equipped with connectivity and sensor capabilities, making them part of the IoT ecosystem. The number of active IoT systems is expected to reach 29.7 billion by 2027, marking a significant surge from the 3.6 billion devices recorded in 2015. This exponential growth creates tremendous demand for solutions that mitigate the safety and computational challenges of IoT applications. In particular, industrial IoT, automotive, and smart homes are three main areas with specific requirements, but they share a common need for efficient IoT systems to enable optimal functionality and performance.

Increasing the efficiency of IoT systems and unlocking their potential can be achieved through Artificial Intelligence (AI), creating AIoT architectures. By utilising sophisticated algorithms and Machine Learning techniques, AI empowers IoT systems to make intelligent decisions, process vast amounts of data, and extract valuable insights. For instance, this integration drives operational optimisation in industrial IoT, facilitates advanced autonomous vehicles, and offers intelligent energy management and personalised experiences in smart homes.

Among the different AI algorithms, Deep Learning, which leverages artificial neural networks, is particularly well suited to IoT systems for several reasons. One of the primary reasons is its ability to learn and extract features automatically from raw sensor data. This is particularly valuable in IoT applications, where the data can be unstructured, noisy or full of complex relationships. Additionally, Deep Learning enables IoT applications to handle real-time and streaming data efficiently. This allows for continuous analysis and decision-making, which is crucial in time-sensitive applications such as real-time monitoring, predictive maintenance or autonomous control systems.
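As a much-simplified, library-free stand-in for such continuous stream analysis (a deployed system would run a trained deep model rather than a fixed statistical rule), the sketch below flags sensor readings that deviate sharply from a rolling window of recent values; the readings and threshold are invented:

```python
from collections import deque
import math

def stream_monitor(readings, window=20, threshold=3.0):
    """Flag readings that deviate strongly from a rolling window of recent
    values -- a stand-in for continuous analysis of an IoT sensor stream."""
    recent = deque(maxlen=window)
    alerts = []
    for t, x in enumerate(readings):
        if len(recent) == recent.maxlen:
            mean = sum(recent) / len(recent)
            var = sum((v - mean) ** 2 for v in recent) / len(recent)
            std = math.sqrt(var) or 1e-9   # guard against a zero std
            if abs(x - mean) / std > threshold:
                alerts.append(t)
        recent.append(x)
    return alerts

# Invented sensor trace: a steady cyclic signal with one injected fault.
readings = [20.0 + 0.1 * (i % 5) for i in range(100)]
readings[60] = 35.0
print(stream_monitor(readings))  # the fault at index 60 is flagged
```

Because each reading is processed as it arrives with bounded memory, the same loop shape applies to the time-sensitive edge workloads named above, such as predictive maintenance.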

Despite the numerous advantages of Deep Learning for IoT systems, its implementation has inherent challenges, such as efficiency and safety, that must be addressed to fully leverage its potential. The Very Efficient Deep Learning in IoT (VEDLIoT) project aims to solve these challenges.

A high-level overview of the different VEDLIoT components is given in Fig. 1. IoT is integrated with Deep Learning by the VEDLIoT project to accelerate applications and optimise the energy efficiency of IoT. VEDLIoT achieves these objectives through the utilisation of several key components:

VEDLIoT concentrates on some use cases, such as demand-oriented interaction methods in smart homes (see Fig. 2), industrial IoT applications like Motor Condition Classification and Arc Detection, and the Pedestrian Automatic Emergency Braking (PAEB) system in the automotive sector (see Fig. 3). VEDLIoT systematically optimises such use cases through a bottom-up approach by employing requirement engineering and verification techniques, as shown in Fig. 1. The project combines expert-level knowledge from diverse domains to create a robust middleware that facilitates development through testing, benchmarking, and deployment frameworks, ultimately ensuring the optimisation and effectiveness of Deep Learning algorithms within IoT systems. In the following sections, we briefly present each component of the VEDLIoT project.

Various accelerators are available for a wide range of applications, from small embedded systems with power budgets in the milliwatt range to high-power cloud platforms. These accelerators are categorised into three main groups based on their peak performance values, as shown in Fig. 4.

The first group is the ultra-low power category (< 3 W), which consists of energy-efficient microcontroller-style cores combined with compact accelerators for specific Deep Learning functions. These accelerators are designed for IoT applications and offer simple interfaces for easy integration. Some accelerators in this category provide camera or audio interfaces, enabling efficient vision or sound processing tasks. They may offer a generic USB interface, allowing them to function as accelerator devices attached to a host processor. These ultra-low power accelerators are ideal for IoT applications where energy efficiency and compactness are key considerations, providing optimised performance for Deep Learning tasks without excessive power.

The VEDLIoT use case of predictive maintenance is a good example and makes use of an ultra-low power accelerator. One of the most important design criteria is low power consumption, as it is a small battery-powered box that can be installed externally on any electric motor and should monitor the motor for at least three years without a battery change.

The next category is the low-power group (3 W to 35 W), which targets a broad range of automation and automotive applications. These accelerators feature high-speed interfaces for external memories and peripherals and efficient communication with other processing devices or host systems such as PCIe. They support modular and microserver-based approaches and provide compatibility with various platforms. Additionally, many accelerators in this category incorporate powerful application processors capable of running full Linux operating systems, allowing for flexible software development and integration. Some devices in this category include dedicated application-specific integrated circuits (ASICs), while others feature NVIDIAs embedded graphics processing units (GPUs). These accelerators balance power efficiency and processing capabilities, making them well-suited for various compute-intensive tasks in the automation and automotive domains.

The high-performance category (> 35 W) of accelerators is designed for demanding inference and training scenarios in edge and cloud servers. These accelerators offer exceptional processing power, making them suitable for computationally-intensive tasks. They are commonly deployed as PCIe extension cards and provide high-speed interfaces for efficient data transfer. The devices in this category have high thermal design powers (TDPs), indicating their ability to handle significant workloads. These accelerators include dedicated ASICs, known for their specialised performance in Deep Learning tasks. They deliver accelerated processing capabilities, enabling faster inference and training times. Some consumer-class GPUs may also be included in benchmarking comparisons to provide a broader perspective.

Selecting the proper accelerator from this wide range of options is not straightforward. However, VEDLIoT takes on this crucial responsibility by conducting thorough assessments and evaluations of various architectures, including GPUs, field-programmable gate arrays (FPGAs) and ASICs. The project carefully examines these accelerators' performance and energy consumption to ensure their suitability for specific use cases. By leveraging its expertise and comprehensive evaluation process, VEDLIoT guides the selection of Deep Learning accelerators within the project and in the broader landscape of IoT and Deep Learning applications.

Trained Deep Learning models contain redundancy and can sometimes be compressed by a factor of up to 49 with negligible accuracy loss. Although many works address such compression, most report theoretical speed-ups that do not always translate into more efficient hardware execution, because they do not consider the target hardware. On the other hand, the process of deploying Deep Learning models on edge devices involves several steps, such as training, optimisation, compilation and runtime. Although various frameworks are available for these steps, their interoperability can vary, resulting in different outcomes and performance levels. VEDLIoT addresses these challenges through hardware-aware model optimisation using ONNX, an open format for representing Machine Learning models, ensuring compatibility with the current open ecosystem. Additionally, Renode, an open-source simulation framework, serves as a functional simulator for complex heterogeneous systems, allowing for the simulation of complete System-on-Chips (SoCs) and the execution of the same software used on hardware.

Furthermore, VEDLIoT uses the EmbeDL toolkit to optimise Deep Learning models. The EmbeDL toolkit offers comprehensive tools and techniques to optimise Deep Learning models for efficient deployment on resource-constrained devices. By considering hardware-specific constraints and characteristics, the toolkit enables developers to compress, quantise, prune, and optimise models while minimising resource utilisation and maintaining high inference accuracy. EmbeDL focuses on hardware-aware optimisation and ensures that Deep Learning models can be effectively deployed on edge devices and IoT devices, unlocking the potential for intelligent applications in various domains. With EmbeDL, developers can achieve superior performance, faster inference, and improved energy efficiency, making it an essential resource for those seeking to maximise the potential of Deep Learning in real-world applications.
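The kinds of transformations such a toolkit applies can be illustrated with a tiny, library-free sketch. This is generic magnitude pruning and uniform 8-bit quantisation, not EmbeDL's actual implementation, and the weights are invented:

```python
def prune(weights, fraction=0.5):
    """Magnitude pruning: zero out the given fraction of smallest-magnitude weights."""
    k = int(len(weights) * fraction)
    cutoff = sorted(abs(w) for w in weights)[k - 1] if k else -1.0
    return [0.0 if abs(w) <= cutoff else w for w in weights]

def quantize(weights, bits=8):
    """Uniform symmetric quantisation: map floats onto signed integers."""
    qmax = 2 ** (bits - 1) - 1
    scale = max(abs(w) for w in weights) / qmax or 1.0
    return [round(w / scale) for w in weights], scale

def dequantize(q, scale):
    return [v * scale for v in q]

weights = [0.02, -0.5, 1.2, 0.004, -0.9, 0.31]
sparse = prune(weights)               # half the weights zeroed out
q, scale = quantize(sparse)           # 8-bit integer representation
restored = dequantize(q, scale)       # close to `sparse`, small rounding error
print(sparse, q)
```

Zeroed weights compress well and 8-bit integers need a quarter of the memory of 32-bit floats, which is where the size and energy savings on constrained IoT hardware come from.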

Since VEDLIoT aims to combine Deep Learning with IoT systems, ensuring security and safety becomes crucial. To put these aspects at its core, the project leverages trusted execution environments (TEEs), such as Intel SGX and ARM TrustZone, along with open-source runtimes like WebAssembly. TEEs provide secure environments that isolate critical software components and protect against unauthorised access and tampering. By using WebAssembly, VEDLIoT offers a common execution environment across the entire continuum, from IoT devices through the edge and into the cloud.

In the context of TEEs, VEDLIoT introduces Twine and WaTZ as trusted runtimes for Intel's SGX and ARM's TrustZone, respectively. These runtimes simplify software creation within secure environments by leveraging WebAssembly and its modular interface. This integration bridges the gap between trusted execution environments and AIoT, helping to seamlessly integrate Deep Learning frameworks. Within TEEs using WebAssembly, VEDLIoT achieves hardware-independent, robust protection against malicious interference, preserving the confidentiality of both data and Deep Learning models. This integration highlights VEDLIoT's commitment to securing critical software components, enabling secure development and facilitating privacy-enhanced AIoT applications in cloud-edge environments.

Additionally, VEDLIoT employs a specialised architectural framework, as shown in Fig. 5, that helps to define, synchronise and co-ordinate the requirements and specifications of AI components and traditional IoT system elements. This framework consists of various architectural views that address the system's specific design concerns and quality aspects, including security and ethical considerations. By using these architecture views as templates and filling them out, correspondences and dependencies can be identified between the quality-defining architecture views and other design decisions, such as AI model construction, data selection and communication architecture. This holistic approach ensures that security and ethical aspects are seamlessly integrated into the overall system design, reinforcing VEDLIoT's commitment to robustness and addressing emerging challenges in AI-enabled IoT systems.

Traditional hardware platforms support only homogeneous IoT systems. RECS, by contrast, is an AI-enabled microserver hardware platform that allows for the seamless integration of diverse technologies. This enables fine-tuning of the platform towards specific applications, providing a comprehensive cloud-to-edge platform. All RECS variants share the same design paradigm: a densely coupled, highly integrated communication infrastructure. The variants use different microserver sizes, from credit card size to tablet size, allowing customers to choose the best option for each use case and scenario. Fig. 6 gives an overview of the RECS variants.

The three different RECS platforms are suitable for cloud/data centre (RECS|Box), edge (t.RECS) and IoT usage (u.RECS). All RECS servers use industry-standard microservers, which are exchangeable and allow for use of the latest technology just by changing a microserver. Hardware providers of these microservers offer a wide spectrum of different computing architectures like Intel, AMD and ARM CPUs, FPGAs and combinations of a CPU with an embedded GPU or AI accelerator.

VEDLIoT addresses the challenge of bringing Deep Learning to IoT devices with limited computing performance and low-power budgets. The VEDLIoT AIoT hardware platform provides optimised hardware components and additional accelerators for IoT applications covering the entire spectrum, from embedded via edge to the cloud. On the other hand, a powerful middleware is employed to ease the programming, testing, and deployment of neural networks in heterogeneous hardware. New methodologies for requirement engineering, coupled with safety and security concepts, are incorporated throughout the complete framework. The concepts are tested and driven by challenging use cases in key industry sectors like automotive, automation, and smart homes.

Please note, this article will also appear in the fifteenth edition of our quarterly publication.

See the rest here:
Unlocking the potential of IoT systems: The role of Deep Learning ... - Innovation News Network