Archive for the ‘Machine Learning’ Category

Alteryx Ventures Announces Strategic Investment in Fiddler to Boost … – PR Newswire

The partnership deepens Alteryx's commitment to customers and their democratization journey through investments in governance, ethical AI, and machine learning model management

IRVINE, Calif., April 13, 2023 /PRNewswire/ -- Alteryx, Inc. (NYSE: AYX), the Analytics Cloud Platform company, has announced a strategic investment in Fiddler, a pioneer in Model Performance Management (MPM), to augment Alteryx Machine Learning within the Alteryx Analytics Cloud Platform. With this investment from Alteryx Ventures, joint customers will be able to better operationalize how they build enterprise-level machine learning pipelines with increased governance.

Fiddler is an MPM provider that offers advanced model monitoring and model governance capabilities. As Alteryx democratizes analytics for all employees across all systems at many of the world's largest and most complex enterprises, this investment in Fiddler will help customers establish stronger governance and ethical AI practices.

"Alteryx recognizes the importance of operationalizing machine learning to accelerate insights and time to value," said Asa Whillock, vice president and general manager, Alteryx Machine Learning at Alteryx. "With Fiddler, we aligned around a common vision of democratizing machine learning and managing the performance of machine learning models. We expect our customers will be able to transform ML predictions into consistently better business decisions at a faster pace."

Commenting on this partnership, Krishna Gade, founder and CEO of Fiddler, said, "As organizations launch more ML models and AI applications into production, it is imperative to validate, monitor, and retrain in a continuous fashion. We are proud to partner with Alteryx to help customers connect model and AI performance to KPIs that drive better business outcomes and help build trust into AI."

Alteryx Ventures invests in companies with innovative technology and services that complement Alteryx's analytics and data science products and encourage innovation within the analytics ecosystem. Alteryx's vision centers on enabling every person to achieve breakthrough outcomes from data through analytics automation, data science, and unprecedented ease of use.

Learn more about Alteryx Machine Learning here.

About Alteryx

Alteryx (NYSE: AYX) powers analytics for all by providing our leading Analytics Automation Platform. Alteryx delivers easy end-to-end automation of data engineering, analytics, reporting, machine learning, and data science processes, enabling enterprises everywhere to democratize data analytics across their organizations for a broad range of use cases. More than 8,000 customers globally rely on Alteryx to deliver high-impact business outcomes. To learn more, visit http://www.alteryx.com.

Alteryx is a registered trademark of Alteryx, Inc. All other product and brand names may be trademarks or registered trademarks of their respective owners.

SOURCE Alteryx, Inc.

Read this article:
Alteryx Ventures Announces Strategic Investment in Fiddler to Boost ... - PR Newswire

TinyML: The Future of Machine Learning on a Minuscule Scale – Unite.AI

In recent years, the field of machine learning has experienced exponential growth, with applications in diverse domains such as healthcare, finance, and automation. One of the most promising areas of development is TinyML, which brings machine learning to resource-constrained devices. We will explore the concept of TinyML, its applications, and its potential to revolutionize industries by offering intelligent solutions on a small scale.

TinyML is an emerging area in machine learning that focuses on the development of algorithms and models that can run on low-power, memory-constrained devices. The term TinyML is derived from the words tiny and machine learning, reflecting the goal of enabling ML capabilities on small-scale hardware. By designing efficient models that can operate in such environments, TinyML has the potential to bring artificial intelligence (AI) to billions of devices that were previously unable to support it.

As the number of IoT devices skyrockets, so does the need for intelligent, localized decision-making. Traditional cloud-based approaches to AI can be limited by factors such as latency, bandwidth, and privacy concerns. In contrast, TinyML enables on-device intelligence, allowing for faster, more efficient decision-making without the need for constant communication with the cloud.

Furthermore, the resource constraints of small devices necessitate efficient algorithms that consume minimal power and memory. TinyML addresses these challenges by optimizing models and leveraging specialized hardware to achieve impressive results, even with limited resources.
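As a concrete illustration of that kind of optimization, the sketch below applies TensorFlow Lite post-training quantization to a placeholder Keras model. The model itself and the output file name are assumptions for demonstration, not taken from the article.

```python
import tensorflow as tf

# Placeholder model; any small Keras network could stand in here.
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(32,)),
    tf.keras.layers.Dense(16, activation="relu"),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])

# Convert to TensorFlow Lite with default optimizations, which quantize
# weights to shrink the model for memory-constrained targets.
converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
tflite_model = converter.convert()

# The resulting flatbuffer is typically several times smaller than the
# float32 model and can be deployed with TensorFlow Lite for Microcontrollers.
with open("model.tflite", "wb") as f:
    f.write(tflite_model)
```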

Several technologies and advancements have facilitated the growth of TinyML.

The potential applications of TinyML are vast, spanning various industries:

Wildlife Conservation: TinyML-enabled devices can help track and monitor endangered species, allowing for more effective conservation efforts and data collection.

While TinyML presents immense potential, it also faces several challenges that must be addressed to fully realize its capabilities.

Conclusion

TinyML is an exciting and rapidly growing field that promises to bring the power of machine learning to billions of small, resource-constrained devices. By optimizing ML models and leveraging cutting-edge hardware and software technologies, TinyML has the potential to revolutionize industries and improve the lives of people worldwide. As researchers and engineers continue to innovate and overcome the challenges facing TinyML, the future of this technology looks incredibly promising.

More here:
TinyML: The Future of Machine Learning on a Minuscule Scale - Unite.AI

Machine Learning Tidies Up the Cosmos – Universe Today

Amanda Morris, a press release writer at Northwestern University, describes an important astronomical effect in terms entertaining enough to be worth reposting here: "The cosmos would look a lot better if the Earth's atmosphere wasn't photobombing it all the time." That's certainly one way to describe the air's effect on astronomical observations, and it's annoying enough to astronomers that they constantly have to correct for distortions from the Earth's atmosphere, even at the most advanced observatories at the highest altitudes. Now a team from Northwestern and Tsinghua Universities has developed an AI-based tool that allows astronomers to automatically remove the blurring effect of the Earth's atmosphere from pictures taken for their research.

Dr. Emma Alexander and her student Tianao Li developed the technique in the Bio Inspired Vision Lab, a part of Northwestern's engineering school, though Li was a visiting undergraduate from Tsinghua University in Beijing. Dr. Alexander realized that accuracy was an essential part of scientific imaging, but astronomers had a tough time as their work was constantly being "photobombed," as Ms. Morris put it, by the atmosphere.

We've spent plenty of time in articles discussing the difficulties of seeing and the distortion effect that air brings to astronomical pictures, so we won't rehash that here. But it's worth looking at the details of this new technique, which could save astronomers significant amounts of time either chasing bad data or deblurring their own images.

Using a technique known as optimization together with the more widely known AI technique of deep learning, the researchers developed an algorithm that can deblur an image with less error than both classic and modern methods. The result is crisper images that are both scientifically more useful and more visually appealing, although Dr. Alexander notes the latter was simply a happy side effect of their work to improve the science.
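For a sense of what a "classic method" looks like, the sketch below runs Richardson-Lucy deconvolution from scikit-image on a synthetic blurred image. This is a generic baseline for illustration only, not the team's algorithm; see their GitHub release for the real thing.

```python
# Classic-baseline illustration: Richardson-Lucy deconvolution, one of the
# traditional techniques that learned deblurring methods are compared against.
import numpy as np
from scipy.ndimage import gaussian_filter
from skimage.restoration import richardson_lucy

rng = np.random.default_rng(0)

# Stand-in "galaxy": a bright patch blurred by a Gaussian point-spread
# function (PSF) that mimics atmospheric seeing.
image = np.zeros((64, 64))
image[28:36, 28:36] = 1.0

psf = np.zeros((15, 15))
psf[7, 7] = 1.0
psf = gaussian_filter(psf, sigma=2.0)
psf /= psf.sum()

blurred = gaussian_filter(image, sigma=2.0)
blurred += 0.01 * rng.standard_normal(blurred.shape)
blurred = np.clip(blurred, 0, None)  # keep intensities non-negative

# Iteratively estimate the unblurred image given the PSF.
deblurred = richardson_lucy(blurred, psf, num_iter=30)
```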

To train and test their algorithm, the team worked with simulated data developed by the team behind the upcoming Vera C. Rubin Observatory, which is set to be one of the world's most powerful ground-based telescopes when it begins operations next year. Using the simulated data as a training set allowed the Northwestern researchers to get a head start on testing their algorithm ahead of the observatory's opening, and also to tweak it to make it well suited for what will arguably be one of the most important observatories of the coming decades.

Besides that usefulness, the team also decided to make the project open source. They have released a version on GitHub, so programmers and astronomers alike can pull the code, tweak it to their own specific needs, and even contribute to a set of tutorials the team developed that can be used with almost any data from a ground-based telescope. One of the beauties of algorithms like this is that they can easily remove photobombers, even ones less substantive than most.

Learn More:
Northwestern: AI algorithm unblurs the cosmos
Li & Alexander: Galaxy Image Deconvolution for Weak Gravitational Lensing with Unrolled Plug-and-Play ADMM
UT: Telescope's Laser Pointer Clarifies Blurry Skies
UT: A Supercomputer Gives Better Focus to Blurry Radio Images

Lead Image: Different phases of deblurring that the algorithm applies to a galaxy. The original image is in the top left; the final image is in the bottom right. Credit: Li & Alexander


Continued here:
Machine Learning Tidies Up the Cosmos - Universe Today

Automated Machine Learning with Python: A Case Study – KDnuggets

In today's world, organizations of all kinds want to use machine learning to analyze the data their users generate daily. With machine or deep learning algorithms, they can analyze this data and then make predictions on test data in the production environment. But if we follow this process from scratch, we may face problems: building and training machine learning models is time-consuming and requires expertise in domains like programming, statistics, and data science.

To overcome such challenges, Automated Machine Learning (AutoML) comes into the picture; it has emerged as one of the most popular solutions for automating many aspects of the machine learning pipeline. In this article, we will discuss AutoML with Python through a real-life case study on the prediction of heart disease.

Heart problems are a major cause of death worldwide. One of the best ways to reduce their impact is to detect the disease early using automated methods, so that less time is consumed in diagnosis and preventive measures can be taken sooner. With this problem in mind, we will explore a dataset of medical patient records to build a machine learning model that predicts the likelihood that a patient has heart disease. Such a solution can easily be applied in hospitals, so doctors can provide treatment as soon as possible.

The complete model pipeline we followed in this case study is shown below.

Step-1: Before starting the implementation, let's import the required libraries, including NumPy for matrix manipulation, Pandas for data analysis, and Matplotlib for data visualization.
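A minimal version of this step might look as follows (the aliases below are the usual conventions; the article's exact code may differ):

```python
# Core libraries: NumPy for matrix manipulation, Pandas for data analysis,
# Matplotlib for data visualization.
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
```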

Step-2: After importing all the required libraries in the above step, we will load our dataset into a Pandas DataFrame, which stores the data in an optimized manner that is far more efficient, in both space and time, than general-purpose data structures such as linked lists or trees.

Further, we can perform data preprocessing to prepare the data for modelling and generalization. To download the dataset used here, refer to the link.
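A sketch of loading and lightly preprocessing the data; the file name heart.csv is an assumption, so substitute the path of the file you downloaded:

```python
# Load the patient records into a DataFrame (file name is an assumption).
df = pd.read_csv("heart.csv")

# Quick checks before modelling: preview rows and count missing values.
print(df.head())
print(df.isna().sum())

# Simple preprocessing: drop rows with missing values.
df = df.dropna()
```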

Step-3: After preparing the data for the machine learning model, we will use one of the well-known automated machine learning libraries, H2O.ai, to create and train the model.
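Starting a local H2O cluster and handing it the prepared DataFrame looks roughly like this (a sketch, not the article's exact code):

```python
import h2o

# Start (or connect to) a local H2O cluster.
h2o.init()

# Convert the pandas DataFrame into an H2OFrame.
hf = h2o.H2OFrame(df)

# Hold out 20% of the rows for evaluating the final model.
train, test = hf.split_frame(ratios=[0.8], seed=42)
```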

The main benefit of this platform is that it provides a high-level API from which we can easily automate many aspects of the pipeline, including feature engineering, model selection, data cleaning, and hyperparameter tuning, which drastically reduces the time required to train the machine learning model for any data science project.

Step-4: Now, to build the model, we will use the API of the H2O.ai library. We have to specify the type of problem, whether it is a regression problem, a classification problem, or some other type, along with the target variable. The library then automatically chooses the best model for the given problem statement, trying algorithms such as gradient boosting machines, random forests, and deep neural networks.
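A sketch of the AutoML call; the label column name target is an assumption about the dataset, not confirmed by the article:

```python
from h2o.automl import H2OAutoML

y = "target"  # assumed name of the heart-disease label column
x = [col for col in train.columns if col != y]

# Marking the label as a factor tells H2O this is a classification problem.
train[y] = train[y].asfactor()
test[y] = test[y].asfactor()

# Train a leaderboard of candidate models; max_models caps the run time.
aml = H2OAutoML(max_models=10, seed=42)
aml.train(x=x, y=y, training_frame=train)

print(aml.leaderboard.head())  # models ranked by the default metric
best_model = aml.leader        # best model found by AutoML
```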

Step-5: After finalizing the best model from the set of algorithms, the most critical task is fine-tuning it based on the hyperparameters involved. This tuning process involves techniques such as grid search with cross-validation, which find the best set of hyperparameters for the given problem.
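AutoML tunes its candidates internally, but if you want to tune a single algorithm yourself, H2O exposes a grid search. The sketch below tunes a gradient boosting model with cross-validation; the hyperparameter values are illustrative choices, not the article's:

```python
from h2o.estimators.gbm import H2OGradientBoostingEstimator
from h2o.grid.grid_search import H2OGridSearch

# Candidate values to explore (illustrative, not from the article).
hyper_params = {
    "max_depth": [3, 5, 7],
    "learn_rate": [0.01, 0.05, 0.1],
}

grid = H2OGridSearch(
    model=H2OGradientBoostingEstimator(nfolds=5, seed=42),
    hyper_params=hyper_params,
)
grid.train(x=x, y=y, training_frame=train)

# Rank the trained models by cross-validated AUC and keep the best one.
best_gbm = grid.get_grid(sort_by="auc", decreasing=True).models[0]
```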

Step-6: Now, the final task is to check the model's performance, using evaluation metrics such as the confusion matrix, precision, and recall for classification problems, and MSE, MAE, RMSE, and R-squared for regression models, so that we can infer how our model will behave in the production environment.
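Evaluating the leader on the held-out test set might look like this (a sketch using H2O's performance object):

```python
# Score the AutoML leader against the test set.
perf = aml.leader.model_performance(test)

print(perf.auc())               # area under the ROC curve
print(perf.confusion_matrix())  # breakdown of correct and incorrect predictions
```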

Step-7: Finally, we will plot the ROC curve, which plots the true positive rate against the false positive rate. A false positive means the model predicts the positive class for a sample that actually belongs to the negative class; a false negative means the model predicts the negative class for a sample that actually belongs to the positive class. We will also print the confusion matrix, and with that, our model's prediction and evaluation on the test data are complete. Then we will shut down our H2O cluster.
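A sketch of the final plot and shutdown, assuming a binomial (classification) performance object, whose fprs and tprs attributes hold the ROC coordinates:

```python
# Plot the ROC curve: true positive rate against false positive rate.
plt.plot(perf.fprs, perf.tprs, label="AutoML leader")
plt.plot([0, 1], [0, 1], "k--", label="chance")
plt.xlabel("False positive rate")
plt.ylabel("True positive rate")
plt.legend()
plt.show()

# Shut down the H2O cluster once evaluation is complete.
h2o.cluster().shutdown()
```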

You can access the notebook of the mentioned code from here.

To conclude this article: we have explored one of the most popular platforms for automating the whole machine learning or data science workflow, through which we can easily create and train machine learning models using the Python programming language, and we have covered a well-known case study on heart disease prediction that shows how to use such a platform effectively. With platforms like this, machine learning pipelines can be easily optimized, saving engineers' time in the organization and reducing system latency and the utilization of resources such as GPU and CPU cores, making these workflows accessible to a large audience.

Aryan Garg is a B.Tech. Electrical Engineering student, currently in the final year of his undergrad. He is interested in web development and machine learning, has pursued this interest, and is eager to work further in these directions.

Link:
Automated Machine Learning with Python: A Case Study - KDnuggets

Exploring movement optimization for a cyborg cockroach with machine learning – Tech Xplore


by Beijing Institute of Technology Press Co., Ltd

Scientists from Osaka University designed a cyborg cockroach and optimized its movement by utilizing machine learning-based automatic stimulation. Credit: Cyborg and Bionic Systems

Have you ever wondered why some insects, such as cockroaches, prefer to stay put or move less in darkness? Some may tell you it's photophobia, a habit deeply coded in their genes. A further question is whether we can correct this habit, that is, get cockroaches to move in darkness just as they move against bright backgrounds.

Scientists from Osaka University may have answered this question by converting a cockroach into a cyborg. They published their research in the journal Cyborg and Bionic Systems.

With millions of years of evolution, natural animals are endowed with outstanding capabilities to survive and thrive in hostile environments. In recent years, these animals have inspired roboticists to develop automatic machines that recapitulate part of these capabilities: biologically inspired biomimetic robots.

An alternative to this path is to build controllable machines directly on natural animals by implanting stimulation electrodes into their brains or peripheral nervous systems to control their movement, and even see what they see: so-called cyborgs. Among these studies, cyborg insects are attracting ever-increasing attention for their availability, simpler neuromuscular pathways, and the relative ease of invasively stimulating their peripheral nervous systems or muscles.

Cockroaches have marvelous locomotion ability, which significantly outperforms any biomimetic robots of similar size. Therefore, cyborg cockroaches equipped with such agile locomotion are suitable for search and rescue missions in unknown and unstructured environments that traditional robots can hardly access.

"Cockroaches prefer to stay in the darkened, narrow areas over the bright, spacious areas. Moreover, they tend to be active in the hotter environment," explained study author Keisuke Morishima, a roboticist from Department of Mechanical Engineering, Osaka University, "These natural behaviors will hinder the cockroaches to be utilized in unknown and under-rubble environments for search and rescue applications. It will be difficult to apply a mini live stream camera attached to them in a dark or without light areas for real-time monitoring purposes."

"This study aims to optimize cyborg cockroach movement performance," said Morishima. To this end, they proposed a machine learning-based approach that automatically detects the motion state of this cyborg cockroach via IMU measurements. If the cockroach stops or freezes in darkness or cooler environment, electrical stimulation would be applied to their brain to make it move.

"With this online detector, the stimulation is minimized to prevent the cockroaches from fatigue due to too many stimulations," said Mochammad Ariyanto, Morishima's colleague from Department of Mechanical Engineering, Osaka University.

This idea of restricting electrical stimulation to necessary circumstances, as determined by AI algorithms from onboard measurements, is intuitively promising. "We don't have to control the cyborg like controlling a robot. They can have some extent of autonomy, which is the basis of their agile locomotion. For example, in a rescue scenario, we only need to stimulate the cockroach to turn its direction when it's walking the wrong way, or to move when it stops unexpectedly," said Morishima.
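The article does not give the implementation, but the control logic it describes, classify motion from IMU data and stimulate only when the insect has stopped and a cooldown has elapsed, can be sketched as below. Every function name here (read_imu_window, classify_motion, stimulate) is hypothetical.

```python
import time

STIM_COOLDOWN_S = 5.0  # minimum gap between stimulations, to limit fatigue

def run_online_detector(read_imu_window, classify_motion, stimulate):
    """Stimulate only when the classifier decides the insect has stopped.

    All three callbacks are hypothetical stand-ins: read_imu_window()
    returns a window of accelerometer/gyroscope samples, classify_motion()
    maps that window to "moving" or "stopped", and stimulate() fires a
    brief electrical pulse.
    """
    last_stim = float("-inf")
    while True:
        window = read_imu_window()
        if classify_motion(window) == "stopped":
            now = time.monotonic()
            if now - last_stim >= STIM_COOLDOWN_S:
                stimulate()
                last_stim = now
        time.sleep(0.1)  # re-check the motion state at roughly 10 Hz
```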

"Equipped with such a system, the cyborg successfully increased its average search rate and traveled distance up to 68% and 70%, respectively, while the stop time was reduced by 78%," said the study authors. "We have proven that it's feasible to apply electrical stimulation on the cockroach's cerci; it can overcome its innate habit, for example, increase movement in dark and cold environments where it normally decreases its locomotion."

"In this study, cerci were stimulated to trigger the free-walking motion of the Madagascar hissing cockroach (MHC)."

More information: Mochammad Ariyanto et al, Movement Optimization for a Cyborg Cockroach in a Bounded Space Incorporating Machine Learning, Cyborg and Bionic Systems (2023). DOI: 10.34133/cbsystems.0012

Provided by Beijing Institute of Technology Press Co., Ltd

More:
Exploring movement optimization for a cyborg cockroach with machine learning - Tech Xplore