Archive for the ‘Machine Learning’ Category

My Robot Brings All the Boys to the Yard, Its AI is Better than Yours – insideBIGDATA

In this special guest feature, Aviran Yaacov, CEO and co-founder of EcoPlant, argues that AI and ML technologies are making impactful strides in manufacturing, and that there is no time like the present for manufacturers to get on board and explore ways to transform their processes to benefit across all fronts. Aviran has over ten years of experience and expertise in operations, finance, sales, and people management in the IT industry. Before his current role, Aviran was a senior sales executive for an SAP Business One integration firm. He has been part of EcoPlant's management team since the company was established in 2016. From the bootstrapping stage, he oversaw business development and generated partnerships for EcoPlant's solution with large corporations including Ecolab, Dannon, Nestle, Unilever, and Hill-Rom.

Robots and machines are already everywhere, especially in manufacturing. Yet many experts predicted they would advance faster than they have. The truth is that bringing automation and dynamic control into the physical world turned out to be far more challenging than previously assumed. But with the state-of-the-art AI and machine learning (ML) available today, the leaps are getting larger by the day. The technology may be new, but its implementation will affect manufacturing in several ways.

Better than ever before

Thanks to AI and ML technologies, machines can now learn to handle a wide range of objects and tasks on their own. These enhancements are a far cry from the robots of yesteryear, which simply performed monotonous tasks. Machines can now be endowed with greater intelligence, acquiring new skills autonomously and generalizing to unseen situations. It's a true game-changer for the manufacturing industry as a whole, in the following ways:

Newer machines can handle a much wider range of objects and tasks than ever before. For instance, AI takes 3D industrial cameras to new heights, helping machines judge depth and distance and perform general image recognition in ways formerly exclusive to the human eye.

Since ML closely resembles human learning, the need for human intervention (such as the creation of new programs or updates) is reduced, as machines can handle new parts on their own. Since information is generally stored in the cloud, robots can learn from each other through shared knowledge, and as more data is gathered through operation, accuracy improves. This translates to less surrounding equipment (such as shaker tables and feeders) being needed for each robot, which plays a major role in savings and scalability for manufacturers.

In addition to scalability, manufacturers can enjoy the benefits of energy efficiency with machines that are optimized accordingly. By using predictive AI algorithms to conduct ongoing energy surveys and dynamically control each air compressor, and the system as a whole, manufacturers can dramatically reduce the carbon footprint of their facilities.
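As a rough, purely illustrative sketch of how such predictive control might look (the demand series, model choice, and capacity figure below are all invented, not EcoPlant's actual system), one could forecast short-term compressed-air demand from recent sensor readings and size the running compressor fleet to match:

```python
# Hypothetical sketch: forecast short-term compressed-air demand, then
# decide how many compressors to run. All numbers are illustrative.
import numpy as np
from sklearn.linear_model import LinearRegression

def make_lagged(demand, n_lags=6):
    """Turn a 1-D demand series into (lag-window, next-value) training pairs."""
    X = np.array([demand[i:i + n_lags] for i in range(len(demand) - n_lags)])
    y = demand[n_lags:]
    return X, y

# Toy demand series (m^3/min), standing in for hourly flow-sensor readings.
rng = np.random.default_rng(0)
demand = 50 + 10 * np.sin(np.arange(500) / 24 * 2 * np.pi) + rng.normal(0, 2, 500)

X, y = make_lagged(demand)
model = LinearRegression().fit(X, y)

# Forecast the next interval from the latest window, then schedule capacity.
forecast = model.predict(demand[-6:].reshape(1, -1))[0]
COMPRESSOR_CAPACITY = 20.0  # m^3/min per unit -- assumed figure
units_needed = int(np.ceil(forecast / COMPRESSOR_CAPACITY))
print(f"forecast demand: {forecast:.1f} m^3/min -> run {units_needed} compressor(s)")
```

Running only as many compressors as the forecast demands, rather than keeping spares idling, is where the energy savings in such a scheme would come from.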

Humans and robots joining forces

Robots are now capable of doing far more than grasping and assembling objects. They can make their own decisions and solve problems within their skill sets, while human operators focus solely on high-level commands. While these developments, paired with sci-fi movies, may make it appear as though robots are going to take over the world and take jobs away from humans, that isn't necessarily the case.

They simply help humans do their jobs better.

The best results come from pairing human intelligence with machine intelligence. Humans bring creativity and ingenuity, while industrial robots bring speed, strength, and accuracy. As Patrick Sobalvarro summed it up on WeForum: "The idea of a fully automated lights-out factory with no production workers, one requiring only machine programming and maintenance, has proven to be a dead end. So much of what happens in a factory requires human ingenuity, learning, and adaptability. As products have become more varied and customized to local markets and customer needs, the economics of full automation make no sense." With the support of necessary regulatory oversight, machines with AI-based components can also enable sustainable development, helping manufacturers dramatically reduce the carbon footprint of their facilities.

The post-pandemic world sparked many changes in manufacturing, not only for the health and safety of workers, but also to ramp things up in supply chains in response to ever-changing market needs. In order to stay relevant and compete in the evolving global market, manufacturers need to transform the way they produce their products. The most complex challenges stem from demands for higher product variability, mass customization, quality expectations, and faster product cycles. This is all the more reason why manufacturing processes are faster, more efficient, and more cost-effective when humans and robots work together.

While the advantages of humans working together with robots were known well before the pandemic, the crisis made the pairing crucial for improved productivity, quality of output, and working conditions as manufacturers began to reopen their facilities.

Both AI and ML technologies are making impactful strides in manufacturing, and there is no time like the present for manufacturers to get on board and explore ways to transform their processes to benefit across all fronts.

Sign up for the free insideBIGDATA newsletter.

Join us on Twitter: @InsideBigData1 (https://twitter.com/InsideBigData1)

Visit link:
My Robot Brings All the Boys to the Yard, Its AI is Better than Yours - insideBIGDATA

Android 12 improves gesture navigation using machine learning – Dividend Wealth

Since Android 9.0 Pie, Google has taken important steps in how users navigate the Android OS. Starting with Android 12, Google applies a limited amount of machine learning to adapt gesture navigation to the way someone uses their phone.

Since gesture navigation was introduced in Android to replace the on-screen buttons of older versions, complaints have persisted about how the swipe-based method works. In many applications, common in-app navigation actions are interpreted as system navigation actions, so you are suddenly kicked out of the application or unintentionally returned to the previous page. Android 10 introduced a solution: developers can manually set exclusion areas.

Within these exclusion areas, system gesture navigation is disabled. Android has also been equipped with sensitivity settings that determine how quickly the system responds to a navigation gesture. For Android 12, Google is working on solutions that tailor gesture navigation to the user's behavior, as discovered by developer Quinny899 on XDA Developers. The developer found two of his apps in a list of 43,000 apps whose navigation actions are monitored in the new OS.

Current customization options for Android 12 navigation. Photo: Android Police.

Google uses a TensorFlow Lite model for this purpose, which allows the machine learning to run on the phone itself. According to Quinny899, EdgeBackGestureHandler, the component that handles gesture navigation in Android 12, contains a specific reference to a file in which back-gesture data is saved. With a machine learning model, specific behavior can be recognized and gesture navigation adjusted based on the model's results.
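The real implementation lives inside the Android system code, but as a rough illustration of what invoking such a TensorFlow Lite classifier looks like, here is a Python sketch; the model file name and the feature layout for a swipe are assumptions, not the actual Android internals:

```python
# Illustrative sketch of running a TensorFlow Lite classifier on gesture data.
# The model path and input features are assumed for illustration only.
import numpy as np
import tensorflow as tf

interpreter = tf.lite.Interpreter(model_path="backgesture.tflite")  # assumed file
interpreter.allocate_tensors()
inp = interpreter.get_input_details()[0]
out = interpreter.get_output_details()[0]

# Hypothetical features for one swipe: [start_x, start_y, end_x, end_y, duration_ms]
gesture = np.array([[4.0, 820.0, 210.0, 815.0, 120.0]], dtype=np.float32)

interpreter.set_tensor(inp["index"], gesture)
interpreter.invoke()
p_back = interpreter.get_tensor(out["index"])[0][0]
print("treat as back gesture" if p_back > 0.5 else "pass through to the app")
```

The appeal of TensorFlow Lite here is that inference runs entirely on the device, so gesture data never needs to leave the phone.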

Google also made another change to gesture navigation in Android 12, as Android Police described. In Android 12, the gestures to return to the previous screen or the home screen work from full screen in one go; in Android 11, you first had to tap the screen once and then perform the gesture. The change appears to require a per-app tweak: it does not yet work in the Twitter app.

The latter gesture change suggests Google intends it for the stable version of Android 12. Whether the machine learning model will also make the stable release, expected toward the end of the third quarter, is another matter. Currently, a flag has to be changed in Android 12 to enable the machine learning model, so the change is not active by default.

Are you hoping that Google continues the machine learning features of Android 12, or are you satisfied with the way gesture navigation works on your phone? Let us know in the comments, and don't forget to mention which Android version you are using.

Go here to see the original:
Android 12 improves gesture navigation using machine learning - Dividend Wealth

Identifying COVID-19 Therapy Candidates With Machine Learning – Contagionlive.com

Study pinpoints the protein RIPK1 as a promising target for SARS-CoV-2 treatment.

Investigators from the Massachusetts Institute of Technology, in collaboration with Harvard University and ETH Zurich, have developed a machine learning-based approach that can identify therapies already on the market with the potential to be repurposed to help fight coronavirus disease 2019 (COVID-19). Results from the study were published in the journal Nature Communications.

As the COVID-19 pandemic continues to surge across the globe and investigators rush to find treatments, the information provided by the approach may have a significant impact.

The target population for the study is the elderly, as the virus impacts them more severely than younger populations. The approach accounts for gene expression changes in lung cells caused by COVID-19 as well as aging. The hope is that this would allow medical experts to find therapies for clinical testing faster.

"Earlier work by the Shivashankar lab showed that if you stimulate cells on a stiffer substrate with a cytokine, similar to what the virus does, they actually turn on different genes," Caroline Uhler, a computational biologist in MIT's Department of Electrical Engineering and Computer Science and the Institute for Data, Systems and Society, and an associate member of the Broad Institute of MIT and Harvard said. "So, that motivated this hypothesis. We need to look at aging together with SARS-CoV-2 -- what are the genes at the intersection of these two pathways?"

The investigators took three steps to identify the most promising candidates for repurposing. First, they generated a large list of possible candidates using machine learning. They then mapped the genes and proteins involved in the aging process and in SARS-CoV-2 infection, and employed algorithms to pinpoint genes that cause cascading effects throughout the mapped network, narrowing the list of therapies. The overlap between the two maps is where the team found the precise gene expression network that therapies would need to target against COVID-19.
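As a toy illustration of that overlap-and-rank step, the sketch below intersects two invented gene maps and ranks the shared genes by how much of the network sits downstream of them; the genes, edges, and scoring are stand-ins for the study's far more involved networks and causal algorithms:

```python
# Hedged sketch of the overlap-and-rank idea on toy data. Gene names and
# edges are invented for illustration.
import networkx as nx

# Toy directed interaction network (edge = regulatory influence).
G = nx.DiGraph([
    ("RIPK1", "NFKB1"), ("RIPK1", "CASP8"), ("NFKB1", "IL6"),
    ("CASP8", "CASP3"), ("TP53", "CDKN1A"), ("IL6", "STAT3"),
])

aging_genes = {"RIPK1", "TP53", "IL6", "STAT3"}        # assumed aging map
infection_genes = {"RIPK1", "NFKB1", "IL6", "CASP8"}   # assumed SARS-CoV-2 map

overlap = aging_genes & infection_genes                # genes on both maps

# Rank overlap genes by how much of the network lies downstream of them:
# more descendants = larger cascading effect if a drug targets that gene.
ranking = sorted(overlap, key=lambda g: len(nx.descendants(G, g)), reverse=True)
for gene in ranking:
    print(gene, "->", sorted(nx.descendants(G, gene)))
```

On this toy network, RIPK1 ranks first because the most nodes sit downstream of it, which mirrors why genes with many downstream effects make attractive drug targets.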

The team plans to share the findings with pharmaceutical companies to aid in finding more therapies that can be repurposed for COVID-19. However, they emphasize that any of the therapies identified must undergo clinical testing before they can be approved for use in elderly populations.

"Making new drugs takes forever," Uhler said. "Really, the only expedient option is to repurpose existing drugs."

Link:
Identifying COVID-19 Therapy Candidates With Machine Learning - Contagionlive.com

New Machine Learning Theory Raises Questions About the Very Nature of Science – SciTechDaily

A novel computer algorithm, or set of rules, that accurately predicts the orbits of planets in the solar system could be adapted to better predict and control the behavior of the plasma that fuels fusion facilities designed to harvest on Earth the fusion energy that powers the sun and stars.

The algorithm, devised by a scientist at the U.S. Department of Energy's (DOE) Princeton Plasma Physics Laboratory (PPPL), applies machine learning, the form of artificial intelligence (AI) that learns from experience, to develop the predictions. "Usually in physics, you make observations, create a theory based on those observations, and then use that theory to predict new observations," said PPPL physicist Hong Qin, author of a paper detailing the concept in Scientific Reports. "What I'm doing is replacing this process with a type of black box that can produce accurate predictions without using a traditional theory or law."

Qin (pronounced Chin) created a computer program into which he fed data from past observations of the orbits of Mercury, Venus, Earth, Mars, Jupiter, and the dwarf planet Ceres. This program, along with an additional program known as a serving algorithm, then made accurate predictions of the orbits of other planets in the solar system without using Newton's laws of motion and gravitation. "Essentially, I bypassed all the fundamental ingredients of physics. I go directly from data to data," Qin said. "There is no law of physics in the middle."
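To make the data-to-data idea concrete, here is a toy Python sketch: it learns the one-step map of a simulated orbit from examples and then forecasts by iterating that learned map, with no equations of motion in the prediction loop. This illustrates only the general idea, not Qin's structure-preserving algorithm:

```python
# Toy "data to data" prediction: learn the one-step orbit map from samples,
# then forecast without using Newton's laws at prediction time.
import numpy as np
from sklearn.neural_network import MLPRegressor

def simulate_orbit(steps, dt=0.02):
    """Generate training data with a leapfrog integrator (physics used only here)."""
    q, v = np.array([1.0, 0.0]), np.array([0.0, 1.1])
    states = []
    for _ in range(steps):
        states.append(np.concatenate([q, v]))
        a = -q / np.linalg.norm(q) ** 3          # inverse-square gravity
        v_half = v + 0.5 * dt * a
        q = q + dt * v_half
        v = v_half + 0.5 * dt * (-q / np.linalg.norm(q) ** 3)
    return np.array(states)

data = simulate_orbit(4000)
X, y = data[:-1], data[1:]                        # state_t -> state_{t+1}

model = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=3000,
                     random_state=0).fit(X, y)

# Roll the learned map forward: pure data-to-data, no physics in the loop.
state = data[-1]
for _ in range(100):
    state = model.predict(state.reshape(1, -1))[0]
print("predicted position after 100 steps:", state[:2])
```

A generic regressor like this drifts over long horizons; part of what distinguishes Qin's approach is that the learned discrete field theory preserves the underlying structure of the dynamics.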

PPPL physicist Hong Qin in front of images of planetary orbits and computer code. Credit: Elle Starkman / PPPL Office of Communications

The program does not happen upon accurate predictions by accident. "Hong taught the program the underlying principle used by nature to determine the dynamics of any physical system," said Joshua Burby, a physicist at the DOE's Los Alamos National Laboratory who earned his Ph.D. at Princeton under Qin's mentorship. "The payoff is that the network learns the laws of planetary motion after witnessing very few training examples. In other words, his code really learns the laws of physics."

Machine learning is what makes computer programs like Google Translate possible. Google Translate sifts through a vast amount of information to determine how frequently one word in one language has been translated into a word in the other language. In this way, the program can make an accurate translation without actually learning either language.
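As a toy sketch of that frequency idea (real statistical translation, and the neural systems Google Translate uses today, are vastly more sophisticated), the snippet below tallies word co-occurrences in a tiny invented parallel corpus and translates each word by its best-scoring counterpart:

```python
# Toy frequency-based word translation over an invented parallel corpus.
from collections import Counter, defaultdict

parallel = [
    ("the cat sleeps", "le chat dort"),
    ("the dog sleeps", "le chien dort"),
    ("the cat eats",   "le chat mange"),
]

co = defaultdict(Counter)            # co[e][f] = sentence-pair co-occurrences
e_count, f_count = Counter(), Counter()
for en, fr in parallel:
    e_words, f_words = set(en.split()), set(fr.split())
    e_count.update(e_words)
    f_count.update(f_words)
    for e in e_words:
        for f in f_words:
            co[e][f] += 1

def translate_word(e):
    if e not in co:
        return e
    # Dice score downweights words like "le" that occur in every sentence.
    return max(co[e], key=lambda f: 2 * co[e][f] / (e_count[e] + f_count[f]))

print(" ".join(translate_word(w) for w in "the cat sleeps".split()))
# -> "le chat dort"
```

As in the article's description, nothing here "understands" either language; the program only exploits how frequently words pair up across the corpus.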

The process also appears in philosophical thought experiments like John Searle's Chinese Room. In that scenario, a person who did not know Chinese could nevertheless translate a Chinese sentence into English or any other language by using a set of instructions, or rules, that would substitute for understanding. The thought experiment raises questions about what, at root, it means to understand anything at all, and whether understanding implies that something else is happening in the mind besides following rules.

Qin was inspired in part by Oxford philosopher Nick Bostrom's thought experiment that the universe is a computer simulation. If that were true, then fundamental physical laws should reveal that the universe consists of individual chunks of space-time, like pixels in a video game. "If we live in a simulation, our world has to be discrete," Qin said. The black box technique Qin devised does not require that physicists believe the simulation conjecture literally, though it builds on this idea to create a program that makes accurate physical predictions.

The resulting pixelated view of the world, akin to what is portrayed in the movie The Matrix, is known as a discrete field theory, which views the universe as composed of individual bits and differs from the theories that people normally create. While scientists typically devise overarching concepts of how the physical world behaves, computers just assemble a collection of data points.

Qin and Eric Palmerduca, a graduate student in the Princeton University Program in Plasma Physics, are now developing ways to use discrete field theories to predict the behavior of particles of plasma in fusion experiments conducted by scientists around the world. The most widely used fusion facilities are doughnut-shaped tokamaks that confine the plasma in powerful magnetic fields.

Fusion, the power that drives the sun and stars, combines light elements in the form of plasma, the hot, charged state of matter composed of free electrons and atomic nuclei that represents 99% of the visible universe, to generate massive amounts of energy. Scientists are seeking to replicate fusion on Earth for a virtually inexhaustible supply of power to generate electricity.

"In a magnetic fusion device, the dynamics of plasmas are complex and multi-scale, and the effective governing laws or computational models for a particular physical process that we are interested in are not always clear," Qin said. "In these scenarios, we can apply the machine learning technique that I developed to create a discrete field theory and then apply this discrete field theory to understand and predict new experimental observations."

This process opens up questions about the nature of science itself. Don't scientists want to develop physics theories that explain the world, instead of simply amassing data? Aren't theories fundamental to physics and necessary to explain and understand phenomena?

"I would argue that the ultimate goal of any scientist is prediction," Qin said. "You might not necessarily need a law. For example, if I can perfectly predict a planetary orbit, I don't need to know Newton's laws of gravitation and motion. You could argue that by doing so you would understand less than if you knew Newton's laws. In a sense, that is correct. But from a practical point of view, making accurate predictions is not doing anything less."

Machine learning could also open up possibilities for more research. "It significantly broadens the scope of problems that you can tackle because all you need to get going is data," Palmerduca said.

The technique could also lead to the development of a traditional physical theory. "While in some sense this method precludes the need of such a theory, it can also be viewed as a path toward one," Palmerduca said. "When you're trying to deduce a theory, you'd like to have as much data at your disposal as possible. If you're given some data, you can use machine learning to fill in gaps in that data or otherwise expand the data set."
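As a minimal sketch of that gap-filling idea (using scikit-learn's IterativeImputer on invented data, not anything from the paper), the snippet below models each incomplete column of a toy data set as a function of the others and fills in the missing entries:

```python
# Minimal ML gap-filling sketch: IterativeImputer regresses each incomplete
# column on the other columns to estimate the missing values.
import numpy as np
from sklearn.experimental import enable_iterative_imputer  # noqa: F401
from sklearn.impute import IterativeImputer

# Toy measurements with gaps (np.nan marks the missing readings).
data = np.array([
    [1.0, 2.0,    3.1],
    [2.0, 4.1,    np.nan],
    [3.0, np.nan, 9.2],
    [4.0, 8.0,    12.1],
])

filled = IterativeImputer(random_state=0).fit_transform(data)
print(np.round(filled, 2))
```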

Reference: "Machine learning and serving of discrete field theories" by Hong Qin, 9 November 2020, Scientific Reports. DOI: 10.1038/s41598-020-76301-0

Here is the original post:
New Machine Learning Theory Raises Questions About the Very Nature of Science - SciTechDaily

Using machine learning to find COVID-19 treatment options – Health Europa

The team have developed a machine learning-based approach to identify drugs already on the market that could potentially be repurposed to fight the virus. The system accounts for changes in gene expression in lung cells caused by both the disease and ageing.

The researchers have pinpointed the protein RIPK1 as a promising target for COVID-19 drugs and have identified three approved drugs that act on the expression of RIPK1.

The research has been published in the journal Nature Communications and the co-authors include MIT PhD students Anastasiya Belyaeva, Adityanarayanan Radhakrishnan, Chandler Squires, and Karren Dai Yang, as well as PhD student Louis Cammarata of Harvard University and long-term collaborator G.V. Shivashankar of ETH Zurich in Switzerland.

The researchers focused in on the most promising drug repurposing candidates by generating a list of possible drugs using a machine learning technique called an autoencoder, then mapping the network of genes and proteins involved in both ageing and SARS-CoV-2 infection. They then used statistical algorithms to understand causality in that network, allowing them to pinpoint upstream genes that cause cascading effects throughout it. Drugs targeting those upstream genes and proteins should be promising candidates for clinical trials.
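As a hedged sketch of the autoencoder step (the dimensions, training data, and candidate scoring below are invented stand-ins, not the study's actual pipeline), the snippet compresses gene-expression profiles into a latent space and treats profiles that land near a disease signature as repurposing candidates:

```python
# Hedged autoencoder sketch: embed gene-expression signatures, then rank
# drug profiles by distance to a disease signature in the learned space.
# All dimensions and data here are invented.
import numpy as np
import tensorflow as tf

n_genes, latent = 978, 32                     # assumed gene-panel size
inputs = tf.keras.Input(shape=(n_genes,))
z = tf.keras.layers.Dense(latent, activation="relu")(inputs)   # encoder
outputs = tf.keras.layers.Dense(n_genes)(z)                    # decoder
autoencoder = tf.keras.Model(inputs, outputs)
encoder = tf.keras.Model(inputs, z)
autoencoder.compile(optimizer="adam", loss="mse")

# Train on toy drug-response expression profiles.
rng = np.random.default_rng(0)
profiles = rng.normal(size=(1000, n_genes)).astype("float32")
autoencoder.fit(profiles, profiles, epochs=5, batch_size=64, verbose=0)

# Rank drug profiles by distance to the disease signature in latent space.
disease_z = encoder.predict(rng.normal(size=(1, n_genes)).astype("float32"))
drug_z = encoder.predict(profiles)
candidates = np.argsort(np.linalg.norm(drug_z - disease_z, axis=1))[:10]
print("top candidate profile indices:", candidates)
```

The autoencoder only proposes candidates; as the article describes, the causal-network analysis is what then narrows the list to upstream targets like RIPK1.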

"Making new drugs takes forever," says Caroline Uhler, a computational biologist in MIT's Department of Electrical Engineering and Computer Science and the Institute for Data, Systems and Society, and an associate member of the Broad Institute of MIT and Harvard. "Really, the only expedient option is to repurpose existing drugs."

Uhler and Shivashankar suggest that one of the main changes in the lung that happens through ageing is that it becomes stiffer. The stiffening lung tissue shows different patterns of gene expression than in younger people, even in response to the same signal.

Uhler said: "Earlier work by the Shivashankar lab showed that if you stimulate cells on a stiffer substrate with a cytokine, similar to what the virus does, they actually turn on different genes. So, that motivated this hypothesis. We need to look at ageing together with SARS-CoV-2; what are the genes at the intersection of these two pathways?"

To select approved drugs that might act on these pathways, the team turned to big data and Artificial Intelligence (AI). The researchers narrowed the list of potential drugs by homing in on key genetic pathways, mapping the interactions of proteins involved in the ageing and SARS-CoV-2 infection pathways.

The team then identified areas of overlap among the two maps. That effort pinpointed the precise gene expression network that a drug would need to target to combat COVID-19 in elderly patients.

"We want to identify a drug that has an effect on all of these differentially expressed genes downstream," says Belyaeva.

The team used algorithms that infer causality in interacting systems to turn their undirected network into a causal network. The final causal network identified RIPK1 as a target gene/protein for potential COVID-19 drugs since it has numerous downstream effects. The researchers identified a list of the approved drugs that act on RIPK1 and may have potential to treat the virus, including ribavirin and quinapril, which are already in clinical trials for COVID-19.

"I'm really excited that this platform can be more generally applied to other infections or diseases," says Belyaeva.

The team plans to share its findings with pharmaceutical companies.

Read the rest here:
Using machine learning to find COVID-19 treatment options - Health Europa