Archive for the ‘Machine Learning’ Category

Machine Learning Gives Cats One More Way To Control Their Humans – Hackaday

For those who choose to let their cats live a more or less free-range life, there are usually two choices. One, you can adopt the role of servant and run for the door whenever the cat wants to get back inside from their latest bird-murdering jaunt. Or two, install a cat door and let them come and go as they please, sometimes with a present for you in their mouth. Heads you win, tails you lose.

There's another way, though: just let the cat ask to be let back in. That's the approach that [Tennis Smith] took with this machine-learning kitty doorbell. It's based on a Raspberry Pi 4, which lives inside the house, and a USB microphone that's outside the front door. The Pi uses TensorFlow Lite to classify the sounds it picks up outside, and when one of those sounds fits the model of a cat's meow, a message is dispatched to AWS Lambda. From there a text message is sent to alert [Tennis] that the cat is ready to come back in.
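
The repo has the step-by-step details; as a rough, hypothetical sketch of how such a loop could look in Python (the model file, label list, and Lambda function name below are illustrative placeholders, not taken from the project):

```python
import json
import numpy as np
import sounddevice as sd
import boto3
from tflite_runtime.interpreter import Interpreter

SAMPLE_RATE = 16000                # must match what the model expects
CLIP_SECONDS = 1
TARGET_LABEL = "Cat"               # the "single line" to tweak: "Dog" for barks
LABELS = [line.strip() for line in open("labels.txt")]

interpreter = Interpreter(model_path="sound_classifier.tflite")
interpreter.allocate_tensors()
inp = interpreter.get_input_details()[0]
out = interpreter.get_output_details()[0]
lam = boto3.client("lambda")

while True:
    # Grab a short clip from the USB mic outside the door
    clip = sd.rec(SAMPLE_RATE * CLIP_SECONDS, samplerate=SAMPLE_RATE,
                  channels=1, dtype="float32")
    sd.wait()
    # Shape the audio to whatever the model's input tensor expects
    interpreter.set_tensor(inp["index"], clip.reshape(inp["shape"]))
    interpreter.invoke()
    scores = interpreter.get_tensor(out["index"])[0]
    if LABELS[int(np.argmax(scores))] == TARGET_LABEL:
        # Hand off to AWS Lambda, which sends the text message
        lam.invoke(FunctionName="notify-cat-owner",
                   Payload=json.dumps({"event": "meow detected"}))
```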

There's a ton of useful information included in the repo for this project, including step-by-step instructions for getting Amazon Web Services working on the Pi. If you're a dog person, fear not: changing from meows to barks is as simple as tweaking a single line of code (in the sketch above, that would be the TARGET_LABEL constant). And if you'd rather not be at the beck and call of a cat but still want to avoid the evidence of a prey event on your carpet, machine learning can help with that too.

[via Tom's Hardware]

Read more:
Machine Learning Gives Cats One More Way To Control Their Humans - Hackaday

Machine and deep learning are a MUST at the North-West… – Daily Maverick

The last century alone has seen a meteoric increase in the accumulation of data and we are able to store unfathomable quantities of information to help us solve problems known and unknown. At some point the ability to optimally utilise these vast amounts of data will be beyond our reach, but not beyond that of the tools we have made. At the North-West University (NWU), Professor Marelie Davel, director of the research group MUST Deep Learning, and her team are ensuring that our ever-growing data repositories will continue to benefit society.

The team's focus on machine learning, and specifically deep learning, creates what looks like magic to the untrained eye. Here is why.

"Machine learning is a catch-all term for systems that learn in an automated way from their environment. These systems are not programmed with the steps to solve a specific task, but they are programmed to know how to learn from data. In the process, the system uncovers the underlying patterns in the data and comes up with its own steps to solve the specific task," explains Professor Davel.
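
To make that distinction concrete, here is a toy sketch (illustrative data; scikit-learn assumed): the classifier is never told the rule, only shown labelled examples, and it derives its own decision steps from them.

```python
# The model below is never programmed with the rule "lots of rain means
# umbrella"; it induces that step itself from the labelled examples.
from sklearn.tree import DecisionTreeClassifier

# Feature: millimetres of rain that day; label: what people carried
X = [[0], [1], [2], [15], [20], [30]]
y = ["no umbrella"] * 3 + ["umbrella"] * 3

model = DecisionTreeClassifier().fit(X, y)
print(model.predict([[4], [25]]))  # the learned rule generalises to new inputs
```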

According to her, machine learning is becoming increasingly important as more and more practical tasks are being solved by machine learning systems: "From weather prediction to drug discovery to self-driving cars. Behind the scenes we see that many of the institutions we interact with, like banks, supermarket chains and hospitals, all nowadays incorporate machine learning in aspects of their business. Machine learning makes everyday tools, from internet searches to every smartphone photo we take, work better."

The NWU and MUST go a step beyond this by doing research on deep learning. This is a field of machine learning that was originally inspired by the idea of artificial neural networks, which were simple models of how neurons were thought to interact in the human brain. This was conceived in the early forties! Modern networks have come a long way since then, with increasingly complex architectures creating large, layered models that are particularly effective at solving human-like tasks, such as processing speech and language, or identifying what is happening in images.

She explains that, although these models are widely used, there are still surprisingly many open questions about how they work and when they fail.

"We work on some of these open questions, specifically on how the networks perform when they are presented with novel situations that did not form part of their training environment. We are also studying the reasons behind the decisions the networks make. This is important in order to determine whether the steps these models use to solve tasks are indeed fair and unbiased, and sometimes it can help to uncover new knowledge about the world around us. An example is identifying new ways to diagnose and understand a disease."

The uses of this technology are nearly boundless and will continue to grow, and that is why Professor Davel encourages up-and-coming researchers to consider focusing their expertise in this field.

"By looking inside these tools, we aim to be better users of the tools as well. We typically apply the tools with industry partners, rather than on our own. Speech processing for call centres, traffic prediction, art authentication, space weather prediction, even airfoil design. We have worked in quite diverse fields, but all applications build on the availability of large, complex data sets that we then carefully model. This is a very fast-moving field internationally. There really is a digital revolution that is sweeping across every industry one can think of, and machine learning is a critical part of it. The combination of practical importance and technical challenge makes this an extremely satisfying field to work in."

She confesses that, while some of the ideas of MUST's collaborators may sound far-fetched at first, the team has repeatedly found that if the data is there, it is possible to build a tool to use it.

One can envision a future where human tasks such as speech recognition and interaction have been so well mimicked by these machines that they are indistinguishable from their human counterparts. The famed science fiction writer Arthur C. Clarke once remarked that any sufficiently advanced technology is indistinguishable from magic. At the NWU, MUST is doing its part in bringing this magic to life. DM

Author: Bertie Jacobs

Read more:
Machine and deep learning are a MUST at the North-West... - Daily Maverick

AI that can learn the patterns of human language – MIT News

Human languages are notoriously complex, and linguists have long thought it would be impossible to teach a machine how to analyze speech sounds and word structures in the way human investigators do.

But researchers at MIT, Cornell University, and McGill University have taken a step in this direction. They have demonstrated an artificial intelligence system that can learn the rules and patterns of human languages on its own.

When given words and examples of how those words change to express different grammatical functions (like tense, case, or gender) in one language, this machine-learning model comes up with rules that explain why the forms of those words change. For instance, it might learn that the letter "a" must be added to the end of a word to make the masculine form feminine in Serbo-Croatian.
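
As a toy illustration of that kind of rule induction (this is not the paper's model, just a sketch of the Serbo-Croatian example with a handful of word pairs):

```python
# Toy sketch: infer "feminine = masculine + suffix" from example pairs.
# The real system induces far richer phonological grammars than this.
def infer_suffix_rule(pairs):
    """Find a single suffix s such that feminine == masculine + s for every pair."""
    suffixes = {fem[len(masc):] for masc, fem in pairs if fem.startswith(masc)}
    return suffixes.pop() if len(suffixes) == 1 else None

# Serbo-Croatian-style (masculine, feminine) adjective pairs
pairs = [("mlad", "mlada"), ("star", "stara"), ("nov", "nova")]
print(infer_suffix_rule(pairs))  # -> 'a', i.e. feminine = masculine + "a"
```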

This model can also automatically learn higher-level language patterns that can apply to many languages, enabling it to achieve better results.

The researchers trained and tested the model using problems from linguistics textbooks that featured 58 different languages. Each problem had a set of words and corresponding word-form changes. The model was able to come up with a correct set of rules to describe those word-form changes for 60 percent of the problems.

This system could be used to study language hypotheses and investigate subtle similarities in the way diverse languages transform words. It is especially notable because the system discovers models that can be readily understood by humans, and it acquires these models from small amounts of data, such as a few dozen words. And instead of using one massive dataset for a single task, the system utilizes many small datasets, which is closer to how scientists propose hypotheses: they look at multiple related datasets and come up with models to explain phenomena across those datasets.

"One of the motivations of this work was our desire to study systems that learn models of datasets that are represented in a way that humans can understand. Instead of learning weights, can the model learn expressions or rules? And we wanted to see if we could build this system so it would learn on a whole battery of interrelated datasets, to make the system learn a little bit about how to better model each one," says Kevin Ellis '14, PhD '20, an assistant professor of computer science at Cornell University and lead author of the paper.

Joining Ellis on the paper are MIT faculty members Adam Albright, a professor of linguistics; Armando Solar-Lezama, a professor and associate director of the Computer Science and Artificial Intelligence Laboratory (CSAIL); and Joshua B. Tenenbaum, the Paul E. Newton Career Development Professor of Cognitive Science and Computation in the Department of Brain and Cognitive Sciences and a member of CSAIL; as well as senior author Timothy J. O'Donnell, assistant professor in the Department of Linguistics at McGill University and Canada CIFAR AI Chair at the Mila - Quebec Artificial Intelligence Institute.

The research is published today in Nature Communications.

Looking at language

In their quest to develop an AI system that could automatically learn a model from multiple related datasets, the researchers chose to explore the interaction of phonology (the study of sound patterns) and morphology (the study of word structure).

Data from linguistics textbooks offered an ideal testbed because many languages share core features, and textbook problems showcase specific linguistic phenomena. Textbook problems can also be solved by college students in a fairly straightforward way, but those students typically have prior knowledge about phonology from past lessons, which they use to reason about new problems.

Ellis, who earned his PhD at MIT and was jointly advised by Tenenbaum and Solar-Lezama, first learned about morphology and phonology in an MIT class co-taught by O'Donnell, who was a postdoc at the time, and Albright.

"Linguists have thought that in order to really understand the rules of a human language, to empathize with what it is that makes the system tick, you have to be human. We wanted to see if we can emulate the kinds of knowledge and reasoning that humans (linguists) bring to the task," says Albright.

To build a model that could learn a set of rules for assembling words, which is called a grammar, the researchers used a machine-learning technique known as Bayesian Program Learning. With this technique, the model solves a problem by writing a computer program.

In this case, the program is the grammar the model thinks is the most likely explanation of the words and meanings in a linguistics problem. They built the model using Sketch, a popular program synthesizer which was developed at MIT by Solar-Lezama.

But Sketch can take a lot of time to reason about the most likely program. To get around this, the researchers had the model work one piece at a time, writing a small program to explain some data, then writing a larger program that modifies that small program to cover more data, and so on.
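
Schematically, that incremental strategy might look like the sketch below, where `synthesize` is a stand-in for a call into the Sketch solver rather than its real API:

```python
# Schematic only: cover the data a piece at a time instead of asking the
# solver for one program that explains everything at once.
def incremental_synthesis(examples, synthesize):
    program, covered = None, []
    for example in examples:
        covered.append(example)
        # Re-synthesise, seeded with the previous program, so the solver
        # only has to modify a working solution rather than start from scratch
        program = synthesize(covered, seed=program)
    return program
```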

They also designed the model so it learns what good programs tend to look like. For instance, it might learn some general rules on simple Russian problems that it would apply to a more complex problem in Polish because the languages are similar. This makes it easier for the model to solve the Polish problem.

Tackling textbook problems

When they tested the model using 70 textbook problems, it was able to find a grammar that matched the entire set of words in the problem in 60 percent of cases, and correctly matched most of the word-form changes in 79 percent of problems.

The researchers also tried pre-programming the model with some knowledge it should have learned if it were taking a linguistics course, and showed that it could solve all problems better.

"One challenge of this work was figuring out whether what the model was doing was reasonable. This isn't a situation where there is one number that is the single right answer. There is a range of possible solutions which you might accept as right, close to right, etc.," Albright says.

The model often came up with unexpected solutions. In one instance, it discovered the expected answer to a Polish language problem, but also another correct answer that exploited a mistake in the textbook. "This shows that the model could debug linguistics analyses," Ellis says.

The researchers also conducted tests that showed the model was able to learn some general templates of phonological rules that could be applied across all problems.

"One of the things that was most surprising is that we could learn across languages, but it didn't seem to make a huge difference," says Ellis. "That suggests two things. Maybe we need better methods for learning across problems. And maybe, if we can't come up with those methods, this work can help us probe different ideas we have about what knowledge to share across problems."

In the future, the researchers want to use their model to find unexpected solutions to problems in other domains. They could also apply the technique to more situations where higher-level knowledge can be applied across interrelated datasets. For instance, perhaps they could develop a system to infer differential equations from datasets on the motion of different objects, says Ellis.

"This work shows that we have some methods which can, to some extent, learn inductive biases. But I don't think we've quite figured out, even for these textbook problems, the inductive bias that lets a linguist accept the plausible grammars and reject the ridiculous ones," he adds.

"This work opens up many exciting avenues for future research. I am particularly intrigued by the possibility that the approach explored by Ellis and colleagues (Bayesian Program Learning, BPL) might speak to how infants acquire language," says T. Florian Jaeger, a professor of brain and cognitive sciences and computer science at the University of Rochester, who was not an author of this paper. "Future work might ask, for example, under what additional induction biases (assumptions about universal grammar) the BPL approach can successfully achieve human-like learning behavior on the type of data infants observe during language acquisition. I think it would be fascinating to see whether inductive biases that are even more abstract than those considered by Ellis and his team, such as biases originating in the limits of human information processing (e.g., memory constraints on dependency length or capacity limits in the amount of information that can be processed per time), would be sufficient to induce some of the patterns observed in human languages."

This work was funded, in part, by the Air Force Office of Scientific Research, the Center for Brains, Minds, and Machines, the MIT-IBM Watson AI Lab, the Natural Sciences and Engineering Research Council of Canada, the Fonds de Recherche du Québec - Société et Culture, the Canada CIFAR AI Chairs Program, the National Science Foundation (NSF), and an NSF graduate fellowship.

Read more:
AI that can learn the patterns of human language - MIT News

17-Year-Old Invents Software That Detects Elephant Poaching – My Modern Met

Photo courtesy of Society for Science

Despite conservationists' efforts, animal poaching continues to devastate vulnerable species. So, when New Yorker Anika Puri came across ivory jewelry at a market in India four years ago, she felt inspired to do her part in stopping elephant hunting. The solution: she invented low-cost machine-learning software that can detect poachers in real time with 91% accuracy.

Discovering the numerous ivory objects in Mumbai was the catalyst for her project. "I was quite taken aback because I always thought, 'Well, poaching is illegal, how come it really is still such a big issue?'" she says about the incident. So, the 17-year-old delved into the poaching numbers and discovered that Africa's forest elephant population declined by about 61% between 2002 and 2011, with numbers that continue to drop.

Poachers are usually detected by drones; however, Puri noticed the success rate could be significantly higher, because humans and elephants move in distinctly different ways. "I realized that we could use this disparity between these two movement patterns in order to actually increase the detection accuracy of potential poachers," she explains. As a result, Puri spent two years developing her solution: a machine-learning software package named ElSa (an abbreviation for Elephant Savior). It analyzes the movement patterns of humans and elephants in thermal infrared videos and is four times more accurate than existing detection methods. Even better, the software can be used with low-cost cameras, eliminating the need for high-resolution thermal cameras.
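
ElSa's internals aren't spelled out here, but the core idea of separating the two movement patterns can be sketched with hypothetical features (the thresholds below are illustrative guesses, not Puri's actual model):

```python
# Hedged sketch: classify a tracked object in thermal video as human or
# elephant from its movement pattern. Feature choices and cutoffs are
# illustrative assumptions only.
import numpy as np

def movement_features(track):
    """track: np.ndarray of (x, y) positions, one row per video frame."""
    steps = np.diff(track, axis=0)
    speeds = np.linalg.norm(steps, axis=1)
    net_displacement = np.linalg.norm(track[-1] - track[0])
    straightness = net_displacement / (speeds.sum() + 1e-9)  # 1.0 = straight line
    return speeds.mean(), speeds.std(), straightness

def looks_human(track):
    mean_speed, speed_var, straightness = movement_features(track)
    # Assumption: people tend to move faster and in straighter lines than
    # grazing elephants; a real system would learn this boundary from data.
    return straightness > 0.8 and mean_speed > 2.0
```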

Puri presented her project at the Regeneron International Science and Engineering Fair, winning the $10,000 Peggy Scripps Award for Science Communication and first place in the earth and environmental sciences category. "It's quite remarkable that a high school student has been able to do something like this," comments Jasper Eikelboom, an ecologist at Wageningen University in the Netherlands. "Not only the research and the analysis, but also being able to implement it in the prototypes." Puri will be attending MIT in fall 2022 with hopes of expanding her project to protect other endangered animal species.

h/t: [Smithsonian]


See the original post here:
17-Year-Old Invents Software That Detects Elephant Poaching - My Modern Met

Why Machine Learning is a central part of business operations – Intelligent CIO

To make decisions more quickly and accurately, enterprises are increasingly turning to Machine Learning, arguably today's most practical application of Artificial Intelligence (AI). Machine Learning is a type of AI that allows software applications to become more accurate at predicting outcomes without being explicitly programmed to do so. Machine Learning algorithms use historical data as input to predict new output values. Industry pundits share insights into why Machine Learning has become a central part of business operations.

As organisations emerge from the lockdown restrictions that were imposed on businesses because of the COVID-19 pandemic, Machine Learning has taken centre stage because it gives enterprises a view of trends in customer behaviour and business operational patterns, as well as supports the development of new products. Many of today's leading multinational companies, such as Facebook, Google and Uber, have made Machine Learning a central part of their operations. Machine Learning has become a significant competitive differentiator for many companies across the Middle East and Africa (MEA).

According to research firm Gartner, the adoption of Machine Learning in the enterprise is being catalysed by Digital Transformation, the need for democratisation and the urgency of industrialisation. The firm says 48% of respondents to the 2022 Gartner CIO and Technology Executive Survey have already deployed or plan to deploy AI/Machine Learning in the next 12 months. And Gartner said that the ongoing Digital Transformation requires better and faster, but also ethical, decision-making, enabled by advances in decision intelligence and AI governance.

Gartner said one of the most prominent reasons the IT industry is seeing increasing enterprise adoption of Machine Learning is the desire to bring its power to a widening audience: the democratisation of data science and Machine Learning (DSML), which lowers the barrier to entry and is enabled by technical advances in automation and augmentation.

Farhan Choudhary, Principal Analyst at Gartner, said that to assess where Machine Learning can be applied in the enterprise, the CIO and IT team first need to determine the "what" of the problem statement (for example, which business KPIs the organisation wants Machine Learning to impact), and second, the "how" of the problem statement, i.e., how the organisation will accomplish this task.

Choudhary said Machine Learning can be applied across many parts of the business; some applications or opportunities could be low-hanging fruit, some could be money pits, and some cutting edge. He said a thorough and systematic assessment of opportunities should be conducted before determining where Machine Learning can be applied by enterprise IT, and where a democratised approach can be followed.

"This should be a top-down approach. Let's assume we're in the retail business and we want to leverage Machine Learning, working in collaboration with enterprise IT, to generate tangible business value. The first order of business is to conduct an assessment of the business value we expect the project to generate, or the KPIs it would impact, and of the feasibility of using Machine Learning in the enterprise. Say our priorities are revenue growth and we want to use Machine Learning to impact the volume of sales; this could be done through the use of Machine Learning in products and services, in sales and marketing, or in customer service (these are our separate lines of business that can leverage Machine Learning)," he said.

Choudhary pointed out that there are opportunities in sales and marketing, R&D, corporate legal, human capital management, customer service, IT operations, software development and testing, and many other areas where Machine Learning can be applied.

Mike Brooks, Global Director, Asset Performance Management, Aspen, said: "Machine Learning algorithms are basically free from many open sources. It seems everybody is using it, but Machine Learning itself is hardly the secret sauce; what matters is how you use it and what for. The biggest issue with Machine Learning is the data science skills required to implement it, and the absolute necessity to engage subject matter experts with deep familiarity of the problem space, including perhaps process, mechanical, reliability, and planning/scheduling personnel."

Brooks said Aspen has embedded Machine Learning and engineering smarts in anomaly and failure/degradation agents that run every few minutes; guiding those agents to hunt for causation rather than simple correlation is the differentiating methodology.

"The methodology, copied from the iPhone idea, is that the smarts are on the inside doing the complex and hard work, so you do not have to. That approach ensures it is easier and faster to do Machine Learning implementations on specific equipment with an application that scales rapidly and easily, meaning faster time to cash for many assets. The alternative is a pure Machine Learning approach on a specific Machine Learning platform that takes the user nowhere near the problem space, where every application is an open project every time, complete with fragility and grand requirements for domain expertise."

With Machine Learning seeing enterprise-wide adoption in various business environments across MEA, organisations are being urged to establish a business case before embarking on any project.

Ramprakash Ramamoorthy, Director, AI Research, ManageEngine, said since the onset of the pandemic, the first touchpoint for many businesses has been digital. Ramamoorthy said organisations must remain digitally competitive to stay afloat, and they achieve this by implementing newer technologies like Machine Learning. He said another factor is the ongoing AI summer, during which there have been a lot of investments in AI and other associated technologies, which in turn has increased the adoption of Machine Learning across the globe.

Ramamoorthy pointed out that because Machine Learning enables enterprise software to move from process automation to decision automation, using Machine Learning involves rewriting current, traditionally deterministic processes and workflows to make them probabilistic.

"For instance, a traditional anomaly system uses the bell curve to identify anomalies, whereas a Machine Learning-powered anomaly system identifies anomalies along with the probability of an outage occurring. CIOs have to drive these changes and incentivise teams to use and integrate new technologies like ML into their everyday workflows by citing the impact they could have on business growth," he said.
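
As a hedged sketch of that contrast (the threshold values, stand-in labels, and choice of classifier are illustrative assumptions, not ManageEngine's implementation):

```python
# Deterministic bell-curve check vs. a probabilistic, learned check.
import numpy as np
from sklearn.linear_model import LogisticRegression

history = np.random.default_rng(0).normal(100, 10, 1000)  # e.g. requests/min

def zscore_anomaly(x, data, k=3.0):
    """Traditional bell-curve rule: flag anything beyond k standard deviations."""
    return abs(x - data.mean()) / data.std() > k

# ML-style alternative: a model trained on (stand-in) labelled incidents
# emits the probability of an outage rather than a bare yes/no flag.
X = history.reshape(-1, 1)
y = history > 120                    # hypothetical labels for past outages
clf = LogisticRegression().fit(X, y)

print(zscore_anomaly(132, history))        # deterministic: True or False
print(clf.predict_proba([[132]])[0, 1])    # probabilistic: e.g. 0.9
```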

Walid Issa, Senior Manager, Pre-sales and Solutions Engineering, Middle East Region, NetApp, said Artificial Intelligence and Machine Learning have moved beyond the realm of concept into real-world application, representing a great opportunity to stay competitive, drive growth and cut costs.

Issa said AI and ML are well suited to different verticals such as manufacturing, healthcare, telecom, the public sector, retail, finance and automotive. "If I select healthcare as an example, Artificial Intelligence is transforming healthcare in ways we never thought possible. And it really is all about data. Using data, AI and ML can help healthcare professionals make more informed, accurate, and proactive assessments and diagnoses. The ability to analyse data in real time enables healthcare professionals to improve the quality of life for patients and ultimately save lives. This will enable proactive diagnoses using smarter healthcare tools, help physicians find the right data faster, and keep patients and healthcare organisations safe from cyber criminals and attacks," he said.

CIOs and IT leaders should involve the business side to secure buy-in for a Machine Learning deployment, as that buy-in is what makes the deployment succeed in the organisation.

Chris Royles, EMEA Field CTO, Cloudera, said CIOs and IT leaders will be influential in building and maintaining a data culture in the organisation. Royles said helping develop a data literacy programme and working across lines of business to instil the importance of data in each domain is an important start. "We then suggest a democratised approach to data management, where ownership of the business domain and data problems is managed by those closest to the systems. It is then for each domain to identify the opportunities they can apply to their data processes to introduce Machine Learning," he said.

Kevin Thompson, Cloud Operations Manager, Sage Africa, Middle East and Asia Pacific, said one of the key elements to consider is change management, since ML and AI could potentially take over many of the tasks human workers currently execute manually. Thompson said businesses should look at how these new technologies can augment, rather than replace, their people, and show people how the technology will free them from routine, repetitive processes so they can focus on work that needs more creative, strategic or emotional intelligence.

According to Thompson, within a few years ML will be so deeply embedded into every computer system that the industry will take it for granted. "To get ROI, organisations should start out with a clear idea of the business outcome they would like to achieve and how they will measure success. For example, they might want to use Machine Learning to generate efficiencies in customer service. In this case, they could measure call centre volumes versus customers served by an ML/AI-powered chatbot. An insurance company could use ML for fraud detection and measure the value of the fraudulent claims the system picks up," he said.


Originally posted here:
Why Machine Learning is a central part of business operations - Intelligent CIO