Archive for the ‘Machine Learning’ Category

What is document classification, and how can machine learning help? – Robotics and Automation News

It is hard to classify documents. At least manually.

Imagine this: you head into a typical bookstore where books are shelved by genre, such as thriller, romance, science fiction, and more. You want to pick up Andy Weir's Project Hail Mary, a novel with both thriller/mystery and science fiction elements.

While the book choice seems on point, the question is: which shelf should you head towards? The book could be on the science fiction shelf or at the thriller counter. It could be anywhere. And that is where manual document classification becomes troublesome.

Sweating already? Fret not, as machine learning is here to help. Not to throw shade at manual document classification, but it can be tedious once you look at the world beyond books, including inventories and databases.

Document classification with machine learning, by contrast, can be a game changer, courtesy of readily available technologies such as NLP, robotics, sentiment analysis, OCR, and more.

Let's take a deeper dive into all of these.

Simply put, document classification is the automated process of sorting documents into relevant classes or categories.

Often regarded as a sub-domain of text classification, document classification, at its simplest, means tagging documents and sorting them into predefined categories for easy maintenance and efficient discovery.

On the surface, the process is simple: it's all about extracting and retrieving information. Yet, due to the sheer size of the data sets involved, companies often need to rely on machine learning and deep learning technologies to keep up with document classification while maintaining speed, accuracy, scalability, and cost-effectiveness.

And just to mention, document classification can be considered a sub-domain of IDP or intelligent document processing. But more on that later.

As for the approach, document classification relies on both text and visual classification techniques, analyzing document-specific phrases as well as visual structure.

Visual and text classification can help companies classify every kind of document (stills, pictures, large data modules, and more) with ease.

Short story: intelligent models scan through structured, unstructured, and even semi-structured documents to match them with the corresponding categories.

Long story: a variety of machine learning techniques can be put to use for classifying documents into categories; one common supervised approach is sketched below.

Regardless of the approach, businesses need a good way to classify documents, as going manual can be time-consuming, error-prone, and plain hard.
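To make that concrete, here is a minimal supervised text-classification sketch in Python. The categories, training snippets, and the TF-IDF-plus-logistic-regression pipeline are illustrative assumptions rather than techniques named in the article; they simply show how tagged examples let a model route new documents into predefined classes.

```python
# Minimal supervised document-classification sketch (illustrative only).
# The categories, training texts, and model choice are assumptions,
# not details taken from the article.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Tiny labelled corpus: each document is tagged with a predefined category.
train_texts = [
    "Invoice for office supplies, total amount due in 30 days",
    "Purchase order and invoice for warehouse inventory restock",
    "Employment contract outlining salary, benefits and termination",
    "Non-disclosure agreement between the two contracting parties",
]
train_labels = ["invoice", "invoice", "legal", "legal"]

# TF-IDF turns each document into a numeric feature vector;
# logistic regression learns to map those vectors to categories.
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(train_texts, train_labels)

# Route a new, unseen document to its category.
print(model.predict(["Signed agreement covering confidential information"]))
# expected: ['legal']
```

In practice, production systems train on thousands of labelled documents per category and often add visual features for scanned pages, but the mechanics are the same.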

However, if you want a broader view of the process, an automated and efficient document classification workflow typically runs through a handful of steps: collecting documents, preprocessing and labeling them, training and evaluating a model, and deploying it to classify new documents as they arrive.

Theoretical discourse is all well and good, but what about the use cases for document classification? We have them sorted for you.

Opinion Classification: Businesses use this feature to segregate positive reviews from negative ones.

Spam Detection: Have you ever thought about how your email provider separates standard emails from spam emails? Well, document classification is the answer.

Customer Support Classification: A random day in the life of a customer support executive can be stressful. Document classification helps them understand the tickets better, especially when the request volume far exceeds their patience.

In addition to the mentioned use cases, document classification can also be used for social listening, document scanning, and even object recognition.
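To put the opinion-classification use case above into code, here is a small sketch using NLTK's off-the-shelf VADER sentiment analyzer; the review texts and the zero threshold are assumptions made purely for the example.

```python
# Minimal opinion-classification sketch using NLTK's VADER sentiment analyzer.
# The review texts and the decision threshold are illustrative assumptions.
import nltk
from nltk.sentiment import SentimentIntensityAnalyzer

nltk.download("vader_lexicon", quiet=True)  # one-time lexicon download
sia = SentimentIntensityAnalyzer()

reviews = [
    "Absolutely loved the service, will order again!",
    "Terrible experience, the package arrived broken.",
]

for review in reviews:
    # 'compound' ranges from -1 (most negative) to +1 (most positive).
    score = sia.polarity_scores(review)["compound"]
    label = "positive" if score >= 0 else "negative"
    print(f"{label:8s} {review}")
```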

Every organization is information-dependent. Yet not every kind of information is meant for everyone. This is why document classification becomes all the more important, helping organizations collect, store, and eventually classify details as required. And if you are still a manual evangelist, remember one thing: automation is the key to the future.

About the author: Vatsal Ghiya is a serial entrepreneur with more than 20 years of experience in healthcare AI software and services. He is the CEO and co-founder of Shaip, which enables the on-demand scaling of its platform, processes, and people for companies with the most demanding machine learning and artificial intelligence initiatives. LinkedIn: https://www.linkedin.com/in/vatsal-ghiya-4191855/



Uber and AMC bring machine learning company Rokt onboard to drive revenue – Mugglehead

Rokt has partnered with Uber Technologies (NYSE:UBER) and AMC Theatres (NYSE:AMC) to help both companies make more money on their websites and mobile apps.

Rokt is an ecommerce tech company using machine learning to help tailor transactions to each shopper. The idea behind the technology is to give companies the chance to generate additional revenue, find customers at scale and give extra options to existing customers by using machine learning to present offers to each shopper as they're entering the final stages of a transaction. The analog here would be the impulse-buy section before a checkout line, except tailored to each consumer based on collected data.

"Uber and AMC Theatres are two of the most recognized brands in the world and we're extremely pleased to partner with both of them as we accelerate our growth globally. Our global partnership with Uber will support the Uber Eats internal ad network and unlock additional profitability for the company. Our partnership with AMC has already begun generating outstanding results for the company. We look forward to expanding our relationships with both of these companies in the future," said Elizabeth Buchanan, chief commercial officer of Rokt.

Rokt's offering is ecommerce technology that helps customers unlock the full potential of every transaction to grow revenue. Existing customers include Live Nation, Groupon, Staples, Lands' End, Fanatics, GoDaddy, Vistaprint and HelloFresh, and extend to some 2,500 other global businesses and advertisers. The company is originally from Australia, but it has moved its headquarters to New York City and has expanded to 19 countries across three continents.

Rokt's partnership with Uber will initially launch with Uber Eats in the US, Canada, Australia and Japan, with Rokt's machine learning technology driving additional revenue for Uber during the checkout experience. AMC has partnered with Rokt to drive revenue and customer lifetime value across the company's online and mobile channels.

"As millions of moviegoers come to AMC each week to enjoy the unmatched entertainment of the big screen, it's important that we are offering a guest experience that's personally relevant across the entire moviegoing journey. Our partnership with Rokt enables us to better engage our consumers personally and drive higher value per transaction by optimizing each online touchpoint without adding additional cost to the moviegoer," said Mark Pearson, chief strategy officer for AMC Theatres.

Rokt uses intelligence drawn from five billion transactions across hundreds of ecommerce businesses to allow brands to create a tailored customer experience in which they can control the types of offers displayed to their customers. Businesses that partner with Rokt can unlock upwards of $0.30 in additional profit per transaction through high-performance techniques relevant to each individual, from the moment the customer puts an item in their digital cart to the time their payment goes through.


Autonomous Experiments in the Age of Computing, Machine Learning and Automation: Progress and Challenges – Argonne National Laboratory

Abstract: Machine learning has by now become a widely used tool within materials science, spawning entirely new fields such as materials informatics that seek to accelerate the discovery and optimization of material systems through both experiments and computational studies. Similarly, the increasing use of robotic systems has led to the emergence of autonomous systems ranging from chemical synthesis to personal vehicles, which has spurred the scientific community to investigate these directions for their own tasks. This raises the question: when will mainstay scientific synthesis and characterization tools, such as electron and scanning probe microscopes, start to perform experiments autonomously?

In this talk, I will discuss the history of how machine learning, automation and the availability of compute have led to nascent autonomous microscopy platforms at the Center for Nanophase Materials Sciences. I will illustrate the challenges to making autonomous experiments happen, as well as the necessity for data, computation, and abstractions to fully realize the potential these systems can offer for scientific discovery. I will then focus on our work on reinforcement learning as a tool that can be leveraged to facilitate autonomous decision making to optimize material characterization (and material properties) on the fly, on a scanning probe microscope. Finally, some workflow and data infrastructure issues will also be discussed. This research was conducted at and supported by the Center for Nanophase Materials Sciences, a US DOE Office of Science User Facility.
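For readers unfamiliar with how reinforcement learning can steer an experiment "on the fly", the toy sketch below frames the idea as an epsilon-greedy bandit that keeps re-estimating which scan setting yields the best-quality measurement. The candidate settings, the simulated measure() function, and the reward are placeholders invented for illustration, not the CNMS platform's actual implementation.

```python
# Toy sketch of on-the-fly parameter selection for an autonomous experiment,
# framed as an epsilon-greedy bandit. The candidate scan settings, the
# measure() stand-in, and the reward definition are placeholders.
import random

scan_settings = [0.5, 1.0, 2.0, 4.0]          # hypothetical scan rates (Hz)
estimates = {s: 0.0 for s in scan_settings}   # running reward estimate per setting
counts = {s: 0 for s in scan_settings}
epsilon = 0.1                                 # fraction of purely exploratory scans

def measure(setting):
    """Stand-in for running one scan and scoring its quality (0..1)."""
    return random.random() * (1.0 / setting)  # pretend slower scans look better

for step in range(200):
    if random.random() < epsilon:
        choice = random.choice(scan_settings)        # explore a random setting
    else:
        choice = max(estimates, key=estimates.get)   # exploit the best estimate
    reward = measure(choice)
    counts[choice] += 1
    # Incremental mean update of the reward estimate for this setting.
    estimates[choice] += (reward - estimates[choice]) / counts[choice]

print("Preferred setting:", max(estimates, key=estimates.get))
```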


Researchers using artificial intelligence to assist with early detection of autism spectrum disorder – EurekAlert

[Image: Han-Seok Seo. Credit: University Relations]

Could artificial intelligence be used to assist with the early detection of autism spectrum disorder? That's a question researchers at the University of Arkansas are trying to answer. But they're taking an unusual tack.

Han-Seok Seo, an associate professor with a joint appointment in food science and the UA System Division of Agriculture, and Khoa Luu, an assistant professor in computer science and computer engineering, will identify sensory cues from various foods in both neurotypical children and those known to be on the spectrum. Machine learning technology will then be used to analyze biometric data and behavioral responses to those smells and tastes as a way of detecting indicators of autism.

There are a number of behaviors associated with ASD, including difficulties with communication and social interaction, as well as repetitive behaviors. People with ASD are also known to exhibit some abnormal eating behaviors, such as avoidance of some if not many foods, specific mealtime requirements and non-social eating. Food avoidance is particularly concerning, because it can lead to poor nutrition, including vitamin and mineral deficiencies. With that in mind, the duo intend to identify sensory cues from food items that trigger atypical perceptions or behaviors during ingestion. For instance, odors like peppermint, lemons and cloves are known to evoke stronger reactions from those with ASD than those without, possibly triggering increased levels of anger, surprise or disgust.

Seo is an expert in the areas of sensory science, behavioral neuroscience, biometric data and eating behavior. He is organizing and leading this project, including screening and identifying specific sensory cues that can differentiate autistic children from non-autistic children with respect to perception and behavior. Luu is an expert in artificial intelligence with specialties in biometric signal processing, machine learning, deep learning and computer vision. He will develop machine learning algorithms for detecting ASD in children based on unique patterns of perception and behavior in response to specific test samples.
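As a purely hypothetical illustration of the kind of model such a pipeline might produce, the sketch below trains a classifier on made-up per-child response features (for example, reaction intensity to specific odors). The feature names, values, and labels are invented placeholders with no clinical meaning; they only show the mechanics of training and cross-validating a classifier on tabular response data.

```python
# Hypothetical sketch of a classifier over behavioral/biometric response
# features. All data here is invented for illustration only.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

# Columns: [reaction_to_peppermint, reaction_to_lemon, reaction_to_clove]
X = np.array([
    [0.9, 0.8, 0.7],   # stronger responses
    [0.8, 0.9, 0.8],
    [0.2, 0.3, 0.1],   # milder responses
    [0.3, 0.2, 0.2],
    [0.7, 0.9, 0.8],
    [0.1, 0.2, 0.3],
])
y = np.array([1, 1, 0, 0, 1, 0])  # 1 = flagged for follow-up screening, 0 = not

clf = RandomForestClassifier(n_estimators=100, random_state=0)
print(cross_val_score(clf, X, y, cv=3).mean())  # rough accuracy estimate
```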

The duo are in the second year of a three-year, $150,000 grant from the Arkansas Biosciences Institute.

Their ultimate goal is to create an algorithm that exhibits equal or better performance in the early detection of autism in children when compared to traditional diagnostic methods, which require evaluations by trained healthcare and psychological professionals, longer assessment durations, caregiver-submitted questionnaires and additional medical costs. Ideally, they will be able to validate a lower-cost mechanism to assist with the diagnosis of autism. While their system would not likely be the final word in a diagnosis, it could provide parents with an initial screening tool, ideally ruling out children who are not candidates for ASD while ensuring the most likely candidates pursue a more comprehensive screening process.

Seo said that he became interested in the possibility of using multi-sensory processing to evaluate ASD when two things happened: he began working with a graduate student, Asmita Singh, who had a background in working with autistic students, and his daughter was born. Like many first-time parents, Seo paid close attention to his newborn baby, anxious that she be healthy. When he noticed she wouldn't make eye contact, he did what most nervous parents do: turned to the internet for an explanation. He learned that avoidance of eye contact is a known characteristic of ASD.

While his child did not end up having ASD, his curiosity was piqued, particularly about the role sensitivities to smell and taste play in ASD. Further conversations with Singh led him to believe fellow anxious parents might benefit from an early detection tool, one that could perhaps inexpensively alleviate concerns at the outset. Later conversations with Luu led the pair to believe that if machine learning, developed by his graduate student Xuan-Bac Nguyen, could be used to identify normal reactions to food, it could be taught to recognize atypical responses as well.

Seo is seeking volunteers aged 5-14 to participate in the study. Both neurotypical children and children already diagnosed with ASD are needed. Participants receive a $150 eGift card for participating and are encouraged to contact Seo at hanseok@uark.edu.



This Smart Doorbell Responds to Meowing Cats Using Machine Learning and IoT – Hackster.io

Those who own an outdoor cat, or even several, might run into the occasional problem of having to let them back in. Annoyed at having to constantly watch for when his cat wanted to come back inside the house, GitHub user gamename opted for a more automated system.

The solution gamename came up with involves listening to ambient sounds with a single Raspberry Pi and an attached USB microphone. Whenever the locally-running machine learning model detects a meow, it sends a message to an AWS service over the internet where it can then trigger a text to be sent. This has the advantage of limiting false events while simultaneously providing an easy way for the cat to be recognized at the door.

This project started by installing the AWS command-line interface (CLI) onto the Raspberry Pi 4 and then signing in with an account. From here, gamename registered a new IoT device, downloaded the resulting configuration files, and ran the setup script. After quickly updating some security settings, a new function was created that waits for new messages coming from the MQTT service and causes a text message to be sent with the help of the SNS service.
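The article does not show that function, but a minimal AWS Lambda-style handler of the kind described, triggered by an IoT rule on the meow topic and forwarding a text through SNS, might look like the sketch below; the topic ARN and message wording are placeholders rather than gamename's actual code.

```python
# Hedged sketch of a Lambda handler that reacts to the MQTT-triggered event
# and publishes a notification via SNS. Topic ARN and wording are placeholders.
import json

import boto3

sns = boto3.client("sns")
SNS_TOPIC_ARN = "arn:aws:sns:us-east-1:123456789012:cat-doorbell"  # placeholder

def handler(event, context):
    # 'event' is the JSON payload the Pi published to the MQTT topic.
    detail = json.dumps(event)
    sns.publish(
        TopicArn=SNS_TOPIC_ARN,
        Message=f"Meow detected at the door: {detail}",
        Subject="Cat doorbell",
    )
    return {"status": "notified"}
```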

After this plethora of services and configurations had been set up in the AWS project, gamename moved on to the next step: testing whether messages are sent at the right time. His test script simply emulates a positive result by sending the certificates, key, topic, and message to the endpoint, after which the user can watch the text appear on their phone a moment later.
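A test publish of that sort could be written with the AWS IoT Device SDK for Python, as in the sketch below; the endpoint, certificate file names, and topic are placeholders, and the SDK choice itself is an assumption since the article does not name the client library.

```python
# Sketch of a test publish to the AWS IoT MQTT endpoint using the classic
# AWS IoT Device SDK for Python. Endpoint, cert paths, and topic are placeholders.
import json

from AWSIoTPythonSDK.MQTTLib import AWSIoTMQTTClient

ENDPOINT = "xxxxxxxxxxxxxx-ats.iot.us-east-1.amazonaws.com"  # placeholder
TOPIC = "cat-doorbell/meow"                                  # placeholder

client = AWSIoTMQTTClient("meow-test")
client.configureEndpoint(ENDPOINT, 8883)
client.configureCredentials("AmazonRootCA1.pem", "private.pem.key", "device.pem.crt")
client.connect()

# Emulate a positive detection so the downstream SNS text can be verified.
client.publish(TOPIC, json.dumps({"event": "meow", "test": True}), 1)
client.disconnect()
```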

The Raspberry Pi and microSD card were both placed into an off-the-shelf chassis, which sits just inside the house's entrance. After this, the microphone was connected with the help of two RJ45-to-USB cables that allow it to sit outside, in a waterproof housing, up to 150 feet away.

Running on the Pi is a custom bash script that starts every time the board boots up, and its role is to launch the Python program. This causes the Raspberry Pi to read samples from the microphone and pass them to a TensorFlow audio classifier, which attempts to recognize the sound clip. If the primary noise is a cat, the AWS API is called to publish the message to the MQTT topic. More information about this project can be found in gamename's GitHub repository.
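The article names TensorFlow and a custom Python program but not a specific model, so the sketch below fills in plausible details: Google's pretrained YAMNet audio classifier, microphone capture via the sounddevice library, and a simple label check. Treat the model choice, label names, and clip length as assumptions rather than the project's actual code.

```python
# Sketch of the detection loop: record a short clip from the USB microphone,
# classify it with a pretrained audio model (YAMNet, assumed), and publish
# when a cat sound is the top label.
import csv

import numpy as np
import sounddevice as sd
import tensorflow_hub as hub

SAMPLE_RATE = 16000   # YAMNet expects 16 kHz mono float32 audio
CLIP_SECONDS = 3

yamnet = hub.load("https://tfhub.dev/google/yamnet/1")
with open(yamnet.class_map_path().numpy().decode("utf-8")) as f:
    class_names = [row["display_name"] for row in csv.DictReader(f)]

def record_clip():
    """Grab a short clip from the USB microphone."""
    audio = sd.rec(int(CLIP_SECONDS * SAMPLE_RATE), samplerate=SAMPLE_RATE,
                   channels=1, dtype="float32")
    sd.wait()
    return np.squeeze(audio)

def top_label(waveform):
    """Average YAMNet's per-frame scores and return the best class name."""
    scores, _, _ = yamnet(waveform)
    return class_names[int(scores.numpy().mean(axis=0).argmax())]

while True:
    label = top_label(record_clip())
    if label in ("Cat", "Meow"):
        # Here the real script would call the AWS API to publish to the MQTT
        # topic (see the test-publish sketch above); print stands in for that.
        print("Cat noise detected - publishing MQTT message")
```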
