Archive for the ‘Machine Learning’ Category

Going Beyond Machine Learning To Machine Reasoning – Forbes

From Machine Learning to Machine Reasoning

The conversation around Artificial Intelligence usually revolves around technology-focused topics: machine learning, conversational interfaces, autonomous agents, and other aspects of data science, math, and implementation. However, the history and evolution of AI are more than just a technology story. The story of AI is also inextricably linked with waves of innovation and research breakthroughs that run headfirst into economic and technological roadblocks. There seems to be a continuous pattern of discovery, innovation, interest, investment, cautious optimism, boundless enthusiasm, realization of limitations, technological roadblocks, withdrawal of interest, and retreat of AI research back to academic settings. These waves of advance and retreat seem to be as consistent as the back and forth of sea waves on the shore.

This pattern of interest, investment, hype, then decline, rinse-and-repeat is particularly vexing to technologists and investors because it doesn't follow the usual technology adoption lifecycle. As popularized by Geoffrey Moore in his book "Crossing the Chasm", technology adoption usually follows a well-defined path. A technology is developed and finds early interest from innovators, then early adopters, and if it can make the leap across the "chasm", it gets adopted by the early majority market; from there it's off to the races with demand from the late majority and finally the technology laggards. If the technology can't cross the chasm, it ends up in the dustbin of history. AI, however, doesn't fit this adoption lifecycle pattern.

But AI isn't a discrete technology. Rather, it's a series of technologies, concepts, and approaches all aligning towards the quest for the intelligent machine. This quest inspires academicians and researchers to come up with theories of how the brain and intelligence work, and concepts for how to mimic these aspects with technology. AI is a generator of technologies, which individually go through the technology lifecycle. Investors aren't investing in "AI", but rather in the output of AI research and the technologies that can help achieve the goals of AI. As researchers discover new insights that help them surmount previous challenges, or as technology infrastructure finally catches up with concepts that were previously infeasible, new technology implementations are spawned and the cycle of investment renews.

The Need for Understanding

It's clear that intelligence is like an onion (or a parfait): many layers. Once we understand one layer, we find that it only explains a limited amount of what intelligence is about. We discover there's another layer that's not quite understood, and back to our research institutions we go to figure out how it works. In Cognilytica's exploration of the intelligence of voice assistants, the benchmark aims to tease at one of those next layers: understanding. That is, knowing what something is (recognizing an image among a category of trained concepts, converting audio waveforms into words, identifying patterns among a collection of data, or even playing games at advanced levels) is different from actually understanding what those things are. This lack of understanding is why users get hilarious responses to voice assistant questions, and it is also why we can't truly get autonomous machine capabilities in a wide range of situations. Without understanding, there's no common sense. Without common sense and understanding, machine learning is just a bunch of learned patterns that can't adapt to the constantly evolving changes of the real world.

One of the visual concepts that's helpful for understanding these layers of increasing value is the "DIKUW Pyramid":

DIKUW Pyramid

While the Wikipedia entry above conveniently skips the Understanding step, we believe that understanding is the next logical threshold of AI capability. And like all previous layers of this AI onion, tackling this layer will require new research breakthroughs, dramatic increases in compute capabilities, and large volumes of data. What? Don't we have almost limitless data and boundless computing power? Not quite. Read on.

The Quest for Common Sense: Machine Reasoning

Early in the development of artificial intelligence, researchers realized that for machines to successfully navigate the real world, they would have to gain an understanding of how the world works and how various different things are related to each other. In 1984, the world's longest-lived AI project started. The Cyc project is focused on generating a comprehensive "ontology" and knowledge base of common sense, basic concepts and "rules of thumb" about how the world works. The Cyc ontology uses a knowledge graph to structure how different concepts are related to each other, and an inference engine that allows systems to reason about facts.
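Cyc's actual CycL representation and reasoner are vastly richer, but the two ingredients described above, a graph of facts and an inference engine that reasons over them, can be illustrated with a minimal Python sketch. The entities and the single transitivity rule below are invented for illustration only:

```python
# A toy knowledge base of (subject, relation, object) facts, loosely in the
# spirit of a knowledge graph. Cyc's CycL language is vastly richer; the
# entities and the single inference rule here are invented for illustration.
facts = {
    ("rain", "causes", "wet_ground"),
    ("wet_ground", "causes", "slippery_road"),
    ("thirst", "motivates", "drinking"),
}

def infer_transitive(facts, relation):
    """Chain a transitive relation: if A->B and B->C, then A->C."""
    inferred = set(facts)
    changed = True
    while changed:
        changed = False
        for (a, r1, b) in list(inferred):
            for (b2, r2, c) in list(inferred):
                if r1 == relation and r2 == relation and b == b2:
                    if (a, relation, c) not in inferred:
                        inferred.add((a, relation, c))
                        changed = True
    return inferred

# The engine derives ("rain", "causes", "slippery_road"), a fact never
# stated explicitly: a very small step toward machine reasoning.
for fact in sorted(infer_transitive(facts, "causes") - facts):
    print("inferred:", fact)
```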

The main idea behind Cyc and other understanding-building knowledge encodings is the realization that systems can't be truly intelligent if they don't understand what the things they are recognizing or classifying actually are. This means we have to dig deeper than machine learning for intelligence. We need to peel this onion one level deeper and scoop out another tasty parfait layer. We need more than machine learning - we need machine reasoning.

Machine reasoning is the concept of giving machines the power to make connections between facts, observations, and all the magical things that we can train machines to do with machine learning. Machine learning has enabled a wide range of capabilities and functionality and opened up a world of possibility that did not exist before we could train machines to identify and recognize patterns in data. However, this power is crippled by the fact that these systems are not really able to functionally use that information for higher ends, or to apply learning from one domain to another without human involvement. Even transfer learning is limited in application.

Indeed, we're rapidly facing the reality that we're going to soon hit the wall on the current edge of capabilities with machine learning-focused AI. To get to that next level we need to break through this wall and shift from machine learning-centric AI to machine reasoning-centric AI. However, that's going to require some breakthroughs in research that we haven't realized yet.

The fact that the Cyc project has the distinction of being the longest-lived AI project is a bit of a back-handed compliment. The Cyc project is long-lived because, after all these decades, the quest for common sense knowledge is proving elusive. Codifying common sense into a machine-processable form is a tremendous challenge. Not only do you need to encode the entities themselves in a way that a machine knows what you're talking about, but also all the inter-relationships between those entities. There are millions, if not billions, of "things" that a machine needs to know. Some of these things are tangible, like "rain", but others are intangible, such as "thirst". The work of encoding these relationships is being partially automated, but it still requires humans to verify the accuracy of the connections... because, after all, if machines could do this we would have solved the machine recognition challenge. It's a bit of a chicken-and-egg problem. You can't solve machine recognition without having some way to codify the relationships between information. But you can't scalably codify all the relationships that machines would need to know without some form of automation.

Are we still limited by data and compute power?

Machine learning has proven to be very data-hungry and compute-intensive. Over the past decade, many iterative enhancements have lessened the compute load and made data use more efficient. GPUs, TPUs, and emerging FPGAs are helping to provide the raw compute horsepower needed. Yet, despite these advancements, complicated machine learning models with many dimensions and parameters still require intense amounts of compute and data. Machine reasoning is easily an order of magnitude or more of complexity beyond machine learning. Accomplishing the task of reasoning out the complicated relationships between things, and truly understanding them, might be beyond today's compute and data resources.

The current wave of interest and investment in AI doesn't show any signs of slowing or stopping any time soon, but it's inevitable it will slow at some point for one simple reason: we still don't understand intelligence and how it works. Despite the amazing work of researchers and technologists, we're still guessing in the dark about the mysterious nature of cognition, intelligence, and consciousness. At some point we will be faced with the limitations of our assumptions and implementations and we'll work to peel the onion one more layer and tackle the next set of challenges. Machine reasoning is quickly approaching as the next challenge we must surmount on the quest for artificial intelligence. If we can apply our research and investment talent to tackling this next layer, we can keep the momentum going with AI research and investment. If not, the pattern of AI will repeat itself, and the current wave will crest. It might not be now or even within the next few years, but the ebb and flow of AI is as inevitable as the waves upon the shore.

See the original post:
Going Beyond Machine Learning To Machine Reasoning - Forbes

AI and machine learning trends to look toward in 2020 – Healthcare IT News

Artificial intelligence and machine learning will play an even bigger role in healthcare in 2020 than they did in 2019, helping medical professionals with everything from oncology screenings to note-taking.

On top of actual deployments, increased investment activity is also expected this year, and with deeper deployments of AI and ML technology, a broader base of test cases will be available to collect valuable best practices information.

As AI is implemented more widely in real-world clinical practice, there will be more academic reports on the clinical benefits that have arisen from the real-world use, said Pete Durlach, senior vice president for healthcare strategy and new business development at Nuance.

"With healthy clinical evidence, we'll see AI become more mainstream in various clinical settings, creating a positive feedback loop of more evidence-based research and use in the field," he explained. "Soon, it will be hard to imagine a doctor's visit, or a hospital stay that doesn't incorporate AI in numerous ways."

In addition, AI and ambient sensing technology will help re-humanize medicine by allowing doctors to focus less on paperwork and administrative functions, and more on patient care.

"As AI becomes more commonplace in the exam room, everything will be voice enabled, people will get used to talking to everything, and doctors will be able to spend 100% of their time focused on the patient, rather than entering data into machines," Durlach predicted. "We will see the exam room of the future where clinical documentation writes itself."

The adoption of AI for robotic process automation (RPA) of common, high-value administrative functions, such as the revenue cycle, supply chain, and patient scheduling, also has the potential to increase rapidly. As AI automates or partially automates components of these functions, it can drive significantly enhanced financial outcomes for provider organizations.

Durlach also noted that the fear that AI will replace doctors and clinicians has dissipated; the goal now is to figure out how to incorporate AI as another tool to help physicians make the best care decisions possible, effectively augmenting the intelligence of the clinician.

"However, we will still need to protect against phenomenon like alert fatigue, which occurs when users who are faced with many low-level alerts, ignore alerts of all levels, thereby missing crucial ones that can affect the health and safety of patients," he cautioned.

In the next few years, he predicts, the market will see technology that strikes a balance between being unobtrusive and supporting doctors in making the best decisions for their patients as they learn to trust AI-powered suggestions and recommendations.

"So many technologies claim they have an AI component, but often there's a blurred line in which the term AI is used in a broad sense, when the technology that's being described is actually basic analytics or machine learning," Kuldeep Singh Rajput, CEO and founder of Boston-based Biofourmis, told Healthcare IT News. "Health system leaders looking to make investments in AI should ask for real-world examples of how the technology is creating ROI for other organizations."

For example, he pointed to a study of Brigham & Women's Home Hospital program, recently published in Annals of Internal Medicine, which employed AI-driven continuous monitoring combined with advanced physiology analytics and related clinical care as a substitute for usual hospital care.

The study found that the program (which included an investment in AI-driven predictive analytics as a key component) reduced costs, decreased healthcare use, and lowered readmissions while increasing physical activity compared with usual hospital care.

"Those types of outcomes could be replicated by other healthcare organizations, which makes a strong clinical and financial case to invest in that type of AI," Rajput said.

Nathan Eddy is a healthcare and technology freelancer based in Berlin. Email the writer: nathaneddy@gmail.com. Twitter: @dropdeaded209

Go here to read the rest:
AI and machine learning trends to look toward in 2020 - Healthcare IT News

Chemists are training machine learning algorithms used by Facebook and Google to find new molecules – News@Northeastern

For more than a decade, Facebook and Google algorithms have been learning as much as they can about you. It's how they refine their systems to deliver the news you read, those puppy videos you love, and the political ads you engage with.

These same kinds of algorithms can be used to find billions of molecules and catalyze important chemical reactions that are currently induced with expensive and toxic metals, says Steven A. Lopez, an assistant professor of chemistry and chemical biology at Northeastern.

Lopez is working with a team of researchers to train machine learning algorithms to spot the molecular patterns that could help find new molecules in bulk, and fast. It's a much smarter approach than scanning through billions and billions of molecules without a streamlined process.

"We're teaching the machines to learn the chemistry knowledge that we have," Lopez says. "Why should I just have the chemical intuition for myself?"

The alternative to using expensive metals is organic molecules, particularly plastics, which are everywhere, Lopez says. Depending on their molecular structure and ability to absorb light, these plastics can be converted with chemistry to produce better materials for today's most important problems.

Lopez says the goal is to find molecules with the right properties and structures similar to those of metal catalysts. But to attain that goal, Lopez will need to explore an enormous number of molecules.

Thus far, scientists have been able to synthesize only about a million molecules. But a conservative estimate of the number of possible molecules that could be analyzed is a quintillion, which is 10 raised to the power of 18, or a one followed by 18 zeros.

Lopez thinks of this enormous number of possibilities as a vast ocean made up of billions of unexplored molecules. Such an immense molecular space is practically impossible to navigate, even if scientists were to combine experiments with supercomputer analysis.

Lopez says all of the molecular calculations that have ever been done by computers add up to about a billion, or 10 to the ninth power. That's about a billion times fewer than the number of possible molecules (10^18 / 10^9 = 10^9).

"Forget it, there's no chance," he says. "We just have to use a smarter search technique."

That's why Lopez is leading a team, supported by a grant from the National Science Foundation, that includes researchers from Tufts University, Washington University in St. Louis, Drexel University, and Colorado School of Mines. The team is using an open-access database of organic molecules called VERDE materials DB, which Lopez and colleagues recently published, to improve their algorithms and find more useful molecules.

The database will also register newly found molecules, and can serve as a data hub of information for researchers across several different domains, Lopez says. That's because it can launch researchers toward finding different molecules with many new properties and applications.

In tandem with the database, the algorithms will allow scientists to use computational resources more efficiently. After molecules of interest are found, researchers will recalibrate the algorithm to find more similar groups of molecules.

The active-search algorithm, developed by Roman Garnett at Washington University in St. Louis, uses a process similar to the classic board game Battleship, in which two players guess hidden locations on a grid to target and destroy vessels within a naval fleet.

In that grid, players place vessels as far apart as possible to make opponents miss targets. Once a ship is hit, players can readjust their strategy and redirect their attacks to the coordinates surrounding that hit.

That's exactly how Lopez thinks of the concept of exploring a vast ocean of molecules.

"We are looking for regions within this ocean," he says. "We are starting to set up the coordinates of all the possible molecules."
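Garnett's published active-search methods are Bayesian and far more sophisticated, but the Battleship intuition can be sketched in a few lines of Python. Everything below (the grid, the hidden scoring function, the hit threshold) is an invented stand-in for real molecular properties, not the team's actual code:

```python
import random

random.seed(0)

# Toy "molecular space": a 100x100 grid of candidates with a hidden property
# score. In reality each point would be a molecule, and scoring it would mean
# an expensive computation or experiment.
def property_score(x, y):
    # One hidden "island" of good candidates centered at (20, 70).
    return max(0.0, 1.0 - 0.02 * ((x - 20) ** 2 + (y - 70) ** 2) ** 0.5)

def neighbors(x, y):
    return [(x + dx, y + dy)
            for dx in (-1, 0, 1) for dy in (-1, 0, 1)
            if (dx or dy) and 0 <= x + dx < 100 and 0 <= y + dy < 100]

def active_search(budget=200, threshold=0.5):
    """Battleship-style search: probe at random until a 'hit', then spend
    the remaining budget on the coordinates surrounding each hit."""
    frontier, hits, tried = [], [], set()
    for _ in range(budget):
        point = frontier.pop() if frontier else (
            random.randrange(100), random.randrange(100))
        if point in tried:
            continue
        tried.add(point)
        if property_score(*point) >= threshold:  # a "hit"
            hits.append(point)
            frontier.extend(n for n in neighbors(*point) if n not in tried)
    return hits

print(f"found {len(active_search())} promising candidates in 200 probes")
```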

Hitting the right candidate molecules might also expand the understanding that chemists have of this unexplored chemical space.

"Maybe we'll find out through this analysis that we have something really at the edge of what we call the ocean, and that we can expand this ocean out a bit more in that region," Lopez says. "Those are things that we wouldn't [be able to find by searching] with a brute-force, trial-and-error kind of approach."

For media inquiries, please contact Jessica Hair at j.hair@northeastern.edu or 617-373-5718.

Visit link:
Chemists are training machine learning algorithms used by Facebook and Google to find new molecules - News@Northeastern

Forget Machine Learning, Constraint Solvers are What the Enterprise Needs – RTInsights

Constraint solvers take a set of hard and soft constraints in an organization and formulate the most effective plan, taking into account real-time problems.

When a business looks to implement an artificial intelligence strategy, even proper expertise can be too narrow. It's what has led many businesses to deploy machine learning or neural networks to solve problems that require other forms of AI, like constraint solvers.

Constraint solvers are the best solution for businesses that have timetabling, assignment, or efficiency issues.

In a Red Hat webinar, principal software engineer Geoffrey De Smet ran through three use cases for constraint solvers.

Vehicle Routing

Efficient delivery management is something Amazon has seemingly perfected, so much so that it's now an annoyance to wait 3-5 days for an item to be delivered. Using Red Hat's OptaPlanner, businesses can improve vehicle routing by 9 to 18 percent by optimizing routes and ensuring drivers are able to deliver an optimal amount of goods.

To start, OptaPlanner takes in all the necessary constraints, like truck capacity and driver specialization. It also takes into account regional laws, like the amount of time a driver is legally allowed to drive per day, and creates a route for all drivers in the organization.
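OptaPlanner itself is a Java solver that searches for plans with metaheuristics, not the naive greedy pass below, but a small sketch helps show what "taking in constraints" means in practice. The capacities, driving limits, and deliveries here are invented numbers, and Python is used purely for illustration:

```python
# Hypothetical illustration of the inputs a vehicle routing solver consumes.
# A real solver (like OptaPlanner) searches far better plans than this
# greedy pass; here the point is only how hard constraints bound the plan.
TRUCK_CAPACITY = 10    # pallets per truck (hard constraint)
MAX_DRIVE_HOURS = 9    # legal daily driving limit (hard constraint)

deliveries = [         # (customer, pallets, estimated drive hours)
    ("A", 4, 2.0), ("B", 3, 1.5), ("C", 6, 3.0), ("D", 5, 2.5), ("E", 2, 1.0),
]

def greedy_routes(deliveries):
    routes, current, load, hours = [], [], 0, 0.0
    for customer, pallets, drive in deliveries:
        # Start a new route when either hard constraint would be violated.
        if load + pallets > TRUCK_CAPACITY or hours + drive > MAX_DRIVE_HOURS:
            routes.append(current)
            current, load, hours = [], 0, 0.0
        current.append(customer)
        load += pallets
        hours += drive
    if current:
        routes.append(current)
    return routes

print(greedy_routes(deliveries))  # [['A', 'B'], ['C'], ['D', 'E']]
```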

In a practical case, De Smet said Red Hat saved a technical vehicle routing company over $100 million per year with the constraint solver. Driving time was reduced by 25 percent, and the business was able to reduce its headcount by 10,000.

"The benefits [of OptaPlanner] are to reduce cost, improve customer satisfaction and employee well-being, and save the planet," said De Smet. "The nice thing about some of these is that they're complementary; for example, reducing travel time also reduces fuel consumption."

Employee timetabling

Knowing who is covering which shift can be an infuriating task for managers, with all the requests for time off, illness, and mandatory days off. In a workplace where 9 to 5 isn't the norm, it can be even harder to keep track of it all.

Red Hat's OptaPlanner is able to take all of the hard constraints (two days off per week, no shifts longer than eight hours) and soft constraints (ideally at least 10 hours of rest between shifts) and formulate a timetable that takes all of that into account. When someone asks for a day off, OptaPlanner is able to reassign workers in real time.
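Under the hood, OptaPlanner expresses plan quality as a score with separate hard and soft components: breaking a hard constraint makes a plan infeasible, while breaking a soft one merely makes it worse. Here is a minimal sketch of that scoring idea, reusing the example constraints above; the shift data is invented, and real OptaPlanner scores are computed in Java:

```python
MAX_SHIFT_HOURS = 8   # hard: no shift longer than eight hours
MIN_REST_HOURS = 10   # soft: ideally at least 10 hours of rest between shifts

def score_roster(shifts):
    """Score one worker's schedule as (hard, soft), OptaPlanner-style.
    Scores compare lexicographically: any hard violation outweighs every
    soft one. `shifts` is a time-ordered list of (start_hour, end_hour)
    tuples, in hours since Monday 00:00."""
    hard = sum(-1 for start, end in shifts if end - start > MAX_SHIFT_HOURS)
    soft = sum(-1 for (s1, e1), (s2, e2) in zip(shifts, shifts[1:])
               if s2 - e1 < MIN_REST_HOURS)
    return (hard, soft)

# One overlong shift (9 hours) and one short rest gap (4 hours):
week = [(9, 18), (22, 30), (40, 48)]
print(score_roster(week))  # (-1, -1): infeasible, and suboptimal too
```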

De Smet said this is useful for jobs that need to run 24/7, like hospitals, the police force, security firms, and international call centers. According to Red Hat's simulation, it should improve employee well-being by 19 to 85 percent, alongside improvements in retention and customer satisfaction.

Task assignment

Even within a single business department, there are skills only a few employees have. For instance, in a call center, only a few people will be able to speak fluently in both English and French. To avoid customer annoyance, it is imperative that employees with the right skill set be assigned correctly.

With OptaPlanner, managers are able to add employee skills and have the AI assign employees correctly. Using the call center example again, a bilingual advisor may take all calls in French one day when there's high demand for it, but on other days handle a mix of French and English.

For customer support, the constraint solver would be able to assign a problem to the correct advisor, or to the next best thing, before the customer is connected, thus avoiding giving out the wrong advice or having to pass the customer on to another advisor.

In the webinar, De Smet said that while the constraint solver is a valuable asset for businesses looking to reduce costs, this shouldnt be their only aim.

Without having all stakeholders involved in the implementation, the AI could end up harming other areas of the business, like customer satisfaction or employee retention. This is a similar warning to that given by analysts on any AI implementation: it needs to come from a genuine desire to improve the business to get the best outcome.

See original here:
Forget Machine Learning, Constraint Solvers are What the Enterprise Needs - RTInsights

Machine Learning to Predict the 1-Year Mortality Rate After Acute Anterior Myocardial Infarction | TCRM – Dove Medical Press

Yi-ming Li,1,* Li-cheng Jiang,2,* Jing-jing He,1 Kai-yu Jia,1 Yong Peng,1 Mao Chen1

1Department of Cardiology, West China Hospital, Sichuan University, Chengdu, People's Republic of China; 2Department of Cardiology, The First Affiliated Hospital, Chengdu Medical College, Chengdu, People's Republic of China

*These authors contributed equally to this work

Correspondence: Yong Peng; Mao Chen, Department of Cardiology, West China Hospital, Sichuan University, 37 Guoxue Street, Chengdu 610041, People's Republic of China. Email pengyongcd@126.com; hmaochen@vip.sina.com

Abstract: A formal risk assessment for identifying high-risk patients is essential in clinical practice and promoted in guidelines for the management of anterior acute myocardial infarction. In this study, we sought to evaluate the performance of different machine learning models in predicting the 1-year mortality rate of anterior ST-segment elevation myocardial infarction (STEMI) patients and to compare the utility of these models to the conventional Global Registry of Acute Coronary Events (GRACE) risk scores. We enrolled all of the patients aged >18 years with discharge diagnoses of anterior STEMI in the West China Hospital, Sichuan University, from January 2011 to January 2017. A total of 1244 patients were included in this study. The mean patient age was 63.8 ± 12.9 years, and the proportion of males was 78.4%. The majority (75.18%) received revascularization therapy. In the prediction of the 1-year mortality rate, the areas under the curve (AUCs) of the receiver operating characteristic (ROC) curves of the six models ranged from 0.709 to 0.942. Among all models, XGBoost achieved the highest accuracy (92%), specificity (99%), and F1 score (0.72) for predictions with the full variable model. After feature selection, XGBoost still obtained the highest accuracy (93%), specificity (99%), and F1 score (0.73). In conclusion, machine learning algorithms can accurately predict the rate of death after a 1-year follow-up of anterior STEMI, especially the XGBoost model.

Keywords: machine learning, prediction model, acute anterior myocardial infarction
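The registry data behind the paper isn't public, but the evaluation pipeline the abstract describes (train a gradient-boosted classifier, then measure AUC and F1 on held-out patients) can be sketched generically. The sketch below assumes the xgboost and scikit-learn packages, substitutes synthetic data for the STEMI cohort, and uses placeholder hyperparameters rather than the paper's settings:

```python
# Generic sketch of the evaluation pipeline the abstract describes: fit a
# gradient-boosted classifier, then measure discrimination on held-out
# patients. Synthetic data stands in for the (non-public) STEMI registry.
from sklearn.datasets import make_classification
from sklearn.metrics import f1_score, roc_auc_score
from sklearn.model_selection import train_test_split
from xgboost import XGBClassifier

# 1244 "patients" with 20 baseline features and ~8% one-year mortality.
X, y = make_classification(n_samples=1244, n_features=20, weights=[0.92],
                           random_state=42)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, stratify=y, random_state=42)

model = XGBClassifier(n_estimators=200, max_depth=3, learning_rate=0.1)
model.fit(X_train, y_train)

prob = model.predict_proba(X_test)[:, 1]   # predicted mortality risk
pred = model.predict(X_test)               # thresholded at 0.5
print(f"AUC: {roc_auc_score(y_test, prob):.3f}")
print(f"F1:  {f1_score(y_test, pred):.2f}")
```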

Read the original:
Machine Learning to Predict the 1-Year Mortality Rate After Acute Anterior Myocardial Infarction | TCRM - Dove Medical Press