Archive for the ‘Machine Learning’ Category

Clean data, AI advances, and provider/payer collaboration will be key in 2020 – Healthcare IT News

In 2020, the importance of clean data, advancements in AI and machine learning, and increased cooperation between providers and payers will rise to the fore among important healthcare and health IT trends, predicts Don Woodlock, vice president of HealthShare at InterSystems.

All of these trends are good news for healthcare provider organizations, which are looking to improve the delivery of care, enhance the patient and provider experiences, achieve optimal outcomes, and trim costs.

The importance of clean data will become clear in 2020, Woodlock said.

"Data is becoming an increasingly strategic asset for healthcare organizations as they work toward a true value-based care model," he explained. "With the power of advanced machine learning models, caregivers can not only prescribe more personalized treatment, but they can even predict and hopefully prevent issues from manifesting."

However, there is no machine learning without clean data, meaning the data needs to be aggregated, normalized, and deduplicated, he added.

Don Woodlock, InterSystems

Data science teams spend a significant part of their day cleaning and sorting data to make it ready for machine learning algorithms, and as a result, the rate of innovation slows considerably as more time is spent on prep than experimentation, he said. In 2020, healthcare leaders will better see the need for clean data as a strategic asset to help their organizations move forward smartly.
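The aggregate-normalize-deduplicate prep work Woodlock describes can be sketched in a few lines. The record fields and the matching key below are invented for illustration; real patient-record matching is far more involved than this:

```python
def normalize(record):
    """Canonicalize the fields used to identify and compare a record."""
    return {
        "name": " ".join(record["name"].lower().split()),  # case/whitespace
        "dob": record["dob"].strip(),
        "systolic_bp": int(record["systolic_bp"]),         # string -> number
    }

def aggregate_and_dedupe(*sources):
    """Merge records from several systems, keeping one copy per patient."""
    seen = {}
    for source in sources:
        for record in source:
            clean = normalize(record)
            key = (clean["name"], clean["dob"])  # toy identity key
            seen.setdefault(key, clean)          # first occurrence wins
    return list(seen.values())
```

With this, the same patient arriving from an EHR feed and a claims feed with different capitalization and stray whitespace collapses to a single clean record, which is the point: the model-training step downstream only ever sees one canonical row per patient.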

This year, AI and machine learning will move from "if and when" to "how and where," Woodlock predicted.

"AI certainly is at the top of the hype cycle, but the use in practice currently is very low in healthcare," he noted. "This is not such a bad thing, as we need to spend time perfecting the technology and finding the areas where it really works. In 2020, I foresee the industry moving toward useful, practical use-cases that work well, demonstrate value, fit into workflows, and are explainable and bias-free."

Well-developed areas like image recognition and conversational user experiences will find their foothold in healthcare along with administrative use-cases in billing, scheduling, staffing and population management where the patient risks are lower, he added.

In 2020, there will be increased collaboration between payers and providers, Woodlock contended.

The healthcare industry needs to be smarter and more inclusive of all players, from patient to health system to payer, in order to truly achieve a high-value health system, he said.

Payers and providers will begin to collaborate more closely in order to redesign healthcare as a platform, not as a series of disconnected events, he concluded. They will begin to align all efforts on a common goal: positive patient and population outcomes. Technology will help accelerate this transformation by enabling seamless and secure data sharing, from the patient to the provider to the payer.

InterSystems will be at booth 3301 at HIMSS20.

Twitter: @SiwickiHealthIT
Email the writer: bill.siwicki@himssmedia.com
Healthcare IT News is a HIMSS Media publication.

See the original post:
Clean data, AI advances, and provider/payer collaboration will be key in 2020 - Healthcare IT News

An Open Source Alternative to AWS SageMaker – Datanami


There's no shortage of resources and tools for developing machine learning algorithms. But when it comes to putting those algorithms into production for inference, outside of AWS's popular SageMaker, there's not a lot to choose from. Now a startup called Cortex Labs is looking to seize the opportunity with an open source tool designed to take the mystery and hassle out of productionalizing machine learning models.

Infrastructure is almost an afterthought in data science today, according to Cortex Labs co-founder and CEO Omer Spillinger. A ton of energy is going into choosing how to attack problems with data (why, use machine learning, of course!). But when it comes to actually deploying those machine learning models into the real world, it's relatively quiet.

"We realized there are two really different worlds to machine learning engineering," Spillinger says. "There's the theoretical data science side, where people talk about neural networks and hidden layers and backpropagation and PyTorch and TensorFlow. And then you have the actual systems side of things, which is Kubernetes and Docker and Nvidia and running on GPUs and dealing with S3 and different AWS services."

Both sides of the data science coin are important to building useful systems, Spillinger says, but it's the development side that gets most of the glory. AWS has captured a good chunk of the market with SageMaker, which the company launched in 2017 and which has been adopted by tens of thousands of customers. But aside from just a handful of vendors working in the area, such as Algorithmia, the general data-building public has been forced to go it alone when it comes to inference.

A few years removed from UC Berkeley's computer science program and eager to move on from their tech jobs, Spillinger and his co-founders were itching to build something good. So when it came to deciding what to do, they decided to stick with what they knew: working with systems.


"We thought that we could try and tackle everything," he says. "We realized we're probably never going to be that good at the data science side, but we know a good amount about the infrastructure side, so we can help people who actually know how to build models get them into their stack much faster."

Cortex Labs' software begins where the development cycle leaves off. Once a model has been created and trained on the latest data, Cortex Labs steps in to handle the deployment into customers' AWS accounts using its Kubernetes engine (AWS is the only supported cloud at this time; on-prem inference clusters are not supported).

"Our starting point is a trained model," Spillinger says. "You point us at a model, and we basically convert it into a Web API. We handle all the productionalization challenges around it."
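At its core, the "point us at a model, get back a Web API" idea means wrapping a prediction function in request/response handling. The sketch below is a generic plain-Python illustration of that step, not Cortex's actual code; `model_predict` is a stand-in for a real trained model that would be loaded from disk:

```python
import json

def model_predict(features):
    """Stand-in for a trained model (e.g., a pickled scikit-learn model).
    Toy linear model: score = 2*x0 + 3*x1."""
    return 2 * features[0] + 3 * features[1]

def handle_request(body: str) -> str:
    """The heart of a model-serving Web API: decode a JSON request,
    run inference, encode a JSON response."""
    payload = json.loads(body)
    prediction = model_predict(payload["features"])
    return json.dumps({"prediction": prediction})
```

A serving tool mounts a handler like this behind an HTTP route, then layers on the parts that are genuinely hard to get right: containerization, load balancing, autoscaling, and monitoring.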

That could mean shifting inference workloads from CPUs to GPUs in the AWS cloud, or vice versa. It could mean automatically spinning up more AWS servers under the hood when calls to the ML inference service are high, and spinning them down when demand starts to drop. On top of its built-in AWS cost-optimization capabilities, the Cortex Labs software logs and monitors all activity, which is a requirement in today's security- and regulatory-conscious climate.

"Cortex Labs is a tool for scaling real-time inference," Spillinger says. "It's all about scaling the infrastructure under the hood."
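The scale-up/scale-down behavior described above is, in essence, target-tracking autoscaling: pick a per-replica load target and size the fleet to match observed traffic. Here is a minimal sketch of that decision rule (the function name, parameters, and bounds are illustrative, not Cortex's API):

```python
import math

def desired_replicas(requests_per_sec: float, target_per_replica: float,
                     min_replicas: int = 1, max_replicas: int = 10) -> int:
    """Size the fleet so each replica serves roughly target_per_replica
    requests per second, clamped to configured bounds."""
    needed = math.ceil(requests_per_sec / target_per_replica)
    return max(min_replicas, min(max_replicas, needed))
```

A controller would recompute this on a timer from observed request rates and ask the cluster (Kubernetes, in Cortex's case) to scale the deployment to the returned count; this is also the general shape of the algorithm Kubernetes' Horizontal Pod Autoscaler applies.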

Cortex Labs delivers a command line interface (CLI) for managing deployments of machine learning models on AWS

"We don't help at all with the data science," Spillinger says. "We expect our audience to be a lot better than us at understanding the algorithms and understanding how to build interesting models and understanding how they affect and impact their products. But we don't expect them to understand Kubernetes or Docker or Nvidia drivers or any of that. That's what we view as our job."

The software works with a range of frameworks, including TensorFlow, PyTorch, scikit-learn, and XGBoost. The company is open to supporting more. "There's going to be lots of frameworks that data scientists will use, so we try to support as many of them as we can," Spillinger says.

Cortex Labs software knows how to take advantage of EC2 spot instances, and integrates with AWS services like Elastic Kubernetes Service (EKS), Elastic Container Service (ECS), Lambda, and Fargate. The Kubernetes management alone may be worth the price of admission.

"You can think about it as a Kubernetes that's been massaged for the data science use case," Spillinger says. "There are some similarities to Kubernetes in the usage. But it's a much higher level of abstraction because we're able to make a lot of assumptions about the use case."

There's a lack of publicly available tools for productionalizing machine learning models, but that's not to say that they don't exist. The tech giants, in particular, have been building their own platforms for doing just this. Airbnb, for instance, has its BigHead offering, while Uber has talked about its system, called Michelangelo.

"But the rest of the industry doesn't have these machine learning infrastructure teams, so we decided we'd basically try to be that team for everybody else," Spillinger says.

Cortex Labs' software is distributed under an open source license and is available for download from its GitHub page. Making the software open source is critical, Spillinger says, because of the need for standards in this area. There are proprietary offerings in this arena, but they don't have a chance of becoming the standard, whereas Cortex Labs does.

"We think that if it's not open source, it's going to be a lot more difficult for it to become a standard way of doing things," Spillinger says.

Cortex Labs isn't the only company talking about the need for standards in the machine learning lifecycle. Last month, Cloudera announced its intention to push for standards in machine learning operations, or MLOps. Anaconda, which develops a data science platform, is also backing the standards push.

Eventually, the Oakland, California-based company plans to develop a managed service offering based on its software, Spillinger says. But for now, the company is eager to get the tool into the hands of as many data scientists and machine learning engineers as it can.

Related Items:

It's Time for MLOps Standards, Cloudera Says

Machine Learning Hits a Scaling Bump

Inference Emerges As Next AI Challenge

More:
An Open Source Alternative to AWS SageMaker - Datanami

Red Hat Survey Shows Hybrid Cloud, AI and Machine Learning are the Focus of Enterprises – Computer Business Review


The data aspect in particular is something that we often see overlooked

Open source enterprise software firm Red Hat, now a subsidiary of IBM, has conducted its annual survey of its customers, which highlights just how prevalent artificial intelligence and machine learning are becoming, while a talent and skills gap is still slowing down companies' ability to enact digital transformation plans.

Here are the top three takeaways from Red Hat's customer survey:

When asked to best describe their company's approach to cloud infrastructure, 31 percent stated that they run a hybrid cloud, while 21 percent said their firm has a private-cloud-first strategy in place.

The main reasons cited for operating a hybrid cloud strategy were the security and cost benefits it provided. Some respondents noted that data integration was easier within a hybrid cloud.

Not everyone is fully sure about their approach yet, as 17 percent admitted they are in the process of establishing a cloud strategy, while 12 percent said they have no plans at all to focus on the cloud.

When it comes to digital transformation, there has been a notable rise in the number of firms that have undertaken transformation projects. In 2018, under a third of respondents (31 percent) said they were implementing new processes and technology; this year that number has nearly doubled, with 58 percent confirming they are introducing new technology.

Red Hat notes: "The drivers for these projects vary. And the drivers also vary by the role of the respondent. System administrators care most about simplicity. IT architects focus on user experience and innovation. For managers, simplicity, user experience, and innovation are all tied for top priority. Developers prioritize innovation, which, overall, was cited as the most important reason to do digital transformation projects."

However, one in ten surveyed said they are facing a talent and skillset gap that is slowing down the pace at which they can transform their business. The gap is being made worse by the number of new technologies being brought to market, such as artificial intelligence, machine learning, and containerisation, the use of which is expected to grow significantly in the next 24 months.

Artificial intelligence and machine learning models and processes are the clear emerging technology for firms in 2019, as 30 percent said that they are planning to implement an AI or ML project within the next 12 months.

However, enterprises are worried about the compatibility and complexity of implementing AI or ML, with 29 percent stating they are worried about evolving software stacks.

One in five (22 percent) respondents are worried about getting access to the right data. "The data aspect in particular is something that we often see overlooked; obtaining relevant data and cleansing or transforming it in ways that make it a useful input for models can be one of the most challenging aspects of an AI project," Red Hat notes.

Red Hat's survey was created by compiling 876 qualified responses from Red Hat customers during August and September of 2019.

Originally posted here:
Red Hat Survey Shows Hybrid Cloud, AI and Machine Learning are the Focus of Enterprises - Computer Business Review

The Problem with Hiring Algorithms – Machine Learning Times – machine learning & data science news – The Predictive Analytics Times

Originally published in EthicalSystems.org, December 1, 2019

In 2004, when a webcam was relatively unheard-of tech, Mark Newman knew that it would be the future of hiring. One of the first things the 20-year-old did, after getting his degree in international business, was to co-found HireVue, a company offering a digital interviewing platform. Business trickled in. While Newman lived at his parents' house in Salt Lake City, the company, in its first five years, made just $100,000 in revenue. HireVue later received some outside capital, expanded and, in 2012, boasted some 200 clients (including Nike, Starbucks, and Walmart), which would pay HireVue, depending on project volume, between $5,000 and $1 million. Recently, HireVue, which was bought earlier this year by the Carlyle Group, has become the source of some alarm, or at least trepidation, for its foray into the application of artificial intelligence in the hiring process. No longer does the company merely offer clients an asynchronous interviewing service, a way for hiring managers to screen thousands of applicants quickly by reviewing their video interviews. HireVue can now give companies the option of letting machine-learning algorithms choose the best candidates for them, based on, among other things, applicants' tone, facial expressions, and sentence construction.

If that gives you the creeps, you're not alone. A 2017 Pew Research Center report found few Americans to be enthused, and many worried, by the prospect of companies using hiring algorithms. More recently, around a dozen interviewees assessed by HireVue's AI told the Washington Post that it felt alienating and dehumanizing to have to wow a computer before being deemed worthy of a company's time. They also wondered how their recordings might be used without their knowledge. Several applicants mentioned passing on the opportunity because thinking about the AI interview, as one of them told the paper, "made my skin crawl." Had these applicants sat for a standard 30-minute interview, comprised of a half-dozen questions, the AI could have analyzed up to 500,000 data points. Nathan Mondragon, HireVue's chief industrial-organizational psychologist, told the Washington Post that each one of those points becomes an ingredient in the person's calculated score, between 1 and 100, on which hiring decisions can depend. New scores are ranked against a store of traits (mostly having to do with language use and verbal skills) from previous candidates for a similar position who went on to thrive on the job.
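HireVue's actual model is proprietary, but the generic mechanism the article describes (scoring a candidate's extracted traits against a benchmark built from previously successful hires, on a 1-100 scale) can be illustrated with a toy similarity score. Everything here, from the trait vectors to the scoring formula, is an invented simplification:

```python
import math

def cosine_similarity(a, b):
    """Angle-based similarity between two trait vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0

def score_candidate(candidate_traits, benchmark_traits):
    """Map similarity to the top-performer benchmark onto a 1-100 score."""
    return round(1 + 99 * cosine_similarity(candidate_traits, benchmark_traits))
```

The toy also makes the critics' point concrete: whatever patterns, including biases, are baked into the benchmark of past hires are reproduced mechanically in every new score.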

HireVue wants you to believe that this is a good thing. After all, their pitch goes, humans are biased. If something like hunger can affect a hiring manager's decision (let alone classism, sexism, lookism, and other isms), then why not rely on the less capricious, more objective decisions of machine-learning algorithms? No doubt some job seekers agree with the sentiment Loren Larsen, HireVue's Chief Technology Officer, shared recently with the Telegraph: "I would much prefer having my first screening with an algorithm that treats me fairly rather than one that depends on how tired the recruiter is that day." Of course, the appeal of AI hiring isn't just about doing right by the applicants. As a 2019 white paper from the Society for Industrial and Organizational Psychology notes, AI applied to assessing and selecting talent "offers some exciting promises for making hiring decisions less costly and more accurate for organizations while also being less burdensome and (potentially) fairer for job seekers."

Do HireVue's algorithms treat potential employees fairly? Some researchers in machine learning and human-computer interaction doubt it. Luke Stark, a postdoc at Microsoft Research Montreal who studies how AI, ethics, and emotion interact, told the Washington Post that HireVue's claims (that its automated software can glean a worker's personality and predict their performance from such things as tone) should make us skeptical:

Systems like HireVue, he said, have become quite skilled at spitting out data points that seem convincing, even when they're not backed by science. And he finds this "charisma of numbers" really troubling because of the overconfidence employers might lend them while seeking to decide the path of applicants' careers.

The best AI systems today, he said, are notoriously prone to misunderstanding meaning and intent. But he worried that even their perceived success at divining a person's true worth could help perpetuate a homogenous corporate monoculture of automatons, each new hire modeled after the last.

Eric Siegel, an expert in machine learning and author of Predictive Analytics: The Power to Predict Who Will Click, Buy, Lie, or Die, echoed Stark's remarks. In an email, Siegel told me, "Companies that buy into HireVue are inevitably, to a great degree, falling for that feeling of wonderment and speculation that a kid has when playing with a Magic Eight Ball." That, in itself, doesn't mean HireVue's algorithms are completely unhelpful. "Driving decisions with data has the potential to overcome human bias in some situations, but also, if not managed correctly, could easily instill, perpetuate, magnify, and automate human biases," he said.

To continue reading this article click here.

See the rest here:
The Problem with Hiring Algorithms - Machine Learning Times - machine learning & data science news - The Predictive Analytics Times

Machine Learning and Artificial Intelligence Are Poised to Revolutionize Asthma Care – Pulmonology Advisor

The advent of large data sets from many sources (big data), machine learning, and artificial intelligence (AI) are poised to revolutionize asthma care on both the investigative and clinical levels, according to an article published in the Journal of Allergy and Clinical Immunology.

According to the researchers, a patient with asthma endures approximately 2190 hours of experiencing, and treating or not treating, their asthma symptoms. During 15-minute clinic visits, only a short amount of time is spent understanding and treating what is a complex disease, and only a fraction of the necessary data is captured in the electronic health record.

"Our patients and the pace of data growth are compelling us to incorporate insights from Big Data to inform care," the researchers posit. "Predictive analytics, using machine learning and artificial intelligence, has revolutionized many industries, including the healthcare industry."

When used effectively, big data, in conjunction with electronic health record data, can transform the patient's healthcare experience. This is especially important as healthcare continues to embrace both e-health and telehealth practices. The data resulting from these thoughtful digital health innovations can result in personalized asthma management, improve timeliness of care, and capture objective measures of treatment response.

According to the researchers, the use of machine learning algorithms and AI to predict asthma exacerbations and patterns of healthcare utilization are within both technical and clinical reach. The ability to predict who is likely to experience an asthma attack, as well as when that attack may occur, will ultimately optimize healthcare resources and personalize patient management.

The use of longitudinal birth cohort studies and multicenter collaborations like the Severe Asthma Research Program have given clinical investigators a broader understanding of the pathophysiology, natural history, phenotypes, seasonality, genetics, epigenetics, and biomarkers of the disease. Machine learning and data-driven methods have utilized this data, often in the form of large datasets, to cluster patients into genetic, molecular, and immune phenotypes. These clusters have led to work in the genomics and pharmacogenomics fields that should ultimately lead to high-fidelity exacerbation predictions and the advent of true precision medicine.
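The cluster-into-phenotypes step described above is typically done with an unsupervised method such as k-means. Below is a minimal, self-contained sketch on synthetic two-feature "biomarker" points; real phenotyping studies use far richer data and more careful methods:

```python
import math
import random

def kmeans(points, k, iters=20, seed=0):
    """Partition feature vectors into k clusters by nearest centroid."""
    rng = random.Random(seed)
    centers = rng.sample(points, k)          # initialize from the data
    clusters = [[] for _ in range(k)]
    for _ in range(iters):
        # Assignment step: each point joins its nearest center's cluster.
        clusters = [[] for _ in range(k)]
        for p in points:
            nearest = min(range(k), key=lambda i: math.dist(p, centers[i]))
            clusters[nearest].append(p)
        # Update step: move each center to its cluster's mean.
        for i, members in enumerate(clusters):
            if members:  # keep the old center if a cluster empties out
                centers[i] = tuple(sum(dim) / len(members)
                                   for dim in zip(*members))
    return centers, clusters
```

On two well-separated synthetic groups of three points each, the algorithm recovers the original membership; in a phenotyping setting, the resulting clusters are what investigators then try to map onto genetic, molecular, and immune profiles.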

This work, the researchers noted, if translated into clinical practice, can potentially link genetic traits to phenotypes that can, for example, predict rapid response or non-response to medications like albuterol and steroids, or identify an individual's risk for cortisol suppression.

As with any innovation, though, challenges abound. One in particular is the siloed nature of the clinical and scientific insights about asthma that have come to light in recent years. Although data are now being generated and interpreted across various domains, researchers must still contend with a lack of data standards and disease definitions, data interoperability and sharing difficulties, and concerns about data quality and fidelity.

Machine learning and AI present their own challenges; namely, those who utilize these technologies must consider the issues of fairness, bias, privacy, and medical bioethics. Legal accountability and medical responsibility issues must also be considered as algorithms are adopted into routine practice.

"We must, as clinicians and researchers, constructively transform the concern and lack of understanding many clinicians have about digital health, [machine learning], and [artificial intelligence] into educated and critical engagement," the researchers concluded. "Our job is to use [machine learning and artificial intelligence] tools to understand and predict how asthma affects patients and help us make decisions at the patient and population levels to treat it better."

Reference

Messinger AI, Luo G, Deterding RR. The doctor will see you now: How machine learning and artificial intelligence can extend our understanding and treatment of asthma [published online December 25, 2019]. J Allergy Clin Immunol. doi: 10.1016/j.jaci.2019.12.898

Link:
Machine Learning and Artificial Intelligence Are Poised to Revolutionize Asthma Care - Pulmonology Advisor