Archive for the ‘Machine Learning’ Category

Artificial Intelligence and Machine Learning in Healthcare | JHL – Dove Medical Press

Innovative scientific and technological developments have ushered in a remarkable transformation in medicine that continues to impact virtually all stakeholders, from patients to providers to Healthcare Organizations (HCOs) and the community in general.1,2 Increasingly incorporated into clinical practice over the past few decades, these innovations include widespread use of Electronic Health Records (EHR), telemedicine, robotics, and decision support for surgical procedures. Ingestible microchips allow healthcare providers to monitor patient compliance with prescribed pharmacotherapies and their therapeutic efficacy through big data analysis,1-5 as well as to streamline drug design, screening, and discovery.6 Adoption of novel medical technologies has allowed US healthcare to maintain its vanguard position in select domains of clinical care, such as improving access by reducing wait times, enriching patient-provider communication, enhancing diagnostic accuracy, improving patient satisfaction, augmenting outcome prediction, decreasing mortality, and extending life expectancy.3-5,7

Yet despite the theoretical advantages of these innovative medical technologies, many issues remain that require careful consideration as we integrate these novel technologies into our armamentarium. This descriptive, literature-based article explicates the advantages, future potential, challenges, and caveats of the predictable and impending importation of AI and ML into all facets of healthcare.

By far the most revolutionary of these novel technologies is Artificial Intelligence (AI), a branch of computer science that attempts to construct intelligent entities via Machine Learning (ML), which is the ability of computers to learn without being explicitly programmed.8 ML utilizes algorithms to identify patterns, and its subspecialty Deep Learning (DL) employs artificial neural networks with intervening layers to identify patterns in data.1,8 Although ML was first conceived by computer scientist Arthur Samuel as far back as 1959, applications of AI have only recently begun to pervade our daily life, with computers simulating human cognition (eg, visual perception, speech recognition, decision-making, and language translation).8 Everyday examples of AI include smart phones, autonomous vehicles, digital assistants (eg, Siri, Alexa), chatbots and auto-correcting software, online banking, facial recognition, and transportation (eg, Uber, air traffic control operations, etc.). The iterative nature of ML allows the machine to adapt its systems and outputs following exposure to new data, either through supervised learning (ie, utilizing training algorithms to predict future events from historical data inputs) or through unsupervised learning, whereby the machine explores the data and attempts to develop patterns or structures de novo. The latter methodology is often used to determine and distinguish outliers. Neural networks in AI utilize an adaptive system comprised of an interconnected group of artificial neurons and mathematical or computational modeling for processing information from input and output data via pattern recognition.9 Through predictive analytics, ML has demonstrated its effectiveness in the realm of finance (eg, identifying credit card fraud) and in the retail industry to anticipate customer behavior.1,10,11
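As a concrete illustration of the supervised/unsupervised distinction described above, the following Python sketch (using scikit-learn on synthetic data; it is purely illustrative and not drawn from any of the cited studies) trains a classifier from labeled historical inputs and then uses an unsupervised model to flag outliers:

```python
# Minimal sketch: supervised vs unsupervised learning with scikit-learn.
# The synthetic data and model choices are assumptions for illustration only.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)

# Supervised learning: a training algorithm learns to predict labels
# (eg, outcome categories) from historical, labeled inputs.
X_train = rng.normal(size=(200, 3))                          # historical features
y_train = (X_train[:, 0] + X_train[:, 1] > 0).astype(int)    # known outcomes
clf = LogisticRegression().fit(X_train, y_train)
print("Predicted class for a new case:", clf.predict(rng.normal(size=(1, 3))))

# Unsupervised learning: the machine explores unlabeled data and finds
# structure on its own; here it is used to flag outliers, as noted above.
X_unlabeled = np.vstack([rng.normal(size=(200, 3)), [[8.0, 8.0, 8.0]]])
detector = IsolationForest(random_state=0).fit(X_unlabeled)
print("Outlier flags (-1 = outlier):", detector.predict(X_unlabeled[-3:]))
```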

Extrapolation of AI to medicine and healthcare is expected to increase exponentially in the three principal domains of research, teaching, and clinical care. With improved computational efficiencies, common applications of ML in healthcare will include enhanced diagnostic modalities, improved therapeutic interventions, augmented and refined workflow through processing of large amounts of hospital and national EHR data, more accurate prediction of clinical course through precision and personalized medicine, and genome interpretation. ML can provide basic clinical triage in geographical areas inaccessible to specialty care. It can also detect treatable psychiatric conditions via analysis of speech patterns and facial expressions in affective and anxiety disorders (eg, bipolar disorder, major depression, anxiety spectrum and psychotic disorders, attention deficit hyperactivity disorder, addiction disorders, Tourette's Syndrome, etc.)12,13 (Figure 1). Deep learning algorithms are highly effective compared to human interpretation in medical subspecialties where pattern recognition plays a dominant role, such as dermatology, hematology, oncology, histopathology, ophthalmology, radiology (eg, programmed image analyses), and neurology (eg, electroencephalographic analysis for seizures). Artificial neural networks are being developed and employed for diagnostic accuracy, timely interventions, outcomes, and prognostication of neurosurgical conditions such as spinal stenosis, traumatic brain injury, brain tumors, and cerebral vasospasm following aneurysmal subarachnoid hemorrhage.14 Theoretically, ML can improve triage by directing patients to proper treatments at lower cost and by keeping those with chronic conditions out of costly and time-intensive emergency care centers. In clinical practice, ~5% of all patients account for 50% of healthcare costs, and those with chronic medical conditions comprise 85% of total US healthcare costs.3

Figure 1 Potential Applications of Machine Learning.

Patients can benefit from ML in other ways. For follow-up visits, not having to arrange transportation or take time off work for face-to-face interaction with healthcare providers may be an attractive alternative to patients and to the community, even more so in restricted circumstances like the recent COVID-19 pandemic-associated lockdowns and social distancing.

Ongoing ML-related research and its applications are robust. Companies developing automation, topological data analysis, genetic mapping, and communications systems include Pathway Genomics, Digital Reasoning Systems, Ayasdi, Apixio, Butterfly Network, BenevolentAI, Flatiron Health, and several others.1,10

Despite the many theoretical advantages and potential benefits of ML in healthcare, several challenges (Figure 2) must be met15 before it can achieve broader acceptance and application.

Figure 2 Caveats and Challenges with use of Machine Learning.

Frequent software updates will be necessary to ensure continued improvement in ML-assisted models over time. To encourage the use of such software, the Food and Drug Administration has recommended a pre-certified approach for agility.1,2 To be of pragmatic clinical import, high-quality input data are paramount for validating and refining diagnostic and therapeutic procedures. At present, however, there is a dearth of robust comparative data validating ML outputs, which typically take the form of area-under-the-curve analyses, against the commonly accepted gold standard of blinded, placebo-controlled, randomized clinical trials.1,7 Clinical data generated from ML-assisted calculations and more rigorous multivariate analysis will entail integration with other relevant patient demographic information (eg, socio-economic status, including values, social and cultural norms, faith and belief systems, social support structures in situ, etc.).16

All stakeholders in the healthcare delivery system (HCOs, providers, patients, and the community) will have to adjust to the paradigm shift away from traditional in-person interactions. Healthcare providers will have to surmount actual or perceived added workload to avoid burnout, especially during the initial adaptive phase. They will also have to cope with increased ML-generated false-positive and false-negative alerts. The traditional practice of clinical medicine is deeply entrenched in the framework of formulating a clinical hypothesis via rigorous history-taking and physical examination, followed by sequential confirmation through judicious ancillary and diagnostic testing. Such traditional in-person interactions have underscored the importance of an empathetic approach to the provider-patient relationship. This traditional view has been characterized as archaic, particularly by those with a futuristic mindset who envision an evolutionary change leading to whole-body scans that deliver a more accurate assessment of health and diagnosis of disease. However, incidental findings not attributable to symptoms may lead to excessive ancillary tests, underscoring the adage that "testing begets more testing".17

Healthcare is one of the fastest growing segments of the world economy and is presently at a crossroads of unprecedented transformation. As an example, US healthcare expenditure has accelerated dramatically over the past several decades (~19% of Gross Domestic Product; exceeding $4.1 trillion, or $12,500 per person per year)18 with widespread ramifications for all stakeholders, including patients and their families, healthcare providers, government, community, and the US economy.1,3-5 A paradigm shift from volume-based to performance-based reimbursements from third-party payers warrants focus on some of the most urgent issues in healthcare, including cost containment, access, and providing low-cost, high-value healthcare commensurate with the proposed six-domain framework (safe, effective, patient-centered, timely, efficient, and equitable) articulated by the Institute of Medicine in 2001.3-5,19 Of note, uncontrolled use of expensive technology and excessive ancillary testing account for ~25-30% of total healthcare costs.17 While technologies will probably never completely replace the function of healthcare providers, they will definitely transform healthcare, benefiting both providers and patients. However, there is a paucity of cost-benefit data and analysis of the use of these innovative emerging medical technologies. All stakeholders should remain cost-conscious, as the newer technological diagnostic approaches may further drive up the already rising costs of healthcare. Educating and training the next generation of healthcare providers in the context of AI will also require transformation, with simulation approaches and inter-professional education. Therefore, the value proposition of novel technologies must be critically appraised via longitudinal and continuous evaluations of patient outcomes in terms of their impact on health and disease management.13 To mitigate healthcare costs, we must control the technological imperative: the overuse of technology because of its easy availability, without due consideration of disease course or outcomes and irrespective of cost-benefit ratio.3

Issues surrounding consumer privacy and proprietorship of colossal quantities of healthcare data under an AI regime are legitimate concerns. Malicious or unintentional breaches may result in financial or other harm. Akin to the challenges encountered with EHR, easy access to data and interoperability, with broader compatibility of interfaces by healthcare providers spread across space and time, will present unique challenges. Databases will likely be owned by large, profit-oriented technology companies that may decide to dispense data to third parties. Additional costs are predictable as well, particularly during the early stages of development of ML algorithms, and are likely to be more bearable for large HCOs. Smaller organizations can be expected to delay adoption of such processes, with the resulting potential for mergers and acquisitions or even failure of smaller hospitals and clinics. Concerns regarding ownership, responsibility, and accountability of ML algorithms may arise owing to the probability of detrimental outcomes, which ideally should be apportioned between developer, interpreter, healthcare provider, and patient.1 Simulation techniques can be preemptively utilized for ML training in clinical scenarios; practice runs may require formal certification courses and workshops. Regulations must be developed by policymakers and legislative bodies to delineate the role of third-party payers in ML-assisted healthcare financing. Finally, education and training via media outlets, the internet, and social media will be necessary to address public opinion, misperceptions, and naïve expectations about ML-assisted algorithms.7

For centuries, the practice of medicine has been deeply embedded in a tradition of meticulous history-taking, physical examination, and thoughtful ancillary investigations to confirm clinical hypotheses and diagnoses. The great physician Sir William Osler (1849-1919)14,20 encapsulated the desired practice of good medicine with his famous quotes: "Listen to your patient, he is telling you the diagnosis", "The good physician treats the disease; the great physician treats the patient who has the disease", and "Medicine is a science of uncertainty and an art of probability". With rapid technological advances, we are at the crossroads of practicing medicine in a way that would be distinctly different from the traditional approach and practice(s), a change that may be characterized as evolutionary.

AI and ML have enormous potential to transform healthcare and the practice of medicine, although these modalities will never substitute for an astute and empathetic bedside clinician. Furthermore, several issues remain as to whether their value proposition and cost-benefit are complementary to the overarching focus on providing low-cost, high-value healthcare to the community at large. While innovative technological advances play a critical role in the rapid diagnosis and management of disease, the phenomenon of the technological imperative3-5,17 deserves special consideration among both the public and providers for the future use of AI and ML in delivering healthcare.

The author reports no conflicts of interest in this work.

1. Bhardwaj R, Nambiar AR, Dutta D. A Study of Machine Learning in Healthcare. 2017 IEEE 41st Annual Computer Software and Applications Conference. 236-241. Available from: https://ieeexplore.ieee.org/stamp/stamp.jsp?arnumber=8029924. Accessed March 30, 2022.

2. Deo RC. Machine Learning in Medicine. Circulation. 2015;132:1920-1930. doi:10.1161/CIRCULATIONAHA.115.001593

3. Shi L, Singh DA. Delivering Health Care in America: A Systems Approach. 7th ed. Burlington, MA: Jones & Bartlett Learning; 2019.

4. Barr DA. Introduction to US Health Policy: The Organization, Financing, and Delivery of Health Care in America. 4th ed. Baltimore, MD: Johns Hopkins University Press; 2016.

5. Wilensky SE, Teitelbaum JB. Essentials of Health Policy and Law. 4th ed. Burlington, MA: Jones & Bartlett Learning; 2020.

6. Gupta R, Srivastava D, Sahu M, Tiwari S, Ambasta RK, Kumar P. Artificial intelligence to deep learning: machine intelligence approach for drug discovery. Mol Divers. 2021;25:1315-1360. doi:10.1007/s11030-021-10217-3

7. Dabi A, Taylor AJ. Machine Learning, Ethics and Brain Death: Concepts and Framework. Arch Neurol Neurol Disord. 2020;3:1-9.

8. Handelman GS, Kok HK, Chandra RV, Razavi AH, Lee MJ, Asadi H. eDoctor: machine learning and the future of medicine. J Intern Med. 2018;284:603-619. doi:10.1111/joim.12822

9. Hopfield JJ. Neural networks and physical systems with emergent collective computational abilities. Proc Natl Acad Sci U S A. 1982;79:2554-2558. doi:10.1073/pnas.79.8.2554

10. Ghassemi M, Naumann T, Schulam P, Beam AL, Ranganath R. Opportunities in Machine Learning for Healthcare. 2018. Available from: https://pdfs.semanticscholar.org/1e0b/f0543d2f3def3e34c51bd40abb22a05937bc.pdf. Accessed March 30, 2022.

11. Jnr YA. Artificial Intelligence and Healthcare: a Qualitative Review of Recent Advances and Predictions for the Future. Available from: https://pimr.org.in/2019-vol7-issue-3/YawAnsongJnr_v3.pdf. Accessed March 30, 2022.

12. Chandler C, Foltz PW, Elvevag B. Using machine learning in psychiatry: the need to establish a framework that nurtures trustworthiness. Schizophr Bull. 2019;46:11-14.

13. Ray A, Bhardwaj A, Malik YK, Singh S, Gupta R. Artificial intelligence and Psychiatry: an overview. Asian J Psychiatr. 2022;70:103021. doi:10.1016/j.ajp.2022.103021

14. Ganapathy K. Artificial intelligence in neurosciences: are we really there? Available from: https://www.sciencedirect.com/science/article/pii/B9780323900379000084. Accessed June 10, 2022.

15. Sunarti S, Rahman FF, Naufal M, Risky M, Febriyanto K, Mashina R. Artificial intelligence in healthcare: opportunities and risk for future. Gac Sanit. 2021;35(S1):S67-S70. doi:10.1016/j.gaceta.2020.12.019.

16. Yu B, Beam A, Kohane I. Artificial Intelligence in Healthcare. Nat Biomed Eng. 2018;2:719-731. doi:10.1038/s41551-018-0305-z

17. Bhardwaj A. Excessive Ancillary Testing by Healthcare Providers: Reasons and Proposed Solutions. J Hospital Med Management. 2019;5(1):1-6.

18. NHE Fact Sheet. Centers for Medicare and Medicaid Services. Available from: https://www.cms.gov/Research-Statistics-Data-and-Systems/Statistics-Trends-and-Reports/NationalHealthExpendData/NHE-Fact-Sheet. Accessed April 14, 2022.

19. Institute of Medicine (IOM). Crossing the Quality Chasm: A New Health System for the 21st Century. Washington, D.C: National Academy Press; 2001.

20. Bliss M. William Osler: A Life in Medicine. New York, NY: Oxford University Press; 1999.

More:
Artificial Intelligence and Machine Learning in Healthcare | JHL - Dove Medical Press

What is machine learning and why does it matter for business? – Verdict

Machine learning is a nifty branch of artificial intelligence (AI) that uses algorithms to make predictions. In essence, it's giving computers the power to learn by themselves, without explicit human instruction.

While this may initially draw thoughts to out-of-control sentient robots in sci-fi films, you've probably been using the technology every day. From what appears at the top of your social media feed, to the life-saving (or life-ruining) predictive text system in your mobile phone, or even the sci-fi flicks that Netflix recommends after you finish Blade Runner 2049, machine learning has been integrated into mainstream technology for decades. It's now even being used to treat cancer patients and help doctors predict the outcome of treatments.

The sole way of programming computers, before AI, was to create a specific and detailed set of instructions for them to follow. This is a time-consuming task for one person or even whole teams of people, and sometimes it's just not possible at all.

For example, you could quite easily get a computer to create an artistic replica of your favourite family photo by giving it a precise set of instructions. But it would be extremely difficult to tell a computer how to recognise and identify different people within that photo. This is where machine learning comes into play, programming the computer to learn through experience, much like humans do, which is what artificial intelligence is all about.

Most businesses handling large amounts of data have discovered the advantages of using machine learning technology. It's fast becoming essential for organisations wanting to be at the cutting edge of societal predictions, or for companies looking to beat their competitors to the latest trends and profitable opportunities.

Transport, retail, governments, healthcare, financial services and other sectors are all utilising the technology to gain valuable insights that may not have been attainable through manual action.

The most common and recognisable use of machine learning for businesses is chatbots. Companies have been able to implement this technology to deal with customer queries around the clock without increasing their headcount. Facebook Messenger is a popular platform which allows businesses to easily program a chatbot to perform tasks, understand questions and guide customers through to where they need to go.

Online retail businesses like Amazon, ASOS and eBay use machine learning to recommend products they think their customers will be interested in. This is a division of the technology called customer behaviour modelling. Using collected data on their customers' habits, companies are able to categorise what users with similar browsing behaviours might want to see.
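As a rough illustration of customer behaviour modelling, the hypothetical Python sketch below scores unseen products for a user by weighting other users' interactions by how similar their browsing histories are. The toy data and the cosine-similarity approach are assumptions for illustration, not a description of how Amazon, ASOS or eBay actually build their recommenders.

```python
# Illustrative sketch of customer behaviour modelling: recommend items that
# users with similar browsing/purchase histories have interacted with.
import numpy as np

# Rows = users, columns = products; 1 = viewed/purchased, 0 = not.
interactions = np.array([
    [1, 1, 0, 0],
    [1, 1, 1, 0],
    [0, 0, 1, 1],
])

def recommend_for(user_idx, top_n=2):
    """Score unseen products by the behaviour of similar users."""
    norms = np.linalg.norm(interactions, axis=1, keepdims=True)
    sims = (interactions @ interactions.T) / (norms @ norms.T + 1e-9)
    scores = sims[user_idx] @ interactions          # weight items by user similarity
    scores[interactions[user_idx] == 1] = -np.inf   # drop items already seen
    return list(np.argsort(scores)[::-1][:top_n])

print(recommend_for(0))  # products user 0 has not seen, favoured by similar users
```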

This trend is set to carry on growing. Data from GlobalData shows the proportion of technology and communications companies hiring for AI-related positions in May was up 58.9% from last year, while recent research from Helomics predicts the global AI market will hit a whopping $20bn by 2025.

GlobalData is the parent company of Verdict and its sister publications.

Excerpt from:
What is machine learning and why does it matter for business? - Verdict

8 most innovative AI and machine learning companies – TechRepublic


As enterprises increasingly try to put their data to work using artificial intelligence and machine learning, the landscape of vendors and open source projects can be daunting. And if anything, it's only becoming more chaotic.

As FirstMark partner Matt Turck has written, in 2021 the industry saw "a rapid emergence of a whole new generation of data and ML startups," and in 2022 this trend looks set to continue. AI/ML is so hot, in fact, that even with a recession looming, CIOs remain loath to cut spending on AI/ML projects.

So where will enterprises spend that money? Or, rather, with whom?

To help you navigate the sometimes bewildering array of AI/ML options out there, I talked with data science professionals to get their picks for the most innovative companies in AI/ML. Though historically the industry focused on "gee-whiz" AI, such as computers that could play games or seemingly offer human reason, much of today's innovation is in less sexy but more essential areas like data preparation and operational concerns.


For many enterprises, the "easy button" for AI/ML will be to use the AI/ML services offered through their preferred cloud vendor. Though Google usually gets credited with having the strongest portfolio of AI/ML services, any of the big clouds will prove a solid choice. Google has led the market by open sourcing key frameworks like TensorFlow, and more recently has made it easy for companies to run things like TensorFlow in production with Cloud AutoML.


AWS has tended to innovate less in terms of frameworks and has instead focused on tooling like SageMaker Studio, an IDE for machine learning, to help enterprises do more with less expertise. Microsoft offers something similar in Azure Machine Learning, enabling users to configure machine learning operations and pipelines. All three clouds also offer a bevy of API-driven services like Amazon Polly, a text-to-speech service.
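As a small example of what "API-driven" means in practice, the sketch below calls Amazon Polly through the boto3 SDK. It assumes boto3 is installed and AWS credentials and a region are already configured; the text, voice and output file are arbitrary choices for illustration.

```python
# Minimal sketch of calling an API-driven AI service such as Amazon Polly.
import boto3

polly = boto3.client("polly")
response = polly.synthesize_speech(
    Text="Your order has shipped.",
    OutputFormat="mp3",
    VoiceId="Joanna",
)

# The synthesized audio comes back as a stream; save it to a file.
with open("speech.mp3", "wb") as f:
    f.write(response["AudioStream"].read())
```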

As stated, many enterprises will begin and end with the AI/ML services they discover with their default cloud provider. That's fine, but it misses much of the innovation happening elsewhere, in startups and beyond. Though every enterprise should look to their cloud provider for AI/ML services, they should also consider innovators like those profiled below.

Though enterprises embraced R in the early days of data science, Python has since supplanted R to become the dominant language for AI/ML. Dask, an open source project that facilitates scaling Python workloads, has become a must-have for the data science crowd because it makes it possible to scale popular computational libraries like NumPy, pandas and scikit-learn beyond a single machine to multi-core machines and distributed clusters.

Scikit-learn can tap into Dask for parallelism, enabling the data scientist to train estimators using all the cores of a cluster without making significant changes to the underlying code. This sort of parallelism is critical for ML, because data scientists need to break up computations across a cluster to execute on large datasets.
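A minimal sketch of that pattern, assuming dask.distributed, joblib and scikit-learn are installed; the estimator, parameter grid and scheduler address are placeholders rather than a recommended setup:

```python
# Sketch of scikit-learn tapping Dask for parallelism via the joblib backend.
import joblib
from dask.distributed import Client
from sklearn.datasets import make_classification
from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVC

client = Client()  # or Client("tcp://scheduler-address:8786") for a real cluster

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
search = GridSearchCV(SVC(), {"C": [0.1, 1, 10], "gamma": [0.01, 0.1]}, cv=5)

# Fits run across the cores of the Dask cluster; the estimator code is unchanged.
with joblib.parallel_backend("dask"):
    search.fit(X, y)

print(search.best_params_)
```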

The company behind Dask, Coiled, manages Dask clusters on AWS or Google Cloud, thereby making it easier to run Dask clusters in production. Coiled's Dask innovation is all about lowering the bar for Python professionals doing more with ML.

With Coiled, data scientists can stick with the Python libraries they love, while Coiled takes care of "provisioning cloud resources, handling instance failures, coordinating data synchronization across machines and securing the cloud environment," as Dask developer James Bourbeau explained.

In a similar manner, OctoML introduces DevOps-level agility and automation to ML deployment on any hardware. Or, even more simply put, OctoML optimizes ML model performance on any hardware, no matter where it's running. Given that getting models into production is one of the biggest barriers to enterprise productivity with AI/ML, OctoML is tackling a tough problem.


The deployment problem is made more difficult by the rigid set of dependencies between an ML training framework like PyTorch, the model itself and the different hardware it will need to run on. OctoML automatically creates customized code for specific hardware parameters, selects appropriate libraries and compiler options and then configures hardware configuration settings to fine-tune performance. This requires knowledge of more than 80 deployment targets.

Such optimization of model deployment led the company founders to start by open sourcing what became Apache TVM, a deep learning compiler that has become the de facto deep learning compiler for ML giants like Amazon and Facebook. Building off that expertise, OctoML now tries to make it simpler for all companies to deploy machine learning models on varied hardware configurations.

Keeping with the theme of making ML more approachable for a wider population of users, MindsDB is all about bringing the power of ML to something enterprises already use daily: their database. As one person explained to me, MindsDB is a way to raise the IQ of databases.

How so? By allowing users to add an ML-based prediction layer to their datasets, meaning that anyone with knowledge of SQL can add ML capabilities to their databases. This layer, or extension of SQL, makes it so ML models can be created, queried and maintained as if they were database tables. MindsDB meets data professionals where they are, giving them a shortcut to ML proficiency.

In this way, MindsDB helps organizations make better use of their data to yield forecasts of what future data will look like based on past data. Of course, ML has long depended on pulling data from databases and other sources. The difference with MindsDB's approach is that companies don't need to go through the bother of extracting, transforming and loading their data into other systems. MindsDB's big innovation is to make ML possible right in the database.
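To make the idea of a SQL-accessible prediction layer concrete, here is a conceptual Python sketch using SQLite and scikit-learn. It is not MindsDB's actual interface; it simply registers a trained model as a SQL function so predictions can be queried alongside ordinary columns.

```python
# Conceptual sketch only: exposing an ML model's predictions through plain SQL,
# in the spirit of the "prediction layer" described above (NOT MindsDB's API).
import sqlite3
from sklearn.linear_model import LinearRegression

# Train a toy model: predict price from size (synthetic data, for illustration).
model = LinearRegression().fit([[50], [80], [120]], [100_000, 160_000, 240_000])

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE listings (id INTEGER, size REAL)")
conn.executemany("INSERT INTO listings VALUES (?, ?)", [(1, 60.0), (2, 100.0)])

# Register the model as a SQL function so predictions can be queried like data.
conn.create_function("predicted_price", 1,
                     lambda size: float(model.predict([[size]])[0]))

for row in conn.execute("SELECT id, size, predicted_price(size) FROM listings"):
    print(row)
```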

I may ski 150+ days each season in Utah's backcountry, but I'm sadly not in contention to become a professional skier. As such, I'll never get to use Zone7, the AI-driven human performance platform that analyzes extensive athlete data to suggest optimal rest and training regimens for professional sports teams.

If that seems niche, perhaps it is. But it led Liverpool, one of the most successful soccer clubs on the planet, to reduce injuries by a third last season, even as the team competed across multiple competitions and won two of them. Sports is a big business, and a swelling number of professional teams across soccer, American football and rugby leagues are turning to Zone7.


So what does the company do, exactly? As the company has detailed, Zone7 analyzes comprehensive player data, including in-game and training positioning information, as well as biometric, strength, sleep and stress levels. The platform, in turn, identifies undetected risk patterns, creates real-time injury threat alerts, and offers practical intervention methods to help guide and inform coaches' decision-making.

Zone7, in other words, isn't something you or your company are likely to use. It is, however, something that the team you support just might embrace. Given my soccer team's injury record (Arsenal), it can't happen soon enough.

BLOOM is an open source, multilingual language model that aims to tackle the biases ML systems inherit from their training texts. In every other example provided here, the AI/ML innovations are for sale. Not BLOOM. In fact, this is a key requirement of the project as it attempts to break large technology companies' grip on natural language processing. Though companies are involved, organized into a group called BigScience, no one company controls BLOOM.

The costs and expertise associated with training large language models to make statistical inferences between billions of words are immense, so only big companies can afford to participate. By contrast, BLOOM is being developed and shaped by hundreds of researchers, including some from Facebook and Google, working as individuals in true open source fashion.

Rather than taking the standard approach of training the model on text pulled from the Internet (just imagine how impartial a model based on a day's worth of text from Twitter would be), the researchers carefully selected roughly two-thirds of their 341-billion-word data set from 500 sources. This doesn't guarantee that BLOOM will be bias-free, but as an open source project, contributors can improve it to remove biases.

Importantly, BLOOM will be made available free of charge. Yes, there will be a cost associated with running it, but Hugging Face and other companies are figuring out ways to make the costs minimal. BLOOM is not yet available to use, but it may significantly democratize NLP.

Landing AI should be on everyone's list of AI/ML innovators if for no other reason than it was founded by Andrew Ng, co-founder of Coursera and founding lead of Google Brain. Ng is a big deal in big data, and with his pedigree comes experience putting ML into practice. As such, it's perhaps not surprising that a big focus for Landing AI is improving data quality.

Data preparation tends to be as much as 70% of the work done by data scientists, and Landing AI tries to ameliorate this by taking a data-centric approach to ML. As Ng put it, instead of focusing on the code, companies should focus on developing systematic engineering practices for improving data in ways that are reliable, efficient and systematic.

The company's first product is LandingLens, an enterprise MLOps platform for machine vision. LandingLens is a visual inspection platform that aims to ensure product quality by improving inspection accuracy and reducing false positives. It does this through collaboration between ML engineers to train, test, confirm and deploy deep-learning models, based on high-quality, verified data, to edge devices within the manufacturing process. Landing AI is trying to apply cutting-edge ML to legacy industries like manufacturing, healthcare and agriculture.

Databricks is hardly a startup, and that shows in its integrated, holistic ML platform that includes managed services for experiment tracking, model training, feature development and management, and feature and model serving. Databricks started Delta Lake, a lakehouse approach to incorporating massive quantities of enterprise data in one place. From there, the company offers a platform that enables ML teams to collaborate on data preparation and processing, giving teams a central, standardized approach to working with data and associated ML models.

Databricks integrates well with each of the cloud providers, particularly Microsoft Azure. Though Databricks relies on Apache Spark, users can also use their preferred programming languages like Python, R and SQL, and Databricks does the backend work to ensure they work fine with Spark too.


In fact, this type of work is arguably Databricks' biggest innovation: giving data scientists and others a one-stop shop for tracking experiments, reproducing results at significant scale, moving models into production, and redeploying and rolling out updated models. Other companies tackle isolated aspects of these challenges, but Databricks takes an end-to-end platform approach.

The most strangely named company may also be the most innovative. Hugging Face, which started as a chatbot and evolved to offer a registry of NLP models used to deliver those chatbots, is now on track to become the GitHub of ML. Today the company hosts over 100,000 pre-trained transformer models and more than 10,000 datasets for NLP, computer vision, speech, time-series and reinforcement learning. More than 10,000 companies use Hugging Face to privately collaborate on ML applications.

It has long been an impediment to ML adoption that collaboration within an organization has been so challenging. Different teams might be building essentially the same models, duplicating effort, and there was no standardized approach to building and deploying transformer models.

Hugging Face changes this by making it simple to discover and collaborate on models within an organization, just as GitHub and GitLab do for code. The company offers its Inference API, which provides access to tens of thousands of pre-trained models. This is important because most companies lack the expertise to build models themselves.
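A small, hedged example of leaning on pre-trained models rather than building them in-house: the snippet below uses the transformers library to pull a default sentiment model from the Hugging Face Hub and run it locally, which is a related but distinct workflow from the hosted Inference API mentioned above.

```python
# Sketch of reusing a pre-trained model from the Hugging Face Hub rather than
# training one in-house. Assumes the transformers library is installed; the
# model is the library's default sentiment-analysis checkpoint.
from transformers import pipeline

classifier = pipeline("sentiment-analysis")
print(classifier("Collaboration on shared models beats duplicating effort."))
# Expected shape of output: [{'label': 'POSITIVE', 'score': 0.99...}]
```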

The company also offers AutoTrain, which helps enterprises easily develop and automatically fine-tune models. Finally, Hugging Face takes care of deployment. And as with GitHub, a Hugging Face user can blend the best of public transformers with private models securely and safely.

Hugging Face co-founder and CEO Clement Delangue believes that the number of ML professionals could surpass the number of developers by 2027. By making ML accessible to a broader variety of professionals, including developers, Hugging Face may well be a critical accelerant to reaching that goal. The company, which has open sourced key elements of its technology since its chatbot founding, has made open collaboration a key tenet for how it builds and enables others to build. So far, it seems to be working.

Disclosure: I work for MongoDB, but the views expressed herein are mine.

Go here to read the rest:
8 most innovative AI and machine learning companies - TechRepublic

Google Will Utilize AI And Machine Learning Algorithms To Fine Tune Gmail Suggestions – Digital Information World

Google's setting up Gmail for an upgrade, as it employs ML models to help users with better search suggestions.

Google has been tinkering with, and soon after disposing of, new technology and updates since its inception. We have the Google Graveyard as a testament to such haphazard progress. However, let's be real: the tech giant knows when it's onto a good thing, and Gmail is that good thing. The email-based service is one of Google's trademark platforms and continues to flourish even as email is considered more and more restrictive. No one types out an email to converse when they can use WhatsApp or Instagram. However, rarely do people utilize those same platforms to send out CVs or letters of recommendation: even if they could, it would be considered informal and inappropriate.

That is the sort of environment Google has managed to cultivate with Gmail. It's what Hotmail and AOL chat rooms could never really muster up; Gmail is a brand that resonates with professionalism and the company leans into it as well. This brings us to today's feature of interest, and how Google will utilize it to the email service's full effect.

ML, or machine learning, algorithms have evolved from being a novel, rarely used concept that evoked "oohs" and "aahs" into a tool that's now regularly utilized by every social media platform with skin in the game. Honestly, they fit the social media market quite well: ML algorithms rely on using tons of data to generate whatever output they're asked for. Automated messages can appear more natural if AI spends its time looking at and attempting to emulate examples of normal speech. With social media platforms (I'm willing to loosen the definition enough to include email platforms) providing a near-endless well of such information to draw from, it's open season for ML programs.

Google intends to utilize ML models to help AI-based suggestions grow with the writer. To be clear, Gmail's word suggestions will come to reflect what the user is aiming for, based on previous interactions. This way, a consumer's suggested vernacular can essentially come forward as a more elegant version of what they'd employ in day-to-day life.

These suggestions will also be heavily utilized in helping users sift through folders and the like when looking for prior emails or other similar content. ML models can learn from the keywords a user has utilized to look up content in the past, and help narrow down searches in the future.

Google will be rolling out these new updated ML models for user testing across Android audiences.

See more here:
Google Will Utilize AI And Machine Learning Algorithms To Fine Tune Gmail Suggestions - Digital Information World

The imperative need for machine learning in the public sector – VentureBeat


The sheer number of backlogs and delays across the public sector is unsettling for an industry designed to serve constituents. Making the news last summer was the four-month wait to receive passports, up substantially from the pre-pandemic norm of a 6-8 week turnaround time. Most recently, the Internal Revenue Service (IRS) announced it entered the 2022 tax season with 15 times the usual amount of filing backlogs, alongside its plan for moving forward.

These frequently publicized backlogs don't exist due to a lack of effort. The sector has made strides with technological advancements over the last decade. Yet legacy technology and outdated processes still plague some of our nation's most prominent departments. Today's agencies must adopt digital transformation efforts designed to reduce data backlogs, improve citizen response times and drive better agency outcomes.

By embracing machine learning (ML) solutions and incorporating advancements in natural language processing (NLP), backlogs can be a thing of the past.

Whether tax documents or passport applications, processing items manually takes time and is prone to errors on the sending and receiving sides. For example, a sender may mistakenly check an incorrect box or the receiver may interpret the number 5 as the letter S. This creates unforeseen processing delays or, worse, inaccurate outcomes.

But managing the growing government document and data backlog problem is not as simple and clean-cut as uploading information to processing systems. The sheer number of documents and the volume of citizens' information entering agencies in varied unstructured data formats and states, often with poor readability, make it nearly impossible to reliably and efficiently extract data for downstream decision-making.

Embracing artificial intelligence (AI) and machine learning in daily government operations, just as other industries have done in recent years, can provide the intelligence, agility and edge needed to streamline processes and enable end-to-end automation of document-centric processes.

Government agencies must understand that real change and lasting success will not come with quick patchworks built upon legacy optical character recognition (OCR) or alternative automation solutions, given the vast amount of inbound data.

Bridging the physical and digital worlds can be attained with intelligent document processing (IDP), which leverages proprietary ML models and human intelligence to classify and convert complex, human-readable document formats. PDFs, images, emails and scanned forms can all be converted into structured, machine-readable information using IDP. It does so with greater accuracy and efficiency than legacy alternatives or manual approaches.

In the case of the IRS, inundated with millions of documents such as 1099 forms and individuals' W-2s, sophisticated ML models and IDP can automatically identify the digitized document, extract printed and handwritten text, and structure it into a machine-readable format. This automated approach speeds up processing times, incorporates human support where needed and is highly effective and accurate.
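As a simplified illustration of the classification step in such a pipeline (not the vendor's actual system), the sketch below routes OCR-style text to a document type with scikit-learn; real IDP systems combine far richer models with human review.

```python
# Sketch of the document-classification step in an IDP pipeline: route digitized
# text (eg, OCR output) to the right document type before field extraction.
# The tiny training set and model choice are assumptions for illustration.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

train_texts = [
    "wages tips other compensation employer identification number",
    "nonemployee compensation payer federal income tax withheld",
    "passport application place of birth issuing authority",
]
train_labels = ["W-2", "1099", "passport"]

classifier = make_pipeline(TfidfVectorizer(), LogisticRegression())
classifier.fit(train_texts, train_labels)

print(classifier.predict(["employer identification number and wages for 2021"]))
# Downstream steps would then extract fields and structure them for processing.
```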

Alongside automation and IDP, introducing ML and NLP technologies can significantly support the sector's quest to improve processes and reduce backlogs. NLP is an area of computer science that processes and understands text and spoken words as humans do, traditionally grounded in computational linguistics, statistics and data science.

The field has experienced significant advancements, like the introduction of complex language models that contain more than 100 billion parameters. These models could power many complex text processing tasks, such as classification, speech recognition and machine translation. These advancements could support even greater data extraction in a world overrun by documents.

Looking ahead, NLP is on course to reach a level of text understanding similar to that of a human knowledge worker, thanks to technological advancements driven by deep learning. Similar advancements in deep learning also enable the computer to understand and process other human-readable content such as images.

For the public sector specifically, this could be images included in disability claims or other forms or applications consisting of more than just text. These advancements could also improve downstream stages of public sector processes, such as ML-powered decision-making for agencies determining unemployment assistance, Medicaid insurance and other invaluable government services.

Though we've seen a handful of promising digital transformation improvements, the call for systemic change has yet to be fully answered.

To move forward today, agencies must go beyond patching and investing in various legacy systems. Patchwork and investments in outdated processes fail to support new use cases, are fragile to change and cannot handle unexpected surges in volume. Instead, introducing a flexible solution that can take the most complex, difficult-to-read documents from input to outcome should be a no-brainer.

Why? Citizens deserve more out of the agencies that serve them.

CF Su is VP of machine learning at Hyperscience.


Read more here:
The imperative need for machine learning in the public sector - VentureBeat