Archive for the ‘Machine Learning’ Category

Artificial Intelligence in Health Care: Benefits and Challenges of Machine Learning Technologies for Medical Diagnostics – Government Accountability…

What GAO Found

Several machine learning (ML) technologies are available in the U.S. to assist with the diagnostic process. The resulting benefits include earlier detection of diseases; more consistent analysis of medical data; and increased access to care, particularly for underserved populations. GAO identified a variety of ML-based technologies for five selected diseases (certain cancers, diabetic retinopathy, Alzheimer's disease, heart disease, and COVID-19), with most technologies relying on data from imaging such as x-rays or magnetic resonance imaging (MRI). However, these ML technologies have generally not been widely adopted.

Academic, government, and private sector researchers are working to expand the capabilities of ML-based medical diagnostic technologies. In addition, GAO identified three broader emerging approaches (autonomous, adaptive, and consumer-oriented ML diagnostics) that can be applied to diagnose a variety of diseases. These advances could enhance medical professionals' capabilities and improve patient treatments, but they also have certain limitations. For example, adaptive technologies may improve accuracy by incorporating additional data to update themselves, but automatic incorporation of low-quality data may lead to inconsistent or poorer algorithmic performance.

Spectrum of adaptive algorithms

We identified several challenges affecting the development and adoption of ML in medical diagnostics.

These challenges affect various stakeholders including technology developers, medical providers, and patients, and may slow the development and adoption of these technologies.

GAO developed three policy options that could help address these challenges or enhance the benefits of ML diagnostic technologies. These policy options identify possible actions by policymakers, which include Congress, federal agencies, state and local governments, academic and research institutions, and industry. See below for a summary of the policy options and relevant opportunities and considerations.

Policy Options to Help Address Challenges or Enhance Benefits of ML Diagnostic Technologies

Evaluation (report page 28)

Policymakers could create incentives, guidance, or policies to encourage or require the evaluation of ML diagnostic technologies across a range of deployment conditions and demographics representative of the intended use.

This policy option could help address the challenge of demonstrating real-world performance.

Data Access (report page 29)

Policymakers could develop or expand access to high-quality medical data to develop and test ML medical diagnostic technologies. Examples include standards for collecting and sharing data, creating data commons, or using incentives to encourage data sharing.

This policy option could help address the challenge of demonstrating real-world performance.

Collaboration (report page 30)

Policymakers could promote collaboration among developers, providers, and regulators in the development and adoption of ML diagnostic technologies. For example, policymakers could convene multidisciplinary experts together in the design and development of these technologies through workshops and conferences.

This policy option could help address the challenges of meeting medical needs and addressing regulatory gaps.

Source: GAO. | GAO-22-104629

Diagnostic errors affect more than 12 million Americans each year, with aggregate costs likely in excess of $100 billion, according to a report by the Society to Improve Diagnosis in Medicine. ML, a subfield of artificial intelligence, has emerged as a powerful tool for solving complex problems in diverse domains, including medical diagnostics. However, challenges to the development and use of machine learning technologies in medical diagnostics raise technological, economic, and regulatory questions.

GAO was asked to conduct a technology assessment on the current and emerging uses of machine learning in medical diagnostics, as well as the challenges and policy implications of these technologies. This report discusses (1) currently available ML medical diagnostic technologies for five selected diseases, (2) emerging ML medical diagnostic technologies, (3) challenges affecting the development and adoption of ML technologies for medical diagnosis, and (4) policy options to help address these challenges.

GAO assessed available and emerging ML technologies; interviewed stakeholders from government, industry, and academia; convened a meeting of experts in collaboration with the National Academy of Medicine; and reviewed reports and scientific literature. GAO is identifying policy options in this report.

For more information, contact Karen L. Howard at (202) 512-6888 or howardk@gao.gov.

More here:
Artificial Intelligence in Health Care: Benefits and Challenges of Machine Learning Technologies for Medical Diagnostics - Government Accountability...

4 Types of Machine Learning to Know – Built In

How else could you analyze 36,000 naked mole rat chirps to find out what they're talking about?

Or translate your cat's purr or meow to know it's just chilling?

Or auto-generate an image just by typing in the words "giant squid assembling Ikea furniture"?

Thanks to different types of machine learning, that's all seemingly possible.

More on AI vs. Machine Learning: Artificial Intelligence vs. Machine Learning vs. Deep Learning: What's the Difference?

Machine learning is a branch of artificial intelligence in which algorithms identify patterns in data, which are then used to make accurate predictions or complete a given task, like filtering spam emails. The process relies on algorithms and statistical models to identify patterns in data and doesn't require explicit programming. It's then further optimized through trial and error and feedback, meaning machines learn by experience and increased exposure to data, much the same way humans do.

Today, machine learning is a popular tool used in a range of industries, from banking and insurance (where it's used to detect fraud) to healthcare, retail marketing, and trend forecasting in housing and other markets.

Supervised learning is machine learning with a human touch.

With supervised learning, tagged input and output data is constantly fed and re-fed into human-trained systems that offer real-time guidance, with predictions increasing in accuracy after each new data set. One of the most popular forms of machine learning, supervised learning requires significant human intervention (particularly on data the system is uncertain about), time, and vast volumes of data to make accurate predictions, which limits how readily a trained system transfers from one use case to another.

Supervised learning, like each of these machine learning types, serves as an umbrella for specific algorithms and statistical models. Here are a few that fall under supervised learning.

Used to further categorize data (think pesky spam and unrelenting marketing emails), classification algorithms are a great tool to sort, and even hide, that data. (If you use Gmail or another large email client, you may notice that some emails are automatically redirected to a spam or promotions folder, essentially hiding those emails from view.)

Under the broad umbrella of classification algorithms, there's an even narrower subset of specific machine learning algorithms, like naive Bayes classifiers, support vector machines, decision trees and random forest models, that are used to sort data.
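As a concrete illustration of the classification idea, here is a minimal sketch of a naive Bayes spam filter using scikit-learn; the messages and labels are made up purely for demonstration:

```python
# Hypothetical toy data: four short messages hand-labeled as spam or ham.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB

messages = [
    "win a free prize now", "limited offer click here",   # spam
    "meeting moved to 3pm", "see you at lunch tomorrow",  # ham
]
labels = ["spam", "spam", "ham", "ham"]

# Turn each message into word-count features, then fit naive Bayes.
vectorizer = CountVectorizer()
X = vectorizer.fit_transform(messages)
model = MultinomialNB().fit(X, labels)

# Classify a new message containing spam-like vocabulary.
prediction = model.predict(vectorizer.transform(["free prize offer"]))[0]
```

On this tiny vocabulary the new message shares words only with the spam examples, so the classifier flags it as spam.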

When it comes to forecasting trends, like home prices in the housing market, regression algorithms are popular tools. These algorithms identify relationships between outcomes and other independent variables to make accurate predictions. Linear regression algorithms are the most widely used, but other commonly used regression algorithms include logistic regressions, ridge regressions and lasso regressions.
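A regression example in the same spirit, with invented square-footage and price numbers (the perfectly linear relationship is an assumption made for clarity):

```python
import numpy as np
from sklearn.linear_model import LinearRegression

# Made-up training data: price scales linearly with square footage.
sqft = np.array([[800], [1200], [1500], [2000]])
price = np.array([160_000, 240_000, 300_000, 400_000])

model = LinearRegression().fit(sqft, price)

# Predict the price of an unseen 1,800 sq ft home.
predicted = model.predict([[1800]])[0]
```

Because the toy data is exactly linear, the model recovers the $200-per-square-foot relationship and predicts about $360,000.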

With unsupervised learning, raw data that's neither labeled nor tagged is processed by the system, meaning less legwork for humans.

Unsupervised learning algorithms work by identifying patterns within a data set, grouping information based on similarities and differences, which is helpful when you're not sure what to look for, though outcomes and predictions are less accurate than with supervised learning. Unsupervised learning is especially useful in customer and audience segmentation, as well as identifying patterns in recorded audio and image data.

Here's one example of an unsupervised learning algorithm.

Clustering algorithms are the most widely used example of unsupervised machine learning. These algorithms focus on similarities within raw data and then group that information accordingly. More simply, these algorithms provide structure to raw data. Clustering algorithms are often used with marketing data to garner customer (or potential customer) insights, as well as for fraud detection. Some clustering algorithms include KNN clustering, principal component analysis, hierarchical clustering and k-means clustering.
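The grouping behavior described above can be sketched with k-means in scikit-learn; the 2-D points below are invented so that two clusters are obvious:

```python
import numpy as np
from sklearn.cluster import KMeans

# Six unlabeled points: three near (1, 1) and three near (8, 8).
points = np.array([[1.0, 1.0], [1.2, 0.8], [0.9, 1.1],
                   [8.0, 8.0], [8.1, 7.9], [7.8, 8.2]])

# k-means is told only the number of clusters, never any labels.
kmeans = KMeans(n_clusters=2, n_init=10, random_state=0).fit(points)
labels = kmeans.labels_
```

The algorithm assigns the first three points to one cluster and the last three to the other, purely from the structure of the data.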

Semi-supervised learning offers a balanced mix of both supervised and unsupervised learning. With semi-supervised learning, a hybrid approach is taken as small amounts of tagged data are processed alongside larger chunks of raw data. This strategy essentially gives algorithms a head start when it comes to identifying relevant patterns and making accurate predictions when compared with unsupervised learning algorithms, without the time, effort and cost associated with more labor-intensive supervised learning algorithms.

Semi-supervised learning is typically used in applications ranging from fraud detection to speech recognition as well as text document classification. Because semi-supervised learning uses labeled data and unlabeled data, it often relies on modified supervised and unsupervised algorithms trained for both data types.
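One way to see that "head start" is scikit-learn's label propagation, where unlabeled examples (marked -1) inherit labels from a few labeled neighbors; the one-dimensional data here is invented for illustration:

```python
import numpy as np
from sklearn.semi_supervised import LabelPropagation

# Six points forming two tight groups, but only one label per group.
X = np.array([[0.0], [0.1], [0.2], [5.0], [5.1], [5.2]])
y = np.array([0, -1, -1, 1, -1, -1])  # -1 means "unlabeled"

model = LabelPropagation().fit(X, y)
inferred = model.transduction_  # labels inferred for all six points
```

The four unlabeled points end up with the label of the group they sit in, so two hand-applied labels effectively annotate the whole set.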

More on Machine Learning Innovation: 28 Machine Learning Companies You Should Know

With reinforcement learning, AI-powered computer software programs outfitted with sensors, commonly referred to as intelligent agents, respond to their surrounding environment (think simulations, computer games and the real world) to make decisions independently that achieve a desired outcome. By perceiving and interacting with their environment, intelligent agents learn through trial and error, ultimately reaching optimal proficiency through positive reinforcement, or rewards, during the learning process. Reinforcement learning is often used in robotics, helping robots acquire specific skills and behaviors.

These are some of the algorithms that fall under reinforcement learning.

Q-learning is a reinforcement learning algorithm that does not require a model of the intelligent agents environment. Q-learning algorithms calculate the value of actions based on rewards resulting from those actions to improve outcomes and behaviors.
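A minimal, self-contained sketch of the Q-learning update (the environment, rewards, and hyperparameters are all illustrative choices, not from the article): an agent on a five-cell track learns that walking right toward the reward beats walking left.

```python
import random

N_STATES = 5            # cells 0..4; the reward sits at cell 4
ACTIONS = [-1, +1]      # step left or step right
alpha, gamma, epsilon = 0.5, 0.9, 0.1
Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

def greedy(s):
    # Pick the highest-valued action, breaking ties randomly.
    best = max(Q[(s, a)] for a in ACTIONS)
    return random.choice([a for a in ACTIONS if Q[(s, a)] == best])

random.seed(0)
for _ in range(500):                              # training episodes
    s = 0
    while s != N_STATES - 1:
        # Epsilon-greedy: mostly exploit, occasionally explore.
        a = random.choice(ACTIONS) if random.random() < epsilon else greedy(s)
        s_next = min(max(s + a, 0), N_STATES - 1)
        reward = 1.0 if s_next == N_STATES - 1 else 0.0
        # Core Q-learning update: move Q(s, a) toward the reward plus
        # the discounted value of the best action in the next state.
        best_next = max(Q[(s_next, b)] for b in ACTIONS)
        Q[(s, a)] += alpha * (reward + gamma * best_next - Q[(s, a)])
        s = s_next
```

After training, Q[(0, +1)] exceeds Q[(0, -1)]: with no model of the environment, the agent has learned from reward alone that stepping right from the start is the better policy.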

Used in the development of self-driving cars, video games and robots, deep reinforcement learning combines deep learning (machine learning based on artificial neural networks) with reinforcement learning, where actions, or responses to the network's environment, are either rewarded or punished. Deep reinforcement learning requires vast amounts of data and increased computing power.

Read the original:
4 Types of Machine Learning to Know - Built In

Google turns to machine learning to advance translation of text out in the real world – TechCrunch

Google is giving its translation service an upgrade with a new machine learning-powered addition that will allow users to more easily translate text that appears in the real world, like on storefronts, menus, documents, business cards and other items. Instead of covering up the original text with the translation, the new feature will smartly overlay the translated text on top of the image, while also rebuilding the pixels underneath with an AI-generated background to make the process of reading the translation feel more natural.

"Often it's that combination of the word plus the context, like the background image, that really brings meaning to what you're seeing," explained Cathy Edwards, VP and GM of Google Search, in a briefing ahead of today's announcement. "You don't want to translate a text to cover up that important context that can come through in the images," she said.

Image Credits: Google

To make this process work, Google is using a machine learning technology known as generative adversarial networks (GAN models), the same technology that powers the Magic Eraser feature to remove objects from photos taken on Google Pixel smartphones. This advancement will allow Google to blend the translated text into even very complex images, making the translation feel natural and seamless, the company says. It should seem as if you're looking at the item or object itself with translated text, not an overlay obscuring the image.

The feature is another development that seems to point to Google's plans to further invest in the creation of new AR glasses, as an ability to translate text in the real world could be a key selling point for such a device. The company noted that every month, people use Google to translate text and images over a billion times in more than 100 languages. It also began testing AR prototypes in public settings this year with a handful of employees and trusted testers, it said.

While there's obvious demand for better translation, it's not clear if users will prefer to use their smartphone for translations rather than special eyewear. After all, Google's first entry into the smartglasses space, Google Glass, ultimately failed as a consumer product.

Google didn't speak to its long-term plans for the translation feature today, noting only that it would arrive sometime later this year.

Read the original post:
Google turns to machine learning to advance translation of text out in the real world - TechCrunch

Machine learning-based risk factor analysis of adverse birth outcomes in very low birth weight infants | Scientific Reports – Nature.com

Participants and variables

Data consisted of 10,423 VLBW infants from the Korean Neonatal Network (KNN) database, covering January 2013 to December 2017. The KNN started in April 2013 as a national prospective cohort registry of VLBW infants admitted or transferred to neonatal intensive care units across South Korea (it now covers 74 neonatal intensive care units). It collects the perinatal and neonatal data of VLBW infants based on a standardized operating procedure37.

Five adverse birth outcomes were considered as binary dependent variables (no, yes): gestational age less than 28 weeks (GA < 28), GA less than 26 weeks (GA < 26), birth weight less than 1000 g (BW < 1000), BW less than 750 g (BW < 750) and SGA. Thirty-three predictors were included: sex: male (no, yes), birth-year (2013, 2014, 2015, 2016, 2017), birth-month (1, 2, ..., 12), birth-season-spring (no, yes), birth-season-summer (no, yes), birth-season-autumn (no, yes), birth-season-winter (no, yes), number of fetuses (1, 2, 3, 4 or more), in vitro fertilization (no, yes), gestational diabetes mellitus (no, yes), overt diabetes mellitus (no, yes), pregnancy-induced hypertension (no, yes), chronic hypertension (no, yes), chorioamnionitis (no, yes), prelabor rupture of membranes (no, yes), prelabor rupture of membranes > 18 h (no, yes), antenatal steroid (no, yes), cesarean section (no, yes), oligohydramnios (no, yes), polyhydramnios (no, yes), maternal age (years), primipara (no, yes), maternal education (elementary, junior high, senior high, college or higher), maternal citizenship (Korea, Vietnam, China, Philippines, Japan, Cambodia, United States, Thailand, Mongolia, Other), paternal education (elementary, junior high, senior high, college or higher), paternal citizenship (Korea, Vietnam, China, Philippines, Japan, Cambodia, United States, Thailand, Mongolia, Other), unmarried (no, yes), congenital infection (no, yes), PM10 year (PM10 for each year), PM10 month (PM10 for each birth-month), temperature average (for each year), temperature min (for each year) and temperature max (for each year). PM10 and temperature data came from the Korea Meteorological Administration (PM10: https://data.kma.go.kr/data/climate/selectDustRltmList.do?pgmNo=68; temperature: https://web.kma.go.kr/weather/climate/past_cal.jsp). The definition of each variable is given in Text S1, supplementary text.

The artificial neural network, the decision tree, logistic regression, naïve Bayes, the random forest and the support vector machine were used for predicting preterm birth38,39,40,41,42,43. A decision tree includes three elements: a test on an independent variable (intermediate node), an outcome of the test (branch) and a value of the dependent variable (terminal node). A naïve Bayesian classifier performs classification on the basis of Bayes' theorem. Here, the theorem states that the probability of the dependent variable given certain values of independent variables can be calculated based on the probabilities of the independent variables given a certain value of the dependent variable. A random forest is a collection of many decision trees, which make majority votes on the dependent variable (bootstrap aggregation). Take a random forest with 1000 decision trees as an example, and assume that the original data includes 10,000 participants. Training and testing this random forest then takes two steps. First, a new data set with 10,000 participants is created by random sampling with replacement, and a decision tree is built from this new data set. Some participants in the original data will be excluded from the new data set; these leftovers are called out-of-bag data. This process is repeated 1000 times, i.e., 1000 new data sets, 1000 decision trees and 1000 out-of-bag data sets are created. Second, the 1000 decision trees make predictions on the dependent variable of every participant in the out-of-bag data, their majority vote is taken as the final prediction for that participant, and the out-of-bag error is calculated as the proportion of wrong votes across all participants in the out-of-bag data38,39.
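The out-of-bag procedure described above has a direct counterpart in scikit-learn; the sketch below uses synthetic data (not the KNN registry) just to show where the out-of-bag error estimate comes from:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

# Synthetic stand-in data: 1,000 rows, 10 predictors, binary outcome.
X, y = make_classification(n_samples=1000, n_features=10, random_state=0)

# Each of the 200 trees trains on a bootstrap sample; the rows a tree
# never saw (its out-of-bag rows) are used to score it, giving a
# built-in validation estimate without a separate hold-out set.
forest = RandomForestClassifier(
    n_estimators=200, oob_score=True, random_state=0).fit(X, y)

oob_accuracy = forest.oob_score_          # 1 - out-of-bag error
```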

A support vector machine estimates a separating line or surface called a hyperplane, defined by a subset of the data points called support vectors. The hyperplane separates the data with the greatest gap between sub-groups. An artificial neural network consists of neurons, information units combined through weights. In general, an artificial neural network includes one input layer, one, two or three intermediate layers and one output layer. Neurons in a previous layer link to neurons in the next layer through weights (these weights denote the strengths of the linkages between neurons in one layer and their next-layer counterparts). This feedforward operation begins at the input layer, runs through the intermediate layers and ends at the output layer. The process is then followed by learning: the weights are updated according to their contributions to the gap between the actual and predicted final outputs. This backpropagation operation begins at the output layer, runs through the intermediate layers and ends at the input layer. The two processes are repeated until the performance measure reaches a certain limit38,39. Data on 10,423 observations with full information were divided into training and validation sets at a 70:30 ratio (7296 vs. 3127). Accuracy, the proportion of correct predictions among the 3127 validation observations, was employed as the standard for validating the models. Random forest variable importance, the contribution of a given variable to the performance (Gini) of the random forest, was used for examining major predictors of adverse birth outcomes in VLBW infants, including PM10. The random split and analysis were repeated 50 times, and the average was taken for external validation44,45. RStudio 1.3.959 (RStudio, Inc., Boston, United States) was employed for the analysis between August 1, 2021 and September 30, 2021.
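The validation scheme just described (70:30 split, accuracy as the metric, Gini-based variable importance) can be sketched as follows; again the data is synthetic, standing in for the study's 33 predictors:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# Synthetic stand-in for the registry: 33 predictors, binary outcome.
X, y = make_classification(n_samples=2000, n_features=33, random_state=1)

# 70:30 split into training and validation sets.
X_train, X_valid, y_train, y_valid = train_test_split(
    X, y, test_size=0.30, random_state=1)

forest = RandomForestClassifier(random_state=1).fit(X_train, y_train)

# Accuracy on the validation set, and Gini importance per predictor.
accuracy = accuracy_score(y_valid, forest.predict(X_valid))
importances = forest.feature_importances_
```

In the study this procedure would be repeated 50 times over fresh random splits and the results averaged.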

The KNN registry was approved by the institutional review board (IRB) at each participating hospital (IRB No. of Korea University Anam Hospital: 2013AN0115). Informed consent was obtained from the parent(s) of each infant registered in the KNN. All methods were carried out in accordance with the IRB-approved protocol and in compliance with relevant guidelines and regulations.

The names of the institutional review boards of the KNN participating hospitals were as follows: Gachon University Gil Medical Center, The Catholic University of Korea Bucheon St. Mary's Hospital, The Catholic University of Korea Seoul St. Mary's Hospital, The Catholic University of Korea St. Vincent's Hospital, The Catholic University of Korea Yeouido St. Mary's Hospital, The Catholic University of Korea Uijeongbu St. Mary's Hospital, Gangnam Severance Hospital, Kyung Hee University Hospital at Gangdong, GangNeung Asan Hospital, Kangbuk Samsung Hospital, Kangwon National University Hospital, Konkuk University Medical Center, Konyang University Hospital, Kyungpook National University Hospital, Gyeongsang National University Hospital, Kyung Hee University Medical Center, Keimyung University Dongsan Medical Center, Korea University Guro Hospital, Korea University Ansan Hospital, Korea University Anam Hospital, Kosin University Gospel Hospital, National Health Insurance Service Ilsan Hospital, Daegu Catholic University Medical Center, Dongguk University Ilsan Hospital, Dong-A University Hospital, Seoul Metropolitan Government-Seoul National University Boramae Medical Center, Pusan National University Hospital, Busan St. Mary's Hospital, Seoul National University Bundang Hospital, Samsung Medical Center, Samsung Changwon Medical Center, Seoul National University Hospital, Asan Medical Center, Sungae Hospital, Severance Hospital, Soonchunhyang University Hospital Bucheon, Soonchunhyang University Hospital Seoul, Soonchunhyang University Hospital Cheonan, Ajou University Hospital, Pusan National University Children's Hospital, Yeungnam University Hospital, Ulsan University Hospital, Wonkwang University School of Medicine & Hospital, Wonju Severance Christian Hospital, Eulji University Hospital, Eulji General Hospital, Ewha Womans University Medical Center, Inje University Busan Paik Hospital, Inje University Sanggye Paik Hospital, Inje University Ilsan Paik Hospital, Inje University Haeundae Paik Hospital, Inha University Hospital, Chonnam National University Hospital, Chonbuk National University Hospital, Cheil General Hospital & Women's Healthcare Center, Jeju National University Hospital, Chosun University Hospital, Chung-Ang University Hospital, CHA Gangnam Medical Center, CHA University, CHA Bundang Medical Center, CHA University, Chungnam National University Hospital, Chungbuk National University, Kyungpook National University Chilgok Hospital, Kangnam Sacred Heart Hospital, Kangdong Sacred Heart Hospital, Hanyang University Guri Hospital, and Hanyang University Medical Center.

See the original post:
Machine learning-based risk factor analysis of adverse birth outcomes in very low birth weight infants | Scientific Reports - Nature.com

ART-ificial Intelligence: Leveraging the Creative Power of Machine Learning – Little Black Book – LBBonline

Above: Chago's AI self-portrait, generated in Midjourney.

I have learnt to embrace and explore the creative possibilities of computer-generated imagery. It all started with the introduction of Photoshop thirty years ago, and more recently, I became interested in the AI software program Midjourney, a wonderful tool that allows creatives to explore ideas more efficiently than ever before. The best description for Midjourney that I've found is "an AI-driven tool for the exploration of creative ideas."

If I were talking to somebody who was unfamiliar with AI-generated art, I would show them some examples, as this feels like a great place to start. Midjourney is syntax-driven; users must break down the language and learn the key phrases and the particular ordering of words in order to take full advantage of the program. As well as using syntax, users can upload reference imagery to help bring their idea to life. An art director could upload a photo of Mars and use that as a reference to create new imagery; I think this is a fantastic tool.

I'm a producer with an extensive background as a production artist, mostly in retouching and leading post production teams. I also have a background in CGI; I took some postgraduate classes at NYU for a couple of semesters, and I went to college for architecture, so I can draw a little bit, but I'm not going to pretend that I could ever do a CGI project. A lot of art directors and creative directors are in the same boat: they direct and creative direct a lot of CGI projects, especially on the client side, but don't necessarily know CGI. Programs like Midjourney let people like us dip our toes into the creative waters by giving us access to an inventive and artistic toolset.

Last week, the Steelworks team was putting together a treatment deck for a possible new project. We had some great ideas to send to the client, but sourcing certain specific references felt like finding a needle in a haystack. If we were looking for a black rose with gold dust powder on the petals, it would be hard to find exactly what we want. It's times like these when a program like Midjourney can boost the creative. By entering similar references into the software and developing a syntax that is as close to what you're looking for as possible, you are given imagery that provides more relevant references for a treatment deck. For this reason, in the future I see us utilising Midjourney more often for these tasks, as it can facilitate the creative ideation for treatments and briefs for clients.

I'm optimistic about Midjourney because, as technology evolves, humans in the creative industries continue to find ways to stay relevant. I was working as a retoucher during the time Photoshop first came out with the Healing Brush. Prior to that, all retouching was done manually by manipulating and blending pixels. All of a sudden, the introduction of the Healing Brush meant that with one swipe, three hours of work was removed. I remember we were sitting in our post production studio when someone showed it to us and we thought, Oh my God, we're gonna be out of a job. Twenty years later, retouching still has relevance, as do the creatives who are valued for their unique skill sets.

I don't do much retouching anymore, but I was on a photo shoot recently and I had to get my hands in the sauce and put comps together for people. Plenty of new selection tools have come out in Photoshop in the last three years, and I had no idea about most of them. I discovered that using these tools cut out roughly an hour's worth of work, which was great. As a result, it opened up time for me to talk to clients and be more present at work and home. It's less time in front of the computer at the end of the day.

While these advancements in technology may seem daunting at first, I try not to think of them as a threat to human creativity, but rather as a tool that grants us more time to immerse ourselves in the activities that boost our creative thinking. Using AI programs like Midjourney helps to speed up the creative process which, in turn, frees up more time to do things like sit outside and enjoy our lunch in the sun, or go to the beach or the park with your kids: things that feed our frontal cortex and inspire us creatively. It took me a long time to be comfortable with taking my nose off the grindstone and relearning how to be inspired creatively.

The rest is here:
ART-ificial Intelligence: Leveraging the Creative Power of Machine Learning - Little Black Book - LBBonline