Archive for the ‘Machine Learning’ Category

Artificial Intelligence and Machine Learning in Cancer Detection – Targeted Oncology

Toufic Kachaamy, MD

City of Hope Phoenix

Since the first artificial intelligence (AI)-enabled medical device received FDA approval in 1995 for cervical slide interpretation, 521 AI-powered devices had received FDA approval as of May 2023.1 Many of these devices are for early cancer detection, an area of significant need since most cancers are diagnosed at a later stage. For most patients, an earlier diagnosis means a higher chance of positive outcomes such as cure, less need for systemic therapy, and a better chance of maintaining a good quality of life after cancer treatment.

While an extensive review of these devices is beyond the scope of a single article, this article summarizes the major areas where AI and machine learning (ML) are currently being used and studied for early cancer detection.

The first area is large-database analysis for identifying patients at risk for cancer or with early signs of cancer. These models analyze the electronic medical record, a structured digital database, and use pattern recognition and natural language processing to identify patients with specific characteristics. These include individuals with signs and symptoms suggestive of cancer, those at risk of cancer based on known risk factors, and those with specific health measures associated with cancer. For example, pancreatic cancer has a relatively low incidence but is still the fourth leading cause of cancer death. Because of the low incidence, screening the general population is neither practical nor cost-effective. ML can be used to analyze specific health outcomes such as new-onset hyperglycemia2 and certain health data from questionnaires3 to classify members of the population as high risk for pancreatic cancer. This allows the screened population to be "enriched with pancreatic cancer," making screening higher yield and more cost-effective at an earlier stage.
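As a purely illustrative sketch of this approach (not the specific models cited above), the workflow of scoring a population on structured record features and screening only the highest-risk slice can be written in a few lines of R. All data, variable names, and model choices below are simulated assumptions for illustration only.

# Illustrative sketch only: simulated EHR-style data, not a validated pancreatic cancer risk model
set.seed(42)
n <- 5000
ehr <- data.frame(
  age                     = rnorm(n, 65, 10),
  new_onset_hyperglycemia = rbinom(n, 1, 0.10),
  recent_weight_loss      = rbinom(n, 1, 0.08),
  pancreatic_cancer       = rbinom(n, 1, 0.01)
)

# Fit a simple logistic-regression risk score on the structured features
risk_mod <- glm(pancreatic_cancer ~ age + new_onset_hyperglycemia + recent_weight_loss,
                data = ehr, family = binomial)

# Score the whole population and keep the top 5% as the "enriched" screening cohort
ehr$risk <- predict(risk_mod, type = "response")
screening_cohort <- ehr[ehr$risk >= quantile(ehr$risk, 0.95), ]
nrow(screening_cohort)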

Another area leveraging AI and ML is image analysis. Human vision is best centrally, representing less than 3 degrees of the visual field. Peripheral vision has significantly less spatial resolution and is better suited to detecting rapid movement and "big picture" analysis. In addition, "inattentional blindness," or missing significant findings when focused on a specific task, is a human vulnerability, as demonstrated in a study showing that even experts missed a gorilla embedded in a CT scan when searching for lung nodules.3 Machines are not susceptible to fatigue, distraction, blind spots, or inattentional blindness. In a study comparing a deep learning algorithm with radiologists from the National Lung Screening Trial, the algorithm performed better than the radiologists in detecting lung cancer on chest X-rays.4

AI algorithmic analysis of histologic specimens can serve both as an initial screening tool and as a real-time interactive assistant during histological analysis.5 AI is capable of diagnosing cancer with high accuracy.6 It can accurately determine grades, such as the Gleason score for prostate cancer, and identify lymph node metastasis.7 AI is also being explored for predicting gene mutations from histologic analysis. This has the potential to decrease cost and improve time to analysis, both of which are limitations in today's practice that restrict universal gene analysis in patients with cancer,8 even as gene analysis gains a role in precision cancer treatment.9

An exciting and up-and-coming area for AI and deep learning is the combination of the above approaches, such as combining large-data analysis with pathology assessment and/or image analysis. For example, using medical record analysis and CXR findings, deep learning was used to identify patients at high risk for lung cancer who would benefit the most from lung cancer screening. This has great potential, especially since only 5% of patients eligible for lung cancer screening are currently being screened.10

Finally, there is the holy grail of cancer detection: blood-based multicancer detection tests. Many of these, already available or in development, use AI algorithms to develop, analyze, and validate the test.11

It is hard to imagine an area of medicine that AI and ML will not impact. AI is unlikely, at least for the foreseeable future, to replace physicians. Instead, it will be used to enhance physician performance and improve accuracy and efficiency. However, it is essential to note that machine-human interaction is very complicated, and we are only scratching the surface of this era. It is premature to assume that real-world outcomes will mirror outcomes seen in trials. Any outcome that involves human analysis and final decision-making is affected by human performance. Training and studying human behavior are needed for human-machine interaction to produce optimal outcomes. For example, randomized controlled studies have shown increased polyp detection during colonoscopy using computer-aided detection, or AI-based image analysis.12 However, real-life data did not show similar findings,13 likely due to differences in how AI affects different endoscopists.

Artificial intelligence and machine learning are dramatically altering how medicine is practiced, and cancer detection is no exception. Even in the medical world, where change is typically slower than in other disciplines, the pace of AI innovation is arriving quickly and, in certain instances, faster than many can grasp and adapt to.

Here is the original post:
Artificial Intelligence and Machine Learning in Cancer Detection - Targeted Oncology

ASCRS 2023: Predicting vision outcomes in cataract surgery with … – Optometry Times

Mark Packer, MD, sat down with Sheryl Stevenson, Group Editorial Director, Ophthalmology Times, to discuss his presentation on machine learning and predicting vision outcomes after cataract surgery at the 2023 ASCRS annual meeting in San Diego.

Editor's note: This transcript has been edited for clarity.

Sheryl Stevenson:

We're joined by Dr. Mark Packer, who will be presenting at this year's ASCRS. Hello, Dr. Packer. Great to see you again.

Mark Packer, MD:

Good to see you, Sheryl.

Stevenson:

Sure, tell us a little bit about your talk on machine learning and predicting vision outcomes after cataract surgery.

Packer:

Sure, well, as we know, humans tend to be fallible, and even though surgeons don't like to admit it, they have been prone to make errors from time to time. And you know, one of the errors that we make is that we always extrapolate from our most recent experience. So if I just had a patient who was very unhappy with a multifocal IOL, all of a sudden, I'm going to be a lot more cautious with my next patient, and maybe the one after that, too.

And the reverse can happen as well. If I just had a patient who was absolutely thrilled with their toric multifocal, and they never have to wear glasses again, and they're leaving for Hawaii in the morning, you know, getting a full makeover, I'm going to think, wow, that was the best thing I ever did. And now all of a sudden, everyone looks like a candidate. And even for someone like me, who has been doing multifocal IOLs for longer than I care to admit, you know, this can still pose a problem. That's just human nature.

And so what we're attempting to do with the oculotics program is to bring a little objectivity into the mix. Now, of course, we already do that when we talk about IOL power calculations; we leave that up to algorithms and let them do the work. One of the things that we've been able to do with oculotics is actually improve upon the way that power calculations are done. So rather than just looking at the dioptric power of a lens, for example, we're actually looking at the real optical properties of the lens, the modulation transfer function, in order to help correlate that with what a patient desires in terms of spectacle independence.

But the real brainchild here is the idea of incorporating patient feedback after surgery into the decision-making process. So part of this is actually to give our patients an app that they can use to provide feedback on their level of satisfaction, essentially by filling out the VFQ-25, which is simply a 25-item questionnaire developed in the 1990s by the RAND Corporation to look at visual function and how satisfied people are with their vision, whether they have to worry about it, how they feel about their vision, whether they can drive at night comfortably, that sort of thing.

So if we can incorporate that feedback into our decision-making, now instead of my going into the next room with just what happened today fresh in my mind, I'll actually be incorporating the knowledge of every patient that I've operated on since I started using this system, and how they fared with these different IOLs.

So the machine learning algorithm can actually take this patient feedback and put it together with the preoperative characteristics, such as personal items like hobbies, what they do for recreation, what their employment is, what kind of visual demands they have, and also anatomic factors, you know, the axial length, anterior chamber depth, corneal curvature, all of that. Put that all together, and then we can begin to match intraocular lens selection to patients based not only on their biometry, but also on their personal characteristics, and how they actually felt about the results of their surgery.

So that's how I think machine learning can help us, and hopefully bring surgeons up to speed with premium IOLs more quickly, because, you know, it's taken some of us years and years to gain the experience to really become confident in selecting which patients are right for premium lenses, particularly multifocal and extended depth of focus lenses and that sort of thing, where, you know, there are visual side effects, and there are limitations, but there also are great advantages. And so hopefully using machine learning can bring young surgeons up more quickly, increase their confidence, and allow them to increase the rate of adoption among their patients for these premium lenses.
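To make the idea concrete, here is a minimal, purely illustrative R sketch of matching a lens to a patient by predicting a post-operative VFQ-25 score from biometry, a lifestyle flag, and lens type. The data, variable names, and model are invented assumptions; this is not the oculotics system described above.

# Illustrative sketch only: simulated data and assumed names, not the oculotics system
set.seed(1)
n <- 800
dat <- data.frame(
  iol_type      = factor(sample(c("monofocal", "multifocal", "edof"), n, replace = TRUE)),
  axial_length  = rnorm(n, 23.5, 1.0),    # mm
  acd           = rnorm(n, 3.1, 0.3),     # anterior chamber depth, mm
  corneal_power = rnorm(n, 43.5, 1.5),    # diopters
  night_driving = rbinom(n, 1, 0.4),      # lifestyle flag from the patient intake
  vfq25         = pmin(100, pmax(0, rnorm(n, 80, 10)))  # post-op satisfaction score
)

# Learn how reported satisfaction relates to biometry, lifestyle, and lens choice
mod <- lm(vfq25 ~ iol_type * night_driving + axial_length + acd + corneal_power, data = dat)

# For a new patient, predict satisfaction under each candidate lens and pick the best
candidates <- data.frame(
  iol_type      = levels(dat$iol_type),
  axial_length  = 24.1,
  acd           = 3.2,
  corneal_power = 44.0,
  night_driving = 1
)
candidates$predicted_vfq25 <- predict(mod, newdata = candidates)
candidates[which.max(candidates$predicted_vfq25), ]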

The rest is here:
ASCRS 2023: Predicting vision outcomes in cataract surgery with ... - Optometry Times

How AI and Machine Learning is Transforming the Online Gaming … – Play3r

Are you an avid online gamer? Do you find yourself craving a more immersive experience every time you jump into your favorite slot games, or any other game for that matter? If so, you may be interested to learn how advances in AI and machine learning are transforming the gaming experience.

In this blog post, we will explore the ways that artificial intelligence and machine learning technologies are making online gaming smoother and more thrilling than ever before. We'll look at how these technologies have been used to enhance graphics, user interfaces, and in-game dynamics, all of which can drastically improve your gameplay.

Whether your favorite pastime is first-person shooters or real-time strategy games, let's delve into everything AI has to offer gamers!

As the online gaming industry continues to grow and evolve, AI and machine learning have become increasingly important tools for developers. These technologies can change the way we experience our favorite games, from providing more realistic and unpredictable opponents to personalized gameplay.

Through the use of AI and machine learning, game developers can analyze vast amounts of data, allowing them to create better-balanced and more engaging gaming experiences.

Additionally, these tools can help identify and prevent cheating, making online gaming fairer and more enjoyable for all. As the gaming industry moves forward, it's clear that AI and machine learning will play an important role in shaping the future of the industry.

The world of online gaming is constantly evolving and with the introduction of AI and machine learning, it just keeps getting better. These technologies have revolutionized the gaming industry and brought about countless benefits for both players and developers.

AI algorithms help create more realistic gameplay and sophisticated opponents, while machine learning helps predict player behavior and preferences, leading to a more personalized gaming experience.

Additionally, AI can help game developers optimize their games for performance and eliminate bugs faster than ever before. In short, the benefits of using AI and machine learning in online gaming are diverse and far-reaching, making it an exciting area to watch for future developments.

Developing AI and machine learning technologies can be incredibly challenging for software developers. One of the biggest obstacles faced by developers is finding the right data to train their algorithms effectively.

In addition to this, there is also a lot of complexity involved in designing AI systems that can learn from data with minimal human intervention. Moreover, creating machine learning models that can accurately predict and analyze data in real time requires a sophisticated understanding of various statistical techniques and programming languages.

With these challenges in mind, it's no wonder that many developers in this field feel overwhelmed. However, with the right tools and resources, developers can overcome these obstacles and continue advancing the exciting field of AI and machine learning.

The world of gaming has evolved significantly in recent years, and one major factor in this transformation is the integration of AI and machine learning into popular online games. From first-person shooters to strategy and adventure games, players have been enjoying a more immersive experience thanks to the inclusion of smarter, more complex non-player characters (NPCs) and advanced game optimization.

For example, in the game AI Dungeon, players can enter any storyline, and the AI generates a unique adventure based on their input. Similarly, the popular game League of Legends uses machine learning to optimize matchmaking, ensuring players are pitted against opponents of similar skill levels.

With AI and machine learning continually improving, the future of online gaming promises to be even more exciting and engrossing.

Artificial intelligence and machine learning have drastically transformed the gaming industry in recent years. These technologies can analyze vast amounts of data, predict outcomes, and make recommendations for players to improve their overall gameplay experience. AI can also assist developers in creating more immersive worlds, where virtual characters have reactive behaviors that mimic real-life behaviors.

Machine learning algorithms, on the other hand, can help determine a player's skill level and preferences, adapting gameplay accordingly. Many gamers have already seen the benefits of these technologies, with smarter NPCs, more adaptive environments, and improved matchmaking systems.

As AI and machine learning continue to evolve, the gaming experience will only become more enhanced and personalized, creating an even more immersive world for players to explore.

AI and machine learning-based games have become increasingly popular in recent years, offering players a unique and immersive gaming experience. But how can you make the most of these cutting-edge titles?

Firstly, take the time to understand the game mechanics and the AI's decision-making process. This can help you anticipate actions and develop strategies to stay ahead of the curve. Additionally, be sure to give feedback to the developers, as this can help them improve the game's machine-learning algorithms and provide a better experience for everyone.

Lastly, don't be afraid to experiment and try out different approaches to see what works best. With these tips, you'll be well on your way to dominating the world of AI and machine learning-based gaming.

Online gaming experiences have been revolutionized by AI and machine learning technology, which give developers the ability to offer players intelligent, personalized gaming experiences that feel unique and engaging. Not only is this creating games that boost user retention, but it is also opening up exciting possibilities for multiplayer gaming.

Additionally, developers are increasingly leaning towards AI and ML to create more immersive worlds for gamers to explore. Despite challenges in implementation, the advancements of AI and machine learning are offering a wide range of captivating new experiences for online gamers, from improved graphics to obstacles that learn in real time, making them an important component in crafting better gameplay experiences than ever before.

As players continue to enjoy the ever-evolving, exciting world of online gaming, they must keep up with the latest trends in AI and machine learning technology to make sure they are getting the most out of their experience.


Go here to see the original:
How AI and Machine Learning is Transforming the Online Gaming ... - Play3r

Using Machine Learning to Predict the 2023 Kentucky Derby … – DataDrivenInvestor

Can the forecasted weather be used to predict the winning race time?

My hypothesis is that the weather has a major impact on the Kentucky Derby's winning race time. In this analysis I will use the Kentucky Derby's forecasted weather to predict the winning race time using Machine Learning (ML). In previous articles I discussed the importance of using explainable ML in a business setting to provide business insights and help with buy-in and change management. In this analysis, because I'm striving purely for accuracy, I will disregard this advice and go directly to the more complex, but accurate, black box Gradient Boosted Machine (GBM), because we want to win some money!

The data I will use comes from the National Weather Service:

# Read in Data #
data <- read.csv("...KD Data.csv")

# Declare Year Variables #
year <- data[,1]

# Declare numeric x variables #
numeric <- data[,c(2,3,4)]

# Scale numeric x variables #
scaled_x <- scale(numeric)

# check that we get mean of 0 and sd of 1
colMeans(scaled_x)
apply(scaled_x, 2, sd)

# One-Hot Encoding #
data$Weather <- as.factor(data$Weather)
xfactors <- model.matrix(data$Year ~ data$Weather)[, -1]

# Bring prepped data all back together #
scaled_df <- as.data.frame(cbind(year, y, scaled_x, xfactors))

# Isolate pre-2023 data #
old_data <- scaled_df[-1,]
new_data <- scaled_df[1,]

# Gradient Boosted Machine #
# Find Max Interaction Depth #
floor(sqrt(NCOL(old_data)))
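The excerpt never shows the gbm() call that creates the tree_mod object used in the next step, so here is a hedged sketch of what that fit might look like; the formula and hyperparameters are assumptions, not the article's actual values.

# Assumed fit for tree_mod (the article's exact call and settings are not shown)
library(gbm)
set.seed(2023)
tree_mod <- gbm(y ~ . - year,
                data = old_data,
                distribution = "gaussian",
                n.trees = 5000,
                interaction.depth = 2,   # at most floor(sqrt(NCOL(old_data))) from the step above
                shrinkage = 0.01,
                bag.fraction = 0.5,      # required for the OOB estimate used below
                verbose = FALSE)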

# find index for n trees with minimum CV error
best.iter <- gbm.perf(tree_mod, method="OOB", plot.it=TRUE, oobag.curve=TRUE, overlay=TRUE)
print(best.iter)
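The excerpt also stops before the prediction itself; under the same assumptions, the final step would look something like the sketch below (the 122.12-second figure quoted later comes from the article, not from running this code).

# Predict the 2023 winning time from the forecasted-weather row
pred_2023 <- predict(tree_mod, newdata = new_data, n.trees = best.iter)
pred_2023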

In this article, I chose a more accurate, but complex, black box model to predict the Kentucky Derby's winning race time. This is because I don't care about generating insights or winning buy-in or change management; rather, I want to use the model that is the most accurate so I can make a data-driven gamble. In most business cases you will give up accuracy for explainability; however, there are some instances (like this one) in which accuracy is the primary requirement of a model.

This prediction is based on the forecasted weather for Saturday, May 6th, taken on Thursday, May 4th, so obviously it should be taken with a grain of salt. As everyone knows, even with huge amounts of technology, predicting weather is very difficult. Using forecasted weather to predict the winning race time adds even more uncertainty. That being said, I will take either the over or the under that matches my predicted winning time of 122.12 seconds.

Read more here:
Using Machine Learning to Predict the 2023 Kentucky Derby ... - DataDrivenInvestor

From machine learning to robotics: WEF report predicts the most lucrative AI jobs – The Indian Express

It's happening already. Following Dropbox's move to lay off 500 employees as it shifts its focus to AI, IBM now plans to replace 7,800 jobs with AI technology and pause hiring for roles that could be automated. Company CEO Arvind Krishna stated that most back-office positions, such as HR and accounting, will be replaced.

Layoffs due to AI were inevitable, but amid lingering job losses, new jobs are also being created. A report by the World Economic Forum states that demand for AI and machine learning specialists will grow at the fastest rate over the next five years. The organisation has also listed a number of AI jobs that are expected to see massive growth in the coming years. Let's take a look at them.

AI and machine learning specialists: These are professionals who design, develop, and implement AI and ML systems and applications. They use various tools and techniques to analyse data, build models, and optimise algorithms. The demand for AI and machine learning specialists will grow at the fastest rate in the next five years, the WEF report says.

Big data specialists: They specialise in managing, analysing and interpreting large and complex data sets. They use cutting-edge technologies to organise, store, and retrieve vast amounts of information, turning it into valuable insights that can drive business decisions. They work with a variety of industries such as healthcare, finance, and technology, to help them understand and leverage the power of data.

Data engineers: They are responsible for the design, construction, and maintenance of the data infrastructure that supports an organisation's data management and analytics needs. They develop and manage data pipelines, work with large datasets, and ensure that data is available and accessible to those who need it. They also work with other data professionals to design and implement data architectures that meet the needs of the organisation.

Data analysts and scientists: These are experts who collect, process, and interpret large and complex datasets to generate insights and solutions for various problems and domains. They use statistical methods, programming languages, and visualisation tools to manipulate and communicate data. Data analysts and scientists are expected to see a 32% growth in demand by 2023.

Apart from the aforementioned jobs listed by the World Economic Forum, here's a list of other jobs AI is expected to create in the near future.

AI trainers: They are responsible for teaching machines to learn from data effectively. They also help to ensure that the AI models accurately interpret the data, providing businesses with valuable insights that can drive informed decisions.

AI ethicists: They use their expertise to ensure that AI systems are developed and deployed responsibly. They also identify potential ethical concerns related to privacy, fairness, and transparency, and work to address them through policy and guidelines.

AI user experience designers: They create interfaces and experiences that are intuitive and user-friendly for AI-driven products and services. They also work to ensure that users can easily interact with AI systems, making their experiences more enjoyable and productive.

AI security analysts: They focus on ensuring the safety and integrity of AI-driven solutions. They also identify potential threats, vulnerabilities, and attacks that could compromise AI systems and develop strategies to mitigate them.

Robotics engineers: They design, build, and program autonomous machines that can perform a wide range of tasks, from assembly line work to surgical procedures. By incorporating AI capabilities such as computer vision and natural language processing, they create intelligent machines that can work alongside humans in new and exciting ways.

Of course, these are just a few examples of the new jobs that AI is expected to create. As AI continues to evolve and become more integrated into various industries, it's likely that even more new job opportunities will emerge.


First published on: 03-05-2023 at 19:39 IST

Follow this link:
From machine learning to robotics: WEF report predicts the most lucrative AI jobs - The Indian Express