Archive for the ‘Machine Learning’ Category

Podcast: Machine Learning and Education – The Badger Herald

Jeff Deiss 0:00 Greetings, this is Jeff, director of the Badger Herald podcast. Today we have a very exciting episode: we're talking with Professor Kangwook Lee of the Electrical and Computer Engineering Department at the University of Wisconsin-Madison. We're going to talk about his research on deep learning and recent developments in machine learning, and also a little bit about his influence on a popular test prep service called Riiid.

So, I originally saw your name in a New York Times article about Riiid, a test prep service started by YJ Jang that uses deep learning to guide students toward more accurate test prep and overall academic success. But we can get into that a little bit later. First, would you like to introduce yourself and give a little background on your life?

Lee 1:18 All right, hi, I'm Kangwook Lee. I'm an assistant professor in the ECE department here. I came here in fall 2019, so it's been about three and a half years since I joined. I've been enjoying it a lot, except for COVID, but everything is great. In terms of research areas, I mostly work on information theory, machine learning and deep learning. Before that, I did my master's and Ph.D. at Berkeley, and before that I did my undergraduate studies in Korea, where I grew up. So it's been a while since I came to the United States. I did go back to Korea for three years for my military service after my Ph.D. So yeah, happy to meet you and talk about my research.

Deiss 2:09 Of course, and that's the first question I have. With any topic related to machine learning or information theory, even as someone who studied this at a pretty basic level in school, it can be hard to wrap your head around some of these concepts. In layman's terms, can you describe some of your recent research to give our listeners a better sense of what you do here at UW-Madison?

Lee 2:32 Since I joined Madison, I have worked on three different research topics. The first one was: how much data do we need to rely on machine learning? There, I particularly studied the problem of recommendation, where we have data from clients or customers who provide ratings on different types of items. From that kind of partially observed data, if we want to make recommendations for future service, we should figure out how much data we need. So recommendation systems and algorithms were the first topic I worked on. The second topic is called trustworthy machine learning. By trustworthy machine learning, I mean that machine learning algorithms, in most cases, are not fair, or not robust, or not private; they can leak the private data that was used as training data. There are many issues like this, and people started looking at how to solve them and build more robust, more fair and more private algorithms. Those are the research topics I really liked working on in the last few years, and I still work on them. Recently, I have started working on another research topic: large models. By large models I mean things like GPT and diffusion models, which you must have heard about. They are becoming more and more popular, but we are lacking theory on how they work. So that's what I'm studying now.
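The recommendation setting Lee describes — predicting unseen entries of a partially observed user-item ratings matrix — can be illustrated with a minimal baseline predictor (global mean plus user and item offsets). This is a generic textbook sketch, not code from his research; all names and numbers are made up.

```python
# Baseline predictor for a partially observed ratings matrix.
# Observed ratings come as {(user, item): rating}; a missing entry is
# predicted as global mean + user offset + item offset.

def fit_baseline(ratings):
    mu = sum(ratings.values()) / len(ratings)  # global mean rating
    user_devs, item_devs = {}, {}
    for (u, i), r in ratings.items():
        user_devs.setdefault(u, []).append(r - mu)
        item_devs.setdefault(i, []).append(r - mu)
    b_u = {u: sum(v) / len(v) for u, v in user_devs.items()}  # user offsets
    b_i = {i: sum(v) / len(v) for i, v in item_devs.items()}  # item offsets
    return mu, b_u, b_i

def predict(model, user, item):
    mu, b_u, b_i = model
    return mu + b_u.get(user, 0.0) + b_i.get(item, 0.0)

observed = {("alice", "m1"): 5, ("alice", "m2"): 3,
            ("bob", "m1"): 4, ("carol", "m2"): 2}
model = fit_baseline(observed)
print(round(predict(model, "bob", "m2"), 2))  # -> 3.0
```

The research question he mentions — how much observed data is enough — asks how many entries of this matrix must be seen before such predictions become reliable.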

Deiss 4:18 Yeah, so I just wanted to ask: I often hear, not necessarily in academic papers but in the media, about how some of these large models, especially convoluted, complicated neural networks or deep learning algorithms, are described as a black box, where what the algorithm is actually doing with the data is unclear from the outside. Whereas with, say, a simple regression model, it's actually pretty easy to work out the math of what the algorithm is doing with the data. With a large model, is that the case? Can you describe a little bit about that black box problem that researchers have to deal with?

Lee 4:57 The black box aspect actually applies to a more general class; let's say all of deep learning can be called kind of black box. I think that's half correct, half incorrect. Half incorrect in the sense that when we design those models, we have a particular goal: we want them to behave in a certain way. For instance, even if we call GPT mostly or largely black-box-ish, we still design the systems and algorithms such that they are good at predicting the next word. That's not something that just came out of the box; we designed them so that they predict the next word well, and that's what we are seeing in ChatGPT and other GPT models. So in terms of the operation or the final objective, they are doing what the people who designed them wanted them to do. It's less of a black box in that sense. However, how it actually works that well, I think that's the mysterious part. We couldn't predict how well it would work, but somehow it worked much better than people expected. Explaining why that's the case is an interesting research question, and that's what makes it a little black-box-ish. What's also very interesting to me, when it comes to GPT and really large language models, is that there are more mysterious things happening. Going back to the first aspect, there are some interesting behaviors that people didn't intend to design, things like in-context learning or few-shot learning. Basically, when you use GPT, you provide a few examples to the model, and the model tries to learn some patterns from the examples provided, which is a bit beyond what people used to expect from the model. So the model has some new properties or behaviors that we didn't design.

Deiss 7:00 Yes, and I want to get back to ChatGPT for another perspective in a little bit, but one thing I saw that you were recently researching is the straggler problem in machine learning. As far as I know, it's where a certain part of the system, I don't know if node is the correct term, is so deficient that it brings down the performance of the algorithm as a whole. Can you describe a little bit about what the straggler problem is and the research you're doing on it?

Lee 7:29 Yeah. The straggler problem is a term that describes a situation where you have a large cluster and the entire cluster is working on a particular task jointly. If one of the nodes or machines within the cluster starts performing badly, producing wrong output or behaving more slowly than the others, the entire system either gets wrong answers or becomes very slow. So the straggler problem basically means that you have a big system consisting of many workers, and when a few workers become very slow or erroneous, the entire system becomes bad. That's the phenomenon, or the problem. This problem was first observed in large data centers like Google's or Facebook's about a decade ago; they reported that a few stragglers were making their entire data centers really slow and really bad in terms of performance. So we started working on how to fix these problems using more principled approaches, like information and coding theory, which are very relevant to large-scale machine learning systems, because large-scale machine learning requires cluster training, distributed training, that kind of thing. That's how it's connected to distributed machine learning.
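The coding-theory idea Lee alludes to can be sketched in miniature. Suppose two workers compute A1·x and A2·x; a third "parity" worker computes (A1+A2)·x. Then any two finished workers suffice, so one straggler can be ignored. This is a toy illustration of coded computation in general, not his specific scheme.

```python
# Straggler-tolerant matrix-vector multiply via a parity worker.

def matvec(rows, x):
    return [sum(a * b for a, b in zip(row, x)) for row in rows]

A1 = [[1, 2], [3, 4]]
A2 = [[0, 1], [1, 0]]
# Parity worker's matrix: elementwise sum A1 + A2.
A3 = [[a + b for a, b in zip(r1, r2)] for r1, r2 in zip(A1, A2)]
x = [1, 1]

# Suppose worker 2 straggles: we only receive results from workers 1 and 3.
y1, y3 = matvec(A1, x), matvec(A3, x)
y2 = [c - a for a, c in zip(y1, y3)]  # recover A2*x from the parity result
print(y2)  # -> [1, 1], same as computing matvec(A2, x) directly
```

The encoding costs one extra worker but removes the wait on the slowest machine, which is the trade-off studied in coded distributed computing.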

Deiss 8:57 Very interesting stuff. I want to pivot away from your research for a little bit and talk about how I originally heard your name. Like I said in the beginning, I saw a New York Times article about a test prep service, and YJ Jang, who started Riiid, said he was inspired by you to use deep learning in his startup, in whatever software he was originally creating. What is your relationship with him, and how did you influence him to utilize deep learning?

Lee 9:25 Sure. He's a friend of mine. He texted me the link to the article, and I was really interested to see it. I met him about 10 years ago, when I was a student at Berkeley. He was also a student at Berkeley, but we didn't know each other. We both participated in a startup competition over a weekend, so we drove down to San Jose, where the competition was happening. I didn't know him, so I was finding some other folks there, and we created a demo and gave a pitch. We won second place; he won first place.

Deiss 10:09 Wow.

Lee 10:10 So I was talking to him: hey, where are you from? And he said he was from Berkeley. I'm from Berkeley too, so I got to know him from there. I knew he was a really good businessman back then. We came back to Berkeley, started talking more and more, and had the idea of founding a startup. We spent about six months developing business ideas and building some demos. It was also related to education, so it was only slightly different from what they are working on now. But eventually we found that the business was really difficult to run, so we gave up. After that, he started his own business, and he started asking me: hey, I have this interesting problem, and I think machine learning could play a big role here. He started sharing his business idea, and that was the time when I was working on machine learning, in particular on recommendation systems. I was able to find the connection between recommendation systems and the problem they are working on: students are spending so much time on test prep, and they waste so much time working on something they already know. Efficient test prep is no different from not wasting time watching something that's not for you on Netflix. That's the point where I started sharing this kind of idea with him. And in fact, deep learning was naturally being used for recommendation systems. All these ideas I shared with him, and he made a great business out of it.

Deiss 11:54 Yes, definitely. Obviously, test prep services like this are one way machine learning and deep learning models could actually help educators. But in the media, it's all about ChatGPT; every day there's some new news about it. There was actually a panel here at UW-Madison recently about students potentially using it to cheat in ways people didn't think possible before, like having it write your essay for you. As an educator, someone connected to the education system here, do you think that these chatbots pose a threat to traditional methods of teaching?

Lee 12:32 In my opinion, I would say no. I don't see much difference from the moment we started having access to, say, calculators, or MATLAB, or Python. Those are things we still exercise in elementary school: we are supposed to do 12 plus 13 or 10 minus 5, and we're still doing it. Of course, kids can go home, use a calculator and cheat, but we don't care, because unless you're going to rely on those machines and devices to do all of your work, you have to do it on your own sometimes, and you have to understand the principles behind those tasks. For instance, essay writing is the biggest issue right now with ChatGPT. You can always use ChatGPT without knowing anything about essay writing, and I think it's going to get way better this year. However, if you decide not to learn how to write essays, you end up not knowing something that's really important in your life. So eventually people will choose to learn it anyway, and not cheat. In terms of how to fairly grade them, that's the problem. Yeah, I think grading is the issue, not the entire education system.

Deiss 14:01 Yes, that's kind of the thing. I thought a similar thing: if a student is really good, and they want to improve and get that good grade on the final exam, whatever it is, they're going to learn what they need to learn. But when it comes to grading individual assignments, if something can write your essay for you, it throws the whole book out the window: how do I grade things if I can't tell whether someone wrote this by themselves over three days or put it into a chatbot? Regardless of ChatGPT kind of taking over the media and public discourse around machine learning, I often joke with my friends: if we think ChatGPT is cool, I don't know what Google has been cooking up in the back for 10 years. Who knows what's going to be here over the next decade? So in your opinion, are there more interesting developments in machine learning right now that people can expect to see, and if so, what do you think they are?

Lee 14:56 Yeah, but before we move on: I think Google also has a lot of interesting techniques and models, but they are just slower in terms of releasing and adapting them. So we'll see; I think the recent announcement of Bard is super interesting, and we'll get to see more and more coming like that. Anyway, talking about other interesting matters: other than large models, what also interests me is diffusion models, which I guess most people have heard about, like DALL-E 2, where you provide a text prompt and the model draws something for you. That was more or less a fun activity, because you couldn't do much with a text-to-image model. But the fundamental technique has been applied to many different domains, and now it's being used not just for images but for audio, music, 3D assets and other things, and it's going wider and wider. We will probably see a moment where these things become really powerful and are used basically everywhere. I don't think we will need to draw any diagrams by hand. When you create a PowerPoint, you will just need to type how you think it should look, and it should be able to draw everything for you. And for any design problem, say web design or product design, things are going to be very different. Yeah.

Deiss 16:35 Yes. I guess just to wrap it up: people like to kind of fearmonger about a lot of this stuff, saying this is going to destroy the job market and everyone's job is going to be automated away. That's just one thing I hear, but people do have concerns about the prevalence of machine learning that's emerging in our lives. Do you have any concerns about what's going on right now in the world of machine learning, or do you think people might be a little too pessimistic?

Lee 17:03 There are certainly, I will say, some jobs that are going to be less useful than they are now. That's clearly a concern. However, most jobs out there, I think, can benefit from these models and tools: people's productivity will improve, and they can probably make more money if they know how to use these tools well. However, take, for instance, concept artists or designers, talking about these diffusion models. At some point these kinds of automated models could become really good, doing almost as good a job as what those artists are doing right now. That's the point where it gets really tricky, because we are going to see two different markets. Right now, if you go to a pottery market, there are handmade potteries and factory-made potteries, and no one can distinguish them, to be honest. Yeah, handmade potteries are even more unique: they have slightly different coloring, and they actually have little defects that make them look even more unique and beautiful than the factory-made ones. But back in the day, we used to appreciate factory-made pottery: no defects, completely symmetric, something humans couldn't make. I think we are going that way, because now models are going to be better at making perfect, flawless architectures and designs, and probably what we will do as human designers and artists is add a little bit of, I wouldn't call it flaws or defects, but something that won't look like what machines can make. So maybe those two markets will emerge, and maybe both will survive forever, like the pottery market. So I don't know, I cannot predict what will happen, but I'm still optimistic.

Deiss 19:05 Awesome. I think that's a good way to end it on a high note. Thank you for coming to talk with me today on the Badger Herald podcast, and I'm excited to see what you do next in your research.

Lee 19:14 All right. Thank you. It was great talking to you.

Deiss 19:15 Thank you so much.


A.I. and machine learning are about to have a breakout moment in finance – Fortune

Good morning,

There's been a lot of discussion about the use of artificial intelligence and the future of work. Will it replace workers? Will human creativity be usurped by bots? How will A.I. be incorporated into the finance function? These are just some of the questions organizations will face.

I asked Sayan Chakraborty, copresident at Workday (sponsor of CFO Daily), who also leads the product and technology organization, for his perspective on striking a balance between tech and human capabilities.

Workday's approach to A.I. and machine learning (ML) is "to enhance people, not replace them," Chakraborty tells me. "Our approach ensures humans can effectively harness A.I. by intelligently applying automation and providing supporting information and recommendations, while keeping humans in control of all decisions." He continues: "We believe that technology and people, working together, can allow businesses to strengthen competitive advantage, be more responsive to customers, deliver greater economic and social value, and generate more meaning and purpose for individuals in their work."

Workday, a provider of enterprise cloud applications for finance and HR, has been building and delivering A.I. and ML to customers for nearly a decade, according to Chakraborty. He holds a seat on the National Artificial Intelligence Advisory Committee (NAIAC), which advises the White House on policy issues related to A.I. (And as much as I pressed, Chakraborty is not at liberty to discuss NAIAC efforts or speak for the committee, he says.) But he did share that generative A.I. continues to be a growing part of policy discussions both in the U.S. and in Europe, which has embraced a risk-based approach to A.I. governance.

Tech's future in finance

Chakraborty's Workday colleague Terrance Wampler, group general manager for the Office of the CFO at Workday, has further thoughts on how A.I. will impact finance. "If you can automate transaction processes, that means you reduce risk because you reduce manual intervention," Wampler says. Finance chiefs are also looking for the technology to help accelerate data-based decision-making and recommendations for the company, as well as play a role in training people with new skills, he says.

Consulting firm Gartner recently made three predictions on financial planning and analysis (FP&A) and controller functions and the use of technology:

By 2025, 70% of organizations will use data-lineage-enabling technologies including graph analytics, ML, A.I., and blockchain as critical components of their semantic modeling.

By 2027, 90% of descriptive and diagnostic analytics in finance will be fully automated.

By 2028, 50% of organizations will have replaced time-consuming bottom-up forecasting approaches with A.I.

Workday thinks about and implements A.I. and ML differently than other enterprise software companies, Wampler says. I asked him to explain. Enterprise resource planning (ERP) is a type of software that companies use to manage day-to-day business activities like accounting and procurement. What makes Workday's ERP for finance and HR different is that A.I. and ML are embedded into the platform, he says. "So, it's not like the ERP is just using an A.I. or ML program. It is actually an A.I. and ML construct." And having ML built into the foundation of the system means there's quicker adaptation of new ML applications when they're added. For example, Workday Financial Management allows for faster automation of high-volume transactions, he says.

ML gets better the more you use it, and Workday has over 60 million users representing about 442 billion transactions a year, according to the company. So ML improves at a faster rate. The platform also allows you to use A.I. predictively. Let's say an FP&A team has its budget for the year. "Using ML, they predictively identify reasons why they would meet that budget," he says. And Workday works on a single cloud-based database for both HR and financials, so you have all the information in one place. For quite some time, the company has been using large language models, the technology that has enabled generative A.I., Wampler says. Workday will continue to look into use cases where generative A.I. can add value, he says.

It will definitely be interesting to have a front-row seat as technology in the finance function continues to evolve over the next decade.

Sheryl Estrada
sheryl.estrada@fortune.com

Upcoming event: The next Fortune Emerging CFO virtual event, "Addressing the Talent Gap with Advanced Technologies," presented in partnership with Workday (a CFO Daily sponsor), will take place from 11 a.m.-12 p.m. EST on April 12. Matt Heimer, executive editor of features at Fortune, and I will be joined by Katie Rooney, CFO at Alight Solutions, and Andrew McAfee, cofounder and codirector of MIT's Initiative on the Digital Economy and principal research scientist at MIT Sloan School of Management. Click here to learn more and register.

"The race to cloud: Reaching the inflection point to long-sought value," a report by Accenture, finds that over the past two years there's been a surge in cloud commitment, with more than 86% of companies reporting an increase in cloud initiatives. To gauge how companies today are approaching the cloud, Accenture asked them to describe the current state of their cloud journeys. Sixty-eight percent said they still consider their cloud journeys incomplete. About a third of respondents (32%) see their cloud journeys as complete and are satisfied with their abilities to meet current business goals. However, 41% acknowledge their cloud journeys are ongoing and continue to evolve to meet changing business needs. The findings are based on a global survey of 800 business and IT leaders in a variety of industries.

"The workforce well-being imperative," a new report by Deloitte, explores three factors that have a prominent impact on well-being in today's work environment: leadership behaviors at all levels, from a direct supervisor to the C-suite; how the organization and jobs are designed; and the ways of working across organizational levels. Deloitte refers to these as "work determinants of well-being."

Lance Tucker was promoted to CFO at Papa John's International, Inc. (Nasdaq: PZZA). Tucker succeeds David Flanery, who will retire from Papa John's after 16 years with the company. Flanery will continue at the company through May, during a transition period. Tucker, 42, has served as Papa John's SVP of strategic planning and chief of staff since 2010. He has 20 years of finance and management experience, including previous manager and director of finance roles at Papa John's from 1994 to 1999. Before Papa John's, Tucker was CFO of Evergreen Real Estate, LLC.

Narayan Menon was named CFO at Matillion, a data productivity cloud company. Menon brings over 25 years of experience in finance and operations. Most recently, he served as CFO of Vimeo Inc., where he helped raise multiple rounds of funding and took the company public in 2021. He's also held senior executive roles at Prezi, Intuit, and Microsoft, and served as an advisory board member for the Rutgers University Big Data program.

"This was a bank that was an outlier."

Federal Reserve Chair Jerome Powell said this of Silicon Valley Bank in a press conference following the Fed's decision to hike interest rates 0.25%, Yahoo Finance reported. Powell referred to the bank's high percentage of uninsured deposits and its large investment in longer-duration bonds. "These are not weaknesses that are there at all broadly through the banking system," he said.


Crypto AI Announces Its Launch, Using AI Machine Learning to … – GlobeNewswire

LONDON, UK, March 23, 2023 (GLOBE NEWSWIRE) -- Crypto AI ($CAI), an AI-powered NFT generator that uses machine learning algorithms to create unique digital assets, has announced its official launch in March 2023. The project aims to revolutionize the NFT space by combining the power of artificial intelligence and machine learning.

Crypto AI ($CAI) is a software application that generates NFTs through a proprietary algorithm that creates unique digital assets. These assets can then be sold on various NFT marketplaces or used as part of a larger project.

Discover What Crypto AI Does

Crypto AI Strives to Disrupt the NFT and ChatGPT Space Using Artificial Intelligence and Machine Learning.

Martin Weiner, the CEO of Crypto AI, stated, "We are excited to announce the official launch of Crypto AI, an AI-powered NFT generator that uses machine learning algorithms to create unique digital assets. Our goal is to disrupt the NFT space by offering a product that can generate truly unique NFTs that stand out in the marketplace."

Weiner went on to explain the key features of Crypto AI that set it apart from other NFT generators. "What sets Crypto AI apart is the power of our proprietary algorithm. Our algorithm uses advanced machine learning techniques to create unique digital assets that are truly one-of-a-kind. Our AI-powered NFT generator is not only faster than traditional methods, but it is also more accurate and efficient."

Crypto AI aims to offer a new way for artists and creators to monetize their work through NFTs. The project believes that AI-powered NFTs will help increase the value of digital assets and make them more accessible to a broader audience.

Weiner added, "We believe that AI-powered NFTs have the potential to revolutionize the art world by making it more inclusive and accessible to a wider audience. Our platform offers a new way for artists and creators to monetize their work and showcase it to the world."

Crypto AI is also committed to sustainability and plans to use renewable energy sources for its operations. The project believes that it is essential to minimize the environmental impact of its operations and is actively exploring ways to reduce its carbon footprint.

"We understand the importance of sustainability, and we are committed to minimizing our environmental impact. We plan to use renewable energy sources for our operations and explore ways to reduce our carbon footprint," Weiner stated.

Crypto AI's launch is highly anticipated by the NFT community, and the project has already gained significant interest from artists and collectors worldwide. The project's innovative approach to NFT creation and its commitment to sustainability have made it stand out in a crowded marketplace.

About Crypto AI

Crypto AI ChatGPT Bot is an AI-powered bot that assists users in their conversations with automated and intelligent responses. We use natural language processing and machine learning algorithms to generate meaningful and relevant responses to user queries.

AI App on

https://cai.codes/artist

https://cai.codes/chat

Social Links

Twitter: https://twitter.com/CryptoAIbsc

Telegram: https://t.me/CryptoAI_eng

Medium: https://medium.com/@CryptoAI

GitHub: https://github.com/crypto-ai-git

Media Contact

Brand: Crypto AI

E-mail: team@cai.codes

Website: https://cai.codes

SOURCE: Crypto AI


Machine learning may guide use of neoadjuvant therapy for … – Healio

March 22, 2023

2 min read

Chang J, et al. Machine learning-based investigation of prognostic indicators for oncologic outcome of pancreatic ductal adenocarcinoma. Presented at: Society of Surgical Oncology Annual Meeting; March 22-25, 2023; Boston.

Disclosures: Chang reports no relevant financial disclosures. One researcher reports funding from AngioDynamics, Checkmate Pharmaceuticals, Optimum Therapeutics and Regeneron for unrelated projects or clinical trials.


Machine learning algorithms can help predict positive resection margin and lymph node metastases among patients with pancreatic ductal adenocarcinoma, according to study results.

The approach yielded greater positive predictive values than CT scans for both variables, findings presented at the Society of Surgical Oncology Annual Meeting showed.

"This hopefully can give providers the ability to identify patients with resectable pancreatic cancer who may benefit from neoadjuvant therapies," researcher Jeremy Chang, MD, MS, surgery resident at University of Iowa Hospitals, said during a press conference.

Pancreatic cancer is the third leading cause of cancer-related death, with a disproportionately high mortality rate compared with incidence due to most patients being diagnosed at advanced stages.

Approximately 15% to 20% of cases are deemed curable with surgery, according to study background. However, up to 80% of patients who undergo surgery develop local or distant recurrence, with key risk factors including lymph node metastasis, positive margins after surgery, larger tumor size and no receipt of chemotherapy.

"A recent novel notion is there may be patients with resectable tumors at time of diagnosis who would actually benefit from neoadjuvant therapy or chemoradiation before surgery," Chang said. "The question now is, how do we find who those patients are?"

Chang and colleagues conducted a pilot study to assess the potential of machine learning, which uses algorithms to learn and recognize patterns from input data, to predict lymph node metastases or positive resection margins from preoperative scans.

Researchers used a 3-D convolutional neural network, optimized to process pixel or image data.

"The network can be divided into three segments and 17 layers," Chang said. "The first input layer consists of a CT image, followed by 12 layers of feature extraction, and then four layers of classification or output."
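The 17-layer layout Chang describes (one input layer, 12 feature-extraction layers, four classification layers) can be written down as a rough layer list. The specific layer types below (3-D convolution blocks with pooling, then dense layers) are illustrative assumptions; the talk gives only the segment counts.

```python
# Hypothetical sketch of the described 3-D CNN layout:
# 1 input layer + 12 feature-extraction layers + 4 classification layers = 17.
layers = ["input_ct_volume"]
for block in range(4):  # 4 blocks of conv/conv/pool = 12 feature layers
    layers += [f"conv3d_{block}a", f"conv3d_{block}b", f"pool3d_{block}"]
layers += ["flatten", "dense_1", "dense_2", "softmax_output"]  # classification

print(len(layers))  # -> 17
```

Counting layers this way makes the quoted "three segments and 17 layers" arithmetic explicit: 1 + 12 + 4 = 17.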

The cohort included adults diagnosed with pancreatic ductal adenocarcinoma who underwent pancreatectomy at University of Iowa Hospitals between 2015 and 2021. All patients had viable preoperative CT and postoperative pathology.

The analysis included 79 patients with a combined 480 CT images. The margin portion of the study also included 31 patients with unresectable locally advanced disease who served as positive controls.

Researchers divided patients into a training group, which allowed the algorithm to learn and develop its pattern recognition, and a validation group.

The lymph node status portion of the study included a training group of 59 patients with a combined 340 images, and a validation group of 20 patients with a combined 140 images.

Results of a per-patient analysis showed a sensitivity of 100% (95% CI, 80-100) and specificity of 60% (95% CI, 23-93).

Researchers reported a prediction accuracy of 90%, a positive predictive value of 88% (95% CI, 66-88) and a negative predictive value of 100% (95% CI, 44-100).
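All five reported figures follow from a single per-patient confusion matrix. The counts used below (15 true positives, 2 false positives, 0 false negatives, 3 true negatives among the 20 validation patients) are a reconstruction from the reported percentages, not numbers stated in the article:

```python
def diagnostic_metrics(tp, fp, fn, tn):
    """Standard per-patient classification metrics from a confusion matrix."""
    return {
        "sensitivity": tp / (tp + fn),               # true-positive rate
        "specificity": tn / (tn + fp),               # true-negative rate
        "accuracy": (tp + tn) / (tp + fp + fn + tn),
        "ppv": tp / (tp + fp),                       # positive predictive value
        "npv": tn / (tn + fn),                       # negative predictive value
    }

# Assumed counts, reconstructed from the reported percentages:
m = diagnostic_metrics(tp=15, fp=2, fn=0, tn=3)
# sensitivity 1.00, specificity 0.60, accuracy 0.90, PPV 15/17 ~ 0.88, NPV 1.00
```

Under that reconstruction the arithmetic matches the report: sensitivity 15/15 = 100%, specificity 3/5 = 60%, accuracy 18/20 = 90%, PPV 15/17 = 88%, NPV 3/3 = 100%.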

The margin status portion of the study included a training group of 83 patients with a combined 629 images, as well as a validation group of 27 patients with a combined 252 images.

Results showed a prediction accuracy of 81%, a positive predictive value of 80% (95% CI, 64-98) and a negative predictive value of 82% (95% CI, 59-94).

"For context, the positive predictive value of CT scans (the most common modality for pancreatic cancer diagnosis and assessment) is 73% for identifying positive nodes and 68% for determining whether resection margins will be positive," Chang said.

"Future directions for this study will include increasing the size of the training and testing cohorts to increase generalizability," Chang said. "We're also planning to use this technology to develop a prospective clinical trial to help stratify patients for neoadjuvant treatment."


Read the original:
Machine learning may guide use of neoadjuvant therapy for ... - Healio

Unlock the Next Wave of Machine Learning with the Hybrid Cloud – The New Stack

Machine learning is no longer about experiments. Most industry-leading enterprises have already seen dramatic successes from their investments in machine learning (ML), and there is near-universal agreement among business executives that building data science capabilities is vital to maintaining and extending their competitive advantage.

The bullish outlook is evident in the U.S. Bureau of Labor Statistics' predictions regarding growth of the data science career field: employment of data scientists is projected to grow 36% from 2021 to 2031, much faster than the average for all occupations.

The aim now is to grow these initial successes beyond the specific parts of the business where they had initially emerged. Companies are looking to scale their data science capabilities to support their entire suite of business goals and embed ML-based processes and solutions everywhere the company does business.

Vanguards within the most data-centric industries, including pharmaceuticals, finance, insurance, aerospace and others, are investing heavily. They are assembling formidable teams of data scientists with varied backgrounds and expertise to develop and place ML models at the core of as many business processes as possible.

More often than not, they are running headlong into the challenges of executing data science projects across the regional, organizational, and technological divisions that abound in every organization. Data is worthless without the tools and infrastructure to use it, and both are fragmented across regions and business units, as well as in cloud and on-premises environments.

Even when analysts and data scientists overcome the hurdle of getting access to data in other parts of the business, they quickly find that they lack effective tools and hardware to leverage the data. At best, this results in low productivity, weeks of delays, and significantly higher costs due to suboptimal hardware, expensive data storage, and unnecessary data transfers. At worst, it results in project failure, or not being able to initiate the project to begin with.

Successful enterprises are learning to overcome these challenges by embracing hybrid-cloud strategies. Hybrid cloud (the integrated use of on-premises and cloud environments) also encompasses multicloud, the use of cloud offerings from multiple cloud providers. A hybrid-cloud approach enables companies to leverage the best of all worlds.

They can take advantage of the flexibility of cloud environments, the cost benefits of on-premises infrastructure, and the ability to select best-of-breed tools and services from any cloud vendor and machine learning operations tooling. More importantly for data science, hybrid cloud enables teams to leverage the end-to-end set of tools and infrastructure necessary to unlock data-driven value everywhere their data resides.

It allows them to arbitrage the inherent advantages of different environments while preserving data sovereignty and providing the flexibility to evolve as business and organizational conditions change.

While many organizations try to cope with disconnected platforms spread across different on-premises and cloud environments, today the most successful organizations understand that their data science operations must be hybrid cloud by design. That is, they implement end-to-end ML platforms that support hybrid cloud natively and provide integrated capabilities that work seamlessly and consistently across environments.

In a recent Forrester survey of AI infrastructure decision-makers, 71% of IT decision-makers say hybrid cloud support by their AI platform is important for executing their AI strategy, and 29% say it's already critical. Further, 91% said they will be investing in hybrid cloud within two years, and 66% said they already had invested in hybrid support for AI workloads.

In addition to the overarching benefit of a hybrid-cloud strategy for data science (the ability to execute data science projects and implement ML solutions anywhere in your business), there are three key drivers accelerating the trend:

Data sovereignty: Regulatory requirements like GDPR are forcing companies to process data locally, with the threat of heavy fines, in more and more parts of the world. The EU Artificial Intelligence Act, which triages AI applications across three risk categories and calls for outright bans on applications deemed to be the riskiest, will go a step further than fines. Gartner predicts that 65% of the world's population will soon be covered by similar regulations.

Cost optimization: The size of ML workloads grows as companies scale data science because of the increasing number of use cases, larger volumes of data and the use of computationally intensive deep learning models. Hybrid-cloud platforms enable companies to direct workloads to the most cost-effective infrastructure; e.g., optimize utilization of an on-premises GPU cluster, and mitigate rising cloud costs.

Flexibility: Taking a hybrid-cloud approach allows for future-proofing to address the inevitable changes in business operations and IT strategy, such as a merger or acquisition involving a company that has a different tech stack, expansion to a new geography where your default cloud vendor does not operate or even a cloud vendor becoming a significant competitor.
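The cost-optimization driver above reduces, at its simplest, to a routing decision: compare the marginal cost of running a job on an underused on-premises GPU cluster against cloud rates, and send the workload wherever capacity is cheapest. A hypothetical sketch (the environment names, prices, and capacity figures are invented for illustration):

```python
from dataclasses import dataclass

@dataclass
class Environment:
    name: str
    cost_per_gpu_hour: float  # marginal cost; rates are illustrative
    free_gpus: int            # currently unused capacity

def route_workload(gpus_needed, environments):
    """Send the job to the cheapest environment with enough free GPUs."""
    eligible = [e for e in environments if e.free_gpus >= gpus_needed]
    if not eligible:
        raise RuntimeError("no environment has enough free capacity")
    return min(eligible, key=lambda e: e.cost_per_gpu_hour)

envs = [
    Environment("on-prem-cluster", cost_per_gpu_hour=0.40, free_gpus=8),
    Environment("cloud-a", cost_per_gpu_hour=2.10, free_gpus=64),
    Environment("cloud-b", cost_per_gpu_hour=1.80, free_gpus=32),
]
```

A small job lands on the cheap on-premises cluster; a job too large for it spills over to the cheaper of the two clouds, which is exactly the utilization-plus-overflow pattern the article describes.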

Implementing a hybrid-cloud strategy for ML is easier said than done. For example, no public cloud vendor offers more than token support for on-premises workloads, let alone support for a competitors cloud, and the range of tools and infrastructure your data science teams need scales as you grow your data science rosters and undertake more ML projects. Here are the three essential capabilities for which every business must provide hybrid-cloud support in order to scale data science across the organization:

Full data science life cycle coverage: From model development to deployment to monitoring, enterprises need data science tooling and operations to manage every aspect of data science at scale.

Agnostic support for data science tooling: Given the variety of ML and AI projects and the differing skills and backgrounds of the data scientists across your distributed enterprise, your strategy needs to provide hybrid cloud support for the major open-source data science languages and frameworks (and likely a few proprietary tools), not to mention the extensibility to support the host of new tools and methods that are constantly being developed.

Scalable compute infrastructure: More data, more use cases and more advanced methods require the ability to scale up and scale out with distributed compute and GPU support, but this also requires an ability to support multiple distributed compute frameworks since no single framework is optimal for all workloads. Spark may work perfectly for data engineering, but you should expect that youll need a data-science-focused framework like Ray or Dask (or even OpenMPI) for your ML model training at scale.
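The scale-out contract these frameworks share is map-shaped: submit many independent tasks, gather the results. Ray (`ray.remote`) and Dask (`client.map`) both expose this shape; the sketch below illustrates it with Python's standard library executor as a stand-in, since swapping in a distributed backend is precisely what those frameworks provide:

```python
from concurrent.futures import ThreadPoolExecutor

def featurize(record):
    """Stand-in for a per-record ML preprocessing or scoring step."""
    return record * record

def run_batch(records, max_workers=4):
    # Ray and Dask expose the same map-shaped contract: submit
    # independent tasks to a pool of workers, then gather results.
    # A distributed framework replaces this local executor with
    # workers spread across machines.
    with ThreadPoolExecutor(max_workers=max_workers) as pool:
        return list(pool.map(featurize, records))
```

For example, `run_batch(range(10))` fans the ten calls out across the worker pool and returns the results in order, just as a distributed map would across a cluster.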

Embedding ML models throughout your core business functions lies at the heart of AI-based digital transformation. Organizations must adopt a hybrid-cloud or equivalent multicloud strategy to expand beyond initial successes and deploy impactful ML solutions everywhere.

Data science teams need end-to-end, extensible and scalable hybrid-cloud ML platforms to access the tools, infrastructure and data they need to develop and deploy ML solutions across the business. Organizations need these platforms for the regulatory, cost and flexibility benefits they provide.

The Forrester survey notes that organizations that adopt hybrid cloud approaches to AI development are already seeing the benefits across the entire AI/ML life cycle, experiencing 48% fewer challenges in deploying and scaling their models than companies relying on a single cloud strategy. All evidence suggests that the vanguard of companies who have already invested in their data science teams and platforms are pulling even further ahead using hybrid cloud.

See original here:
Unlock the Next Wave of Machine Learning with the Hybrid Cloud - The New Stack