Podcast: Machine Learning and Education – The Badger Herald
Jeff Deiss 0:00 Greetings, this is Jeff, director of the Badger Herald podcast. Today we have a very exciting episode: we're talking with Professor Kangwook Lee of the Electrical and Computer Engineering Department at the University of Wisconsin-Madison. We're going to talk about his research on deep learning and recent developments in machine learning, and also a little bit about his influence on a popular test prep service called Riiid.
So, I originally saw your name in a New York Times article about Riiid, which is a test prep service started by YJ Jang that uses deep learning to essentially better guide students toward more accurate test prep and overall academic success. But we can get into that a little bit later. First, would you like to introduce yourself and give a little background on your life?
Lee 1:18 Alright, hi, I'm Kangwook Lee. Again, I'm an assistant professor in the ECE department here. I came here in fall 2019, so it's been about three and a half years since I joined. I've been enjoying it a lot, except for COVID, but everything is great. In terms of research area, I mostly work on information theory, machine learning and deep learning. Before that, I did my master's and Ph.D. at Berkeley, and before that, I did my undergraduate studies in Korea, where I grew up. So yeah, it's been a while since I came to the United States. I did go back to Korea for three years for my military service after my Ph.D., but yeah. So, happy to meet you guys and talk about my research.
Deiss 2:09 Of course, and that's the first question I have. With any topic related to machine learning or information theory, even as someone who studied this at a pretty basic level in school, it can be hard to wrap your head around some of these concepts. Maybe, in layman's terms, can you describe some of your recent research to give our listeners a better sense of what you do here at UW-Madison?
Lee 2:32 Since I joined Madison, I've worked on three different research topics. The first one was: how much data do we need to rely on machine learning? There, I particularly studied the problem of recommendation, where we have data from clients or customers who provide their ratings on different types of items. From that kind of partially observed data, if you want to make recommendations for their future service, we should figure out how much data we need. So that kind of recommendation system and algorithm was the number one topic I worked on. The second topic I worked on is called trustworthy machine learning. By trustworthy machine learning, I mean that machine learning algorithms, in most cases, are not fair, they are not robust, and they are not private; they tend to leak the private data that was used for training. There are many issues like this, and people started looking at how to solve them and build more robust, more fair, and more private algorithms. Those are the research topics I really liked working on in the last few years, and I still work on them. Recently, I have started working on another research topic called large models. By large models I mean things like GPT and diffusion models, which I guess you must have heard about. Those models are becoming more and more popular, but we are lacking theory about how they work. So that's what I'm trying to understand in this space.
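For readers who want to see the shape of the recommendation problem Lee describes, here is a minimal sketch of predicting missing ratings from partially observed data using simple matrix factorization. The ratings, latent dimension, and learning rate are all illustrative and are not taken from his work.

```python
# Toy matrix-factorization sketch (illustrative, not Lee's actual method):
# learn low-rank user/item factors from observed ratings, then fill in the gaps.
import numpy as np

rng = np.random.default_rng(0)

# 4 users x 5 items; 0 marks an unobserved rating.
R = np.array([
    [5, 3, 0, 1, 0],
    [4, 0, 0, 1, 0],
    [1, 1, 0, 5, 4],
    [0, 1, 5, 4, 0],
], dtype=float)
observed = R > 0

k = 2                                               # latent dimension
U = 0.1 * rng.standard_normal((R.shape[0], k))      # user factors
V = 0.1 * rng.standard_normal((R.shape[1], k))      # item factors
lr, reg = 0.01, 0.1

for _ in range(2000):
    for i, j in zip(*np.nonzero(observed)):
        err = R[i, j] - U[i] @ V[j]                 # error on an observed entry
        U[i] += lr * (err * V[j] - reg * U[i])
        V[j] += lr * (err * U[i] - reg * V[j])

print(np.round(U @ V.T, 1))  # predicted ratings, including the unobserved cells
```

The question Lee raises is, roughly, how many observed entries a procedure like this needs before the unobserved predictions can be trusted.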
Deiss 4:18 Yeah, so I just wanted to ask: I often hear, not necessarily in true academic papers but in the media, about how some of these large models, especially if they're convoluted, complicated neural networks or deep learning algorithms, get described as a black box, where what the algorithm is actually doing with the data inside is a little unclear from the outside. Whereas if you have a simple regression model, it's actually pretty easy to work out the math of what the algorithm is doing with the data. With a large model, is that the case? Can you describe a little bit about that black box problem that researchers have to deal with?
Lee 4:57 The black box aspect actually applies to a more general class; let's say all of deep learning, you can say they are kind of a black box. I think that's half correct, half incorrect. Half incorrect in the sense that when we design those models, we have a particular goal: we want them to behave a certain way. For instance, even if we call GPT mostly or largely black box-ish, we still design the systems and algorithms such that they are good at predicting the next word. That's not something that just came out of the box; we designed it so that it predicts the next word well, and that's what we are seeing in ChatGPT and other GPT models. So in terms of the operation or the final objective, they are doing what the people who designed them wanted them to do. It's less black box in that sense. However, how it actually works that well, I think that's the mysterious part. We couldn't expect how well it would work, but somehow it worked much better than what people expected. Explaining why that's the case is an interesting research question, and that's what makes it a little black box-ish. What's also very interesting to me is that when it comes to GPT and really large language models, there are more mysterious things happening. Going back to the first aspect, there are in fact some interesting behaviors that people didn't intend to design, things like in-context learning or few-shot learning. Basically, when you use GPT, you provide a few examples to the model, and the model tries to learn some patterns from the examples that are provided, which is a little bit beyond what people used to expect from the model. So the model has some new properties or behaviors that we didn't design.
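To make the in-context learning idea concrete, here is a tiny, hypothetical sketch of a few-shot prompt: a handful of labeled examples followed by a new query, with the pattern left for the language model to infer at inference time, with no weight updates. The task and examples are made up for illustration.

```python
# Build a few-shot prompt: the model sees input->label pairs and is asked to
# continue the pattern for a new input. Any large language model can consume this.
examples = [
    ("great movie, loved it", "positive"),
    ("boring and far too long", "negative"),
    ("the acting was wonderful", "positive"),
]
query = "I want my two hours back"

prompt = "\n".join(f"Review: {text}\nSentiment: {label}" for text, label in examples)
prompt += f"\nReview: {query}\nSentiment:"
print(prompt)  # a capable model typically completes this with "negative"
```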
Deiss 7:00 Yes, and I want to get back to ChatGPT from another perspective in a little bit, but one thing I saw that you were recently researching, which came up in interviews, is the straggler problem in machine learning. As far as I know, it's where a certain part of the system, I don't know if node is the correct term, is so deficient that it brings down the performance of the algorithm as a whole. Can you describe a little bit about what the straggler problem is and the research you're doing on it?
Lee 7:29 Yeah. The straggler problem is a term that describes the situation where you have a large cluster and the entire cluster is working on a particular task jointly. If one of the nodes or machines within the cluster starts performing badly, producing wrong output, or behaving more slowly than the others, then the entire system either gets wrong answers or becomes very slow. So the straggler problem basically means that you have a bigger system consisting of many workers, and if one or a few workers become very slow or erroneous, the whole system becomes bad. That's the phenomenon, or the problem. This problem was first observed in large data centers like Google or Facebook about a decade ago; they were reporting that a few stragglers could make their entire data center really slow and really bad in terms of performance. So we started working on how to fix these problems using more principled approaches like information and coding theory, which are very relevant to large-scale machine learning systems, because large-scale machine learning systems require cluster training, distributed training, that kind of stuff. That's how it's connected to distributed machine learning.
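As a rough illustration of the coding-theoretic flavor of this line of work (a toy sketch, not Lee's published algorithms), one can add a redundant, coded task so that a matrix-vector product can be recovered from any two of three workers, which means a single straggler cannot hold up the whole job.

```python
# Toy "coded computation" sketch: 3 workers, any 2 results suffice.
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((4, 3))
x = rng.standard_normal(3)

A1, A2 = A[:2], A[2:]                 # split the work between two workers
tasks = {
    "w1": A1,
    "w2": A2,
    "w3": A1 + A2,                    # worker 3 computes the redundant, coded piece
}
results = {w: M @ x for w, M in tasks.items()}   # in practice, done in parallel

# Suppose worker 2 straggles: recover its piece from the other two responses.
y1, y3 = results["w1"], results["w3"]
y2_recovered = y3 - y1
print(np.allclose(y2_recovered, A2 @ x))  # True: full answer without waiting for w2
```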
Deiss 8:57 Very interesting stuff. I want to pivot away from your research for a little bit and just talk about how I originally heard your name. Like I said in the beginning, I saw a New York Times article about a test prep service, Riiid, and YJ Jang, who started it, said he was inspired by you to use deep learning in his startup, in whatever software he was originally creating. What is your relationship with him, and how did you influence him to utilize deep learning?
Lee 9:25 Sure. He's a friend of mine. He texted me the link to the article, and I was really interested to see it. I met him about 10 years ago, when I was a student at Berkeley. He was also a student at Berkeley, but we didn't know each other. We both participated in a startup competition over a weekend. We drove down to San Jose, where the startup competition was happening, and since I didn't know him, I teamed up with some other folks there. We created a demo and gave a pitch. We won second place; he won first place.
Deiss 10:09Wow.
Lee 10:10 So I was talking to him: hey, where are you from? And he said he was from Berkeley, and I said I'm from Berkeley too. So I got to know him from there. I knew he was a really good businessman back then. Then we came back to Berkeley and started talking more and more, and we had some idea of starting a company. We spent about six months developing business ideas and also building some demos. It was also related to education, so it was slightly different from what they are working on now. But eventually, we found that the business was really difficult to run, so we gave up. After that, he started his own business, and he started asking me: hey, I have this interesting problem, and I think machine learning could play a big role here. He started sharing his business idea, and that was the time when I was working on machine learning, in particular on recommendation systems. I was able to find the connection between recommendation systems and the problem they are working on, which is that students are spending so much time on test prep, and they waste so much of it working on things they already know. Efficient test prep is no different from not wasting time watching something that's not for you on Netflix. So that's the point where I started sharing this kind of idea with him. And in fact, deep learning was already being used for recommendation systems. All these ideas I shared with him, and he made a great business out of it.
Deiss 11:54 Yes, definitely. Obviously, test prep services like this are some of the ways in which machine learning and deep learning models could actually help educators. But in the media, what I see all the time is ChatGPT; every day there's some new news about ChatGPT. And I think there was actually a panel here at UW-Madison recently about students potentially using it to cheat on things that they didn't think you could cheat on before, like having it write your essay for you. As an educator, or someone connected to the education system here, do you think that these chatbots pose a threat to traditional methods of teaching?
Lee 12:32 In my opinion, I would say no. I don't see much difference from the moment when we started having access to, say, calculators, or MATLAB, or Python. Those are things we still practice by hand in elementary school. In elementary school you are supposed to do 12 plus 13 or 10 minus 5, and you're still doing it. And of course, kids can go home, use a calculator, and cheat, but we don't care, because at some point, unless you're going to rely on those machines and devices to do your entire work, you have to do it on your own sometimes, and you also have to understand the principles behind those tasks. So, for instance, essay writing is the biggest issue right now with ChatGPT. You can always use ChatGPT without knowing anything about essay writing, and I think it's going to get way better this year. However, if you decide not to learn how to write essays, you end up not knowing something that's really important in your life. So eventually people will choose to learn it anyway, and not cheat. How to grade them fairly, though, that's the problem. Yeah, I think grading is the issue, not the entire education system breaking down.
Deiss 14:01 Yes, that's kind of the thing. In my opinion, I thought a similar thing: if a student is really good, and they want to improve, and they want that good grade on the final exam or whatever it is, they're going to learn what they need to learn. But when it comes to grading individual assignments, I feel that if something exists that can write your essay for you, it throws the whole book out the window: how do I know how to grade things if I can't tell whether someone wrote this by themselves over three days or put it into a chatbot? Regardless, with ChatGPT kind of taking over the media and public discourse around machine learning, I often joke with my friends and say, if we think ChatGPT is cool, I don't know what Google has been cooking up in the back for 10 years. Who knows what's going to be here over the next decade? So in your opinion, are there more interesting developments in machine learning right now that people can expect to see, and if so, what do you think they are?
Lee 14:56 Yeah, but before we move on, I think Google also has a lot of interesting techniques and models; they are just slower in terms of releasing them and adopting them. So we'll see. I think the recent announcement of Bard is super interesting, so we'll get to see more and more coming like that. Anyway, talking about other interesting matters: other than large models, what also interests me are diffusion models, which I guess most people have heard about, like DALL-E 2, where you provide a text prompt and the model draws something for you. That was more or less a fun activity, because you couldn't do much with a text-to-image model. But the fundamental technique has been applied to many different domains, and now it's being used not just for images, but for audio, music, and other things like 3D assets, and it's going wider and wider. We will probably see a moment where these things become really powerful and are used everywhere, basically. So I don't think we will need to draw any diagrams by hand. When you create a PowerPoint, you will just need to type whatever you think it should look like, and it should be able to draw everything for you. And for any design problem, I'll say, think about web design, product design, things are going to be very different. Yeah.
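For listeners who want to try the kind of text-to-image generation Lee describes, a minimal sketch follows, assuming the open-source Hugging Face diffusers library and a Stable Diffusion checkpoint; the specific model and prompt are illustrative, not anything discussed in the episode.

```python
# Illustrative text-to-image generation with an open-source diffusion model.
# Requires: pip install diffusers transformers torch, and a CUDA-capable GPU.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

image = pipe("a simple flowchart-style diagram of a recommendation system").images[0]
image.save("diagram.png")
```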
Deiss 16:35 Yes. I guess just to wrap it up: people like to kind of fear-monger about a lot of this stuff, like this is going to destroy the job market and everyone's going to be automated away. That's just one thing I hear, but people do have concerns about the prevalence of machine learning that's kind of emerging in our lives. Do you have any concerns about what's going on right now in the world of machine learning, or do you think people might be a little too pessimistic?
Lee 17:03 There are certainly, I will say, some jobs that are going to be less useful than they are now. That's clearly a concern. However, most jobs out there, I think, can benefit from these models and tools; their productivity will become better, and people can probably make more money if they know how to use these tools well. However, take, for instance, concept artists or designers, talking about these diffusion models. At some point, these kinds of automated models could become really good at doing the job almost as well as what they're doing right now, and that's the point where it's really tricky, because we are probably going to see two different markets. Right now, if you go to a pottery market, there are handmade potteries and factory-made potteries, and no one can distinguish them, to be honest. Yet handmade pottery is even more unique: it has slightly different coloring, and it actually has a few little defects that make handmade pottery look even more unique and beautiful than the factory-made ones. But back in the day, we used to appreciate factory-made pottery, with no defects and completely symmetric; that's what humans couldn't make. I think we are going that way, because now models are going to be better at making perfect, flawless architectures and designs, and probably what we will do as human designers and artists is have a little bit of, I wouldn't call it flaws or defects, but something that won't look like what machines can make. So maybe those two markets will emerge, and maybe those two markets will survive forever, like the pottery market. So I don't know, I cannot predict what will happen, but I'm still optimistic.
Deiss 19:05 Awesome. I think that's a good point to end on, a high note there. Thank you for coming to talk to me today on the Badger Herald podcast, and I'm excited to see what you do next in your research.
Lee 19:14All right. Thank you. It was great talking to you.
Deiss 19:15Thank you so much.