Reinforcement learning for the real world – TechTalks
This article is part of our reviews of AI research papers, a series of posts that explore the latest findings in artificial intelligence.
Labor- and data-efficiency remain two of the key challenges of artificial intelligence. In recent decades, researchers have proven that big data and machine learning algorithms reduce the need for providing AI systems with prior rules and knowledge. But machine learning, and more recently deep learning, have presented their own challenges, which require manual labor, albeit of a different nature.
Creating AI systems that can genuinely learn on their own with minimal human guidance remains a holy grail and a great challenge. According to Sergey Levine, assistant professor at the University of California, Berkeley, a promising direction of research for the AI community is self-supervised offline reinforcement learning.
This is a variation of the RL paradigm that is very close to how humans and animals learn by reusing previously acquired data and skills, and it can be a great boon for applying AI to real-world settings. In a paper titled "Understanding the World Through Action" and in a talk at the NeurIPS 2021 conference, Levine explained how self-supervised learning objectives and offline RL can help create generalized AI systems that can be applied to various tasks.
One common argument in favor of machine learning algorithms is their ability to scale with the availability of data and compute resources. Decades of work on developing symbolic AI systems have produced limited results. These systems require human experts and engineers to manually provide the rules and knowledge that define the behavior of the AI system.
The problem is that in some applications, the rules can be virtually limitless, while in others, they can't be explicitly defined.
In contrast, machine learning models can derive their behavior from data, without the need for explicit rules and prior knowledge. Another advantage of machine learning is that it can glean its own solutions from its training data, which are often more accurate than knowledge engineered by humans.
But machine learning faces its own challenges. Most ML applications are based on supervised learning and require training data to be manually labeled by human annotators. Data annotation poses severe limits to the scaling of ML models.
More recently, researchers have been exploring unsupervised and self-supervised learning, ML paradigms that obviate the need for manual labels. These approaches have helped overcome the limits of machine learning in some applications such as language modeling and medical imaging. But they're still faced with challenges that prevent their use in more general settings.
"Current methods for learning without human labels still require considerable human insight (which is often domain-specific!) to engineer self-supervised learning objectives that allow large models to acquire meaningful knowledge from unlabeled datasets," Levine writes.
Levine writes that the next objective should be to create AI systems that don't require manual labeling or the manual design of self-supervised objectives. These models should be able to distill a deep and meaningful understanding of the world and perform downstream tasks with robustness, generalization, and even a degree of common sense.
Reinforcement learning is inspired by intelligent behavior in animals and humans. Reinforcement learning pioneer Richard Sutton describes RL as the first computational theory of intelligence. An RL agent develops its behavior by interacting with its environment, weighing the punishments and rewards of its actions, and developing policies that maximize rewards.
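The interaction loop described above can be made concrete with a minimal tabular Q-learning sketch. This is a generic illustration of reward-maximizing RL, not code from Levine's work; the toy corridor environment and all names here are assumptions for illustration.

```python
import random

# Minimal tabular Q-learning sketch: an agent on a 5-cell corridor learns,
# by weighing rewards from its own actions, that stepping right leads to the
# single rewarding terminal cell.
N_STATES = 5
ACTIONS = (-1, +1)                  # step left or right
ALPHA, GAMMA, EPS = 0.5, 0.9, 0.1   # learning rate, discount, exploration

q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

def step(state, action):
    """Environment: reaching the last cell yields reward 1 and ends the episode."""
    nxt = max(0, state + action)
    if nxt >= N_STATES - 1:
        return N_STATES - 1, 1.0, True
    return nxt, 0.0, False

def greedy(s):
    """Pick the highest-value action, breaking ties randomly."""
    best = max(q[(s, x)] for x in ACTIONS)
    return random.choice([x for x in ACTIONS if q[(s, x)] == best])

random.seed(0)
for _ in range(200):                            # episodes
    s, done = 0, False
    while not done:
        # Epsilon-greedy: explore occasionally, otherwise exploit.
        a = random.choice(ACTIONS) if random.random() < EPS else greedy(s)
        s2, r, done = step(s, a)
        # Temporal-difference update toward reward plus discounted future value.
        target = r + (0.0 if done else GAMMA * max(q[(s2, x)] for x in ACTIONS))
        q[(s, a)] += ALPHA * (target - q[(s, a)])
        s = s2

# The learned policy: the greedy action in every non-terminal state.
policy = {s: greedy(s) for s in range(N_STATES - 1)}
print(policy)
```

The agent is never told "go right"; the policy of maximizing reward emerges from trial-and-error interaction alone, which is exactly what makes reward design and data collection the hard parts at real-world scale.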
RL, and more recently deep RL, have proven to be particularly efficient at solving complicated problems such as playing games and training robots. And there's reason to believe reinforcement learning can overcome the limits of current ML systems.
But before it does, RL must overcome its own set of challenges that limit its use in real-world settings.
"We could think of modern RL research as consisting of three threads: (1) getting good results in simulated benchmarks (e.g., video games); (2) using simulation + transfer; (3) running RL in the real world," Levine told TechTalks. "I believe that ultimately (3) is the most important thing, because that's the most promising approach to solve problems that we can't solve today."
Games are simple environments. Board games such as chess and go are closed worlds with deterministic environments. Even games such as StarCraft and Dota, which are played in real time and have a near-unlimited number of states, are much simpler than the real world. Their rules don't change. This is partly why game-playing AI systems have found very few applications in the real world.
On the other hand, physics simulators have seen tremendous advances in recent years. One of the popular methods in fields such as robotics and self-driving cars has been to train reinforcement learning models in simulated environments and then finetune the models with real-world experience. But as Levine explained, this approach is limited too, because the domains where we most need learning, the ones where humans far outperform machines, are also the ones that are hardest to simulate.
"This approach is only effective at addressing tasks that can be simulated, which is bottlenecked by our ability to create lifelike simulated analogues of the real world and to anticipate all the possible situations that an agent might encounter in reality," Levine said.
"One of the biggest challenges we encounter when we try to do real-world RL is generalization," Levine said.
For example, in 2016, Levine was part of a team that constructed an arm farm at Google with 14 robots all learning concurrently from their shared experience. They collected more than half a million grasp attempts, and it was possible to learn effective grasping policies in this way.
"But we can't repeat this process for every single task we want robots to learn with RL," he says. "Therefore, we need more general-purpose approaches, where a single ever-growing dataset is used as the basis for a general understanding of the world on which more specific skills can be built."
In his paper, Levine points to two key obstacles in reinforcement learning. First, RL systems require manually defined reward functions or goals before they can learn the behaviors that accomplish those goals. Second, reinforcement learning requires online experience and is not data-driven, which makes it hard to train RL models on large existing datasets. Most recent accomplishments in RL have relied on engineers at very wealthy tech companies using massive compute resources to generate immense amounts of experience instead of reusing available data.
Therefore, RL systems need solutions that can learn from past experience and repurpose their learnings in more generalized ways. Moreover, they should be able to handle the continuity of the real world. Unlike simulated environments, you cant reset the real world and start everything from scratch. You need learning systems that can quickly adapt to the constant and unpredictable changes to their environment.
In his NeurIPS talk, Levine compares real-world RL to Robinson Crusoe, the story of a man who is stranded on an island and learns to deal with unknown situations through inventiveness and creativity, using his knowledge of the world and continued exploration of his new habitat.
"RL systems in the real world have to deal with a lifelong learning problem, evaluate objectives and performance based entirely on realistic sensing without access to privileged information, and must deal with real-world constraints, including safety," Levine said. "These are all things that are typically abstracted away in widely used RL benchmark tasks and video game environments."
However, RL does work in more practical real-world settings, Levine says. For example, in 2018, he and his colleagues developed an RL-based robotic grasping system that attained state-of-the-art results with raw sensory perception. In contrast to static learned behaviors that choose a grasp point and then execute the desired grasp, in their method, the robot continuously updated its grasp strategy based on the most recent observations to optimize long-horizon grasp success.
"To my knowledge this is still the best existing system for grasping from monocular RGB images," Levine said. "But this sort of thing requires algorithms that are somewhat different from those that perform best in simulated video game settings: it requires algorithms that are adept at utilizing and reusing previously collected data, algorithms that can train large models that generalize, and algorithms that can support large-scale real-world data collection."
Levine's reinforcement learning solution includes two key components: unsupervised/self-supervised learning and offline learning.
In his paper, Levine describes self-supervised reinforcement learning as a system that can learn "behaviors that control the world in meaningful ways" and that provides "some mechanism to learn to control [the world] in as many ways as possible."
Basically, this means that instead of being optimized for a single goal, the RL agent should be able to achieve many different goals by computing counterfactuals, learning causal models, and obtaining a deep understanding of how actions affect its environment in the long term.
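One concrete self-supervised trick in this spirit is hindsight goal relabeling: whatever state a trajectory happens to end in is treated, after the fact, as a goal that trajectory achieved, turning unlabeled experience into goal-reaching training examples with no hand-designed reward. The sketch below is an illustration of that general idea, not code from Levine's paper; the toy random-walk environment is an assumption.

```python
import random

def random_trajectory(length=5):
    """A random walk on the integers, standing in for unlabeled experience."""
    state, states, actions = 0, [0], []
    for _ in range(length):
        a = random.choice((-1, +1))
        state += a
        states.append(state)
        actions.append(a)
    return states, actions

random.seed(0)
dataset = []
for _ in range(100):
    states, actions = random_trajectory()
    goal = states[-1]               # hindsight: pretend this was the goal all along
    # Each (state, goal) -> action pair becomes a supervised example of
    # "what to do in this state if you want to reach that goal".
    for s, a in zip(states[:-1], actions):
        dataset.append(((s, goal), a))

print(len(dataset), "relabeled examples from 100 unlabeled trajectories")
```

A goal-conditioned policy trained on such relabeled data learns to reach many different goals from the same experience, rather than being optimized for a single hand-specified reward.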
However, creating self-supervised RL models that can solve various goals would still require a massive amount of experience. To address this challenge, Levine proposes offline reinforcement learning, which makes it possible for models to continue learning from previously collected data without the need for continued online experience.
"Offline RL can make it possible to apply self-supervised or unsupervised RL methods even in settings where online collection is infeasible, and such methods can serve as one of the most powerful tools for incorporating large and diverse datasets into self-supervised RL," he writes.
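The key mechanical difference from standard RL is that learning happens entirely from a static dataset, with no further environment interaction. A tabular sketch of that setting, again purely illustrative (practical offline RL also needs corrections for distribution shift, which this toy omits), with an assumed corridor environment:

```python
import random

# Offline RL sketch: a value function is learned purely from a fixed dataset
# of (state, action, reward, next_state, done) tuples collected earlier by
# some behavior policy, with no new environment interaction during learning.
N_STATES, ACTIONS, GAMMA = 5, (-1, +1), 0.9

def step(s, a):
    """Toy corridor used only during the earlier data-collection phase."""
    nxt = max(0, min(N_STATES - 1, s + a))
    return nxt, float(nxt == N_STATES - 1), nxt == N_STATES - 1

# 1) A random behavior policy collected this dataset in the past.
random.seed(0)
dataset = []
for _ in range(2000):
    s = random.randrange(N_STATES - 1)
    a = random.choice(ACTIONS)
    s2, r, done = step(s, a)
    dataset.append((s, a, r, s2, done))

# 2) Offline phase: repeated Bellman backups over the static dataset only.
q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
for _ in range(50):
    for s, a, r, s2, done in dataset:
        target = r + (0.0 if done else GAMMA * max(q[(s2, x)] for x in ACTIONS))
        q[(s, a)] += 0.1 * (target - q[(s, a)])

# The recovered policy, extracted without ever touching the environment again.
policy = {s: max(ACTIONS, key=lambda a: q[(s, a)]) for s in range(N_STATES - 1)}
print(policy)
```

Even though the data came from a random policy, the offline learner distills a purposeful policy from it, which is the property that lets large previously collected datasets be reused.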
The combination of self-supervised and offline RL can help create agents that can create building blocks for learning new tasks and continue learning with little need for new data.
This is very similar to how we learn in the real world. For example, when you want to learn basketball, you use basic skills you learned in the past such as walking, running, jumping, handling objects, etc. You use these capabilities to develop new skills such as dribbling, crossovers, jump shots, free throws, layups, straight and bounce passes, eurosteps, dunks (if you're tall enough), etc. These skills build on each other and help you reach the bigger goal, which is to outscore your opponent. At the same time, you can learn from offline data by reflecting on your past experience and thinking about counterfactuals (e.g., what would have happened if you had passed to an open teammate instead of taking a contested shot). You can also learn by processing other data such as videos of yourself and your opponents. In fact, on-court experience is just part of your continuous learning.
In a paper, Yevgen Chebotar, one of Levine's colleagues, shows how self-supervised offline RL can learn policies for fairly general robotic manipulation skills, directly reusing data that the team had collected for another project.
"This system was able to reach a variety of user-specified goals, and also act as a general-purpose pretraining procedure (a kind of BERT for robotics) for other kinds of tasks specified with conventional reward functions," Levine said.
One of the great benefits of offline and self-supervised RL is learning from real-world data instead of simulated environments.
"Basically, it comes down to this question: is it easier to create a brain, or is it easier to create the universe? I think it's easier to create a brain, because it is part of the universe," he said.
This is, in fact, one of the great challenges engineers face when creating simulated environments. For example, Levine says, effective simulation for autonomous driving requires simulating other drivers, which requires having an autonomous driving system, which requires simulating other drivers, which requires having an autonomous driving system, etc.
"Ultimately, learning from real data will be more effective because it will simply be much easier and more scalable, just as we've seen in supervised learning domains in computer vision and NLP, where no one worries about using simulation," he said. "My perspective is that we should figure out how to do RL in a scalable and general-purpose way using real data, and this will spare us from having to expend inordinate amounts of effort building simulators."