Are We Overly Infatuated With Deep Learning? – Forbes
One of the factors often credited for the latest boom in artificial intelligence (AI) investment, research, and related cognitive technologies is the emergence of deep learning neural networks as an evolution of machine learning algorithms, along with the large volumes of big data and the computing power that make deep learning a practical reality. While deep learning has been extremely popular and has shown real ability to solve many machine learning problems, it is just one approach to machine learning (ML), one that, despite proven capability across a wide range of problem areas, remains one of many practical approaches. Increasingly, we're starting to see news and research showing the limits of deep learning's capabilities, as well as some of the downsides of the deep learning approach. So is people's enthusiasm for AI tied to their enthusiasm for deep learning, and is deep learning really able to deliver on many of its promises?
The Origins of Deep Learning
AI researchers have struggled to understand how the brain learns since the very beginning of the field of artificial intelligence. It comes as no surprise that, since the brain is primarily a collection of interconnected neurons, AI researchers sought to recreate the way the brain is structured through artificial neurons and connections of those neurons in artificial neural networks. All the way back in 1943, Warren McCulloch and Walter Pitts proposed the first threshold logic unit, an attempt to mimic the way biological neurons work. The McCulloch-Pitts model was just a proof of concept, but Frank Rosenblatt picked up on the idea in 1957 with the development of the Perceptron, which took the concept to its logical extent. While primitive by today's standards, the Perceptron was still capable of remarkable feats: recognizing written numbers and letters, and even distinguishing male from female faces. That was over 60 years ago!
Rosenblatt was so enthusiastic in 1958 about the Perceptron's promise that he remarked at the time that the perceptron is "the embryo of an electronic computer that [we expect] will be able to walk, talk, see, write, reproduce itself and be conscious of its existence." Sound familiar? However, the enthusiasm didn't last. AI researcher Marvin Minsky noted how sensitive the perceptron was to small changes in images, and how easily it could be fooled. Maybe the perceptron wasn't really that smart at all. Minsky and fellow AI researcher Seymour Papert basically took apart the whole perceptron idea in their book Perceptrons, making the claim that perceptrons, and neural networks like them, are fundamentally flawed in their inability to handle certain kinds of problems, notably non-linear functions. That is to say, it was easy to train a neural network like a perceptron to put data into classifications, such as male/female or types of numbers. For these simple neural networks, you can graph a bunch of data, draw a line, and say things on one side of the line are in one category while things on the other side are in a different category, thereby classifying them. But there's a whole bunch of problems where you can't draw lines like this, such as speech recognition or many forms of decision-making. These problems are not linearly separable, and Minsky and Papert proved that single-layer perceptrons are incapable of solving them.
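The linear-separability limit is easy to see in a few lines of code. Below is a minimal sketch (assuming NumPy) of Rosenblatt's perceptron learning rule: it learns the linearly separable AND function without trouble, but no setting of its weights can ever get XOR right.

```python
import numpy as np

def train_perceptron(X, y, epochs=100):
    """Classic perceptron learning rule: nudge the weights whenever
    the thresholded prediction disagrees with the target."""
    w = np.zeros(X.shape[1])
    b = 0.0
    for _ in range(epochs):
        for xi, target in zip(X, y):
            pred = int(xi @ w + b > 0)
            w += (target - pred) * xi
            b += (target - pred)
    return [int(xi @ w + b > 0) for xi in X]

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
print(train_perceptron(X, [0, 0, 0, 1]))  # AND is linearly separable: learns [0, 0, 0, 1]
print(train_perceptron(X, [0, 1, 1, 0]))  # XOR is not: the rule can never get all four right
```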
During this period, while neural network approaches to ML settled into being an afterthought in AI, other approaches to ML were in the limelight, including knowledge graphs, decision trees, genetic algorithms, similarity models, and other methods. In fact, during this period, IBM's purpose-built Deep Blue computer defeated Garry Kasparov in a 1997 chess match, the first computer to do so, using a brute-force alpha-beta search algorithm (so-called Good Old-Fashioned AI [GOFAI]) rather than new-fangled deep learning approaches. Yet even this approach to learning didn't go far, as some said that the system wasn't really intelligent at all.
Yet the neural network story doesn't end there. In 1986, AI researcher Geoff Hinton, along with David Rumelhart and Ronald Williams, published a research paper entitled "Learning representations by back-propagating errors." In this paper, Hinton and his co-authors detailed how you can use many hidden layers of neurons to get around the problems faced by perceptrons. With sufficient data and computing power, these layers can be trained to identify specific features in the data sets they classify, and as a group they can learn nonlinear functions, a capability related to what is known as the universal approximation theorem. The approach works by propagating errors backward from higher layers of the network to lower ones (backprop), expediting training. Now, if you have enough layers, enough data to train those layers, and sufficient computing power to calculate all the interconnections, you can train a neural network to identify and classify almost anything. Researcher Yann LeCun developed LeNet-5 at AT&T Bell Labs in 1998, recognizing handwritten digits on checks using an iteration of this approach known as Convolutional Neural Networks (CNNs), and researchers such as Yoshua Bengio and Jürgen Schmidhuber further advanced the field.
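A minimal sketch of that idea, again assuming NumPy: one hidden layer plus the backprop update is already enough to crack XOR, the function that defeated the single-layer perceptron. (The layer sizes, learning rate, and iteration count here are illustrative choices, not anything from the original papers.)

```python
import numpy as np

rng = np.random.default_rng(0)
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)      # XOR targets

W1, b1 = rng.normal(size=(2, 4)), np.zeros(4)        # hidden layer
W2, b2 = rng.normal(size=(4, 1)), np.zeros(1)        # output layer

for _ in range(5000):
    # Forward pass through the layers.
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    # Backward pass: push the output error down toward the input layer.
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= h.T @ d_out; b2 -= d_out.sum(axis=0)
    W1 -= X.T @ d_h;   b1 -= d_h.sum(axis=0)

print(out.round(2).ravel())  # typically close to [0, 1, 1, 0]
```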
Yet, just as things go in AI, research stalled when these early neural networks couldn't scale. Surprisingly, very little development happened until 2006, when Hinton re-emerged onto the scene with the ideas of unsupervised pre-training and deep belief nets. The idea here is to have a simple two-layer network whose parameters are trained in an unsupervised way, then stack new layers on top of it, training just that layer's parameters. Repeat for dozens, hundreds, even thousands of layers. Eventually you get a deep network with many layers that can learn and understand something complex. This is what deep learning is all about: using lots of layers of trained neural nets to learn just about anything, at least within certain constraints.
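A rough sketch of the greedy layer-by-layer recipe follows, using tied-weight autoencoders as a stand-in for the restricted Boltzmann machines Hinton actually used; the data, layer widths, and training settings are made up purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

def pretrain_layer(data, n_hidden, lr=0.1, steps=500):
    """Train one layer, unsupervised, to reconstruct its own input,
    then hand its learned codes up to the next layer."""
    W = rng.normal(scale=0.1, size=(data.shape[1], n_hidden))
    for _ in range(steps):
        h = sigmoid(data @ W)            # encode
        recon = sigmoid(h @ W.T)         # decode with tied weights
        d_recon = (recon - data) * recon * (1 - recon)
        d_h = (d_recon @ W) * h * (1 - h)
        W -= lr * (data.T @ d_h + d_recon.T @ h)
    return W, sigmoid(data @ W)

X = rng.random((100, 20))                # toy unlabeled data
codes, stack = X, []
for width in (16, 8, 4):                 # train a layer, freeze it, stack the next
    W, codes = pretrain_layer(codes, width)
    stack.append(W)
print([W.shape for W in stack])          # [(20, 16), (16, 8), (8, 4)]
```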
In 2009, Stanford researcher Fei-Fei Li and her team released ImageNet, a large database of millions of labeled images. The images were labeled with a hierarchy of classifications, from broad categories such as animal or vehicle down to very granular levels, such as husky or trimaran. ImageNet was paired with an annual competition, the ImageNet Large Scale Visual Recognition Challenge (ILSVRC), to see which computer vision system had the lowest classification and recognition error. In 2012, Geoff Hinton, Alex Krizhevsky, and Ilya Sutskever submitted their AlexNet entry, which had almost half the error rate of the nearest competitor. What made their approach win was that they moved from ordinary computers with CPUs to specialized graphics processing units (GPUs) that could train much larger models in reasonable amounts of time. They also introduced now-standard deep learning methods such as dropout, which reduces a problem called overfitting (when the network is trained too tightly on the example data and can't generalize to broader data), and the rectified linear unit (ReLU) activation to speed training. After this competition success, it seems everyone took notice, and deep learning was off to the races.
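To give a feel for those two tricks, here is a small sketch (NumPy again; the "inverted" dropout scaling shown is the common modern formulation rather than AlexNet's original test-time rescaling):

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(z):
    # Rectified linear unit: pass positives through, zero out negatives.
    # Its non-saturating gradient is what speeds up training.
    return np.maximum(0.0, z)

def dropout(a, p=0.5, training=True):
    # Randomly silence a fraction p of units during training so the
    # network can't over-rely on any single feature (fights overfitting);
    # at test time the layer is left untouched.
    if not training:
        return a
    mask = (rng.random(a.shape) >= p) / (1.0 - p)   # scale survivors up
    return a * mask

h = relu(rng.normal(size=(2, 8)))   # one layer's activations
print(dropout(h))                   # about half the units zeroed each pass
```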
Deep Learning's Shortcomings
The fuel that keeps the deep learning fires roaring is data and compute power. Specifically, large volumes of well-labeled data are needed to train deep learning networks. The more layers, the greater the learning power, but to train those layers you need data that is already well labeled. And since deep neural networks are primarily a huge bundle of calculations that must all be performed at the same time, you need a lot of raw computing power, specifically numerical computing power. Imagine tuning a million knobs at the same time to find the optimal combination that will make the system learn, based on millions of pieces of data being fed into the system. This is why neural networks were not practical in the 1950s but are today: we finally have lots of data, and lots of computing power to handle that data.
Deep learning is being applied successfully in a wide range of situations, such as natural language processing, computer vision, machine translation, bioinformatics, gaming, and many other applications where classification and pattern matching with this automatically tuned deep-neural-network approach work well. However, these strengths come with a number of disadvantages.
The most notable of these disadvantages is that, since deep learning networks consist of many layers, each with many interconnected nodes configured with different weights and other parameters, there's no way to inspect the network and understand how any particular decision, clustering, or classification is actually made. It's a black box, which means deep learning networks are inherently unexplainable. As many have written on the topic of Explainable AI (XAI), systems used to make decisions of significance need explainability to satisfy issues of trust, compliance, verifiability, and understandability. While DARPA and others are working on ways to explain deep learning neural networks, the lack of explainability remains a significant drawback for many.
The second disadvantage is that deep learning networks are really great at classification and clustering of information, but not very good at other decision-making or learning scenarios. Not every learning situation is a matter of classifying something into a category or grouping information into a cluster. Sometimes you have to deduce what to do based on what you've learned before. Deduction and reasoning are not the forte of deep learning networks.
As mentioned earlier, deep learning is also very data and resource hungry. One measure of a neural network's complexity is the number of parameters that need to be learned and tuned; for deep learning neural networks, there can be hundreds of millions of them. Training models requires a significant amount of data to adjust these parameters. For example, a speech recognition neural net often requires terabytes of clean, labeled data to train on. The lack of a sufficient, clean, labeled data set can hinder the development of a deep neural net for a given problem domain. And even if you have the data, you need to crunch through it to generate the model, which takes a significant amount of time and processing power.
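The arithmetic behind such parameter counts is simple to sketch. The fully connected layer sizes below are hypothetical, chosen only to show how quickly the numbers compound:

```python
# Hypothetical fully connected network: flattened 224x224 RGB input,
# two hidden layers, 1000 output classes.
layer_sizes = [224 * 224 * 3, 4096, 4096, 1000]

# Each dense layer holds (inputs x outputs) weights plus one bias per output.
params = sum(n_in * n_out + n_out
             for n_in, n_out in zip(layer_sizes, layer_sizes[1:]))
print(f"{params:,}")   # 637,445,096 -- over 600 million knobs to tune
```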
Another challenge of deep learning is that the models produced are very specific to a problem domain. If a model is trained on a certain dataset of cats, it will only recognize those cats; it can't be used to generalize to animals more broadly or to identify non-cats. While this is not a problem unique to deep learning approaches to machine learning, it can be particularly troublesome when factoring in the overfitting problem mentioned above. Deep learning neural nets can be so tightly constrained (fitted) to the training data that even small perturbations in the input images can lead to wildly inaccurate classifications. There are well-known examples of turtles being misrecognized as rifles, or polar bears being misrecognized as other animals, due to just small changes in the image data. Clearly, if you're using such a network in mission-critical situations, those mistakes would be significant.
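A toy sketch of how such a perturbation works, using a simple linear classifier as a stand-in for a deep network (all weights and inputs here are synthetic, and the attack direction follows the fast-gradient-sign idea):

```python
import numpy as np

rng = np.random.default_rng(0)
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

d = 1024                                   # a 32x32 "image", flattened
w = rng.normal(size=d) / np.sqrt(d)        # synthetic "trained" weights
x = np.clip(0.5 + 2 * w + 0.05 * rng.normal(size=d), 0, 1)  # confidently class 1

p_before = sigmoid(w @ x)

# Shift every pixel by a barely visible 0.1 in whichever direction most
# increases the loss: each change is individually tiny, but they line up
# against the model's weights across all pixels at once.
eps = 0.1
x_adv = np.clip(x - eps * np.sign(w), 0, 1)
p_after = sigmoid(w @ x_adv)

print(round(float(p_before), 2), round(float(p_after), 2))
# typically something like 0.9 vs 0.3: a small nudge flips the call
```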
Machine Learning is not (just) Deep Learning
Enterprises looking to use cognitive technologies in their business need to look at the whole picture. Machine learning is not just one approach, but rather a collection of different methods applicable in different scenarios. Some machine learning algorithms are very simple, using small amounts of data and an understandable logic or deduction path that's very suitable for particular situations, while others are very complex, using lots of data and processing power to handle more complicated situations. The key thing to realize is that deep learning isn't all of machine learning, let alone all of AI. Even Geoff Hinton, the "Einstein of deep learning," is starting to rethink core elements of deep learning and its limitations.
The key for organizations is to understand which machine learning methods are most viable for which problem areas, and how to plan, develop, deploy, and manage each machine learning approach in practice. Since enterprise AI, especially these more advanced cognitive approaches, is still gaining adoption, best practices for employing cognitive technologies successfully are still maturing.