Are We Overly Infatuated With Deep Learning? – Forbes
One of the factors often credited for the latest boom in artificial intelligence (AI) investment, research, and related cognitive technologies is the emergence of deep learning neural networks as an evolution of machine learning algorithms, along with the large volumes of big data and the computing power that make deep learning a practical reality. Deep learning has been extremely popular and has shown real ability to solve many machine learning problems, but it is still just one approach to machine learning (ML) among many practical alternatives, however capable it has proven across a wide range of problem areas. Increasingly, we're starting to see news and research showing the limits of deep learning's capabilities, as well as some of the downsides of the deep learning approach. So is people's enthusiasm for AI really tied to their enthusiasm for deep learning, and can deep learning deliver on its many promises?
The Origins of Deep Learning
AI researchers have struggled to understand how the brain learns since the very beginning of the field of artificial intelligence. Since the brain is primarily a collection of interconnected neurons, it comes as no surprise that AI researchers sought to recreate the brain's structure through artificial neurons connected together in artificial neural networks. Back in 1943, Warren McCulloch and Walter Pitts proposed the first threshold logic unit, an attempt to mimic the way biological neurons work. The McCulloch-Pitts model was just a proof of concept, but Frank Rosenblatt picked up the idea in 1957 with the development of the Perceptron, which took the concept to its logical extent. While primitive by today's standards, the Perceptron was still capable of remarkable feats: recognizing written numbers and letters, and even distinguishing male from female faces. That was over 60 years ago!
Rosenblatt was so enthusiastic in 1959 about the Perceptron's promise that he remarked at the time that the perceptron is "the embryo of an electronic computer that [we expect] will be able to walk, talk, see, write, reproduce itself and be conscious of its existence." Sound familiar? However, the enthusiasm didn't last. AI researcher Marvin Minsky noted how sensitive the perceptron was to small changes in images, and how easily it could be fooled. Maybe the perceptron wasn't really that smart at all. Minsky and fellow AI researcher Seymour Papert basically took apart the whole perceptron idea in their book Perceptrons, claiming that perceptrons, and neural networks like them, are fundamentally flawed in their inability to handle certain kinds of problems, notably non-linear functions. That is to say, it was easy to train a neural network like a perceptron to put data into classifications, such as male/female or types of numbers. For these simple neural networks, you can graph a bunch of data, draw a line, and say things on one side of the line are in one category while things on the other side are in a different category, thereby classifying them. But there is a whole class of problems where you can't draw lines like this, such as speech recognition or many forms of decision-making. These problems are not linearly separable, and Minsky and Papert proved that single-layer perceptrons cannot solve them.
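To make the limitation concrete, here is a minimal sketch (not from the original article) of the perceptron learning rule: it finds a separating line for AND, which is linearly separable, but no setting of its weights can ever get XOR fully right.

```python
# Minimal perceptron sketch (illustrative). The learning rule converges on
# AND, which a single line can separate, but never on XOR, which cannot be
# separated by any line.
import numpy as np

def train_perceptron(X, y, epochs=100, lr=0.1):
    w = np.zeros(X.shape[1])
    b = 0.0
    for _ in range(epochs):
        for xi, target in zip(X, y):
            pred = 1 if xi @ w + b > 0 else 0
            w += lr * (target - pred) * xi
            b += lr * (target - pred)
    return w, b

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
y_and = np.array([0, 0, 0, 1])   # linearly separable: perceptron succeeds
y_xor = np.array([0, 1, 1, 0])   # not linearly separable: perceptron fails

for name, y in [("AND", y_and), ("XOR", y_xor)]:
    w, b = train_perceptron(X, y)
    preds = (X @ w + b > 0).astype(int)
    print(name, "accuracy:", (preds == y).mean())  # AND reaches 1.0; XOR never does
```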
During this period, while neural network approaches to ML faded into an afterthought in AI, other approaches to ML were in the limelight, including knowledge graphs, decision trees, genetic algorithms, similarity models, and other methods. In fact, during this period, IBM's purpose-built Deep Blue computer defeated Garry Kasparov in a chess match, the first computer to do so, using a brute-force alpha-beta search algorithm (so-called Good Old-Fashioned AI [GOFAI]) rather than new-fangled deep learning approaches. Yet even this approach to learning didn't go far, as some said the system wasn't really intelligent at all.
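For readers curious what that "brute-force" search looks like, below is a bare-bones sketch of minimax with alpha-beta pruning. The `ListGame` interface and toy tree are hypothetical stand-ins; Deep Blue's actual search was vastly more elaborate.

```python
# Minimax with alpha-beta pruning (illustrative sketch). `game` is a
# stand-in interface with moves(), apply(), and a static evaluate() score.
def alphabeta(state, depth, alpha, beta, maximizing, game):
    moves = game.moves(state)
    if depth == 0 or not moves:
        return game.evaluate(state)
    if maximizing:
        value = float("-inf")
        for m in moves:
            value = max(value, alphabeta(game.apply(state, m), depth - 1,
                                         alpha, beta, False, game))
            alpha = max(alpha, value)
            if alpha >= beta:        # opponent will avoid this branch: prune
                break
        return value
    else:
        value = float("inf")
        for m in moves:
            value = min(value, alphabeta(game.apply(state, m), depth - 1,
                                         alpha, beta, True, game))
            beta = min(beta, value)
            if alpha >= beta:
                break
        return value

# Toy demo: a depth-2 game tree given as nested lists, leaves are scores.
tree = [[3, 5], [2, 9], [0, 7]]

class ListGame:
    def moves(self, state):
        return range(len(state)) if isinstance(state, list) else []
    def apply(self, state, m):
        return state[m]
    def evaluate(self, state):
        return state                 # leaves are numeric scores

print(alphabeta(tree, 2, float("-inf"), float("inf"), True, ListGame()))  # 3
```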
Yet the neural network story doesn't end there. In 1986, AI researcher Geoff Hinton, along with David Rumelhart and Ronald Williams, published a research paper entitled "Learning representations by back-propagating errors." In this paper, Hinton and his co-authors detailed how you can use many hidden layers of neurons to get around the problems faced by perceptrons. With sufficient data and computing power, these layers can be trained to identify specific features in the data sets they classify, and as a group they can learn nonlinear functions, a capability formalized in the universal approximation theorem. The approach works by propagating errors backward from higher layers of the network to lower ones (backprop), expediting training. Now, if you have enough layers, enough data to train those layers, and sufficient computing power to calculate all the interconnections, you can train a neural network to identify and classify almost anything. Researcher Yann LeCun developed LeNet-5 at AT&T Bell Labs in 1998, recognizing handwritten digits on checks using an iteration of this approach known as Convolutional Neural Networks (CNNs), and researchers such as Yoshua Bengio and Jürgen Schmidhuber further advanced the field.
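A minimal sketch of that idea, assuming nothing beyond NumPy: a single hidden layer of sigmoid units trained by backpropagating errors learns XOR, the very function that stumped the perceptron. The layer sizes, learning rate, and iteration count here are arbitrary illustrative choices.

```python
# Backprop sketch: one hidden layer lets a network learn XOR (illustrative).
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1 = rng.normal(size=(2, 4)); b1 = np.zeros(4)   # hidden layer
W2 = rng.normal(size=(4, 1)); b2 = np.zeros(1)   # output layer
sigmoid = lambda z: 1 / (1 + np.exp(-z))

for _ in range(5000):
    # forward pass
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    # backward pass: propagate the output error to the earlier layer
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= 0.5 * h.T @ d_out;  b2 -= 0.5 * d_out.sum(axis=0)
    W1 -= 0.5 * X.T @ d_h;    b1 -= 0.5 * d_h.sum(axis=0)

print(np.round(out.ravel(), 2))  # typically converges toward [0, 1, 1, 0]
```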
Yet, as things go in AI, research stalled when these early neural networks couldn't scale. Surprisingly little development happened until 2006, when Hinton re-emerged onto the scene with the ideas of unsupervised pre-training and deep belief nets. The idea here is to train a simple two-layer network in an unsupervised way, then stack new layers on top of it, training only each new layer's parameters in turn. Repeat for dozens, hundreds, even thousands of layers. Eventually you get a deep network with many layers that can learn and represent something complex. This is what deep learning is all about: using lots of layers of trained neural nets to learn just about anything, at least within certain constraints.
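Here is a hedged sketch of that greedy layer-wise recipe, using tiny linear autoencoders in place of the restricted Boltzmann machines Hinton actually used; the data, layer widths, and hyperparameters are all illustrative assumptions.

```python
# Greedy layer-wise pretraining, sketched with tied-weight linear
# autoencoders (illustrative; Hinton's deep belief nets used RBMs): train
# one layer unsupervised to reconstruct its input, freeze it, then train
# the next layer on the codes it produces.
import numpy as np

rng = np.random.default_rng(0)

def train_autoencoder_layer(X, n_hidden, epochs=200, lr=0.01):
    """Fit W so that X @ W @ W.T approximates X; return codes and weights."""
    W = rng.normal(scale=0.1, size=(X.shape[1], n_hidden))
    for _ in range(epochs):
        H = X @ W                      # encode
        X_hat = H @ W.T                # decode (tied weights)
        E = X_hat - X                  # reconstruction error
        grad = 2 * (X.T @ E @ W + E.T @ X @ W) / len(X)
        W -= lr * grad
    return X @ W, W

X = rng.normal(size=(256, 32))         # stand-in unlabeled data
codes, stack = X, []
for width in (16, 8, 4):               # stack layers one at a time
    codes, W = train_autoencoder_layer(codes, width)
    stack.append(W)
print([W.shape for W in stack])        # [(32, 16), (16, 8), (8, 4)]
```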
In 2009, Stanford researcher Fei-Fei Li and colleagues released ImageNet, a large database of millions of labeled images. The images were labeled with a hierarchy of classifications, from broad categories such as animal or vehicle down to very granular levels such as husky or trimaran. Starting in 2010, ImageNet was paired with an annual competition, the Large Scale Visual Recognition Challenge (LSVRC), to see which computer vision system had the lowest classification and recognition error. In 2012, Geoff Hinton, Alex Krizhevsky, and Ilya Sutskever submitted their AlexNet entry, which had almost half the error rate of previous winning entries. What made their approach win was that they moved from ordinary computers with CPUs to specialized graphics processing units (GPUs) that could train much larger models in reasonable amounts of time. They also introduced now-standard deep learning methods such as dropout, which reduces a problem called overfitting (when the network is trained so tightly on the example data that it can't generalize to broader data), and the rectified linear unit (ReLU) activation to speed training. After the success of their entry, it seems everyone took notice, and deep learning was off to the races.
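The two training tricks named above are simple enough to sketch directly. The following minimal NumPy version is an illustration, not AlexNet's actual code: ReLU as an activation, and inverted dropout as a regularizer.

```python
# ReLU and (inverted) dropout in minimal form (illustrative sketch).
import numpy as np

rng = np.random.default_rng(0)

def relu(z):
    return np.maximum(0.0, z)           # rectified linear unit: max(0, z)

def dropout(h, p_drop=0.5, training=True):
    if not training:
        return h                         # at test time, use all units
    mask = rng.random(h.shape) >= p_drop # randomly silence units
    return h * mask / (1.0 - p_drop)     # rescale so expected value matches

h = rng.normal(size=(2, 5))              # stand-in hidden activations
print(relu(h))
print(dropout(relu(h)))
```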
Deep Learning's Shortcomings
The fuel that keeps the deep learning fires roaring is data and compute power. Specifically, large volumes of well-labeled data sets are needed to train deep learning networks. The more layers, the greater the learning power, but to train those layers you need data that is already well labeled. And since deep neural networks are primarily a huge bundle of calculations that all have to be done at once, you need a lot of raw computing power, specifically numerical computing power. Imagine tuning a million knobs at the same time to find the optimal combination that makes the system learn from the millions of pieces of data being fed into it. This is why such networks were not feasible in the 1950s but are today: we finally have lots of data and lots of computing power to handle it.
Deep learning is being applied successfully in a wide range of situations, such as natural language processing, computer vision, machine translation, bioinformatics, and gaming, and in many other applications where classification, pattern matching, and this automatically tuned deep neural network approach work well. However, the same characteristics that make deep learning powerful also bring a number of disadvantages.
The most notable of these disadvantages is that because deep learning consists of many layers, each with many interconnected nodes, each configured with different weights and other parameters, there's no way to inspect a deep learning network and understand how any particular decision, clustering, or classification was actually made. It's a black box, which means deep learning networks are inherently unexplainable. As many have written on the topic of Explainable AI (XAI), systems used to make decisions of significance need explainability to satisfy requirements of trust, compliance, verifiability, and understandability. While DARPA and others are working on ways to explain deep learning neural networks, the lack of explainability remains a significant drawback for many.
The second disadvantage is that deep learning networks are really great at classification and clustering of information, but not very good at other decision-making or learning scenarios. Not every learning situation is a matter of classifying something into a category or grouping information into a cluster. Sometimes you have to deduce what to do based on what you've learned before. Deduction and reasoning are not a forte of deep learning networks.
As mentioned earlier, deep learning is also very data and resource hungry. One measure of a neural network's complexity is the number of parameters that need to be learned and tuned; for deep learning neural networks, there can be hundreds of millions of them. Training models requires a significant amount of data to adjust these parameters. For example, a speech recognition neural net often requires terabytes of clean, labeled data to train on. The lack of a sufficient, clean, labeled data set can hinder the development of a deep neural net for a given problem domain. And even if you have the data, you need to crunch through it to generate the model, which takes a significant amount of time and processing power.
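The parameter arithmetic is easy to see for a plain fully connected network: each layer contributes (inputs + 1 bias) times outputs weights, so a handful of hypothetical layer widths already lands in the hundreds of millions.

```python
# Quick parameter arithmetic for a fully connected net (hypothetical sizes).
layer_widths = [4096, 8192, 8192, 8192, 1000]

# each layer: (inputs + 1 bias) * outputs trainable parameters
params = sum((n_in + 1) * n_out
             for n_in, n_out in zip(layer_widths, layer_widths[1:]))
print(f"{params:,} trainable parameters")   # 175,989,736 -- about 176 million
```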
Another challenge of deep learning is that the models produced are very specific to a problem domain. If a model is trained on a certain dataset of cats, it will only recognize those cats; it can't be used to generalize about animals or to identify non-cats. While this is not a problem unique to deep learning approaches to machine learning, it can be particularly troublesome when combined with the overfitting problem mentioned above. Deep learning neural nets can be so tightly fitted to the training data that even small perturbations in an image can lead to wildly inaccurate classifications. There are well-known examples of turtles being misrecognized as guns, or polar bears being misrecognized as other animals, due to just small changes in the image data. Clearly, if you're using such a network in mission-critical situations, those mistakes could be significant.
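The perturbation effect can be shown on a toy model. The sketch below applies the idea behind the fast gradient sign method, a standard adversarial-example technique (not necessarily the one behind the turtle/gun result), to a hand-set logistic classifier; the weights, input, and step size are all hypothetical, and in high-dimensional images far smaller per-pixel steps accumulate into the same effect.

```python
# Toy adversarial perturbation on a logistic model (illustrative sketch).
import numpy as np

w = np.array([4.0, -8.0, 2.0])           # hypothetical trained weights
x = np.array([0.3, -0.2, 0.8])           # a confidently classified input

def prob(x):
    return 1 / (1 + np.exp(-(w @ x)))     # P(class 1)

# For a linear model the gradient of the score w.r.t. the input is just w;
# nudge every feature in the direction that hurts the model most.
epsilon = 0.4
x_adv = x - epsilon * np.sign(w)          # push toward the opposite class

print(f"original:  P(class 1) = {prob(x):.2f}")    # ~0.99
print(f"perturbed: P(class 1) = {prob(x_adv):.2f}")  # ~0.23: label flips
```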
Machine Learning is not (just) Deep Learning
Enterprises looking to use cognitive technologies in their business need to look at the whole picture. Machine learning is not just one approach, but rather a collection of different approaches of various types that are applicable in different scenarios. Some machine learning algorithms are very simple, using small amounts of data and an understandable logic or deduction path that is very suitable for particular situations, while others are very complex and use lots of data and processing power to handle more complicated situations (a simple contrast is sketched below). The key thing to realize is that deep learning isn't all of machine learning, let alone all of AI. Even Geoff Hinton, the Einstein of deep learning, is starting to rethink core elements of deep learning and its limitations.
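As a sketch of that contrast, here is a one-rule "decision stump" fit to entirely made-up data: the whole model is a single human-readable threshold, the opposite extreme from millions of inscrutable weights.

```python
# A one-rule "decision stump" (illustrative): the fitted model is a single
# threshold, so its reasoning can be read directly off the output.
import numpy as np

def fit_stump(x, y):
    """Find the single threshold on x that best splits the labels y."""
    best = (None, -1.0)
    for t in np.unique(x):
        pred = (x > t).astype(int)
        acc = max((pred == y).mean(), ((1 - pred) == y).mean())
        if acc > best[1]:
            best = (t, acc)
    return best

x = np.array([1.2, 2.3, 3.1, 4.8, 5.5, 6.7])   # e.g. some lab measurement
y = np.array([0, 0, 0, 1, 1, 1])               # made-up labels
t, acc = fit_stump(x, y)
print(f"rule: predict 1 if x > {t}  (training accuracy {acc:.0%})")
```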
The key for organizations is to understand which machine learning methods are most viable for which problem areas, and how to plan, develop, deploy, and manage each machine learning approach in practice. Since enterprise adoption of AI, and especially of these more advanced cognitive approaches, is still growing, best practices for employing cognitive technologies successfully are still maturing.