Are We Overly Infatuated With Deep Learning? – Forbes
One of the factors often credited for the latest boom in artificial intelligence (AI) investment, research, and related cognitive technologies is the emergence of deep learning neural networks, along with the large volumes of big data and the computing power that make deep learning a practical reality. While deep learning has been extremely popular and has shown real ability to solve many machine learning problems, it is just one approach to machine learning (ML): one that has proven capable across a wide range of problem areas, but is still one of many practical approaches. Increasingly, we're starting to see news and research showing the limits of deep learning's capabilities, as well as some of the downsides of the deep learning approach. So is people's enthusiasm for AI tied to their enthusiasm for deep learning, and can deep learning really deliver on many of its promises?
The Origins of Deep Learning
AI researchers have struggled to understand how the brain learns since the very beginnings of the field of artificial intelligence. It comes as no surprise that, since the brain is primarily a collection of interconnected neurons, AI researchers sought to recreate the brain's structure through artificial neurons connected in artificial neural networks. All the way back in 1943, Walter Pitts and Warren McCulloch proposed the first threshold logic unit, an attempt to mimic the way biological neurons work. The Pitts and McCulloch model was just a proof of concept, but Frank Rosenblatt picked up on the idea in 1957 with the development of the Perceptron, which took the concept to its logical extent. While primitive by today's standards, the Perceptron was still capable of remarkable feats: it could recognize written numbers and letters, and even distinguish male from female faces. That was over 60 years ago!
Rosenblatt was so enthusiastic about the Perceptron's promise that in 1958 he remarked that the perceptron is "the embryo of an electronic computer that [we expect] will be able to walk, talk, see, write, reproduce itself and be conscious of its existence." Sound familiar? However, the enthusiasm didn't last. AI researcher Marvin Minsky noted how sensitive the perceptron was to small changes in images, and how easily it could be fooled. Maybe the perceptron wasn't really that smart at all. Minsky and fellow AI researcher Seymour Papert took apart the whole perceptron idea in their 1969 book Perceptrons, claiming that perceptrons, and neural networks like them, are fundamentally flawed in their inability to handle certain kinds of problems, notably nonlinear functions. That is to say, it was easy to train a neural network like a perceptron to put data into classifications, such as male/female or types of numbers. For these simple neural networks, you can graph a bunch of data, draw a line, and say that things on one side of the line are in one category while things on the other side are in a different category, thereby classifying them. But there's a whole class of problems where you can't draw lines like this, such as speech recognition or many forms of decision-making. These are nonlinear functions, which Minsky and Papert proved single-layer perceptrons incapable of solving.
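To make the linear-separability point concrete, here is a minimal Python/NumPy sketch (not from the original article) of a classic perceptron: it masters AND, which a single line can separate, but never converges on XOR, the textbook counterexample:

```python
import numpy as np

def train_perceptron(X, y, epochs=25, lr=1.0):
    """Classic perceptron rule: nudge the weights whenever a point is misclassified."""
    w = np.zeros(X.shape[1])
    b = 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            pred = 1 if xi @ w + b > 0 else 0
            w += lr * (yi - pred) * xi
            b += lr * (yi - pred)
    return w, b

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
and_y = np.array([0, 0, 0, 1])  # linearly separable: one line splits the classes
xor_y = np.array([0, 1, 1, 0])  # not linearly separable: no single line works

for name, y in (("AND", and_y), ("XOR", xor_y)):
    w, b = train_perceptron(X, y)
    preds = (X @ w + b > 0).astype(int)
    print(name, "accuracy:", (preds == y).mean())  # AND: 1.0, XOR: stuck at <= 0.75
```

No amount of extra training helps the XOR case; the model family itself cannot represent the answer, which was exactly Minsky and Papert's point.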
During this period, while neural network approaches to ML faded into an afterthought in AI, other approaches to ML were in the limelight, including knowledge graphs, decision trees, genetic algorithms, similarity models, and other methods. In fact, during this period, IBM's purpose-built Deep Blue computer defeated Garry Kasparov in a chess match, the first computer to do so, using a brute-force alpha-beta search algorithm (so-called Good Old-Fashioned AI [GOFAI]) rather than newfangled deep learning approaches. Yet even this approach didn't go far, as some said the system wasn't really intelligent at all.
Yet the neural network story doesn't end here. In 1986, AI researcher Geoff Hinton, along with David Rumelhart and Ronald Williams, published a research paper entitled "Learning representations by back-propagating errors." In this paper, Hinton and his co-authors detailed how you can use hidden layers of neurons to get around the problems faced by perceptrons. With sufficient data and computing power, these layers can be trained to identify specific features in the data sets they classify, and as a group they can learn nonlinear functions, a capability formalized in what is known as the universal approximation theorem. The approach works by propagating errors backward from higher layers of the network to lower ones (backprop), making training tractable. Now, if you have enough layers, enough data to train those layers, and sufficient computing power to calculate all the interconnections, you can train a neural network to identify and classify almost anything. Researcher Yann LeCun developed LeNet-5 at AT&T Bell Labs in 1998, recognizing handwritten digits on checks using an iteration of this approach known as Convolutional Neural Networks (CNNs), and researchers such as Yoshua Bengio and Jürgen Schmidhuber further advanced the field.
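A minimal sketch of how a single hidden layer plus backpropagation cracks the XOR problem that defeated the perceptron (Python/NumPy; the layer width, learning rate, and iteration count are arbitrary choices for illustration):

```python
import numpy as np

rng = np.random.default_rng(1)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)  # XOR, the perceptron's downfall

# One hidden layer of four tanh units; weights start small and random.
W1 = rng.normal(0, 1, (2, 4)); b1 = np.zeros(4)
W2 = rng.normal(0, 1, (4, 1)); b2 = np.zeros(1)

def sigmoid(z):
    return 1 / (1 + np.exp(-z))

lr = 0.5
for _ in range(5000):
    H = np.tanh(X @ W1 + b1)            # forward pass through the hidden layer
    out = sigmoid(H @ W2 + b2)
    d_out = out - y                     # error at the output (cross-entropy gradient)
    d_H = (d_out @ W2.T) * (1 - H**2)   # error propagated back to the hidden layer
    W2 -= lr * H.T @ d_out / len(X); b2 -= lr * d_out.mean(axis=0)
    W1 -= lr * X.T @ d_H / len(X);   b1 -= lr * d_H.mean(axis=0)

print(out.round(3).ravel())  # typically converges toward [0, 1, 1, 0]
```

The hidden layer gives the network a learned, nonlinear intermediate representation, which is precisely what a bare perceptron lacks.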
Yet, as things often go in AI, research stalled when these early neural networks couldn't scale. Surprisingly, very little development happened until 2006, when Hinton re-emerged onto the scene with the ideas of unsupervised pre-training and deep belief nets. The idea here is to have a simple two-layer network whose parameters are trained in an unsupervised way, then stack a new layer on top of it and train just that layer's parameters. Repeat for dozens, hundreds, even thousands of layers. Eventually you get a deep network with many layers that can learn and understand something complex. This is what deep learning is all about: using lots of layers of trained neural nets to learn just about anything, at least within certain constraints.
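A rough sketch of that greedy, layer-wise recipe, using simple tied-weight autoencoders in Python/NumPy rather than the restricted Boltzmann machines of Hinton's original deep belief nets (the layer sizes, learning rate, and data here are invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)

def train_autoencoder(X, n_hidden, epochs=200, lr=0.1):
    """Train a one-hidden-layer autoencoder on X; return the learned encoder."""
    n_in = X.shape[1]
    W = rng.normal(0, 0.1, (n_in, n_hidden))
    b = np.zeros(n_hidden)
    c = np.zeros(n_in)
    for _ in range(epochs):
        H = np.tanh(X @ W + b)          # encode
        R = H @ W.T + c                 # decode with tied weights
        err = R - X                     # reconstruction error
        dZ = (err @ W) * (1 - H**2)     # error pushed back through the encoder
        W -= lr * (X.T @ dZ + err.T @ H) / len(X)
        b -= lr * dZ.mean(axis=0)
        c -= lr * err.mean(axis=0)
    return W, b

# Greedy layer-wise stacking: each new layer trains on the representation
# produced by the layers already frozen beneath it.
X = rng.random((256, 32))
layers = []
rep = X
for size in (16, 8, 4):
    W, b = train_autoencoder(rep, size)
    layers.append((W, b))
    rep = np.tanh(rep @ W + b)          # feed this layer's codes to the next one
```

Each layer only ever solves a shallow, two-layer problem, which is what made training deep stacks feasible before modern end-to-end methods took over.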
In 2009, Stanford researcher Fei-Fei Li and her collaborators released ImageNet, a large database of millions of labeled images. The images were labeled with a hierarchy of classifications, from broad categories such as animal or vehicle down to very granular levels, such as husky or trimaran. Beginning in 2010, the ImageNet database was paired with an annual competition, the ImageNet Large Scale Visual Recognition Challenge (ILSVRC), to see which computer vision system had the lowest classification and recognition error. In 2012, Alex Krizhevsky, Ilya Sutskever, and Geoff Hinton submitted their AlexNet entry, which had almost half the error rate of the previous winning entries. What made their approach win was that they moved from ordinary computers with CPUs to specialized graphics processing units (GPUs) that could train much larger models in a reasonable amount of time. They also introduced now-standard deep learning methods such as dropout, which reduces a problem called overfitting (when the network is trained too tightly on the example data and can't generalize to broader data), and the rectified linear unit (ReLU) activation, which speeds up training. After their competition success, it seems everyone took notice, and deep learning was off to the races.
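Both tricks are easy to state in code. Below is a minimal Python/NumPy sketch of each (the 0.5 dropout rate matches common practice; everything else is illustrative and not taken from the AlexNet code):

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(z):
    # Rectified linear unit: cheap to compute, and gradients don't vanish
    # for positive activations, which speeds up training.
    return np.maximum(0.0, z)

def dropout(h, p=0.5, training=True):
    # Inverted dropout: randomly zero a fraction p of activations during
    # training so the network can't over-rely on any single unit (curbing
    # overfitting); at inference time the layer passes values through unchanged.
    if not training:
        return h
    mask = rng.random(h.shape) >= p
    return h * mask / (1.0 - p)

h = relu(rng.normal(size=(4, 8)))
print(dropout(h, p=0.5))
```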
Deep Learning's Shortcomings
The fuel that keeps the deep learning fires roaring is data and compute power. Specifically, large volumes of well-labeled data sets are needed to train deep learning networks. The more layers, the greater the learning power, but to train those layers you need data that is already well labeled. And since deep neural networks are primarily a huge bundle of calculations that must all be performed together, you need a lot of raw computing power, specifically numerical computing power. Imagine tuning a million knobs at the same time to find the optimal combination that will make the system learn, based on millions of pieces of data being fed into the system. This is why neural networks were not practical in the 1950s but are today: we finally have lots of data and lots of computing power to handle it.
Deep learning is being applied successfully in a wide range of situations, such as natural language processing, computer vision, machine translation, bioinformatics, and gaming, and in many other applications where classification, pattern matching, and this automatically tuned deep neural network approach work well. However, these advantages come with a number of disadvantages.
The most notable of these disadvantages is that, since deep learning consists of many layers, each with many interconnected nodes, each configured with different weights and other parameters, there's no way to inspect a deep learning network and understand how any particular decision, clustering, or classification is actually made. It's a black box, which means deep learning networks are inherently unexplainable. As many have written on the topic of Explainable AI (XAI), systems used to make decisions of significance need explainability to satisfy issues of trust, compliance, verifiability, and understandability. While DARPA and others are working on ways to explain deep learning neural networks, the lack of explainability is a significant drawback for many.
The second disadvantage is that deep learning networks are really great at classification and clustering of information, but not very good at other decision-making or learning scenarios. Not every learning situation is one of classifying something into a category or grouping information into a cluster. Sometimes you have to deduce what to do based on what you've learned before. Deduction and reasoning are not a forte of deep learning networks.
As mentioned earlier, deep learning is also very data and resource hungry. One measure of a neural network's complexity is the number of parameters that need to be learned and tuned; for deep learning neural networks, there can be hundreds of millions of parameters. Training models requires a significant amount of data to adjust these parameters. For example, a speech recognition neural net often requires terabytes of clean, labeled data to train on. The lack of a sufficient, clean, labeled data set can hinder the development of a deep neural net for that problem domain. And even if you have the data, you need to crunch it to generate the model, which takes a significant amount of time and processing power.
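To see how quickly parameters pile up, here is a hypothetical back-of-the-envelope count for fully connected layers (the layer sizes below are made up for illustration, not drawn from any real system):

```python
# Each fully connected layer contributes (inputs x outputs) weights
# plus one bias per output unit.
def count_params(sizes):
    return sum(n_in * n_out + n_out for n_in, n_out in zip(sizes, sizes[1:]))

print(count_params([784, 512, 512, 10]))          # small image classifier: ~670 thousand
print(count_params([100_000, 4096, 4096, 1000]))  # wide hypothetical layers: ~430 million
```

Every one of those numbers is a knob that gradient descent must tune, which is why both the data appetite and the compute bill grow so quickly.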
Another challenge of deep learning is that the models produced are very specific to a problem domain. If a model is trained on a certain dataset of cats, it will only recognize those cats; it can't generalize to animals in general or identify non-cats. While this is not a problem unique to deep learning approaches to machine learning, it can be particularly troublesome when factoring in the overfitting problem mentioned above. Deep learning neural nets can be so tightly constrained (fitted) to the training data that even small perturbations in the images can lead to wildly inaccurate classifications. There are well-known examples of turtles being misrecognized as guns, or polar bears being misrecognized as other animals, due to just small changes in the image data. Clearly, if you're using such a network in mission-critical situations, those mistakes would be significant.
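Those misclassification examples come from research on adversarial perturbations. As a toy illustration of the underlying idea, the following Python/NumPy sketch nudges each input feature of a random linear classifier a small step along the gradient of the loss; the model, data, and step size are all invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

# A toy logistic classifier with random weights and one random input.
w = rng.normal(size=100)
x = rng.normal(size=100)

def sigmoid(z):
    return 1 / (1 + np.exp(-z))

p_clean = sigmoid(w @ x)
y = 1.0 if p_clean > 0.5 else 0.0   # treat the model's clean call as the true label

# Nudge every feature a small step in the direction that increases the loss.
grad_x = (p_clean - y) * w          # gradient of the cross-entropy loss w.r.t. the input
x_adv = x + 0.25 * np.sign(grad_x)

print("clean confidence:    ", round(float(p_clean), 3))
print("perturbed confidence:", round(float(sigmoid(w @ x_adv)), 3))
```

Because every feature moves a little in a coordinated direction, the tiny per-feature changes add up and typically push the prediction across the decision boundary in this toy setting, mirroring how imperceptible pixel changes can flip an image classifier's label.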
Machine Learning is not (just) Deep Learning
Enterprises looking to use cognitive technologies in their business need to look at the whole picture. Machine learning is not just one approach, but rather a collection of different approaches of various types that are applicable in different scenarios. Some machine learning algorithms are very simple, using small amounts of data and an understandable logic or deduction path that's very suitable for particular situations, while others are very complex and use lots of data and processing power to handle more complicated situations. The key thing to realize is that deep learning isn't all of machine learning, let alone all of AI. Even Geoff Hinton, the "Einstein of deep learning," is starting to rethink core elements of deep learning and its limitations.
The key for organizations is to understand which machine learning methods are most viable for which problem areas, and how to plan, develop, deploy, and manage each machine learning approach in practice. Since enterprise adoption of AI, especially of these more advanced cognitive approaches, is still growing, best practices for employing cognitive technologies successfully are still maturing.