Are We Overly Infatuated With Deep Learning? – Forbes
One of the factors often credited for the latest boom in artificial intelligence (AI) investment, research, and related cognitive technologies is the emergence of deep learning neural networks, along with the large volumes of big data and the computing power that make deep learning a practical reality. While deep learning has been extremely popular and has shown real ability to solve many machine learning problems, it is just one of many practical approaches to machine learning (ML), albeit one that has proven capable across a wide range of problem areas. Increasingly, we're starting to see news and research showing the limits of deep learning's capabilities, as well as some of the downsides of the deep learning approach. So is people's enthusiasm for AI really tied to their enthusiasm for deep learning, and can deep learning deliver on its many promises?
The Origins of Deep Learning
AI researchers have struggled to understand how the brain learns since the very beginnings of the field of artificial intelligence. Since the brain is primarily a collection of interconnected neurons, it comes as no surprise that AI researchers sought to recreate the brain's structure through artificial neurons connected in artificial neural networks. As far back as 1943, Walter Pitts and Warren McCulloch built the first threshold logic unit, an attempt to mimic the way biological neurons work. The Pitts and McCulloch model was just a proof of concept, but Frank Rosenblatt picked up the idea in 1957 with the development of the Perceptron, which took the concept to its logical extent. While primitive by today's standards, the Perceptron was still capable of remarkable feats: recognizing written numbers and letters, and even distinguishing male from female faces. That was over 60 years ago!
Rosenblatt was so enthusiastic about the Perceptron's promise that he remarked in 1959 that "the perceptron is the embryo of an electronic computer that [we expect] will be able to walk, talk, see, write, reproduce itself and be conscious of its existence." Sound familiar? However, the enthusiasm didn't last. AI researcher Marvin Minsky noted how sensitive the perceptron was to small changes in images, and how easily it could be fooled. Maybe the perceptron wasn't really that smart at all. Minsky and fellow AI researcher Seymour Papert basically took apart the whole perceptron idea in their book Perceptrons, claiming that perceptrons, and neural networks like them, are fundamentally unable to handle certain kinds of problems, notably nonlinear functions. That is to say, it was easy to train a neural network like a perceptron to put data into classifications, such as male/female or types of numbers. For these simple neural networks, you can graph a bunch of data, draw a line, and say that things on one side of the line are in one category and things on the other side are in a different category, thereby classifying them. But there is a whole class of problems where you can't draw lines like this, such as speech recognition or many forms of decision-making. These are nonlinear functions, which Minsky and Papert proved single-layer perceptrons incapable of solving; the canonical example is the XOR function, whose two classes cannot be separated by any single straight line.
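To make the limitation concrete, here is a minimal Python sketch (the toy data and training loop are illustrative assumptions, not anything from Minsky and Papert) showing the classic perceptron learning rule failing to separate XOR:

import numpy as np

# XOR: no single straight line separates the 0s from the 1s.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
y = np.array([0, 1, 1, 0])

# Classic perceptron learning rule on a single layer.
w, b = np.zeros(2), 0.0
for _ in range(1000):
    for xi, yi in zip(X, y):
        pred = int(xi @ w + b > 0)
        w += (yi - pred) * xi
        b += (yi - pred)

print((X @ w + b > 0).astype(int), "vs", y)  # never matches all four targets

No matter how long the loop runs, at least one of the four points stays misclassified, which is exactly the linear-separability wall Minsky and Papert described.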
During this period, while neural network approaches to ML settled into being an afterthought in AI, other approaches to ML were in the limelight, including knowledge graphs, decision trees, genetic algorithms, similarity models, and other methods. In fact, during this period IBM's purpose-built Deep Blue computer defeated Garry Kasparov in a 1997 chess match, the first computer to do so, using a brute-force alpha-beta search algorithm (so-called Good Old-Fashioned AI [GOFAI]) rather than new-fangled deep learning approaches. Yet even this approach didn't go far, as some said the system wasn't really intelligent at all.
Yet the neural network story doesn't end here. In 1986, AI researcher Geoff Hinton, along with David Rumelhart and Ronald Williams, published a research paper entitled "Learning representations by back-propagating errors." In this paper, Hinton and his co-authors detailed how you can use many hidden layers of neurons to get around the problems faced by perceptrons. With sufficient data and computing power, these layers can be trained to identify specific features in the data sets they classify, and as a group they can learn nonlinear functions, a capability formalized in the universal approximation theorem. The approach works by propagating errors backward from higher layers of the network to lower ones (backpropagation, or backprop), expediting training. Now, if you have enough layers, enough data to train those layers, and sufficient computing power to calculate all the interconnections, you can train a neural network to identify and classify almost anything. Researcher Yann LeCun developed LeNet-5 at AT&T Bell Labs in 1998, recognizing handwritten digits on checks using an iteration of this approach known as Convolutional Neural Networks (CNNs), and researchers Yoshua Bengio and Jürgen Schmidhuber further advanced the field.
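As a rough illustration of why hidden layers change the picture, the sketch below (layer sizes, learning rate, and iteration count are arbitrary choices, not taken from the 1986 paper) trains a tiny two-layer network with backpropagation on the same XOR data a single-layer perceptron cannot learn:

import numpy as np

rng = np.random.default_rng(42)

# The XOR data that defeats a single-layer perceptron.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

def sigmoid(z):
    return 1 / (1 + np.exp(-z))

# One small hidden layer is enough to bend the decision boundary.
W1 = rng.normal(0, 1, (2, 4)); b1 = np.zeros(4)
W2 = rng.normal(0, 1, (4, 1)); b2 = np.zeros(1)

lr = 1.0
for _ in range(10000):
    # Forward pass through both layers.
    H = sigmoid(X @ W1 + b1)
    out = sigmoid(H @ W2 + b2)
    # Backward pass: propagate the output error down to the hidden layer.
    d_out = (out - y) * out * (1 - out)
    d_H = (d_out @ W2.T) * H * (1 - H)
    W2 -= lr * (H.T @ d_out); b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * (X.T @ d_H);   b1 -= lr * d_H.sum(axis=0)

print(np.round(out.ravel(), 2))  # approaches [0, 1, 1, 0]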
Yet, just as things go in AI, research halted when these early neural networks couldnt scale. Surprisingly very little development happened until 2006, when Hinton re-emerged onto the scene with the ideas of unsupervised pre-training and deep belief nets. The idea here is to have a simple two-layer network whose parameters are trained in an unsupervised way, and then stack new layers on top of it, just training that layers parameters. Repeat for dozens, hundreds, even thousands of layers. Eventually you get a deep network with many layers that can learn and understand something complex. This is what deep learning is all about: using lots of layers of trained neural nets to learn just about anything, at least within certain constraints.
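A compact sketch of that greedy, layer-by-layer idea follows, using a simple sigmoid autoencoder per layer; the layer sizes, data, and training details are illustrative assumptions, not Hinton's exact deep belief net procedure, which used restricted Boltzmann machines:

import numpy as np

rng = np.random.default_rng(0)

def train_autoencoder_layer(X, n_hidden, lr=0.1, epochs=200):
    # Learn an unsupervised encoding of X by reconstructing it.
    n_in = X.shape[1]
    W = rng.normal(0, 0.1, (n_in, n_hidden))   # encoder weights
    V = rng.normal(0, 0.1, (n_hidden, n_in))   # decoder weights
    for _ in range(epochs):
        H = 1 / (1 + np.exp(-X @ W))           # encode
        err = (H @ V) - X                      # reconstruction error
        dV = H.T @ err / len(X)
        dH = (err @ V.T) * H * (1 - H)
        dW = X.T @ dH / len(X)
        W -= lr * dW
        V -= lr * dV
    return W

# Greedy stacking: each new layer trains only its own parameters,
# on the representation produced by the layers beneath it.
X = rng.random((500, 20))                      # toy unlabeled data
weights, rep = [], X
for n_hidden in (16, 12, 8):
    W = train_autoencoder_layer(rep, n_hidden)
    weights.append(W)
    rep = 1 / (1 + np.exp(-rep @ W))           # feed the next layer
print([w.shape for w in weights])              # (20,16), (16,12), (12,8)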
In 2009, Stanford researcher Fei-Fei Li and her team released ImageNet, a large database of millions of labeled images. The images were labeled with a hierarchy of classifications, from broad categories such as animal or vehicle down to very granular levels, such as husky or trimaran. ImageNet was paired with an annual competition, the ImageNet Large Scale Visual Recognition Challenge (ILSVRC), to see which computer vision system had the lowest classification and recognition error. In 2012, Alex Krizhevsky, Ilya Sutskever, and Geoff Hinton submitted their AlexNet entry, which had almost half the error rate of the next-best entry. What made their approach win was that they moved from ordinary computers with CPUs to specialized graphics processing units (GPUs) that could train much larger models in reasonable amounts of time. They also introduced now-standard deep learning methods such as dropout, which reduces a problem called overfitting (when the network is trained too tightly on the example data and can't generalize to broader data), and the rectified linear unit (ReLU) activation to speed training. After their competition success, it seems everyone took notice, and deep learning was off to the races.
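For reference, both tricks are only a few lines each. Here is a minimal sketch (the shapes and dropout rate are arbitrary, and this is the standard "inverted dropout" formulation rather than AlexNet's exact code):

import numpy as np

rng = np.random.default_rng(0)

def relu(z):
    # Rectified linear unit: cheap to compute, and its gradient does not
    # saturate for positive inputs, which speeds up training.
    return np.maximum(0, z)

def dropout(h, p=0.5, training=True):
    # Randomly silence a fraction p of activations during training so the
    # network cannot over-rely on any single feature; scale the survivors
    # so the expected activation stays the same at test time.
    if not training:
        return h
    mask = rng.random(h.shape) >= p
    return h * mask / (1 - p)

h = rng.normal(size=(2, 8))     # a toy batch of activations
print(dropout(relu(h), p=0.5))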
Deep Learning's Shortcomings
The fuel that keeps the deep learning fires roaring is data and compute power. Specifically, large volumes of well-labeled data are needed to train deep learning networks. The more layers, the greater the learning power, but to train those layers you need data that is already well labeled. Since deep neural networks are primarily a mass of calculations that all have to be done at the same time, you need a lot of raw computing power, and specifically numerical computing power. Imagine you're tuning a million knobs at the same time to find the optimal combination that will make the system learn, based on millions of pieces of data being fed into the system. This is why neural networks weren't practical in the 1950s but are today: we finally have lots of data and lots of computing power to handle it.
Deep learning is being applied successfully in a wide range of situations, such as natural language processing, computer vision, machine translation, bioinformatics, and gaming: applications where classification, pattern matching, and this automatically tuned deep neural network approach work well. However, these strengths come with a number of disadvantages.
The most notable of these disadvantages is that, since deep learning networks consist of many layers, each with many interconnected nodes configured with different weights and other parameters, there's no way to inspect the network and understand how any particular decision, clustering, or classification is actually made. It's a black box, which means deep learning networks are inherently unexplainable. As many have written on the topic of Explainable AI (XAI), systems used to make decisions of significance need explainability to satisfy issues of trust, compliance, verifiability, and understandability. While DARPA and others are working on ways to explain deep learning neural networks, the lack of explainability remains a significant drawback for many.
The second disadvantage is that deep learning networks are really good at classification and clustering of information, but not very good at other decision-making or learning scenarios. Not every learning situation is a matter of classifying something into a category or grouping information into a cluster. Sometimes you have to deduce what to do based on what you've learned before. Deduction and reasoning are not a forte of deep learning networks.
As mentioned earlier, deep learning is also very data- and resource-hungry. One measure of a neural network's complexity is the number of parameters that need to be learned and tuned; for deep learning networks, there can be hundreds of millions of them. Training a model requires a significant amount of data to adjust these parameters. For example, a speech recognition neural net often requires terabytes of clean, labeled data to train on. The lack of a sufficiently large, clean, labeled data set can hinder the development of a deep neural net for a given problem domain. And even if you have the data, you need to crunch through it to generate the model, which takes a significant amount of time and processing power.
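Some back-of-the-envelope arithmetic shows how quickly parameters pile up. The layer sizes below are hypothetical, though in the range of classic image classifiers:

# Each fully connected layer learns (inputs + 1 bias) * outputs parameters.
layers = [224 * 224 * 3, 4096, 4096, 1000]   # hypothetical image classifier
params = sum((n_in + 1) * n_out
             for n_in, n_out in zip(layers, layers[1:]))
print(f"{params:,}")   # roughly 640 million, from just three dense layers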
Another challenge of deep learning is that the models produced are very specific to a problem domain. If a model is trained on a certain dataset of cats, it will only recognize those cats; it can't be used to generalize about animals or to identify non-cats. While this is not a problem unique to deep learning approaches, it can be particularly troublesome when combined with the overfitting problem mentioned above. Deep learning neural nets can be so tightly fitted to the training data that even small perturbations in the input images lead to wildly inaccurate classifications. There are well-known examples of turtles being misrecognized as rifles, or polar bears being misrecognized as other animals, due to just small changes in the image data. Clearly, if you're using such a network in mission-critical situations, those mistakes would be significant.
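The mechanics behind those misclassifications are surprisingly simple. The sketch below illustrates the idea behind the fast gradient sign method (FGSM) on a toy linear scorer standing in for a deep network; the weights, input, and epsilon are all made-up values for illustration:

import numpy as np

rng = np.random.default_rng(1)

# A toy "trained" linear scorer: class 1 if the score x @ w is positive.
w = rng.normal(size=100)
x = rng.normal(size=100)
clean_score = x @ w

# FGSM idea: nudge every input dimension slightly in the direction that
# increases the loss. For a linear score that direction is -sign(w) when
# attacking class 1, +sign(w) when attacking class 0.
eps = 0.2
direction = -np.sign(w) if clean_score > 0 else np.sign(w)
x_adv = x + eps * direction

print("clean:", round(clean_score, 2), "adversarial:", round(x_adv @ w, 2))
# A per-dimension change of only 0.2 shifts the score by eps * sum(|w|),
# typically enough to flip the predicted class.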
Machine Learning is not (just) Deep Learning
Enterprises looking to use cognitive technologies in their business need to look at the whole picture. Machine learning is not just one approach, but rather a collection of different approaches applicable in different scenarios. Some machine learning algorithms are very simple, using small amounts of data and an understandable logic or deduction path well suited to particular situations, while others are very complex and use lots of data and processing power to handle more complicated situations. The key thing to realize is that deep learning isn't all of machine learning, let alone all of AI. Even Geoff Hinton, the "Einstein" of deep learning, is starting to rethink core elements of the approach and its limitations.
The key for organizations is to understand which machine learning methods are most viable for which problem areas, and how to plan, develop, deploy, and manage each approach in practice. Since enterprise AI is still gaining adoption, especially for these more advanced cognitive approaches, best practices for employing cognitive technologies successfully are still maturing.