AI: The pattern is not in the data, it’s in the machine – ZDNet
A neural network transforms input, the circles on the left, to output, on the right. That transformation happens through the weights, center, which we often mistake for patterns in the data itself.
It's a commonplace of artificial intelligence to say that machine learning, which depends on vast amounts of data, functions by finding patterns in data.
The phrase "finding patterns in data" has in fact been a staple of fields such as data mining and knowledge discovery for years, and it has been assumed that machine learning, and especially its deep learning variant, simply continues that tradition.
AI programs do, indeed, result in patterns, but, just as "The fault, dear Brutus, lies not in our stars but in ourselves," the fact of those patterns is not something in the data, it is what the AI program makes of the data.
Almost all machine learning models function via a learning rule that changes the so-called weights, also known as parameters, of the program as the program is fed examples of data, and, possibly, labels attached to that data. It is the value of the weights that counts as "knowing" or "understanding."
The pattern that is being found is really a pattern of how weights change. The weights simulate how real neurons are believed to "fire," following a principle formulated by psychologist Donald O. Hebb that became known as Hebbian learning: the idea that "neurons that fire together, wire together."
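The weight-change idea can be made concrete with a minimal sketch of a Hebbian update. The sizes, values, and function names here are illustrative only; real training rules such as backpropagation differ in detail, but the point is the same: what changes is the weights, not the data.

```python
import numpy as np

# A minimal, illustrative Hebbian update: "neurons that fire together,
# wire together." Not any production library's API; everything here is
# a toy stand-in for a real learning rule.

rng = np.random.default_rng(0)
n_inputs, n_outputs = 4, 2
learning_rate = 0.1

# Small random starting weights: the "knowledge" lives here.
weights = rng.normal(scale=0.1, size=(n_outputs, n_inputs))

def hebbian_step(weights, x):
    """One update: compute output y = Wx, then strengthen co-active pairs."""
    y = weights @ x
    return weights + learning_rate * np.outer(y, x)

# Feeding the program examples changes the weights, not the data:
# the "pattern" that emerges is a pattern of weight changes.
before = weights.copy()
for _ in range(10):
    x = rng.random(n_inputs)
    weights = hebbian_step(weights, x)
```

After a few examples the weight matrix has moved away from its starting values; that drift, summed over millions of examples, is all that "learning" denotes here.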
Also: AI in sixty seconds
It is the pattern of weight changes that is the model for learning and understanding in machine learning, something the founders of deep learning emphasized. As James McClelland, David Rumelhart, and Geoffrey Hinton wrote almost forty years ago in one of the foundational texts of deep learning, Parallel Distributed Processing, Volume I:
What is stored is the connection strengths between units that allow these patterns to be created [...] If the knowledge is the strengths of the connections, learning must be a matter of finding the right connection strengths so that the right patterns of activation will be produced under the right circumstances.
McClelland, Rumelhart, and Hinton were writing for a select audience, cognitive psychologists and computer scientists, and they were writing in a very different age, an age when people didn't make easy assumptions that anything a computer did represented "knowledge." They were laboring at a time when AI programs couldn't do much at all, and they were mainly concerned with how to produce a computation, any computation, from a fairly limited arrangement of transistors.
Then, starting with the rise of powerful GPU chips some sixteen years ago, computers really did begin to produce interesting behavior, capped off by the landmark 2012 ImageNet performance of Hinton and his graduate students, which marked deep learning's coming of age.
As a consequence of the new computer achievements, the popular mind started to build all kinds of mythology around AI and deep learning. There was a rush of really bad headlines likening the technology to super-human performance.
Also: Why is AI reporting so bad?
Today's conception of AI has obscured what McClelland, Rumelhart, and Hinton focused on, namely, the machine, and how it "creates" patterns, as they put it. They were very intimately familiar with the mechanics of weights constructing a pattern as a response to what was, in the input, merely data.
Why does all that matter? If the machine is the creator of patterns, then the conclusions people draw about AI are probably mostly wrong. Most people assume a computer program is perceiving a pattern in the world, which can lead to people deferring judgment to the machine. If it produces results, the thinking goes, the computer must be seeing something humans don't.
Except that a machine that constructs patterns isn't explicitly seeing anything. It's constructing a pattern. That means what is "seen" or "known" is not the same as the colloquial, everyday sense in which humans speak of themselves as knowing things.
Instead of starting from the anthropocentric question, What does the machine know? it's best to start from a more precise question, What is this program representing in the connections of its weights?
Depending on the task, the answer to that question takes many forms.
Consider computer vision. The convolutional neural network that underlies machine learning programs for image recognition and other visual perception is composed of a collection of weights that are applied to the pixel values of a digital image.
The pixel grid is already an imposition of a 2-D coordinate system on the real world. Provided with the machine-friendly abstraction of the coordinate grid, a neural net's task of representation boils down to matching the strength of collections of pixels to a label that has been imposed, such as "bird" or "blue jay."
In a scene containing a bird, or specifically a blue jay, many things may be present, including clouds, sunshine, and passersby. But the scene in its entirety is not the thing. What matters to the program is the collection of pixels most likely to produce an appropriate label. The pattern, in other words, is a reductive act of focus and selection inherent in the activation of neural net connections.
You might say, a program of this kind doesn't "see" or "perceive" so much as it filters.
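A single convolutional filter illustrates that filtering quite literally: it does not see a scene, it scores every patch of pixels against one template. The kernel values below are a hand-written vertical-edge template for illustration, not learned weights, and the helper function is a toy stand-in for a real convolution layer.

```python
import numpy as np

# Toy 2-D convolution (valid mode, cross-correlation) over a grayscale
# image. Illustrative only: real vision models stack many learned
# filters; this shows the single act of "filtering."

def conv2d(image, kernel):
    """Slide the kernel over the image, scoring each patch."""
    kh, kw = kernel.shape
    h, w = image.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

# A vertical-edge template: responds where bright meets dark.
kernel = np.array([[1., 0., -1.],
                   [1., 0., -1.],
                   [1., 0., -1.]])

# A 6x6 image whose left half is bright and right half is dark.
image = np.ones((6, 6))
image[:, 3:] = 0.0

response = conv2d(image, kernel)
```

The response is near zero over the uniform regions and peaks only along the edge: the filter has selected one feature and discarded everything else in the "scene."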
Also: A new experiment: Does AI really know cats or dogs -- or anything?
The same is true in games, where AI has mastered chess and poker. In the full-information game of chess, for DeepMind's AlphaZero program, the machine learning task boils down to crafting a probability score at each moment of whether a potential next move will ultimately lead to a win, loss, or draw.
Because the number of potential future game board configurations cannot be calculated even by the fastest computers, the computer's weights cut short the search for moves by doing what you might call summarizing. The program summarizes the likelihood of success if one were to pursue several moves in a given direction, and then compares that summary to the summary of potential moves in another direction.
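The comparison-of-summaries idea can be sketched with a toy game. This is emphatically not AlphaZero's actual method (which combines a learned value network with Monte Carlo tree search); the move names, the `move_quality` numbers, and the sampling function are all invented for illustration.

```python
import random

# Toy "summarizing": instead of searching every future position, score
# each candidate move by averaging sampled outcomes, then compare the
# summaries. Illustrative sketch only, not AlphaZero's algorithm.

random.seed(0)

def sample_outcome(move_quality):
    """Pretend rollout: 1 (win), 0 (draw), or -1 (loss)."""
    return random.choices([1, 0, -1],
                          weights=[move_quality, 1, 1 - move_quality])[0]

def summarize(move_quality, n_rollouts=1000):
    """The 'summary' that stands in for exhaustive search."""
    return sum(sample_outcome(move_quality)
               for _ in range(n_rollouts)) / n_rollouts

# Hypothetical candidate moves with made-up underlying quality.
candidate_moves = {"e4": 0.6, "d4": 0.5, "h4": 0.1}
summaries = {m: summarize(q) for m, q in candidate_moves.items()}
best_move = max(summaries, key=summaries.get)
```

The program never "means" anything by e4 or h4; it only ranks numbers that compress many imagined futures into one score per move.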
Whereas the state of the board at any moment (the position of the pieces, and which pieces remain) might "mean" something to a human chess grandmaster, it's not clear the term "mean" has any meaning for DeepMind's AlphaZero for such a summarizing task.
A similar summarizing task is performed by the Pluribus program that in 2019 conquered the hardest form of poker, No-limit Texas hold'em. That game is even more complex in that it has hidden information, the players' face-down cards, and additional "stochastic" elements of bluffing. But the representation is, again, a summary of likelihoods at each turn.
Even in human language, what's in the weights is different from what the casual observer might suppose. GPT-3, the top language program from OpenAI, can produce strikingly human-like output in sentences and paragraphs.
Does the program "know" language? Its weights hold a representation of the likelihood of how individual words and even whole strings of text are found in sequence with other words and strings.
You could call that function of a neural net a summary similar to AlphaZero or Pluribus, given that the problem is rather like chess or poker. But the possible states to be represented as connections in the neural net are not just vast, they are infinite given the infinite composability of language.
On the other hand, given that the output of a language program such as GPT-3, a sentence, is a fuzzy answer rather than a discrete score, the "right answer" is somewhat less demanding than the win, lose or draw of chess or poker. You could also call this function of GPT-3 and similar programs an "indexing" or an "inventory" of things in their weights.
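The sequence-likelihood representation described above can be sketched with a toy bigram model over a tiny made-up corpus. GPT-3's learned representations are vastly richer than word-pair counts; this only illustrates what "likelihood of words found in sequence with other words" means at its simplest.

```python
from collections import Counter, defaultdict

# Toy bigram "inventory": count which word follows which, then turn
# the counts into probabilities. Illustrative corpus and helper names;
# nothing here is GPT-3's actual machinery.

corpus = "the cat sat on the mat the cat ate the fish".split()

bigram_counts = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    bigram_counts[prev][nxt] += 1

def next_word_probs(word):
    """Probability of each continuation, from raw sequence counts."""
    counts = bigram_counts[word]
    total = sum(counts.values())
    return {w: c / total for w, c in counts.items()}

# After "the", this corpus makes "cat" the most likely continuation.
probs = next_word_probs("the")
```

Such a table is an inventory of co-occurrence, not an understanding of cats or mats; scaling the inventory up does not by itself change what kind of thing it is.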
Also: What is GPT-3? Everything your business needs to know about OpenAI's breakthrough AI language program
Do humans have a similar kind of inventory or index of language? There doesn't seem to be any indication of it so far in neuroscience. Likewise, in the expression "to tell the dancer from the dance," does GPT-3 spot the multiple levels of significance in the phrase, or the associations? It's not clear such a question even has a meaning in the context of a computer program.
In each of these cases (chess board, cards, word strings) the data are what they are: a fashioned substrate divided in various ways, a set of plastic rectangular paper products, a clustering of sounds or shapes. Whether such inventions "mean" anything, collectively, to the computer, is only a way of saying that a computer becomes tuned in response, for a purpose.
The things such data prompt in the machine (filters, summarizations, indices, inventories, or however you want to characterize those representations) are never the thing in itself. They are inventions.
Also: DeepMind: Why is AI so good at language? It's something in language itself
But, you may say, people see snowflakes and see their differences, and also catalog those differences, if they have a mind to. True, human activity has always sought to find patterns, via various means. Direct observation is one of the simplest means, and in a sense, what is being done in a neural network is a kind of extension of that.
You could say the neural network reveals what was always true of human activity for millennia: that a pattern is something imposed on the world rather than a thing in the world. In the world, snowflakes have form, but that form is only a pattern to a person who collects and indexes them and categorizes them. It is a construction, in other words.
The activity of creating patterns will increase dramatically as more and more programs are unleashed on the data of the world, and their weights are tuned to form connections that we hope create useful representations. Such representations may be incredibly useful. They may someday cure cancer. It is worth remembering, however, that the patterns they reveal are not out there in the world; they are in the eye of the perceiver.
Also: DeepMind's 'Gato' is mediocre, so why did they build it?