Artificial Intelligence | Encyclopedia.com
Artificial Intelligence (AI) tries to enable computers to do the things that minds can do. These things include seeing pathways, picking things up, learning categories from experience, and using emotions to schedule one's actions, which many animals can do, too. Thus, human intelligence is not the sole focus of AI. Even terrestrial psychology is not the sole focus, because some people use AI to explore the range of all possible minds.
There are four major AI methodologies: symbolic AI, connectionism, situated robotics, and evolutionary programming (Russell and Norvig 2003). AI artifacts are correspondingly varied. They include both programs (including neural networks) and robots, each of which may be either designed in detail or largely evolved. The field is closely related to artificial life (A-Life), which aims to throw light on biology much as some AI aims to throw light on psychology.
AI researchers are inspired by two different intellectual motivations, and while some people have both, most favor one over the other. On the one hand, many AI researchers seek solutions to technological problems, not caring whether these resemble human (or animal) psychology. Nonetheless, they often make use of ideas about how people do things. Programs designed to aid/replace human experts, for example, have been hugely influenced by knowledge engineering, in which programmers try to discover what, and how, human experts are thinking when they do the tasks being modeled. But if these technological AI workers can find a nonhuman method, or even a mere trick (a kludge) to increase the power of their program, they will gladly use it.
Technological AI has been hugely successful. It has entered administrative, financial, medical, and manufacturing practice at countless different points. It is largely invisible to the ordinary person, lying behind some deceptively simple human-computer interface or being hidden away inside a car or refrigerator. Many procedures taken for granted within current computer science were originated within AI (pattern-recognition and image-processing, for example).
On the other hand, AI researchers may have a scientific aim. They may want their programs or robots to help people understand how human (or animal) minds work. They may even ask how intelligence in general is possible, exploring the space of possible minds. The scientific approach, psychological AI, is the more relevant for philosophers (Boden 1990, Copeland 1993, Sloman 2002). It is also central to cognitive science, and to computationalism.
Considered as a whole, psychological AI has been less obviously successful than technological AI. This is partly because the tasks it tries to achieve are often more difficult. In addition, it is less clear, for philosophical as well as empirical reasons, what should be counted as success.
Symbolic AI is also known as classical AI and as GOFAI, short for John Haugeland's label "Good Old-Fashioned AI" (1985). It models mental processes as the step-by-step information processing of digital computers. Thinking is seen as symbol-manipulation, as (formal) computation over (formal) representations. Some GOFAI programs are explicitly hierarchical, consisting of procedures and subroutines specified at different levels. These define a hierarchically structured search-space, which may be astronomical in size. Rules of thumb, or heuristics, are typically provided to guide the search, by excluding certain areas of possibility and leading the program to focus on others. The earliest AI programs were like this, but the later methodology of object-oriented programming is similar.
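The idea of heuristically guided search can be sketched in a few lines of code. This is a toy best-first search, not any particular GOFAI system; the graph, the heuristic values, and the function names are invented purely for illustration.

```python
import heapq

def best_first(start, goal, neighbors, h):
    """Always expand the state that the heuristic ranks most promising."""
    frontier = [(h(start), start, [start])]   # priority queue ordered by h
    seen = set()
    while frontier:
        _, state, path = heapq.heappop(frontier)
        if state == goal:
            return path
        if state in seen:
            continue
        seen.add(state)
        for nxt in neighbors(state):
            heapq.heappush(frontier, (h(nxt), nxt, path + [nxt]))
    return None

# Invented four-state search space and heuristic estimates of distance to D.
graph = {"A": ["B", "C"], "B": ["D"], "C": ["D"], "D": []}
estimate = {"A": 3, "B": 2, "C": 1, "D": 0}
path = best_first("A", "D", lambda s: graph[s], lambda s: estimate[s])
print(path)   # ['A', 'C', 'D'] -- the heuristic steers the search through C
```

The heuristic prunes nothing outright here, but it orders exploration so that only the promising branch is ever expanded, which is the essential point in spaces of astronomical size.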
Certain symbolic programs, namely production systems, are implicitly hierarchical. These consist of sets of logically separate if-then (condition-action) rules, or productions, defining what actions should be taken in response to specific conditions. An action or condition may be unitary or complex, in the latter case being defined by a conjunction of several mini-actions or mini-conditions. And a production may function wholly within computer memory (to set a goal, for instance, or to record a partial parsing) or outside it (via input/output devices such as cameras or keyboards).
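As a deliberately tiny concrete illustration, a forward-chaining production system can be sketched as follows. The facts and rules are invented, and real production systems add conflict-resolution strategies that this sketch omits.

```python
def run_productions(memory, productions, max_cycles=10):
    """Fire any rule whose conditions all hold in working memory,
    adding its action-facts, until no rule is applicable."""
    for _ in range(max_cycles):
        fired = False
        for conditions, actions in productions:
            # A rule fires if its conditions hold and its actions are new.
            if conditions <= memory and not actions <= memory:
                memory |= actions
                fired = True
        if not fired:           # quiescence: no production applicable
            break
    return memory

# Two toy productions: one sets an internal goal, one acts on it.
rules = [
    ({"hungry", "has-food"}, {"goal:eat"}),
    ({"goal:eat"}, {"eating"}),
]
print(run_productions({"hungry", "has-food"}, rules))
```

Note that the first rule acts wholly within memory (setting a goal), exactly as described above, while a robot's version of the second rule might instead drive an output device.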
Another symbolic technique, widely used in natural language processing (NLP) programs, involves augmented transition networks, or ATNs. These avoid explicit backtracking by using guidance at each decision-point to decide which question to ask and/or which path to take.
GOFAI methodology is used for developing a wide variety of language-using programs and problem-solvers. The more precisely and explicitly a problem-domain can be defined, the more likely it is that a symbolic program can be used to good effect. Often, folk-psychological categories and/or specific propositions are explicitly represented in the system. This type of AI, and the forms of computational psychology based on it, is defended by the philosopher Jerry Fodor (1988).
GOFAI models (whether technological or scientific) include robots, planning programs, theorem-provers, learning programs, question-answerers, data-mining systems, machine translators, expert systems of many different kinds, chess players, semantic networks, and analogy machines. In addition, a host of software agents (specialist mini-programs that can aid a human being to solve a problem) are implemented in this way. And an increasingly important area of research is distributed AI, in which cooperation occurs between many relatively simple individuals, which may be GOFAI agents (or neural-network units, or situated robots).
The symbolic approach is used also in modeling creativity in various domains (Boden 2004, Holland et al. 1986). These include musical composition and expressive performance, analogical thinking, line-drawing, painting, architectural design, storytelling (rhetoric as well as plot), mathematics, and scientific discovery. In general, the relevant aesthetic/theoretical style must be specified clearly, so as to define a space of possibilities that can be fruitfully explored by the computer. To what extent the exploratory procedures can plausibly be seen as similar to those used by people varies from case to case.
Connectionist systems, which became widely visible in the mid-1980s, are different. They compute not by following step-by-step programs but by using large numbers of locally connected (associative) computational units, each one of which is simple. The processing is bottom-up rather than top-down.
Connectionism is sometimes said to be opposed to AI, although it has been part of AI since its beginnings in the 1940s (McCulloch and Pitts 1943, Pitts and McCulloch 1947). What connectionism is opposed to, rather, is symbolic AI. Yet even here, "opposed" is not quite the right word, since hybrid systems exist that combine both methodologies. Moreover, GOFAI devotees such as Fodor see connectionism as compatible with GOFAI, claiming that it concerns how symbolic computation can be implemented (Fodor and Pylyshyn 1988).
Two largely separate AI communities began to emerge in the late 1950s (Boden forthcoming). The symbolic school focused on logic and Turing-computation, whereas the connectionist school focused on associative, and often probabilistic, neural networks. (Most connectionist systems are connectionist virtual machines, implemented in von Neumann computers; only a few are built in dedicated connectionist hardware.) Many people remained sympathetic to both schools. But the two methodologies are so different in practice that most hands-on AI researchers use either one or the other.
There are different types of connectionist systems. Most philosophical interest, however, has focused on networks that do parallel distributed processing, or PDP (Clark 1989, Rumelhart and McClelland 1986). In essence, PDP systems are pattern recognizers. Unlike brittle GOFAI programs, which often produce nonsense if provided with incomplete or part-contradictory information, they show graceful degradation. That is, the input patterns can be recognized (up to a point) even if they are imperfect.
A PDP network is made up of subsymbolic units, whose semantic significance cannot easily be expressed in terms of familiar semantic content, still less propositions. (Some GOFAI programs employ subsymbolic units, but most do not.) That is, no single unit codes for a recognizable concept, such as dog or cat. These concepts are represented, rather, by the pattern of activity distributed over the entire network.
Because the representation is not stored in a single unit but is distributed over the whole network, PDP systems can tolerate imperfect data. (Some GOFAI systems can do so too, but only if the imperfections are specifically foreseen and provided for by the programmer.) Moreover, a single subsymbolic unit may mean one thing in one input-context and another in another. What the network as a whole can represent depends on what significance the designer has decided to assign to the input-units. For instance, some input-units are sensitive to light (or to coded information about light), others to sound, others to triads of phonological categories and so on.
Most PDP systems can learn. In such cases, the weights on the links of PDP units in the hidden layer (between the input-layer and the output-layer) can be altered by experience, so that the network can learn a pattern merely by being shown many examples of it. (A GOFAI learning-program, in effect, has to be told what to look for beforehand, and how.) Broadly, the weight on an excitatory link is increased by every coactivation of the two units concerned: cells that fire together, wire together.
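The coactivation rule can be illustrated with a minimal Hebbian weight update. This is a toy sketch (two units, an invented learning rate), showing only the weight-increase idea; actual PDP learning typically also uses error-driven rules such as backpropagation.

```python
def hebbian_update(weights, activations, rate=0.1):
    """Strengthen the link between every pair of units in proportion
    to their coactivation: cells that fire together, wire together."""
    n = len(activations)
    for i in range(n):
        for j in range(n):
            if i != j:
                weights[i][j] += rate * activations[i] * activations[j]
    return weights

w = [[0.0, 0.0], [0.0, 0.0]]        # weights between two units, initially zero
w = hebbian_update(w, [1.0, 1.0])   # both units active: the link strengthens
w = hebbian_update(w, [1.0, 0.0])   # only one active: the link is unchanged
print(w[0][1])   # 0.1
```

Repeated presentation of a pattern thus gradually entrenches it in the weights, which is why such a network can learn merely by being shown many examples.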
These two AI approaches have complementary strengths and weaknesses. For instance, symbolic AI is better at modeling hierarchy and strong constraints, whereas connectionism copes better with pattern recognition, especially if many conflicting (and perhaps incomplete) constraints are relevant. Despite having fervent philosophical champions on both sides, neither methodology is adequate for all of the tasks dealt with by AI scientists. Indeed, much research in connectionism has aimed to restore the lost logical strengths of GOFAI to neural networks, with only limited success by the beginning of the twenty-first century.
Another, and more recently popular, AI methodology is situated robotics (Brooks 1991). Like connectionism, this was first explored in the 1950s. Situated robots are described by their designers as autonomous systems embedded in their environment (Heidegger is sometimes cited). Instead of planning their actions, as classical robots do, situated robots react directly to environmental cues. One might say that they are embodied production systems, whose if-then rules are engineered rather than programmed, and whose conditions lie in the external environment, not inside computer memory. Although, unlike GOFAI robots, they contain no objective representations of the world, some of them do construct temporary, subject-centered (deictic) representations.
The main aim of situated roboticists in the mid-1980s, such as Rodney Brooks, was to solve/avoid the frame problem that had bedeviled GOFAI (Pylyshyn 1987). GOFAI planners and robots had to anticipate all possible contingencies, including the side effects of actions taken by the system itself, if they were not to be defeated by unexpected (perhaps seemingly irrelevant) events. This was one of the reasons given by Hubert Dreyfus (1992) in arguing that GOFAI could not possibly succeed: Intelligence, he said, is unformalizable. Several ways of implementing nonmonotonic logics in GOFAI were suggested, allowing a conclusion previously drawn by faultless reasoning to be negated by new evidence. But because the general nature of that new evidence had to be foreseen, the frame problem persisted.
Brooks argued that reasoning should not be employed at all: the system should simply react appropriately, in a reflex fashion, to specific environmental cues. This, he said, is what insects do, and they are highly successful creatures. (Soon, situated robotics was being used, for instance, to model the six-legged movement of cockroaches.) Some people joked that AI stood for artificial insects, not artificial intelligence. But the joke carried a sting: Many argued that much human thinking needs objective representations, so the scope for situated robotics was strictly limited.
In evolutionary programming, genetic algorithms (GAs) are used by a program to make random variations in its own rules. The initial rules, before evolution begins, either do not achieve the task in question or do so only inefficiently; sometimes, they are even chosen at random.
The variations allowed are broadly modeled on biological mutations and crossovers, although more unnatural types are sometimes employed. The most successful rules are automatically selected, and then varied again. This is more easily said than done: The breakthrough in GA methodology occurred when John Holland (1992) defined an automatic procedure for recognizing which rules, out of a large and simultaneously active set, were those most responsible for whatever level of success the evolving system had just achieved.
Selection is done by some specific fitness criterion, predefined in light of the task the programmer has in mind. Unlike GOFAI systems, a GA program contains no explicit representation of what it is required to do: its task is implicit in the fitness criterion. (Similarly, living things have evolved to do what they do without knowing what that is.) After many generations, the GA system may be well-adapted to its task. For certain types of tasks, it can even find the optimal solution.
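A minimal genetic algorithm in the sense just described might look like this. The task (evolving a bit-string of all ones), the mutation rate, and the truncation selection are illustrative choices only; note that the goal appears nowhere except in the fitness criterion. Crossover, central to Holland's work, is omitted for brevity.

```python
import random

random.seed(0)   # fixed seed, so the toy run is reproducible

def fitness(genome):
    return sum(genome)   # the predefined fitness criterion: count the ones

def mutate(genome, rate=0.05):
    """Flip each bit with small probability (a crude model of mutation)."""
    return [1 - g if random.random() < rate else g for g in genome]

def evolve(length=20, pop_size=30, generations=60):
    pop = [[random.randint(0, 1) for _ in range(length)]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        survivors = pop[: pop_size // 2]            # keep the fittest half
        pop = survivors + [mutate(random.choice(survivors))
                           for _ in survivors]      # refill with variants
    return max(pop, key=fitness)

best = evolve()
print(fitness(best))   # high fitness; 20 would be optimal
```

The program never inspects the all-ones target directly; selection by fitness alone drives the population toward it, generation by generation.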
This AI method is used to develop both symbolic and connectionist AI systems. And it is applied both to abstract problem-solving (mathematical optimization, for instance, or the synthesis of new pharmaceutical molecules) and to evolutionary robotics, wherein the brain and/or sensorimotor anatomy of robots evolve within a specific task-environment.
It is also used for artistic purposes, in the composition of music or the generation of new visual forms. In these cases, evolution is usually interactive. That is, the variation is done automatically but the selection is done by a human being, who does not need to (and usually could not) define, or even name, the aesthetic fitness criteria being applied.
AI is a close cousin of A-Life (Boden 1996). This is a form of mathematical biology, which employs computer simulation and situated robotics to study the emergence of complexity in self-organizing, self-reproducing, adaptive systems. (A caveat: much as some AI is purely technological in aim, so is some A-Life; the research of most interest to philosophers is the scientifically oriented type.)
The key concepts of A-Life date back to the early 1950s. They originated in theoretical work on self-organizing systems of various kinds, including diffusion equations and cellular automata (by Alan Turing and John von Neumann respectively), and in early self-equilibrating machines and situated robots (built by W. Ross Ashby and W. Grey Walter). But A-Life did not flourish until the late 1980s, when computing power at last sufficed to explore these theoretical ideas in practice.
Much A-Life work focuses on specific biological phenomena, such as flocking, cooperation in ant colonies, or morphogenesis, from cell-differentiation to the formation of leopard spots or tiger stripes. But A-Life also studies general principles of self-organization in biology: evolution and coevolution, reproduction, and metabolism. In addition, it explores the nature of life as such: life as it could be, not merely life as it is.
A-Life workers do not all use the same methodology, but they do eschew the top-down methods of GOFAI. Situated and evolutionary robotics, and GA-generated neural networks, too, are prominent approaches within the field. But not all A-Life systems are evolutionary. Some demonstrate how a small number of fixed, and simple, rules can lead to self-organization of an apparently complex kind.
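The point about simple fixed rules is nicely illustrated by a one-dimensional cellular automaton, a device going back to von Neumann's work mentioned above. Rule 90 is chosen arbitrarily here; A-Life models vary widely, and this sketch shows only the bare principle.

```python
def step(cells):
    """Rule 90: each cell's next state is the XOR of its two neighbors.
    The row wraps around, so every cell has exactly two neighbors."""
    n = len(cells)
    return [cells[(i - 1) % n] ^ cells[(i + 1) % n] for i in range(n)]

cells = [0] * 31
cells[15] = 1                       # a single live cell in the middle
for _ in range(8):                  # print eight successive generations
    print("".join(".#"[c] for c in cells))
    cells = step(cells)
```

One local rule, applied everywhere in parallel, generates a nested triangular pattern no single cell "knows about": apparently complex global order from fixed, simple, local interactions.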
Many A-Lifers take pains to distance themselves from AI. But besides their close historical connections, AI and A-Life are philosophically related in virtue of the linkage between life and mind. It is known that psychological properties arise in living things, and some people argue (or assume) that they can arise only in living things. Accordingly, the whole of AI could be regarded as a subarea of A-Life. Indeed, some people argue that success in AI (even in technological AI) must await, and build on, success in A-Life.
Whichever of the two AI motivations (technological or psychological) is in question, the name of the field is misleading in three ways. First, the term intelligence is normally understood to cover only a subset of what AI workers are trying to do. Second, intelligence is often supposed to be distinct from emotion, so that AI is assumed to exclude work on that. And third, the name implies that a successful AI system would really be intelligent, a philosophically controversial claim that AI researchers do not have to endorse (though some do).
As for the first point, people do not normally regard vision or locomotion as examples of intelligence. Many people would say that speaking one's native language is not a case of intelligence either, except in comparison with nonhuman species; and common sense is sometimes contrasted with intelligence. The term is usually reserved for special cases of human thought that show exceptional creativity and subtlety, or which require many years of formal education. Medical diagnosis, scientific or legal reasoning, playing chess, and translating from one language to another are typically regarded as difficult, thus requiring intelligence. And these tasks were the main focus of research when AI began. Vision, for example, was assumed to be relatively straightforward, not least because many nonhuman animals have it too. It gradually became clear, however, that everyday capacities such as vision and locomotion are vastly more complex than had been supposed. The early definition of AI as "programming computers to do things that involve intelligence when done by people" was recognized as misleading, and eventually dropped.
Similarly, intelligence is often opposed to emotion. Many people assume that AI could never model that. However, crude examples of such models existed in the early 1960s, and emotion was recognized by a high priest of AI, Herbert Simon, as being essential to any complex intelligence. Later, research in the computational philosophy (and modeling) of affect showed that emotions have evolved as scheduling mechanisms for systems with many different, and potentially conflicting, purposes (Minsky 1985, and Web site). When AI began, it was difficult enough to get a program to follow one goal (with its subgoals) intelligently; anything more than that was essentially impossible. For this reason, among others, AI modeling of emotion was put on the back burner for about thirty years. By the 1990s, however, it had become a popular focus of AI research, and of neuroscience and philosophy too.
The third point raises the difficult question, which many AI practitioners leave open or even ignore, of whether intentionality can properly be ascribed to any conceivable program/robot (Newell 1980, Dennett 1987, Harnad 1991).
Could some NLP programs really understand the sentences they parse and the words they translate? Or can a visuo-motor circuit evolved within a robot's neural-network brain truly be said to represent the environmental feature to which it responds? If a program, in practice, could pass the Turing Test, could it truly be said to think? More generally, does it even make sense to say that AI may one day achieve artificially produced (but nonetheless genuine) intelligence?
For the many people in the field who adopt some form of functionalism, the answer in each case is: In principle, yes. This applies for those who favor the physical symbol system hypothesis or intentional systems theory. Others adopt connectionist analyses of concepts, and of their development from nonconceptual content. Functionalism is criticized by many writers expert in neuroscience, who claim that its core thesis of multiple realizability is mistaken. Others criticize it at an even deeper level: a growing minority (especially in A-Life) reject neo-Cartesian approaches in favor of philosophies of embodiment, such as phenomenology or autopoiesis.
Part of the reason why such questions are so difficult is that philosophers disagree about what intentionality is, even in the human case. Practitioners of psychological AI generally believe that semantic content, or intentionality, can be naturalized. But they differ about how this can be done.
For instance, a few practitioners of AI regard computation and intentionality as metaphysically inseparable (Smith 1996). Others ascribe meaning only to computations with certain causal consequences and provenance, or grounding. John Searle argues that AI cannot capture intentionality, because, at base, it is concerned with the formal manipulation of formal symbols (Searle 1980). And for those who accept some form of evolutionary semantics, only evolutionary robots could embody meaning.
See also Computationalism; Machine Intelligence.
Boden, Margaret A. The Creative Mind: Myths and Mechanisms. 2nd ed. London: Routledge, 2004.
Boden, Margaret A. Mind as Machine: A History of Cognitive Science. Oxford: Oxford University Press, forthcoming. See especially chapters 4, 7.i, 10-13, and 14.
Boden, Margaret A., ed. The Philosophy of Artificial Intelligence. Oxford: Oxford University Press, 1990.
Boden, Margaret A., ed. The Philosophy of Artificial Life. Oxford: Oxford University Press, 1996.
Brooks, Rodney A. "Intelligence without Representation." Artificial Intelligence 47 (1991): 139-159.
Clark, Andy J. Microcognition: Philosophy, Cognitive Science, and Parallel Distributed Processing. Cambridge, MA: MIT Press, 1989.
Copeland, B. Jack. Artificial Intelligence: A Philosophical Introduction. Oxford: Blackwell, 1993.
Dennett, Daniel C. The Intentional Stance. Cambridge, MA: MIT Press, 1987.
Dreyfus, Hubert L. What Computers Still Can't Do: A Critique of Artificial Reason. Cambridge, MA: MIT Press, 1992.
Fodor, Jerry A., and Zenon W. Pylyshyn. "Connectionism and Cognitive Architecture: A Critical Analysis." Cognition 28 (1988): 3-71.
Harnad, Stevan. "Other Bodies, Other Minds: A Machine Incarnation of an Old Philosophical Problem." Minds and Machines 1 (1991): 43-54.
Haugeland, John. Artificial Intelligence: The Very Idea. Cambridge, MA: MIT Press, 1985.
Holland, John H. Adaptation in Natural and Artificial Systems: An Introductory Analysis with Applications to Biology, Control, and Artificial Intelligence. Cambridge, MA: MIT Press, 1992.
Holland, John H., Keith J. Holyoak, Richard E. Nisbett, and Paul R. Thagard. Induction: Processes of Inference, Learning, and Discovery. Cambridge, MA: MIT Press, 1986.
McCulloch, Warren S., and Walter H. Pitts. "A Logical Calculus of the Ideas Immanent in Nervous Activity." In The Philosophy of Artificial Intelligence, edited by Margaret A. Boden. Oxford: Oxford University Press, 1990. First published in 1943.
Minsky, Marvin L. The Emotion Machine. Available from http://web.media.mit.edu/~minsky/E1/eb1.html. Web site only.
Minsky, Marvin L. The Society of Mind. New York: Simon & Schuster, 1985.
Newell, Allen. "Physical Symbol Systems." Cognitive Science 4 (1980): 135-183.
Pitts, Walter H., and Warren S. McCulloch. "How We Know Universals: The Perception of Auditory and Visual Forms." In Embodiments of Mind, edited by Warren S. McCulloch. Cambridge, MA: MIT Press, 1965. First published in 1947.
Pylyshyn, Zenon W. The Robot's Dilemma: The Frame Problem in Artificial Intelligence. Norwood, NJ: Ablex, 1987.
Rumelhart, David E., and James L. McClelland, eds. Parallel Distributed Processing: Explorations in the Microstructure of Cognition. 2 vols. Cambridge, MA: MIT Press, 1986.
Russell, Stuart J., and Peter Norvig. Artificial Intelligence: A Modern Approach. 2nd ed. Upper Saddle River, NJ: Prentice-Hall, 2003.
Searle, John R. "Minds, Brains, and Programs." Behavioral and Brain Sciences 3 (1980): 417-424. Reprinted in The Philosophy of Artificial Intelligence, edited by Margaret A. Boden, 67-88. Oxford: Oxford University Press, 1990.
Sloman, Aaron. "The Irrelevance of Turing Machines to Artificial Intelligence." In Computationalism: New Directions, edited by Matthias Scheutz. Cambridge, MA: MIT Press, 2002.
Smith, Brian C. On the Origin of Objects. Cambridge, MA: MIT Press, 1996.
Margaret A. Boden (1996, 2005)
- The Top Artificial Intelligence (AI) Stocks to Buy With $1,000 Right Now - The Motley Fool - March 2nd, 2026 [March 2nd, 2026]
- Protecting Attorney-Client Privilege in the Age of Artificial Intelligence - Law.com - March 2nd, 2026 [March 2nd, 2026]
- U.S. Postal Inspectors Warn Customers to Avoid Scams that Use Artificial Intelligence - PR Newswire - March 2nd, 2026 [March 2nd, 2026]
- Honor Week panel discusses the future of artificial intelligence in academic integrity - The Cavalier Daily - March 2nd, 2026 [March 2nd, 2026]
- 2 Top Artificial Intelligence Stocks to Buy in March - The Motley Fool - March 2nd, 2026 [March 2nd, 2026]
- 2 Top Artificial Intelligence Stocks to Buy in March - Yahoo Finance - March 2nd, 2026 [March 2nd, 2026]
- Report suggests ways for Arkansas government to use artificial intelligence - The Arkansas Democrat-Gazette - March 2nd, 2026 [March 2nd, 2026]
- Why the U.S. Needs the UN in the Age of Artificial Intelligence - Better World Campaign - March 2nd, 2026 [March 2nd, 2026]
- 'Our AI Does Everything!' The Risks of Overstating the Use of Artificial Intelligence - Law.com - March 2nd, 2026 [March 2nd, 2026]
- SEO. How Googles artificial intelligence is changing the way we get information - Revista Merca2.0 - March 2nd, 2026 [March 2nd, 2026]
- Struggling to Pick Artificial Intelligence (AI) Stocks? You're Not Alone -- Try This ETF Instead - Nasdaq - March 2nd, 2026 [March 2nd, 2026]
- Background artificial intelligence: What it is and how it works - Root-Nation.com - March 2nd, 2026 [March 2nd, 2026]
- This Artificial Intelligence (AI) Crypto Is Up 140% Over the Past 90 Days, But Is It a Buy? - The Motley Fool - March 2nd, 2026 [March 2nd, 2026]
- Prediction: This Artificial Intelligence (AI) Stock Will Join Nvidia, Apple, and Alphabet in the $3 Trillion Club Before 2028 - The Motley Fool - March 2nd, 2026 [March 2nd, 2026]
- Jame Abraham: Exploring the Impact of Artificial Intelligence in Cancer Care - Oncodaily - March 2nd, 2026 [March 2nd, 2026]
- Struggling to Pick Artificial Intelligence (AI) Stocks? You're Not Alone -- Try This ETF Instead - The Motley Fool - March 2nd, 2026 [March 2nd, 2026]
- Palantir Billionaire Peter Thiel Sells 2 Artificial Intelligence (AI) Stocks That Wall Street Says Are Undervalued - Nasdaq - March 2nd, 2026 [March 2nd, 2026]
- Palantir Billionaire Peter Thiel Sells 2 Artificial Intelligence (AI) Stocks That Wall Street Says Are Undervalued - Yahoo Finance - March 2nd, 2026 [March 2nd, 2026]