Archive for the ‘Artificial Intelligence’ Category

Artificial Intelligence With Python | Build AI Models …

Artificial Intelligence With Python:

Artificial Intelligence has been around for over half a century, and its advancements are growing at an exponential rate. The demand for AI is at its peak, and if you wish to learn about Artificial Intelligence, you've landed in the right place. This blog on Artificial Intelligence With Python will help you understand all the concepts of AI, with practical implementations in Python.

To get in-depth knowledge of Artificial Intelligence and Machine Learning, you can enroll in the live Machine Learning Engineer Master Program by Edureka, with 24/7 support and lifetime access.

The following topics are covered in this Artificial Intelligence With Python blog:

A lot of people have asked me, "Which programming language is best for AI?" or "Why Python for AI?"

Despite being a general-purpose language, Python has made its way into the most complex technologies, such as Artificial Intelligence, Machine Learning, and Deep Learning.

Why has Python gained so much popularity in all these fields?

Here is a list of reasons why Python is the language of choice for developers, data scientists, machine learning engineers, and more:

[Figure: Why Python For AI]

If you wish to learn Python programming in depth, here are a couple of links; do give these blogs a read:

Since this blog is all about Artificial Intelligence With Python, I will introduce you to the most effective and popular AI-focused Python libraries.

In addition to the above-mentioned libraries, make sure you check out the Top 10 Python Libraries You Must Know In 2019 blog to get a clearer understanding.

Now that you know the important Python libraries that are used for implementing AI techniques, let's focus on Artificial Intelligence itself. In the next section, I will cover the fundamental concepts of AI.

First, let's start by understanding the sudden demand for AI.

Since the emergence of AI in the 1950s, we have seen exponential growth in its potential. But if AI has been around for over half a century, why has it suddenly gained so much importance? Why are we talking about Artificial Intelligence now?

[Figure: Demand For AI]

The main reasons for the vast popularity of AI are:

More computing power: Implementing AI requires a lot of computing power, since building AI models involves heavy computation and the use of complex neural networks. The advent of GPUs has made this feasible: we can finally perform high-level computations and implement complex algorithms.

Data Generation: Over the past few years, we've been generating an immense amount of data. Such data needs to be analyzed and processed using Machine Learning algorithms and other AI techniques.

More Effective Algorithms: In the past decade, we've developed state-of-the-art algorithms, many of which involve deep neural networks.

Broad Investment: As tech giants such as Tesla, Netflix, and Facebook started investing in Artificial Intelligence, the field gained popularity, which in turn increased the demand for AI-based systems.

The growth of Artificial Intelligence is exponential, and it is adding to the economy at an accelerated pace. So this is the right time to get into the field of Artificial Intelligence.

Check out these AI and Machine Learning courses by E & ICT Academy NIT Warangal to learn and build a career in Artificial Intelligence.

The term Artificial Intelligence was first coined in 1956 by John McCarthy at the Dartmouth Conference. He defined AI as:

The science and engineering of making intelligent machines.

In other words, Artificial Intelligence is the science of getting machines to think and make decisions like humans.

In the recent past, AI has been able to accomplish this by creating machines and robots that have been used in a wide range of fields including healthcare, robotics, marketing, business analytics and many more.

Now let's discuss the different stages of Artificial Intelligence.

AI is structured along three evolutionary stages:

[Figure: Types Of AI]

Commonly known as weak AI, Artificial Narrow Intelligence involves applying AI only to specific tasks.

The existing AI-based systems that claim to use artificial intelligence actually operate as weak AI. Alexa is a good example of narrow intelligence: it operates within a limited, predefined range of functions. Alexa has no genuine intelligence or self-awareness.

Google's search engine, Sophia, self-driving cars, and even the famous AlphaGo all fall under the category of weak AI.

Commonly known as strong AI, Artificial General Intelligence involves machines that possess the ability to perform any intellectual task that a human being can.

You see, machines don't possess human-like abilities; they have strong processing units that can perform high-level computations, but they're not yet capable of thinking and reasoning like a human.

There are many experts who doubt that AGI will ever be possible, and there are also many who question whether it would be desirable.

Stephen Hawking, for example, warned:

Strong AI would take off on its own, and re-design itself at an ever-increasing rate. Humans, who are limited by slow biological evolution, couldn't compete and would be superseded.

Artificial Super Intelligence refers to a time when the capability of computers will surpass that of humans.

ASI is presently seen as a hypothetical situation, as depicted in movies and science-fiction books where machines have taken over the world. However, tech masterminds like Elon Musk believe that ASI will take over the world by 2040!

What do you think about Artificial Super Intelligence? Let me know your thoughts in the comment section.

Before I go any further, let me clear up a very common misconception. I've been asked this question by almost every beginner:

What is the difference between AI and Machine Learning and Deep Learning?

Let's break it down:

People tend to think that Artificial Intelligence, Machine Learning, and Deep Learning are the same, since they have common applications. For example, Siri is an application of AI, Machine Learning, and Deep Learning.

So how are these technologies related?

To sum it up, AI, Machine Learning, and Deep Learning are interconnected fields. Machine Learning and Deep Learning aid Artificial Intelligence by providing a set of algorithms and neural networks to solve data-driven problems.

However, Artificial Intelligence is not restricted to Machine Learning and Deep Learning. It covers a vast domain of fields, including Natural Language Processing (NLP), object detection, computer vision, robotics, expert systems, and so on.

Now let's get started with Machine Learning.

The term Machine Learning was first coined by Arthur Samuel in 1959, a landmark moment in the history of the field.

In simple terms,

Machine Learning is a subset of Artificial Intelligence (AI) that gives machines the ability to learn automatically from large amounts of data and to improve through experience. Thus, Machine Learning is the practice of getting machines to solve problems by learning from data.

But how can a machine make decisions?

If you feed a machine a good amount of data, it will learn how to interpret, process and analyze this data by using Machine Learning Algorithms.

[Figure: What Is Machine Learning]

To sum it up, take a look at the above figure:

Now that we know what Machine Learning is, let's look at the different ways in which machines can learn.

A machine can learn to solve a problem by following one of three approaches:

Supervised Learning

Unsupervised Learning

Reinforcement Learning

Supervised learning is a technique in which we teach or train the machine using data which is well labeled.

To understand Supervised Learning, let's consider an analogy. As kids, we all needed guidance to solve math problems. Our teachers helped us understand what addition is and how it is done.

Similarly, you can think of Supervised Learning as a type of Machine Learning that involves a guide. The labeled data set is the teacher that trains the machine to understand patterns in the data; the labeled data set is nothing but the training data set.

[Figure: Supervised Learning]

Consider the above figure. Here we're feeding the machine images of Tom and Jerry, and the goal is for the machine to identify and classify the images into two groups (Tom images and Jerry images).

The training data set that is fed to the model is labeled; that is, we're telling the machine, "this is how Tom looks and this is Jerry." By doing so, you're training the machine with labeled data. In Supervised Learning, there is a well-defined training phase carried out with the help of labeled data.

Unsupervised learning involves training by using unlabeled data and allowing the model to act on that information without guidance.

Think of Unsupervised Learning as a smart kid that learns without any guidance. In this type of Machine Learning, the model is not fed labeled data: it has no clue which image is Tom and which is Jerry. It figures out patterns and the differences between Tom and Jerry on its own by processing tons of data.

[Figure: Unsupervised Learning]

For example, it identifies prominent features of Tom, such as pointy ears and bigger size, to understand that this image is of type 1. Similarly, it finds such features in Jerry and knows that this image is of type 2.

Therefore, it classifies the images into two different classes without knowing who Tom or Jerry is.

Reinforcement Learning is a part of Machine Learning where an agent is put in an environment and learns to behave by performing certain actions and observing the rewards it gets for those actions.

Imagine that you were dropped off at an isolated island!

What would you do?

Panic? Yes, of course, initially we all would. But as time passes, you will learn how to live on the island. You will explore the environment, understand the climate conditions, the type of food that grows there, the dangers of the island, and so on.

This is exactly how Reinforcement Learning works: it involves an agent (you, stuck on the island) that is put in an unknown environment (the island), where it must learn by observing and performing actions that result in rewards.
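To make this loop concrete, here is a minimal sketch of tabular Q-learning, a classic Reinforcement Learning algorithm, on a toy one-dimensional world. The world layout, rewards, and hyperparameters here are illustrative assumptions, not details from this blog:

import random

N_STATES = 5          # positions 0..4; reaching position 4 is the goal
ACTIONS = [-1, +1]    # move left or right
ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.2

# Q-table: estimated long-term reward for each (state, action) pair
Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

for episode in range(500):
    state = 0
    while state != N_STATES - 1:
        # Explore occasionally, otherwise exploit the best-known action
        if random.random() < EPSILON:
            action = random.choice(ACTIONS)
        else:
            action = max(ACTIONS, key=lambda a: Q[(state, a)])
        next_state = min(max(state + action, 0), N_STATES - 1)
        reward = 1.0 if next_state == N_STATES - 1 else -0.01
        # Q-learning update: nudge the estimate toward reward + discounted future value
        best_next = max(Q[(next_state, a)] for a in ACTIONS)
        Q[(state, action)] += ALPHA * (reward + GAMMA * best_next - Q[(state, action)])
        state = next_state

# The learned policy: for every non-goal state, the preferred action
print({s: max(ACTIONS, key=lambda a: Q[(s, a)]) for s in range(N_STATES - 1)})

After enough episodes, the printed policy prefers moving right (+1) from every position, because that is what maximizes the accumulated reward.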

Reinforcement Learning is mainly used in advanced Machine Learning applications such as self-driving cars and AlphaGo. So that sums up the types of Machine Learning.

Now, let's look at the types of problems that are solved by using Machine Learning.

There are three main categories of problems that can be solved using Machine Learning:

Regression: In this type of problem, the output is a continuous quantity. For example, predicting the speed of a car given the distance it has traveled is a Regression problem. Regression problems can be solved with Supervised Learning algorithms such as Linear Regression.
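As a quick illustration, here is a minimal sketch of a Regression problem solved with scikit-learn's LinearRegression; the distance and speed numbers are made-up toy data:

import numpy as np
from sklearn.linear_model import LinearRegression

distance = np.array([[10], [20], [30], [40], [50]])  # feature (2-D array)
speed = np.array([18, 33, 47, 61, 78])               # continuous target

model = LinearRegression().fit(distance, speed)
print(model.predict([[35]]))  # predicted speed for an unseen distance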

Classification: In this type, the output is a categorical value. Classifying emails into two classes, spam and non-spam, is a Classification problem that can be solved with Supervised Learning classification algorithms such as Support Vector Machines, Naive Bayes, Logistic Regression, and K Nearest Neighbors.
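Here is a minimal sketch of the spam example using Naive Bayes, one of the algorithms named above; the tiny labeled data set is purely illustrative:

from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB

emails = ["win a free prize now", "meeting at noon tomorrow",
          "free money click here", "project report attached"]
labels = ["spam", "ham", "spam", "ham"]  # categorical output

vec = CountVectorizer()
X = vec.fit_transform(emails)            # word counts as features
clf = MultinomialNB().fit(X, labels)
print(clf.predict(vec.transform(["claim your free prize"])))  # -> ['spam']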

Clustering: This type of problem involves grouping the input into two or more clusters based on feature similarity. For example, clustering viewers into similar groups based on their interests, age, geography, and so on can be done with Unsupervised Learning algorithms like K-Means Clustering.
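And here is a minimal sketch of the viewer-clustering example with K-Means; the (age, hours watched per week) data points are invented for illustration:

import numpy as np
from sklearn.cluster import KMeans

viewers = np.array([[16, 20], [18, 25], [17, 22],   # younger, heavy viewers
                    [45, 5],  [50, 4],  [48, 6]])   # older, light viewers

kmeans = KMeans(n_clusters=2, n_init=10, random_state=0).fit(viewers)
print(kmeans.labels_)              # cluster assignment for each viewer
print(kmeans.predict([[19, 23]]))  # cluster for a new, unseen viewer

Note that no labels are given: the algorithm discovers the two groups on its own, exactly as described above.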

Here's a table that sums up the difference between Regression, Classification, and Clustering:

[Table: Regression vs Classification vs Clustering]

Now let's look at how the Machine Learning process works.

The Machine Learning process involves building a predictive model that can be used to find a solution to a problem statement.

To understand the Machine Learning process, let's assume that you have been given a problem that needs to be solved by using Machine Learning.

The problem is to predict the occurrence of rain in your local area by using Machine Learning.

The below steps are followed in a Machine Learning process:

Step 1: Define the objective of the Problem Statement

At this step, we must understand what exactly needs to be predicted. In our case, the objective is to predict the possibility of rain by studying weather conditions.

It is also essential to take mental notes on what kind of data can be used to solve this problem or the type of approach you must follow to get to the solution.

Step 2: Data Gathering

At this stage, you must be asking questions such as,

See original here:
Artificial Intelligence With Python | Build AI Models ...

Artificial Intelligence | Encyclopedia.com

Artificial Intelligence (AI) tries to enable computers to do the things that minds can do. These things include seeing pathways, picking things up, learning categories from experience, and using emotions to schedule one's actions, which many animals can do, too. Thus, human intelligence is not the sole focus of AI. Even terrestrial psychology is not the sole focus, because some people use AI to explore the range of all possible minds.

There are four major AI methodologies: symbolic AI, connectionism, situated robotics, and evolutionary programming (Russell and Norvig 2003). AI artifacts are correspondingly varied. They include both programs (including neural networks) and robots, each of which may be either designed in detail or largely evolved. The field is closely related to artificial life (A-Life), which aims to throw light on biology much as some AI aims to throw light on psychology.

AI researchers are inspired by two different intellectual motivations, and while some people have both, most favor one over the other. On the one hand, many AI researchers seek solutions to technological problems, not caring whether these resemble human (or animal) psychology. They often make use of ideas about how people do things. Programs designed to aid/replace human experts, for example, have been hugely influenced by knowledge engineering, in which programmers try to discover what, and how, human experts are thinking when they do the tasks being modeled. But if these technological AI workers can find a nonhuman method, or even a mere trick (a kludge) to increase the power of their program, they will gladly use it.

Technological AI has been hugely successful. It has entered administrative, financial, medical, and manufacturing practice at countless different points. It is largely invisible to the ordinary person, lying behind some deceptively simple human-computer interface or being hidden away inside a car or refrigerator. Many procedures taken for granted within current computer science were originated within AI (pattern-recognition and image-processing, for example).

On the other hand, AI researchers may have a scientific aim. They may want their programs or robots to help people understand how human (or animal) minds work. They may even ask how intelligence in general is possible, exploring the space of possible minds. The scientific approach (psychological AI) is the more relevant for philosophers (Boden 1990, Copeland 1993, Sloman 2002). It is also central to cognitive science, and to computationalism.

Considered as a whole, psychological AI has been less obviously successful than technological AI. This is partly because the tasks it tries to achieve are often more difficult. In addition, it is less clear, for philosophical as well as empirical reasons, what should be counted as success.

Symbolic AI is also known as classical AI and as GOFAI, short for John Haugeland's label "Good Old-Fashioned AI" (1985). It models mental processes as the step-by-step information processing of digital computers. Thinking is seen as symbol-manipulation, as (formal) computation over (formal) representations. Some GOFAI programs are explicitly hierarchical, consisting of procedures and subroutines specified at different levels. These define a hierarchically structured search-space, which may be astronomical in size. Rules of thumb, or heuristics, are typically provided to guide the search, by excluding certain areas of possibility and leading the program to focus on others. The earliest AI programs were like this, and the later methodology of object-oriented programming is similar.
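As an illustration of heuristic-guided search (a sketch of mine, not an algorithm from this entry), here is a greedy best-first search in Python, where a heuristic steers the program through a grid-shaped search space toward a goal:

import heapq

def best_first_search(start, goal, neighbors, heuristic):
    """neighbors(state) -> iterable of states; heuristic(state, goal) -> cost estimate."""
    frontier = [(heuristic(start, goal), start)]
    came_from = {start: None}
    while frontier:
        _, state = heapq.heappop(frontier)   # expand the seemingly closest state first
        if state == goal:
            path = []
            while state is not None:         # walk back to reconstruct the path
                path.append(state)
                state = came_from[state]
            return path[::-1]
        for nxt in neighbors(state):
            if nxt not in came_from:         # skip already-seen states
                came_from[nxt] = state
                heapq.heappush(frontier, (heuristic(nxt, goal), nxt))
    return None

# Toy 10x10 grid: the Manhattan-distance heuristic prunes the search
# by always steering it toward the goal corner.
grid_neighbors = lambda s: [(s[0]+dx, s[1]+dy) for dx, dy in ((1,0),(-1,0),(0,1),(0,-1))
                            if 0 <= s[0]+dx < 10 and 0 <= s[1]+dy < 10]
manhattan = lambda s, g: abs(s[0]-g[0]) + abs(s[1]-g[1])
print(best_first_search((0, 0), (9, 9), grid_neighbors, manhattan))

The heuristic makes the search tractable at a cost: greedy best-first search is not guaranteed to find the shortest path, which is precisely the kind of trade-off heuristics introduce.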

Certain symbolic programs, namely production systems, are implicitly hierarchical. These consist of sets of logically separate if-then (condition-action) rules, or productions, defining what actions should be taken in response to specific conditions. An action or condition may be unitary or complex, in the latter case being defined by a conjunction of several mini-actions or mini-conditions. And a production may function wholly within computer memory (to set a goal, for instance, or to record a partial parsing) or outside it (via input/output devices such as cameras or keyboards).
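A production system of this kind is easy to sketch. The following toy example (mine, not Boden's) cycles through logically separate if-then rules, firing any rule whose condition matches the current working memory:

# Each rule is a (condition, action) pair over a working memory of facts.
rules = [
    (lambda m: "hungry" in m and "has_food" in m, lambda m: m.add("eat")),
    (lambda m: "eat" in m,                        lambda m: m.discard("hungry")),
]

memory = {"hungry", "has_food"}  # working memory
changed = True
while changed:                   # keep cycling until no rule changes memory
    changed = False
    for condition, action in rules:
        before = set(memory)
        if condition(memory):
            action(memory)
        if memory != before:
            changed = True

print(memory)  # -> {'has_food', 'eat'}: the system "ate" and is no longer hungry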

Another symbolic technique, widely used in natural language processing (NLP) programs, involves augmented transition networks, or ATNs. These avoid explicit backtracking by using guidance at each decision-point to decide which question to ask and/or which path to take.

GOFAI methodology is used for developing a wide variety of language-using programs and problem-solvers. The more precisely and explicitly a problem-domain can be defined, the more likely it is that a symbolic program can be used to good effect. Often, folk-psychological categories and/or specific propositions are explicitly represented in the system. This type of AI, and the forms of computational psychology based on it, is defended by the philosopher Jerry Fodor (1988).

GOFAI models (whether technological or scientific) include robots, planning programs, theorem-provers, learning programs, question-answerers, data-mining systems, machine translators, expert systems of many different kinds, chess players, semantic networks, and analogy machines. In addition, a host of software agents (specialist mini-programs that can aid a human being in solving a problem) are implemented in this way. And an increasingly important area of research is distributed AI, in which cooperation occurs between many relatively simple individuals, which may be GOFAI agents (or neural-network units, or situated robots).

The symbolic approach is used also in modeling creativity in various domains (Boden 2004, Holland et al. 1986). These include musical composition and expressive performance, analogical thinking, line-drawing, painting, architectural design, storytelling (rhetoric as well as plot), mathematics, and scientific discovery. In general, the relevant aesthetic/theoretical style must be specified clearly, so as to define a space of possibilities that can be fruitfully explored by the computer. To what extent the exploratory procedures can plausibly be seen as similar to those used by people varies from case to case.

Connectionist systems, which became widely visible in the mid-1980s, are different. They compute not by following step-by-step programs but by using large numbers of locally connected (associative) computational units, each one of which is simple. The processing is bottom-up rather than top-down.

Connectionism is sometimes said to be opposed to AI, although it has been part of AI since its beginnings in the 1940s (McCulloch and Pitts 1943, Pitts and McCulloch 1947). What connectionism is opposed to, rather, is symbolic AI. Yet even here, "opposed" is not quite the right word, since hybrid systems exist that combine both methodologies. Moreover, GOFAI devotees such as Fodor see connectionism as compatible with GOFAI, claiming that it concerns how symbolic computation can be implemented (Fodor and Pylyshyn 1988).

Two largely separate AI communities began to emerge in the late 1950s (Boden forthcoming). The symbolic school focused on logic and Turing-computation, whereas the connectionist school focused on associative, and often probabilistic, neural networks. (Most connectionist systems are connectionist virtual machines, implemented in von Neumann computers; only a few are built in dedicated connectionist hardware.) Many people remained sympathetic to both schools. But the two methodologies are so different in practice that most hands-on AI researchers use either one or the other.

There are different types of connectionist systems. Most philosophical interest, however, has focused on networks that do parallel distributed processing, or PDP (Clark 1989, Rumelhart and McClelland 1986). In essence, PDP systems are pattern recognizers. Unlike brittle GOFAI programs, which often produce nonsense if provided with incomplete or part-contradictory information, they show graceful degradation. That is, the input patterns can be recognized (up to a point) even if they are imperfect.

A PDP network is made up of subsymbolic units, whose semantic significance cannot easily be expressed in terms of familiar semantic content, still less propositions. (Some GOFAI programs employ subsymbolic units, but most do not.) That is, no single unit codes for a recognizable concept, such as dog or cat. These concepts are represented, rather, by the pattern of activity distributed over the entire network.

Because the representation is not stored in a single unit but is distributed over the whole network, PDP systems can tolerate imperfect data. (Some GOFAI systems can do so too, but only if the imperfections are specifically foreseen and provided for by the programmer.) Moreover, a single subsymbolic unit may mean one thing in one input-context and another in another. What the network as a whole can represent depends on what significance the designer has decided to assign to the input-units. For instance, some input-units are sensitive to light (or to coded information about light), others to sound, others to triads of phonological categories and so on.

Most PDP systems can learn. In such cases, the weights on the links of PDP units in the hidden layer (between the input-layer and the output-layer) can be altered by experience, so that the network can learn a pattern merely by being shown many examples of it. (A GOFAI learning-program, in effect, has to be told what to look for beforehand, and how.) Broadly, the weight on an excitatory link is increased by every coactivation of the two units concerned: cells that fire together, wire together.
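The "fire together, wire together" rule is simple enough to sketch in a few lines. In this illustrative example (mine, not from the entry), the weight on the link between two units grows in proportion to their coactivation:

import numpy as np

rng = np.random.default_rng(0)
n_units, lr = 4, 0.1
W = np.zeros((n_units, n_units))  # weights on the links between units

# Binary activity patterns: each row records which units fired together
patterns = rng.integers(0, 2, size=(20, n_units))
for x in patterns:
    W += lr * np.outer(x, x)      # coactive pairs strengthen their link
np.fill_diagonal(W, 0.0)          # no self-connections

print(W)  # units that often fired together now share large mutual weights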

These two AI approaches have complementary strengths and weaknesses. For instance, symbolic AI is better at modeling hierarchy and strong constraints, whereas connectionism copes better with pattern recognition, especially if many conflicting, and perhaps incomplete, constraints are relevant. Despite having fervent philosophical champions on both sides, neither methodology is adequate for all of the tasks dealt with by AI scientists. Indeed, much research in connectionism has aimed to restore the lost logical strengths of GOFAI to neural networks, with only limited success by the beginning of the twenty-first century.

Another, and more recently popular, AI methodology is situated robotics (Brooks 1991). Like connectionism, this was first explored in the 1950s. Situated robots are described by their designers as autonomous systems embedded in their environment (Heidegger is sometimes cited). Instead of planning their actions, as classical robots do, situated robots react directly to environmental cues. One might say that they are embodied production systems, whose if-then rules are engineered rather than programmed, and whose conditions lie in the external environment, not inside computer memory. Although, unlike GOFAI robots, they contain no objective representations of the world, some of them do construct temporary, subject-centered (deictic) representations.

The main aim of situated roboticists in the mid-1980s, such as Rodney Brooks, was to solve/avoid the frame problem that had bedeviled GOFAI (Pylyshyn 1987). GOFAI planners and robots had to anticipate all possible contingencies, including the side effects of actions taken by the system itself, if they were not to be defeated by unexpected (perhaps seemingly irrelevant) events. This was one of the reasons given by Hubert Dreyfus (1992) in arguing that GOFAI could not possibly succeed: Intelligence, he said, is unformalizable. Several ways of implementing nonmonotonic logics in GOFAI were suggested, allowing a conclusion previously drawn by faultless reasoning to be negated by new evidence. But because the general nature of that new evidence had to be foreseen, the frame problem persisted.

Brooks argued that reasoning shouldn't be employed at all: the system should simply react appropriately, in a reflex fashion, to specific environmental cues. This, he said, is what insects do, and they are highly successful creatures. (Soon, situated robotics was being used, for instance, to model the six-legged movement of cockroaches.) Some people joked that AI stood for artificial insects, not artificial intelligence. But the joke carried a sting: many argued that much human thinking needs objective representations, so the scope for situated robotics was strictly limited.

In evolutionary programming, genetic algorithms (GAs) are used by a program to make random variations in its own rules. The initial rules, before evolution begins, either do not achieve the task in question or do so only inefficiently; sometimes, they are even chosen at random.

The variations allowed are broadly modeled on biological mutations and crossovers, although more unnatural types are sometimes employed. The most successful rules are automatically selected, and then varied again. This is more easily said than done: The breakthrough in GA methodology occurred when John Holland (1992) defined an automatic procedure for recognizing which rules, out of a large and simultaneously active set, were those most responsible for whatever level of success the evolving system had just achieved.

Selection is done by some specific fitness criterion, predefined in light of the task the programmer has in mind. Unlike GOFAI systems, a GA program contains no explicit representation of what it is required to do: its task is implicit in the fitness criterion. (Similarly, living things have evolved to do what they do without knowing what that is.) After many generations, the GA system may be well-adapted to its task. For certain types of tasks, it can even find the optimal solution.
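The variation-plus-selection cycle can be sketched compactly. The following toy genetic algorithm (an illustration of the idea, not Holland's actual system) evolves a bit-string toward an all-ones target, where the predefined fitness criterion is simply the number of ones:

import random

TARGET_LEN, POP, GENS, MUT = 20, 30, 60, 0.05
fitness = lambda ind: sum(ind)            # predefined fitness criterion

population = [[random.randint(0, 1) for _ in range(TARGET_LEN)] for _ in range(POP)]
for gen in range(GENS):
    population.sort(key=fitness, reverse=True)
    parents = population[:POP // 2]       # select the fittest half
    children = []
    while len(children) < POP - len(parents):
        a, b = random.sample(parents, 2)
        cut = random.randrange(1, TARGET_LEN)                     # crossover
        child = a[:cut] + b[cut:]
        child = [bit ^ (random.random() < MUT) for bit in child]  # mutation
        children.append(child)
    population = parents + children

print(fitness(max(population, key=fitness)), "out of", TARGET_LEN)

Note that nothing in the program represents the all-ones target explicitly; the task is implicit in the fitness function, as the entry says.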

This AI method is used to develop both symbolic and connectionist AI systems. And it is applied both to abstract problem-solving (mathematical optimization, for instance, or the synthesis of new pharmaceutical molecules) and to evolutionary robotics, wherein the brain and/or sensorimotor anatomy of robots evolve within a specific task-environment.

It is also used for artistic purposes, in the composition of music or the generation of new visual forms. In these cases, evolution is usually interactive. That is, the variation is done automatically but the selection is done by a human being, who does not need to (and usually could not) define, or even name, the aesthetic fitness criteria being applied.

AI is a close cousin of A-Life (Boden 1996). This is a form of mathematical biology, which employs computer simulation and situated robotics to study the emergence of complexity in self-organizing, self-reproducing, adaptive systems. (A caveat: much as some AI is purely technological in aim, so is some A-Life; the research of most interest to philosophers is the scientifically oriented type.)

The key concepts of A-Life date back to the early 1950s. They originated in theoretical work on self-organizing systems of various kinds, including diffusion equations and cellular automata (by Alan Turing and John von Neumann respectively), and in early self-equilibrating machines and situated robots (built by W. Ross Ashby and W. Grey Walter). But A-Life did not flourish until the late 1980s, when computing power at last sufficed to explore these theoretical ideas in practice.

Much A-Life work focuses on specific biological phenomena, such as flocking, cooperation in ant colonies, or morphogenesis, from cell-differentiation to the formation of leopard spots or tiger stripes. But A-Life also studies general principles of self-organization in biology: evolution and coevolution, reproduction, and metabolism. In addition, it explores the nature of life as such: life as it could be, not merely life as it is.

A-Life workers do not all use the same methodology, but they do eschew the top-down methods of GOFAI. Situated and evolutionary robotics, and GA-generated neural networks, too, are prominent approaches within the field. But not all A-Life systems are evolutionary. Some demonstrate how a small number of fixed, and simple, rules can lead to self-organization of an apparently complex kind.

Many A-Lifers take pains to distance themselves from AI. But besides their close historical connections, AI and A-Life are philosophically related in virtue of the linkage between life and mind. It is known that psychological properties arise in living things, and some people argue (or assume) that they can arise only in living things. Accordingly, the whole of AI could be regarded as a subarea of A-Life. Indeed, some people argue that success in AI (even in technological AI) must await, and build on, success in A-Life.

Whichever of the two AI motivations (technological or psychological) is in question, the name of the field is misleading in three ways. First, the term intelligence is normally understood to cover only a subset of what AI workers are trying to do. Second, intelligence is often supposed to be distinct from emotion, so that AI is assumed to exclude work on that. And third, the name implies that a successful AI system would really be intelligent, a philosophically controversial claim that AI researchers do not have to endorse (though some do).

As for the first point, people do not normally regard vision or locomotion as examples of intelligence. Many people would say that speaking one's native language is not a case of intelligence either, except in comparison with nonhuman species; and common sense is sometimes contrasted with intelligence. The term is usually reserved for special cases of human thought that show exceptional creativity and subtlety, or which require many years of formal education. Medical diagnosis, scientific or legal reasoning, playing chess, and translating from one language to another are typically regarded as difficult, thus requiring intelligence. And these tasks were the main focus of research when AI began. Vision, for example, was assumed to be relatively straightforward, not least because many nonhuman animals have it too. It gradually became clear, however, that everyday capacities such as vision and locomotion are vastly more complex than had been supposed. The early definition of AI as "programming computers to do things that involve intelligence when done by people" was recognized as misleading, and eventually dropped.

Similarly, intelligence is often opposed to emotion. Many people assume that AI could never model that. However, crude examples of such models existed in the early 1960s, and emotion was recognized by a high priest of AI, Herbert Simon, as being essential to any complex intelligence. Later, research in the computational philosophy (and modeling) of affect showed that emotions have evolved as scheduling mechanisms for systems with many different, and potentially conflicting, purposes (Minsky 1985, and Web site). When AI began, it was difficult enough to get a program to follow one goal (with its subgoals) intelligently; any more than that was essentially impossible. For this reason, among others, AI modeling of emotion was put on the back burner for about thirty years. By the 1990s, however, it had become a popular focus of AI research, and of neuroscience and philosophy too.

The third point raises the difficult question, which many AI practitioners leave open or even ignore, of whether intentionality can properly be ascribed to any conceivable program/robot (Newell 1980, Dennett 1987, Harnad 1991).

Could some NLP programs really understand the sentences they parse and the words they translate? Or can a visuo-motor circuit evolved within a robot's neural-network brain truly be said to represent the environmental feature to which it responds? If a program, in practice, could pass the Turing Test, could it truly be said to think? More generally, does it even make sense to say that AI may one day achieve artificially produced (but nonetheless genuine) intelligence?

For the many people in the field who adopt some form of functionalism, the answer in each case is: In principle, yes. This applies for those who favor the physical symbol system hypothesis or intentional systems theory. Others adopt connectionist analyses of concepts, and of their development from nonconceptual content. Functionalism is criticized by many writers expert in neuroscience, who claim that its core thesis of multiple realizability is mistaken. Others criticize it at an even deeper level: a growing minority (especially in A-Life) reject neo-Cartesian approaches in favor of philosophies of embodiment, such as phenomenology or autopoiesis.

Part of the reason why such questions are so difficult is that philosophers disagree about what intentionality is, even in the human case. Practitioners of psychological AI generally believe that semantic content, or intentionality, can be naturalized. But they differ about how this can be done.

For instance, a few practitioners of AI regard computation and intentionality as metaphysically inseparable (Smith 1996). Others ascribe meaning only to computations with certain causal consequences and provenance, or grounding. John Searle argues that AI cannot capture intentionality, because, at base, it is concerned with the formal manipulation of formal symbols (Searle 1980). And for those who accept some form of evolutionary semantics, only evolutionary robots could embody meaning.

See also Computationalism; Machine Intelligence.

Boden, Margaret A. The Creative Mind: Myths and Mechanisms. 2nd ed. London: Routledge, 2004.

Boden, Margaret A. Mind as Machine: A History of Cognitive Science. Oxford: Oxford University Press, forthcoming. See especially chapters 4, 7.i, 10–13, and 14.

Boden, Margaret A., ed. The Philosophy of Artificial Intelligence. Oxford: Oxford University Press, 1990.

Boden, Margaret A., ed. The Philosophy of Artificial Life. Oxford: Oxford University Press, 1996.

Brooks, Rodney A. "Intelligence without Representation." Artificial Intelligence 47 (1991): 139–159.

Clark, Andy J. Microcognition: Philosophy, Cognitive Science, and Parallel Distributed Processing. Cambridge, MA: MIT Press, 1989.

Copeland, B. Jack. Artificial Intelligence: A Philosophical Introduction. Oxford: Blackwell, 1993.

Dennett, Daniel C. The Intentional Stance. Cambridge, MA: MIT Press, 1987.

Dreyfus, Hubert L. What Computers Still Can't Do: A Critique of Artificial Reason. Cambridge, MA: MIT Press, 1992.

Fodor, Jerry A., and Zenon W. Pylyshyn. "Connectionism and Cognitive Architecture: A Critical Analysis." Cognition 28 (1988): 3–71.

Harnad, Stevan. "Other Bodies, Other Minds: A Machine Incarnation of an Old Philosophical Problem." Minds and Machines 1 (1991): 43–54.

Haugeland, John. Artificial Intelligence: The Very Idea. Cambridge, MA: MIT Press, 1985.

Holland, John H. Adaptation in Natural and Artificial Systems: An Introductory Analysis with Applications to Biology, Control, and Artificial Intelligence. Cambridge, MA: MIT Press, 1992.

Holland, John H., Keith J. Holyoak, Richard E. Nisbett, and Paul R. Thagard. Induction: Processes of Inference, Learning, and Discovery. Cambridge, MA: MIT Press, 1986.

McCulloch, Warren S., and Walter H. Pitts. "A Logical Calculus of the Ideas Immanent in Nervous Activity." In The Philosophy of Artificial Intelligence, edited by Margaret A. Boden. Oxford: Oxford University Press, 1990. First published in 1943.

Minsky, Marvin L. The Emotion Machine. Available from http://web.media.mit.edu/~minsky/E1/eb1.html. Web site only.

Minsky, Marvin L. The Society of Mind. New York: Simon & Schuster, 1985.

Newell, Allen. "Physical Symbol Systems." Cognitive Science 4 (1980): 135–183.

Pitts, Walter H., and Warren S. McCulloch. "How We Know Universals: The Perception of Auditory and Visual Forms." In Embodiments of Mind, edited by Warren S. McCulloch. Cambridge, MA: MIT Press, 1965. First published in 1947.

Pylyshyn, Zenon W. The Robot's Dilemma: The Frame Problem in Artificial Intelligence. Norwood, NJ: Ablex, 1987.

Rumelhart, David E., and James L. McClelland, eds. Parallel Distributed Processing: Explorations in the Microstructure of Cognition. 2 vols. Cambridge, MA: MIT Press, 1986.

Russell, Stuart J., and Peter Norvig. Artificial Intelligence: A Modern Approach. 2nd ed. Upper Saddle River, NJ: Prentice-Hall, 2003.

Searle, John R. "Minds, Brains, and Programs." The Behavioral and Brain Sciences 3 (1980): 417–424. Reprinted in M. A. Boden, ed., The Philosophy of Artificial Intelligence (Oxford: Oxford University Press, 1990), pp. 67–88.

Sloman, Aaron. "The Irrelevance of Turing Machines to Artificial Intelligence." In Computationalism: New Directions, edited by Matthias Scheutz. Cambridge, MA: MIT Press, 2002.

Smith, Brian C. On the Origin of Objects. Cambridge, MA: MIT Press, 1996.

Margaret A. Boden (1996, 2005)

Originally posted here:
Artificial Intelligence | Encyclopedia.com

Artificial intelligence is being used to digitally replicate human voices – NPR

Reporter Chloe Veltman reacts to hearing her digital voice double, "Chloney," for the first time, with Speech Morphing chief linguist Mark Seligman. Courtesy of Speech Morphing

The science behind making machines talk just like humans is very complex, because our speech patterns are so nuanced.

"The voice is not easy to grasp," says Klaus Scherer, emeritus professor of the psychology of emotion at the University of Geneva. "To analyze the voice really requires quite a lot of knowledge about acoustics, vocal mechanisms and physiological aspects. So it is necessarily interdisciplinary, and quite demanding in terms of what you need to master in order to do anything of consequence."

So it's not surprising that it has taken well over 200 years for synthetic voices to get from the first speaking machine, invented by Wolfgang von Kempelen around 1800 (a boxlike contraption that used bellows, pipes and a rubber mouth and nose to simulate a few recognizably human utterances, like "mama" and "papa"), to a Samuel L. Jackson voice clone delivering the weather report on Alexa today.

A model replica of Wolfgang von Kempelen's Speaking Machine. Fabian Brackhane

Talking machines like Siri, Google Assistant and Alexa, or a bank's automated customer service line, are now sounding quite human. Thanks to advances in artificial intelligence, or AI, we've reached a point where it's sometimes difficult to distinguish synthetic voices from real ones.

I wanted to find out what's involved in the process at the customer end. So I approached San Francisco Bay Area-based natural language speech synthesis company Speech Morphing about creating a clone or "digital double" of my own voice.

Given the complexities of speech synthesis, it's quite a shock to find out just how easy it is to order one up. For a basic conversational build, all a customer has to do is record themselves saying a bunch of scripted lines for roughly an hour. And that's about it.

"We extract 10 to 15 minutes of net recordings for a basic build," says Speech Morphing founder and CEO Fathy Yassa.

The hundreds of phrases I record so that Speech Morphing can build my digital voice double seem very random: "Here the explosion of mirth drowned him out." "That's what Carnegie did." "I'd like to be buried under Yankee Stadium with JFK." And so on.

But they aren't as random as they appear. Yassa says the company chooses utterances that will produce a wide enough variety of sounds across a range of emotions (apologetic, enthusiastic, angry, and so on) to feed a neural network-based AI training system. The system essentially teaches itself the specific patterns of a person's speech.

Speech Morphing founder and CEO Fathy Yassa. Chloe Veltman/KQED

Yassa says there are around 20 affects or tones to choose from, and some of these can be used interchangeably, or not at all. "Not every tone or affect is needed for every client," he says. "The choice depends on the target application and use cases. Banking is different from eBooks, is different from reporting and broadcast, is different from consumer."

At the end of the recording session, I send Speech Morphing the audio files. From there, the company breaks down and analyzes my utterances, and then builds the model for the AI to learn from. Yassa says the entire process takes less than a week.

He says the possibilities for the Chloe Veltman voice clone, or "Chloney," as I've affectionately come to call my robot self, are almost limitless.

"We can make you apologetic, we can make you promotional, we can make you act like you're in the theater," Yassa says. "We can make you sing, eventually, though we're not yet there."

The global speech and voice recognition industry is worth tens of billions of dollars, and it is growing fast. Its uses are evident. The technology has given actor Val Kilmer, who lost his voice to throat cancer a few years ago, the chance to reclaim something approaching his former vocal powers.

It's enabled film directors, audiobook creators and game designers to develop characters without the need to have live voice talent on hand, as in the movie Roadrunner, where an AI was trained on Anthony Bourdain's extensive archive of media appearances to create a digital double of the late chef and TV personality's voice.

As pitch-perfect as Bourdain's digital voice double might be, it's also caused controversy. Some people raised ethical concerns about putting words into Bourdain's mouth that he never actually said while he was alive.

A cloned version of Barack Obama's voice warning people about the dangers of fake news, created by actor and film director Jordan Peele, hammers the point home: Sometimes we have cause to be wary of machines that sound too much like us.

"We're entering an era in which our enemies can make it look like anyone is saying anything at any point in time," says the Obama deepfake in the video, produced in collaboration with BuzzFeed in 2018. "Even if they would never say those things."

Sometimes, though, we don't necessarily want machines to sound too human, because it creeps us out.

If you're looking for a digital voice double to read an audiobook to kids, or act as a companion or helper for a senior, a more human-sounding voice might be the right way to go.

"Maybe not something that actually breathes, because that's a little bit creepy, but a little more human might be more approachable," says user experience and voice designer Amy Jimnez Mrquez, who led the voice, multimodal and UX Amazon Alexa personality-experience design team for four years.

But for a machine that performs basic tasks, like, say, a voice-activated refrigerator? Maybe less human is best. "Having something a little more robotic (you can even create a tinny voice that sounds like an actual robot) that is cute would be more appropriate for a refrigerator," Jiménez Márquez says.

At a demo session with Speech Morphing, I get to hear Chloney, my digital voice double.

Her voice comes at me through a pair of portable speakers connected to a laptop. The laptop displays the programming interface into which whatever text I want Chloney to say is typed. The interface includes tools to make micro-adjustments to the pitch, speed and other vocal attributes that might need to be tweaked if Chloney's prosody doesn't come out sounding exactly right.

"Happy birthday to you. Happy birthday to you. Happy birthday, dear Chloney. Happy birthday to you," says Chloney.

Chloney can't sing "Happy Birthday," at least for now. But she can read out news stories I didn't even report myself, like one ripped from an AP newswire about the COVID-19 pandemic. And she can even do it in Spanish.

Chloney sounds quite a lot like me. It's impressive, but it's also a little scary.

"My jaw is on the floor," says the original voice behind Chloney that's me, Chloe as I listen to what my digital voice double can do. "Let's hope she doesn't put me out of a job anytime soon."

Continued here:
Artificial intelligence is being used to digitally replicate human voices - NPR

6 features of the ideal healthcare artificial intelligence algorithm – HealthExec

1. Explainable: Many algorithms are unable to "show their work," a shortcoming commonly known as the black box problem. Quality tools, by contrast, must make clear which traits of a patient and their medical condition drive a diagnosis. Separating association from causation is also crucial.

2. Dynamic: Digital tools should capture and adjust to patients in real time. For example, intracranial and cerebral perfusion pressure can shift quickly after a head injury, and failing to recognize these changes can prove deadly.

3. Precise: The average person generates more than 1 million gigabytes of healthcare data during their lifetime, the equivalent of nearly 300 million books, the authors noted. Algorithms must utilize and distill this information to diagnose complex diseases and changing conditions.

4. Autonomous: After training and testing periods, AI should be able to learn and offer results with little input from providers or developers.

5. Fair: Implicit bias and social inequities must be accounted for. Before including a demographic or socioeconomic factor in a prediction model, developers must determine whether that factor has a proven association with a clinical outcome.

6. Reproducible: These tools are validated externally and prospectively, and shared among multiple academic communities and institutions. Federated learning, which uses a decentralized, online infrastructure to train algorithms, presents a good opportunity for developing reproducible tools; a minimal sketch of the idea follows below.
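As a rough illustration of the federated idea, here is a minimal sketch of federated averaging with a toy linear model: each simulated site trains on its own private data, and only model weights, never raw records, are pooled. The sites, data, and model are invented assumptions, not from the article:

import numpy as np

def local_step(w, X, y, lr=0.1):
    """One gradient-descent step of linear regression on a site's private data."""
    grad = 2 * X.T @ (X @ w - y) / len(y)
    return w - lr * grad

rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0])
sites = []
for _ in range(3):  # three simulated hospitals, each holding private data
    X = rng.normal(size=(50, 2))
    sites.append((X, X @ true_w + rng.normal(scale=0.1, size=50)))

w_global = np.zeros(2)
for _ in range(100):  # federated rounds
    local_ws = [local_step(w_global, X, y) for X, y in sites]
    w_global = np.mean(local_ws, axis=0)  # the server averages the site updates

print(w_global)  # converges near true_w without any site sharing raw data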

See original here:
6 features of the ideal healthcare artificial intelligence algorithm - HealthExec

Teaching Stream Faculty in Artificial Intelligence job with KING ABDULLAH UNIVERSITY OF SCIENCE & TECHNOLOGY | 278533 – Times Higher Education…

King Abdullah University of Science and Technology: Faculty Positions: Center for Teaching and Learning

Location

King Abdullah University of Science and Technology, Thuwal, Saudi Arabia

Deadline

Feb 28, 2022 at 11:59 PM Eastern Time

Description

The Center for Teaching and Learning at KAUST seeks to appoint one or more teaching stream faculty members in the field of artificial intelligence. Such a faculty member will teach the underlying methodology of machine learning and modern AI, as well as its application in software, using modern tools like TensorFlow and PyTorch. The faculty member will educate students in how to use these algorithms and software to implement advanced machine learning and AI methods on modern computing platforms, including graphical processing units (GPUs). The principal teaching will be on neural networks, for applications in image and natural language processing, but also in other areas, like medicine and geoscience. While the faculty member need not be an expert in all of these application areas, he or she should have a deep enough understanding of the underlying methodology to adapt to a diverse set of applications.

The teaching responsibilities will come in several forms. The faculty member may teach up to one class each semester within a KAUST academic program, like Computer Science. Additionally, the faculty member will help lead small workshops at KAUST on AI training for a wide audience of scientists and engineers, for people who hope to apply the technology but do not wish to become experts. Finally, KAUST is seeking to expand its exposure to the Saudi community outside the KAUST campus. AI training and development of micro-credentials will be performed for short periods in Saudi cities like Riyadh, accessible to a wide audience of technical people, as well as business leaders who hope to learn about what can be achieved with AI but who do not seek to become experts themselves. These teaching opportunities outside of KAUST are meant to address the need for AI training throughout the Kingdom, and will help KAUST meet its expanded mission to upskill a broad segment of the Saudi community. The faculty member will help design these training opportunities, and with KAUST colleagues will assist in their delivery. In this context, there may be opportunities to perform on-site training for employees at major Saudi companies.

For a teaching stream faculty member, it is anticipated that one would typically teach 2 to 3 classes per semester. However, the individual who fills the role described here will typically teach one class per semester. Therefore, the remaining time commitment is meant to address the development and implementation of AI workshops at KAUST, as well as the aforementioned training opportunities planned for Saudi cities like Riyadh, and possibly targeted training for Saudi companies.

This teaching stream faculty position is full-time over the 12-month calendar year, with vacation periods consistent with those of all KAUST faculty. The summer period will be a particularly important time for developing and executing the teaching to be performed outside KAUST.

Qualifications

We welcome candidates with a PhD in Computer Science or related areas, with a strong background in Artificial Intelligence and Data Science.

Application Instructions

To apply for this position, please complete the Interfolio application form and upload the following materials:

See the original post:
Teaching Stream Faculty in Artificial Intelligence job with KING ABDULLAH UNIVERSITY OF SCIENCE & TECHNOLOGY | 278533 - Times Higher Education...