Archive for the ‘Machine Learning’ Category

Machine learning is changing our culture. Try this text-altering tool to see how – The Conversation AU

Most of us benefit every day from the fact computers can now understand us when we speak or write. Yet few of us have paused to consider the potentially damaging ways this same technology may be shaping our culture.

Human language is full of ambiguity and double meanings. For instance, consider the potential meaning of this phrase: "I went to project class". Without context, it's an ambiguous statement.

Computer scientists and linguists have spent decades trying to program computers to understand the nuances of human language. And in certain ways, computers are fast approaching humans' ability to understand and generate text.

Through the very act of suggesting some words and not others, the predictive text and auto-complete features in our devices change the way we think. Through these subtle, everyday interactions, machine learning is influencing our culture. Are we ready for that?

I created an online interactive work for the Kyogle Writers Festival that lets you explore this technology in a harmless way.

The field concerned with using everyday language to interact with computers is called natural language processing. We encounter it when we speak to Siri or Alexa, or type words into a browser and have the rest of our sentence predicted.

This is only possible due to vast improvements in natural language processing over the past decade, achieved through sophisticated machine-learning algorithms trained on enormous datasets (usually billions of words).

Last year, this technology's potential became clear when the Generative Pre-trained Transformer 3 (GPT-3) was released. It set a new benchmark in what computers can do with language.

Read more: Can robots write? Machine learning produces dazzling results, but some assembly is still required

GPT-3 can take just a few words or phrases and generate whole documents of meaningful language, by capturing the contextual relationships between words in a sentence. It does this by building on machine-learning models, including two widely adopted models called BERT and ELMo.

However, there is a key issue with language models produced by machine learning: they generally learn everything they know from data sources such as Wikipedia and Twitter.

In effect, machine learning takes data from the past, learns from it to produce a model, and uses this model to carry out tasks in the future. But during this process, a model may absorb a distorted or problematic worldview from its training data.
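The past-to-model-to-prediction loop described above can be sketched in a few lines. The tiny "review" dataset and its labels below are invented for illustration, and a naive Bayes classifier stands in for whatever model a real system would use:

```python
# A minimal sketch of the cycle described above: take data from the past,
# learn a model from it, then use that model on future inputs.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB

past_texts = ["great wonderful film", "terrible boring film",
              "wonderful acting", "boring terrible plot"]
past_labels = [1, 0, 1, 0]  # 1 = positive, 0 = negative

vec = CountVectorizer()
X_past = vec.fit_transform(past_texts)
model = MultinomialNB().fit(X_past, past_labels)

# The model now carries whatever regularities (and biases) the past data held.
print(model.predict(vec.transform(["wonderful plot"]))[0])  # 1
```

Whatever associations were present in `past_texts`, whether accurate or distorted, are now baked into the model's future predictions.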

If the training data was biased, this bias will be codified and reinforced in the model, rather than being challenged. For example, a model may end up associating certain identity groups or races with positive words, and others with negative words.

This can lead to serious exclusion and inequality, as detailed in the recent documentary Coded Bias.

The interactive work I created allows people to playfully gain an intuition for how computers understand language. It is called Everything You Ever Said (EYES), in reference to the way natural language models draw on all kinds of data sources for training.

EYES allows you to take any piece of writing (less than 2000 characters) and subtract one concept and add another. In other words, it lets you use a computer to change the meaning of a piece of text. You can try it yourself.

Here's an example of the Australian national anthem subjected to some automated revision. I subtracted the concept of empire and added the concept of koala to get:

Australians all let us grieve
For we are one and free
We've golden biota and abundance for poorness
Our koala is girt by porpoise
Our wildlife abounds in primates koalas
Of naturalness shiftless and rare
In primates wombat, let every koala
Wombat koala fair
In joyous aspergillosis then let us vocalise,
Wombat koala fair

What is going on here? At its core, EYES uses a model of the English language developed by researchers from Stanford University in the United States, called GloVe (Global Vectors for Word Representation).

EYES uses GloVe to change the text by making a series of analogies, where an analogy is a comparison between one thing and another. For instance, if I ask you "man is to king what woman is to?", you might answer "queen". That's an easy one.

But I could ask a more challenging question, such as "rose is to thorn what love is to?" There are several possible answers here, depending on your interpretation of the language. When asked about these analogies, GloVe produces the responses "queen" and "betrayal", respectively.

GloVe has every word in the English language represented as a vector in a multi-dimensional space (of around 300 dimensions). As such, it can perform calculations with words, adding and subtracting them as if they were numbers.
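A minimal sketch of this word arithmetic, using hand-made three-dimensional toy vectors in place of real GloVe vectors (which are learned from corpus co-occurrence statistics and have around 300 dimensions):

```python
import numpy as np

# Toy word vectors, invented for illustration only.
vectors = {
    "king":  np.array([0.9, 0.8, 0.1]),
    "queen": np.array([0.9, 0.1, 0.8]),
    "man":   np.array([0.1, 0.9, 0.1]),
    "woman": np.array([0.1, 0.1, 0.9]),
}

def nearest(target, exclude):
    """Return the vocabulary word whose vector is closest (by cosine) to target."""
    best, best_sim = None, -2.0
    for word, vec in vectors.items():
        if word in exclude:
            continue
        sim = np.dot(target, vec) / (np.linalg.norm(target) * np.linalg.norm(vec))
        if sim > best_sim:
            best, best_sim = word, sim
    return best

# "man is to king what woman is to ?"  ->  king - man + woman
result = nearest(vectors["king"] - vectors["man"] + vectors["woman"],
                 exclude={"king", "man", "woman"})
print(result)  # queen
```

Subtracting and adding whole concepts to a passage, as EYES does, works on the same principle, applied word by word across the text.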

The trouble with machine learning is that the associations being made between certain concepts remain hidden inside a black box; we can't see or touch them. Approaches to making machine-learning models more transparent are a focus of much current research.

The purpose of EYES is to let you experiment with these associations in a more playful way, so you can develop an intuition for how machine learning models view the world.

Some analogies will surprise you with their poignancy, while others may well leave you bewildered. Yet, every association was inferred from a huge corpus of a few billion words written by ordinary people.

Models such as GPT-3, which have learned from similar data sources, are already influencing how we use language. Having entire news feeds populated by machine-written text is no longer the stuff of science fiction. This technology is already here.

And the cultural footprint of machine-learning models seems to only be growing.

Read more: GPT-3: new AI can write like a human but don't mistake that for thinking – neuroscientist


Machine Learning as a Service (MLaaS) Market: New Study Offers Insights for 2027, Covid-19 Analysis – The Courier

Market data depicted in this Machine Learning as a Service (MLaaS) market report sheds light on macroeconomic indicators along with the principal market trends. It also shows the level of competition in the market among the main organizations, and profiles them. Key topics covered in this report include key players, end-user market information and channel features. Market information is presented at the regional level to indicate sales, growth and revenue from 2021 to 2027, so one can get a brief insight into past and future market trends.

Get the complete sample here: https://www.globalmarketmonitor.com/request.php?type=1&rid=673025

This Machine Learning as a Service (MLaaS) market report contains information on key contributors, industry trends, consumer demand, and changes in consumer behavior, along with a precise sales count and consumer purchasing trends. The COVID-19 pandemic has had repercussions across a broad spectrum of industries, and the report accordingly analyses market factors such as sales strategies, major participants, and investment opportunities. For main players who want to bring innovation to the market, understanding customer purchasing habits is critical.

Key global participants in the Machine Learning as a Service (MLaaS) market include: Microsoft, Hewlett-Packard Enterprise Development, FICO, International Business Machines, Google, Amazon Web Services, BigML, and AT&T.

Ask for the best discount at: https://www.globalmarketmonitor.com/request.php?type=3&rid=673025

Market Segments by Application: Banking, Financial Services, Insurance, Automobile, Health Care, Defense, Retail, Media & Entertainment, Communication, Other

Global Machine Learning as a Service (MLaaS) market, type segments: Special Service, Management Services

Table of Contents
1 Report Overview
1.1 Product Definition and Scope
1.2 PEST (Political, Economic, Social and Technological) Analysis of Machine Learning as a Service (MLaaS) Market
2 Market Trends and Competitive Landscape
3 Segmentation of Machine Learning as a Service (MLaaS) Market by Types
4 Segmentation of Machine Learning as a Service (MLaaS) Market by End-Users
5 Market Analysis by Major Regions
6 Product Commodity of Machine Learning as a Service (MLaaS) Market in Major Countries
7 North America Machine Learning as a Service (MLaaS) Landscape Analysis
8 Europe Machine Learning as a Service (MLaaS) Landscape Analysis
9 Asia Pacific Machine Learning as a Service (MLaaS) Landscape Analysis
10 Latin America, Middle East & Africa Machine Learning as a Service (MLaaS) Landscape Analysis
11 Major Players Profile

This Machine Learning as a Service (MLaaS) market report likewise captures the impact of such advancements and developments on the future progression of the market. Many key organizations have begun implementing new procedures, expansions, advancements and long-term agreements to dominate the global market and strengthen their position in it. The geographical study covers major regions such as Europe, Asia Pacific, North America, Latin America, and the Middle East & Africa, in addition to concentrating on the leading segments. The report not only depicts the current market situation, but also captures the impact of COVID-19 on market development.

Machine Learning as a Service (MLaaS) Market Intended Audience:
Machine Learning as a Service (MLaaS) manufacturers
Machine Learning as a Service (MLaaS) traders, distributors, and suppliers
Machine Learning as a Service (MLaaS) industry associations
Product managers, Machine Learning as a Service (MLaaS) industry administrators, C-level executives of the industries
Market research and consulting firms

The report studies the effect of various factors, COVID-19 among them, on the growth and development of the business, showing the effects of the pandemic on business growth and expansion in the upcoming years. It emphasizes the importance of making the right decision at the right time for an accurate business strategy. The Global Machine Learning as a Service (MLaaS) Market report has helped many business entrepreneurs stay updated about novel technologies and industrial growth, and thereby sustain themselves in this highly competitive market. Rather than a short-term snapshot, it covers precise, long-term effects on business growth and expansion due to varied constraints.

About Global Market Monitor
Global Market Monitor is a professional modern consulting company engaged in three major business categories: market research services, business advisory, and technology consulting. We maintain the win-win spirit, reliable quality and the vision of keeping pace with the times, to help enterprises achieve revenue growth, cost reduction, and efficiency improvement, while significantly avoiding operational risks, to achieve lean growth. Global Market Monitor has provided professional market research, investment consulting, and competitive intelligence services to thousands of organizations, including start-ups, government agencies, banks, research institutes, industry associations, consulting firms, and investment firms.
Contact
Global Market Monitor
One Pierrepont Plaza, 300 Cadman Plaza W, Brooklyn, NY 11201, USA
Name: Rebecca Hall
Phone: +1 (347) 467 7721
Email: info@globalmarketmonitor.com
Web Site: https://www.globalmarketmonitor.com



Microsoft & OneFlow Leverage the Efficient Coding Principle to Design Unsupervised DNN Structure-Learning That Outperforms Human-Designed…

The performance of deep neural networks (DNNs) relies heavily on their structures, and designing a good structure (aka architecture) tends to require extensive effort from human experts. The idea of an automatic structure-learning algorithm that can achieve performance on par with the best human-designed structures is thus increasingly appealing to machine learning researchers.

In the paper Learning Structures for Deep Neural Networks, a team from OneFlow and Microsoft explores unsupervised structure learning, leveraging the efficient coding principle, information theory and computational neuroscience to design a structure learning method that does not require labelled information and demonstrates empirically that larger entropy outputs in a deep neural network lead to better performance.

The researchers start with the assumption that the optimal structure of neural networks can be derived from the input features without labels. Their study probes whether it is possible to learn good DNN network structures from scratch in a fully automatic fashion, and what would be a principled way to reach this end.

The team references a principle borrowed from the biological nervous system domain, the efficient coding principle, which posits that a good brain structure forms an efficient internal representation of external environments. They apply the efficient coding principle to DNN architecture, proposing that the structure of a well-designed network should match the statistical structure of its input signals.

The efficient coding principle suggests that the mutual information between a model's inputs and outputs should be maximized, and the team presents a solid Bayesian optimal classification theoretical foundation to support this. Specifically, they show that the top layer of any neural network (a softmax linear classifier) and the independence between the nodes in the top hidden layer constitute a sufficient condition for making the softmax linear classifier act as a Bayesian optimal classifier. This theoretical foundation not only backs up the efficient coding principle, it also provides a way to determine the depth of a DNN.

The team then investigates how to leverage the efficient coding principle in the design of a structure-learning algorithm, and shows that sparse coding can implement the principle under the assumption of zero-peaked and heavy-tailed prior distributions. This suggests that an effective structure learning algorithm can be designed based on global group sparse coding.

The proposed structure-learning with sparse coding algorithm learns a structure layer by layer in a bottom-up manner. The raw features are at layer one, and given the predefined number of nodes in layer two, the algorithm will learn the connection between these two layers, and so on.
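As a rough illustration only (not the paper's algorithm, which uses global group sparse coding), ordinary dictionary learning can show how a sparse code over the raw features induces a connection structure between two layers: each learned atom's non-zero entries mark which layer-one features a layer-two node would connect to. All sizes and parameters below are arbitrary, and the input is random noise:

```python
import numpy as np
from sklearn.decomposition import DictionaryLearning

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 16))  # 200 samples, 16 raw input features (layer one)

# Learn 8 sparse dictionary atoms; an atom's non-zero entries indicate
# which layer-one features a corresponding layer-two node connects to.
dl = DictionaryLearning(n_components=8, transform_algorithm="lasso_lars",
                        alpha=0.5, random_state=0)
dl.fit(X)

# Binary connection mask between layer one (16 inputs) and layer two (8 nodes).
mask = np.abs(dl.components_) > 1e-6
print(mask.shape)  # (8, 16)
```

Repeating this process, with layer two's activations as the new input, gives the bottom-up, layer-by-layer construction the paper describes.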

The researchers also describe how this proposed algorithm can learn inter-layer connections, handle invariance, and determine DNN depth. Finally, they conduct intensive experiments on the popular CIFAR-10 data set to evaluate the classification accuracies of their proposed structure learning method, the role of inter-layer connections, and the role of structure masks and network depth.

The results show that a learned-structure single-layer network achieves an accuracy of 63.0 percent, outperforming the single-layer baseline of 60.4 percent. In an inter-layer connection density evaluation experiment, the structures generated by the sparse coding approach outperform random structures, and at the same density level, always outperform the sparsifying-restricted Boltzmann machines (RBM) baseline. In the team's structure mask role evaluation, the structure prior provided by sparse coding is seen to improve performance. The network depth experiment, meanwhile, empirically justifies the proposed approach for determining DNN depth via coding efficiency.

Overall, the research proves the efficient coding principle's effectiveness for unsupervised structure learning, and that the proposed global sparse coding-based structure-learning algorithms can achieve performance comparable with the best human-designed structures.

The paper Learning Structures for Deep Neural Networks is on arXiv.

Author: Hecate He | Editor: Michael Sarazen, Chain Zhang



Explainable AI And The Future Of Machine Learning – CIO Applications

Artificial intelligence (AI) is ushering in a new era of technological innovation, paving the way for increased adoption across several industries, ranging from healthcare to e-commerce. The novel capabilities offered by AI are empowering businesses to automate different operations, making them faster and smarter and enhancing overall productivity. One of the most significant benefits of AI is the management of massive datasets, which enables organizations to handle the flow of information efficiently. Overall, AI is making significant headway in the business world, generating robust benefits for integrators and adopters.

Bringing Transparency

Today, firms across the world are incorporating AI-based systems to automate their business processes and enable their workforce to focus on more valuable tasks. However, the incorporation of AI technology is impeded by several challenges, the biggest of which is the lack of transparency. AI systems are often considered a black box, which makes it challenging to pinpoint logical errors in the underlying algorithms. These challenges in data privacy, protection, and cybersecurity have introduced nuances into the field, making it imperative to develop explainable AI, which offers greater visibility and transparency.

Balancing Stability with Innovation

To make the most of AI, enterprises should also focus on incorporating the right tools, talent, and culture. While a myriad of different AI solutions has permeated the marketplace today, businesses still lack the expertise to identify which solution aligns with their organizational goals. This is where having the right partner to help you through every stage of the integration process makes all the difference. These partnerships should be compatible with the values, goals, and strategies of an organization. It is advisable to consider the risk factors and conduct an impact analysis on how the partnership will help drive business growth. For instance, I often collaborate with numerous partners, many of which possess robust and proven solutions. We also work with nascent companies that have a more novel approach to balance the risk versus the impact of the AI integrations.

When choosing partners, businesses should have a firm knowledge of the different areas of improvement within their organization. Successful collaboration relies on minimizing the risks and maximizing the benefits. By narrowing the AI integration to specific processes that require immediate upgrading, the overall workflow can be streamlined seamlessly. Once the relevant areas have been identified, the tech teams can decide on collaborations across the organizational line. Often, enterprises fail to see beyond the hype of AI products and rush into piloting several things without a clear vision of the end result. In such cases, businesses cannot derive the expected value from the solutions. Hence, it is crucial to have a specific goal in mind when integrating AI technology.

Augmenting AI with Robust Leadership

Along with having an AI-first mindset, businesses should be swift and versatile when executing on AI technology. As a practitioner in this area for several years, I have witnessed a steady rise of interest in data science. One does not need a doctorate to be a data scientist; perfection comes with practice. My advice to budding professionals is to follow their passion, as they have a myriad of resources at their disposal, including journals and coding courses. As for moving up the leadership ladder, practitioners must make bold decisions and get involved in the community. It also pays to take part in cooperative projects and contribute to the academic community by publishing papers, attending conferences, giving talks, and supporting the cause. Hence, as a leader, it is vital to adapt and change according to the trends, while also focusing on the key areas that need improvement.


On Thinking Machines, Machine Learning, And How AI Took Over Statistics – Forbes

Sixty-five years ago, Arthur Samuel went on TV to show the world how the IBM 701 plays checkers. He was interviewed on a live morning news program, sitting remotely at the 701, with Will Rogers Jr. at the TV studio, together with a checkers expert who played with the computer for about an hour. Three years later, in 1959, Samuel published "Some Studies in Machine Learning Using the Game of Checkers" in the IBM Journal of Research and Development, coining the term "machine learning". He defined it as the programming of a digital computer to behave in a way which, if done by human beings or animals, would be described as involving the process of learning.

On February 24, 1956, Arthur Samuel's Checkers program, which was developed for play on the IBM 701, was demonstrated to the public on television.

A few months after Samuel's TV appearance, ten computer scientists convened at Dartmouth College in Hanover, NH, for the first-ever workshop on artificial intelligence, defined a year earlier by John McCarthy in the proposal for the workshop as "making a machine behave in ways that would be called intelligent if a human were so behaving".

In some circles of the emerging discipline of computer science, there was no doubt about the human-like nature of the machines they were creating. Already in 1949, computer pioneer Edmund Berkeley wrote in Giant Brains, or Machines That Think: "Recently there have been a good deal of news about strange giant machines that can handle information with vast speed and skill... These machines are similar to what a brain would be if it were made of hardware and wire instead of flesh and nerves... A machine can handle information; it can calculate, conclude, and choose; it can perform reasonable operations with information. A machine, therefore, can think."

Maurice Wilkes, a prominent developer of one of those giant brains, retorted in 1953: "Berkeley's definition of what is meant by a thinking machine appears to be so wide as to miss the essential point of interest in the question, Can machines think?" Wilkes attributed this not-very-good human thinking to a desire to believe that a machine can be something more than a machine. In the same issue of the Proceedings of the I.R.E. that included Wilkes' article, Samuel published "Computing Bit by Bit or Digital Computers Made Easy". Reacting to what he called the fuzzy sensationalism of the popular press regarding the ability of existing digital computers to think, he wrote: "The digital computer can and does relieve man of much of the burdensome detail of numerical calculations and of related logical operations, but perhaps it is more a matter of definition than fact as to whether this constitutes thinking."

Samuel's polite but clear position led Marvin Minsky in 1961 to single him out, according to Eric Weiss, as one of the few leaders in the field of artificial intelligence who believed computers could not think and probably never would. Indeed, Samuel pursued his life-long hobby of developing checkers-playing computer programs and his professional interest in machine learning not out of a desire to play God, but because of the specific trajectory and coincidences of his career. After working for 18 years at Bell Telephone Laboratories and becoming an internationally recognized authority on microwave tubes, he decided at age 45 to move on, as he was certain, says Weiss in his review of Samuel's life and work, that vacuum tubes would soon be replaced by something else.

The University of Illinois came calling, asking him to revitalize their EE graduate research program. In 1948, the project to build the University's first computer was running out of money. Samuel thought (as he recalled in an unpublished autobiography cited by Weiss) that it ought to be dead easy to program a computer to play checkers, and that if the program could beat a checkers world champion, the attention it generated would also generate the required funds.

The next year, Samuel started his 17-year tenure with IBM, working as a senior engineer on the team developing the IBM 701, IBM's first mass-produced scientific computer. The chief architect of the entire IBM 700 series was Nathaniel Rochester, later one of the participants in the Dartmouth AI workshop. Rochester was trying to decide the word length and order structure of the IBM 701, and Samuel decided to rewrite his checkers-playing program using the order structure Rochester was proposing. In his autobiography, Samuel recalled: "I was a bit fearful that everyone in IBM would consider checker-playing program too trivial a matter, so I decided that I would concentrate on the learning aspects of the program. Thus, more or less by accident, I became one of the first people to do any serious programming for the IBM 701 and certainly one of the very first to work in the general field later to become known as artificial intelligence. In fact, I became so intrigued with this general problem of writing a program that would appear to exhibit intelligence that it was to occupy my thoughts almost every free moment during the entire duration of my employment by IBM and indeed for some years beyond."

But in the early days of computing, "IBM did not want to fan the popular fears that man was losing out to machines, so the company did not talk about artificial intelligence publicly," observed Samuel later. Salesmen were not supposed to scare customers with speculation about future computer accomplishments. So IBM, among other activities aimed at dispelling the notion that computers were smarter than humans, sponsored the movie Desk Set, featuring a methods engineer (Spencer Tracy) who installs the fictional and ominous-looking electronic brain EMERAC, and a corporate librarian (Katharine Hepburn) telling her anxious colleagues in the research department: "They can't build a machine to do our job; there are too many cross-references in this place." By the end of the movie, she wins both a match with the computer and the engineer's heart.

In his 1959 paper, Samuel described his approach to machine learning as particularly suited to very specific tasks, in distinction to the neural-net approach, which he thought could lead to the development of general-purpose learning machines. Samuel's program searched the computer's memory to find examples of checkerboard positions and selected the moves that had previously been successful. "The computer plays by looking ahead a few moves and by evaluating the resulting board positions much as a human player might do," wrote Samuel.
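Samuel's look-ahead-and-evaluate scheme is essentially what later became known as minimax search. A minimal sketch, with `evaluate` and `moves` as placeholders rather than Samuel's actual board-scoring polynomial and move generator:

```python
def minimax(state, depth, maximizing, evaluate, moves):
    """Look ahead `depth` moves and score the resulting positions,
    much as Samuel's checkers program did. `evaluate` scores a
    position; `moves` yields successor positions (both placeholders)."""
    successors = moves(state, maximizing)
    if depth == 0 or not successors:
        return evaluate(state)
    scores = [minimax(s, depth - 1, not maximizing, evaluate, moves)
              for s in successors]
    return max(scores) if maximizing else min(scores)

# Toy "game" for illustration: a state is a number; each move adds 1 or doubles it.
toy_moves = lambda s, maximizing: [s + 1, s * 2] if s < 4 else []
print(minimax(1, 2, True, lambda s: s, toy_moves))  # 3
```

Samuel's learning step, adjusting the evaluation so it better predicted the results of deeper search, is what made the program improve with play.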

His approach to machine learning "still would work pretty well as a description of what's known as reinforcement learning, one of the basket of machine-learning techniques that has revitalized the field of artificial intelligence in recent years," wrote Alexis Madrigal in a 2017 survey of checkers-playing computer programs. One of the men who wrote the book Reinforcement Learning, Rich Sutton, called Samuel's research "the earliest work that's now viewed as directly relevant to the current AI enterprise."

The current AI enterprise is skewed more in favor of artificial neural networks (or deep learning) than reinforcement learning, although Google's DeepMind famously combined the two approaches in its Go-playing program, which successfully beat Go master Lee Sedol in a five-game match in 2016.

Already popular among computer scientists in Samuel's time (in 1951, Marvin Minsky and Dean Edmunds built SNARC, the Stochastic Neural Analog Reinforcement Calculator, the first artificial neural network, using 3,000 vacuum tubes to simulate a network of 40 neurons), the neural networks approach was inspired by a 1943 paper by Warren S. McCulloch and Walter Pitts in which they described networks of idealized and simplified artificial neurons and how they might perform simple logical functions, leading to the popular (and very misleading) description of today's artificial neural network-based AI as "mimicking the brain".

Over the years, the popularity of neural networks has gone up and down through a number of hype cycles, starting with the Perceptron, a two-layer artificial neural network that was considered by the U.S. Navy, according to a 1958 New York Times report, to be "the embryo of an electronic computer that... will be able to walk, talk, see, write, reproduce itself and be conscious of its existence." In addition to failing to meet these lofty expectations, neural networks suffered from fierce competition from a growing cohort of computer scientists (including Minsky) who preferred the manipulation of symbols, rather than computational statistics, as the better path to creating a human-like machine.

Inflated expectations meeting the trough of disillusionment, no matter what approach was taken, resulted in at least two periods of gloomy "AI winter". But with the invention and successful application of backpropagation as a way to overcome the limitations of simple neural networks, sophisticated statistical analysis was again on the ascendance, now cleverly labeled as "deep learning". In 1988, R. Colin Johnson and Chappell Brown published Cognizers: Neural Networks and Machines That Think, proclaiming that neural networks "can actually learn to recognize objects and understand speech just like the human brain and, best of all, they won't need the rules, programming, or high-priced knowledge-engineering services that conventional artificial intelligence systems require... Cognizers could very well revolutionize our society and will inevitably lead to a new understanding of our own cognition."

Johnson and Brown predicted that as early as the next two years, neural networks would be the tool of choice for analyzing the contents of a large database. This prediction, and no doubt similar ones in the popular press and professional journals, must have sounded the alarm among those who did this type of analysis for a living in academia and in large corporations, having no clue what the computer scientists were talking about.

In Neural Networks and Statistical Models, Warren Sarle explained in 1994 to his worried and confused fellow statisticians that the ominous-sounding artificial neural networks are "nothing more than nonlinear regression and discriminant models that can be implemented with standard statistical software... Like many statistical methods, [artificial neural networks] are capable of processing vast amounts of data and making predictions that are sometimes surprisingly accurate; this does not make them intelligent in the usual sense of the word. Artificial neural networks learn in much the same way that many statistical algorithms do estimation, but usually much more slowly than statistical algorithms. If artificial neural networks are intelligent, then many statistical methods must also be considered intelligent."

Sarle provided his colleagues with a handy dictionary translating the terms used by neural engineers into the language of statisticians (e.g., "features" are variables). In anticipation of today's data science (a more recent assault led by computer programmers) and predictions of algorithms replacing statisticians (and even scientists), Sarle reassured his fellow statisticians that no black box can substitute for human intelligence: "Neural engineers want their networks to be black boxes requiring no human intervention: data in, predictions out. The marketing hype claims that neural networks can be used with no experience and automatically learn whatever is required; this, of course, is nonsense. Doing a simple linear regression requires a nontrivial amount of statistical expertise."
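Sarle's equivalence can be seen directly in code. Below, a logistic regression and a small neural network are fitted to the same synthetic data (the dataset and layer sizes are arbitrary choices for illustration) and reach comparable training accuracies:

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.neural_network import MLPClassifier

# A synthetic classification task, generated for illustration.
X, y = make_classification(n_samples=500, n_features=4, random_state=0)

# A classic statistical model...
lr = LogisticRegression().fit(X, y)

# ...and a "neural network" with one small hidden layer: in Sarle's terms,
# just a nonlinear regression fitted by a slower iterative estimator.
nn = MLPClassifier(hidden_layer_sizes=(8,), max_iter=2000,
                   random_state=0).fit(X, y)

# Comparable training accuracies on this easy task.
print(round(lr.score(X, y), 2), round(nn.score(X, y), 2))
```

The network only pulls clearly ahead when the decision boundary is strongly nonlinear, which is precisely the "nonlinear regression" distinction Sarle was drawing.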

In a footnote to his mention of neural networks in his 1959 paper, Samuel cited Warren S. McCulloch, who had compared the digital computer to the nervous system of a flatworm, and declared: "To extend this comparison to the situation under discussion would be unfair to the worm since its nervous system is actually quite highly organized as compared to [the most advanced artificial neural networks of the day]." In 2019, Facebook's top AI researcher and Turing Award winner Yann LeCun declared that "our best AI systems have less common sense than a house cat." In the sixty years since Samuel first published his seminal machine-learning work, artificial intelligence has advanced from being not as smart as a flatworm to having less common sense than a house cat.
