Archive for the ‘Machine Learning’ Category

REPLY: European Central Bank Explores the Possibilities of Machine Learning With a Coding Marathon Organised by Reply – Business Wire

TURIN, Italy--(BUSINESS WIRE)--The European Central Bank (ECB), in collaboration with Reply, a leader in digital technology innovation, is organising the Supervisory Data Hackathon, a coding marathon focussing on the application of Machine Learning and Artificial Intelligence.

From 27 to 29 February 2020, at the ECB in Frankfurt, more than 80 participants from the ECB, Reply and other companies will explore possibilities to gain deeper and faster insights into the large amount of supervisory data gathered by the ECB from financial institutions through regular financial reporting for risk analysis. The coding marathon provides a protected space in which to co-creatively develop new ideas and prototype solutions based on Artificial Intelligence within a short timeframe.

Ahead of the event, participants submit projects in the areas of data quality, interlinkages in supervisory reporting and risk indicators. The most promising submissions will be worked on for 48 hours during the event by multidisciplinary teams composed of members from the ECB, Reply and other companies.

Reply has proven its Artificial Intelligence and Machine Learning capabilities with numerous projects in various industries and combines this technological expertise with in-depth knowledge of the financial services industry and its regulatory environment.

Coding marathons using the latest technologies are a substantial element in Reply's toolset for sparking innovation through training and knowledge transfer internally and with clients and partners.

Reply [MTA, STAR: REY] specialises in the design and implementation of solutions based on new communication channels and digital media. As a network of highly specialised companies, Reply defines and develops business models enabled by the new models of big data, cloud computing, digital media and the internet of things. Reply delivers consulting, system integration and digital services to organisations across the telecom and media; industry and services; banking and insurance; and public sectors. http://www.reply.com

AI and Predictive Analytics: Myth, Math, or Magic? – TDWI

Don't fall into the trap of thinking that math-based analytics can predict human behavior with certainty.

We are a species invested in predicting the future -- as if our lives depended on it. Indeed, good predictions of where wolves might lurk were once a matter of survival. Even as civilization made us physically safer, prediction has remained a mainstay of culture, from the haruspices of ancient Rome inspecting animal entrails to business analysts dissecting a wealth of transactions to foretell future sales.

Such predictions generally disappoint. We humans are predisposed to assuming that the future is a largely linear extrapolation of the most recent (and familiar) past. This is one -- or a combination -- of the nearly 200 cognitive biases that allegedly afflict us.

A Prediction for the Coming Decade

With these caveats in mind, I predict that in 2020 (and the decade ahead) we will struggle if we unquestioningly adopt artificial intelligence (AI) in predictive analytics, founded on an unjustified overconfidence in the almost mythical power of AI's mathematical foundations. This is another form of the disease of technochauvinism I discussed in a previous article.

Science fiction author and journalist Cory Doctorow's article, "Our Neophobic, Conservative AI Overlords Want Everything to Stay the Same," in the Los Angeles Review of Books, offers a succinct and superb summary of technochauvinism as it operates in AI. "Machine learning," he asserts, "is about finding things that are similar to things the machine learning system can already model." These models are, of course, built from past data with all its errors, gaps, and biases.

The premise that AI makes better (e.g., less biased) predictions than humans is already demonstrably false. Employment screening apps, for example, are often riddled with a bias toward hiring white males because the historical hiring data used to train their algorithms consisted largely of information about hiring such workers.

The widespread belief that AI can predict novel aspects of the future is simply a case of magical thinking. Machine learning is fundamentally conservative, based as it is on correlations in existing data; its predictions are essentially extensions of the past. AI lacks the creative thinking ability of humans. Says Tabitha Goldstaub, a tech entrepreneur and commentator, about the use of AI by Hollywood studios to decide which movies to make: "Already we're seeing that we're getting more and more remakes and sequels because that's safe, rather than something that's out of the box."

A Predictive Puzzle

AI, together with the explosion of data available from the internet, has raised the profile of what used to be called operational BI, now known as predictive analytics, along with its more recent extension into prescriptive analytics. Attempting to predict the future behavior of prospects and customers and, further, to influence their behavior is central to digital transformation efforts. Predictions based on AI, especially in real-time decision making with minimal human involvement, require careful and ongoing examination lest they fall foul of the myth of an all-knowing AI.

As Doctorow notes, AI conservatism arises from detecting correlations within and across existing large data sets. Causation -- a much more interesting feature -- is more opaque, usually relying on human intuition to separate the causal wheat from the correlational chaff, as I discussed in a previous Upside article.

Nonetheless, causation can be separated algorithmically from correlation in specific cases, as described by Mollie Davies and coauthors. I cannot claim to follow the full mathematical formulae they present, but the logic makes sense. As the authors conclude, "Instead of being naively data driven, we should seek to be causal information driven. Causal inference provides a set of powerful tools for understanding the extent to which causal relationships can be learned from the data we have." They present math that data scientists should learn and apply more widely.

However, there is a myth here, too: that predictive (and prescriptive) analytics can divine human intention, which is the true basis for understanding and influencing behavior. As Doctorow notes, in trying to distinguish a wink from a twitch, "machine learning [is not] likely to produce a reliable method of inferring intention: it's a bedrock of anthropology that intention is unknowable without dialogue." Dialogue -- human-to-human interaction -- attracts little attention in digital business implementation.

The Dilemma of (Real) Prediction

Once accused of looking too intently in the rearview mirror, business intelligence has today embraced prediction and prescription as among its most important goals. Despite advances in data availability and math-based technology, truly envisaging future human intentions and actions remains a strictly human gift.

The myth that math-based analytics can predict human behavior with certainty is probably the most dangerous magical thinking we data professionals can indulge in.

About the Author

Dr. Barry Devlin defined the first data warehouse architecture in 1985 and is among the world's foremost authorities on BI, big data, and beyond. His 2013 book, Business unIntelligence, offers a new architecture for modern information use and management.

Machine Learning Market Booming by Size, Revenue, Trends and Top Growing Companies 2026 – Instant Tech News

Verified Market Research offers its latest report on the Machine Learning Market, which includes a comprehensive analysis of a range of subjects such as market opportunities, competition, segmentation, regional expansion, and market dynamics. It prepares players as well as investors to make competent decisions and plan for growth in advance. The report is intended to help the reader understand the market with reference to its various drivers, restraints, trends, and opportunities, equipping them to make careful business decisions.

Global Machine Learning Market was valued at USD 2.03 Billion in 2018 and is projected to reach USD 37.43 Billion by 2026, growing at a CAGR of 43.9% from 2019 to 2026.

Get PDF template of this report: @ https://www.verifiedmarketresearch.com/download-sample/?rid=6487&utm_source=ITN&utm_medium=003

The report profiles the top manufacturers, covering company profiles, sales volume, product specifications, revenue (USD million), and market share.

Global Machine Learning Market: Competitive Landscape

The chapter on the competitive landscape covers all the major manufacturers in the global Machine Learning market to study new trends and opportunities. In this section, the researchers have used SWOT analysis to study the various strengths, weaknesses, opportunities, and trends the manufacturers are using to expand their share. Furthermore, they have outlined the trends that are expected to drive the market in the future and open more opportunities.

Global Machine Learning Market: Drivers and Restraints

The researchers have analyzed various factors that are necessary for the growth of the market in global terms. They have taken different perspectives on the market, including technological, social, political, economic, environmental, and others. The drivers have been derived using PESTEL analysis to keep them accurate. Factors responsible for propelling the growth of the market and increasing its market share have been studied objectively.

Furthermore, restraints present in the market have been put together using the same process. Analysts have provided a thorough assessment of factors likely to hold the market back and offered solutions for circumventing the same too.

Global Machine Learning Market: Segment Analysis

The researchers have segmented the market into various product types and their applications. This segmentation is expected to help the reader understand where the market is observing more growth and which product and application hold the largest share in the market. This will give them leverage over others and help them invest wisely.

Ask For Discount (Exclusive Offer) @ https://www.verifiedmarketresearch.com/ask-for-discount/?rid=6487&utm_source=ITN&utm_medium=003

Machine Learning Market: Regional Analysis

As part of regional analysis, important regions such as North America, Europe, the MEA, Latin America, and Asia Pacific have been studied. The regional Machine Learning markets are analyzed based on share, growth rate, size, production, consumption, revenue, sales, and other crucial factors. The report also provides country-level analysis of the Machine Learning industry.

Table of Contents

Introduction: The report starts off with an executive summary, including top highlights of the research study on the Machine Learning industry.

Market Segmentation: This section provides detailed analysis of type and application segments of the Machine Learning industry and shows the progress of each segment with the help of easy-to-understand statistics and graphical presentations.

Regional Analysis: All major regions and countries are covered in the report on the Machine Learning industry.

Market Dynamics: The report offers deep insights into the dynamics of the Machine Learning industry, including challenges, restraints, trends, opportunities, and drivers.

Competition: Here, the report provides company profiling of leading players competing in the Machine Learning industry.

Forecasts: This section is filled with global and regional forecasts, CAGR and size estimations for the Machine Learning industry and its segments, and production, revenue, consumption, sales, and other forecasts.

Recommendations: The authors of the report have provided practical suggestions and reliable recommendations to help players to achieve a position of strength in the Machine Learning industry.

Research Methodology: The report provides clear information on the research approach, tools, and methodology and data sources used for the research study on the Machine Learning industry.

Complete Report is Available @ https://www.verifiedmarketresearch.com/product/global-machine-learning-market-size-and-forecast-to-2026/?utm_source=ITN&utm_medium=003

About Us:

Verified Market Research partners with clients to provide insight into strategic and growth analytics and data that help them achieve their business goals and targets. Our core values include trust, integrity, and authenticity for our clients.

Our research studies help our clients make superior data-driven decisions, capitalize on future opportunities, optimize efficiency, and stay competitive by working as their partner to deliver the right information without compromise.

Contact Us:

Mr. Edwyne Fernandes
Call: +1 (650) 781 4080
Email: [emailprotected]

AI, machine learning, robots, and marketing tech coming to a store near you – TechRepublic

Retailers are harnessing the power of new technology to dig deeper into customer decisions and bring people back into stores.

The National Retail Federation's 2020 Big Show in New York was jam-packed with robots, frictionless store mock-ups, and audacious displays of the latest technology now available to retailers.

Dozens of robots, digital signage tools, and more were available for retail representatives to test out, with hundreds of the biggest tech companies in attendance offering a bounty of eye-popping gadgets designed to increase efficiency and bring the wow factor back to brick-and-mortar stores.

SEE: Artificial intelligence: A business leader's guide (free PDF) (TechRepublic)

Here are some of the biggest takeaways from the annual retail event.

With the explosion in popularity of Amazon, Alibaba, and other e-commerce sites ready to deliver goods right to your door within days, many analysts and retailers figured the brick-and-mortar stores of the past were on their last legs.

But it turns out billions of customers still want the personal, tailored touch of in-store experiences and are not ready to completely abandon physical retail outlets.

"It's not a retail apocalypse. It's a retail renaissance," said Lori Mitchell-Keller, executive vice president and global general manager of consumer industries at SAP.

As leader of SAP's retail, wholesale distribution, consumer products, and life sciences industries division, Mitchell-Keller said she was surprised to see that retailers had shifted their stance and were looking to find ways to beef up their online experience while infusing stores with useful but flashy technology.

"Brick-and-mortar stores have this unique capability to have a specific advantage against online retailers. So despite the trend where everything was going online, it did not mean online at the expense of brick-and-mortar. There is a balance between the two. Those companies that have a great online experience and capability combined with a brick-and-mortar store are in the best place in terms of their ability to be profitable," Mitchell-Keller said during an interview at NRF 2020.

"There is an experience that you cannot get online. This whole idea of customer experience and experience management is definitely the best battleground for the guys that can't compete in delivery. Even for the ones that can compete on delivery, like the Walmarts and Targets, they are using their brick-and-mortar stores to offer an experience that you can't get online. We thought five years ago that brick-and-mortar was dead and it's absolutely not dead. It's actually an asset."

In her experience working with the world's biggest retailers, companies that have a physical presence actually have a huge advantage because customers are now yearning for a personalized experience they can't get online. While e-commerce sites are fast, nothing can beat the ability to have real people answer questions and help customers work through their options, regardless of what they're shopping for.

Retailers are also transforming parts of their stores into fulfillment centers for their online sales, which has the double effect of bringing customers into the store, where they may spend even more on things they see.

"The brick-and-mortar stores that are using their stores as fulfillment centers have a much lower cost of delivery because they're typically within a few miles of customers. If they have a great online capability and good store fulfillment, they're able to get to customers faster than the aggregators," Mitchell-Keller said. "It's better to have both."

SEE: Feature comparison: E-commerce services and software (TechRepublic Premium)

But one of the main trends, and problems, highlighted at NRF 2020 was the sometimes difficult transition many retailers have had to make to a digitized world.

NRF 2020 was full of decadent tech retail tools like digital price tags, shelf-stocking robots, and next-gen advertising signage, but none of this could be incorporated into a retail environment without a basic amount of tech talent and systems to back it all.

"It can be very overwhelmingly complicated, not to mention costly, just to have a team to manage technology and an environment that is highly digitally integrated. The solution we try to bring to bear is to add all these capabilities or applications into a turn key environment because fundamentally, none of it works without the network," said Michael Colaneri, AT&T's vice president of retail, restaurants and hospitality.

While it would be easy for a retailer to leave NRF 2020 with a fancy robot or cool gadget, companies typically have to think bigger about the changes they want to see, and generally these kinds of digital transformations have to be embedded deep throughout the supply chain before they can be incorporated into stores themselves.

Colaneri said much of AT&T's work involved figuring out how retailers could connect the store system, the enterprise, the supply chain and then the consumer, to both online and offline systems. The e-commerce part of retailer's business now had to work hand in hand with the functionality of the brick-and-mortar experience because each part rides on top of the network.

"There are five things that retailers ask me to solve: Customer experience, inventory visibility, supply chain efficiency, analytics, and the integration of media experiences like a robot, electronic shelves or digital price tags. How do I pull all this together into a unified experience that is streamlined for customers?" Colaneri said.

"Sometimes they talk to me about technical components, but our number one priority is inventory visibility. I want to track products from raw material to where it is in the legacy retail environment. Retailers also want more data and analytics so they can get some business intelligence out of the disparate data lakes they now have."

The transition to digitized environments is different for every retailer, Colaneri added. Some want slow transitions and gradual introductions of technology while others are desperate for a leg up on the competition and are interested in quick makeovers.

While some retailers have balked at the thought, and price, of wholesale changes, the opposite approach can end up being just as costly.

"Anybody that sells you a digital sign, robot, Magic Mirror or any one of those assets is usually partnering with network providers because it requires the network. And more importantly, what typically happens is if someone buys an asset, they are underestimating the requirements it's going to need from their current network," Colaneri said.

"Then when their team says 'we're already out of bandwidth,' you'll realize it wasn't engineered and that the application wasn't accommodated. It's not going to work. It can turn into a big food fight."

Retailers are increasingly realizing the value of artificial intelligence and machine learning as a way to churn through troves of data collected from customers through e-commerce sites. While these tools require the kind of digital base that both Mitchell-Keller and Colaneri mentioned, artificial intelligence (AI) and machine learning can be used to address a lot of the pain points retailers are now struggling with.

Mitchell-Keller spoke of SAP's work with Costco as an example of the kind of real-world value AI and machine learning can add to a business. Costco needed help reducing waste in their bakeries and wanted better visibility into when customers were going to buy particular products on specific days or at specific times.

"Using machine learning, what SAP did was take four years of data out of five different stores for Costco as a pilot and used AI and machine learning to look through the data for patterns to be able to better improve their forecasting. They're driving all of their bakery needs based on the forecast and that forcecast helped Costco so much they were able to reduce their waste by about 30%," Mitchell-Keller said, adding that their program improved productivity by 10%.

SAP and dozens of other tech companies at NRF 2020 offered AI-based systems for a variety of supply chain management tools, employee payment systems and even resume matches. But AI and machine learning systems are nothing without more data.

SEE: Managing AI and ML in the enterprise 2019: Tech leaders expect more difficulty than previous IT projects (TechRepublic Premium)

Jeff Warren, vice president of Oracle Retail, said there has been a massive shift toward better understanding customers through increased data collection. Historically, retailers simply focused on getting products through the supply chain and into the hands of consumers. But now, retailers are pivoting toward focusing on how to better cater services and goods to the customer.

Warren said Oracle Retail works with about 6,000 retailers in 96 different countries and that much of their work now prioritizes collecting information from every customer interaction.

"What is new is that when you think of the journey of the consumer, it's not just about selling anymore. It's not just about ringing up a transaction or line busting. All of the interactions between you and me have value and hold something meaningful from a data perspective," he said, adding that retailers are seeking to break down silos and pool their data into a single platform for greater ease of use.

"Context would help retailers deliver a better experience to you. Its petabytes of information about what the US consumer market is spending and where they're spending. We can take the information that we get from those interactions that are happening at the point of sale about our best customers and learn more."

With the Oracle platform, retailers can learn about their customers and others who may have similar interests or live in similar places. Companies can do a better job of targeting new customers when they know more about their current customers and what else they may want.

IBM is working on similar projects with hundreds of different retailers, all looking to learn more about their customers and tailor their e-commerce as well as in-store experience to suit their biggest fans.

IBM global managing director for consumer industries Luq Niazi told TechRepublic during a booth tour that learning about consumer interests was just one aspect of how retailers could appeal to customers in the digital age.

"Retailers are struggling to work through what tech they need. When there is so much tech choice, how do you decide what's important? Many companies are implementing tech that is good but implemented badly, so how do you help them do good tech implemented well?" Niazi said.

"You have all this old tech in stores and you have all of this new tech. You have to think about how you bring the capability together in the right way to deploy flexibly whatever apps and experiences you need from your store associate, for your point of sale, for your order management system that is connected physically and digitally. You've got to bring those together in different ways. We have to help people think about how they design the store of the future."

Overview of causal inference in machine learning – Ericsson

In a major operator's network control center, complaints are flooding in. The network is down across a large US city; calls are getting dropped and critical infrastructure is slow to respond. Pulling up the system's event history, the manager sees that new 5G towers were installed in the affected area today.

Did installing those towers cause the outage, or was it merely a coincidence? In circumstances such as these, being able to answer this question accurately is crucial for Ericsson.

Most machine learning-based data science focuses on predicting outcomes, not understanding causality. However, some of the biggest names in the field agree it's important to start incorporating causality into our AI and machine learning systems.

Yoshua Bengio, one of the world's most highly recognized AI experts, explained in a recent Wired interview: "It's a big thing to integrate [causality] into AI. Current approaches to machine learning assume that the trained AI system will be applied on the same kind of data as the training data. In real life it is often not the case."

Yann LeCun, a recent Turing Award winner, shares the same view, tweeting: "Lots of people in ML/DL [deep learning] know that causal inference is an important way to improve generalization."

Causal inference and machine learning can address one of the biggest problems facing machine learning today: that a lot of real-world data is not generated in the same way as the data that we use to train AI models. This means that machine learning models often aren't robust enough to handle changes in the input data type, and can't always generalize well. By contrast, causal inference explicitly overcomes this problem by considering what might have happened when faced with a lack of information. Ultimately, this means we can utilize causal inference to make our ML models more robust and generalizable.

When humans rationalize the world, we often think in terms of cause and effect: if we understand why something happened, we can change our behavior to improve future outcomes. Causal inference is a statistical tool that enables our AI and machine learning algorithms to reason in similar ways.

Let's say we're looking at data from a network of servers. We're interested in understanding how changes in our network settings affect latency, so we use causal inference to proactively choose our settings based on this knowledge.

The gold standard for inferring causal effects is randomized controlled trials (RCTs) or A/B tests. In RCTs, we can split a population of individuals into two groups: treatment and control, administering the treatment to one group and nothing (or a placebo) to the other and measuring the outcome of both groups. Assuming that the treatment and control groups aren't too dissimilar, we can infer whether the treatment was effective based on the difference in outcome between the two groups.
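As a rough sketch (not from the original article), the difference-in-means estimate used in an RCT can be computed as follows in Python; the baseline, effect size, and noise model here are purely hypothetical.

```python
# Hypothetical RCT sketch: random assignment lets a simple difference in
# group means recover the treatment effect (all numbers are made up).
import numpy as np

rng = np.random.default_rng(0)
n = 10_000
treatment = rng.integers(0, 2, size=n)                  # randomized: 0 = control, 1 = treatment
outcome = 5.0 + 1.0 * treatment + rng.normal(size=n)    # assumed true effect of 1.0

ate = outcome[treatment == 1].mean() - outcome[treatment == 0].mean()
print(f"RCT difference-in-means estimate: {ate:.3f}")   # close to 1.0
```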

However, we can't always run such experiments. Flooding half of our servers with lots of requests might be a great way to find out how response time is affected, but if they're mission-critical servers, we can't go around performing DDOS attacks on them. Instead, we rely on observational data, studying the differences between servers that naturally get a lot of requests and those with very few requests.

There are many ways of answering this question. One of the most popular approaches is Judea Pearl's technique for using statistics to make causal inferences. In this approach, we'd take a model or graph that includes measurable variables that can affect one another, as shown below.

To use this graph, we must assume the Causal Markov Condition. Formally, it says that subject to the set of all its direct causes, a node is independent of all the variables which are not direct causes or direct effects of that node. Simply put, it is the assumption that this graph captures all the real relationships between the variables.
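Since the original figure is not reproduced here, the sketch below encodes one plausible graph for the server example introduced later (memory usage z causing both the number of requests x and the response time y); the structure is an illustrative assumption, not the article's actual diagram.

```python
# Illustrative causal graph for the server example (assumed structure):
# each node maps to its direct causes (parents).
causal_graph = {
    "z": [],           # memory usage: no modelled causes
    "x": ["z"],        # number of requests: driven by memory usage
    "y": ["x", "z"],   # response time: driven by requests and memory usage
}

# Causal Markov Condition, informally: conditioned on its parents, a node is
# independent of every variable that is neither its cause nor its effect.
for node, parents in causal_graph.items():
    print(f"{node} <- {parents}")
```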

Another popular method for inferring causes from observational data is Donald Rubin's potential outcomes framework. This method does not explicitly rely on a causal graph, but still assumes a lot about the data, for example, that there are no additional causes besides the ones we are considering.

For simplicity, our data contains three variables: a treatment x, an outcome y, and a covariate z. We want to know if having a high number of server requests affects the response time of a server.

In our example, the number of server requests is determined by the memory value: a higher memory usage means the server is less likely to get fed requests. More precisely, the probability of having a high number of requests is equal to 1 minus the memory value (i.e. P(x=1) = 1 - z, where P(x=1) is the probability that x is equal to 1). The response time of our system is determined by the equation (or hypothetical model):

y = 1x + 5z + ε      (1)

where ε is the error, that is, the deviation of y from its expected value given the values of x and z, which depends on other factors not included in the model. Our goal is to understand the effect of x on y via observations of the memory value, number of requests, and response times of a number of servers, with no access to this equation.
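A minimal simulation of this data-generating process might look as follows; the distributions of the memory value z (uniform on [0, 1]) and of the error ε (standard normal) are assumptions, since the article does not state them.

```python
# Simulate the hypothetical server data: P(x=1) = 1 - z and y = 1x + 5z + error.
# The choices of Uniform[0, 1] for z and Normal(0, 1) for the error are assumptions.
import numpy as np

rng = np.random.default_rng(42)
n = 10_000

z = rng.uniform(size=n)                       # covariate: memory usage
x = rng.binomial(1, 1.0 - z)                  # treatment: high request load with P(x=1) = 1 - z
y = 1.0 * x + 5.0 * z + rng.normal(size=n)    # outcome: response time, per equation (1)
```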

There are two possible assignments (treatment and control) and an outcome. Given a random group of subjects and a treatment, each subject i has a pair of potential outcomes, Y_i(0) and Y_i(1): the outcomes under control and treatment respectively. However, only one outcome is observed for each subject, the outcome under the actual treatment received: Y_i = x Y_i(1) + (1-x) Y_i(0). The opposite potential outcome is unobserved for each subject and is therefore referred to as a counterfactual.

For each subject, the effect of treatment is defined to be Y_i(1) - Y_i(0). The average treatment effect (ATE) is defined as the average difference in outcomes between the treatment and control groups:

E[Y_i(1) - Y_i(0)]

Here, E denotes an expectation over values of Y_i(1) - Y_i(0) for each subject i, which is the average value across all subjects. In our network example, a correct estimate of the average treatment effect would lead us to the coefficient in front of x in equation (1).

If we try to estimate this by directly subtracting the average response time of servers with x=0 from the average response time of our hypothetical servers with x=1, we get an estimate of the ATE of 0.177. This happens because our treatment and control groups are not inherently directly comparable. In an RCT, we know that the two groups are similar because we chose them ourselves. When we have only observational data, the other variables (such as the memory value in our case) may affect whether or not one unit is placed in the treatment or control group. We need to account for this difference in the memory value between the treatment and control groups before estimating the ATE.
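On the simulated data sketched above, the naive difference in group means illustrates this bias; the exact number depends on the assumed distribution of z, so it will not match the article's 0.177.

```python
# Naive ATE estimate from observational data: biased because memory usage (z)
# influences both the chance of treatment (x) and the outcome (y).
import numpy as np

rng = np.random.default_rng(42)
n = 10_000
z = rng.uniform(size=n)                       # assumed Uniform[0, 1] covariate
x = rng.binomial(1, 1.0 - z)                  # P(x=1) = 1 - z
y = 1.0 * x + 5.0 * z + rng.normal(size=n)    # equation (1)

naive_ate = y[x == 1].mean() - y[x == 0].mean()
print(f"naive ATE estimate: {naive_ate:.3f}")  # far from the true effect of 1
```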

One way to correct this bias is to compare individual units in the treatment and control groups with similar covariates. In other words, we want to match subjects that are equally likely to receive treatment.

The propensity score e_i for subject i is defined as:

e_i = P(x=1 | z=z_i), z_i ∈ [0,1]

or the probability that x is equal to 1 (the unit receives treatment) given that we know its covariate is equal to the value z_i. Creating matches based on the probability that a subject will receive treatment is called propensity score matching. To find the propensity score of a subject, we need to predict how likely the subject is to receive treatment based on their covariates.

The most common way to calculate propensity scores is through logistic regression, which models the propensity score as e_i = 1 / (1 + exp(-(b_0 + b_1 z_i))).

Now that we have calculated propensity scores for each subject, we can do basic matching on the propensity score and calculate the ATE exactly as before. Running propensity score matching on the example network data gets us an estimate of 1.008!
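The self-contained sketch below reproduces the idea on the simulated data: logistic regression (via scikit-learn, an assumed dependency) estimates propensity scores, each treated unit is matched to the control unit with the nearest score, and the matched differences are averaged. The article does not specify its exact matching scheme, so one-nearest-neighbour matching is an illustrative choice.

```python
# Propensity score matching sketch (assumes scikit-learn; 1-nearest-neighbour
# matching is an illustrative choice, not necessarily the article's procedure).
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import NearestNeighbors

rng = np.random.default_rng(42)
n = 10_000
z = rng.uniform(size=n)                       # covariate: memory usage (assumed Uniform[0, 1])
x = rng.binomial(1, 1.0 - z)                  # treatment: P(x=1) = 1 - z
y = 1.0 * x + 5.0 * z + rng.normal(size=n)    # outcome, per equation (1)

# 1. Estimate propensity scores e_i = P(x=1 | z=z_i) with logistic regression.
e = LogisticRegression().fit(z.reshape(-1, 1), x).predict_proba(z.reshape(-1, 1))[:, 1]

# 2. Match each treated unit to the control unit with the closest propensity score.
treated = np.where(x == 1)[0]
control = np.where(x == 0)[0]
nn = NearestNeighbors(n_neighbors=1).fit(e[control].reshape(-1, 1))
_, idx = nn.kneighbors(e[treated].reshape(-1, 1))
matched_control = control[idx.ravel()]

# 3. Average the matched outcome differences to estimate the treatment effect.
matched_ate = (y[treated] - y[matched_control]).mean()
print(f"propensity-score-matched ATE estimate: {matched_ate:.3f}")  # close to 1
```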

We were interested in understanding the causal effect of the binary treatment variable x on the outcome y. If we find that the ATE is positive, this means an increase in x results in an increase in y. Similarly, a negative ATE says that an increase in x will result in a decrease in y.

This could help us understand the root cause of an issue or build more robust machine learning models. Causal inference gives us tools to understand what it means for some variables to affect others. In the future, we could use causal inference models to address a wider scope of problems both in and out of telecommunications so that our models of the world become more intelligent.

Special thanks to the other team members of GAIA working on causality analysis: Wenting Sun, Nikita Butakov, Paul Mclachlan, Fuyu Zou, Chenhua Shi, Lule Yu and Sheyda Kiani Mehr.

If you're interested in advancing this field with us, join our worldwide team of data scientists and AI specialists at GAIA.

In this Wired article, Turing Award winner Yoshua Bengio shares why deep learning must begin to understand the why before it can replicate true human intelligence.

In this technical overview of causal inference in statistics, find out what's needed to evolve AI from traditional statistical analysis to causal analysis of multivariate data.

This journal essay from 1999 offers an introduction to the Causal Markov Condition.
