Archive for the ‘Machine Learning’ Category

4 Ways AI, Analytics and Machine Learning Are Improving Customer Service and Support – CMSWire

Many of today's marketing processes are powered by AI and machine learning. Discover how these technologies are shaping the future of customer experience.

By using artificial intelligence (AI) and machine learning (ML) along with analytics, brands are in a much better position to elevate customer service experiences at every touchpoint and create positive emotional connections.

This article will look at the ways that AI and ML are used by brands to improve customer service and support.

AI improves the customer service journey in several ways, including tracking conversations in real-time, providing feedback to service agents and using intelligence to monitor language, speech patterns and psychographic profiles to predict future customer needs.

This functionality can also drastically enhance the effectiveness of customer relationship management (CRM) and customer data platforms (CDP).

CRM platforms, including C2CRM, Salesforce Einstein and Zoho, have integrated AI into their software to provide real-time decisioning, predictive analysis and conversational assistants, all of which help brands more fully understand and engage their customers.

CDPs, such as Amperity, BlueConic, Adobe's Real-Time CDP and ActionIQ, have also integrated AI into more traditional capabilities to unify customer data and provide real-time functionality and decisioning. This technology enables brands to gain a deeper understanding of what their customers want, how they feel and what they are most likely to do next.

Artificial intelligence and machine learning are now used for gathering and analyzing social, historical and behavioral data, which allows brands to gain a much more complete understanding of their customers.

Because AI continuously learns and improves from the data it analyzes, it can anticipate customer behavior. As such, AI- and ML-driven chatbots can provide customers with a more personalized, informed conversation that can easily answer their questions or, if it cannot, immediately route them to a live customer service agent.

Bill Schwaab, VP of sales, North America for boost.ai, told CMSWire that ML is used in combination with AI and a number of other deep learning models to support today's virtual customer service agents.

"ML on its own may not be sufficient to gain a total understanding of customer requests, but it's useful in classifying basic user intent," said Schwaab, who believes that the brightest applications of these technologies in customer service find the balance between AI and human intervention.

"Virtual agents are becoming the first line in customer experience in addition to human agents," he explained. Because these virtual agents can resolve service queries quickly and are available outside of normal service hours, human agents can focus on more complex or valuable customer interactions. Round-the-clock availability provides brands with additional time to capture customer input and inform better decision-making.

Swapnil Jain, CEO and co-founder of Observe.AI, said that today's customer service agents no longer have to spend as much time on simpler, transactional interactions, as digital and self-serve options have reduced the volume of those tasks.

"Instead, agents must excel at higher-value, complex behaviors that meaningfully impact CX and revenue," said Jain, adding that brands are harnessing AI and ML to up-level agent skills, which include empathy and active listening. This, in turn, "drives the behavioral changes needed to improve CX performance at speed and scale."

Because customer conversations contain a goldmine of insights for improving agent performance, AI-powered conversation intelligence can help brands with everything from service and support to sales and retention, said Jain. Using advanced interaction analytics, brands can benefit from pinpointing positive and negative CX drivers, advanced tonality-based sentiment and intent analysis and evidence-based agent coaching.

Predictive analytics is the process of using statistics, data mining and modeling to make predictions.

AI can analyze large amounts of data in a very short time, and along with predictive analytics, it can produce real-time, actionable insights that can guide interactions between a customer and a brand. This practice is also referred to as predictive engagement and uses AI to inform a brand when and how to interact with each customer.

Don Kaye, CCO of Exasol, spoke with CMSWire about the ways brands are using predictive analytics as part of their data strategies that link to their overall business objectives.

"We've seen first-hand how businesses use predictive analytics to better inform their organizations' decision-making processes to drive powerful customer experiences that result in brand loyalty and earn consumer trust," said Kaye.

As an example, he told CMSWire that banks use supervised learning, such as regression and classification, to calculate the risk of loan defaults, while IT departments use it to detect spam.

"With retailers, we've seen them seeking the benefits of deep learning or reinforcement learning, which enables a new level of end-to-end automation, where models become more adaptable and use larger data volumes for increased accuracy," he said.

According to Kaye, businesses with advanced analytics also tend to have agile, open data architectures that promote open access to data, also known as data democratization.

Kaye is a big advocate for AI and ML and believes that the technologies will continue to grow and become routine across all verticals, with the democratization of analytics enabling data professionals to focus on more complex scenarios and making customer experience personalization the norm.

AI-driven sentiment analysis gives brands actionable insights into the emotions customers feel when they encounter pain points or friction along the customer journey, as well as when they have positive, emotionally satisfying experiences.

Julien Salinas, founder and CTO at NLP Cloud, told CMSWire that AI is often used to perform sentiment analysis to automatically detect whether an incoming customer support request is urgent or not. "If the detected sentiment is negative, the ticket is more likely to be addressed quickly by the support team."

Sentiment analysis can automatically detect emotions and opinions by classifying customer text as positive, negative or neutral through the use of AI, natural language processing (NLP) and ML.
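
As a rough illustration of that idea, the sketch below trains a toy three-class sentiment classifier with scikit-learn (TF-IDF features plus logistic regression). The handful of training messages, their labels and the routing comment are invented for illustration; production systems rely on far larger datasets and often on deep learning models.

```python
# Minimal sentiment-classification sketch (assumes scikit-learn is installed).
# The tiny training set below is invented purely for illustration.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

train_texts = [
    "I love this service, the agent was wonderful",
    "Thanks, everything works perfectly now",
    "My order never arrived and nobody answers",
    "This is the third time the app has crashed",
    "I need to update my billing address",
    "Can you confirm my appointment time",
]
train_labels = ["positive", "positive", "negative", "negative", "neutral", "neutral"]

# TF-IDF features plus a linear classifier: a simple, common baseline.
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)),
                      LogisticRegression(max_iter=1000))
model.fit(train_texts, train_labels)

ticket = "The product broke after one day and support keeps me on hold"
print(model.predict([ticket])[0])  # a "negative" label would route the ticket for faster handling
```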

Pieter Buteneers, director of engineering in ML and AI at Sinch, said that NLP enables applications to understand, write and speak languages in a manner that is similar to humans.

"It also facilitates a deeper understanding of customer sentiment, he explained. When NLP is incorporated into chatbots and voice bots it permits them to have seemingly human-like language proficiency and adjust their tones during conversations.

"When used in conjunction with chatbots, NLP can facilitate human-like conversations based on sentiment. So if a customer is upset, for example, the bot can adjust its tone to defuse the situation while moving along the conversation," said Buteneers. "This would be an intuitive shift for a human, but bots that aren't equipped with NLP sentiment analysis could miss the subtle cues of human sentiment in the conversation and risk damaging the customer relationship."

Buteneers added that breakthroughs in NLP are making an enormous difference in how AI understands input from humans. For example, NLP can be used to perform textual sentiment analysis, which can decipher the polarity of sentiment in text.

Similar to sentiment analysis, AI is also useful for detecting intent. Salinas said that it's sometimes difficult to get a quick grasp of a user request, especially when the user's message is very long. In that case, AI can automatically extract the main idea from the message so the support agent can act more quickly.

While AI and ML have continued to evolve, and brands have found many ways to use these technologies to improve the customer service experience, the challenges of AI and ML can still be daunting.

Kaye explained that AI models need good data to deliver accurate results, so brands must also focus on quality and governance.

"In-memory analytics databases will become the driver of feature creation, storage and loading in ML training tools, given their analysis capabilities and ability to scale and deliver optimal time to insight," said Kaye. He added that these tools will benefit from closer integration with a company's data stores, which will enable them to run more effectively on larger data volumes and guarantee greater system scalability.

Iliya Rybchin, partner at Elixirr Consulting, told CMSWire that thanks to ML and the vast amount of data bots are collecting, they are getting better and will continue to improve. The challenge is that they will improve in proportion to the data they receive.

"Therefore, if an under-represented minority with a unique dialect is not utilizing a particular service as much as other consumers, the ML will start to discount the aspects of that dialect as outliers vs. common language," said Rybchin.

He explained that the issue is not caused by the technology or programming; rather, it is the result of the consumer-facing product not providing equal access to the bot. "The solution is more about bringing more consumers to the product vs. changing how the product is built or designed."

AI and ML have been incorporated into the latest generations of CDP and CRM platforms, and conversational AI-driven bots are assisting service agents and enhancing and improving the customer service experience. Predictive analytics and sentiment analysis, meanwhile, are enabling brands to obtain actionable insights that guide the subsequent interactions between a customer and a brand.

Here is the original post:
4 Ways AI, Analytics and Machine Learning Are Improving Customer Service and Support - CMSWire

Solve the problem of unstructured data with machine learning – VentureBeat

We're in the midst of a data revolution. The volume of digital data created within the next five years will total twice the amount produced so far, and unstructured data will define this new era of digital experiences.

Unstructured data, information that doesn't follow conventional models or fit into structured database formats, represents more than 80% of all new enterprise data. To prepare for this shift, companies are finding innovative ways to manage, analyze and maximize the use of data in everything from business analytics to artificial intelligence (AI). But decision-makers are also running into an age-old problem: How do you maintain and improve the quality of massive, unwieldy datasets?

With machine learning (ML), that's how. Advancements in ML technology now enable organizations to efficiently process unstructured data and improve quality assurance efforts. With a data revolution happening all around us, where does your company fall? Are you saddled with valuable yet unmanageable datasets, or are you using data to propel your business into the future?

There's no disputing the value of accurate, timely and consistent data for modern enterprises; it's as vital as cloud computing and digital apps. Despite this reality, however, poor data quality still costs companies an average of $13 million annually.

To navigate data issues, you may apply statistical methods to measure data shapes, which enables your data teams to track variability, weed out outliers, and reel in data drift. Statistics-based controls remain valuable to judge data quality and determine how and when you should turn to datasets before making critical decisions. While effective, this statistical approach is typically reserved for structured datasets, which lend themselves to objective, quantitative measurements.
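
As a concrete, if simplified, example of such statistics-based controls, the sketch below checks a batch of numeric values for outliers and drift against a reference distribution. The synthetic data, the 3-sigma rule and the drift threshold are assumptions made for illustration and the sketch assumes NumPy is available.

```python
# Illustrative statistics-based quality checks for a structured numeric column.
import numpy as np

rng = np.random.default_rng(0)
reference = rng.normal(loc=100.0, scale=10.0, size=5_000)  # "known good" history
incoming = rng.normal(loc=104.0, scale=10.0, size=1_000)   # new batch to validate

# 1) Outliers: flag points more than 3 standard deviations from the reference mean.
z = (incoming - reference.mean()) / reference.std()
outliers = incoming[np.abs(z) > 3]

# 2) Drift: compare the new batch's mean against the reference distribution.
drift_score = abs(incoming.mean() - reference.mean()) / reference.std()

print(f"{len(outliers)} outliers flagged, drift score = {drift_score:.2f}")
if drift_score > 0.25:  # threshold chosen for illustration only
    print("Possible data drift: review the pipeline before relying on this batch.")
```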

But what about data that doesn't fit neatly into Microsoft Excel or Google Sheets, such as free-form text, images, audio and video?

When these types of unstructured data are at play, it's easy for incomplete or inaccurate information to slip into models. When errors go unnoticed, data issues accumulate and wreak havoc on everything from quarterly reports to forecasting projections. A simple copy-and-paste approach from structured data to unstructured data isn't enough and can actually make matters much worse for your business.

The common adage "garbage in, garbage out" is highly applicable to unstructured datasets. Maybe it's time to trash your current data approach.

When considering solutions for unstructured data, ML should be at the top of your list. That's because ML can analyze massive datasets and quickly find patterns among the clutter, and with the right training, ML models can learn to interpret, organize and classify unstructured data types in any number of forms.

For example, an ML model can learn to recommend rules for data profiling, cleansing and standardization, making these efforts more efficient and precise in industries like healthcare and insurance. Likewise, ML programs can identify and classify text data by topic or sentiment in unstructured feeds, such as those on social media or within email records.

As you improve your data quality efforts through ML, keep in mind a few key dos and don'ts.

Your unstructured data is a treasure trove of new opportunities and insights. Yet only 18% of organizations currently take advantage of their unstructured data, and data quality is one of the top factors holding more businesses back.

As unstructured data becomes more prevalent and more pertinent to everyday business decisions and operations, ML-based quality controls provide much-needed assurance that your data is relevant, accurate, and useful. And when you arent hung up on data quality, you can focus on using data to drive your business forward.

Just think about the possibilities that arise when you get your data under control, or better yet, let ML take care of the work for you.

Edgar Honing is senior solutions architect at AHEAD.

The rest is here:
Solve the problem of unstructured data with machine learning - VentureBeat

How advanced analytics and machine learning are transforming the role of Finance Controllers – Times of India

Equipping Financial Controllers with predictive capabilities, advanced analytics and ML will help them elevate their role from providing back-office support to business partnering.

The role of a finance controller is changing. Controllers are now expected not only to take ownership of the company's accounts but also to drive strategic performance. This change in role is further accentuated by the explosion in the volume and variety of data available to an organization. Furthermore, the data landscape in organizations is becoming increasingly siloed, complex and distributed. Given this shift in business dynamics, it is becoming extremely important for controllers to learn how advanced analytics and AI/ML techniques can be leveraged to become effective business partners who drive performance in the organization.

Use-cases of AI/ML in Finance

Here are a number of use cases showing how data science and ML techniques can be used in a business context to drive productivity and performance in the organization:

1) Identifying and preventing revenue leakage: Revenue leakage is a major issue for many large enterprises, and A/R leaders spend substantial time and effort preventing it. It can have multiple causes: process issues with disjointed systems, poor customer experience, disputes, invalid customer deductions with relatively high volume and low value, auto-approved write-offs and so on. Here, advanced analytics can play a significant role in identifying the root cause of such leakages and provide insights to the A/R team on actions that can be taken to prevent them.

For example, there have been instances where a few customers use low-dollar-value deductions as a strategy to strengthen their own cash flow. Such deductions are difficult to track because each amount is small and falls below the acceptable tolerance or threshold; it becomes a scenario of finding a needle in a haystack. However, when such deductions are aggregated at the customer level over a period of time, it can be striking to see how certain groups of customers are using this strategy to cause a significant cash flow leakage for the company. To track such behavior, advanced clustering algorithms can identify which customers are consistently using this strategy and help the A/R team recover the amounts (see the first sketch after this list).

2) Identifying high-risk customers and undertaking recommended actions for faster collection: For organizations with thousands of transactions across a large customer base, it is difficult to understand the behavior and financial stability of each customer, which leads to late payments or, sometimes, receivables being written off. To avoid such scenarios, advanced classification algorithms can detect customers at risk and help the organization take proactive steps to not only identify those customers but also reduce exposure to them over time (see the second sketch after this list). To implement such solutions, it is important that the finance leader define the key variables or data points needed to develop the classification algorithm, which the data scientist then uses in modelling. In other words, it requires close cooperation between finance leaders and data scientists to model the key variables and scenarios.

3) Inventory management: Inventory management is a major challenge in any organization. There are different categories of inventory (finished goods, semi-finished goods, raw materials and so on), and within each category there can be different types, such as slow-moving and fast-moving items. AI/ML can help manage inventory by revealing insightful information about stock-keeping units (SKUs) and their associated variables such as minimum order quantity, lead times, replenishment frequency and safety stocks. Using predictive capabilities, advanced classification algorithms can help keep inventory issues such as supply mismanagement, deadstock and wastage under strict control.

4) Improving cash conversion / working capital: One of the significant benefits of AI/ML is the optimization of the cash conversion cycle through better management of receivables, inventory and payables. This, in turn, helps the company perform well on cash conversion and significantly improve its performance on accounts receivable.

5) Intelligent root cause analysis: AI/ML offers important information on business scenarios that could arise in the future as a result of changing business environments. Whether through predictive analysis, scenario modeling or descriptive root cause analysis, AI/ML can help financial controllers understand the main reasons why some products gain immense popularity while others fail to find favor with consumers.
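
The two sketches below make the first two use cases more concrete. They are illustrative only: the column names, sample transactions, thresholds and model choices are assumptions rather than details from the article, and they assume pandas, NumPy and scikit-learn are available.

First, aggregating low-value deductions per customer and clustering the resulting behavior profiles (use case 1):

```python
# Sketch: aggregate low-dollar deductions per customer, then cluster the behavior.
# Sample data, the $25 "low dollar" threshold and the cluster count are assumptions.
import pandas as pd
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

deductions = pd.DataFrame({
    "customer_id": ["A", "A", "A", "B", "C", "C", "A", "B"],
    "amount":      [12.5, 9.0, 15.0, 480.0, 11.0, 8.5, 10.0, 530.0],
})

# Per-customer behavior: how many small deductions, and how much they add up to.
small = deductions[deductions["amount"] < 25]
profile = small.groupby("customer_id")["amount"].agg(count="count", total="sum").reset_index()

features = StandardScaler().fit_transform(profile[["count", "total"]])
profile["cluster"] = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(features)
print(profile)  # clusters separate occasional deductors from systematic ones
```

Second, a toy payment-risk classifier of the kind described in use case 2:

```python
# Sketch of a customer payment-risk classifier; the features, labels and the
# example customer are invented purely for illustration.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Features per customer: [average days late, open disputes, credit utilization]
X = np.array([
    [2, 0, 0.30], [35, 3, 0.95], [5, 1, 0.40],
    [60, 4, 0.99], [1, 0, 0.20], [25, 2, 0.85],
])
y = np.array([0, 1, 0, 1, 0, 1])  # 1 = receivable was later written off

clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

new_customer = np.array([[40, 2, 0.90]])
risk = clf.predict_proba(new_customer)[0, 1]
print(f"Estimated default risk: {risk:.0%}")  # high risk: reduce exposure, collect earlier
```

In practice, as the article notes, the finance leader would choose which variables feed such models and the data scientist would handle the modelling itself.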

There is little doubt about the transformative power of AI/ML. These solutions can transform the role of financial controllers and catapult their positions to one of strategic relevance to the company. That said, with a plethora of choices available, financial controllers should opt for comprehensive solutions so that the benefits of AI/ML can be realized in a holistic manner.

Read more here:
How advanced analytics and machine learning are transforming the role of Finance Controllers - Times of India

The AI Researcher Giving Her Field Its Bitter Medicine – Quanta Magazine

Anima Anandkumar, Bren Professor of computing at the California Institute of Technology and senior director of machine learning research at Nvidia, has a bone to pick with the matrix. Her misgivings are not about the sci-fi movies, but about mathematical matrices: grids of numbers or variables used throughout computer science. While researchers typically use matrices to study the relationships and patterns hiding within large sets of data, these tools are best suited for two-way relationships. Complicated processes like social dynamics, on the other hand, involve higher-order interactions.

Luckily, Anandkumar has long savored such challenges. When she recalls Ugadi, a new year's festival she celebrated as a child in Mysore (now Mysuru), India, two flavors stand out: jaggery, an unrefined sugar representing life's sweetness, and neem, bitter blossoms representing life's setbacks and difficulties. "It's one of the most bitter things you can think about," she said.

She'd typically load up on the neem, she said. "I want challenges."

This appetite for effort propelled her to study electrical engineering at the Indian Institute of Technology in Madras. She earned her doctorate at Cornell University and was a postdoc at the Massachusetts Institute of Technology. She then started her own group as an assistant professor at the University of California, Irvine, focusing on machine learning, a subset of artificial intelligence in which a computer can gain knowledge without explicit programming. At Irvine, Anandkumar dived into the world of topic modeling, a type of machine learning where a computer tries to glean important topics from data; one example would be an algorithm on Twitter that identifies hidden trends. But the connection between words is one of those higher-order interactions too subtle for matrix relationships: Words can have multiple meanings, multiple words can refer to the same topic, and language evolves so quickly that nothing stays settled for long.
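
For readers unfamiliar with topic modeling, the sketch below fits a two-topic LDA model to a few invented short posts using scikit-learn. The documents and the expected topic split are assumptions for illustration, and, as the next paragraph explains, Anandkumar's own approach moves beyond the matrix-style machinery such models typically rely on.

```python
# Minimal topic-modeling sketch (LDA via scikit-learn); the short posts are invented.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

docs = [
    "new phone battery lasts all day great camera",
    "camera quality on this phone is amazing",
    "election results announced voters turn out in record numbers",
    "candidates debate policy ahead of the election",
    "battery drains fast after the update",
    "record voter turnout surprises analysts",
]

vectorizer = CountVectorizer(stop_words="english")
X = vectorizer.fit_transform(docs)
lda = LatentDirichletAllocation(n_components=2, random_state=0).fit(X)

terms = vectorizer.get_feature_names_out()
for i, topic in enumerate(lda.components_):
    top = [terms[j] for j in topic.argsort()[-4:][::-1]]
    print(f"Topic {i}: {top}")  # ideally one topic skews phone/battery, the other election/voters
```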

This led Anandkumar to challenge AI's reliance on matrix methods. She deduced that to keep an algorithm observant enough to learn amid such chaos, researchers must design it to grasp the algebra of higher dimensions. So she turned to what had long been an underutilized tool in algebra called the tensor. Tensors are like matrices, but they can extend to any dimension, going beyond a matrix's two dimensions of rows and columns. As a result, tensors are more general tools, making them less susceptible to overfitting, which happens when models match training data closely but can't accommodate new data. For example, if you enjoy many music genres but only stream jazz songs, your streaming platform's AI could learn to predict which jazz songs you'd enjoy, but its R&B predictions would be baseless. Anandkumar believes tensors make machine learning more adaptable.
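
To make the matrix-versus-tensor distinction concrete, here is a minimal NumPy sketch: a two-way user-item matrix next to a three-way user-item-context tensor, along with a mode-0 unfolding, a basic step in tensor decompositions. The dimensions and the "context" axis are illustrative assumptions, not anything from Anandkumar's papers.

```python
# A matrix captures two-way relationships; a tensor extends to higher-order ones.
import numpy as np

n_users, n_items, n_contexts = 4, 5, 3
rng = np.random.default_rng(0)

matrix = rng.random((n_users, n_items))              # two-way: user x item
tensor = rng.random((n_users, n_items, n_contexts))  # three-way: adds context (e.g., time of day)

# "Unfolding" flattens the tensor back into a matrix along one mode, a building
# block of tensor decompositions such as CP or Tucker.
mode0_unfolding = tensor.reshape(n_users, n_items * n_contexts)
print(matrix.shape, tensor.shape, mode0_unfolding.shape)
```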

It's not the only challenge she's embraced. Anandkumar is a mentor and an advocate for changes to the systems that push marginalized groups out of the field. In 2018, she organized a petition to change the name of her field's annual Neural Information Processing Systems conference from a direct acronym to NeurIPS. The conference board rejected the petition that October. But Anandkumar and her peers refused to let up, and weeks later the board reversed course.

Quanta spoke with Anandkumar at her office in Pasadena about her upbringing, tensors and the ethical challenges facing AI. The interview has been condensed and edited for clarity.

In the early 1990s they were among the first to bring programmable manufacturing machines into Mysore. At that time it was seen as something odd: "We can hire human operators to do this, so what is the need for automation?" My parents saw that there could be huge efficiencies, and that the machines could do the work a lot faster compared to human-operated ones.

Yeah. And programming. I would see the green screen where my dad would write the program, and that would move the turret and the tools. It was just really fascinating to see: understanding geometry, understanding how the tool should move. You see the engineering side of how such a massive machine can do this.

My mom was a pioneer in a sense. She was one of the first in her community and family background to take up engineering. Many other relatives advised my grandfather not to send her, saying she may not get married easily. My grandfather hesitated. That's when my mom went on a hunger strike for three days.

As a result, I never saw it as something weird for women to be interested in engineering. My mother inculcated in us that appreciation of math and sciences early on. Having that be just a natural part of who I am from early childhood went a long way. If my mom ever saw sexism, she would point it out and say, "No, don't accept this." That really helped.

The rest is here:
The AI Researcher Giving Her Field Its Bitter Medicine - Quanta Magazine

The growth stage of applied AI and MLOps – TechTalks

This article is part of our series that explores the business of artificial intelligence.

Applied artificial intelligence tops the list of 14 most influential technology trends in McKinsey & Company's Technology Trends Outlook 2022 report.

For now, applied AI (which might also be referred to as enterprise AI) is mainly the use of machine learning and deep learning models in real-world applications. A closely related trend that also made it to McKinsey's top-14 list is industrializing machine learning, which refers to MLOps platforms and other tools that make it easier to train, deploy, integrate, and update ML models in different applications and environments.

McKinsey's findings, which are in line with similar reports released by consulting and research firms, show that after a decade of investment, research, and development of tools, the barriers to applied AI are slowly fading.

Large tech companies, which often house many of the top machine learning/deep learning scientists and engineers, have been researching new algorithms and applying them to their products for years. Thanks to the developments highlighted in McKinseys report, more organizations can adopt machine learning models in their applications and bring their benefits to their customers and users.

The recent decade has seen a revived and growing mainstream interest in artificial intelligence, mainly thanks to the proven capabilities of deep neural networks in performing tasks that were previously thought to be beyond the limits of computers. During the same period, the machine learning research community has made very impressive progress in some of the challenging areas of AI, including computer vision and natural language processing.

The scientific breakthroughs in machine learning were largely made possible because of the growing capabilities to collect, store, and access data in different domains. At the same time, advances in processors and cloud computing have made it possible to train and run neural networks at speeds and scales that were previously thought to be impossible.

Some of the milestone achievements of deep learning were followed by news cycles that publicized (and often exaggerated) the capabilities of contemporary AI. Today, many companies try to present themselves as AI first, or pitch their products as using the latest and greatest in deep learning.

However, bringing ML from research labs to actual products presents several challenges, which is why most machine learning strategies fail. Creating and maintaining products that use machine learning requires different infrastructure, tools, and skill sets than those used in traditional software. Organizations need data lakes to collect and store data, and data engineers to set up, maintain, and configure the data infrastructure that makes it possible to train and update ML models. They need data scientists and ML engineers to prepare the data and models that will power their applications. They need distributed computing experts that can make ML models run in a time- and cost-efficient manner and at scale. And they need product managers who can adapt the ML system to their business model and software engineers who can integrate the ML pipeline into their products.

The data, hardware, and talent costs that come with enterprise AI have been often too prohibitive for smaller organizations to make long-term investments in ML strategies.

It is against this backdrop that the McKinsey & Company report's findings are worth examining.

The report ranks tech trends based on five quantifiable measures: search engine queries, news publications, patents, research publications, and investment. It is worth noting that such quantitative measures don't always paint the most accurate picture of the relevance of a trend. But tracking them over time can give a good estimate of how a technology goes through the different steps of the hype, adoption, and productivity cycle.

McKinsey further corroborated its findings through surveys and interviews with experts from 20 different industries, which gives a better picture of what the opportunities and challenges are.

The report is based on 2018-2021 data, which does not fully account for the downturn that capital markets are currently undergoing. According to the findings, applied AI has seen growth in all quantifiable measures except for the search engine queries category (which is a grey area, since AI terms and trends are constantly evolving). McKinsey gives applied AI the highest innovation score and top-five investment score with $165 billion in 2021.

(Measuring investment is also very subjective and depends on how you define applied AI; e.g., if a company that secures a huge round of funding uses machine learning as a small part of its product, will it count as an investment in applied AI?)

In terms of industry relevance, some of the ML applications mentioned in the report include use cases such as recommendation engines (e.g., content recommendation, smart upselling), detection and prevention (e.g., credit card fraud detection, customer complaint modeling, early disease diagnosis, defect prediction), and time series analysis (e.g., managing price volatility, demand forecasting). Interestingly, these are some of the areas of machine learning where the algorithms have been well-developed for years. Though computer vision is only mentioned once in the use cases, some of the applications might benefit from it (e.g., document scanning, equipment defect detection).
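
As a toy illustration of one of these long-established use cases, demand forecasting, the sketch below fits a straight-line trend to twelve invented monthly demand figures and projects the next three months; real systems use far richer time-series models, so treat this purely as a sketch.

```python
# Toy demand-forecasting sketch; the monthly figures are invented and the
# linear-trend model is deliberately simple.
import numpy as np

months = np.arange(12)
demand = np.array([120, 125, 130, 128, 140, 145, 150, 155, 160, 158, 170, 175])

# Fit a linear trend and project the next three months.
slope, intercept = np.polyfit(months, demand, deg=1)
future = np.arange(12, 15)
forecast = slope * future + intercept
print(np.round(forecast, 1))  # e.g., inputs for inventory and pricing decisions
```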

The report also mentions some of the more advanced areas of machine learning, such as generative deep learning models (e.g., simulation engines for self-driving cars, generating chemical compounds), transformer models (e.g., drug discovery), graph neural networks, and robotics.

This further drives home the point that the main hurdle for the adoption of applied AI has not been poor machine learning algorithms but the lack of tooling and infrastructure to put well-known and well-tested algorithms to efficient use. These constraints have kept applied AI out of reach for companies that don't have enormous resources and access to scarce machine learning talent.

In recent years, there have been tremendous advances on some of these fronts. We've seen the advent and maturation of no-code ML platforms, easy-to-use ML programming libraries, API-based ML services (MLaaS), and specialized hardware for training and running ML models. At the same time, the data storage technologies underlying ML services have evolved to become more flexible, interoperable, and scalable. Meanwhile, some enterprise AI companies have started to develop and provide ML solutions for specific sectors (e.g., financial services, oil and gas, retail).

All these developments lower the financial and technical barriers for companies adopting machine learning in their business models. In many cases, companies can integrate ML services into their applications without having in-depth knowledge of the algorithms running in the background.

According to McKinsey's 2021 survey of industry experts, 56 percent of respondents said their organizations had adopted AI, up from 50 percent in the 2020 survey. The 2021 survey also indicated that adopting AI can have financial benefits: 27 percent of respondents attributed 5 percent or more of their companies' EBIT to AI.

The second AI-related tech trend included in the McKinsey & Company report is the industrialization of machine learning. This is a vague term and has much overlap with the applied AI category, so the report defines it as an interoperable stack of technical tools for automating ML and scaling up its use so that organizations can realize its full potential.

The technologies underlying advances in this field are mostly the same that have led to the growth of applied AI (better data storage platforms, hardware stacks, ML development tools and platforms, etc.). However, one specific field that has seen impressive developments in recent years is machine learning operations (MLOps), the set of tools and practices that streamline the training, deployment, and maintenance of ML models.

MLOps platforms provide tools for curating, processing, and labeling data; training and comparing different machine learning models; version control for datasets and models; deploying ML models and monitoring their performance; and updating ML models as their performance decays, their environment changes, and new data becomes available. MLOps platforms, which are growing in number and maturity, bring together several different tasks that were previously carried out separately and in an ad hoc fashion.
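
One of those tasks, monitoring a deployed model and flagging retraining as its performance decays, can be sketched in a few lines. The baseline accuracy, tolerance and toy labels below are assumptions for illustration and are not tied to any particular MLOps platform.

```python
# Sketch: compare a deployed model's live accuracy against its deployment baseline
# and flag retraining when the gap grows too large (all numbers are invented).
from sklearn.metrics import accuracy_score

BASELINE_ACCURACY = 0.92   # accuracy recorded at deployment time
RETRAIN_TOLERANCE = 0.05   # allowed drop before retraining is triggered

def check_model_health(y_true, y_pred):
    """Return a status message based on live accuracy versus the baseline."""
    live = accuracy_score(y_true, y_pred)
    if live < BASELINE_ACCURACY - RETRAIN_TOLERANCE:
        return f"accuracy {live:.2f}: schedule retraining with fresh data"
    return f"accuracy {live:.2f}: model healthy"

# Toy batch of recently labeled predictions.
print(check_model_health([1, 0, 1, 1, 0, 1, 0, 0], [1, 0, 1, 0, 0, 0, 0, 1]))
```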

According to the report, the industrialization of machine learning can shorten the production time frame for ML applications by 90 percent (from proof of concept to product) and reduce development resources by up to 40 percent.

Despite the advances in applied AI, the field still has some gaps to bridge. The McKinsey report states that the availability of resources, namely talent and funding, remains one of the hurdles to the further growth of enterprise AI. Currently, the capital markets are in a downturn, and all sectors, including AI, are facing problems funding their startups and companies.

However, despite the AI capital pie becoming smaller, funding has not stopped altogether. According to a recent CB Insights report, companies that have already achieved product/market fit and are ready for aggressive growth are still managing to secure mega-funding rounds (above $100 million). This suggests that companies that don't have the margins to launch new ML strategies will have a hard time receiving outside funding. But applied ML platforms that have already cornered their share of the market will continue to draw interest from investors.

Another important challenge that the report mentions is data risks and vulnerabilities. This is becoming an increasingly critical issue for applied machine learning. Like its development lifecycle, the security threat landscape of machine learning is different from that of traditional software. The security tools used in most software development platforms are not designed to detect adversarial examples, data poisoning, membership inference attacks, and other types of threats against ML models.
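
To give a feel for one of these threats, the sketch below constructs a toy adversarial example against a linear classifier: a targeted nudge to the input is enough to flip the prediction. The data, model and step size are invented for illustration and are not drawn from the report.

```python
# Toy adversarial example against a linear model (all values invented).
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 2))
y = (X[:, 0] + X[:, 1] > 0).astype(int)
clf = LogisticRegression().fit(X, y)

x = np.array([[0.3, 0.2]])                       # legitimately classified as class 1
w = clf.coef_[0]
x_adv = x - 0.8 * w / np.linalg.norm(w)          # modest step against the weight vector
print(clf.predict(x)[0], clf.predict(x_adv)[0])  # the prediction typically flips from 1 to 0
```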

Fortunately, the security and machine learning communities are coming together to develop tools and practices for creating secure ML pipelines. As applied AI continues to grow, we can expect other sectors to speed up their adoption of ML, which will in turn further accelerate the pace of innovation in the field.

Here is the original post:
The growth stage of applied AI and MLOps - TechTalks