Archive for the ‘Machine Learning’ Category

C3 AI Named Google Cloud Technology Partner of the Year for AI and Machine Learning – Business Wire

REDWOOD CITY, Calif.--(BUSINESS WIRE)--C3 AI (NYSE: AI), the Enterprise AI application software company, announced it has been awarded the Google Cloud Technology Partner of the Year in the artificial intelligence and machine learning category for 2021. C3 AI has been recognized for achievements in the Google Cloud ecosystem, including helping cross-industry customers accelerate the deployment of Enterprise AI applications.

"Our team is honored to be selected as a Google Cloud Technology Partner of the Year award winner," said Ed Abbo, C3 AI president and chief technology officer. "C3 AI and Google Cloud are fully aligned to unlock customer value by accelerating delivery and operation of innovative industry-specific AI applications."

In September 2021, C3 AI and Google Cloud unveiled a first-of-its-kind partnership to rapidly deploy Enterprise AI applications for industry-specific business operations across financial services, manufacturing, healthcare and supply chain, among other sectors. The entire portfolio of C3 AI's Enterprise AI applications is available to Google Cloud customers, including C3 AI CRM.

These solutions fully leverage the accuracy and scale of multiple Google Cloud products and capabilities, including Google Kubernetes Engine, Google BigQuery, and Vertex AI, enabling customers to rapidly build and deploy machine learning models. C3 AI's applications, built on a common foundation of Google Cloud's infrastructure, AI, machine learning and data analytics capabilities, complement and interoperate with Google Cloud's portfolio of existing and future industry solutions.

"This award recognizes C3 AI's commitment to customer success, and its delivery of innovative and impactful solutions on Google Cloud in AI and machine learning," said Bronwyn Hastings, VP of Global ISV Partnerships and Channels, Google Cloud. "We're proud to recognize C3 AI as our Technology Partner of the Year for AI and Machine Learning, and we look forward to continuing our work together building and creating business value for customers with cloud technologies."

About C3.ai, Inc.

C3 AI is the Enterprise AI application software company. C3 AI delivers a family of fully integrated products including the C3 AI Application Platform, an end-to-end platform for developing, deploying, and operating enterprise AI applications, and C3 AI Applications, a portfolio of industry-specific SaaS enterprise AI applications that enable the digital transformation of organizations globally. Learn more at: http://www.c3.ai.

Read more:
C3 AI Named Google Cloud Technology Partner of the Year for AI and Machine Learning - Business Wire

Union.ai Releases UnionML For Seamless Creation Of Web-native Machine Learning Applications – AiThority

Open-source MLOps framework speeds creation and deployment of ML microservices within a unified interface.

Union.ai, provider of the open-source workflow orchestration platform Flyte and its hosted version, Union Cloud, announced the release of UnionML at MLOps World 2022.


The open-source MLOps framework for building web-native machine learning applications offers a unified interface for bundling Python functions into machine learning (ML) microservices. It is the only library that seamlessly manages both data science workflows and production lifecycle tasks. This makes it easy to build new AI applications from scratch, or make existing Python code run faster at scale.


UnionML aims to unify the ever-evolving ecosystem of machine learning and data tools into a single interface for expressing microservices as Python functions. Data scientists can create UnionML applications by defining a few core methods that are automatically bundled into ML microservices, starting with model training and offline/online prediction.


"Creating machine learning applications should be easy, frictionless and simple, but today it really isn't," said Union.ai CEO Ketan Umare. "The cost and complexity of choosing tools, deciding how to combine them into a coherent ML stack, and maintaining them in production requires a whole team of people who often leverage different programming languages and follow disparate practices. UnionML significantly simplifies creating and deploying machine learning applications."

UnionML apps comprise two objects: Dataset and Model. Together, they expose function decorator entry points that serve as building blocks for a machine learning application. By focusing on the core building blocks instead of the way they fit together, data scientists can reduce their cognitive load for iterating on models and deploying them to production. UnionML uses Flyte to execute training and prediction workflows locally or on production-grade Kubernetes clusters, relieving MLOps engineers of the overhead of provisioning compute resources for their stakeholders. Models and ML applications can be served via FastAPI or AWS Lambda. More options will be available in the future.
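To make the decorator pattern concrete, here is a minimal sketch of a UnionML app, adapted from the library's documented quickstart; the dataset and model names and the scikit-learn estimator are illustrative, and exact signatures may differ across versions.

import pandas as pd
from fastapi import FastAPI
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from unionml import Dataset, Model

# The two core objects: Dataset handles data loading/splitting,
# Model wraps training, prediction and evaluation.
dataset = Dataset(name="digits_dataset", test_size=0.2, shuffle=True, targets=["target"])
model = Model(name="digits_classifier", init=LogisticRegression, dataset=dataset)

@dataset.reader
def reader() -> pd.DataFrame:
    # Load raw data; UnionML handles train/test splitting and parsing.
    return load_digits(as_frame=True).frame

@model.trainer
def trainer(estimator: LogisticRegression, features: pd.DataFrame, target: pd.DataFrame) -> LogisticRegression:
    return estimator.fit(features, target.squeeze())

@model.predictor
def predictor(estimator: LogisticRegression, features: pd.DataFrame) -> list:
    return [float(x) for x in estimator.predict(features)]

@model.evaluator
def evaluator(estimator: LogisticRegression, features: pd.DataFrame, target: pd.DataFrame) -> float:
    return float(accuracy_score(target.squeeze(), predictor(estimator, features)))

# Expose the app as a FastAPI microservice, one of the serving
# options the article mentions.
app = FastAPI()
model.serve(app)

Serving this file with a standard ASGI server (for example uvicorn) would then expose training and prediction endpoints; per the article, AWS Lambda is another serving target.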



Continue reading here:
Union.ai Releases UnionML For Seamless Creation Of Web-native Machine Learning Applications - AiThority

Datatonic Wins Google Cloud Specialization Partner of the Year Award for Machine Learning – PR Newswire

LONDON, June 15, 2022 /PRNewswire/ -- Datatonic, a leader for Data + AI consulting on Google Cloud, today announced it has received the 2021 Google Cloud Specialization Partner of the Year award for Machine Learning.

Datatonic was recognized for the company's achievements in the Google Cloud ecosystem, helping joint customers scale their Machine Learning (ML) capabilities with Machine Learning Operations (MLOps) and achieve business impact with transformational ML solutions.

Datatonic has continuously invested in expanding its MLOps expertise, from defining what "good" MLOps looks like to helping clients make their ML workloads faster, more scalable, and more efficient. In just the past year, the company has built high-performing MLOps platforms for global clients across the Telecommunications, Media, and e-Commerce sectors, enabling them to seamlessly leverage MLOps best practices across their teams.

Their recently open-sourced MLOps Turbo Templates, co-developed with Google Cloud's Vertex AI Pipelines product team, showcase Datatonic's experience implementing MLOps solutions, and Google Cloud's technical excellence to help teams get started with MLOps even faster.

"We're delighted with this recognition from our partners at Google Cloud. It's amazing to see our team go from strength to strength at the forefront of cutting-edge technology with Google Cloud and MLOps. We're proud to be driving continuous improvements to the tech stack in partnership with Google Cloud, and to drive impact and scalability with our customers, from increasing ROI in data and AI spending to unlocking new revenue streams." - Louis Decuypere - CEO, Datatonic

"Google Cloud Specializations recognize partner excellence and proven customer success in a particular product area or industry," said Nina Harding, Global Chief, Partner Programs and Strategy, Google Cloud. "Based on their certified, repeatable customer success and strong technical capabilities, we're proud to recognize Datatonic as Specialization Partner of the Year for Machine Learning."

Datatonic is a data consultancy enabling companies to make better business decisions with the power of Modern Data Stack and MLOps. Its services empower clients to deepen their understanding of consumers, increase competitive advantages, and unlock operational efficiencies by building cloud-native data foundations and accelerating high-impact analytics and machine learning use cases.


For enquiries about new projects, get in touch at [emailprotected]. For media/press enquiries, contact Krisztina Gyure ([emailprotected]).

SOURCE Datatonic Ltd

Go here to see the original:
Datatonic Wins Google Cloud Specialization Partner of the Year Award for Machine Learning - PR Newswire

Using Machine Learning to Automate Kubernetes Optimization – The New Stack – thenewstack.io

Brian Likosar

Brian is an open source geek with a passion for working at the intersection of people and technology. Throughout his career, he's been involved in open source, whether that was with Linux, Ansible and OpenShift/Kubernetes while at Red Hat, Apache Kafka while at Confluent, or Apache Flink while at AWS. Currently a senior solutions architect at StormForge, he is based in the Chicago area and enjoys horror, sports, live music and theme parks.

Note: This is the third of a five-part series covering Kubernetes resource management and optimization. In this article, we explain how machine learning can be used to manage Kubernetes resources efficiently. Previous articles explained Kubernetes resource types and requests and limits.

As Kubernetes has become the de facto standard for application container orchestration, it has also raised vital questions about optimization strategies and best practices. One of the reasons organizations adopt Kubernetes is to improve efficiency, even while scaling up and down to accommodate changing workloads. But the same fine-grained control that makes Kubernetes so flexible also makes it challenging to effectively tune and optimize.

In this article, we'll explain how machine learning can be used to automate tuning of these resources and ensure efficient scaling for variable workloads.

Optimizing applications for Kubernetes is largely a matter of ensuring that the code uses its underlying resources, namely CPU and memory, as efficiently as possible. That means ensuring performance that meets or exceeds service-level objectives at the lowest possible cost and with minimal effort.

When creating a cluster, we can configure the use of two primary resources, memory and CPU, at the container level. Namely, we can set requests and limits for how much of these resources our application can use. We can think of those resource settings as our input variables, and the output as the performance, reliability and resource usage (or cost) of running our application. As the number of containers increases, the number of variables also increases, and with that, the overall complexity of cluster management and system optimization increases exponentially.
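For illustration, here is a minimal sketch of those container-level settings using the official Kubernetes Python client; the values shown are arbitrary examples, not recommendations.

from kubernetes import client

# Requests tell the scheduler how much to reserve for the container;
# limits are the hard ceiling enforced at runtime.
resources = client.V1ResourceRequirements(
    requests={"cpu": "250m", "memory": "256Mi"},
    limits={"cpu": "500m", "memory": "512Mi"},
)

container = client.V1Container(name="web", image="nginx:1.25", resources=resources)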

We can think of Kubernetes configuration as an equation with resource settings as our variables and cost, performance and reliability as our outcomes.

To further complicate matters, different resource parameters are interdependent. Changing one parameter may have unexpected effects on cluster performance and efficiency. This means that manually determining the precise configurations for optimal performance is an impossible task, unless you have unlimited time and Kubernetes experts.

If we do not set custom values for resources during the container deployment, Kubernetes automatically assigns these values. The challenge here is that Kubernetes is quite generous with its resources to prevent two situations: service failure due to an out-of-memory (OOM) error and unreasonably slow performance due to CPU throttling. However, using the default configurations to create a cloud-based cluster will result in unreasonably high cloud costs without guaranteeing sufficient performance.

This all becomes even more complex when we seek to manage multiple parameters for several clusters. For optimizing an environment's worth of metrics, a machine learning system can be an integral addition.

There are two general approaches to machine learning-based optimization, each of which provides value in a different way. First, experimentation-based optimization can be done in a non-prod environment using a variety of scenarios to emulate possible production scenarios. Second, observation-based optimization can be performed either in prod or non-prod by observing actual system behavior. These two approaches are described next.

Optimizing through experimentation is a powerful, science-based approach because we can try any possible scenario, measure the outcomes, adjust our variables and try again. Since experimentation takes place in a non-prod environment, we're only limited by the scenarios we can imagine and the time and effort needed to perform these experiments. If experimentation is done manually, the time and effort needed can be overwhelming. That's where machine learning and automation come in.

Let's explore how experimentation-based optimization works in practice.

To set up an experiment, we must first identify which variables (also called parameters) can be tuned. These are typically CPU and memory requests and limits, replicas and application-specific parameters such as JVM heap size and garbage collection settings.

Some ML optimization solutions can scan your cluster to automatically identify configurable parameters. This scanning process also captures the cluster's current, or baseline, values as a starting point for our experiment.

Next, you must specify your goals. In other words, which metrics are you trying to minimize or maximize? In general, the goal will consist of multiple metrics representing trade-offs, such as performance versus cost. For example, you may want to maximize throughput while minimizing resource costs.

Some optimization solutions will allow you to apply a weighting to each optimization goal, as performance may be more important than cost in some situations and vice versa. Additionally, you may want to specify boundaries for each goal. For instance, you might not want to even consider any scenarios that result in performance below a particular threshold. Providing these guardrails will help to improve the speed and efficiency of the experimentation process.
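As a rough illustration of what such an experiment definition might look like, here is a hypothetical schema (not any specific vendor's API) combining tunable parameters, weighted goals, and guardrail bounds:

# Illustrative experiment definition: tunable parameters, weighted
# goals, and a guardrail that excludes unacceptable configurations.
experiment = {
    "parameters": {
        "cpu_request": {"min": "100m", "max": "2000m"},
        "memory_request": {"min": "128Mi", "max": "4Gi"},
        "replicas": {"min": 1, "max": 10},
        "jvm_heap": {"min": "256m", "max": "2g"},
    },
    "goals": [
        {"metric": "p95_latency_ms", "direction": "minimize", "weight": 0.6,
         "bound": {"max": 500}},  # guardrail: ignore configs above 500 ms
        {"metric": "cost_per_hour", "direction": "minimize", "weight": 0.4},
    ],
}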

Here are some considerations for selecting the right metrics for your optimization goals:

Of course, these are just a few examples. Determining the proper metrics to prioritize requires communication between developers and those responsible for business operations. Determine the organization's primary goals. Then examine how the technology can achieve these goals and what it requires to do so. Finally, establish a plan that emphasizes the metrics that best accommodate the balance of cost and function.

With an experimentation-based approach, we need to establish the scenarios to optimize for and build those scenarios into a load test. This might be a range of expected user traffic or a specific scenario like a retail holiday-based spike in traffic. This performance test will be used during the experimentation process to simulate production load.
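As one way to express such a scenario, here is a brief sketch using Locust, a common open-source load-testing tool; the endpoints and traffic mix are hypothetical.

from locust import HttpUser, task, between

# A simple scenario approximating expected user traffic. A holiday-
# spike scenario would reuse the same user class with a much higher
# user count and spawn rate when launching the test.
class ShopperUser(HttpUser):
    wait_time = between(1, 3)  # simulated think time between requests

    @task(3)  # weighted: browsing is three times as common as checkout
    def browse_catalog(self):
        self.client.get("/products")

    @task(1)
    def checkout(self):
        self.client.post("/cart/checkout", json={"items": [{"sku": "A1", "qty": 1}]})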

Once we've set up our experiment with optimization goals and tunable parameters, we can kick off the experiment. An experiment consists of multiple trials, with your optimization solution iterating through the following steps for each trial:

The machine learning engine uses the results of each trial to build a model representing the multidimensional parameter space. In this space, it can examine the parameters in relation to one another. With each iteration, the ML engine moves closer to identifying the configurations that optimize the goal metrics.
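The following sketch illustrates that loop with scikit-optimize's Gaussian-process minimizer; it is a stand-in for the kind of trial loop described above, not any vendor's actual engine, and run_trial is a hypothetical helper that applies a configuration, drives the load test, and returns a single weighted cost/performance score.

from skopt import gp_minimize
from skopt.space import Integer

# The multidimensional parameter space the model explores.
space = [
    Integer(100, 2000, name="cpu_request_millicores"),
    Integer(128, 4096, name="memory_request_mib"),
    Integer(1, 10, name="replicas"),
]

def objective(params):
    cpu_m, mem_mib, replicas = params
    # run_trial is hypothetical: deploy the config, run the load
    # test, and return a weighted score (lower is better).
    return run_trial(cpu_m, mem_mib, replicas)

# Each of the 30 trials refines the Gaussian-process model of the
# parameter space, moving toward the optimal configuration.
result = gp_minimize(objective, space, n_calls=30, random_state=42)
print("best parameters:", result.x, "best score:", result.fun)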

While machine learning automatically recommends the configuration that will result in the optimal outcomes, additional analysis can be done once the experiment is complete. For example, you can visualize the trade-offs between two different goals and see which parameters have a significant impact on outcomes and which matter less.

Results are often surprising and can lead to key architectural improvements, for example, determining that a larger number of smaller replicas is more efficient than a smaller number of heavier replicas.

Experiment results can be visualized and analyzed to fully understand system behavior.

While experimentation-based optimization is powerful for analyzing a wide range of scenarios, it's impossible to anticipate every possible situation. Additionally, highly variable user traffic means that an optimal configuration at one point in time may not be optimal as things change. Kubernetes autoscalers can help, but they are based on historical usage and fail to take application performance into account.

This is where observation-based optimization can help. Let's see how it works.

Depending on what optimization solution you're using, configuring an application for observation-based optimization may consist of the following steps:

Once configured, the machine learning engine begins analyzing observability data collected from Prometheus, Datadog or other observability tools to understand actual resource usage and application performance trends. The system then begins making recommendations at the interval specified during configuration.

If you specified automatic implementation of recommendations during configuration, the optimization solution will automatically patch deployments with recommended configurations as they are recommended. If you selected manual deployment, you can view the recommendation, including container-level details, before deciding whether to approve it.
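A heavily simplified sketch of one such recommend-and-patch step might look like the following; the PromQL query, the 20% headroom rule, and the deployment and container names are illustrative assumptions, not any product's actual algorithm.

import requests
from kubernetes import client, config

PROM_URL = "http://prometheus:9090/api/v1/query"
# p95 of the pod's CPU usage rate over the past week (PromQL subquery).
QUERY = ('quantile_over_time(0.95, '
         'rate(container_cpu_usage_seconds_total{pod=~"web-.*"}[5m])[7d:5m])')

resp = requests.get(PROM_URL, params={"query": QUERY}).json()
usage_cores = float(resp["data"]["result"][0]["value"][1])
recommended = f"{int(usage_cores * 1.2 * 1000)}m"  # 20% headroom, in millicores

# Patch the deployment's CPU request with the recommendation.
config.load_kube_config()
apps = client.AppsV1Api()
patch = {"spec": {"template": {"spec": {"containers": [
    {"name": "web", "resources": {"requests": {"cpu": recommended}}}]}}}}
apps.patch_namespaced_deployment(name="web", namespace="default", body=patch)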

As you may have noted, observation-based optimization is simpler than experimentation-based approaches. It provides value faster with less effort, but on the other hand, experimentation-based optimization is more powerful and can provide deep application insights that aren't possible using an observation-based approach.

Which approach to use shouldn't be an either/or decision; both approaches have their place and can work together to close the gap between prod and non-prod. Here are some guidelines to consider:

Using both experimentation-based and observation-based approaches creates a virtuous cycle of systematic, continuous optimization.

Optimizing our Kubernetes environment to maximize efficiency (performance versus cost), scale intelligently and achieve our business goals requires:

For small environments, this task is arduous. For an organization running apps on Kubernetes at scale, it is likely already beyond the scope of manual labor.

Fortunately, machine learning can bridge the automation gap and provide powerful insights for optimizing a Kubernetes environment at every level.

StormForge provides a solution that uses machine learning to optimize based on both observation (using observability data) and experimentation (using performance-testing data).

To try StormForge in your environment, you can request a free trial here and experience how complete optimization does not need to be a complete headache.

Stay tuned for future articles in this series, where we'll explain how to tackle specific challenges involved in optimizing Java apps and databases running in containers.

The New Stack is a wholly owned subsidiary of Insight Partners, an investor in the following companies mentioned in this article: StormForge.


Visit link:
Using Machine Learning to Automate Kubernetes Optimization The New Stack - thenewstack.io

Dataiku Named Snowflake Machine Learning/AI Partner of the Year Award for the 2nd Year in a Row – GlobeNewswire

SNOWFLAKE SUMMIT, Las Vegas, June 14, 2022 (GLOBE NEWSWIRE) -- Dataiku, the platform for Everyday AI, today announced that it has been named the 2022 Machine Learning/AI Partner of the Year award winner by Snowflake, the Data Cloud company, for the second year in a row. This award was presented at Snowflake Summit 2022: The World of Data Collaboration.

Also at Snowflake Summit today, Dataiku received competency awards in financial services, healthcare and life sciences, and retail and CPG for the depth of its Snowflake expertise and commitment to driving customer impact in these industries. The partnership between the two industry leaders has deepened in the past year, as they support a growing list of joint customers, including Ameritas, First Tech Federal Credit Union, Novartis, and Monoprix.

Dataiku's Everyday AI platform enables organizations of any size to deliver data, analytics, and AI projects in a collaborative, scalable environment that takes full advantage of their investment in Snowflake. In addition, the joint solution provides an easy-to-use, visual interface where coders and non-coders can securely team up to work with data in Snowflake to build production-ready data pipelines and data science projects, all in a single platform.

"We are thrilled to be chosen as Snowflake's 2022 Machine Learning/AI Partner of the Year for the second year in a row," said David Tharp, SVP Ecosystems and Alliances at Dataiku. "The power of our AI capability with the Snowflake Data Cloud platform is quickly shifting the landscape of intelligent cloud computing. We are providing value to our joint customers in minutes rather than months. In the past year, we have invested significantly in becoming the most tightly integrated machine learning/AI platform to Snowflake, and it's very rewarding to see this value delivered to our customers."

"Snowflake and Dataiku's commitment to developing and delivering Everyday AI solutions is foundational to our shared mission of helping every organization benefit from a data-driven culture. Together with Dataiku, we are delivering on the promise of machine learning and AI for customers across industries," said Colleen Kapase, SVP of Worldwide Partnerships at Snowflake.

"Combining Snowflake's Data Cloud with Dataiku's end-to-end analytics and AI platform has been a game changer for our organization, greatly improving how we manage large datasets and complex analytics," said Jay Franklin, VP of Enterprise Data and Analytics at First Tech Federal Credit Union. "As a credit union, we have a terrific opportunity to directly impact the lives of our members. By scaling and maturing our data science and analytics practices, we are making this a reality through member centricity and personalized, highly relevant experiences and offerings."

Dataiku's integrated access with Snowpark for Python

A year after announcing its integration with Snowflake's Snowpark and Java user-defined functions (UDFs), Dataiku has also integrated access to Snowpark for Python, enabling Python coders and developers to work with familiar tools, packages, and libraries. Now, coders can focus on innovation while avoiding manual tasks and dependencies.
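As a rough sketch of the Snowpark for Python pattern described here (independent of Dataiku's integration), transformations are expressed in Python and pushed down to execute inside Snowflake; the connection parameters and table names below are placeholders.

from snowflake.snowpark import Session
from snowflake.snowpark.functions import col

# Connection parameters are placeholders, not real credentials.
session = Session.builder.configs({
    "account": "<account>", "user": "<user>", "password": "<password>",
    "warehouse": "<warehouse>", "database": "<database>", "schema": "<schema>",
}).create()

# Filtering and aggregation are expressed client-side in Python but
# executed inside Snowflake's engine.
orders = session.table("ORDERS")
completed_per_customer = (
    orders.filter(col("STATUS") == "COMPLETE")
          .group_by("CUSTOMER_ID")
          .count()
)
completed_per_customer.write.save_as_table("CUSTOMER_ORDER_COUNTS", mode="overwrite")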

The new and existing integrations with Snowflake allow users to accomplish the following:

These integrations and more give customers the speed, scale, and security of Snowflake's high-performance engine with Dataiku's platform for Everyday AI.


See the original post:
Dataiku Named Snowflake Machine Learning/AI Partner of the Year Award for the 2nd Year in a Row - GlobeNewswire