Archive for the ‘Machine Learning’ Category

Datatonic Wins Google Cloud Specialization Partner of the Year Award for Machine Learning – PR Newswire

LONDON, June 15, 2022 /PRNewswire/ -- Datatonic, a leader for Data + AI consulting on Google Cloud, today announced it has received the 2021 Google Cloud Specialization Partner of the Year award for Machine Learning.

Datatonic was recognized for the company's achievements in the Google Cloud ecosystem, helping joint customers scale their Machine Learning (ML) capabilities with Machine Learning Operations (MLOps) and achieve business impact with transformational ML solutions.

Datatonic has continuously invested in expanding their MLOps expertise, from defining what "good" MLOps looks like, to helping clients make their ML workloads faster, more scalable, and more efficient. In just the past year, they have built high-performing MLOps platforms for global clients across the Telecommunications, Media, and e-Commerce sectors, enabling them to seamlessly leverage MLOps best practices across their teams.

Their recently open-sourced MLOps Turbo Templates, co-developed with Google Cloud's Vertex AI Pipelines product team, showcase Datatonic's experience implementing MLOps solutions and Google Cloud's technical excellence, helping teams get started with MLOps even faster.

"We're delighted with this recognition from our partners at Google Cloud. It's amazing to see our team go from strength to strength at the forefront of cutting-edge technology with Google Cloud and MLOps. We're proud to be driving continuous improvements to the tech stack in partnership with Google Cloud, and to drive impact and scalability with our customers, from increasing ROI in data and AI spending to unlocking new revenue streams." - Louis Decuypere - CEO, Datatonic

"Google Cloud Specializations recognize partner excellence and proven customer success in a particular product area or industry," said Nina Harding, Global Chief, Partner Programs and Strategy, Google Cloud. "Based on their certified, repeatable customer success and strong technical capabilities, we're proud to recognize Datatonic as Specialization Partner of the Year for Machine Learning."

Datatonic is a data consultancy enabling companies to make better business decisions with the power of Modern Data Stack and MLOps. Its services empower clients to deepen their understanding of consumers, increase competitive advantages, and unlock operational efficiencies by building cloud-native data foundations and accelerating high-impact analytics and machine learning use cases.

Logo - https://mma.prnewswire.com/media/1839415/Datatonic_Logo.jpg

For enquiries about new projects, get in touch at [emailprotected]. For media / press enquiries, contact Krisztina Gyure ([emailprotected]).

SOURCE Datatonic Ltd

Go here to see the original:
Datatonic Wins Google Cloud Specialization Partner of the Year Award for Machine Learning - PR Newswire

Using Machine Learning to Automate Kubernetes Optimization – The New Stack – thenewstack.io

Brian Likosar

Brian is an open source geek with a passion for working at the intersection of people and technology. Throughout his career, he's been involved in open source, whether that was with Linux, Ansible and OpenShift/Kubernetes while at Red Hat, Apache Kafka while at Confluent, or Apache Flink while at AWS. Currently a senior solutions architect at StormForge, he is based in the Chicago area and enjoys horror, sports, live music and theme parks.

Note: This is the third of a five-part series covering Kubernetes resource management and optimization. In this article, we explain how machine learning can be used to manage Kubernetes resources efficiently. Previous articles explained Kubernetes resource types and requests and limits.

As Kubernetes has become the de-facto standard for application container orchestration, it has also raised vital questions about optimization strategies and best practices. One of the reasons organizations adopt Kubernetes is to improve efficiency, even while scaling up and down to accommodate changing workloads. But the same fine-grained control that makes Kubernetes so flexible also makes it challenging to effectively tune and optimize.

In this article, we'll explain how machine learning can be used to automate tuning of these resources and ensure efficient scaling for variable workloads.

Optimizing applications for Kubernetes is largely a matter of ensuring that the code uses its underlying resources, namely CPU and memory, as efficiently as possible. That means ensuring performance that meets or exceeds service-level objectives at the lowest possible cost and with minimal effort.

When creating a cluster, we can configure the use of two primary resources, memory and CPU, at the container level. Namely, we can set requests and limits that determine how much of these resources our application may use. We can think of those resource settings as our input variables, and the performance, reliability, and resource usage (or cost) of running our application as the output. As the number of containers increases, the number of variables also increases, and with that, the overall complexity of cluster management and system optimization grows exponentially.
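
To make the idea of input variables concrete, here is a minimal Python sketch of the per-container requests and limits Kubernetes exposes; the container name, numeric values, and the small validation helper are assumptions chosen purely for illustration, not recommendations.

```python
# Illustrative only: per-container resource settings as the "input variables"
# of the optimization problem. Names and numbers are placeholders.
container_settings = {
    "name": "api-server",  # hypothetical container
    "resources": {
        "requests": {"cpu": "500m", "memory": "256Mi"},   # what the scheduler reserves
        "limits":   {"cpu": "1000m", "memory": "512Mi"},  # hard caps enforced at runtime
    },
    "replicas": 3,
}

def parse_cpu(quantity: str) -> float:
    """Convert a Kubernetes CPU quantity ('500m' or '1') into cores."""
    return float(quantity[:-1]) / 1000 if quantity.endswith("m") else float(quantity)

# Sanity check: Kubernetes rejects a request that exceeds its limit.
req = parse_cpu(container_settings["resources"]["requests"]["cpu"])
lim = parse_cpu(container_settings["resources"]["limits"]["cpu"])
assert req <= lim, "CPU request must not exceed the CPU limit"
```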

We can think of Kubernetes configuration as an equation with resource settings as our variables and cost, performance and reliability as our outcomes.
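
One way to picture that equation is as a function from resource settings to measured outcomes. The toy model below is only a sketch under invented cost and latency formulas; in a real system the outcome values would come from load tests or observability data, not from a formula.

```python
from typing import NamedTuple

class Outcomes(NamedTuple):
    cost_per_hour: float   # e.g. dollars per hour
    latency_p95_ms: float  # performance proxy
    error_rate: float      # reliability proxy

def run_configuration(cpu_request_cores: float, memory_request_mib: int,
                      replicas: int) -> Outcomes:
    """Toy stand-in for 'apply a configuration and measure what happens'.
    The formulas are invented for illustration; real outcomes come from
    load tests or production observability data."""
    cost = replicas * (cpu_request_cores * 0.04 + memory_request_mib * 0.0001)
    latency = 200.0 / max(cpu_request_cores * replicas, 0.001)  # more capacity -> lower latency
    errors = 0.001 if memory_request_mib >= 256 else 0.05       # too little memory -> OOM risk
    return Outcomes(cost, latency, errors)

print(run_configuration(cpu_request_cores=0.5, memory_request_mib=256, replicas=3))
```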

To further complicate matters, different resource parameters are interdependent. Changing one parameter may have unexpected effects on cluster performance and efficiency. This means that manually determining the precise configurations for optimal performance is an impossible task, unless you have unlimited time and Kubernetes experts.

If we do not set custom values for resources during the container deployment, Kubernetes automatically assigns these values. The challenge here is that Kubernetes is quite generous with its resources to prevent two situations: service failure due to an out-of-memory (OOM) error and unreasonably slow performance due to CPU throttling. However, using the default configurations to create a cloud-based cluster will result in unreasonably high cloud costs without guaranteeing sufficient performance.

This all becomes even more complex when we seek to manage multiple parameters for several clusters. For optimizing an environment's worth of metrics, a machine learning system can be an integral addition.

There are two general approaches to machine learning-based optimization, each of which provides value in a different way. First, experimentation-based optimization can be done in a non-prod environment using a variety of scenarios to emulate possible production scenarios. Second, observation-based optimization can be performed either in prod or non-prod by observing actual system behavior. These two approaches are described next.

Optimizing through experimentation is a powerful, science-based approach because we can try any possible scenario, measure the outcomes, adjust our variables and try again. Since experimentation takes place in a non-prod environment, we're only limited by the scenarios we can imagine and the time and effort needed to perform these experiments. If experimentation is done manually, the time and effort needed can be overwhelming. That's where machine learning and automation come in.

Let's explore how experimentation-based optimization works in practice.

To set up an experiment, we must first identify which variables (also called parameters) can be tuned. These are typically CPU and memory requests and limits, replicas and application-specific parameters such as JVM heap size and garbage collection settings.
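
As a rough illustration of what such a tunable-parameter definition might look like, the snippet below declares a hypothetical search space and baseline; the parameter names, bounds, and baseline values are assumptions, and a real optimization tool would discover or define these through its own configuration.

```python
# Hypothetical experiment definition: each tunable parameter gets a range or a
# discrete set the optimizer may explore. Names, units, and bounds are assumed.
search_space = {
    "cpu_request_millicores": {"type": "int", "min": 250, "max": 4000},
    "memory_request_mib":     {"type": "int", "min": 128, "max": 8192},
    "replicas":               {"type": "int", "min": 1,   "max": 10},
    "jvm_heap_mib":           {"type": "int", "min": 64,  "max": 4096},
    "gc_algorithm":           {"type": "categorical",
                               "values": ["G1GC", "ParallelGC", "ZGC"]},
}

# The cluster's scanned baseline serves as the starting point for the experiment.
baseline = {
    "cpu_request_millicores": 1000,
    "memory_request_mib": 1024,
    "replicas": 3,
    "jvm_heap_mib": 768,
    "gc_algorithm": "G1GC",
}
```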

Some ML optimization solutions can scan your cluster to automatically identify configurable parameters. This scanning process also captures the cluster's current, or baseline, values as a starting point for our experiment.

Next, you must specify your goals. In other words, which metrics are you trying to minimize or maximize? In general, the goal will consist of multiple metrics representing trade-offs, such as performance versus cost. For example, you may want to maximize throughput while minimizing resource costs.

Some optimization solutions will allow you to apply a weighting to each optimization goal, as performance may be more important than cost in some situations and vice versa. Additionally, you may want to specify boundaries for each goal. For instance, you might not want to even consider any scenarios that result in performance below a particular threshold. Providing these guardrails will help to improve the speed and efficiency of the experimentation process.
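
A minimal sketch of a weighted goal with a guardrail might look like the following; the weights, metric names, and latency threshold are assumptions chosen only to show the mechanics of scoring a trial and rejecting configurations that fall outside the guardrail.

```python
from typing import Optional

# Illustrative goal definition: maximize throughput, minimize cost (negative
# weight), and reject any trial whose p95 latency breaks the assumed SLO.
WEIGHTS = {"throughput_rps": 1.0, "cost_per_hour": -2.0}
LATENCY_SLO_MS = 300.0  # guardrail: trials above this are not considered

def score(trial_metrics: dict) -> Optional[float]:
    """Return a scalar score for a trial, or None if a guardrail is violated."""
    if trial_metrics["latency_p95_ms"] > LATENCY_SLO_MS:
        return None
    return sum(weight * trial_metrics[name] for name, weight in WEIGHTS.items())

print(score({"throughput_rps": 850.0, "cost_per_hour": 12.4, "latency_p95_ms": 210.0}))
```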

Here are some considerations for selecting the right metrics for your optimization goals:

Of course, these are just a few examples. Determining the proper metrics to prioritize requires communication between developers and those responsible for business operations. Determine the organization's primary goals. Then examine how the technology can achieve these goals and what it requires to do so. Finally, establish a plan that emphasizes the metrics that best accommodate the balance of cost and function.

With an experimentation-based approach, we need to establish the scenarios to optimize for and build those scenarios into a load test. This might be a range of expected user traffic or a specific scenario like a retail holiday-based spike in traffic. This performance test will be used during the experimentation process to simulate production load.

Once we've set up our experiment with optimization goals and tunable parameters, we can kick off the experiment. An experiment consists of multiple trials, with your optimization solution iterating through the following steps for each trial:

The machine learning engine uses the results of each trial to build a model representing the multidimensional parameter space. In this space, it can examine the parameters in relation to one another. With each iteration, the ML engine moves closer to identifying the configurations that optimize the goal metrics.
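
The following sketch shows the overall shape of such a trial loop. It substitutes random sampling for the ML engine's model-guided suggestions (a deliberate simplification) and uses a synthetic scoring function; in a real experiment, the evaluate step would apply the configuration, run the load test, and score the measured metrics.

```python
import random

def sample(space: dict) -> dict:
    """Draw one candidate configuration. Random sampling stands in for the ML
    engine's model-guided suggestions, which get smarter with every trial."""
    config = {}
    for name, spec in space.items():
        if spec["type"] == "int":
            config[name] = random.randint(spec["min"], spec["max"])
        else:
            config[name] = random.choice(spec["values"])
    return config

def run_experiment(space: dict, evaluate, n_trials: int = 20):
    """Run trials and keep the best-scoring configuration seen so far."""
    best_config, best_score, history = None, float("-inf"), []
    for _ in range(n_trials):
        config = sample(space)
        trial_score = evaluate(config)   # apply config, run load test, score metrics
        history.append((config, trial_score))
        if trial_score is not None and trial_score > best_score:
            best_config, best_score = config, trial_score
    return best_config, best_score, history

# Toy usage: a two-parameter space and a synthetic scoring function.
toy_space = {
    "cpu_request_millicores": {"type": "int", "min": 250, "max": 4000},
    "replicas":               {"type": "int", "min": 1,   "max": 10},
}

def toy_evaluate(config: dict) -> float:
    capacity = config["cpu_request_millicores"] * config["replicas"] / 1000
    cost = config["cpu_request_millicores"] * config["replicas"] * 0.00004
    return capacity - 2.0 * cost  # reward capacity, penalize cost

best, best_score, _ = run_experiment(toy_space, toy_evaluate, n_trials=50)
print(best, best_score)
```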

While machine learning automatically recommends the configuration that will result in the optimal outcomes, additional analysis can be done once the experiment is complete. For example, you can visualize the trade-offs between two different goals and see which parameters have a significant impact on outcomes and which matter less.

Results are often surprising and can lead to key architectural improvements, for example, determining that a larger number of smaller replicas is more efficient than a smaller number of heavier replicas.

Experiment results can be visualized and analyzed to fully understand system behavior.

While experimentation-based optimization is powerful for analyzing a wide range of scenarios, it's impossible to anticipate every possible situation. Additionally, highly variable user traffic means that an optimal configuration at one point in time may not be optimal as things change. Kubernetes autoscalers can help, but they are based on historical usage and fail to take application performance into account.

This is where observation-based optimization can help. Let's see how it works.

Depending on what optimization solution you're using, configuring an application for observation-based optimization may consist of the following steps:

Once configured, the machine learning engine begins analyzing observability data collected from Prometheus, Datadog or other observability tools to understand actual resource usage and application performance trends. The system then begins making recommendations at the interval specified during configuration.

If you specified automatic implementation of recommendations during configuration, the optimization solution will automatically patch deployments with recommended configurations as they are generated. If you selected manual deployment, you can view each recommendation, including container-level details, before deciding whether to approve it.
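
To illustrate the flow from observed usage to an applied recommendation, here is a deliberately simple sketch: it derives requests from a high percentile of hypothetical usage samples plus headroom and prints a strategic-merge patch. This is not StormForge's actual algorithm; the container name, samples, and headroom factor are assumptions for illustration.

```python
import json
import statistics

def recommend_requests(cpu_usage_cores: list, mem_usage_mib: list,
                       headroom: float = 1.15) -> dict:
    """Derive requests from a high percentile of observed usage plus headroom.
    Real optimizers use much richer models; this shows only the output shape."""
    cpu_p90 = statistics.quantiles(cpu_usage_cores, n=10)[8]  # ~90th percentile
    mem_p90 = statistics.quantiles(mem_usage_mib, n=10)[8]
    return {"cpu": f"{int(cpu_p90 * headroom * 1000)}m",
            "memory": f"{int(mem_p90 * headroom)}Mi"}

# Hypothetical usage samples, e.g. pulled from Prometheus over the past week.
cpu_samples = [0.21, 0.35, 0.30, 0.42, 0.28, 0.33, 0.38, 0.25, 0.31, 0.40]
mem_samples = [310, 295, 340, 360, 305, 330, 350, 320, 315, 345]

requests = recommend_requests(cpu_samples, mem_samples)
patch = {"spec": {"template": {"spec": {"containers": [
    {"name": "api-server", "resources": {"requests": requests}}]}}}}

# Applied automatically by the optimizer, or reviewed and applied manually, e.g.:
#   kubectl patch deployment api-server -p "$(cat patch.json)"
print(json.dumps(patch, indent=2))
```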

As you may have noted, observation-based optimization is simpler than experimentation-based approaches. It provides value faster with less effort; on the other hand, experimentation-based optimization is more powerful and can provide deep application insights that aren't possible using an observation-based approach.

Which approach to use shouldn't be an either/or decision; both approaches have their place and can work together to close the gap between prod and non-prod. Here are some guidelines to consider:

Using both experimentation-based and observation-based approaches creates a virtuous cycle of systematic, continuous optimization.

Optimizing our Kubernetes environment to maximize efficiency (performance versus cost), scale intelligently and achieve our business goals requires:

For small environments, this task is arduous. For an organization running apps on Kubernetes at scale, it is likely already beyond the scope of manual labor.

Fortunately, machine learning can bridge the automation gap and provide powerful insights for optimizing a Kubernetes environment at every level.

StormForge provides a solution that uses machine learning to optimize based on both observation (using observability data) and experimentation (using performance-testing data).

To try StormForge in your environment, you can request a free trial here and experience how complete optimization does not need to be a complete headache.

Stay tuned for future articles in this series where we'll explain how to tackle specific challenges involved in optimizing Java apps and databases running in containers.

The New Stack is a wholly owned subsidiary of Insight Partners, an investor in the following companies mentioned in this article: StormForge.

Feature image via Pixabay.

Visit link:
Using Machine Learning to Automate Kubernetes Optimization – The New Stack - thenewstack.io

Dataiku Named Snowflake Machine Learning/AI Partner of the Year Award for the 2nd Year in a Row – GlobeNewswire

SNOWFLAKE SUMMIT, Las Vegas, June 14, 2022 (GLOBE NEWSWIRE) -- Dataiku, the platform for Everyday AI, today announced that it has been named the 2022 Machine Learning/AI Partner of the Year award winner by Snowflake, the Data Cloud company, for the second year in a row. This award was presented at Snowflake Summit 2022, The World of Data Collaboration.

Also at Snowflake Summit today, Dataiku received competency awards in financial services, healthcare and life sciences, and retail and CPG for the depth of its Snowflake expertise and commitment to driving customer impact in these industries. The partnership between the two industry leaders has deepened in the past year, as the tandem supports a growing list of joint customers, including Ameritas, First Tech Federal Credit Union, Novartis, and Monoprix.

Dataiku's Everyday AI platform enables organizations of any size to deliver data, analytics, and AI projects in a collaborative, scalable environment that takes full advantage of their investment in Snowflake. In addition, the joint solution provides an easy-to-use, visual interface where coders and non-coders can securely team up to work with data in Snowflake to build production-ready data pipelines and data science projects, all in a single platform.

"We are thrilled to be chosen as Snowflake's 2022 Machine Learning / AI Partner of the Year for the second year in a row," said David Tharp, SVP Ecosystems and Alliances at Dataiku. "The power of our AI capability with the Snowflake Data Cloud platform is quickly shifting the landscape of intelligent cloud computing. We are providing value to our joint customers in minutes rather than months. In the past year, we have invested significantly in becoming the most tightly integrated machine learning / AI platform to Snowflake, and it's very rewarding to see this value delivered to our customers."

"Snowflake and Dataiku's commitment to developing and delivering Everyday AI solutions is foundational to our shared mission of helping every organization benefit from a data-driven culture. Together with Dataiku, we are delivering on the promise of machine learning and AI for customers across industries," said Colleen Kapase, SVP of Worldwide Partnerships at Snowflake.

"Combining Snowflake's Data Cloud with Dataiku's end-to-end analytics and AI platform has been a game changer for our organization, greatly improving how we manage large datasets and complex analytics," said Jay Franklin, VP of Enterprise Data and Analytics at First Tech Federal Credit Union. "As a credit union, we have a terrific opportunity to directly impact the lives of our members. By scaling and maturing our data science and analytics practices, we are making this a reality through member centricity and personalized, highly relevant experiences and offerings."

Dataiku's integrated access with Snowpark for Python

A year after announcing its integration with Snowflake's Snowpark and Java user-defined functions (UDFs), Dataiku has also integrated access to Snowpark for Python, enabling Python coders and developers to work with familiar tools, packages, and libraries. Now, coders can focus on innovation while avoiding manual tasks and dependencies.

The new and existing integrations with Snowflake allow users to accomplish the following:

These integrations and more give customers the speed, scale, and security of Snowflake's high-performance engine with Dataiku's platform for Everyday AI.

Resources

See the original post:
Dataiku Named Snowflake Machine Learning/AI Partner of the Year Award for the 2nd Year in a Row - GlobeNewswire

Advances in AI and machine learning could lead to better health care: lawyers – Lexpert

Of course, transparency and privacy concerns are significant, she notes, but if the information from our public health care system benefits everyone, is it inefficient to ask for consent for every use?

On the other hand, cybersecurity is another essential consideration, as we've come to learn that there are a lot of malevolent actors out there, says Miller Olafsson, with the potential ability to hack into centralized systems as part of a ransomware attack or other threat.

Even in its more basic uses, the potential of AI and machine learning is enormous. But the tricky part of using it in the health care sector is the need to have access to incredible amounts of data while at the same time understanding the sensitive nature of the data collected.

"For artificial intelligence to be used in systems, procedures, or devices, you need access to data, and getting that data, particularly personal health information, is very challenging," says Carole Piovesan, managing partner at INQ Law in Toronto.

She points to the developing legal frameworks in Europe and North America for artificial intelligence and privacy legislation more generally. Lawyers working with start-up companies or health care organizations to build AI systems must help them stay within the parameters of existing laws, says Piovesan, provide guidance on best practices for whatever may come down the line, and help them deal with the potential risks.

Here is the original post:
Advances in AI and machine learning could lead to better health care: lawyers - Lexpert

Machine Learning to Enable Positive Change: An Interview with Adam Benzion – Elektor

"Machine learning can enable positive change in society," says Adam Benzion, Chief Experience Officer at Edge Impulse. Read on to learn how the company is preventing unethical uses of its ML/AI development platform.

Priscilla Haring-Kuipers: What Ethics in Electronics are you working on?

Adam Benzion: At Edge Impulse, we try to connect our work to doing good in the world as a core value of our culture and operating philosophy. Our founders, Zach Shelby and Jan Jongboom, define this as "Machine learning can enable positive change in society," and we are dedicated to supporting applications for good. This is fundamental to what and how we do things. We invest our resources to support initiatives like UN Covid-19 Detect & Protect, Data Science Africa, and wildlife conservation with Smart Parks, Wildlabs, and ConservationX.

This also means we have a responsibility to prevent unethical uses of our ML/AI development platform. When Edge Impulse launched in January 2020, we decided to require a Responsible AI License for our users, which prevents use for criminal purposes, surveillance, or harmful military or police applications. We have had a couple of cases where we turned down projects that were not compatible with this license. There are also many positive uses for ML in governmental and defense applications, which we do support as compatible with our values.

We also joined 1% for the Planet, pledging to donate 1% of our revenue to support nonprofit organizations focused on the environment. I personally lead an initiative that focuses on elephant conservation, where we have partnered with an organization called Smart Parks and helped develop a new AI-powered tracking collar that can last for eight years and be used to understand how the elephants communicate with each other. This is now deployed in parks across Mozambique.

Haring-Kuipers: What is the most important ethical question in your field?

Benzion: There are a lot of ethical issues with AI being used in population control, human recognition and tracking, let alone AI-powered weaponry. Especially where we touch human safety and dignity, AI-powered applications must be carefully evaluated, legislated, and regulated. We dream of automation, fun magical experiences, and human-assisted technologies that do things better, faster, and at a lesser cost. That's the good AI dream, and that's what we all want to build. In a perfect world, we should all be able to vote on the rules and regulations that govern AI.

Haring-Kuipers: What would you like to include in an Electronics Code of Ethics?

Benzion: We need to look at how AI impacts human rights and machine accountability: when AI-powered machines fail, as in the case of autonomous driving, who takes the blame? Without universal guidelines to support us, it is up to every company in this field to find its core values and boundaries so we can all benefit from this exciting new wave.

Haring-Kuipers: An impossible choice ... The most important question before building anything is? A) Should I build this? B) Can I build this? C) How can I build this?

Benzion: A. Within reason, you can build almost anything, so ask yourself: Is the effort vs. outcome worth your precious time?

Priscilla Haring-Kuipers writes about technology from a social science perspective. She is especially interested in technology supporting the good in humanity and a firm believer in effect research. She has an MSc in Media Psychology and makes This Is Not Rocket Science happen.

Go here to see the original:
Machine Learning to Enable Positive Change: An Interview with Adam Benzion - Elektor