Archive for the ‘Machine Learning’ Category

Finding Optimal Learning Rates. The Learning Rate Range Test | by Francesco Franco | Jan, 2024 – Medium

The Learning Rate Range Test

Learning Rates are important when configuring a neural network. But choosing one is not easy, as there is no single best learning rate due to its dependency on your dataset.

Now, how do you choose one? And should it be a fixed one, or should you use learning rate decay? Once you know how you'll choose one, how do you do so objectively? They're all interesting questions, and we'll answer each of them in this blog post.

Today, we'll look at multiple things. In this blog post, we'll cover what a learning rate is, why it is useful, whether you should use a fixed or a decaying learning rate, what the Learning Rate Range Test is, and how to implement it with Keras.

Are you ready? Let's go!

Let's take a look at the high-level supervised machine learning process:

Training such models goes through a simple, sequential and cyclical process:

1. Features are fed forward through the model, which generates predictions for them.

2. These predictions are compared with the targets, which represent the ground truth for the features. That is, they are the actual classes in the classification scenario above.

3. The difference between the predictions and the actual targets can be captured in the loss value. Depending on your machine learning problem, you can choose from a wide range of loss functions.

4. Based on the loss value, the model computes how it can improve, i.e., it computes gradients using backpropagation.

5. Based on these gradients, an optimizer (such as gradient descent or an adaptive optimizer) will adapt the model accordingly.

6. The process starts again. Likely, and hopefully, the model performs slightly better this time.

Once you're happy with the end results, you stop the machine learning process, and you have a model that can hopefully be used in production.

Now, if we wish to understand the concept of the Learning Rate Range Test in more detail, we must take a look at model optimizers. In particular, we should study the concept of a learning rate.

When specifying an optimizer, it's possible to configure the learning rate most of the time. For example, the Adam optimizer in Keras (Keras, n.d.):
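A minimal sketch of that configuration, assuming TensorFlow 2.x with the bundled Keras API (the exact listing from the original post is not reproduced here):

```python
from tensorflow import keras

# Adam with its default learning rate of 0.001; the learning_rate
# argument is where a different value would be plugged in.
optimizer = keras.optimizers.Adam(learning_rate=0.001)
```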

Indeed, here, the learning rate can be set with the learning_rate argument, and it is set to 0.001 by default.

Now, what is a learning rate? If our goal is to study the Learning Rate Range Test, it's critical to understand the concept of a learning rate, isn't it? 😛

Let's go back to step 4 of the machine learning process outlined above: computing gradients with backpropagation.

I always compare optimizing a model with walking down a mountain.

The mountain represents the loss landscape, or how the loss value changes with respect to the particular model state, and your goal is to walk to the valley, where loss is lowest.

This analogy can be used to understand what backpropagation does and why you need learning rates to control it.

Essentially, I like to see backpropagation as a step-computer. While you walk down the mountain, you obviously set steps towards your goal. However, you don't want to miss out on possible shortcuts towards the valley. This requires you to take smaller steps.

Now this is why learning rates are useful: while backpropagation will likely compute relatively large steps, you wish to slow down your descent to allow yourself to look around more thoroughly. Perhaps you'll indeed find that path that brings you to the valley in a shorter amount of time!

So, while backpropagation is a step-computer, the learning rate allows you to control the size of your steps. While you'll take longer to arrive, you might do so more efficiently after all. Especially when the valley is very narrow, you'll no longer overshoot it because your steps are too large.

This analogy also perfectly explains why the learning rate in the Adam example above was set to learning_rate = 0.001: while it uses the computed gradient for optimization, it makes it 1,000 times smaller first, before using it to change the model weights with the optimizer.
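To make the scaling concrete, here is a minimal sketch of a plain gradient descent update (not Adam's full adaptive machinery), showing how the learning rate shrinks the step taken from the computed gradient:

```python
import numpy as np

def sgd_step(weights, gradients, learning_rate=0.001):
    """One plain gradient descent update: the gradient is scaled by
    the learning rate before being subtracted from the weights."""
    return weights - learning_rate * gradients

# With learning_rate = 0.001, only 1/1000th of the gradient moves the weights.
w = np.array([0.5, -1.2])
g = np.array([10.0, 4.0])
print(sgd_step(w, g))  # [ 0.49  -1.204]
```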

Let's now build in a small intermezzo: the concepts of overfitting and underfitting, and checking for them by using validation and test loss.

Often, before you train a model with all your data, you'll first evaluate your choice with hold-out techniques or K-fold Cross Validation. These generate a split between training data and testing data, which you'll need, as you have to decide when the model is good enough.

And good enough is the precise balance between a model that can still improve and a model that has adapted too closely to your training data.

In the first case, which is called underfitting, your model can still improve in a predictive sense. By feeding more samples, and optimizing further, it's likely to improve and show better performance over time.

However, when you do so for too long, the model will overfit or adapt too much to your dataset and its idiosyncrasies. As your dataset is a sample, which is drawn from the true population you wish to train for, you face differences between the sample and population means and variances by definition. If your model is over-adapted to your training set, it's likely that these differences get in the way when you want to use it for new data from the population. And likely, this will occur when you use your model in production.

You'll therefore always have to strike a balance between the model's predictive performance and the model's ability to generalize. This is a very intricate balance that can often only be found in a small interval of your training iterations.

Fortunately, it's possible to detect overfitting using a plot of your loss value (Smith, 2018). Always take your validation or test loss for this. Use your test loss if you don't split your training data into true training and validation data (which is the case if you're simply evaluating models with e.g. K-fold Cross Validation). Use validation loss if you evaluate models and train the final one at once (requiring training, validation and testing data). In both cases, you ensure that you use data that the model has not seen before, avoiding the situation where you, as a student, mark your own homework.

This is especially useful when you are using e.g. TensorBoard, where you can inspect progress in real-time.

However, it's also possible to generate a plot when your training process finishes. Such diagrams make things crisply clear:
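A minimal sketch of such a plot, assuming you kept the History object that Keras' model.fit() returns:

```python
import matplotlib.pyplot as plt

# history = model.fit(..., validation_split=0.2, epochs=50)
def plot_losses(history):
    """Plot training and validation loss per epoch; the point where
    validation loss stops improving marks the onset of overfitting."""
    plt.plot(history.history['loss'], label='training loss')
    plt.plot(history.history['val_loss'], label='validation loss')
    plt.xlabel('Epoch')
    plt.ylabel('Loss')
    plt.legend()
    plt.show()
```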

In the first part of the training process, the model's predictive performance is clearly improving. Hence, it is underfit during that stage and additional epochs can improve model performance.

However, after about the 20th epoch, validation loss stops improving and starts to increase, while (you must assume this) training loss still decreases. This means that while the model gets better and better at predicting the training data, it is getting worse at predicting the validation data. Hence, after the 20th epoch, overfitting starts to occur.

While you can reduce the impact of overfitting, or delay it, with regularizers and Dropout, it's clear that for this model and corresponding configuration, the optimum is achieved at the 20th epoch. What's important to understand here is that this optimum emerges given the model architecture and configuration! If you changed the architecture, or configured it differently, you might e.g. delay overfitting or achieve even lower validation loss minima. That's why training neural networks is more of an art than a science.

As choosing a learning rate setting impacts the loss significantly, it's good that it's now clear what overfitting and underfitting are, and how you can spot them on a plot. Let's now take a look at choosing a learning rate.

Which learning rate to choose? What options do I have?

Good questions.

Let's now take a look at two ways of setting a learning rate: using a fixed learning rate, and using a learning rate that decays over time.

Let's look again at the Adam optimizer implementation for Keras (Keras, n.d.), as configured in the snippet shown earlier.

Here, the learning rate is set as a constant. It's a fixed value which is used in every epoch.

Unfortunately, this doesn't produce an optimal learning process.

Let's take a look at two other models that we trained for another blog post:

The model in orange clearly produces a low loss rapidly, and much faster than the model in blue. However, we can also observe some overfitting to occur after approximately the 10th epoch. Not so weird, given the fact that we trained for ten times longer than strictly necessary.

Now, the rapid descent of the loss value and the increasingly slower pace of falling down are typical for machine learning settings which use optimizers like gradient descent or adaptive ones.

Why is this the case? And why is this important for a learning rate?

Let's dig a little bit deeper.

Supervised machine learning models work with model weights: on initialization, models are configured to accept certain input data, and they create weight vectors in which they can store the numeric patterns they observe. Eventually, they multiply these vectors with the input vectors during training and production usage.

Now, when you start training, it's often best practice to initialize your weight vectors randomly, or by using approaches adapted to your model.

For the forward pass (step 1 of the 6 steps outlined at the start), you can imagine that multiplying your input data with random weights will produce very poor results. Indeed, loss is likely high during the first few epochs. However, in this stage, it's also possible to make large steps towards accurate weights and hence adequate loss values. That's why you see loss descend so rapidly during the first few iterations of a supervised ML training process: it's looking for a global loss minimum very fast.

However, as you walk down that loss mountain, the number of possible steps that can still be taken goes down as a function of the number of steps you have already taken. This is also true for loss landscapes in neural networks: once you get close to the global loss minimum (should it exist), the room for improvement gets tighter and tighter. For this reason, loss balances out (or even gets worse, i.e. overfitting!) over time.

This behavior, where loss values initially decrease substantially while balancing out later on, poses a substantial issue for our learning rate:

We don't want it to be static.

As we recall, the learning rate essentially tells the model how much of the gradient to use during optimization. Remember that with learning_rate = 0.001 only 1/1000th of the computed gradient is used.

For the latter part of the training process, this would be good, as there's no point in setting large steps. Instead, here, you want to set small ones in order to truly find the global minimum, without overshooting it every time. You might even want to use lower learning rate values here.

However, for the first part of the training process, such low learning rates are problematic. Here, you would actually benefit from large learning rates, for the simple reason that you can afford to take large steps during the first few epochs. A small fixed learning rate will thus unnecessarily slow down your learning process, or even make it impossible to find a global minimum in time!

Hence, a static learning rate is in my opinion not really a good idea when training a neural network.

Now, of course, you can choose to use a static learning rate that lies somewhere between the large and small ones. However, is this really a solution, especially when better solutions are available?

Let's now introduce the concept of a decaying learning rate. Here, we'll also begin to discover why the Learning Rate Range Test can be useful.

Instead of a fixed learning rate, wouldn't it be good if we could reduce it over time?

Indeed, this seems to be an approach to reducing the negative impact of a fixed learning rate. By using a so-called decay scheme, which decides how the learning rate decays over time, you can exert control over the learning rate for an arbitrary epoch.

There are many decay schemes available, and here are four examples:

Linear decay allows you to start with a large learning rate, decay it pretty rapidly, and then keep it constant. Together with step decay, which keeps your learning rate fixed for a set number of epochs, these learning rates are not smooth.

It's also possible to use exponential and time decay, which are in fact smooth. With exponential decay, your learning rate decays rapidly at first, and slower over time, but smoothly. Time decay is like a diesel engine: it's a slow start, with great performance once the car has velocity, balancing out when its max is reached.
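A hedged sketch of what two of these schemes can look like in TensorFlow 2.x Keras, using the built-in schedule classes (the parameter values here are illustrative, not taken from the original post):

```python
from tensorflow import keras

# Exponential decay: start at 0.01 and multiply the learning rate
# by 0.9 every 1,000 optimization steps, smoothly (staircase=False).
exp_schedule = keras.optimizers.schedules.ExponentialDecay(
    initial_learning_rate=0.01,
    decay_steps=1000,
    decay_rate=0.9,
    staircase=False)

# Step decay: keep the learning rate fixed within fixed intervals
# (here: 0.01 until step 2,000, then 0.005, then 0.001).
step_schedule = keras.optimizers.schedules.PiecewiseConstantDecay(
    boundaries=[2000, 4000],
    values=[0.01, 0.005, 0.001])

# A schedule can be passed wherever a constant learning rate would go.
optimizer = keras.optimizers.Adam(learning_rate=exp_schedule)
```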

While each has their benefits, there is a wide range of new questions:

These are all important questions, and the list goes on and on. It's impractical, if not impossible, to train your whole architecture every time such a question pops up, just to compare. Neither is performing a grid search a real option, as it is expensive (Smith, 2018). However, especially with respect to the first two questions, there is another way: the Learning Rate Range Test (Smith, 2018).

Let's take a look at what it is and what it does!

With the Learning Rate Range Test, it's possible to find an estimate of the optimal learning rate quite quickly and accurately. Smith (2018) gives a perfect introduction to the topic:

It is relatively straight-forward: in a test run, one starts with a very small learning rate, for which one runs the model and computes the loss on the validation data. One does this iteratively, while increasing the learning rate exponentially in parallel. One can then plot their findings into a diagram representing loss at the y axis and the learning rate at the x axis. The x value representing the lowest y value, i.e. the lowest loss, represents the optimal learning rate for the training data.

However, he also argues that

The learning rate at this extrema is the largest value that can be used as the learning rate for the maximum bound with cyclical learning rates but a smaller value will be necessary when choosing a constant learning rate or the network will not begin to converge.

Therefore, we'll simply pick a value just a tiny bit to the left of the loss minimum.

One such Learning Rate Range Test could, theoretically, yield the following plot:

It's a real plot generated with a ConvNet tested on MNIST data.

We see the fastest loss descent at a learning rate of 10^-1.95: in the first plot, the descent is steepest there. The second plot confirms this, as it displays the lowest loss delta, i.e. where the negative change in loss value (= improvement) was largest given the change in learning rate. By consequence, we would choose this learning rate.

Now that we know what the LR Range Test is, it's time to implement it with Keras. Fortunately, that's not a difficult thing to do!

Let's take a look.

We need a few dependencies if we wish to run this example successfully. Before you continue, make sure that you have them installed: TensorFlow 2.x (which includes the Keras API), the keras-lr-finder package, Matplotlib and NumPy.

Now, keep your command prompt open, and generate a new file, e.g. touch lr-finder.py. Open this file in a code editor, and you're ready to code.

The first thing I always do is to import everything we need:
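A minimal sketch of those imports, assuming TensorFlow 2.x (with bundled Keras), the keras-lr-finder package, Matplotlib and NumPy; the exact imports in the original post may differ slightly:

```python
import matplotlib.pyplot as plt
import numpy as np
import tensorflow as tf
from tensorflow.keras.datasets import mnist
from tensorflow.keras.layers import Conv2D, Dense, Flatten, MaxPooling2D
from tensorflow.keras.models import Sequential
from tensorflow.keras.optimizers import SGD, Adam
from keras_lr_finder import LRFinder
```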

Next, we set the configuration for our test scenario. We'll use batches of 250 samples for testing. Our images are 28 x 28 pixels and one-channeled, as the MNIST dataset is grayscale. The number of classes equals 10, while we'll test for 5 epochs (unless one of the abort conditions, such as a loss value that goes through the roof, occurs before then). Our estimated start learning rate is 10^-4, while we stop at 10. When generating a plot of our test results, we use a moving average of 20 loss values for smoothing the line, to make our results more interpretable.
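A sketch of that configuration, using the values from the description above; the moving_average helper is an assumption on my part and simply smooths a series with a sliding window:

```python
# Test configuration, as described above.
batch_size = 250
img_width, img_height, img_num_channels = 28, 28, 1
no_classes = 10
no_epochs = 5
start_lr = 1e-4          # estimated start learning rate: 10^-4
end_lr = 10              # learning rate at which the test stops
smoothing_window = 20    # moving average over 20 loss values

def moving_average(values, window=smoothing_window):
    """Smooth a series of values with a simple sliding-window average."""
    return np.convolve(values, np.ones(window) / window, mode='valid')
```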

The next things we do are related to the dataset:
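A hedged sketch of those dataset steps: load MNIST, reshape to the (width, height, channels) input shape the ConvNet expects, scale the pixel values, and one-hot encode the targets:

```python
# Load MNIST and prepare the training data for the test.
(input_train, target_train), (input_test, target_test) = mnist.load_data()

# Reshape to (samples, width, height, channels) and scale to [0, 1].
input_train = input_train.reshape(-1, img_width, img_height, img_num_channels)
input_train = input_train.astype('float32') / 255.0

# One-hot encode the targets for use with categorical crossentropy.
target_train = tf.keras.utils.to_categorical(target_train, no_classes)
```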

Then, we specify the model architecture. It's not the most important thing for today, but here it is. It's a simple ConvNet using Max Pooling:
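Here is a sketch of such an architecture; the layer sizes are illustrative rather than copied from the original post:

```python
# A simple ConvNet using Max Pooling.
model = Sequential([
    Conv2D(32, kernel_size=(3, 3), activation='relu',
           input_shape=(img_width, img_height, img_num_channels)),
    MaxPooling2D(pool_size=(2, 2)),
    Conv2D(64, kernel_size=(3, 3), activation='relu'),
    MaxPooling2D(pool_size=(2, 2)),
    Flatten(),
    Dense(128, activation='relu'),
    Dense(no_classes, activation='softmax')
])
```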

Now, here's the interesting part. We specified the model architecture in our previous step, so we can now decide which tests we want to perform. For the sake of simplicity, we specify only two, but you can test as many as you'd like:
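A sketch of those two test definitions, together with the empty containers the results will be collected into (the container names are assumptions, not the original variable names):

```python
# The two tests: the traditional SGD optimizer and the Adam optimizer.
tests = [
    ('SGD optimizer', SGD()),
    ('Adam optimizer', Adam()),
]

# Containers for the outcomes of every test.
learning_rates_per_test = []
losses_per_test = []
loss_changes_per_test = []
labels_per_test = []
```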

As you can see, the tests that we will perform today will find the best learning rate for the traditional SGD optimizer, and also for the Adam one. What's great is that by plotting them together (that's what we will do later), we can even compare the performance of the optimizers given this architecture. We can thus also answer the question: which optimizer produces the lowest loss?

Then, we perform the test. For every test, we specify the test_optimizer to be used as well as the label, and compile the model with that particular optimizer. This is followed by instantiating the Learning Rate Range Test through LRFinder, and performing the actual test using the training data and the configuration we specified above.

Once the test has finished (this may either be because we have completed all epochs, because loss becomes NaN, or because loss becomes too large), we take the learning rates, the losses and the loss changes, and store them in containers. However, before storing the loss changes, we smooth them using the moving_average that we defined before. Credits for the smoothing part of the code go to the keras-lr-finder package.

After smoothing, we store the learning rates per step, as well as the test losses and the labels, in the containers we specified before. This iteration ensures that all tests are performed in line with how we want them to perform.
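A hedged sketch of that loop, assuming the LRFinder API from the keras-lr-finder package (attributes such as lrs and losses follow that package, but verify them against the version you install):

```python
for test_label, test_optimizer in tests:
    # Compile the model with the optimizer under test.
    model.compile(loss='categorical_crossentropy',
                  optimizer=test_optimizer,
                  metrics=['accuracy'])

    # Instantiate the Learning Rate Range Test and run it on the training data.
    lr_finder = LRFinder(model)
    lr_finder.find(input_train, target_train,
                   start_lr=start_lr, end_lr=end_lr,
                   batch_size=batch_size, epochs=no_epochs)

    # Store the recorded learning rates and losses, smoothing the
    # per-step loss changes with the moving average defined earlier.
    learning_rates_per_test.append(lr_finder.lrs)
    losses_per_test.append(lr_finder.losses)
    loss_changes_per_test.append(moving_average(np.diff(lr_finder.losses)))
    labels_per_test.append(test_label)
```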

Now that we have the outcomes, we can visualize them! 🙂 We'll use Matplotlib for doing so, and we'll create two plots: one for the loss deltas and one for the actual loss values.

For each, the first thing we do is iterate over the containers and generate a curve for each test with plt.plot. In our case, this draws two curves in the same figure. This is followed by plot configuration: for example, we set the x axis to a logarithmic scale. Finally, a popup visualizes the end result.
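A sketch of those two plots, under the same assumptions as above; the loss plot comes first, the smoothed loss-delta plot second:

```python
# Plot 1: loss value per learning rate, for every test.
for lrs, losses, label in zip(learning_rates_per_test, losses_per_test, labels_per_test):
    plt.plot(lrs, losses, label=label)
plt.xscale('log')
plt.xlabel('Learning rate (log scale)')
plt.ylabel('Loss')
plt.legend()
plt.show()

# Plot 2: smoothed loss delta per learning rate, for every test.
for lrs, changes, label in zip(learning_rates_per_test, loss_changes_per_test, labels_per_test):
    plt.plot(lrs[:len(changes)], changes, label=label)
plt.xscale('log')
plt.xlabel('Learning rate (log scale)')
plt.ylabel('Loss delta (smoothed)')
plt.legend()
plt.show()
```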

All right, you should now have a model that runs!

Open up that terminal again, cd to the folder where your .py file is located (if you're not already there), and run e.g. python lr-finder.py. You should see the epochs begin, and once they finish, two plots similar to these ones should pop up sequentially:

The results are very clear: for this training setting, Adam performs substantially better. We can observe that it reaches a lower loss value compared to SGD (first plot), and that it does so in a much shorter time (second plot: the negative delta occurs at a lower learning rate). Likely, this is how we benefit from the fact that Adam performs local, per-parameter updates, whereas SGD does not. If we had to choose between these two optimizers, it would clearly be Adam, with a learning rate of 10^-3.95.

Full code is available at my GitHub repository.

In this blog post, we looked at the Learning Rate Range Test for finding the best learning rate for your neural network empirically.

This was done by looking at the concept of a learning rate before moving to Python code. What is a learning rate? Why is it useful? And how to configure it objectively? Do I need a fixed or a decaying learning rate? Those are all questions that we answered in the first part of this blog post.

In the second part, we introduced the Learning Rate Range Test: a method based on Smith (2018) that allows us to empirically determine the best learning rate for the model and the compile settings that you specify. It even allows us to compare multiple settings at once and see which learning rate is best for each!

In the third and final part, we used the keras-lr-finder package to implement the Learning Rate Range Test. With blocks of Python code, we explained each step of doing so, and why we took that particular step. This should allow you to use the Learning Rate Range Test in your own projects too.

Read more from the original source:
Finding Optimal Learning Rates. The Learning Rate Range Test | by Francesco Franco | Jan, 2024 - Medium

Artificial Intelligence Revolutionizes Material Discovery Across Industries – VoIp.Review

In a groundbreaking shift, artificial intelligence (AI) is revolutionizing material discovery, unlocking new possibilities in renewable energy, semiconductors, and pharmaceuticals. GlobalData, a prominent data and analytics firm, asserts that AI is spearheading a transformative era in research and development, dismantling traditional barriers and fueling unprecedented advancements in material science.

Saurabh Daga, Associate Project Manager of Disruptive Tech at GlobalData, underscores the pivotal role of AI in addressing specific industry needs. In renewable energy, AI is pivotal in surmounting efficiency and cost barriers essential for growth. The semiconductor sector relies on AI to identify materials for miniaturization and heat management crucial for future technologies. In pharmaceuticals, AI accelerates drug discovery and enhances biocompatibility, propelling personalized medicine. Essentially, AI has become the linchpin for unlocking innovative materials and propelling industry-specific progress.

Recent initiatives from tech giants and startups underscore the potential of AI in material discovery. Google DeepMind's Graph Networks for Materials Exploration (GNoME) employs advanced deep-learning models to discover new material structures and is utilized at Lawrence Berkeley National Laboratory's A-Lab, which combines robotics and machine learning for novel material synthesis.

Other noteworthy AI-driven endeavors include Quantum Generative Materials LLC's (GenMat) Generative AI for faster material simulation, a collaboration between Fujitsu and Icelandic startup Atmonia leveraging high-performance computing and AI for carbon-neutral technology advancements, and IBM's AI-enhanced, cloud-based molecular design platform Molecule Generation Experience (MolGX).

Despite the promise, challenges persist. Daga emphasizes that overcoming obstacles related to data, algorithms, and cross-industry collaboration is crucial for AI models to effectively accelerate material discovery. Establishing a robust supporting infrastructure is deemed vital to fully leverage the benefits offered by AI-powered material discovery. As AI continues to evolve, its transformative impact on material science is set to reshape development processes across key industries.

Read the original here:
Artificial Intelligence Revolutionizes Material Discovery Across Industries - VoIp.Review

Japan introduces world’s first machine learning model to screen Alzheimer’s disease – BSA bureau

Japan-based Oita University and pharmaceutical firm Eisai Co. have announced the development of the world's first machine learning model to predict amyloid beta (Aβ) accumulation in the brain using a wristband sensor. This model is expected to enable screening for brain Aβ accumulation, which is an important pathological factor of Alzheimer's disease (AD), simply by collecting biological and lifestyle data from daily life.

In AD, which is said to account for over 60% of the causes of dementia, Aβ begins to accumulate in the brain about 20 years before the onset of the disease. This has prompted the development of new therapeutic drugs targeting Aβ, leading to the approval of a humanized anti-soluble aggregated Aβ monoclonal antibody in Japan.

The key to maximising the treatment effects of the medicine is detecting Aβ accumulation in the brain of patients with mild cognitive impairment before the onset of symptoms. Currently, although brain Aβ accumulation can be detected by positron emission tomography (amyloid PET) and cerebrospinal fluid testing (CSF testing), the number of medical institutes able to perform those tests is limited, and the high cost and invasiveness of these tests are considered issues. Therefore, the development of an inexpensive and easy-to-use screening method has been sought to identify those who need amyloid PET or CSF testing.

Although lifestyle factors, including lack of exercise, social isolation, and sleep disorders, as well as diseases, including hypertension, diabetes, and cardiovascular disease, are known risk factors for AD, thus far, studies applying machine learning models to predict brain Aβ accumulation have used only cognitive function tests, blood tests, and brain imaging tests. In contrast, this is the first machine learning study to focus on "biological data" and "lifestyle data".

Link:
Japan introduces world's first machine learning model to screen Alzheimer's disease - BSA bureau

Wearable Biosensor Predicts Aggression Among Inpatients with Autism – mHealthIntelligence.com

January 02, 2024 - Physiological changes recorded by a wearable biosensor and analyzed through a machine-learning approach can help predict aggressive behavior before it occurs in young psychiatric facility patients with autism, new research shows.

The study published in JAMA Network Open last month by Northeastern University researchers adds to research examining whether imminent aggressive behavior among autistic inpatients can be determined via a wearable biosensor and machine learning.

About one in 36 children were diagnosed with autism spectrum disorder (ASD) in 2020, up from one in 44 in 2018, according to the Centers for Disease Control and Prevention's (CDC) Autism and Developmental Disabilities Monitoring (ADDM) Network. The prevalence of aggression among children and adolescents with ASD is high, with parents reporting in a 2011 study that 68 percent had demonstrated aggression to a caregiver and 49 percent to non-caregivers.

Prior research work by the Northeastern University team showed that three minutes of wearable biosensor-recorded peripheral physiological and motion signals gathered from 20 youths with autism could predict aggression toward others one minute before it occurred using ridge-regularized logistic regression.

The new study aimed to extend that research to determine whether the recorded data could be used to predict aggression toward others even earlier.

The researchers enrolled 86 participants at four primary care psychiatric inpatient hospitals. The participants had confirmed diagnoses of autism and exhibited self-injurious behavior, emotion dysregulation, or aggression toward others.

The research team collected patient data from March 2019 to March 2020. They coded aggressive behavior in real time while study participants wore a commercially available biosensor that recorded peripheral physiological signals, including cardiovascular activity, electrodermal activity, and motion. Of the 86 enrolled participants, only 70 were included in the analysis. Those excluded either could not wear the biosensor due to tactile sensitivity and general behavioral noncompliance or were discharged before an observation could be made.

During the study period, researchers collected 429 independent naturalistic observational coding sessions totaling 497 hours. They observed 6,665 aggressive behaviors, comprising 3,983 episodes of self-injurious behavior, 2,063 episodes of emotion dysregulation, and 619 episodes of aggression toward others.

Researchers conducted time-series feature extraction and data preprocessing, after which they used ridge-regularized logistic regression, support vector machines, neural networks, and domain adaptation to analyze the extracted time-series features to make binary aggressive behavior predictions.

They found that logistic regression was the best-performing overall classifier across eight experiments conducted. The classifier was able to predict aggressive behavior three minutes before it occurred with a mean area under the receiver operating characteristic curve of 0.80.

"Our results suggest that biosensor data and machine learning have the potential to redress an intractable problem for a sizable segment of the autism population who are understudied and underserved," the researchers concluded. "Our findings may lay the groundwork for developing just-in-time adaptive intervention mobile health systems that may enable new opportunities for preemptive intervention."

This is the latest instance of an mHealth tool that can be used to support care for youth with autism.

In September, Atlanta-based researchers announced they had developed a biomarker-based, eye-tracking diagnostic tool to diagnose ASD. The technology includes a portable tablet on which children watch videos of social interaction. The device monitors their "looking behavior" to pinpoint the social information the children are and are not looking at, according to the press release.

Clinicians review the data collected by the device and provide children and their families with a diagnosis and measures of the child's individual abilities, including social disability, verbal ability, and non-verbal learning skills.

Additionally, a University of California, Davis researcher received a five-year, $3.2 million grant from the National Institutes of Health (NIH) to study whether an ASD diagnosis among infants can be assessed effectively via telehealth.

The researcher, Meagan Talbott, Ph.D., and her team will enroll 120 infants between the ages of 6 and 12 months showing signs of delays or differences in their development. They will conduct four telehealth sessions over a year as well as additional assessments when the child is 3 years old to determine whether telehealth can help pinpoint possible ASD.

Link:
Wearable Biosensor Predicts Aggression Among Inpatients with Autism - mHealthIntelligence.com

From Points to Pictures: How GenAI Will Change Companies – InformationWeek

As more companies embrace artificial intelligence, and specifically generative AI (GenAI), we are headed for a landmark moment. GenAI is today mostly used with public data, but when GenAI models are trained, tuned and used with an enterprise's proprietary data, the combination unlocks the hidden patterns, connections, and insights that can transform a business.

Ten years ago, basic pattern finding was core to the idea of leveraging big data. Machine learning spotted patterns within a particular domain, like offering an online customer the right product. However, with the new computational and software innovations of GenAI, data can come from a much wider variety of sources across domains, with deep learning finding not just patterns in one domain, but also entirely new relationships among different domains.

Earlier limitations of technology and communications meant organizational designs eventually relied on creating independent, fractured data silos, leaving on the table a great potential for collective learning and improvement. GenAI, embedded in reimagined and hyper-connected business processes as well as new business intelligence platforms, can change this.

Google is among several companies working on the next generation of data analytics systems that build wide data records combining structured, unstructured, at-rest and in-movement data, ultimately turning the digital footprint of a company into a powerful AI model. In the future, the focus will need to shift from big data to wide data.


GenAI can now be instructed to take on specific roles and achieve specific goals on behalf of humans. AI agents will be the future do-ers, taking on the role of personas, such as a data engineer, and executing tasks within a workflow.

Automation follows a pattern: Insights, actions and processes are abstracted and embodied in a system, new workflows are established around trust and reliability, and finally widespread adoption follows. Think of automatically scheduling maintenance on a machine in a factory, or problem-solving natural language interactions in a call center. These are examples of trusted software agents carrying out autonomous actions across an enterprise.

The goal for GenAI in analytics is to make observations and generate insights that can accelerate the work of people. People will be able to uncover new approaches, identify trends faster, collaborate in unforeseen ways, and delegate to agents that have permission to act in autonomous ways to increase organizational effectiveness.


The role of human experts will be different and will require new skill sets. It's less about doing the work and more about knowing what a good result looks like and what the right question (or prompt) is. For example, a sales analyst will spend less time writing queries to gather data and more time judging whether the insights surfaced by AI are actually relevant. Business judgment becomes more important than technical analyst expertise.

GenAI for analytics brings us back to really understanding the question one is trying to solve and frees us from much of the complication in the technical toolkits that took the lion's share of our time and investments. Organizations that overly limit data access and employee empowerment are likely to become less competitive.

When things are changing in big ways, it's useful to think about the things that won't change, like offering value to customers, focusing on positive efficiencies, and creating new goods and services that excite people and improve lives. These core values will continue to steer the application of this new GenAI technology, and the world of business will be forever changed. GenAI represents a paradigm shift in how we will imagine and enact new ways of doing business, from enabling business users to "chat" with their business data, to supercharging data and analytics teams with an always-on collaborator, to automating business with AI-driven data intelligence.


Excerpt from:
From Points to Pictures: How GenAI Will Change Companies - InformationWeek