Archive for the ‘Machine Learning’ Category

AI vs. Machine Learning vs. Deep Learning vs. Neural … – IBM

These terms are often used interchangeably, but what are the differences that make them each a unique technology?

Technology is becoming more embedded in our daily lives by the minute, and in order to keep up with the pace of consumer expectations, companies are relying more heavily on learning algorithms to make things easier. You can see its application in social media (through object recognition in photos) or in talking directly to devices (like Alexa or Siri).

These technologies are commonly associated with artificial intelligence, machine learning, deep learning, and neural networks, and while they do all play a role, these terms tend to be used interchangeably in conversation, leading to some confusion around the nuances between them. Hopefully, we can use this blog post to clarify some of the ambiguity here.

Perhaps the easiest way to think about artificial intelligence, machine learning, neural networks, and deep learning is to think of them like Russian nesting dolls. Each is essentially a component of the prior term.

That is, machine learning is a subfield of artificial intelligence. Deep learning is a subfield of machine learning, and neural networks make up the backbone of deep learning algorithms. In fact, it is the number of node layers, or depth, of a neural network that distinguishes a single neural network from a deep learning algorithm, which must have more than three layers.

Neural networks, and more specifically artificial neural networks (ANNs), mimic the human brain through a set of algorithms. At a basic level, a neural network is composed of four main components: inputs, weights, a bias or threshold, and an output. Similar to linear regression, the algebraic formula would look something like this:
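The formula itself appears as an image in the original post; a reasonable reconstruction, based on the components just listed and the arithmetic worked through below, is a weighted sum of the inputs plus a bias term:

y-hat = (w1 * x1) + (w2 * x2) + (w3 * x3) + bias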

From there, let's apply it to a more tangible example, like whether or not you should order a pizza for dinner. This will be our predicted outcome, or y-hat. Let's assume that there are three main factors that will influence your decision:

Then, let's make the following assumptions, giving us these inputs:

For simplicity purposes, our inputs will have a binary value of 0 or 1. This technically defines it as a perceptron, since neural networks primarily leverage sigmoid neurons, which can take any input from negative infinity to positive infinity and squash it into an output between 0 and 1. This distinction is important since most real-world problems are nonlinear, so we need values which reduce how much influence any single input can have on the outcome. However, summarizing in this way will help you understand the underlying math at play here.

Moving on, we now need to assign some weights to determine importance. Larger weights make a single input's contribution to the output more significant compared to other inputs.

Finally, we'll also assume a threshold value of 5, which would translate to a bias value of -5.

Since we established all the relevant values for our summation, we can now plug them into this formula.

Using the following activation function, we can now calculate the output (i.e., our decision to order pizza):
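The activation function is also shown as an image in the original; for this perceptron-style example it is the simple step function, which fires only when the weighted sum clears the threshold:

output = 1 if (w1 * x1) + (w2 * x2) + (w3 * x3) + bias >= 0, otherwise output = 0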

In summary:

Y-hat (our predicted outcome) = Decide to order pizza or not

Y-hat = (1*5) + (0*3) + (1*2) - 5

Y-hat = 5 + 0 + 2 - 5

Y-hat = 2, which is greater than zero.

Since Y-hat is 2, the output from the activation function will be 1, meaning that we will order pizza (I mean, who doesn't love pizza).
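For readers who prefer code, here is a minimal Python sketch of the same single-neuron decision. The inputs, weights, and threshold mirror the worked example above; the function and variable names are illustrative, not from the original article.

```python
# A minimal sketch of the single-neuron (perceptron) pizza decision above.
# Inputs, weights, and threshold mirror the worked example.

def perceptron(inputs, weights, threshold):
    # Weighted sum of the inputs, then subtract the threshold
    # (equivalent to adding a bias of -threshold).
    weighted_sum = sum(w * x for w, x in zip(weights, inputs)) - threshold
    # Step activation: fire (1) if the sum is at or above zero, otherwise 0.
    return 1 if weighted_sum >= 0 else 0

inputs = [1, 0, 1]      # the three binary factors from the example
weights = [5, 3, 2]     # importance assigned to each factor
threshold = 5           # equivalent to a bias of -5

print(perceptron(inputs, weights, threshold))  # -> 1, i.e. order the pizza
```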

If the output of any individual node is above the specified threshold value, that node is activated, sending data to the next layer of the network. Otherwise, no data is passed along to the next layer of the network. Now, imagine the above process being repeated multiple times for a single decision as neural networks tend to have multiple hidden layers as part of deep learning algorithms. Each hidden layer has its own activation function, potentially passing information from the previous layer into the next one. Once all the outputs from the hidden layers are generated, then they are used as inputs to calculate the final output of the neural network. Again, the above example is just the most basic example of a neural network; most real-world examples are nonlinear and far more complex.
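To make the layer-to-layer flow concrete, here is a rough NumPy sketch of a forward pass through one hidden layer. The layer sizes, random weights, and sigmoid activation are assumptions chosen for illustration, not details from the article.

```python
import numpy as np

# Rough sketch: outputs from one layer become inputs to the next
# in a small feed-forward network.

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(0)
x = np.array([1.0, 0.0, 1.0])                  # same three inputs as before

W1, b1 = rng.normal(size=(4, 3)), np.zeros(4)  # hidden layer: 3 inputs -> 4 neurons
W2, b2 = rng.normal(size=(1, 4)), np.zeros(1)  # output layer: 4 -> 1

hidden = sigmoid(W1 @ x + b1)        # each hidden neuron applies its own activation
output = sigmoid(W2 @ hidden + b2)   # hidden outputs become inputs to the final neuron
print(output)
```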

The main difference between regression and a neural network is the impact of change on a single weight. In regression, you can change a weight without affecting the other inputs in a function. However, this isn't the case with neural networks. Since the output of one layer is passed into the next layer of the network, a single change can have a cascading effect on the other neurons in the network.

See this IBM Developer article for a deeper explanation of the quantitative concepts involved in neural networks.

While it was implied within the explanation of neural networks, it's worth noting more explicitly: the "deep" in deep learning refers to the depth of layers in a neural network. A neural network that consists of more than three layers, which would be inclusive of the inputs and the output, can be considered a deep learning algorithm. This is generally represented using the following diagram:

Most deep neural networks are feed-forward, meaning they flow in one direction only, from input to output. However, you can also train your model through backpropagation; that is, by moving in the opposite direction, from output to input. Backpropagation allows us to calculate and attribute the error associated with each neuron, allowing us to adjust and fit the algorithm appropriately.
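As a rough illustration of that error attribution, the following sketch performs one backpropagation step on the tiny network above, assuming a squared-error loss and sigmoid activations. It is a sketch of the idea, not production training code, and all names are illustrative.

```python
import numpy as np

# One backpropagation step for a tiny 3-4-1 network:
# propagate the error backward, attribute it to each weight, nudge the weights.

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(1)
x = np.array([1.0, 0.0, 1.0])
target = np.array([1.0])                        # desired output ("order the pizza")
W1, W2 = rng.normal(size=(4, 3)), rng.normal(size=(1, 4))
lr = 0.1                                        # learning rate

# Forward pass (input -> output).
h = sigmoid(W1 @ x)
y = sigmoid(W2 @ h)

# Backward pass (output -> input): compute error signals, then gradients.
delta_out = (y - target) * y * (1 - y)          # error at the output neuron
delta_hid = (W2.T @ delta_out) * h * (1 - h)    # error attributed to each hidden neuron

W2 -= lr * np.outer(delta_out, h)               # adjust weights to reduce the error
W1 -= lr * np.outer(delta_hid, x)
```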

As we explain in our Learn Hub article on Deep Learning, deep learning is merely a subset of machine learning. The primary ways in which they differ are in how each algorithm learns and how much data each type of algorithm uses. Deep learning automates much of the feature extraction piece of the process, eliminating some of the manual human intervention required. It also enables the use of large data sets, earning itself the title of "scalable machine learning" in this MIT lecture. This capability will be particularly interesting as we begin to explore the use of unstructured data more, particularly since 80-90% of an organization's data is estimated to be unstructured.

Classical, or "non-deep", machine learning is more dependent on human intervention to learn. Human experts determine the hierarchy of features to understand the differences between data inputs, usually requiring more structured data to learn. For example, let's say that I were to show you a series of images of different types of fast food: pizza, burger, or taco. The human expert on these images would determine the characteristics which distinguish each picture as the specific fast food type. For example, the bread of each food type might be a distinguishing feature across each picture. Alternatively, you might just use labels, such as "pizza", "burger", or "taco", to streamline the learning process through supervised learning.

"Deep" machine learning can leverage labeled datasets, also known as supervised learning, to inform its algorithm, but it doesnt necessarily require a labeled dataset. It can ingest unstructured data in its raw form (e.g. text, images), and it can automatically determine the set of features which distinguish "pizza", "burger", and "taco" from one another.

For a deep dive into the differences between these approaches, check out "Supervised vs. Unsupervised Learning: What's the Difference?"

By observing patterns in the data, a deep learning model can cluster inputs appropriately. Taking the same example from earlier, we could group pictures of pizzas, burgers, and tacos into their respective categories based on the similarities or differences identified in the images. With that said, a deep learning model would require more data points to improve its accuracy, whereas a machine learning model relies on less data given the underlying data structure. Deep learning is primarily leveraged for more complex use cases, like virtual assistants or fraud detection.

For further info on machine learning, check out the following video:

Finally, artificial intelligence (AI) is the broadest term used to classify machines that mimic human intelligence. It is used to predict, automate, and optimize tasks that humans have historically done, such as speech and facial recognition, decision making, and translation.

There are three main categories of AI: artificial narrow intelligence (ANI), artificial general intelligence (AGI), and artificial superintelligence (ASI).

ANI is considered weak AI, whereas the other two types are classified as strong AI. Weak AI is defined by its ability to complete a very specific task, like winning a chess game or identifying a specific individual in a series of photos. As we move into stronger forms of AI, like AGI and ASI, the incorporation of more human behaviors becomes more prominent, such as the ability to interpret tone and emotion. Chatbots and virtual assistants, like Siri, are scratching the surface of this, but they are still examples of ANI.

Strong AI is defined by its ability relative to humans. Artificial general intelligence (AGI) would perform on par with a human, while artificial superintelligence (ASI), also known as superintelligence, would surpass a human's intelligence and ability. Neither form of strong AI exists yet, but research in this field is ongoing. Since this area of AI is still rapidly evolving, the best example that I can offer on what this might look like is the character Dolores on the HBO show Westworld.

While all these areas of AI can help streamline areas of your business and improve your customer experience, achieving AI goals can be challenging because you'll first need to ensure that you have the right systems in place to manage your data for the construction of learning algorithms. Data management is arguably harder than building the actual models that you'll use for your business. You'll need a place to store your data and mechanisms for cleaning it and controlling for bias before you can start building anything. Take a look at some of IBM's product offerings to help you and your business get on the right track to prepare and manage your data at scale.

Read the original post:
AI vs. Machine Learning vs. Deep Learning vs. Neural ... - IBM

The Role of Artificial Intelligence and Machine Learning in Ransomware Protection: How enterprises Can Leve… – Security Boulevard

Here is the original post:
The Role of Artificial Intelligence and Machine Learning in Ransomware Protection: How enterprises Can Leve... - Security Boulevard

Artificial Intelligence in Health Care: Benefits and Challenges of Machine Learning Technologies for Medical Diagnostics – Government Accountability…

What GAO Found

Several machine learning (ML) technologies are available in the U.S. to assist with the diagnostic process. The resulting benefits include earlier detection of diseases; more consistent analysis of medical data; and increased access to care, particularly for underserved populations. GAO identified a variety of ML-based technologies for five selected diseases (certain cancers, diabetic retinopathy, Alzheimer's disease, heart disease, and COVID-19), with most technologies relying on data from imaging such as x-rays or magnetic resonance imaging (MRI). However, these ML technologies have generally not been widely adopted.

Academic, government, and private sector researchers are working to expand the capabilities of ML-based medical diagnostic technologies. In addition, GAO identified three broader emerging approaches (autonomous, adaptive, and consumer-oriented ML diagnostics) that can be applied to diagnose a variety of diseases. These advances could enhance medical professionals' capabilities and improve patient treatments but also have certain limitations. For example, adaptive technologies may improve accuracy by incorporating additional data to update themselves, but automatic incorporation of low-quality data may lead to inconsistent or poorer algorithmic performance.

Figure: Spectrum of adaptive algorithms

We identified several challenges affecting the development and adoption of ML in medical diagnostics:

These challenges affect various stakeholders including technology developers, medical providers, and patients, and may slow the development and adoption of these technologies.

GAO developed three policy options that could help address these challenges or enhance the benefits of ML diagnostic technologies. These policy options identify possible actions by policymakers, which include Congress, federal agencies, state and local governments, academic and research institutions, and industry. See below for a summary of the policy options and relevant opportunities and considerations.

Policy Options to Help Address Challenges or Enhance Benefits of ML Diagnostic Technologies

Evaluation (report page 28)

Policymakers could create incentives, guidance, or policies to encourage or require the evaluation of ML diagnostic technologies across a range of deployment conditions and demographics representative of the intended use.

This policy option could help address the challenge of demonstrating real world performance.

Data Access (report page 29)

Policymakers could develop or expand access to high-quality medical data to develop and test ML medical diagnostic technologies. Examples include standards for collecting and sharing data, creating data commons, or using incentives to encourage data sharing.

This policy option could help address the challenge of demonstrating real world performance.

Collaboration (report page 30)

Policymakers could promote collaboration among developers, providers, and regulators in the development and adoption of ML diagnostic technologies. For example, policymakers could convene multidisciplinary experts together in the design and development of these technologies through workshops and conferences.

This policy option could help address the challenges of meeting medical needs and addressing regulatory gaps.

Source: GAO. | GAO-22-104629

Diagnostic errors affect more than 12 million Americans each year, with aggregate costs likely in excess of $100 billion, according to a report by the Society to Improve Diagnosis in Medicine. ML, a subfield of artificial intelligence, has emerged as a powerful tool for solving complex problems in diverse domains, including medical diagnostics. However, challenges to the development and use of machine learning technologies in medical diagnostics raise technological, economic, and regulatory questions.

GAO was asked to conduct a technology assessment on the current and emerging uses of machine learning in medical diagnostics, as well as the challenges and policy implications of these technologies. This report discusses (1) currently available ML medical diagnostic technologies for five selected diseases, (2) emerging ML medical diagnostic technologies, (3) challenges affecting the development and adoption of ML technologies for medical diagnosis, and (4) policy options to help address these challenges.

GAO assessed available and emerging ML technologies; interviewed stakeholders from government, industry, and academia; convened a meeting of experts in collaboration with the National Academy of Medicine; and reviewed reports and scientific literature. GAO is identifying policy options in this report.

For more information, contact Karen L. Howard at (202) 512-6888 or howardk@gao.gov.

More here:
Artificial Intelligence in Health Care: Benefits and Challenges of Machine Learning Technologies for Medical Diagnostics - Government Accountability...

4 Types of Machine Learning to Know – Built In

How else could you analyze 36,000 naked mole rat chirps to find out what they're talking about?

Or translate your cat's purr or meow to know it's just chilling?

Or auto-generate an image like this just by typing in the words: "giant squid assembling Ikea furniture"?

Thanks to different types of machine learning, that's all seemingly possible.

More on AI vs. Machine Learning: "Artificial Intelligence vs. Machine Learning vs. Deep Learning: What's the Difference?"

Machine learning is a branch of artificial intelligence where algorithms identify patterns in data, which are then used to make accurate predictions or complete a given task, like filtering spam emails. The process, which relies on algorithms and statistical models to identify patterns in data, doesn't require consistent, or explicit, programming. It's then further optimized through trial and error and feedback, meaning machines learn by experience and increased exposure to data, much the same way humans do.

Today, machine learning is a popular tool used in a range of industries, from banking and insurance, where it's used to detect fraud, to healthcare, retail marketing, and trend forecasting in housing and other markets.

Supervised learning is machine learning with a human touch.

With supervised learning, tagged input and output data is constantly fed and re-fed into human-trained systems that offer real-time guidance, with predictions increasing in accuracy after each new data set is fed into the system. One of the most popular forms of machine learning, supervised learning requires a significant amount of human intervention (on data the system may be uncertain about) and time, along with vast volumes of data, to make accurate predictions, which limits how readily it transfers from one use case to another.

Supervised learning, like each of these machine learning types, serves as an umbrella for specific algorithms and statistical models. Here are a few that fall under supervised learning.

Used to further categorize data (think pesky spam and unrelenting marketing emails), classification algorithms are a great tool for sorting, and even hiding, that data. (If you use Gmail or any large email client, you may notice that some emails are automatically redirected to a spam or promotions folder, essentially hiding those emails from view.)

Under the broad umbrella of classification algorithms, there's an even narrower subset of specific machine learning algorithms, like naive Bayes classifier algorithms, support vector machine algorithms, decision trees and random forest models, that are used to sort data.
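As a small illustration of the spam-filtering use case, here is a short scikit-learn sketch using one of the algorithms named above, a naive Bayes classifier. The toy emails and labels are invented purely for the example.

```python
# Spam classification sketch with a naive Bayes classifier (scikit-learn).
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

emails = [
    "limited time offer, click now to win",
    "meeting moved to 3pm, see agenda attached",
    "you have been selected for a free prize",
    "lunch tomorrow? let me know",
]
labels = ["spam", "ham", "spam", "ham"]   # human-provided labels (supervised learning)

model = make_pipeline(CountVectorizer(), MultinomialNB())
model.fit(emails, labels)

print(model.predict(["click here for your free offer"]))  # likely ['spam']
```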

When it comes to forecasting trends, like home prices in the housing market, regression algorithms are popular tools. These algorithms identify relationships between outcomes and other independent variables to make accurate predictions. Linear regression algorithms are the most widely used, but other commonly used regression algorithms include logistic regressions, ridge regressions and lasso regressions.
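A quick sketch of the home-price example with scikit-learn's linear regression follows; the features and prices are made up purely to show the fit-and-predict pattern.

```python
# Home-price regression sketch with scikit-learn's LinearRegression.
import numpy as np
from sklearn.linear_model import LinearRegression

# Each row: [square feet, number of bedrooms]
X = np.array([[1200, 2], [1600, 3], [2000, 3], [2400, 4]])
y = np.array([200_000, 260_000, 310_000, 370_000])   # sale prices

model = LinearRegression().fit(X, y)
print(model.predict([[1800, 3]]))   # estimated price for an unseen home
```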

With unsupervised learning, raw data that's neither labeled nor tagged is processed by the system, meaning less legwork for humans.

Unsupervised learning algorithms work by identifying patterns within a data set, grouping information based on similarities and differences, which is helpful when you're not sure what to look for, though outcomes and predictions are less accurate than with supervised learning. Unsupervised learning is especially useful in customer and audience segmentation, as well as identifying patterns in recorded audio and image data.

Here's one example of an unsupervised learning algorithm.

Clustering algorithms are the most widely used example of unsupervised machine learning. These algorithms focus on similarities within raw data and then group that information accordingly. More simply, these algorithms provide structure to raw data. Clustering algorithms are often used with marketing data to garner customer (or potential customer) insights, as well as for fraud detection. Some clustering algorithms include KNN clustering, principal component analysis, hierarchical clustering and k-means clustering.
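Here is a minimal customer-segmentation sketch using k-means, one of the clustering algorithms listed above; the spending figures are invented for illustration.

```python
# Customer segmentation sketch with k-means clustering (scikit-learn).
import numpy as np
from sklearn.cluster import KMeans

# Each row: [annual spend, visits per month]
customers = np.array([[120, 1], [150, 2], [900, 8], [950, 10], [500, 4], [530, 5]])

kmeans = KMeans(n_clusters=3, n_init=10, random_state=0).fit(customers)
print(kmeans.labels_)   # which segment each customer falls into
```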

Semi-supervised learning offers a balanced mix of both supervised and unsupervised learning. With semi-supervised learning, a hybrid approach is taken as small amounts of tagged data are processed alongside larger chunks of raw data. This strategy essentially gives algorithms a head start when it comes to identifying relevant patterns and making accurate predictions when compared with unsupervised learning algorithms, without the time, effort and cost associated with more labor-intensive supervised learning algorithms.

Semi-supervised learning is typically used in applications ranging from fraud detection to speech recognition, as well as text document classification. Because semi-supervised learning uses both labeled and unlabeled data, it often relies on modified supervised and unsupervised algorithms trained for both data types.
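A small sketch of the hybrid idea, using scikit-learn's self-training wrapper (one reasonable way to combine a few labels with a larger unlabeled pool; the synthetic data and choice of base model are assumptions for illustration):

```python
# Semi-supervised sketch: a few labeled examples plus a larger unlabeled pool,
# where unlabeled samples are marked with -1 as scikit-learn expects.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.semi_supervised import SelfTrainingClassifier

X, y = make_classification(n_samples=200, random_state=0)
y_partial = y.copy()
y_partial[20:] = -1          # pretend only the first 20 samples carry labels

model = SelfTrainingClassifier(LogisticRegression(max_iter=1000))
model.fit(X, y_partial)       # the base model labels the remaining data iteratively
print(model.score(X, y))      # accuracy against the full ground truth
```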

More on Machine Learning Innovation: 28 Machine Learning Companies You Should Know

With reinforcement learning, AI-powered computer software programs outfitted with sensors, commonly referred to as intelligent agents, respond to their surrounding environment (think simulations, computer games and the real world) to make decisions independently that achieve a desired outcome. By perceiving and interacting with their environment, intelligent agents learn through trial and error, ultimately reaching optimal proficiency through positive reinforcement, or rewards, during the learning process. Reinforcement learning is often used in robotics, helping robots acquire specific skills and behaviors.

These are some of the algorithms that fall under reinforcement learning.

Q-learning is a reinforcement learning algorithm that does not require a model of the intelligent agents environment. Q-learning algorithms calculate the value of actions based on rewards resulting from those actions to improve outcomes and behaviors.
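To show the update rule concretely, here is a bare-bones Q-learning sketch on an invented five-state corridor with a reward at the far end; the environment, hyperparameters, and names are illustrative assumptions, not details from the article.

```python
import random

# Bare-bones Q-learning on a five-state corridor: the agent starts in some
# state and a reward of 1 waits at state 4. No model of the environment is
# needed; values are learned purely from observed transitions.
n_states, n_actions = 5, 2               # actions: 0 = move left, 1 = move right
Q = [[0.0] * n_actions for _ in range(n_states)]
alpha, gamma = 0.5, 0.9                  # learning rate and discount factor

for _ in range(2000):                    # explore with random actions
    s = random.randrange(n_states - 1)   # any non-terminal start state
    a = random.randrange(n_actions)
    s_next = max(s - 1, 0) if a == 0 else s + 1
    r = 1.0 if s_next == n_states - 1 else 0.0
    # Core Q-learning update: nudge Q(s, a) toward reward + discounted best future value.
    Q[s][a] += alpha * (r + gamma * max(Q[s_next]) - Q[s][a])

print([max(range(n_actions), key=lambda i: Q[s][i]) for s in range(n_states - 1)])
# -> [1, 1, 1, 1]: the learned policy moves right, toward the reward, in every state.
```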

Used in the development of self-driving cars, video games and robots, deep reinforcement learning combines deep learning (machine learning based on artificial neural networks) with reinforcement learning, where actions, or responses to the artificial neural network's environment, are either rewarded or punished. With deep reinforcement learning, vast amounts of data and increased computing power are required.

Read the original:
4 Types of Machine Learning to Know - Built In

Google turns to machine learning to advance translation of text out in the real world – TechCrunch

Google is giving its translation service an upgrade with a new machine learning-powered addition that will allow users to more easily translate text that appears in the real world, like on storefronts, menus, documents, business cards and other items. Instead of covering up the original text with the translation, the new feature will smartly overlay the translated text on top of the image, while also rebuilding the pixels underneath with an AI-generated background to make the process of reading the translation feel more natural.

"Often it's that combination of the word plus the context, like the background image, that really brings meaning to what you're seeing," explained Cathy Edwards, VP and GM of Google Search, in a briefing ahead of today's announcement. "You don't want to translate a text to cover up that important context that can come through in the images," she said.

Image Credits: Google

To make this process work, Google is using a machine learning technology known as generative adversarial networks, otherwise known as GAN models, the same technology that powers the Magic Eraser feature to remove objects from photos taken on Google Pixel smartphones. This advancement will allow Google to now blend the translated text into even very complex images, making the translation feel natural and seamless, the company says. It should seem as if you're looking at the item or object itself with translated text, not an overlay obscuring the image.

The feature is another development that seems to point to Google's plans to further invest in the creation of new AR glasses, as an ability to translate text in the real world could be a key selling point for such a device. The company noted that every month, people use Google to translate text and images over a billion times in more than 100 languages. It also began testing AR prototypes in public settings this year with a handful of employees and trusted testers, it said.

While there's obvious demand for better translation, it's not clear if users will prefer to use their smartphone for translations rather than special eyewear. After all, Google's first entry into the smartglasses space, Google Glass, ultimately failed as a consumer product.

Google didn't speak to its long-term plans for the translation feature today, noting only that it would arrive sometime later this year.

Read the original post:
Google turns to machine learning to advance translation of text out in the real world - TechCrunch