Archive for the ‘Machine Learning’ Category

Machine Learning for education: Trends to expect in 2023 – Express Computer

By Subramanyam Reddy, CEO and Founder, KnowledgeHut upGrad

The global Machine Learning market was valued at US$ 6.9 billion in 2018 and is projected to grow at a CAGR of over 43% between 2019 and 2025, as per a Bloomberg report. Against this backdrop, ML has also emerged as one of the fastest-growing fields for career seekers, boasting a year-on-year growth rate of 300% and enjoying unprecedented popularity among young professionals. Machine Learning's growth and popularity are rooted in the growing digitization of all sectors across the world, notably education.

During and after the pandemic in particular, the education sector has had to fast-track the adoption of technology in delivery. AI and ML applications are revolutionizing the education and EdTech sectors, driving delivery, assessment, and enhanced retention amongst learners. After the USA, India is one of the biggest markets for e-learning solutions in the world.

The autonomous way in which computers learn is in turn shaping how learning happens in classrooms and beyond. Machine Learning's (ML's) giant strides in rapidly transforming the field of education in India are expected to continue in 2023 and beyond.

Let's look at some of the trends emerging in the sector this year and beyond:

Personalised learning is emerging as one of the foremost impact areas of ML in education. Across schools and universities in India, personalised learning is gaining traction, driven by AI & ML. By analyzing patterns and behaviours, ML aids instructors and teachers in customising learning for different learners' needs. The effectiveness of these interventions is also analyzed by ML.

Another emerging trend driven by AI & ML in education is the development of AI-powered tools to aid learning. The shockwaves created by ChatGPT and other AI-powered platforms are stirring curiosity about how these tools will help people learn, be it coding, writing better, or developing creative concepts. Access to vast quantities of data and superfast processing capabilities let such platforms generate accurate answers to the questions posed. While some may argue that humans cannot match supercomputers in access or processing power, the aim of these technologies is not to one-up humans. The advent of such tools changes the approach to learning in a fundamental way: what outcomes we seek through learning, and how technology can aid those outcomes, become the focal points.

India has twenty-two officially recognised languages and hundreds of unofficial languages and dialects spoken across the country. Effective communication is often one of the biggest challenges in the public works domain. For effective reach and improved access to information, AI & ML tools and technologies such as NLP play a significant role in helping people learn languages and improve communication and collaboration across geographies. With ML's aid, learning languages can become simpler and more accessible to a larger audience.

When it comes to assessment and evaluation in learning, the human perspective is, more often than not, rooted in personal prejudices and biases. The objective perspective is lost in such scenarios, making evaluations a tool to deter rather than advance. The way ML steps into these areas of assessment and evaluation completely changes the game. The same assessments then become a path for advancement, through the identification of areas of improvement and the existing strengths of the learner.

Overall, the use of ML in education in India is expected to continue to grow in 2023, with more educators and institutions turning to these technologies to improve the learning experience for students.

Read more:
Machine Learning for education: Trends to expect in 2023 - Express Computer

What Is Machine Learning and Why Is It Important? – SearchEnterpriseAI

What is machine learning?

Machine learning (ML) is a type of artificial intelligence (AI) that allows software applications to become more accurate at predicting outcomes without being explicitly programmed to do so. Machine learning algorithms use historical data as input to predict new output values.

Recommendation engines are a common use case for machine learning. Other popular uses include fraud detection, spam filtering, malware threat detection, business process automation (BPA) and predictive maintenance.

Machine learning is important because it gives enterprises a view of trends in customer behavior and business operational patterns, as well as supports the development of new products. Many of today's leading companies, such as Facebook, Google and Uber, make machine learning a central part of their operations. Machine learning has become a significant competitive differentiator for many companies.

Classical machine learning is often categorized by how an algorithm learns to become more accurate in its predictions. There are four basic approaches: supervised learning, unsupervised learning, semi-supervised learning and reinforcement learning. The type of algorithm data scientists choose to use depends on what type of data they want to predict.

Supervised machine learning requires the data scientist to train the algorithm with both labeled inputs and desired outputs. Supervised learning algorithms are good for tasks such as classification (assigning inputs to known categories) and regression (predicting continuous values), as the sketch below illustrates.
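To make this concrete, here is a minimal sketch of supervised learning, assuming Python with scikit-learn (the article does not name a library): a classifier is trained on labeled examples and then scored on examples it has not seen.

```python
# A minimal supervised-learning sketch (assumes scikit-learn is installed).
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = load_iris(return_X_y=True)                # labeled inputs and desired outputs
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

model = LogisticRegression(max_iter=1000)        # one of many supervised algorithms
model.fit(X_train, y_train)                      # learn the mapping from the labels

print("held-out accuracy:", model.score(X_test, y_test))
```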

Unsupervised machine learning algorithms do not require data to be labeled. They sift through unlabeled data to look for patterns that can be used to group data points into subsets. Some deep learning techniques, such as autoencoders, are unsupervised algorithms, though neural networks in general can be trained with or without labels. Unsupervised learning algorithms are good for tasks such as clustering, anomaly detection, and dimensionality reduction.
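A hedged sketch of the unsupervised case, again assuming scikit-learn: no labels are given, and the algorithm groups unlabeled points into subsets by similarity.

```python
# A minimal unsupervised-learning sketch (assumes scikit-learn).
from sklearn.cluster import KMeans
from sklearn.datasets import make_blobs

X, _ = make_blobs(n_samples=300, centers=3, random_state=42)  # labels are discarded

kmeans = KMeans(n_clusters=3, n_init=10, random_state=42)
clusters = kmeans.fit_predict(X)       # each point is assigned to a discovered subset

print(clusters[:10])                   # cluster index for the first ten points
```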

Semi-supervised learning works by data scientists feeding a small amount of labeled training data to an algorithm. From this, the algorithm learns the dimensions of the data set, which it can then apply to new, unlabeled data. The performance of algorithms typically improves when they train on labeled data sets. But labeling data can be time consuming and expensive. Semi-supervised learning strikes a middle ground between the performance of supervised learning and the efficiency of unsupervised learning. Areas where semi-supervised learning is used include fraud detection, machine translation, and labeling large data sets.
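One way this looks in practice, as a sketch using scikit-learn's self-training wrapper (an assumption; the article names no tool): points marked with the label -1 are treated as unlabeled, and the model bootstraps from the small labeled portion.

```python
# A minimal semi-supervised sketch (assumes scikit-learn's SelfTrainingClassifier).
import numpy as np
from sklearn.datasets import load_iris
from sklearn.semi_supervised import SelfTrainingClassifier
from sklearn.svm import SVC

X, y = load_iris(return_X_y=True)
rng = np.random.RandomState(42)
y_partial = y.copy()
y_partial[rng.rand(len(y)) < 0.7] = -1   # hide ~70% of the labels (-1 = unlabeled)

model = SelfTrainingClassifier(SVC(probability=True))
model.fit(X, y_partial)                  # trains on labeled + unlabeled data together

print("accuracy vs. true labels:", model.score(X, y))
```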

Reinforcement learning works by programming an algorithm with a distinct goal and a prescribed set of rules for accomplishing that goal. Data scientists also program the algorithm to seek positive rewards -- which it receives when it performs an action that is beneficial toward the ultimate goal -- and avoid punishments -- which it receives when it performs an action that gets it farther away from its ultimate goal. Reinforcement learning is often used in areas such as robotics, video game play, and resource management.
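A toy illustration of the reward-seeking loop, in plain NumPy with an environment of our own invention (a five-cell corridor with a reward at the far right); tabular Q-learning is just one of many reinforcement-learning methods.

```python
# Tabular Q-learning on a tiny made-up corridor environment.
import numpy as np

n_states, actions = 5, [-1, +1]        # move left or move right
Q = np.zeros((n_states, len(actions)))
alpha, gamma, epsilon = 0.5, 0.9, 0.1  # learning rate, discount, exploration rate

for episode in range(500):
    state = 0
    while state != n_states - 1:
        # epsilon-greedy: mostly exploit the best known action, sometimes explore
        a = np.random.randint(2) if np.random.rand() < epsilon else int(Q[state].argmax())
        next_state = min(max(state + actions[a], 0), n_states - 1)
        reward = 1.0 if next_state == n_states - 1 else 0.0   # positive reward at goal
        # update: nudge the estimate toward reward + discounted future value
        Q[state, a] += alpha * (reward + gamma * Q[next_state].max() - Q[state, a])
        state = next_state

print(Q.argmax(axis=1))                # best action per state (1 = "move right")
```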

Today, machine learning is used in a wide range of applications. Perhaps one of the most well-known examples of machine learning in action is the recommendation engine that powers Facebook's news feed.

Facebook uses machine learning to personalize how each member's feed is delivered. If a member frequently stops to read a particular group's posts, the recommendation engine will start to show more of that group's activity earlier in the feed.

Behind the scenes, the engine is attempting to reinforce known patterns in the member's online behavior. Should the member change patterns and fail to read posts from that group in the coming weeks, the news feed will adjust accordingly.
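In spirit, such a ranker boosts sources the member has engaged with and decays those boosts when the engagement pattern changes. The following is a purely hypothetical sketch, not Facebook's actual system; all names are our own.

```python
# Hypothetical feed-ranking sketch: reinforce groups the member reads,
# decay the boost when the reading pattern changes.
from collections import defaultdict

engagement = defaultdict(float)        # group -> learned affinity score

def record_read(group):
    engagement[group] += 1.0           # reading a post reinforces the pattern

def decay():
    for group in engagement:           # called periodically; old habits fade
        engagement[group] *= 0.9

def rank_feed(posts):
    # posts: list of (group, post_id); higher-affinity groups surface earlier
    return sorted(posts, key=lambda p: engagement[p[0]], reverse=True)

record_read("hiking club"); record_read("hiking club")
print(rank_feed([("news", 1), ("hiking club", 2)]))   # hiking club post ranks first
```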

In addition to recommendation engines, machine learning has seen use cases ranging from predicting customer behavior to powering the systems behind self-driving cars.

When it comes to advantages, machine learning can help enterprises understand their customers at a deeper level. By collecting customer data and correlating it with behaviors over time, machine learning algorithms can learn associations and help teams tailor product development and marketing initiatives to customer demand.

Some companies use machine learning as a primary driver in their business models. Uber, for example, uses algorithms to match drivers with riders. Google uses machine learning to surface the right advertisements in searches.

But machine learning comes with disadvantages. First and foremost, it can be expensive. Machine learning projects are typically driven by data scientists, who command high salaries. These projects also require software infrastructure that can be expensive.

There is also the problem of machine learning bias. Algorithms trained on data sets that exclude certain populations or contain errors can lead to inaccurate models of the world that, at best, fail and, at worst, are discriminatory. When an enterprise bases core business processes on biased models, it risks regulatory and reputational harm.

The process of choosing the right machine learning model to solve a problem can be time consuming if not approached strategically.

Step 1: Align the problem with potential data inputs that should be considered for the solution. This step requires help from data scientists and experts who have a deep understanding of the problem.

Step 2: Collect data, format it and label the data if necessary. This step is typically led by data scientists, with help from data wranglers.

Step 3: Choose which algorithm(s) to use and test to see how well they perform. This step is usually carried out by data scientists.

Step 4: Continue to fine-tune outputs until they reach an acceptable level of accuracy. This step is usually carried out by data scientists with feedback from experts who have a deep understanding of the problem.
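Steps 3 and 4 might look like the following sketch, assuming scikit-learn and a built-in dataset standing in for the problem data: several candidate algorithms are compared by cross-validation, and the best is then tuned further.

```python
# A sketch of Steps 3-4 (assumes scikit-learn): compare candidate algorithms.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.tree import DecisionTreeClassifier

X, y = load_breast_cancer(return_X_y=True)
candidates = {
    "logistic regression": make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000)),
    "decision tree": DecisionTreeClassifier(random_state=42),
    "random forest": RandomForestClassifier(random_state=42),
}
for name, model in candidates.items():
    scores = cross_val_score(model, X, y, cv=5)      # Step 3: test performance
    print(f"{name}: mean accuracy {scores.mean():.3f}")
# Step 4: fine-tune the winner, e.g. by grid-searching its hyperparameters.
```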

Explaining how a specific ML model works can be challenging when the model is complex. There are some vertical industries where data scientists have to use simple machine learning models because it's important for the business to explain how every decision was made. This is especially true in industries with heavy compliance burdens such as banking and insurance.

Complex models can produce accurate predictions, but explaining to a lay person how an output was determined can be difficult.
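One reason simple models are favored in regulated industries is that their decisions decompose into inspectable parts. As a hedged illustration (assuming scikit-learn; the feature names come from a built-in dataset, not from any real compliance setting), a logistic regression exposes one coefficient per feature, so each prediction can be traced to a weighted sum.

```python
# Why simple models are easier to explain: inspectable per-feature weights.
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

data = load_breast_cancer()
model = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
model.fit(data.data, data.target)

coefs = model.named_steps["logisticregression"].coef_[0]
for name, w in sorted(zip(data.feature_names, coefs), key=lambda t: abs(t[1]))[-5:]:
    print(f"{name}: {w:+.2f}")         # the five most influential features
```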

While machine learning algorithms have been around for decades, they've attained new popularity as artificial intelligence has grown in prominence. Deep learning models, in particular, power today's most advanced AI applications.

Machine learning platforms are among enterprise technology's most competitive realms, with most major vendors, including Amazon, Google, Microsoft, IBM and others, racing to sign customers up for platform services that cover the spectrum of machine learning activities, including data collection, data preparation, data classification, model building, training and application deployment.

As machine learning continues to increase in importance to business operations and AI becomes more practical in enterprise settings, the machine learning platform wars will only intensify.

Continued research into deep learning and AI is increasingly focused on developing more general applications. Today's AI models require extensive training in order to produce an algorithm that is highly optimized to perform one task. But some researchers are exploring ways to make models more flexible and are seeking techniques that allow a machine to apply context learned from one task to future, different tasks.

1642 - Blaise Pascal invents a mechanical machine that can add, subtract, multiply and divide.

1679 - Gottfried Wilhelm Leibniz devises the system of binary code.

1834 - Charles Babbage conceives the idea for a general all-purpose device that could be programmed with punched cards.

1842 - Ada Lovelace describes a sequence of operations for solving mathematical problems using Charles Babbage's theoretical punch-card machine and becomes the first programmer.

1847 - George Boole creates Boolean logic, a form of algebra in which all values can be reduced to the binary values of true or false.

1936 - English logician and cryptanalyst Alan Turing proposes a universal machine that could decipher and execute a set of instructions. His published proof is considered the basis of computer science.

1952 - Arthur Samuel creates a program to help an IBM computer get better at checkers the more it plays.

1959 - MADALINE becomes the first artificial neural network applied to a real-world problem: removing echoes from phone lines.

1985 - Terry Sejnowski and Charles Rosenberg's artificial neural network NETtalk teaches itself how to correctly pronounce 20,000 words in one week.

1997 - IBM's Deep Blue beats chess grandmaster Garry Kasparov.

1999 - A CAD prototype intelligent workstation reviews 22,000 mammograms and detects cancer 52% more accurately than radiologists do.

2006 - Computer scientist Geoffrey Hinton popularizes the term deep learning to describe neural network research.

2012 - An unsupervised neural network created by Google learns to recognize cats in YouTube videos with 74.8% accuracy.

2014 - A chatbot posing as a 13-year-old Ukrainian boy named Eugene Goostman convinces 33% of human judges at a Turing Test competition that it is human.

2016 - Google DeepMind's AlphaGo defeats world champion Lee Sedol at Go, one of the most complex board games in the world.

2016 - LipNet, DeepMind's artificial intelligence system, identifies lip-read words in video with an accuracy of 93.4%.

2019 - Amazon controls 70% of the market share for virtual assistants in the U.S.

See the original post:
What Is Machine Learning and Why Is It Important? - SearchEnterpriseAI

AI vs. Machine Learning vs. Deep Learning vs. Neural … – IBM

These terms are often used interchangeably, but what are the differences that make them each a unique technology?

Technology is becoming more embedded in our daily lives by the minute, and in order to keep up with the pace of consumer expectations, companies are relying more heavily on learning algorithms to make things easier. You can see its application in social media (through object recognition in photos) or in talking directly to devices (like Alexa or Siri).

These technologies are commonly associated with artificial intelligence, machine learning, deep learning, and neural networks, and while they do all play a role, these terms tend to be used interchangeably in conversation, leading to some confusion around the nuances between them. Hopefully, we can use this blog post to clarify some of the ambiguity here.

Perhaps the easiest way to think about artificial intelligence, machine learning, neural networks, and deep learning is to think of them like Russian nesting dolls. Each is essentially a component of the prior term.

That is, machine learning is a subfield of artificial intelligence. Deep learning is a subfield of machine learning, and neural networks make up the backbone of deep learning algorithms. In fact, it is the number of node layers, or depth, of a neural network that distinguishes a single neural network from a deep learning algorithm, which must have more than three layers.

Neural networks, and more specifically artificial neural networks (ANNs), mimic the human brain through a set of algorithms. At a basic level, a neural network comprises four main components: inputs, weights, a bias or threshold, and an output. Similar to linear regression, the algebraic formula would look something like this:
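output = 1 if (W1*X1 + W2*X2 + W3*X3) + bias >= 0; output = 0 otherwise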

From there, let's apply it to a more tangible example, like whether or not you should order a pizza for dinner. This will be our predicted outcome, or y-hat. Let's assume that there are three main factors that will influence your decision, each answered as yes (1) or no (0).

Then, let's assume answers to those three factors, giving us the inputs X1 = 1, X2 = 0, and X3 = 1.

For simplicity, our inputs will have a binary value of 0 or 1. This technically defines the unit as a perceptron, as neural networks primarily leverage sigmoid neurons, which take any real-valued input and squash it to a value between 0 and 1. This distinction is important since most real-world problems are nonlinear, so we need values that limit how much influence any single input can have on the outcome. Summarizing in this binary way, however, will help you understand the underlying math at play here.

Moving on, we now need to assign some weights to determine importance. Larger weights make a single input's contribution to the output more significant compared to other inputs. Let's assign W1 = 5, W2 = 3, and W3 = 2.

Finally, we'll also assume a threshold value of 5, which translates to a bias value of -5.

Since we established all the relevant values for our summation, we can now plug them into this formula.

Using a step activation function (output 1 if the result is greater than or equal to zero, otherwise 0), we can now calculate the output (i.e., our decision to order pizza):

In summary:

Y-hat (our predicted outcome) = Decide to order pizza or not

Y-hat = (1*5) + (0*3) + (1*2) - 5

Y-hat = 5 + 0 + 2 - 5

Y-hat = 2, which is greater than zero.

Since Y-hat is 2, the output from the activation function will be 1, meaning that we will order pizza (I mean, who doesn't love pizza?).
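Put together, the whole decision fits in a few lines of Python (a sketch of the worked example above; the variable names are our own):

```python
# The pizza perceptron from the example above (illustrative names).
inputs  = [1, 0, 1]    # binary answers to the three factors (X1, X2, X3)
weights = [5, 3, 2]    # importance assigned to each factor (W1, W2, W3)
bias    = -5           # a threshold of 5 translates to a bias of -5

y_hat = sum(w * x for w, x in zip(weights, inputs)) + bias   # (1*5)+(0*3)+(1*2)-5 = 2

order_pizza = 1 if y_hat >= 0 else 0   # step activation function
print(y_hat, order_pizza)              # 2 1 -> we order pizza
```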

If the output of any individual node is above the specified threshold value, that node is activated, sending data to the next layer of the network. Otherwise, no data is passed along to the next layer. Now, imagine the above process being repeated multiple times for a single decision, as neural networks tend to have multiple hidden layers as part of deep learning algorithms. Each hidden layer has its own activation function, potentially passing information from the previous layer into the next one. Once all the outputs from the hidden layers are generated, they are used as inputs to calculate the final output of the neural network. Again, the above example is just the most basic example of a neural network; most real-world examples are nonlinear and far more complex.

The main difference between regression and a neural network is the impact of change on a single weight. In regression, you can change a weight without affecting the other inputs in a function. However, this isn't the case with neural networks. Since the output of one layer is passed into the next layer of the network, a single change can have a cascading effect on the other neurons in the network.

See this IBM Developer article for a deeper explanation of the quantitative concepts involved in neural networks.

While it was implied within the explanation of neural networks, it's worth noting more explicitly: the "deep" in deep learning refers to the depth of layers in a neural network. A neural network that consists of more than three layers, which would be inclusive of the inputs and the output, can be considered a deep learning algorithm.

Most deep neural networks are feed-forward, meaning they flow in one direction only, from input to output. However, you can also train your model through backpropagation; that is, move in the opposite direction, from output to input. Backpropagation allows us to calculate and attribute the error associated with each neuron, allowing us to adjust and fit the algorithm appropriately.
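As an illustration (our own toy example in NumPy, not from the article), a single sigmoid neuron trained by gradient descent shows both directions: the forward pass computes a prediction, and the backward pass attributes the error to each weight.

```python
# A minimal backpropagation sketch: one sigmoid neuron learns the OR function.
import numpy as np

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([0.0, 1.0, 1.0, 1.0])     # OR truth table
w, b = np.zeros(2), 0.0
lr = 1.0

for _ in range(1000):
    z = X @ w + b                      # forward pass: weighted sum
    p = 1.0 / (1.0 + np.exp(-z))       # sigmoid activation
    grad = p - y                       # error signal flowing backward
    w -= lr * X.T @ grad / len(y)      # attribute the error to each weight
    b -= lr * grad.mean()

print((p > 0.5).astype(int))           # [0 1 1 1]
```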

As we explain in our Learn Hub article on Deep Learning, deep learning is merely a subset of machine learning. The primary ways in which they differ are in how each algorithm learns and how much data each type of algorithm uses. Deep learning automates much of the feature extraction piece of the process, eliminating some of the manual human intervention required. It also enables the use of large data sets, earning itself the title of "scalable machine learning" in this MIT lecture. This capability will be particularly interesting as we begin to explore the use of unstructured data more, particularly since 80-90% of an organization's data is estimated to be unstructured.

Classical, or "non-deep", machine learning is more dependent on human intervention to learn. Human experts determine the hierarchy of features to understand the differences between data inputs, usually requiring more structured data to learn. For example, let's say that I were to show you a series of images of different types of fast food, pizza, burger, or taco. The human expert on these images would determine the characteristics which distinguish each picture as the specific fast food type. For example, the bread of each food type might be a distinguishing feature across each picture. Alternatively, you might just use labels, such as pizza, burger, or taco, to streamline the learning process through supervised learning.

"Deep" machine learning can leverage labeled datasets, also known as supervised learning, to inform its algorithm, but it doesnt necessarily require a labeled dataset. It can ingest unstructured data in its raw form (e.g. text, images), and it can automatically determine the set of features which distinguish "pizza", "burger", and "taco" from one another.

For a deep dive into the differences between these approaches, check out "Supervised vs. Unsupervised Learning: What's the Difference?"

By observing patterns in the data, a deep learning model can cluster inputs appropriately. Taking the same example from earlier, we could group pictures of pizzas, burgers, and tacos into their respective categories based on the similarities or differences identified in the images. With that said, a deep learning model would require more data points to improve its accuracy, whereas a machine learning model relies on less data given the underlying data structure. Deep learning is primarily leveraged for more complex use cases, like virtual assistants or fraud detection.


Finally, artificial intelligence (AI) is the broadest term used to classify machines that mimic human intelligence. It is used to predict, automate, and optimize tasks that humans have historically done, such as speech and facial recognition, decision making, and translation.

There are three main categories of AI: artificial narrow intelligence (ANI), artificial general intelligence (AGI), and artificial superintelligence (ASI).

ANI is considered weak AI, whereas the other two types are classified as strong AI. Weak AI is defined by its ability to complete a very specific task, like winning a chess game or identifying a specific individual in a series of photos. As we move into stronger forms of AI, like AGI and ASI, the incorporation of more human behaviors becomes more prominent, such as the ability to interpret tone and emotion. Chatbots and virtual assistants, like Siri, are scratching the surface of this, but they are still examples of ANI.

Strong AI is defined by its ability relative to humans. Artificial General Intelligence (AGI) would perform on par with a human, while Artificial Super Intelligence (ASI), also known as superintelligence, would surpass a human's intelligence and ability. Neither form of strong AI exists yet, but ongoing research in this field continues. Since this area of AI is still rapidly evolving, the best example that I can offer on what this might look like is the character Dolores on the HBO show Westworld.

While all these areas of AI can help streamline areas of your business and improve your customer experience, achieving AI goals can be challenging because you'll first need to ensure that you have the right systems in place to manage your data for the construction of learning algorithms. Data management is arguably harder than building the actual models that you'll use for your business. You'll need a place to store your data and mechanisms for cleaning it and controlling for bias before you can start building anything. Take a look at some of IBM's product offerings to help you and your business get on the right track to prepare and manage your data at scale.

Read the original post:
AI vs. Machine Learning vs. Deep Learning vs. Neural ... - IBM

The Role of Artificial Intelligence and Machine Learning in Ransomware Protection: How enterprises Can Leve… – Security Boulevard

Here is the original post:
The Role of Artificial Intelligence and Machine Learning in Ransomware Protection: How enterprises Can Leve... - Security Boulevard

Artificial Intelligence in Health Care: Benefits and Challenges of Machine Learning Technologies for Medical Diagnostics – Government Accountability…

What GAO Found

Several machine learning (ML) technologies are available in the U.S. to assist with the diagnostic process. The resulting benefits include earlier detection of diseases; more consistent analysis of medical data; and increased access to care, particularly for underserved populations. GAO identified a variety of ML-based technologies for five selected diseases (certain cancers, diabetic retinopathy, Alzheimer's disease, heart disease, and COVID-19), with most technologies relying on data from imaging such as x-rays or magnetic resonance imaging (MRI). However, these ML technologies have generally not been widely adopted.

Academic, government, and private sector researchers are working to expand the capabilities of ML-based medical diagnostic technologies. In addition, GAO identified three broader emerging approaches (autonomous, adaptive, and consumer-oriented ML diagnostics) that can be applied to diagnose a variety of diseases. These advances could enhance medical professionals' capabilities and improve patient treatments but also have certain limitations. For example, adaptive technologies may improve accuracy by incorporating additional data to update themselves, but automatic incorporation of low-quality data may lead to inconsistent or poorer algorithmic performance.

[Figure: Spectrum of adaptive algorithms]

We identified several challenges affecting the development and adoption of ML in medical diagnostics.

These challenges affect various stakeholders including technology developers, medical providers, and patients, and may slow the development and adoption of these technologies.

GAO developed three policy options that could help address these challenges or enhance the benefits of ML diagnostic technologies. These policy options identify possible actions by policymakers, which include Congress, federal agencies, state and local governments, academic and research institutions, and industry. See below for a summary of the policy options and relevant opportunities and considerations.

Policy Options to Help Address Challenges or Enhance Benefits of ML Diagnostic Technologies

Evaluation (report page 28)

Policymakers could create incentives, guidance, or policies to encourage or require the evaluation of ML diagnostic technologies across a range of deployment conditions and demographics representative of the intended use.

This policy option could help address the challenge of demonstrating real world performance.

Data Access (report page 29)

Policymakers could develop or expand access to high-quality medical data to develop and test ML medical diagnostic technologies. Examples include standards for collecting and sharing data, creating data commons, or using incentives to encourage data sharing.

This policy option could help address the challenge of demonstrating real world performance.

Collaboration (report page 30)

Policymakers could promote collaboration among developers, providers, and regulators in the development and adoption of ML diagnostic technologies. For example, policymakers could convene multidisciplinary experts together in the design and development of these technologies through workshops and conferences.

This policy option could help address the challenges of meeting medical needs and addressing regulatory gaps.

Source: GAO. | GAO-22-104629

Diagnostic errors affect more than 12 million Americans each year, with aggregate costs likely in excess of $100 billion, according to a report by the Society to Improve Diagnosis in Medicine. ML, a subfield of artificial intelligence, has emerged as a powerful tool for solving complex problems in diverse domains, including medical diagnostics. However, challenges to the development and use of machine learning technologies in medical diagnostics raise technological, economic, and regulatory questions.

GAO was asked to conduct a technology assessment on the current and emerging uses of machine learning in medical diagnostics, as well as the challenges and policy implications of these technologies. This report discusses (1) currently available ML medical diagnostic technologies for five selected diseases, (2) emerging ML medical diagnostic technologies, (3) challenges affecting the development and adoption of ML technologies for medical diagnosis, and (4) policy options to help address these challenges.

GAO assessed available and emerging ML technologies; interviewed stakeholders from government, industry, and academia; convened a meeting of experts in collaboration with the National Academy of Medicine; and reviewed reports and scientific literature. GAO is identifying policy options in this report.

For more information, contact Karen L. Howard at (202) 512-6888 or howardk@gao.gov.

More here:
Artificial Intelligence in Health Care: Benefits and Challenges of Machine Learning Technologies for Medical Diagnostics - Government Accountability...