Archive for the ‘Artificial Intelligence’ Category

Artificial intelligence can discriminate on the basis of race and gender, and also age – The Conversation CA

We have accepted the use of artificial intelligence (AI) in complex processes, from health care to our daily use of social media, often without critical investigation until it is too late. The use of AI is inescapable in modern society, and it may perpetuate discrimination without its users being aware of any prejudice. When health-care providers rely on biased technology, there are real and harmful impacts.

This became clear recently when a study showed that pulse oximeters, which measure the amount of oxygen in the blood and have been an essential tool for the clinical management of COVID-19, are less accurate on people with darker skin than lighter skin. The findings prompted a sweeping racial bias review, now underway, that aims to create international standards for testing medical devices.

There are examples in health care, business, government and everyday life where biased algorithms have led to problems, like sexist search results and racist predictions of an offender's likelihood of re-offending.

AI is often assumed to be more objective than humans. In reality, however, AI algorithms make decisions based on human-annotated data, which can be biased and exclusionary. Current research on bias in AI focuses mainly on gender and race. But what about age-related bias: can AI be ageist?

In 2021, the World Health Organization released a global report on aging, which called for urgent action to combat ageism because of its widespread impacts on health and well-being.

Ageism is defined as a process of systematic stereotyping of and discrimination against people because they are old. It can be explicit or implicit, and can take the form of negative attitudes, discriminatory activities, or institutional practices.

The pervasiveness of ageism has been brought to the forefront throughout the COVID-19 pandemic. Older adults have been labelled as burdens to societies, and in some jurisdictions, age has been used as the sole criterion for lifesaving treatments.

Digital ageism exists when age-based bias and discrimination are created or supported by technology. A recent report indicates that the digital world produces more than 2.5 quintillion bytes of data each day. Yet even though older adults are using technology in greater numbers, and benefiting from that use, they continue to be the age cohort least likely to have access to a computer and the internet.


Digital ageism can arise when ageist attitudes influence technology design, or when ageism makes it more difficult for older adults to access and enjoy the full benefits of digital technologies.

There are several intertwined cycles of injustice where technological, individual and social biases interact to produce, reinforce and contribute to digital ageism.

Barriers to technological access can exclude older adults from the research, design and development process of digital technologies. Their absence in technology design and development may also be rationalized with the ageist belief that older adults are incapable of using technology. As such, older adults and their perspectives are rarely involved in the development of AI and related policies, funding and support services.

The unique experiences and needs of older adults are overlooked, despite age being a more powerful predictor of technology use than other demographic characteristics including race and gender.

AI is trained by data, and the absence of older adults could reproduce or even amplify the above ageist assumptions in its output. Many AI technologies are focused on a stereotypical image of an older adult in poor health: a narrow segment of the population that ignores healthy aging. This creates a negative feedback loop that not only discourages older adults from using AI, but also results in further loss of data from these demographics that would improve AI accuracy.

Even when older adults are included in large datasets, they are often grouped according to arbitrary divisions by developers. For example, older adults may be defined as everyone aged 50 and older, despite younger age cohorts being divided into narrower age ranges. As a result, older adults and their needs can become invisible to AI systems.
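To make the point concrete, here is a minimal sketch of how such arbitrary grouping works. The bin edges below are hypothetical, mirroring the "50 and older" example: a coarse, open-ended top bucket collapses very different older cohorts into one group, while younger ages are split into narrower ranges.

```python
# Sketch of arbitrary age bucketing; the bin edges are hypothetical.
def bucket(age, edges=(18, 25, 35, 50)):
    """Assign an age to a labelled bracket. Everything past the last
    edge falls into a single open-ended '50+' group."""
    if age < edges[0]:
        return "under 18"
    for lo, hi in zip(edges, edges[1:]):
        if lo <= age < hi:
            return f"{lo}-{hi - 1}"
    return f"{edges[-1]}+"

ages = [22, 30, 47, 55, 68, 83, 91]
print([bucket(a) for a in ages])
# A 55-year-old and a 91-year-old land in the same "50+" bucket,
# so their very different needs look identical to the system.
```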

In this way, AI systems reinforce inequality and magnify societal exclusion for sections of the population, creating a digital underclass primarily made up of older, poor, racialized and marginalized groups.

We must understand the risks and harms associated with age-related biases as more older adults turn to technology.

The first step is for researchers and developers to acknowledge the existence of digital ageism alongside other forms of algorithmic biases, such as racism and sexism. They need to direct efforts towards identifying and measuring it. The next step is to develop safeguards for AI systems to mitigate ageist outcomes.

There is currently very little training, auditing or oversight of AI-driven activities from a regulatory or legal perspective. For instance, Canada's current AI regulatory regime is sorely lacking.

This presents a challenge, but also an opportunity to include ageism alongside the other forms of bias and discrimination in need of excision. To combat digital ageism, older adults must be included in a meaningful and collaborative way in designing new technologies.

With bias in AI now recognized as a critical problem in need of urgent action, it is time to consider the experience of digital ageism for older adults, and understand how growing old in an increasingly digital world may reinforce social inequalities, exclusion and marginalization.

Follow this link:
Artificial intelligence can discriminate on the basis of race and gender, and also age - The Conversation CA

Artificial Intelligence (AI)

Early diagnosis of Alzheimer's disease (AD) using analysis of brain networks

AD-related neurological degeneration begins long before the appearance of clinical symptoms. Information provided by functional MRI (fMRI) neuroimaging data, which can detect changes in brain tissue during the early phases of AD, holds potential for early detection and treatment. The researchers are combining the ability of fMRI to detect subtle brain changes with the ability of machine learning to analyze multiple brain changes over time. This approach aims to improve early detection of AD, as well as other neurological disorders including schizophrenia, autism, and multiple sclerosis.

NIBIB-funded researchers are building machine learning models to better manage blood glucose levels by using data obtained from wearable sensors. New portable sensing technologies provide continuous measurements that include heart rate, skin conductance, temperature, and body movements. The data will be used to train an artificial intelligence network to help predict changes in blood glucose levels before they occur. Anticipating and preventing blood glucose control problems will enhance patient safety and reduce costly complications.

This project aims to develop an advanced image scanning system with high detection sensitivity and specificity for colon cancers. The researchers will develop deep neural networks that can analyze a wider field on the radiographic images obtained during surgery. The wider scans will include the suspected lesion areas and more surrounding tissue. The neural networks will compare patient images with images of past diagnosed cases. The system is expected to outperform current computer-aided systems in the diagnosis of colorectal lesions. Broad adoption could advance the prevention and early diagnosis of cancer.

Smart, cyber-physically assistive clothing (CPAC) is being developed in an effort to reduce the high prevalence of low back pain. Forces on back muscles and discs that occur during daily tasks are major risk factors for back pain and injury. The researchers are gathering a public data set of more than 500 movements measured from each subject to inform a machine learning algorithm. The information will be used to develop assistive clothing that can detect unsafe conditions and intervene to protect low back health. The long-term vision is to create smart clothing that can monitor lumbar loading; train safe movement patterns; directly assist wearers to reduce the incidence of low back pain; and reduce costs related to health care expenses and missed work.

View original post here:
Artificial Intelligence (AI)

Master’s in Artificial Intelligence | Hopkins EP Online

With the expertise of the Johns Hopkins Applied Physics Lab, we've developed one of the nation's first online artificial intelligence master's programs to prepare engineers like you to take full advantage of opportunities in this field. The highly advanced curriculum is designed to deeply explore AI areas, including computer robotics, natural language processing, image processing, and more.

We have assembled a team of top-level researchers, scientists, and engineers to guide you through our rigorous online academic courses. Because we are a hub and frontrunner in artificial intelligence, we can tailor our online artificial intelligence master's content to include the most up-to-date practices and offer core courses that address the AI-driven technologies, techniques, and issues that power our modern world.

The online master's in Artificial Intelligence program balances theoretical concepts with practical knowledge you can apply to real-world systems and processes. Courses deeply explore areas of AI, including robotics, natural language processing, image processing, and more, fully online.

At the program's completion, you will:

Original post:
Master's in Artificial Intelligence | Hopkins EP Online

Artificial Intelligence and Machine Learning made simple

Lately, Artificial Intelligence and Machine Learning have been hot topics in the tech industry. Artificial Intelligence (AI) may be impacting the business world even more than our daily lives. About $300 million in venture capital was invested in AI startups in 2014, roughly a 300% increase over the year before (Bloomberg).

Hey there! This blog is about 1,000 words long and may take ~5 minutes to read. We understand that you might not have that much time.

That is precisely why we made a short video on the topic. It is less than 2 minutes long and simplifies Artificial Intelligence and Machine Learning. We hope it helps you learn more and saves you time. Cheers!

AI is everywhere, from gaming consoles to maintaining complex information at work. Computer engineers and scientists are working hard to impart intelligent behaviour to machines, making them think and respond to real-time situations. AI is transitioning from a research topic to the early stages of enterprise adoption. Tech giants like Google and Facebook have placed huge bets on Artificial Intelligence and Machine Learning and are already using it in their products. But this is just the beginning: over the next few years, we may see AI steadily glide into one product after another.

According to Stanford researcher John McCarthy, Artificial Intelligence is "the science and engineering of making intelligent machines, especially intelligent computer programs." Artificial Intelligence is related to the similar task of using computers to understand human intelligence, but AI does not have to confine itself to methods that are biologically observable.

Simply put, AI's goal is to make computers and computer programs smart enough to imitate the behaviour of the human mind.

Knowledge engineering is an essential part of AI research. Machines and programs need bountiful information about the world in order to act and react like human beings. To implement knowledge engineering, AI must have access to objects, categories, properties and the relations between them. AI instils common sense, problem-solving and analytical reasoning in machines, which is a difficult and tedious job.

AI services can be classified as Vertical AI or Horizontal AI.

Vertical AI services focus on a single job, whether that's scheduling meetings or automating repetitive work. Vertical AI bots perform just one job for you and do it so well that we might mistake them for a human.

Horizontal AI services are able to handle multiple tasks; there is no single job to be done. Cortana, Siri and Alexa are examples of Horizontal AI. These services operate largely in question-and-answer settings, such as "What is the temperature in New York?" or "Call Alex." They work across multiple tasks rather than being dedicated to a single one.

AI is achieved by analysing how the human brain works while solving a problem, and then using those problem-solving techniques to build complex algorithms that perform similar tasks. AI is an automated decision-making system that continuously learns, adapts, suggests and takes actions automatically. At its core, it requires algorithms that are able to learn from experience. This is where Machine Learning comes into the picture.

Artificial Intelligence and Machine Learning are trending, and often confused, terms nowadays. Machine Learning (ML) is a subset of Artificial Intelligence. ML is the science of designing and applying algorithms that are able to learn from past cases. If some behaviour exists in past data, you may predict whether it can happen again; if there are no past cases, there is no prediction.

ML can be applied to solve tough problems like credit card fraud detection, enabling self-driving cars, and face detection and recognition. ML uses complex algorithms that constantly iterate over large data sets, analyzing the patterns in the data and enabling machines to respond to situations for which they have not been explicitly programmed. Machines learn from history to produce reliable results. ML algorithms use computer science and statistics to predict rational outputs.

There are 3 major areas of ML:

In supervised learning, training datasets are provided to the system. Supervised learning algorithms analyse the data and produce an inferred function. The resulting function can then be used to map new examples. Credit card fraud detection is one example of a supervised learning algorithm.

(Figure: Supervised Learning vs Unsupervised Learning. Reference: http://dataconomy.com/whats-the-difference-between-supervised-and-unsupervised-learning/)
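As an illustration of the supervised idea, here is a minimal sketch with hypothetical toy transaction data (not a real fraud model): a nearest-class-mean classifier is trained on labeled examples, and the inferred function then maps new, unseen examples.

```python
# Supervised-learning sketch: learn from labeled (feature, label) pairs,
# then classify new inputs. Data and thresholds are purely illustrative.

def train(examples):
    """examples: list of (feature_value, label). Returns the mean
    feature value per label (the 'inferred function' in miniature)."""
    sums, counts = {}, {}
    for x, y in examples:
        sums[y] = sums.get(y, 0.0) + x
        counts[y] = counts.get(y, 0) + 1
    return {y: sums[y] / counts[y] for y in sums}

def predict(model, x):
    # Assign the label whose class mean is closest to the new example.
    return min(model, key=lambda y: abs(model[y] - x))

# Hypothetical labeled training set: transaction amounts tagged legit/fraud.
training = [(12.0, "legit"), (25.0, "legit"), (30.0, "legit"),
            (900.0, "fraud"), (1200.0, "fraud")]
model = train(training)
print(predict(model, 20.0))    # close to the "legit" mean -> legit
print(predict(model, 1000.0))  # close to the "fraud" mean -> fraud
```

A real fraud detector would use many features and a far richer model; the point here is only the labeled-training-then-predict loop.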

Unsupervised learning is harder because the data fed to the algorithm is unlabeled. The goal is to have the machine learn on its own, without any supervision; the correct solution to the problem is not provided. The algorithm itself finds the patterns in the data. One example of unsupervised learning is the recommendation engines found on e-commerce sites, or Facebook's friend-suggestion mechanism.

(Figure: Recommendation Engine)
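The recommendation-engine example can be sketched in miniature. This is a hedged illustration with hypothetical purchase data: no labels are provided, and the "bought together" pattern is discovered in the data itself.

```python
# Unsupervised sketch: a tiny co-occurrence recommender (hypothetical data).
from collections import Counter

baskets = [
    {"phone", "case", "charger"},
    {"phone", "case"},
    {"laptop", "mouse"},
    {"phone", "case", "charger"},
]

def recommend(item, baskets, k=1):
    """Suggest the k items that most often co-occur with `item`
    across all baskets. No labels are used anywhere."""
    co = Counter()
    for b in baskets:
        if item in b:
            co.update(b - {item})
    return [i for i, _ in co.most_common(k)]

print(recommend("phone", baskets))  # "case" co-occurs with "phone" most often
```

Production recommenders use matrix factorization or neural embeddings, but the principle is the same: structure is extracted from unlabeled behaviour.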

This type of machine learning allows software agents and machines to automatically determine the ideal behaviour within a specific context, so as to maximise performance. Reinforcement learning is defined by characterising a learning problem, not by characterising learning methods: any method well suited to solving the problem can be considered a reinforcement learning method. Reinforcement learning assumes that a software agent (a robot, a computer program or a bot) interacts with a dynamic environment to attain a definite goal, selecting the actions that achieve the expected outcome efficiently and rapidly.
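A minimal sketch of the idea, assuming a toy five-cell corridor environment and illustrative, untuned parameters: a tabular Q-learning agent interacts with the environment and learns, from rewards alone, to walk toward the goal.

```python
# Reinforcement-learning sketch: tabular Q-learning on a 5-cell corridor.
# The agent starts at cell 0; only reaching cell 4 yields a reward.
import random

N, GOAL = 5, 4
actions = [-1, +1]                        # step left / step right
Q = {(s, a): 0.0 for s in range(N) for a in actions}
alpha, gamma, eps = 0.5, 0.9, 0.2         # learning rate, discount, exploration
random.seed(0)

for _ in range(500):                      # episodes
    s = 0
    while s != GOAL:
        # Epsilon-greedy: mostly exploit the best known action, sometimes explore.
        if random.random() < eps:
            a = random.choice(actions)
        else:
            a = max(actions, key=lambda act: Q[(s, act)])
        s2 = min(max(s + a, 0), N - 1)    # walls at both ends
        r = 1.0 if s2 == GOAL else 0.0
        # Q-learning update: move Q toward reward + discounted future value.
        Q[(s, a)] += alpha * (r + gamma * max(Q[(s2, b)] for b in actions) - Q[(s, a)])
        s = s2

policy = [max(actions, key=lambda act: Q[(s, act)]) for s in range(GOAL)]
print(policy)  # the learned policy steps right (+1) toward the goal
```

The same update rule, scaled up with function approximation, underlies systems like AlphaGo's value learning.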

Artificial Intelligence and Machine Learning continually interest and surprise us with their innovations. AI and ML have reached industries like customer service, e-commerce and finance, among many others. By 2020, 85% of customer interactions will be managed without a human (Gartner). There are applications of AI and ML in data analysis, such as descriptive, prescriptive and predictive analytics, discussed in our next blog: How Machine Learning can boost your Predictive Analytics.

Continue reading here:
Artificial Intelligence and Machine Learning made simple

Artificial Intelligence With Python | Build AI Models …

Artificial Intelligence With Python:

Artificial Intelligence has been around for over half a century now and its advancements are growing at an exponential rate. The demand for AI is at its peak, and if you wish to learn about Artificial Intelligence, you've landed at the right place. This blog on Artificial Intelligence With Python will help you understand all the concepts of AI with practical implementations in Python.

To get in-depth knowledge of Artificial Intelligence and Machine Learning, you can enroll in the live Machine Learning Engineer Master Program by Edureka, with 24/7 support and lifetime access.

The following topics are covered in this Artificial Intelligence With Python blog:

A lot of people have asked me, "Which programming language is best for AI?" or "Why Python for AI?"

Despite being a general-purpose language, Python has made its way into the most complex technologies, such as Artificial Intelligence, Machine Learning, Deep Learning, and so on.

Why has Python gained so much popularity in all these fields?

Here is a list of reasons why Python is the choice of language for every core Developer, Data Scientist, Machine Learning Engineer, etc:

(Figure: Why Python for AI)

If you wish to learn Python programming in depth, here are a couple of links; do give these blogs a read:

Since this blog is all about Artificial Intelligence With Python, I will introduce you to the most effective and popular AI-based Python Libraries.

In addition to the above-mentioned libraries, make sure you check out the Top 10 Python Libraries You Must Know In 2019 blog to get a clearer understanding.

Now that you know the important Python libraries that are used for implementing AI techniques, let's focus on Artificial Intelligence. In the next section, I will cover all the fundamental concepts of AI.

First, let's start by understanding the sudden demand for AI.

Since the emergence of AI in the 1950s, we have seen exponential growth in its potential. But if AI has been around for over half a century, why has it suddenly gained so much importance? Why are we talking about Artificial Intelligence now?

(Figure: Demand for AI)

The main reasons for the vast popularity of AI are:

More computing power: Implementing AI requires a lot of computing power, since building AI models involves heavy computation and the use of complex neural networks. The invention of GPUs has made this possible: we can finally perform high-level computations and implement complex algorithms.

Data generation: Over the past years, we've been generating an immeasurable amount of data. Such data needs to be analyzed and processed using Machine Learning algorithms and other AI techniques.

More effective algorithms: In the past decade we've successfully managed to develop state-of-the-art algorithms that involve the implementation of deep neural networks.

Broad investment: As tech giants such as Tesla, Netflix and Facebook started investing in Artificial Intelligence, it gained more popularity, which led to an increase in the demand for AI-based systems.

The growth of Artificial Intelligence is exponential, and it is adding to the economy at an accelerated pace. So this is the right time to get into the field of Artificial Intelligence.

Check out these AI and Machine Learning courses by E & ICT Academy NIT Warangal to learn and build a career in Artificial Intelligence.

The term Artificial Intelligence was first coined in 1956 by John McCarthy at the Dartmouth conference. He defined AI as:

The science and engineering of making intelligent machines.

In other words, Artificial Intelligence is the science of getting machines to think and make decisions like humans.

In the recent past, AI has been able to accomplish this by creating machines and robots that have been used in a wide range of fields including healthcare, robotics, marketing, business analytics and many more.

Now let's discuss the different stages of Artificial Intelligence.

AI is structured along three evolutionary stages:

(Figure: Types of AI)

Commonly known as weak AI, Artificial Narrow Intelligence involves applying AI only to specific tasks.

The existing AI-based systems that claim to use artificial intelligence are actually operating as weak AI. Alexa is a good example of narrow intelligence: it operates within a limited, predefined range of functions and has no genuine intelligence or self-awareness.

Google's search engine, Sophia, self-driving cars and even the famous AlphaGo all fall under the category of weak AI.

Commonly known as strong AI, Artificial General Intelligence involves machines that possess the ability to perform any intellectual task that a human being can.

You see, machines don't possess human-like abilities. They have strong processing units that can perform high-level computations, but they're not yet capable of thinking and reasoning like a human.

There are many experts who doubt that AGI will ever be possible, and there are also many who question whether it would be desirable.

Stephen Hawking, for example, warned:

"Strong AI would take off on its own, and re-design itself at an ever-increasing rate. Humans, who are limited by slow biological evolution, couldn't compete and would be superseded."

Artificial Super Intelligence is a term referring to the time when the capability of computers will surpass humans.

ASI is presently seen as a hypothetical situation, as depicted in movies and science fiction books in which machines have taken over the world. However, tech masterminds like Elon Musk believe that ASI will take over the world by 2040!

What do you think about Artificial Super Intelligence? Let me know your thoughts in the comment section.

Before I go any further, let me clear up a very common misconception. I've been asked this question by almost every beginner:

"What is the difference between AI, Machine Learning and Deep Learning?"

Let's break it down:

People tend to think that Artificial Intelligence, Machine Learning, and Deep Learning are the same since they have common applications. For example, Siri is an application of AI, Machine learning and Deep learning.

So how are these technologies related?

To sum it up: AI, Machine Learning and Deep Learning are interconnected fields. Machine Learning and Deep Learning aid Artificial Intelligence by providing a set of algorithms and neural networks to solve data-driven problems.

However, Artificial Intelligence is not restricted to Machine Learning and Deep Learning. It covers a vast domain of fields including Natural Language Processing (NLP), object detection, computer vision, robotics, expert systems and so on.

Now let's get started with Machine Learning.

The term Machine Learning was first coined by Arthur Samuel in 1959. Looking back, that year was probably one of the most significant in terms of technological advancements.

In simple terms,

Machine Learning is a subset of Artificial Intelligence (AI) which gives machines the ability to learn automatically by feeding them tons of data and allowing them to improve through experience. Thus, Machine Learning is the practice of getting machines to solve problems by gaining the ability to think.

But how can a machine make decisions?

If you feed a machine a good amount of data, it will learn how to interpret, process and analyze that data by using Machine Learning algorithms.

(Figure: What is Machine Learning)

To sum it up, take a look at the above figure:

Now that we know what Machine Learning is, let's look at the different ways in which machines can learn.

A machine can learn to solve a problem by following any one of three approaches:

Supervised Learning

Unsupervised Learning

Reinforcement Learning

Supervised learning is a technique in which we teach or train the machine using data that is well labeled.

To understand supervised learning, let's consider an analogy. As kids, we all needed guidance to solve math problems. Our teachers helped us understand what addition is and how it is done.

Similarly, you can think of supervised learning as a type of Machine Learning that involves a guide. The labeled data set is the teacher that trains the model to understand patterns in the data. The labeled data set is nothing but the training data set.

(Figure: Supervised Learning)

Consider the above figure. Here we're feeding the machine images of Tom and Jerry, and the goal is for the machine to identify and classify the images into two groups (Tom images and Jerry images).

The training data set fed to the model is labeled, as in, we're telling the machine, "this is how Tom looks and this is Jerry." By doing so, you're training the machine using labeled data. In supervised learning, there is a well-defined training phase carried out with the help of labeled data.

Unsupervised learning involves training by using unlabeled data and allowing the model to act on that information without guidance.

Think of unsupervised learning as a smart kid that learns without any guidance. In this type of Machine Learning, the model is not fed labeled data; it has no clue which image is Tom and which is Jerry. It figures out patterns and the differences between Tom and Jerry on its own by taking in tons of data.

(Figure: Unsupervised Learning)

For example, it identifies prominent features of Tom, such as pointy ears and bigger size, to understand that those images are of type 1. It finds similar features in Jerry and learns that those images are of type 2.

Therefore, it classifies the images into two different classes without knowing who Tom or Jerry is.

Reinforcement Learning is a part of Machine Learning in which an agent is put in an environment and learns to behave by performing certain actions and observing the rewards it gets from those actions.

Imagine that you were dropped off at an isolated island!

What would you do?

Panic? Yes, of course, initially we all would. But as time passes by, you will learn how to live on the island. You will explore the environment, understand the climate condition, the type of food that grows there, the dangers of the island, etc.

This is exactly how Reinforcement Learning works: it involves an agent (you, stuck on the island) placed in an unknown environment (the island), where it must learn by observing and performing actions that result in rewards.

Reinforcement Learning is mainly used in advanced Machine Learning areas such as self-driving cars and AlphaGo. That sums up the types of Machine Learning.

Now, let's look at the types of problems that are solved by using Machine Learning.

There are three main categories of problems that can be solved using Machine Learning:

In this type of problem, the output is a continuous quantity. For example, predicting the speed of a car given the distance travelled is a regression problem. Regression problems can be solved by using supervised learning algorithms like Linear Regression.
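As a sketch of the regression idea, simple linear regression can be fit in closed form; the distance-versus-speed numbers below are made up for illustration.

```python
# Regression sketch: ordinary least squares for a single feature,
# using the closed-form slope/intercept formulas.

def fit_line(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return slope, my - slope * mx       # (slope, intercept)

# Hypothetical training data: distance travelled -> observed speed,
# constructed to lie exactly on speed = 2 * distance + 5.
distance = [10.0, 20.0, 30.0, 40.0]
speed = [25.0, 45.0, 65.0, 85.0]
slope, intercept = fit_line(distance, speed)
print(slope, intercept)                 # -> 2.0 5.0
print(slope * 25.0 + intercept)         # predicted speed at distance 25 -> 55.0
```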

In this type, the output is a categorical value. Classifying emails into two classes, spam and non-spam, is a classification problem that can be solved by using supervised learning classification algorithms such as Support Vector Machines, Naive Bayes, Logistic Regression, K-Nearest Neighbors, etc.
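A minimal classification sketch, using K-Nearest Neighbors on hand-made, hypothetical email features (counts of suspicious words and links) rather than a real corpus:

```python
# Classification sketch: K-Nearest Neighbors with majority vote.
# Features are (suspicious_word_count, link_count); data is illustrative.

def knn_predict(train, point, k=3):
    """train: list of ((f1, f2), label). Vote among the k nearest."""
    dist = lambda a, b: (a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2
    nearest = sorted(train, key=lambda ex: dist(ex[0], point))[:k]
    labels = [y for _, y in nearest]
    return max(set(labels), key=labels.count)

train = [((8, 5), "spam"), ((7, 6), "spam"), ((9, 7), "spam"),
         ((1, 0), "ham"), ((0, 1), "ham"), ((2, 1), "ham")]

print(knn_predict(train, (8, 6)))   # surrounded by spam examples -> "spam"
print(knn_predict(train, (1, 1)))   # surrounded by ham examples  -> "ham"
```

A real spam filter would extract features from the email text itself; the categorical output (spam vs non-spam) is what makes this classification rather than regression.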

This type of problem involves grouping the input into two or more clusters based on feature similarity. For example, clustering viewers into similar groups based on their interests, age, geography, etc. can be done by using unsupervised learning algorithms like K-Means clustering.

Here's a table that sums up the difference between Regression, Classification, and Clustering:

(Figure: Regression vs Classification vs Clustering)

Now let's look at how the Machine Learning process works.

The Machine Learning process involves building a Predictive model that can be used to find a solution for a Problem Statement.

To understand the Machine Learning process, let's assume that you have been given a problem that needs to be solved by using Machine Learning.

The problem is to predict the occurrence of rain in your local area using Machine Learning.

The below steps are followed in a Machine Learning process:

Step 1: Define the objective of the Problem Statement

At this step, we must understand what exactly needs to be predicted. In our case, the objective is to predict the possibility of rain by studying weather conditions.

It is also essential to take mental notes on what kind of data can be used to solve this problem or the type of approach you must follow to get to the solution.

Step 2: Data Gathering

At this stage, you must be asking questions such as:

See original here:
Artificial Intelligence With Python | Build AI Models ...