Archive for the ‘Artificial Intelligence’ Category

What is Artificial Intelligence (AI)? – AI Definition and …

Artificial intelligence is the simulation of human intelligence processes by machines, especially computer systems. Specific applications of AI include expert systems, natural language processing, speech recognition and machine vision.

As the hype around AI has accelerated, vendors have been scrambling to promote how their products and services use AI. Often what they refer to as AI is simply one component of AI, such as machine learning. AI requires a foundation of specialized hardware and software for writing and training machine learning algorithms. No one programming language is synonymous with AI, but a few, including Python, R and Java, are popular.

In general, AI systems work by ingesting large amounts of labeled training data, analyzing the data for correlations and patterns, and using these patterns to make predictions about future states. In this way, a chatbot that is fed examples of text chats can learn to produce lifelike exchanges with people, or an image recognition tool can learn to identify and describe objects in images by reviewing millions of examples.

AI programming focuses on three cognitive skills: learning, reasoning and self-correction.

Learning processes. This aspect of AI programming focuses on acquiring data and creating rules for how to turn the data into actionable information. The rules, which are called algorithms, provide computing devices with step-by-step instructions for how to complete a specific task.

Reasoning processes. This aspect of AI programming focuses on choosing the right algorithm to reach a desired outcome.

Self-correction processes. This aspect of AI programming is designed to continually fine-tune algorithms and ensure they provide the most accurate results possible.
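To make the three skills concrete, here is a deliberately tiny, self-contained sketch (a hypothetical example, not any production algorithm): it "learns" a decision rule from labeled data, "reasons" by applying that rule, and "self-corrects" the rule whenever a prediction is wrong.

```python
# Toy illustration of learning, reasoning and self-correction:
# learn a cutoff that separates two classes of 1-D feature values.

def learn_threshold(samples, labels, epochs=20, lr=0.1):
    """Find a cutoff separating class 0 (below) from class 1 (above)."""
    threshold = sum(samples) / len(samples)  # start at the mean
    for _ in range(epochs):
        for x, y in zip(samples, labels):
            predicted = 1 if x > threshold else 0  # "reasoning" step
            if predicted != y:                     # "self-correction" step:
                threshold += lr if y == 0 else -lr # nudge toward the error
    return threshold

spam_scores = [0.1, 0.2, 0.3, 0.7, 0.8, 0.9]  # made-up feature values
is_spam     = [0,   0,   0,   1,   1,   1]
t = learn_threshold(spam_scores, is_spam)
print(round(t, 2))  # prints 0.5: the learned cutoff separates the classes
```

The point is not the algorithm itself but the loop structure: rules are derived from data, applied to make predictions, and adjusted when predictions miss.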

AI is important because it can give enterprises insights into their operations that they may not have been aware of previously and because, in some cases, AI can perform tasks better than humans. Particularly when it comes to repetitive, detail-oriented tasks like analyzing large numbers of legal documents to ensure relevant fields are filled in properly, AI tools often complete jobs quickly and with relatively few errors.

This has helped fuel an explosion in efficiency and opened the door to entirely new business opportunities for some larger enterprises. Prior to the current wave of AI, it would have been hard to imagine using computer software to connect riders to taxis, but today Uber has become one of the largest companies in the world by doing just that. It utilizes sophisticated machine learning algorithms to predict when people are likely to need rides in certain areas, which helps proactively get drivers on the road before they're needed. As another example, Google has become one of the largest players for a range of online services by using machine learning to understand how people use their services and then improving them. In 2017, the company's CEO, Sundar Pichai, pronounced that Google would operate as an "AI first" company.

Today's largest and most successful enterprises have used AI to improve their operations and gain an advantage over their competitors.

Artificial neural network and deep learning technologies are quickly evolving, primarily because AI can process large amounts of data much faster and make predictions more accurately than is humanly possible.

While the huge volume of data being created on a daily basis would bury a human researcher, AI applications that use machine learning can take that data and quickly turn it into actionable information. As of this writing, the primary disadvantage of using AI is that it is expensive to process the large amounts of data that AI programming requires.


AI can be categorized as either weak or strong.

Arend Hintze, an assistant professor of integrative biology and computer science and engineering at Michigan State University, explained in a 2016 article that AI can be categorized into four types, beginning with the task-specific intelligent systems in wide use today and progressing to sentient systems, which do not yet exist.

AI is incorporated into a variety of different types of technology.

Artificial intelligence has made its way into a wide variety of markets. Here are nine examples:

AI in healthcare. The biggest bets are on improving patient outcomes and reducing costs. Companies are applying machine learning to make better and faster diagnoses than humans. One of the best-known healthcare technologies is IBM Watson. It understands natural language and can respond to questions asked of it. The system mines patient data and other available data sources to form a hypothesis, which it then presents with a confidence scoring schema. Other AI applications include using online virtual health assistants and chatbots to help patients and healthcare customers find medical information, schedule appointments, understand the billing process and complete other administrative processes. An array of AI technologies is also being used to predict, fight and understand pandemics such as COVID-19.

AI in business. Machine learning algorithms are being integrated into analytics and customer relationship management (CRM) platforms to uncover information on how to better serve customers. Chatbots have been incorporated into websites to provide immediate service to customers. Automation of job positions has also become a talking point among academics and IT analysts.

AI in education. AI can automate grading, giving educators more time. It can assess students and adapt to their needs, helping them work at their own pace. AI tutors can provide additional support to students, ensuring they stay on track. And it could change where and how students learn, perhaps even replacing some teachers.

AI in finance. AI in personal finance applications, such as Intuit Mint or TurboTax, is disrupting financial institutions. Applications such as these collect personal data and provide financial advice. Other programs, such as IBM Watson, have been applied to the process of buying a home. Today, artificial intelligence software performs much of the trading on Wall Street.

AI in law. The discovery process -- sifting through documents -- in law is often overwhelming for humans. Using AI to help automate the legal industry's labor-intensive processes is saving time and improving client service. Law firms are using machine learning to describe data and predict outcomes, computer vision to classify and extract information from documents and natural language processing to interpret requests for information.

AI in manufacturing. Manufacturing has been at the forefront of incorporating robots into the workflow. For example, industrial robots that were once programmed to perform single tasks and separated from human workers increasingly function as cobots: smaller, multitasking robots that collaborate with humans and take on responsibility for more parts of the job in warehouses, factory floors and other workspaces.

AI in banking. Banks are successfully employing chatbots to make their customers aware of services and offerings and to handle transactions that don't require human intervention. AI virtual assistants are being used to improve and cut the costs of compliance with banking regulations. Banking organizations are also using AI to improve their decision-making for loans, and to set credit limits and identify investment opportunities.

AI in transportation. In addition to AI's fundamental role in operating autonomous vehicles, AI technologies are used in transportation to manage traffic, predict flight delays, and make ocean shipping safer and more efficient.

Security. AI and machine learning are at the top of the buzzword list security vendors use today to differentiate their offerings. Those terms also represent truly viable technologies. Organizations use machine learning in security information and event management (SIEM) software and related areas to detect anomalies and identify suspicious activities that indicate threats. By analyzing data and using logic to identify similarities to known malicious code, AI can provide alerts to new and emerging attacks much sooner than human employees and previous technology iterations. The maturing technology is playing a big role in helping organizations fight off cyber attacks.
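The anomaly-detection idea described above can be sketched in a few lines (an illustrative approach, not any specific vendor's SIEM logic): establish a statistical baseline from historical event counts and flag values that deviate sharply from it.

```python
# Minimal statistical anomaly detection: flag an event count that lies
# more than z_cutoff standard deviations from the historical mean.
import statistics

def flag_anomaly(history, current, z_cutoff=3.0):
    """Return True if `current` deviates sharply from the baseline."""
    mean = statistics.mean(history)
    stdev = statistics.stdev(history)
    return abs(current - mean) > z_cutoff * stdev

logins_per_hour = [42, 38, 45, 40, 43, 39, 41, 44]  # normal baseline
print(flag_anomaly(logins_per_hour, 41))   # typical hour: prints False
print(flag_anomaly(logins_per_hour, 400))  # sudden burst: prints True
```

Production systems use far richer models, but the shape is the same: learn what "normal" looks like from data, then surface deviations for analysts to review.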

Some industry experts believe the term artificial intelligence is too closely linked to popular culture, and this has caused the general public to have improbable expectations about how AI will change the workplace and life in general.

While AI tools present a range of new functionality for businesses, the use of artificial intelligence also raises ethical questions because, for better or worse, an AI system will reinforce what it has already learned.

This can be problematic because machine learning algorithms, which underpin many of the most advanced AI tools, are only as smart as the data they are given in training. Because a human being selects what data is used to train an AI program, the potential for machine learning bias is inherent and must be monitored closely.
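A toy illustration of the point above, using entirely synthetic data and a deliberately naive "model" (the memorize-the-majority rule is an assumption made for brevity, not a real learning algorithm): a rule learned from skewed training data simply reproduces the skew.

```python
# A naive "model" that learns by memorizing the most common label
# in its training data, and so inherits whatever skew the data has.
from collections import Counter

def train_majority(labels):
    """'Learn' by memorizing the most frequent label in the training set."""
    return Counter(labels).most_common(1)[0][0]

# Skewed history: 9 of 10 past decisions in the training set were approvals.
training_labels = ["approve"] * 9 + ["deny"]
model = train_majority(training_labels)
print(model)  # prints "approve": the rule mirrors its biased training data
```

Real machine learning models are vastly more sophisticated, but the failure mode is the same in kind: whatever imbalance the training data carries, the learned behavior will carry it too.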

Anyone looking to use machine learning as part of real-world, in-production systems needs to factor ethics into their AI training processes and strive to avoid bias. This is especially true when using inherently unexplainable AI algorithms, such as those in deep learning and generative adversarial network (GAN) applications.

Explainability is a potential stumbling block to using AI in industries that operate under strict regulatory compliance requirements. For example, financial institutions in the United States operate under regulations that require them to explain their credit-issuing decisions. When a decision to refuse credit is made by AI programming, however, it can be difficult to explain how the decision was arrived at because the AI tools used to make such decisions operate by teasing out subtle correlations between thousands of variables. When the decision-making process cannot be explained, the program may be referred to as black box AI.
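One common way to probe a black-box model, sketched here with a stand-in scoring function (the function, feature names and values are all hypothetical): perturb each input feature slightly and measure how much the output moves. Larger shifts suggest the feature mattered more to the decision.

```python
# Perturbation-based probing of an opaque scoring function: how sensitive
# is the output to a small change in each input feature?

def black_box_score(income, debt, age):
    # Stand-in for an opaque credit model; real systems would be learned.
    return 0.6 * income - 0.3 * debt + 0.01 * age

def feature_importance(model, inputs, delta=1.0):
    """Absolute change in output when each feature is bumped by delta."""
    base = model(*inputs)
    importances = []
    for i in range(len(inputs)):
        perturbed = list(inputs)
        perturbed[i] += delta
        importances.append(abs(model(*perturbed) - base))
    return importances

scores = feature_importance(black_box_score, (50.0, 20.0, 35.0))
print(scores)  # income shifts the score most, age the least
```

Techniques used in practice (such as LIME or SHAP) are far more elaborate, but this is the underlying intuition: explain a decision by observing how the output responds to controlled changes in the inputs.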

Despite potential risks, there are currently few regulations governing the use of AI tools, and where laws do exist, they typically pertain to AI indirectly. For example, as previously mentioned, United States Fair Lending regulations require financial institutions to explain credit decisions to potential customers. This limits the extent to which lenders can use deep learning algorithms, which by their nature are opaque and lack explainability.

The European Union's General Data Protection Regulation (GDPR) puts strict limits on how enterprises can use consumer data, which impedes the training and functionality of many consumer-facing AI applications.

In October 2016, the National Science and Technology Council issued a report examining the potential role governmental regulation might play in AI development, but it did not recommend specific legislation be considered.

Crafting laws to regulate AI will not be easy, in part because AI comprises a variety of technologies that companies use for different ends, and partly because regulations can come at the cost of AI progress and development. The rapid evolution of AI technologies is another obstacle to forming meaningful regulation of AI. Technology breakthroughs and novel applications can make existing laws instantly obsolete. For example, existing laws regulating the privacy of conversations and recorded conversations do not cover the challenge posed by voice assistants like Amazon's Alexa and Apple's Siri that gather but do not distribute conversation -- except to the companies' technology teams which use it to improve machine learning algorithms. And, of course, the laws that governments do manage to craft to regulate AI don't stop criminals from using the technology with malicious intent.

The terms AI and cognitive computing are sometimes used interchangeably, but, generally speaking, the label AI is used in reference to machines that replace human intelligence by simulating how we sense, learn, process and react to information in the environment.

The label cognitive computing is used in reference to products and services that mimic and augment human thought processes.

The concept of inanimate objects endowed with intelligence has been around since ancient times. The Greek god Hephaestus was depicted in myths as forging robot-like servants out of gold. Engineers in ancient Egypt built statues of gods animated by priests. Throughout the centuries, thinkers from Aristotle to the 13th century Spanish theologian Ramon Llull to René Descartes and Thomas Bayes used the tools and logic of their times to describe human thought processes as symbols, laying the foundation for AI concepts such as general knowledge representation.

The 19th and first half of the 20th centuries brought forth the foundational work that would give rise to the modern computer. In 1836, Cambridge University mathematician Charles Babbage and Augusta Ada Byron, Countess of Lovelace, produced the first design for a programmable machine.

1940s. Princeton mathematician John von Neumann conceived the architecture for the stored-program computer -- the idea that a computer's program and the data it processes can be kept in the computer's memory. And Warren McCulloch and Walter Pitts laid the foundation for neural networks.

1950s. With the advent of modern computers, scientists could test their ideas about machine intelligence. One method for determining whether a computer has intelligence was devised by the British mathematician and World War II code-breaker Alan Turing. The Turing Test focused on a computer's ability to fool interrogators into believing its responses to their questions were made by a human being.

1956. The modern field of artificial intelligence is widely cited as starting this year during a summer conference at Dartmouth College. Funded by the Rockefeller Foundation, the conference was attended by 10 luminaries in the field, including AI pioneers Marvin Minsky, Oliver Selfridge and John McCarthy, who is credited with coining the term artificial intelligence. Also in attendance were Allen Newell, a computer scientist, and Herbert A. Simon, an economist, political scientist and cognitive psychologist, who presented their groundbreaking Logic Theorist, a computer program capable of proving certain mathematical theorems and referred to as the first AI program.

1950s and 1960s. In the wake of the Dartmouth College conference, leaders in the fledgling field of AI predicted that a man-made intelligence equivalent to the human brain was around the corner, attracting major government and industry support. Indeed, nearly 20 years of well-funded basic research generated significant advances in AI: For example, in the late 1950s, Newell and Simon published the General Problem Solver (GPS) algorithm, which fell short of solving complex problems but laid the foundations for developing more sophisticated cognitive architectures; McCarthy developed Lisp, a language for AI programming that is still used today. In the mid-1960s MIT Professor Joseph Weizenbaum developed ELIZA, an early natural language processing program that laid the foundation for today's chatbots.

1970s and 1980s. But the achievement of artificial general intelligence proved elusive, not imminent, hampered by limitations in computer processing and memory and by the complexity of the problem. Government and corporations backed away from their support of AI research, leading to a fallow period lasting from 1974 to 1980 and known as the first "AI Winter." In the 1980s, research on deep learning techniques and industry's adoption of Edward Feigenbaum's expert systems sparked a new wave of AI enthusiasm, only to be followed by another collapse of government funding and industry support. The second AI winter lasted until the mid-1990s.

1990s through today. Increases in computational power and an explosion of data sparked an AI renaissance in the late 1990s that has continued to present times. The latest focus on AI has given rise to breakthroughs in natural language processing, computer vision, robotics, machine learning, deep learning and more. Moreover, AI is becoming ever more tangible, powering cars, diagnosing disease and cementing its role in popular culture. In 1997, IBM's Deep Blue defeated Russian chess grandmaster Garry Kasparov, becoming the first computer program to beat a world chess champion. Fourteen years later, IBM's Watson captivated the public when it defeated two former champions on the game show Jeopardy!. More recently, the historic defeat of 18-time World Go champion Lee Sedol by Google DeepMind's AlphaGo stunned the Go community and marked a major milestone in the development of intelligent machines.

Because hardware, software and staffing costs for AI can be expensive, many vendors are including AI components in their standard offerings or providing access to artificial intelligence as a service (AIaaS) platforms. AIaaS allows individuals and companies to experiment with AI for various business purposes and sample multiple platforms before making a commitment.


See the original post:
What is Artificial Intelligence (AI)? - AI Definition and ...

Artificial Intelligence to Assist, Tutor, Teach and Assess in Higher Ed – Inside Higher Ed

Higher education already employs artificial intelligence in a number of effective ways: course and facilities scheduling, student recruitment campaign development, endowment investments and support, and many other operational activities are guided by AI at large institutions. The programs that run AI (algorithms) can use big data to project or predict outcomes based on machine learning, in which the computer learns to adapt to a myriad of changing elements, conditions and trends.

Adaptive learning is one of the early applications of AI to the actual teaching and learning process. In this case AI is employed to orchestrate the interaction between the learner and instructional material, enabling the program to guide the learner most efficiently toward desired outcomes based upon the learner's unique needs and preferences. Using a series of assessments, the algorithm presents a customized selection of instructional materials adapted to what the learner has demonstrated mastery over and what the learner has yet to learn. This method eliminates needless repetition of material already learned while advancing through the content at the learner's own pace, ensuring that learning outcomes are accomplished.
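The core selection loop described above can be sketched in a few lines (the skill names, score scale and mastery cutoff are illustrative assumptions, not any real adaptive-learning product): skip material the learner has already mastered and present the next unmastered item.

```python
# Minimal adaptive-learning item selection: walk an ordered curriculum
# and return the first skill the learner has not yet mastered.

MASTERY_CUTOFF = 0.8  # assumed passing score on the assessment (0.0-1.0)

def next_lesson(curriculum, assessment_scores):
    """Return the first skill whose assessed mastery is below the cutoff."""
    for skill in curriculum:
        if assessment_scores.get(skill, 0.0) < MASTERY_CUTOFF:
            return skill
    return None  # learner has demonstrated mastery of everything

curriculum = ["fractions", "decimals", "percentages", "ratios"]
scores = {"fractions": 0.95, "decimals": 0.9, "percentages": 0.4}
print(next_lesson(curriculum, scores))  # prints percentages
```

Real adaptive systems model mastery probabilistically and re-sequence content dynamically, but the principle is the same: assessment results drive what the learner sees next, so no time is spent repeating material already learned.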

There is great room for further growth of AI in higher ed, as Susan Fourtané writes in Fierce Education:

The potential and impact of AI on teaching have prompted some colleges and universities to take a closer look at it, accelerating its adoption across campuses. For perspective, the global AI market is projected to reach almost $170 billion by 2025. By 2028, the AI market size is expected to gain momentum by reaching over $360 billion, registering a growth rate of 33.6 percent between 2021 and 2028, according to a report from research firm Fortune Business Insights. The market is mostly segmented into Machine Learning, Natural Language Processing (NLP), image processing, and speech recognition.

One of the pioneers in applying AI to supporting learning at the university level, Ashok Goel of Georgia Tech, famously developed Jill Watson, an AI program to serve as a virtual graduate assistant. Since Jill's first semester in 2016, Goel has repeatedly and incrementally improved the program, expanding the potential to create additional AI assistants. The program is becoming increasingly affordable and replicable:

The first iteration of Jill Watson took between 1,000 and 1,500 person hours to complete. While that's understandable for a groundbreaking research project, it's not a feasible time investment for a middle school teacher. So Goel and his team set about reducing the time it took to create a customized version of Jill Watson. "Now we can build a Jill Watson in less than ten hours," Goel says. That reduction in build time is thanks to Agent Smith, a new creation by Goel and his team. All the Agent Smith system needs to create a personalized Jill Watson is a course syllabus and a one-on-one Q&A session with the person teaching it. "In a sense, it's using AI to create AI," Goel says, "which is what you want in the long term, because if humans keep on creating AI, it's going to take a long time."

Increasingly, many students are accustomed to interacting with AI-driven chat bots. Serving in a wide range of capacities at colleges, the chat bots commonly converse in text or computer-generated speech using natural language processing. These algorithms may even create a virtual relationship with the students. Such is the case with a chat bot named Oli tested by Common App. For 12 months this chat bot communicated with half a million students of the high school Class of 2021 twice a week to guide them through the college application process. In addition to the pro forma steps in the application process, Oli would offer friendly reminders to students to look after themselves in these COVID times, including suggestions to remind them to keep in touch with friends, listen to favorite music or take deep breaths. When the process was complete, Oli texted.

"Hey pal," Oli said one week before officially signing off, "I wanted to let you know that I have to say goodbye soon. Remember, even without me, you're never alone. Don't hesitate to reach out to your advisor or close ones if you need help or someone to talk to. College isn't easy, but it's exciting and you're so ready!" The relationship might have ended there. But some of Oli's human correspondents had more to say. Hundreds of them texted back, effusive in their praise for the support the chatbot had offered as they pursued college. "Research about social robots shows that children view them as sort of alive and make an attempt to build a mutual relationship," writes MIT professor Sherry Turkle. It's a type of connection, a degree of friendship, that excites some researchers and worries others.

Just last month, Google announced a new AI tutor platform to give students personalized feedback, assignments and guidance. Brandon Paykamian writes in GovTech,

[Google Head of Education] Steven Butschi described the product as an expansion of Student Success Services, Google's software suite released last year that includes virtual assistants, analytics, enrollment algorithms and other applications for higher ed. He said the new AI tutor platform collects competency skills graphs made by educators, then uses AI to generate learning activities, such as short-answer or multiple-choice questions, which students can access on an app. The platform also includes applications that can chat with students, provide coaching for reading comprehension and writing, and advise them on academic course plans based on their prior knowledge, career goals and interests.

With all of these AI applications in development and early release phases, questions have arisen as to how we can best ensure that biases are avoided in the AI algorithms used in education. At the same time, concerns have been raised about making sure that learners recognize these are computer programs rather than direct communication with live instructors, that the privacy of learners is maintained, and about related issues in the use of AI. The federal Office of Science and Technology Policy is gathering information with the intention of creating an AI Bill of Rights. Generally, the AI Bill of Rights is meant to clarify the rights and freedoms of persons using, or who are subject to, data-driven biometric technologies.

How is your institution preparing to integrate reliable, cost-effective and efficient AI tools for instruction, assessment, advising and deeper engagement with learners? Are the stakeholders (including faculty, staff, students and the broader community) included in the process to facilitate the broadest input and ensure the advantages and intended outcomes from the use of AI?

More:
Artificial Intelligence to Assist, Tutor, Teach and Assess in Higher Ed - Inside Higher Ed

Can artificial intelligence boost the mortgage industry? – Mortgage Professional America

Shay Sabhikhi and Matt Sanchez, co-founder and COO and founder and CTO, respectively, of CognitiveScale, spoke with Mortgage Professional America to describe the efficiencies of scale achieved since launching TrustStar, a SaaS-based product designed to provide mortgage companies with AI-powered market intelligence.

Sanchez used an example of AI's use in lending: "If you were to get a bad decision, let's say you were denied credit for something, you'd want to know what that decision was based on, of course, as a consumer. And perhaps you might even want to know what you can change to get a better decision. That level of explanation is something we find very important. It becomes more important when you introduce artificial intelligence."

A growing number of mortgage firms are using AI to weed out bias in appraisals. CognitiveScale aims to push its AI product, primarily to make the workload more efficient for mortgage professionals.

"Number one, there's a lot of data," Sanchez said. "If you were to go trying to figure it all out on your own, you'd spend hours researching and calculating things and building models and trying to figure out how to identify opportunities. So that takes time."

"Even when we talk to chief lending officers in organizations, they have the same challenges. They have so much data to deal with and so many things they have to do on the analytics side; it would be really nice if we can deliver those insights on their doorsteps every day. We're doing a lot of that heavy lifting with data sources that are typically not institution-specific. They certainly include information about lots of financial institutions, but it's not internal bank data we're necessarily looking at. It is things that are outside."

View post:
Can artificial intelligence boost the mortgage industry? - Mortgage Professional America

Lior Cole Is The Model Combining Artificial Intelligence With Religion – British Vogue

When she's not modelling, she's developing Robo Rabbi, an artificial-intelligence project that taps into the teachings of the Torah. Think spiritual guidance via a computer. "People look at computers as if they are calculators and are binary, but I like computers so much because there is this algorithm of giving advice and showing how A.I. has humanlike abilities," she says. "They have a perspective now, and people don't see computing in that light." Cole began thinking about the project during Rosh Hashanah, the Jewish New Year and a time of new beginnings. Robo Rabbi starts with a person's birth parsha, a Torah portion with a lesson that corresponds to a person's birthday. From that, Cole developed a system that will give a challenge derived from the parsha that is intended to help the person strive to become their best self. If a person's parsha focuses on giving back, Cole's A.I. program will give the person a 10-day challenge that encourages them to be charitable.

Cole explains that the Robo Rabbi taps into the boundlessness of A.I. Thanks to the GPT-3 A.I. technology, a natural-language processor, the parsha lessons and challenges come from the A.I. technology itself, allowing Cole to view herself as simply the messenger. "Rarely does A.I. touch spirituality and religion," says Cole. "I am doing other projects that touch into the sentient dimensions, but there has yet to be a computer that is entirely human, that is sentient, or has human abilities."

According to Cole, a computer having its own point of view isn't unheard of. "There are computers that can mimic humanlike capabilities," Cole says. "The technology has a perspective and is articulating that perspective of knowledge on the internet, so it isn't unique." Those opinions can be channeled into a medium like Robo Rabbi, which is meant as an enlightening teaching mechanism.

Cole's other projects include a children's book about computer science. "I was looking at a children's book for computer science, and it is math and coding centric. I am such a computer nerd, but I don't like coding," she says. "Kids should be exposed to the more human side [of computers]." She is also creating a coffee-table book to train an A.I. algorithm to program its own art and is involved in a fashion collective at Cornell, where she is developing a digital model that will be available on the NFT marketplace. Her other A.I.-minded project? Well, that she signed an NDA for.

As for modelling, Cole wants to pursue it as long as possible and considers it another curious path for her to explore. "When I was younger, I wasn't like, 'Oh, I want to be a computer scientist when I'm older.' I figured that out when I was in college," she says. "And now that I got scouted, I'm like, 'This is cool too!'"


Link:
Lior Cole Is The Model Combining Artificial Intelligence With Religion - British Vogue

Join us for a webinar on Human Rights and Artificial Intelligence on 10 January 2022 – Council of Europe

The President of the Conference of INGOs and the President of the Committee on Human Rights and Artificial Intelligence of the Conference of INGOs have the pleasure to invite all members of the Conference to join a webinar on Human Rights and Artificial Intelligence on Monday, 10 January 2022 from 14:00 to 16:00 (CET, Central European Time) with English/French interpretation.

The Committee on Human Rights and Artificial Intelligence of the Conference of INGOs aims to propose to the INGOs of the Conference a common position on artificial intelligence: on its uses and their effects, positive and negative, on human rights in the different fields of activity of the INGOs, in particular education, health, justice, security, the fight against hate speech on the internet, and information and its manipulation.

The webinar will address these reflections and take stock of the work in progress in the European and international bodies: European Union (proposal for a regulation of the European Commission), Council of Europe (conclusions of the work of the CAHAI) and UNESCO (Recommendation).

Link to connect:

https://us06web.zoom.us/j/85797473674?pwd=aVFSVklRNyt6Rno1NkhlbmY3djI0UT09

Passcode: 516555

AGENDA (EN/FR)

Read the original post:
Join us for a webinar on Human Rights and Artificial Intelligence on 10 January 2022 - Council of Europe