Archive for the ‘Artificial General Intelligence’ Category

LSE leads the way with new AI Management course – The London School of Economics and Political Science

Please find a Q&A with Dr Aaron Cheng about the new course below:

Can you tell me about the course and the content?

The course title is Managing Artificial Intelligence. As you can tell, it's a human-centric approach to AI. I proposed this course because we have seen many courses at our School and others worldwide focusing on the technical capability of big data and AI. They help students see the potential of this technology, but they rarely give a hands-on managerial perspective or guidelines for how we manage AI.

For the course, we have 10 lectures to cover both the technicality and management of AI, as well as the social and ethical considerations; balanced to give students different perspectives on AI.

The course is supplemented with nine seminars so students can be exposed to, and engage in, the real-world managerial practices of AI. Among them, we have three case study sessions covering product development, human-in-the-loop, business models, and global strategy of AI applications in various contexts, such as social media, healthcare, and telecommunications. So it's a fascinating line-up of teaching cases showing that AI is real and that managing AI is now the priority of many organisations, not something we are merely envisioning and predicting for the future.

We also have an interesting debate on generative AI, the newest form of AI that can automatically generate content for people to use. We have seen lots of applications around it (e.g., ChatGPT) nowadays. In one of the seminars, students were assigned five roles (employer, university, teachers, students and the AI vendor) and debated the role of this technology in higher education. We wanted to see what kind of issues emerged in this ecosystem, and we did have interesting conversations when students walked in the shoes of different roles. This debate also yielded some regulatory implications for how AI should be managed in the higher education context.

The most exciting task for students is the team project on AI management. Student teams develop, present and progress their projects across four seminars by incorporating what they learned in the lectures into their AI projects. Most of the teams start with a pressing business or societal challenge and then develop their start-ups around an AI solution.

Some of the students looked at whether journalism or public relations work can be fully automated, and in the end they concluded it cannot. One of the teams looked at how predictive analytics can be used to help university students and teachers book spaces and make appointments. As you can tell, all of these projects are innovative and could be brought to the market for real, so the students are very excited about that.

Overall, we find that students love the course. Their learning kept pace with the rapid changes in the field of AI, especially in the several months since the launch of ChatGPT, as many technology companies raced against each other to push innovations forward on a daily basis. The field is fascinating, although it creates course design challenges for us to keep up.

Is the course designed for students working for companies coming up with AI projects?

It can be for students who wish to work in any sector that is now embracing this technology. It's important to note that although we need IT developers and data scientists to create AI and data-driven solutions, we need even more skilled professionals who know both technology and management to diffuse such innovations.

These professionals are often called business analysts and managers; they sit at different levels in an organisation, can lead digital transformation, and often play the role of middlemen connecting the supply and demand of AI and analytics solutions. Statistics from the McKinsey Global Institute showed a shortage of managers and analysts who can use their know-how of big data and AI for effective decision-making that is ten times greater than the shortage of data scientists or machine learning (ML) engineers, who specialise mainly in programming.

To meet the demand for managerial talents in AI, my course does not focus on teaching students how to design technology but more on how to manage it and lead digital transformation with AI.

It's also important to mention the programme that hosts this course: Management Information Systems and Digital Innovation (MISDI), a flagship master's programme of the Information Systems and Innovation Group (ISIG) in the Department of Management (DoM). The faculty expertise in ISIG and the course offerings in MISDI centre on connecting technology know-what with business and management know-how, to give students an edge through this connection.

This is also a student-demand-driven course. Over the past several years, students in MISDI and other programmes in DoM have developed a strong interest in AI issues, and many used topics in AI management for their coursework and dissertations. However, we did not have a specialised course for it.

In other departments at LSE, such as Statistics, there are very good AI and ML courses, but most of them take the perspectives of statisticians or computer scientists. Since 2021, we have had an LSE100 course on how to control AI, which is very well designed from a social science perspective but only for undergraduate students.

To better meet the needs of master's students studying AI management, we have launched this new course in MISDI to integrate multiple perspectives on AI, focus on the managerial considerations, and give a comprehensive and critical treatment of the automation and augmentation roles of AI for individuals, organisations, and society at large.

Is the course designed for people interested in the business side of AI?

I would say so, but I want to stress that it's a more balanced course that also attracts students whose interests go beyond business. Another important thing to mention is that the course is situated in a polarised public discourse with diverse views toward AI.

We have seen two camps. One is held by those who worry about AI and the social and ethical implications of replacing humans in the workplace. The other is a utopian view of AI held by those who advocate only the technical capability of AI to extend the capabilities of humans. The latter obviously has a more positive view of AI but sometimes downplays the existential threats to humans themselves, especially when AI intensifies inequality among people who do not have the knowledge or skills to manage it.

These two camps are very big now but heavily segregated. I feel that they do not talk to each other in a very productive way, as they often debate using distinct language systems. I believe effective communication between these camps is much needed in contemporary society, and people should know the underlying logic and assumptions of the two camps before they develop beliefs and actions about AI. This is especially true for current and future leaders in the private and public sectors. They really need to gain a deep understanding of the potential, promise and perils of AI. They also need a sober view of the AI hopes and hype claimed by the two camps.

I hope this course can plant seeds deep in the hearts of these students, so that when they develop professional careers as business leaders and social planners, they know what AI is and, more importantly, take the responsibility to manage AI for a better future for humanity. At the end of the day, we should be able to create strong AI but also create our own humanity and achieve shared prosperity with AI. This is the overarching idea of the course.

What makes this course unique and different?

Let me talk about similar courses and the difference my course makes in AI management education.

I have attended the biggest IT Teaching Workshop in my field (Information Systems) almost every year for the last five years. In the Workshop, teachers from most universities in the United States and Europe present their courses about big data and data analytics, yet I have not seen many specialist courses on AI.

Of course, in the Computer Science community, there are many popular courses about machine learning and data science, but they rarely say that these are AI courses. It is important to note that the concept of AI is not just technical but socio-technical. We need to study and teach the nature and implications of AI by examining its technical properties and also its social contexts. As far as I know, few courses have struck such a balance.

One reason most courses focus on the technicality of AI is obvious: STEM jobs are much better paid than many others. Preparing students for such jobs helps increase the popularity of universities, which further encourages the offering of technical AI or data science courses.

Leading the social science approach in higher education, LSE has its strength in cultivating leaders who can think and navigate social changes, especially the current transformational change led by AI. As such, we offer this new LSE course to situate the debate on AI in the academic and public discourse and approach AI education in a more comprehensive and critical way. We start with the history of AI, we discuss the role of data in making AI, and we unpack the black box of algorithms and issues involved (e.g., opacity, bias, interpretability).

Then we walk students through the socio-technical analysis of AI management at different levels. On the individual level, we assess the role of humans in the loop and when and how human judgment needs to be exercised in designing and using AI. On the organisational level, we analyse the business model, operations, and innovation with, and governance of, AI. On the societal level, we discuss the ethical concerns and regulatory efforts on managing AI for good. As you can tell, with this approach to AI, students start to think about and raise their own critical questions about AI management in the digital economy.

What made you personally think this course was really needed?

I would like to start with my educational background and then my reading and thinking about AI in the past decade to answer this question.

Starting with my college education 15 years ago, I have been in the same discipline, management information systems; initially, my training was technical and particularly computer-science oriented. My understanding of technology then deepened when I moved to my master's programme and was exposed to a more behavioural perspective on how people interact with technology. Later, my PhD training in the economic analysis of information technology helped me engage in studying the bigger role of technology in businesses and society.

Now I am a researcher and teacher of information systems and innovation, and LSE has really broadened my horizons with its social science approach to technology. Throughout my educational journey, AI has been with me for many years, albeit more often in the form of algorithms or machine learning techniques.

AI did not catch much of my attention, nor, I am sure, anyone else's, until the field boomed, especially when deep learning and generative models were developed and used to create powerful applications such as deepfakes and ChatGPT. People say nowadays that the era of artificial general intelligence is coming, in contrast to the past decades of artificial narrow intelligence (AI that can only serve a small set of pre-specified purposes and automate tasks as ordinary software does).

Over time I realised AI has so much potential to change human life in positive ways. At the same time, worry about the apocalyptic claim that machines will be the end of humanity has reached an all-time high. I think it's time for us to seriously think about and study how to manage AI.

Teaching AI management is an opportunity for me as a researcher to explore with students the socio-technical nature and implications of AI and how we can be more responsible in designing and deploying AI. I am happy that my students have been excited about this course and really engaged in and benefitted from this journey.


What You Should Know About Google's Upgraded Bard Chatbot – Unite.AI

Google's annual I/O 2023 developer conference was abuzz with significant announcements revolving around artificial intelligence. The tech giant unveiled a multitude of AI enhancements for Google apps and services, with a notable spotlight on their large language model (LLM), PaLM 2, and the upgraded Bard, Google's experimental conversational chatbot. As we delve into the advanced capabilities of Google's AI chatbot, it's crucial to retrace Bard's journey and understand its underpinnings.

Debuted in February, Bard marked Google's innovative foray into AI-based conversational chatbots, akin to OpenAI's renowned ChatGPT. Bard was initially equipped with a scaled-down version of Google's Language Model for Dialogue Applications, LaMDA. This AI chatbot was designed to interact with users in a human-like manner, engaging in conversation, generating ideas, writing essays and code, and even tackling math problems.

However, the initial version of Bard received criticism for its limited capabilities and factual inaccuracies. Google's CEO, Sundar Pichai, acknowledged these limitations, revealing that they were intentional and part of the plan to progressively enhance Bard's capabilities with more potent LLMs.

Fast forward to Google I/O 2023: Google delivered on its promise by upgrading Bard with the latest version of the Pathways Language Model, PaLM 2. This move marks a significant leap from LaMDA, amplifying Bard's capabilities.

Initially, Bard was accessible exclusively to a select group of trusted testers in the US and the UK. Although the waitlist opened in March 2023, Bard remained inaccessible to the general public. However, Google has now broadened Bard's availability to over 180 countries and territories. While currently available only in English, Google plans to extend Bard's language support to Japanese and Korean, followed by an additional 40 languages in the future.

With the integration of PaLM 2, Bard's functionality has experienced considerable improvements. The chatbot now boasts superior math, logic, and reasoning skills. It's capable of generating, explaining, and debugging code in over 20 programming languages, aiding developers in their programming endeavors.

Bard's latest version also brings forth a more visual and interactive user experience. Users can now provide image inputs to Bard, which can then respond with relevant information, leveraging Google tools like Google Lens. Moreover, Bard can generate humorous captions, further enhancing user engagement.

Not only can Bard's responses be directly exported to Gmail and Google Docs, but the chatbot also has the ability to browse the web for images, tap into knowledge graphs for relevant information, and utilize Google Maps for location-related queries. The integration with Google Sheets further augments its utility.

In a bid to expand its collaborative functionalities, Google plans to integrate Bard with external services such as Adobe Firefly. This integration will allow users to generate new images from text prompts and bring them to the editing table. Google is also establishing connections between Bard and other partners like Kayak, OpenTable, ZipRecruiter, Instacart, Wolfram, and Khan Academy.

With the announcement of the upgraded Bard chatbot, Google is poised to challenge the dominance of OpenAI's ChatGPT. One of Google's strategic moves was the introduction of a lightweight version of PaLM 2 called Gecko, designed for smartphone integration, enabling users to run it locally on their Android devices. Besides Gecko, there are other, more potent versions, including Otter, Bison, and Unicorn.

In a head-to-head comparison between Bard and ChatGPT, both AI chatbots display impressive capabilities. However, certain distinguishing factors could tip the scales in Bard's favor. When it comes to translating complex phrases, Bard provides more context, enhancing the comprehensibility of the translations.

Bard also outperforms ChatGPT in the realm of coding. With its support for over 20 programming languages, Bard can assist professionals with code generation, explanation, and debugging, and it does so with a faster response time compared to ChatGPT.

Another advantage Bard holds over ChatGPT is its connectivity to the internet. For example, when asked about the differences between OpenAI's GPT-4 and Google's PaLM 2, Bard can provide an up-to-date response, while ChatGPT is limited to information from before late 2021.

Despite these advantages, Bard has a few limitations. One notable drawback is the lack of source backing for the information it provides, which can potentially lead to the spread of false information. Additionally, unlike ChatGPT, Bard doesn't allow access to previous interactions.

As Google continues to refine and expand Bard's capabilities, it becomes increasingly clear that Bard is set to become a major competitor to OpenAI's ChatGPT. The advancements in AI chatbots, as exemplified by Google's Bard, are a testament to the vast potential of AI in enhancing user experience and interaction.

ChatGPT, with its early-mover advantage, has become a household name in tech and has a dedicated user base accustomed to its functionalities. However, with Bard being offered for free compared to the GPT-4-powered version of ChatGPT priced at $20 a month, Google's chatbot has a competitive edge.

Both OpenAI and Google are working on enhancing their chatbots, with plans to make them multimodal and compatible with plugins. As the AI chatbot industry evolves, tech giants such as Google, Microsoft, and Meta are in a heated race to provide the most innovative and powerful offerings.

Although OpenAI's ChatGPT had a head start with its launch in November 2022, Google is rapidly closing the gap. With its strategic improvements and expansive global reach, Bard is emerging as a significant threat to the dominance of ChatGPT.


AI's Impact on Journalism – Signals AZ


Turning things on their heads is something we like to do here at Talking Glass Media in Prescott Valley. We wanted to share with our readers what an AI (artificial intelligence) chatbot like ChatGPT, by the company OpenAI, would say if we asked it, "How will AI impact journalism?" Before we share that answer, here's some introductory information on ChatGPT and the company OpenAI.

According to the OpenAI website: "OpenAI is an AI research and deployment company. Our mission is to ensure that artificial general intelligence (AI systems that are generally smarter than humans) benefits all of humanity. We are governed by a nonprofit and our unique capped-profit model drives our commitment to safety. This means that as AI becomes more powerful, we can redistribute profits from our work to maximize the social and economic benefits of AI technology."

AI has already begun to impact journalism in various ways, and this trend is likely to continue and intensify in the future. Here are some areas where AI is expected to have a significant impact:

Automated Reporting: AI can generate news stories from structured data. For example, templates can be created to report on financial earnings or sports results, which AI can then populate with the relevant numbers. This automation can free up human journalists to focus on more complex stories.
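To make the template idea concrete, here is a minimal sketch of template-based report generation. The company name, figures, and template wording are all hypothetical, and real systems use far richer templates and data validation:

```python
# Minimal sketch of automated reporting: a fixed story template is
# populated with structured earnings data. All names and figures
# below are hypothetical.

TEMPLATE = ("{company} reported revenue of ${revenue}M for {quarter}, "
            "{direction} {change}% from the prior quarter.")

def earnings_story(company, quarter, revenue, prev_revenue):
    change = round(abs(revenue - prev_revenue) / prev_revenue * 100, 1)
    direction = "up" if revenue >= prev_revenue else "down"
    return TEMPLATE.format(company=company, quarter=quarter,
                           revenue=revenue, direction=direction,
                           change=change)

print(earnings_story("Acme Corp", "Q2", 120, 100))
# -> Acme Corp reported revenue of $120M for Q2, up 20.0% from the prior quarter.
```

News agencies have applied the same principle at scale to earnings and sports recaps; the template does the writing, and the journalist's judgment goes into designing it.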

Data Analysis: AI tools can analyze vast amounts of data to identify trends, correlations, and outliers that might be relevant to journalists. These tools can do this work far faster and more accurately than humans, enabling journalists to uncover stories they might not otherwise find.
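As an illustration of the kind of screening involved, here is a toy sketch that flags outliers in a numeric series using z-scores. The spending figures are invented, and real newsroom tooling would be far more sophisticated:

```python
# Toy sketch of data screening for journalists: flag values that sit
# far from the mean (z-score above a threshold). Figures are invented.
from statistics import mean, stdev

def outliers(values, threshold=1.5):
    # A low threshold is used because tiny samples cap attainable z-scores.
    mu, sigma = mean(values), stdev(values)
    return [v for v in values if abs(v - mu) / sigma > threshold]

# Hypothetical departmental spending, in millions; one entry is anomalous.
city_spending = [1.1, 1.2, 1.0, 1.3, 1.1, 9.8]
print(outliers(city_spending))  # -> [9.8]
```

The flagged value is not a story by itself, but it tells the journalist where to start asking questions.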

Verification: With the rise of deepfakes and other forms of digital deception, AI tools are being developed to help journalists verify the authenticity of images, videos, and other forms of media. These tools can help journalists navigate the increasingly complex landscape of digital misinformation.

Personalization: AI can help to tailor news content to individual readers based on their interests, reading habits, and other factors. This can help to increase engagement and keep readers coming back for more.

News Gathering: AI can assist in gathering news from various sources, including social media, blogs, and forums. This can greatly enhance a journalist's ability to gather information and stay updated with the latest happenings.

Fact-checking: AI can help automate the fact-checking process, which is becoming increasingly important in the era of fake news. By cross-referencing information with reliable sources, AI can help ensure the accuracy of news reporting.
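The cross-referencing step can be caricatured in a few lines. The reference table, facts, and tolerance below are hypothetical placeholders for what would really be a curated source database:

```python
# Toy sketch of fact-checking by cross-reference: a claimed figure is
# compared against a trusted reference table. All data is hypothetical.

REFERENCE = {"city population": 47000, "unemployment rate": 4.2}

def check_claim(fact, claimed_value, tolerance=0.01):
    """True/False if verifiable within tolerance; None if unknown."""
    if fact not in REFERENCE:
        return None  # cannot verify against this source
    true_value = REFERENCE[fact]
    return abs(claimed_value - true_value) <= tolerance * true_value

print(check_claim("city population", 47100))  # close enough -> True
print(check_claim("city population", 60000))  # far off -> False
```

The hard part in practice is not the comparison but maintaining the trusted reference data; the `None` case is a reminder that most claims cannot be checked automatically.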

While these are significant benefits, there are also important ethical considerations and potential downsides to consider. AI could potentially displace some journalists from their jobs, particularly those involved in more routine, data-heavy reporting. There's also the risk that AI-generated news could be manipulated for propaganda or misinformation purposes. And there are questions about how the use of AI for personalization could lead to echo chambers, where people are only exposed to news that reinforces their existing beliefs.

Finally, there's the question of transparency and accountability. If a news story is generated by an AI, who is responsible for its accuracy? These and other questions will need to be addressed as AI continues to play a larger role in journalism. [end ChatGPT answer on how AI will impact journalism]

So there you have it: what an artificial chatbot like ChatGPT by OpenAI thinks of the impact AI will have on journalism. What do you think of this most recent proliferation of AI in our society? This is just the beginning. We should mention that AI fixes and improves itself along the way.

Of course, you can also listen to this entire article on the SignalsAZ podcast, which is a text-to-speech service generated by machine-learning speech software. Talking Glass Media (owner of SignalsAZ.com) was the first publisher in the region to adopt this technology.


The path ahead for generative AI – Inside Higher Ed

Early in 2019, GPT-2 was announced by OpenAI, the private company, governed by a nonprofit, that now counts $11 billion in investments from Microsoft Corporation. Compared to what was to follow, the development was relatively quiet. Claudia Slowik and Filip Kaiser write in the Neoteric blog, "On March 15, 2022, OpenAI released the new version of GPT-3 called text-davinci-003. This model was described as more capable than previous versions of GPT. Moreover, it was trained on data up to June 2021, making it way more up-to-date than the previous versions of the models (trained on data up to Oct 2019)." It was with the 3.5 series of text- and code-completion versions that GPT took off. With the 4.0 version, released in March 2023, an all-out scramble launched to create interfaces, apps and associated products to facilitate new and expanded access.

Google is one of the many firms engaged in efforts to catch up with the OpenAI release. After a flawed demo at the release of Google's Bard, The Decoder reports that Google's two large AI research centers, DeepMind and Google Brain, have pulled together to support the Gemini project, a large language model that will have a trillion parameters.

It was less than a month and a half ago, on March 30, 2023, that Auto-GPT was posted on GitHub by developer Significant Gravitas. As Wikipedia explains, Auto-GPT is "an AI agent that, given a goal in natural language, can attempt to achieve it by breaking it into sub-tasks and using the internet and other tools in an automatic loop. It uses OpenAI's GPT-4 or GPT-3.5 APIs, and is among the first examples of an application using GPT-4 to perform autonomous tasks."


With Auto-GPT, we have crossed the virtual Rubicon from the relatively simple, single-step activities of earlier GPT models, through the sequencing of independent steps, to a complex feedback loop of multiple activities and assessments toward a defined outcome. Sabrina Ortiz writes in ZDNet, "This means that Auto-GPT can perform a task with little human intervention, and can self-prompt. For example, you can tell Auto-GPT what you want the end goal to be and the application will self-produce every prompt necessary to complete the task." Ortiz suggests the application's "promising, autonomous abilities may make it our first glimpse of artificial general intelligence (AGI), a type of AI that can perform human-level intellectual tasks." The GitHub demo shows sample goal prompts such as "increase net worth," "grow Twitter account," and "develop and manage multiple businesses." The limitations listed on GitHub do warn that Auto-GPT's output "may not perform well in complex, real-world business scenarios." However, the results users have been sharing show that Auto-GPT can deliver some really impressive (and helpful) results.
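The "automatic loop" described above can be sketched in miniature. This is not Auto-GPT's actual code; the planner and executor below are stubs standing in for calls to the GPT-4/GPT-3.5 APIs and external tools:

```python
# Hypothetical sketch of an Auto-GPT-style loop: a goal is broken into
# sub-tasks, each executed in turn. Real agents call an LLM and tools
# at every step; here both are stubbed for illustration.

def plan(goal):
    # Stub planner; Auto-GPT would prompt GPT-4 to decompose the goal.
    return [f"research: {goal}", f"draft plan for: {goal}", f"execute: {goal}"]

def execute(task):
    # Stub executor; real agents would browse the web, run code, etc.
    return f"done: {task}"

def run_agent(goal):
    tasks = plan(goal)
    results = []
    while tasks:                       # the automatic loop
        task = tasks.pop(0)
        results.append(execute(task))  # results could spawn new sub-tasks
    return results

for line in run_agent("grow newsletter audience"):
    print(line)
```

The qualitative shift is in `run_agent`: instead of a human issuing each prompt, the loop generates and consumes its own sub-tasks until the goal is exhausted.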

The development of generative AI has been so rapid that we have seen calls to pause development. Yet these calls are expressions of alarm rather than any reasonable expectation that worldwide research on such a hot topic will be delayed. Such a pause would be impossible to enforce, given the number and diverse locations of sites performing research in this field.

Led by developments in generative AI, we are on our way to AGI. It will not be a straightforward path, and there are numerous high hurdles to overcome, but we have passed an inflection point with the capabilities of Auto-GPT. Ben Lutkevich of Tech Target writes:

Artificial general intelligence (AGI) is the representation of generalized human cognitive abilities in software so that, faced with an unfamiliar task, the AGI system could find a solution. The intention of an AGI system is to perform any task that a human being is capable of. AGI is considered to be strong artificial intelligence (AI). Strong AI contrasts with weak or narrow AI, which is the application of artificial intelligence to specific tasks or problems. IBM's Watson supercomputer, expert systems and self-driving cars are examples of narrow artificial intelligence.

How long will it take to develop strong AI? No one knows for certain. Almost certainly, it will take years, but perhaps not the decades that had been previously predicted. We must remember just how quickly the current GPT and associated models have emerged.

What will widespread AGI mean? Again, no one knows for sure. What we do know is that many more human jobs will be performed by strong AI programs. The computers and AI programs will work tirelessly, efficiently and effectively. Of course, there will still be the need for many humans engaged in a myriad of tasks that are not best completed by AGI. We may see shorter workweeks for humans. New human-staffed careers may evolve to employ the displaced workers.

The implications for education are many. Will we still need the knowledge to perform tasks that are regularly completed by AI? Knowledge of how to direct and expand AIs expertise in these areas will be essential. What human skills and abilities will be in most demand? Human values and ethics will be essential to guide programs if we are to coexist comfortably. AGI may be able to extend our knowledge and information in math and the sciences. Perhaps it will bring new insights and opportunities in the arts and humanities that have been in decline at universities in recent years.

With the advent of Auto-GPT, there is now a vision of a pathway for generative AI to take on increasingly multivariate tasks. Ever more complex objectives will be assigned to these more advanced AI apps. We must be vigilant to assure that human values and ethics guide the development in the coming months and years.

We also must carefully monitor the advent of AI in our career fields so that we are not caught unaware when there are reductions in the human workforce due to computer-generated efficiencies. This will require communication, collaboration and shared vision among researchers, corporations and educators. We will do well to recall the warning of Aldous Huxley nearly a century ago that the Brave New World may await those who exclusively value efficiency and technology over human emotion and individuality.


What is AI? | National | foxbangor.com – FOX Bangor/ABC 7 News and Stories

AI, or artificial intelligence, is a branch of computer science designed to simulate human intelligence: mimicking human capabilities, including the completion of tasks, processing human language and performing speech recognition. AI is the leading innovation in technology today, and its primary goal is to eliminate tedious tasks and assist in immediately accessing extremely detailed and hyper-focused information and data.

AI has the ability to consume and process massive datasets and develop patterns to make predictions for the completion of future tasks.

While the interest in AI around the world is growing, the science poses an existential crisis for jobs, companies, whole industries and potentially human existence. In March, Goldman Sachs released a report and warned the public of the threat to jobs that AI, and ChatGPT, an artificial intelligence chatbot developed by AI research company OpenAI, poses. The report revealed that jobs with repetitive responsibilities and some manual labor are at risk for automation. The report concludes that 300 million jobs could be affected by AI.

ARTIFICIAL INTELLIGENCE FAQ

In simple terms, artificial intelligence is computer science capable of completing tasks that humans already perform or that require human intelligence to complete.

AI uses technology to learn and recreate human tasks. Currently, in some situations, AI has the ability to perform human tasks better than we do, which poses a threat to the workforce.

While it may seem AI has only recently become popular or relevant to society, it has been used in many ways for years.

Reactive machines are task-specific and a basic form of AI. They react to the input provided to them and, given the same input, always offer the same output. In the form of reactive machines, AI does not learn new concepts. These machines apply datasets and respond with recommendations based on already existing inputs.

An example of reactive machines is the recommendations section on Netflix, whereby TV shows and movies are recommended by the streaming service to a user based on their search and watch history.
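A reactive recommender of this kind can be sketched in a few lines. The catalogue, genres, and scoring rule are invented for illustration and bear no relation to Netflix's actual system:

```python
# Toy reactive recommender: rank unseen titles by genre overlap with
# the user's watch history. Titles and genres are made up; the same
# input always yields the same output, and nothing is learned.

CATALOGUE = {
    "Space Saga":  {"sci-fi", "adventure"},
    "Robot Dawn":  {"sci-fi", "drama"},
    "Baking Wars": {"reality", "food"},
}

def recommend(watched_genres, history):
    unseen = [t for t in CATALOGUE if t not in history]
    return sorted(unseen,
                  key=lambda t: -len(CATALOGUE[t] & watched_genres))

print(recommend({"sci-fi", "drama"}, history={"Space Saga"}))
```

Note the defining property of a reactive machine: rerunning `recommend` with the same arguments always returns the same ranking, because nothing in the system updates itself.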


Limited-memory AI understands by storing previously captured and learned data, and it builds knowledge for the future based on its findings. An example of limited memory is self-driving cars.

Self-driving cars use signals and sensors to detect their surroundings and make driving decisions. The cars compute where pedestrians, traffic signals and low-light conditions exist, in order to drive more cautiously and avoid accidents or traffic errors.

Theory of mind means that humans have thoughts, feelings, emotions, desires, etc. that impact their day-to-day behaviors and decisions. While early adaptations of AI struggled with theory of mind, it has since made astonishing improvements. In order for AI to achieve theory of mind, it must understand that everyone has feelings and develop the ability to change its behavior as humans do.

An example of theory of mind for humans is to see a wilted plant and understand that it needs to be watered in order to survive. In order for AI to have theory of mind, it will need to do the same.

As of February 2023, ChatGPT specifically had reportedly passed a theory of mind test at a level commensurate with that of a 9-year-old.

Finally, when AI is self-aware, the stages of development will be complete. Self-awareness is the most challenging of all AI types, as such machines will have achieved human-level consciousness and emotions, and will be able to empathize accordingly.

Once the machine has learned to be self-aware, it will have the ability to form its own identity.

This stage of self-awareness is not currently possible. In order for self-awareness to become a possibility, scientists will need to find a way to replicate consciousness in a machine.

Challenger, Gray & Christmas, a coaching company in Chicago, found in an April report that ChatGPT could replace 4.8 million jobs in the future. Specifically, ChatGPT would replace job roles that are repetitive and predictable including copywriters, customer service representatives, cashiers, data clerks, drivers and more.

Individuals with graduate degrees are most fearful of losing their jobs to AI: nearly 69% of them expressed that fear, according to a Tidio survey. While humans are becoming increasingly alarmed by AI, we are already using it in our daily lives in ways people might not even realize.

Here are some of the most popular and typical ways we're already leveraging AI.

Facial recognition is being used mostly by law enforcement to identify criminals and assess potential threats. Individuals use it daily to unlock smart devices, and it appears in social media features like Facebook's photo tag recommendations.

Determining violations of community guidelines, facial recognition, and translation tools for language interpretation are just a few ways social media is operating alongside AI.

Google Home, Amazon Alexa and Apple Siri are all examples of voice assistants that employ AI. Voice assistants use natural language processing and can discover patterns and behaviors among users in order to store preferences and offer better results to consumers. The more you use them, the more the voice assistant learns.
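The "learns from repeated use" behavior described above can be illustrated with a minimal frequency-counting sketch. The class and the request strings are hypothetical, not any vendor's actual API:

```python
from collections import Counter

# Hypothetical sketch of preference learning by frequency: count
# repeated requests and surface the most frequent one first.
class PreferenceLearner:
    def __init__(self):
        self.requests = Counter()

    def handle(self, request):
        self.requests[request] += 1  # each use reinforces the pattern

    def top_suggestion(self):
        most = self.requests.most_common(1)
        return most[0][0] if most else None
```

Real assistants use far richer models, but the principle is the same: repeated behavior shifts what the system offers first.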

Smart home devices are used in a variety of ways including the protection and security of your home. Technology like Ring doorbells and Nest security systems use AI to detect movement and alert homeowners.

Voice assistants like Siri and Alexa are also examples of smart devices.

Search engines like Google, Bing and Baidu use AI to improve search results for users. Recommended content based on initial search terms is provided to users every time they search. Search engines use natural language processing, a branch of AI, to recognize search intent in order to provide exemplary results.

For example, if you search for "rose," results for rosé the pink wine, the rose flower, the singer Rose or the verb "rose" may appear. When you provide context to your search, AI assimilates it and suggests results accordingly.

If you're using Google to query "Marylin Monrow," the search engine giant suggests the correct search term and results for "Marilyn Monroe." Search engines use AI to grasp spelling, context, language and more in order to best satisfy users.
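A toy version of that "did you mean" correction can be built from string similarity alone. This is a sketch under stated assumptions — the vocabulary is a hypothetical stand-in for what a real search engine learns from billions of query logs:

```python
from difflib import get_close_matches

# Toy "did you mean": match a query against known terms by string
# similarity. KNOWN_TERMS is hypothetical illustration data.
KNOWN_TERMS = ["marilyn monroe", "marlon brando", "rose"]

def suggest(query):
    # Return the closest known term, or the query unchanged if
    # nothing is similar enough.
    matches = get_close_matches(query.lower(), KNOWN_TERMS, n=1, cutoff=0.6)
    return matches[0] if matches else query
```

Here `suggest("Marylin Monrow")` maps to "marilyn monroe" because the misspelling is still closer to that term than to anything else in the vocabulary.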

AI is also the power behind the rapid adaptation of search results. Trillions of searches are performed every year, and humans don't have the ability to comb through the results, but AI does.

When you come home from a long day at work to relax on the couch and throw on Netflix, you're leveraging AI to help you choose the next TV show or movie you'll watch. When you log onto Instagram or Facebook and a suggested list of new followers or friends appears, you're experiencing the power of AI. When you open your Google Maps app and type "gas" into the search bar to locate the closest gas station near you, you're using AI to make your life easier.

Artificial narrow intelligence or ANI is also known as "Weak" AI. ANI systems are capable of handling singular or limited tasks and are the exact opposite of strong AI, which handles a wide range of tasks.

Examples of ANI include Apple's Siri, Netflix recommendations and weather apps that let you check the forecast for the day or the week. While Siri can assist with numerous tasks like announcing calls or text messages, playing music and launching smart device apps, it struggles with tasks outside its immediate capabilities.

ANI systems are not self-aware and do not possess genuine intelligence, according to deepAI.org.

ANI uses datasets with specific information to complete tasks and cannot go beyond the data provided to it. Though systems like Siri are capable and sophisticated, they cannot be conscious, sentient or self-aware.

"LLMs have a broader set of capabilities than previous narrow AIs, but this breadth is limited," said Ben Goertzel, expert in Artificial General Intelligence, in a Fox News Digital Opinion article. "They cannot intelligently reason beyond their experience-base. They only appear broadly capable because their training base is really enormous and covers almost every aspect of human endeavor."

Artificial general intelligence or AGI is AI that can perform any intellectual task a human can, according to medium.com. Imagined AGI capabilities range from consciousness to self-awareness. We have seen depictions of life with AGI in movies like "Her" and "WALL-E."

In the Pixar animated film "WALL-E," the sad, lonely robot meets another robot, EVE, and they fall in love; while the characters are depicted as sentient, they are AGI systems. The 2013 film "Her," starring Joaquin Phoenix, features Samantha, an AI operating system that is also an AGI system: she outgrows her first user and goes off to be on her own.

AGI systems learn, execute, reason, and more but do not experience consciousness.

Artificial superintelligence or ASI is the type of AI most people are fearful of. It will have the ability to surpass human intelligence in a number of ways including creativity, self-awareness, problem-solving and more. ASI, if ever created, will have the ability to be sentient. While people are worried about AI becoming sentient, the technology is years away from such capabilities.

In 2018, at the South by Southwest (SXSW) tech conference in Austin, Texas, Elon Musk expressed his concerns over AI and over regulations regarding the development of ASI.

Read the original here:

What is AI? | National | foxbangor.com - FOX Bangor/ABC 7 News and Stories