Archive for the ‘Artificial Super Intelligence’ Category

The AI Revolution: From Evolution to Superintelligence – Cryptopolitan

TLDR

Artificial Intelligence (AI) is rapidly evolving, and it has the potential to surpass human intelligence in the near future. This article explores the AI Domino Effect, outlining the trajectory from current AI capabilities to the possibility of Artificial General Intelligence (AGI) and even Artificial Superintelligence (ASI). It also highlights the critical role of human intervention and the need for regulation to ensure a safe and beneficial AI future.

AI is transforming our world, and its development is accelerating. Tech companies are in a race to create more advanced AI models, like ChatGPT, which is evolving at an unprecedented pace. AI already outperforms humans in various tasks, from stock analysis to medical diagnosis.

Capitalism's profit-driven incentive fuels AI's progress. Historically, capitalism has driven the improvement of various technologies, and AI is no exception. This raises the question: How intelligent can AI become? There is a strong likelihood that AI will soon surpass the brightest human minds, achieving AGI: the ability to excel in any cognitive task.

The AI Domino Effect is a concept that illustrates how AI will evolve over time. It's like a chain of dominoes, where each domino represents a cognitive task of increasing complexity. Early AI could perform basic calculations, but as technology advances, AI conquers more demanding tasks like art creation, coding, and scientific research.

Before reaching AGI, we're likely to see specialized AI in various domains. These AIs, like Midjourney and DALL-E 3 for art or OpenAI Codex for programming, will outperform humans within their niches. This specialization is a step toward AGI.

Artificial General Intelligence (AGI) is the pivotal point in the AI Domino Effect. AGI is not about mastering a single task but excelling in any cognitive task that a human can perform. It matches or exceeds human intelligence across a wide range of tasks.

One ominous domino is the concept of recursive self-improvement, where AI systems improve their code autonomously. This could lead to rapid, unpredictable advancements in AI capabilities.
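To make the worry concrete, here is a deliberately toy sketch in Python of why recursive self-improvement alarms people: if each generation's improvement scales with the capability the system already has, growth compounds faster than any fixed-rate curve. Every number here is invented purely for illustration; no real AI system works like this.

```python
# Toy model of recursive self-improvement. All values are invented;
# this is a cartoon of compounding, not a model of any real system.
capability = 1.0
gain = 0.10  # hypothetical: fraction of current capability turned into improvement

for generation in range(1, 11):
    # A more capable system makes a larger improvement to its successor,
    # so the growth rate itself grows over time.
    capability *= 1 + gain * capability
    print(f"generation {generation:2d}: capability {capability:6.2f}")
```

Run it and the per-generation jumps get steeper each cycle, which is the whole point of the "ominous domino" framing: the curve bends upward on its own.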

Beyond AGI lies Artificial Superintelligence (ASI), where AI surpasses all human intelligence combined. ASI is the final frontier, and its potential is both exhilarating and terrifying. It's a realm where AI could solve complex problems or pose unprecedented risks.

Understanding the journey from AI to AGI and potentially ASI is crucial. It shapes our future in unimaginable ways, from solving challenges humanity faces to the unknown risks posed by superintelligent AI. Technologies like quantum computing could accelerate the leap from AGI to ASI.

The possibility of AI evolving into ASI raises ethical and technical questions. We have no precedent for superintelligence, making it impossible to predict AI's actions accurately. This underscores the importance of careful consideration and regulation.

While technological progress seems unstoppable, human intervention plays a vital role. Collective efforts can shape AI's development to benefit humanity, addressing challenges like climate change and disease. Ethical considerations and regulations act as barriers, slowing AI's advance when necessary.

Ensuring appropriate guardrails for AI development is essential. Without them, there is a risk that an ASI could become hostile. It's crucial to manage AI skillfully, balancing its benefits and risks.

AI's evolution is a double-edged sword. While it offers tremendous potential for good, it also carries the potential for harm. As AI progresses towards AGI and ASI, humanity faces unprecedented challenges and opportunities. To navigate this journey successfully, we must unite, exercise wisdom, and implement effective regulations. It's time to level up in our approach to AI, acknowledging its limitless benefits while mitigating substantial risks.

Disclaimer. The information provided is not trading advice. Cryptopolitan.com holds no liability for any investments made based on the information provided on this page. We strongly recommend independent research and/or consultation with a qualified professional before making any investment decisions.

View original post here:

The AI Revolution: From Evolution to Superintelligence - Cryptopolitan

AI Symposium Explores Flaws and Potential of Artificial Intelligence – The Skanner

The mainstream discussion around artificial intelligence tends to speculate about its dangers: That students will no longer have to complete their own coursework, and will therefore learn nothing. That text-based conversations with people will become indistinguishable from chats with bots. That AI will automate jobs and eventually replace most human skills, even creativity.

A group of Portland Community College faculty and administrators is more interested in having a public conversation about the ethics and reality of AI. To that end, PCC's AI and Cultural Computing cohort is hosting the AI Symposium, a three-day event grounded in both the science of computing and larger ethical considerations.

"By giving folks an understanding of what the technology is, we can have a discussion around it and thus have more control over it," David Torres, new media art instructor, told The Skanner.

"How are we able to use this tool and manage it, ethically, morally?"

It is a question the group felt wasn't being widely explored.

"Having that equity lens was really something we wanted to bring into the conversation," Anne Grey, teaching and learning coordinator at PCC, told The Skanner, noting that human-developed algorithms have already been shown to reflect racism and misogyny. "For example, we don't question our databases: Where do they come from? How do they collect data? That's the premise, that unless we go back to really examine existing inequities, existing flaws, we will be perpetuating the racism. It's about how we are inputting that data. And then talking about the other part of ethics: What is considered ethical? What are the lines we are willing to cross? How are we going to be voicing these things?"

To explore this, the group created a three-day program with daily themes: Oct. 18 covers AI and Education, Oct. 19 explores AI and Industry, and Oct. 20's focus will be AI in Everyday Life.

Kicking off the event is a keynote by a leading computer science ethicist and thought leader.

Brandeis Marshall is a computer scientist and former college professor who founded DataedX Group, a data ethics education agency that takes aim at "automated oppression efforts," instead introducing culturally responsive approaches. It is a rebuttal to Facebook founder Mark Zuckerberg's breezy "move fast and break things" philosophy.

The cohort used Marshall's Data Conscience: Algorithmic Siege on our Humanity as a textbook when exploring how best to prepare students for a changing tech landscape.

"There are really few people who are actually talking about it, and not really talking about the impact this would have, not just culturally but in re-establishing or magnifying some of the inequities that we already see," Grey said.

Marshall focuses on the need for transparency in how AI is developed and applied, accountability in AI development and strategies for how AI might be governed in law and algorithms. Her work has been described as the meeting of social justice and science presented in an accessible, even engaging, way.

"She really changed my thinking in a way that it hasn't been changed in a while," cohort member Melissa Manolas, composition and literature instructor, told The Skanner.

Marshall helped her understand, for example, large language models: the major algorithms that use extensive data sets to understand language and eventually generate text.

"You start to understand why the large-language models end up being so misogynistic and racist: it's how they're trained," Manolas said.
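For readers who have never poked at one directly, here is a minimal sketch of what a large language model does at inference time: it repeatedly predicts the next token given everything before it. This uses a small, openly available model through the Hugging Face transformers library purely as an example; it is not any specific system discussed at the symposium.

```python
# pip install transformers torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# A small, openly available model, used purely for illustration.
tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

inputs = tokenizer("Artificial intelligence will", return_tensors="pt")
# The model extends the prompt one predicted token at a time; whatever
# patterns and biases live in the training data shape these predictions.
outputs = model.generate(
    **inputs,
    max_new_tokens=20,
    do_sample=True,
    top_k=50,
    pad_token_id=tokenizer.eos_token_id,
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

The point Manolas raises falls directly out of this design: the model can only redistribute patterns it absorbed from its training data.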

Marshall is knowledgeable about the culture of programming, and points out "there are already ethical issues coming up that are being rushed through, but which are really crucial to what you get as an end product. So we need to slow down and really call for that kind of thinking, mindfulness about the equity at that stage, not just when you release something and then you deal with it but when you're creating it and programming for it."

Alongside her critiques, Marshall offers hope, she said.

"It's her vision of accountability," Manolas said.

"(These flaws) are not inevitable."

"Sometimes we hide behind the sense of the inevitable as a way to be non-active about it, and she really doesnt let people off the hook like that.

One idea is that an immediate fix could easily happen in computer science education.

"Marshall pointed out time and again that very few curricula and programs that are producing programmers have ethics classes," Manolas said. "If they did, if you stopped and slowed it down, you wouldn't have to wait until you already have these problems. She makes you realize they're completely aware that might happen when they're doing the programming, but there's such an impetus to rush through that phase and then deal with it once it's out there."

Alongside concerns about AI is excitement for its potential as a creative tool and a means of access.

"Technology and new media arts has this history with access," Torres said. "When the camera was given to the public in the '60s (usually you had to have a lot of money to make your own films), the common folk were able to use the camera to make their own content. Fast forward to now, YouTube is mostly people just making their own videos. AI is doing something similar. We start asking, OK, what actually is art? Like when it came to cinema during the '60s when folks had the camera: Beforehand, because Hollywood had cameras, they could just call cinema whatever. Now we're much more critical, because someone on YouTube could do the same thing."

Torres continued, "I go back to when Photoshop first came out: Everybody was using it in a very cheesy way, and it was for all the effects it did. But over time, what ends up happening is you fine-tune how the tool can still be used with the human hand, and I think creatively, that's how things happen in the arts. You look at it in movies: There was a huge craze when it came to 3D in cinema, it was this cheesy thing, and it kind of went away. It's like that with a lot of technology: Usually artists find a way to include that as an extra brush in their toolkit. There was an era where we thought VR was the new thing. In actuality, VR has these specific moments, whether it's in healthcare, but it's not everywhere."

The symposium, funded by a federal AI education grant, is open to the public and, organizers hope, will constitute a large public conversation.

It is also the result of PCC faculty's extensive research, study and conversations with experts like Marshall.

"This is the culmination of the cohort coming together and wanting as an artifact to put this symposium together to share, to disseminate and to have a discussion within the community," Grey said.

Symposium events are either virtual or in-person. Video of virtual presentations will be made available after the symposium.

Symposium agenda below. For more information on Marshall's work, including free articles, visit https://www.brandeismarshall.com/medium.

Keynote Dr. Brandeis Marshall. 10 a.m. to noon, Virtual. Marshall is founder and CEO of DataedX Group, a data ethics learning and development agency for educators, scholars and practitioners to counteract automated oppression efforts with culturally responsive instruction and strategies. Brandeis teaches, speaks and writes about the racial, gender, socioeconomic and socio-technical impact of data operations on technology and society. She wrote Data Conscience: Algorithmic Siege on our Humanity as a counter-argument reference for tech's "move fast and break things" philosophy. An ASL interpreter will be present for this keynote speaker event.

AI and Education 12:30-2 p.m., Virtual. Ahead of the session, AI experts and researchers Cynthia Alby, Kevin Kelly, and Colin Koopman have submitted responses to questions from PCC faculty, staff, and students on the impact of AI on teaching and learning. Their responses will serve as the basis of an open discussion among PCC AICC cohort members and session attendees.

AI at PCC 2:30-4 p.m., Virtual. Presentation and Q&A with PCC AICC cohort members and Academic Affairs administrators on topics including professional development, instructional support, academic integrity and AI, and ChatGPT protocols and best practices. There will also be breakout groups to foster conversations and resource sharing.

AI Campus Workshop 4:30-5:30 p.m., Room 225, Technology Education Building. AICC cohort members will host an open lab for students and the general PCC community in order to showcase resources and equipment available on campuses and facilitate hands-on exercises with commonly-used AI tools.

AI in the Workplace 6-8 p.m., Moriarty Arts and Humanities Building Auditorium. Join us for an opportunity to network and listen to a panel discussion with industry experts about AI in the workplace. Panel guests include: Will Landecker, Data Science Tech Lead, Nextdoor; Emmanuel Acheampong, co-founder, RoboMUA; and Melissa Evers, Vice President, Software and Advanced Technology Group, General Manager of Strategy to Execution, Intel Corporation.

Spotlight Speaker Nick Insalata 10-11:30 a.m., Moriarty Arts and Humanities Building Auditorium. Join us to hear spotlight speaker Nick Insalata, PCC Computer Science faculty and AICC cohort member, talk about the impacts of AI in our everyday lives. Nick is interested in the challenges of making complex problems accessible, properly contextualized, and interesting and fun to learners of all levels.

AI Campus Workshop 12:30-2 p.m., Room 225, Technology Education Building. Hands-on labs will include how to create text, images, and even music with AI tools and the different ways to incorporate ChatGPT. AI tools include DALL-E 2, Modo Screen, Soundraw, Looka, Legal Robot, and Deep Nostalgia.

AI in Everyday Life Panel Presentation 2:30-4 p.m., Moriarty Arts and Humanities Building Auditorium. Join us as AICC cohort members lead a discussion: from doomscrolling to deepfakes, personal assistants to the 'end of work,' AI promises both subtle and stunning transformations to our daily lives. Topics will include superintelligence, virtual and augmented reality, ethical AI, and advanced humanoid robots.

See the original post:

AI Symposium Explores Flaws and Potential of Artificial Intelligence - The Skanner

Artificial intelligence has surprising pick to win 2024 Super Bowl – ClutchPoints

Can artificial intelligence really be better at predicting this year's Super Bowl than experts?

While most people turn to ChatGPT for answers on almost everything, this is beyond the software's capabilities. ChatGPT can tell you the greatest NBA players of all time and help you with your homework, but it is unable to make predictions. In this test, Google's AI (Bard) was asked to predict the 2024 Super Bowl.

Its answer? The Buffalo Bills and Philadelphia Eagles.

At first glance, not seeing Patrick Mahomes and the Kansas City Chiefs in the Super Bowl is odd, especially after all the publicity Travis Kelce has received for dating Taylor Swift. Does Bard also predict that the Kelce-Swift romance won't last until February 2024?

Going back to the matchup between the Bills and Eagles, the answer isn't far-fetched. When Super Bowl odds were released just before the season, the Eagles and Bills had the second- and third-best chance to win, respectively. Bard predicts that this will be the year Josh Allen finally gets over the hump, while the Eagles heading to back-to-back Super Bowls isn't much of a surprise given the depth of their roster.

Right after the Chiefs won their second Super Bowl in four years, 10 ESPN analysts were asked to predict the 2024 Super Bowl. Four of the analysts believe the Chiefs will head back to the Big Game, but will face the San Francisco 49ers (all four also predict the Chiefs to win it all).

Another four went with Joe Burrow and the Cincinnati Bengals in the game, while two others chose the Eagles. Of all the NFC teams, the 49ers were the most popular pick to make it, receiving 7-of-10 votes to be in Vegas next February.

Like with every list, there's always one outlier. In choosing teams to make the Super Bowl, one analyst had Tua Tagovailoa making the leap to lead the Miami Dolphins to their first championship in 50 years.

After gathering predictions from AI and humans alike for Super Bowl 58, the matchup everyone wants to see is the Chiefs and the 49ers.

When Bard was asked to simulate 100 matchups between the Chiefs and 49ers, it gave the Chiefs a 70% chance to win if this matchup were to happen.
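Bard doesn't disclose how it "simulates" anything, but the simplest reading of that claim is a coin-flip Monte Carlo. Here is a hedged sketch of what 100 simulated matchups at a fixed 70% win probability would look like; the probability is just the figure Bard reported, not anything derived from football data.

```python
import random

def simulate_matchups(p_chiefs_win: float = 0.70, n: int = 100, seed: int = 42):
    """Toy Monte Carlo: n independent games, Chiefs win each with probability p."""
    rng = random.Random(seed)
    chiefs = sum(rng.random() < p_chiefs_win for _ in range(n))
    return chiefs, n - chiefs

chiefs_wins, niners_wins = simulate_matchups()
print(f"Chiefs {chiefs_wins} - 49ers {niners_wins} across 100 simulated games")
```

With a fixed seed the split lands near 70-30, which is all a "70% chance over 100 matchups" claim really encodes.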

View original post here:

Artificial intelligence has surprising pick to win 2024 Super Bowl - ClutchPoints

Artificial Intelligence isn’t taking over anything – Talon Marks

Let's all take a deep breath and relax about all of this AI stuff, because the worries of it taking over songwriting are ridiculous.

Artists do not need to worry about AI songwriting taking over because the difference in quality is hugely noticeable.

At first, AI music seemed to be something that real songwriters should be concerned about, when somebody going by the name Ghostwriter created a song with AI-generated vocals of Drake and The Weeknd called "Heart on My Sleeve."

To be fair, when the song came out it left a lot of people amazed because of how close the sound was to the two artists.

The song was so good that it was even eligible for a Grammy Award, which is super impressive. But then again, the Grammys in recent years have been viewed by many as a joke due to some outlandish winners.

Since then, the AI songwriter Ghostwriter hasn't had a song blow up as big as that one, nor has any AI song come anywhere near its quality.

That's simply because it is not the real artist. No matter how good it may sound, people know it's not the actual artist, so why should we care?

Drake is releasing his newest album, For All the Dogs, on Oct. 6. Do you think that if Ghostwriter released an AI Drake album on the same day, more people would tune into that album?

Of course not, because AI music doesn't come close to the actual artist in terms of quality.

For the most part, AI music is being used in another way: AI covers. That's where people take an artist, for example Juice WRLD, and make a cover of him singing "Love Yourself" by Justin Bieber.

The covers sound amazing, and they aren't harmful to the artist because the song was already released, so what should the artist be worried about?

The only real reason an artist would get concerned over an AI cover is if the cover started doing better than the actual song in terms of numbers, and that isn't going to happen.

Mainly because most covers being made are of popular songs that are already well established in terms of numbers.

As for photos, well, that may be a different story, but it's still something people shouldn't be too worried about.

You see, with AI photos, people can create a picture of anybody they want doing anything possible, and it looks real, almost too real.

The only reason this is something to look out for is that AI can produce a photo of anything, while an actual photographer has to bust their ass to get a great photo, or at least a decent one.

It is for sure something to be concerned about, but what will be the downfall of fake photos is that we have sources, including the actual person being used in the photo.

An AI photo editor could create a photo of a celebrity that is damaging to the celebrity's reputation, but all it takes is for the source to speak up and deny the photo is them.

The more the photos are shown to be false, the more people will catch on. The same goes for the music side of AI.

Read the rest here:

Artificial Intelligence isn't taking over anything - Talon Marks

AI and You: The Chatbots Are Talking to Each Other, AI Helps … – CNET

After taking time off, I returned this week to find my inbox flooded with news about AI tools, issues, missteps and adventures. And the thing that stood out was how much investment there is in having AI chatbots pretend to be someone else.

In the case of Meta, CEO Mark Zuckerberg expanded the cast of AI characters the tech giant's more than 3 billion users can interact with on popular Meta platforms like Facebook, Instagram, Messenger and WhatsApp. Those characters are based on real-life celebrities, athletes and artists, including musician Snoop Dogg, famous person Kylie Jenner, ex-quarterback Tom Brady, tennis star Naomi Osaka, other famous person Paris Hilton and celebrated English novelist Jane Austen.

"The characters are a way for people to have fun, learn things, talk recipes or just pass the time all within the context of connecting with friends and family," company executives told The New York Times about all these pretend friends you can now converse with.

Said Zuckerberg, "People aren't going to want to interact with one single super intelligent AI people will want to interact with a bunch of different ones."

But let's not pretend that pretend buddies are just about helping you connect with family and friends. As we know, it's all about the money, and right now tech companies are in a land grab that pits Meta against other AI juggernauts, including OpenAI's ChatGPT, Microsoft's Bing and Google's Bard. It's a point the Times noted as well: "For Meta, widespread acceptance of its new AI products could significantly increase engagement across its many apps, most of which rely on advertising to make money. More time spent in Meta's apps means more ads shown to its users."

To be sure, Meta wasn't the first to come up with the idea of creating personalities or characters to put a human face on conversational AI chatbots (see ELIZA, born in the mid-'60s). And it's an approach that seems to be paying off.

Two-year-old Character.ai, which lets you interact with chatbots based on famous people like Taylor Swift and Albert Einstein and fictional characters such as Nintendo's Super Mario, is one of the most visited AI sites and is reportedly seeking funding that would put the startup's valuation at $5 billion to $6 billion, according to Bloomberg. This week Character.ai, which also lets you create your own personality-driven chatbots, introduced a new feature for subscribers, called Character Group Chat, that lets you and your friends chat with multiple AI characters at the same time. (Now's your chance to add Swift and Mario to your group chats.)

But using famous people to hawk AI is only fun if those people are in on it, and by that I mean get paid for their AI avatars. Earlier this month, actor Tom Hanks warned people about a dental ad that used his likeness without his approval. "Beware!!" Hanks told his 9.5 million Instagram followers. "There's a video out there promoting some dental plan with an AI version of me. I have nothing to do with it."

Hanks in an April podcast predicted the perils posed by AI. "Right now if I wanted to, I could get together and pitch a series of seven movies that would star me in them in which I would be 32 years old from now until kingdom come. Anybody can now re-create themselves at any age they are by way of AI or deepfake technology ... I can tell you that there [are] discussions going on in all of the guilds, all of the agencies, and all of the legal firms to come up with the legal ramifications of my face and my voice and everybody else's being our intellectual property."

Of course, he was right about all those discussions. The Writers Guild of America just ended the writers strike with Hollywood after agreeing to terms on the use of AI in film and TV. But actors, represented by SAG-AFTRA, are still battling it out, with one of the sticking points being the use of "digital replicas."

Here are the other doings in AI worth your attention.

OpenAI is rolling out new voice and image capabilities in ChatGPT that let you "have a voice conversation or show ChatGPT what you're talking about." The new capabilities are available to people who pay to use the chatbot (ChatGPT Plus costs $20 per month).

Says the company, "Snap a picture of a landmark while traveling and have a live conversation about what's interesting about it. When you're home, snap pictures of your fridge and pantry to figure out what's for dinner (and ask follow up questions for a step by step recipe). After dinner, help your child with a math problem by taking a photo, circling the problem set, and having it share hints with both of you."
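For developers, roughly the same multimodal behavior is exposed through OpenAI's API. The sketch below uses the official openai Python client; the model name and image URL are assumptions on my part (vision model names change over time), so check OpenAI's current documentation before relying on either.

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

response = client.chat.completions.create(
    model="gpt-4-vision-preview",  # assumption: vision-capable model name may differ
    messages=[{
        "role": "user",
        "content": [
            {"type": "text", "text": "What could I cook with these ingredients?"},
            # Placeholder URL: point this at a real, publicly reachable image.
            {"type": "image_url", "image_url": {"url": "https://example.com/fridge.jpg"}},
        ],
    }],
    max_tokens=300,
)
print(response.choices[0].message.content)
```

The consumer app wraps this same idea (text plus an image in one message) in a camera button and a voice layer.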

So what's it like to talk to ChatGPT? Wall Street Journal reviewer Joanna Stern describes it as similar to the movie Her, in which Joaquin Phoenix falls in love with an AI operating system named Samantha, voiced by Scarlett Johansson.

"The natural voice, the conversational tone and the eloquent answers are almost indistinguishable from a human at times," Stern writes. "But you're definitely still talking to a machine. The response time ... can be extremely slow, and the connection can fail restarting the app helps. A few times it abruptly cut off the conversation (I thought only rude humans did that!)"

A rude AI? Maybe the chatbots are getting more human after all.

Speaking of more humanlike AIs, a company called Fantasy is creating "synthetic humans" for clients including Ford, Google, LG and Spotify to help them "learn about audiences, think through product concepts and even generate new ideas," reported Wired.

"Fantasy uses the kind of machine learning technology that powers chatbots like OpenAI's ChatGPT and Google's Bard to create its synthetic humans," according to Wired. "The company gives each agent dozens of characteristics drawn from ethnographic research on real people, feeding them into commercial large language models like OpenAI's GPT and Anthropic's Claude. Its agents can also be set up to have knowledge of existing product lines or businesses, so they can converse about a client's offerings."

Humans aren't cut out of the loop completely. Fantasy told Wired that for oil and gas company BP, it's created focus groups made up of both real people and synthetic humans and asked them to discuss a topic or product idea. The result? "Whereas a human may get tired of answering questions or not want to answer that many ways, a synthetic human can keep going," Roger Rohatgi, BP's global head of design, told the publication.

So, the end goal may be to just have the bots talking among themselves. But there's a hitch: Training AI characters is no easy feat. Wired spoke with Michael Bernstein, an associate professor at Stanford University who helped create a community of chatbots called Smallville, and it paraphrased him thus:

"Anyone hoping to use AI to model real humans, Bernstein says, should remember to question how faithfully language models actually mirror real behavior. Characters generated this way are not as complex or intelligent as real people and may tend to be more stereotypical and less varied than information sampled from real populations. How to make the models reflect reality more faithfully is 'still an open research question,' he says."

Deloitte updated its report on the "State of Ethics and Trust in Technology" for 2023, and you can download the 53-page report here. It's worth reading, if only as a reminder that the way AI tools and systems are developed, deployed and used is entirely up to us humans.

Deloitte's TL;DR? Organizations should "develop trustworthy and ethical principles for emerging technologies" and work collaboratively with "other businesses, government agencies, and industry leaders to create uniform, ethically robust regulations for emerging technologies."

And if they don't? Deloitte lists the damage from ethical missteps, including reputational harm, human damage and regulatory penalties. The researcher also found that financial damage and employee dissatisfaction go hand in hand. "Unethical behavior or lack of visible attention to ethics can decrease a company's ability to attract and keep talent. One study found employees of companies involved in ethical breaches lost an average of 50% in cumulative earnings over the subsequent decade compared to workers in other companies."

The researcher also found that 56% of professionals are unsure if their companies have ethical guidelines for AI use, according to a summary of the findings by CNET sister site ZDNET.

One of the challenges in removing brain tumors is for surgeons to determine how much around the margins of the tumor they need to remove to ensure they've excised all the bad stuff. It's tricky business, to say the least, because they need to strike a "delicate balance between maximizing the extent of resection and minimizing risk of neurological damage," according to a new study.

That report, published in Nature this week, offers news about a fascinating advance in tumor detection, thanks to an AI neural network. Scientists in the Netherlands developed a deep learning system called Sturgeon that aims to assist surgeons in finding that delicate balance by helping to get a detailed profile of the tumor during surgery.

You can read the Nature report, but I'll share the plain English summary provided by New York Times science writer Benjamin Mueller: "The method involves a computer scanning segments of a tumor's DNA and alighting on certain chemical modifications that can yield a detailed diagnosis of the type and even subtype of the brain tumor. That diagnosis, generated during the early stages of an hours-long surgery, can help surgeons decide how aggressively to operate."
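The Nature paper describes the real model and its training pipeline; as a rough mental picture only, the shape of the task is a standard classifier: methylation features in, tumor-class probabilities out, with the top probability doubling as a confidence score a surgeon could weigh. Everything below (layer sizes, feature and class counts) is invented for illustration and is not Sturgeon's architecture.

```python
import torch
import torch.nn as nn

# Invented dimensions, purely to sketch the shape of the task.
N_METHYLATION_SITES = 10_000  # features measured from the tumor sample
N_TUMOR_CLASSES = 40          # diagnostic categories

classifier = nn.Sequential(
    nn.Linear(N_METHYLATION_SITES, 256),
    nn.ReLU(),
    nn.Dropout(0.3),
    nn.Linear(256, N_TUMOR_CLASSES),
)

profile = torch.rand(1, N_METHYLATION_SITES)   # stand-in for a real methylation profile
probs = classifier(profile).softmax(dim=-1)    # probability per tumor class
confidence, predicted = probs.max(dim=-1)
print(f"class {predicted.item()}, confidence {confidence.item():.2f}")
```

The hard parts, sparse intraoperative sequencing data and a 40-minute clock, are exactly what the untrained toy above glosses over.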

In tests on frozen tumor samples from prior brain cancer operations, Sturgeon accurately diagnosed 45 of 50 cases within 40 minutes of starting that DNA sequencing, the Times said. And then it was tested during 25 live brain surgeries, most of which were on children, and delivered 18 correct diagnoses.

The Times noted that some brain tumors are difficult to diagnose, and that not all cancers can be diagnosed by way of the chemical modifications the new AI method analyzes. Still, it's encouraging to see what could be possible with new AI technologies as the research continues.

Given all the talk above about how AIs are being used to create pretend versions of real people (Super Mario aside), the word I'd pick for the week would be "anthropomorphism," which is about ascribing humanlike qualities to nonhuman things. But I covered that in the Aug. 19 edition of AI and You.

So instead, I offer up the Council of Europe's definition of "artificial intelligence":

A set of sciences, theories and techniques whose purpose is to reproduce by a machine the cognitive abilities of a human being. Current developments aim to be able to entrust a machine with complex tasks previously delegated to a human.

However, the term artificial intelligence is criticized by experts who distinguish between "strong" AI (who are able to contextualize very different specialized problems completely independently) and "weak" or "moderate" AI (who perform extremely well in their field of training). According to some experts, "strong" AI would require advances in basic research to be able to model the world as a whole and not just improvements in the performance of existing systems.

For comparison, here's the US State Department quoting the National Artificial Intelligence Act of 2020:

The term "artificial intelligence" means a machine-based system that can, for a given set of human-defined objectives, make predictions, recommendations or decisions influencing real or virtual environments.

Editors' note: CNET is using an AI engine to help create some stories. For more, see this post.

See the rest here:

AI and You: The Chatbots Are Talking to Each Other, AI Helps ... - CNET