Archive for the ‘Artificial Intelligence’ Category

Why artificial intelligence can’t bring the dead back to life – Fox News

This year is shaping up to be the year of artificial intelligence. ChatGPT has stolen most of the headlines, but it is only the most infamous in a wide assortment of AI platforms. One of the most recent to arrive on the scene is HereAfter AI, an app that can "preserve memories with an app that interviews you about your life." The goal: to "let loved ones hear meaningful stories by chatting with the virtual you." Heaven, not in the clouds, but the cloud. Nirvana on your iPhone. Reincarnation through silicon.

The problem is, it won't work. Can't work, in fact.

At this point, no one doubts we can use AI to simulate a generic person, or even a particular person. But this could only ever be a simulation, not the real deal. The reason has nothing to do with the technical limitations of AI. It has to do with the fact that humans are not disembodied souls or pure spirits that could be uploaded to a computer in the first place. Our bodies are not only biological realities; they are a crucial part of who we are.

A couple of examples bring the point home: if you are a dancer or an athlete or a musician, you know that when you dance a tango or go in for a layup or run an arpeggio, you think with your body. If you try to think with your head ("first step there, just like so"), you'll trip up. That's why I can't dance: I overthink it. Eliminate the body by putting me on an app, and you've eliminated what made me me in the first place.

Even if it were possible to upload loved ones to a computer, it isn't clear that this would be something we would want. When we lose a loved one, we would do anything to have that person back with us. That's a natural human desire. But think through what it would mean never to lose anyone, to always have our loved ones in an app, ready for consultation. Not only our parents and grandparents would be part of our lives, but multiple generations of great-grandparents as well. That may be good, or it may be, well, strange. But there's no question it would be different from anything we've ever experienced. Imagine the conversations around the Thanksgiving table. Interesting? Absolutely. Something we deeply desire? Not as clear.

There are also problems for HereAfter AI that come directly from how AI is created. To create an AI, one of the first steps is "training": feeding the model massive amounts of data. The model then looks for patterns in these data and transforms them into something new. The more training data, the better the model. That's why Facebook and Twitter and the others are data-hungry: the more data they gather, the better their models become. And it is why ChatGPT is such a powerful form of AI: it was trained on massive amounts of data. As in: all of Wikipedia, millions of ebooks, and snapshots of the entire internet.
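To make "looking for patterns" concrete, here is a toy sketch of the idea, assuming nothing about HereAfter AI's or OpenAI's actual pipelines: a bigram model that counts which word tends to follow which, then generates text from those counts. Real systems use neural networks at vastly larger scale, but the more-data, better-model principle is the same.

```python
# Toy illustration of "training": a bigram model that learns
# word-transition patterns from a corpus, then generates text
# from those learned counts. (Illustrative only; real systems
# like ChatGPT use neural networks at vastly larger scale.)
from collections import defaultdict, Counter
import random

def train(corpus: str) -> dict:
    """Count which word tends to follow which."""
    model = defaultdict(Counter)
    words = corpus.split()
    for prev, nxt in zip(words, words[1:]):
        model[prev][nxt] += 1
    return model

def generate(model: dict, start: str, length: int = 10) -> str:
    """Sample a likely continuation from the learned patterns."""
    out = [start]
    for _ in range(length):
        followers = model.get(out[-1])
        if not followers:
            break
        choices, weights = zip(*followers.items())
        out.append(random.choices(choices, weights=weights)[0])
    return " ".join(out)

model = train("the more data the better the model the more data the richer the patterns")
print(generate(model, "the"))
```

Feed such a model more text and its output starts to resemble the source; feed it very little, and it can only parrot fragments.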

Here's the issue: in creating an AI to mimic those we've lost, we'll need to train the model. How do we do that? HereAfter AI has the answer: feed the model text threads, personal letters, emails, home videos; the list goes on. As with all models, more data means a better model, and a better model means coming closer to bringing back someone you love.

How many of us, though, in attempting to bring back a loved one, would feed HereAfter AI all the snarky things a loved one said? The times grandma didn't give us the benefit of the doubt? The times a spouse spouted conspiracy theories or garbled words or just plain got things wrong? The times a child lied? Not much of that, I'm guessing. Train it on the happy times instead.

But a model, of course, is only as good as its training data. Any "person" we've created using only happy data will be but a shiny veneer of a genuine human being. All of us have bumps and warts, failings and shortcomings, biases, and blind spots. That's part of being human. Sometimes our shortcomings are our most endearing parts: my family loves me because of and not despite my quirks and limitations. Remove the bumps and warts, and you haven't created a human at all. You've instead created a saccharine caricature, dressed in a skin that resembles someone you used to know.

In the Harry Potter series, Albus Dumbledore reflects on Lord Voldemort's quest for immortality: "humans do have a knack for choosing precisely those things that are worst for them." HereAfter AI is no Lord Voldemort, but they've made the same mistake. Life on an app, for either you or your loved ones, is not heaven. It's not something we even want. What is it? Impossible.

Go here to read the rest:
Why artificial intelligence can't bring the dead back to life - Fox News

Missing persons helped by artificial intelligence – WISH TV Indianapolis, IN

Indiana has been grappling with unsolved murder and missing person cases for years. With the help of revolutionary augmented reality (AR) and artificial intelligence (AI) crime-solving tools, attention is being brought to these cases like never before.

CrimeDoor, the true-crime news and content platform co-founded by Neil Mandt, is using AR pop-ups as a reinvention of the Amber Alert system to provide updates on unsolved cases in the area.

The technology allows users to see virtual pop-ups of information related to each case, making it easier for them to stay informed and potentially provide tips to law enforcement.

With May recognized as Missing and Unidentified Persons Month, this technology comes at a crucial time.

According to the National Missing and Unidentified Persons System, over 600,000 people go missing in the United States every year.

This technology has the potential to help solve some of these cases and bring closure to families who have been waiting for answers for years.

Neil Mandt joined us to provide updates on several Indiana cases, including Lauren Spierer, Nakyla Williams, Tatyana Sims, and Marilyn Niqui McCown. These cases have remained unsolved for years, but with the help of AR and AI technology, new leads may be discovered. Watch the full interview above to learn more!

View original post here:
Missing persons helped by artificial intelligence - WISH TV Indianapolis, IN

Director Chopra's Prepared Remarks on the Interagency … – Consumer Financial Protection Bureau

In recent years, we have seen a rapid acceleration of automated decision-making across our daily lives. Throughout the digital world and throughout sectors of the economy, so-called artificial intelligence is automating activities in ways previously thought unimaginable.

Generative AI, which can produce voices, images, and videos designed to simulate real-life human interactions, raises the question of whether we are ready to deal with the wide range of potential harms, from consumer fraud to privacy to fair competition.

Today, several federal agencies are coming together to make one clear point: there is no exemption in our nation's civil rights laws for new technologies that engage in unlawful discrimination. Companies must take responsibility for their use of these tools.

The Interagency Statement we are releasing today seeks to take an important step forward to affirm existing law and rein in unlawful discriminatory practices perpetrated by those who deploy these technologies.

The statement highlights the all-of-government approach to enforce existing laws and work collaboratively on AI risks.

Unchecked AI poses threats to fairness and to our civil rights in ways that are already being felt.

Technology companies and financial institutions are amassing massive amounts of data and using it to make more and more decisions about our lives, including whether we get a loan or what advertisements we see.

While machines crunching numbers might seem capable of taking human bias out of the equation, that's not what is happening. Findings from academic studies and news reporting raise serious questions about algorithmic bias. For example, a statistical analysis of 2 million mortgage applications found that Black families were 80 percent more likely to be denied by an algorithm when compared to white families with similar financial and credit backgrounds. The response of mortgage companies has been that researchers do not have all the data that feeds into their algorithms or full knowledge of the algorithms. But their defense illuminates the problem: artificial intelligence often feels like black boxes behind brick walls.

When consumers and regulators do not know how decisions are made by artificial intelligence, consumers are unable to participate in a fair and competitive market free from bias.

That's why the CFPB and other agencies are prioritizing and confronting digital redlining: redlining caused by bias present in lending or home valuation algorithms and other technology marketed as artificial intelligence. These practices are disguised behind so-called neutral algorithms, but they are built like any other AI system, by scraping data that may reinforce the biases that have long existed.

We are working hard to reduce bias and discrimination when it comes to home valuations, including algorithmic appraisals. We will be proposing rules to make sure artificial intelligence and automated valuation models have basic safeguards when it comes to discrimination.

We are also scrutinizing algorithmic advertising, which, once again, is often marketed as AI advertising. We published guidance to affirm how lenders and other financial providers need to take responsibility for certain advertising practices. Specifically, advertising and marketing that uses sophisticated analytic techniques, depending on how these practices are designed and implemented, could subject firms to legal liability.

We've also taken action to protect the public from black box credit models, in some cases so complex that the financial firms that rely on them can't even explain the results. Companies are required to tell you why you were denied credit, and using a complex algorithm is not a defense against providing specific and accurate explanations.

Developing methods to improve home valuation, lending, and marketing is not inherently bad. But when done in irresponsible ways, such as creating black box models or not carefully studying the data inputs for bias, these products and services pose real threats to consumers' civil rights. They also threaten law-abiding nascent firms and entrepreneurs trying to compete with those who violate the law.

I am pleased that the CFPB will continue to contribute to the all-of-government mission to ensure that the collective laws we enforce are followed, regardless of the technology used.

Thank you.

The rest is here:
Director Chopra's Prepared Remarks on the Interagency ... - Consumer Financial Protection Bureau

Artificial Intelligence, Part 1: It’s already everywhere, isn’t it? – UpNorthLive.com

Uber Eats customers in the Mosaic District of Fairfax, VA can have food delivered by one of Cartken's autonomous bots. COURTESY: Cartken

As in a science-fiction novel, there's seemingly no limit to what artificial intelligence technology can do.

It can detect early signs of certain types of cancer before doctors can find them on a CT scan. It can outperform most law school graduates on the bar exam.

It can buy and trade stocks. It can write and produce a song using an artist's vocals, even if the artist never sang a note. It could even have written this article (it did not).

Microsoft employee Alex Buscher demonstrates a search feature integration of Microsoft Bing search engine and Edge browser with OpenAI on Tuesday, Feb. 7, 2023, in Redmond. (AP Photo/Stephen Brashear)

It can also impersonate your grandmother over a phone call and ask for money. It can be used by a criminal to start a massive troll farm (groups of fake online agitators who inflame debates) with just a few presses of a button.

It can write (and has written) fake news stories accusing public officials of crimes they never committed. It could be hacked to intentionally crash a driverless vehicle; it can harvest large personal datasets in seconds; it could hack robotic military weapons.

Basically, it can do almost everything humans can do and more, but one thing is missing: it has no consciousness, awareness or idea of itself, meaning it can't feel emotions like joy, sadness or remorse.

What could possibly go wrong?

The OpenAI logo is seen on a mobile phone in front of a computer screen displaying output from ChatGPT, Tuesday, March 21, 2023, in Boston. (AP Photo/Michael Dwyer, File)

We've caught a glimpse into what can go wrong. We've heard warnings from scientists, advocates, teachers, defense officials, world leaders and even AI researchers and developers themselves about the dangers AI poses.

Some experts are calling this the next Industrial Revolution, except instead of replacing human labor with machine labor, we're replacing human creativity with machine creativity. And humans created the machines to do it.

With such a fascinating phenomenon ramping up significantly, many are calling on the government to step in and try to get ahead of the many harms it could cause and the discord it could sow.

But some legal experts caution against trying to get ahead of something so unpredictable. The truth is, we humans likely have no idea what damage this type of powerful technology can cause.

The logo for OpenAI, the maker of ChatGPT, appears on a mobile phone, in New York, Tuesday, Jan. 31, 2023. (AP Photo/Richard Drew)

For starters, any average Joe can use ChatGPT, which was developed by OpenAI and released in November 2022. It's a large language model that can process, manipulate and generate text. Developers trained it to do this by feeding it massive amounts of data from the internet, books, articles and other content to identify patterns and spit human-like text back to users.

Think of it like typing a question into Google, but instead of getting a list of websites answering your question, ChatGPT just gives you the answer it thinks you want. (Note: that answer isn't always the correct one.)
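For readers curious what that looks like in practice, here is a minimal sketch of asking ChatGPT a question programmatically, using OpenAI's Python client as it existed around the time of writing (the pre-1.0 ChatCompletion interface). The API key is a placeholder, and the model name is an assumption; several were available.

```python
# Minimal sketch: ask ChatGPT a question and print the single
# answer it returns, rather than a list of links. Uses the pre-1.0
# "openai" Python package; the API key below is a placeholder.
import openai

openai.api_key = "YOUR_API_KEY"  # placeholder; never hard-code real keys

response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": "Why is the sky blue?"}],
)

# The reply reads like a direct answer, but it is not guaranteed correct.
print(response.choices[0].message.content)
```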

A cursor moves over Google's search engine page on Aug. 28, 2018, in Portland, Ore. (AP Photo/Don Ryan, File)

While ChatGPT is probably the most popular (with over 100 million users, according to the latest data), plenty of technology companies have released their versions to the public, like Microsoft's Bing Chat or Google's Bard and Socratic. Elon Musk has his own AI model in the works: TruthGPT. There are AI chatbots for businesses, schoolkids, content creators and everyday people looking up the answer to a question or trying to solve an equation.

And chatbots are just the tip of the iceberg. According to a survey last year by the Society for Human Resource Management, 42% of companies with 5,000 employees or more use AI tools. Another survey by job advice platform Resumebuilder.com found that American companies use AI for a variety of things: 66% for writing code; 58% for creating prose and content; 57% for customer service; and 52% for meeting summaries and other papers.

The Microsoft Bing logo and the website's page are shown in this photo taken in New York on Tuesday, Feb. 7, 2023. (AP Photo/Richard Drew, File)

It's used to personalize your shopping experience online, including those chatbots that pop up asking if you need assistance when you open a new website. It's grading homework in schools and managing enrollment in courses. It's in autonomous vehicles, spam filters, facial recognition, recommendation systems, disease detection, agriculture, social media, fraud detection, traffic management, navigation and ride-sharing.

If an inanimate object is asking you a question or recommending something for you, that's AI. Yes, that includes when you spell a word wrong in a text and your iPhone auto-corrects the word for you.

In short, AI is all around us in places we don't even realize, doing things for us we may not even realize we want done.

Uber Eats partnered with AI-robotics company Cartken to start a pilot program of autonomous robots delivering food. COURTESY: Cartken

Many new AI projects are in the pipeline. Cartken, an AI-powered robotics company, partnered with Uber Eats to launch a pilot program in which robots deliver food orders to people. Next May, people will be able to order a new smart device called Companion, a robotic device that can babysit, train, play with and monitor the health of your dog while you're out and about. Robots are sorting trash and retrieving recyclables from garbage streams.

It's not just digital work that AI can accomplish within seconds anymore; it's physical labor, too.

A customer pulls a food delivery out of one of Cartken's small, six-wheeled autonomous robots. COURTESY: Cartken

"[AI] is going to start actually competing with humans for things that humans have historically been good at, and machines have not been," said Anthony Aguirre, a professor at UC Santa Cruz and the executive director of the Future of Life Institute. "Something has really changed just in the capabilities of the systems, and that changing capability has not been accompanied by a change in our society's readiness to absorb these technologies and to deal with them and to ensure they're safe."

Continued here:
Artificial Intelligence, Part 1: It's already everywhere, isn't it? - UpNorthLive.com

Artificial intelligence should be banned in the classroom – Norfolk Daily News

Artificial intelligence (AI) has risen to be an effective, at times almost too effective, form of communication. These new technologies, such as GPT-3 and Speechify, can create whole essays or even imitate human speech. While these programs can be useful when applied in a meaningful way, a concern about misuse, more specifically students cheating, has been brought to teachers' attention. Some schools have allowed the use of AI for brainstorming ideas for research or even to get a better understanding of writing and communication. On the other hand, many teachers have completely banned artificial intelligence tools in the classroom because of plagiarism and academic dishonesty. I would agree with this action, as AI resources should not be allowed for schoolwork.

The rise of artificial intelligence has teachers questioning the integrity of their students. I would like to highlight one specific tool, as mentioned before: ChatGPT. GPT is short for Generative Pretrained Transformer. This new technology can write pages of text from just a little information given. While the technology seems beneficial, the information isn't always accurate. Another issue teachers are having is identifying plagiarism through GPT. Thomas Keith, in his article "Combating Academic Dishonesty," published by the University of Chicago, stated that plagiarism detection software "relies on comparing student work to a database of pre-existing work and identifying identical phrases, sentences, etc. to produce an originality score. Because the text generated by ChatGPT is (in some sense, anyway) original, it renders this technique useless."
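A toy sketch makes the weakness concrete. Match-based detection of the kind Keith describes amounts to comparing a submission's phrases against a database of known text; freshly generated GPT output shares few exact phrases with any database, so it scores as "original." (This illustrates the general technique, not any vendor's actual implementation.)

```python
# Toy match-based "originality" check: compare a submission's
# 5-word phrases against a database of pre-existing documents.
# Freshly generated GPT text overlaps with almost nothing, so it
# sails through with a high originality score.
def ngrams(text: str, n: int = 5) -> set:
    words = text.lower().split()
    return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

def originality_score(submission: str, database: list[str]) -> float:
    sub = ngrams(submission)
    if not sub:
        return 1.0
    known = set().union(*(ngrams(doc) for doc in database))
    matched = len(sub & known)
    return 1.0 - matched / len(sub)  # 1.0 means no matches found
```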

Although it is not teachers' job to find cheaters, they hope that students will want to be honest with their work. As technology grows, the ways to avoid getting caught have become more advanced.

Although advanced technology can be an issue, some teachers would argue that ChatGPT is more beneficial than not. Superintendent Dan Stepenosky shares his views in the article "Is AI Writing Your Child's Homework?", saying, "Our mindset really is: is this something that can be a resource and work with our students on, work with our staff on, rather than trying to batten down the hatches and keep it out?"

In some instances, typing in a few keywords yields far more information than was originally put in. ChatGPT has also been shown to be beneficial for learning different languages, styles of writing and gathering data. On the other hand, philosophy professor Darren Hick at Furman University stated he came across an essay written by GPT in his upper-level course. Hick was able to discover that his student indeed did not do any of the work by running the essay through the newly created GPTZero system, whose main purpose is to detect the use of artificial intelligence in essays and other writing assignments. Even though ChatGPT has its benefits, clearly AI is more of a burden than an advantage.
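GPTZero's exact method is proprietary, but its creator has described signals such as "perplexity" and "burstiness": roughly, how predictable the text is and how much its sentences vary. Here is a hedged sketch of the burstiness idea only, an illustration rather than GPTZero's code:

```python
# Rough sketch of "burstiness," one signal AI-text detectors are
# reported to use: human writing tends to vary sentence length and
# complexity more than model output does. Real detectors combine
# many signals; this alone would be far too crude.
import re
import statistics

def burstiness(text: str) -> float:
    """Coefficient of variation of sentence lengths (higher suggests more human-like variation)."""
    lengths = [len(s.split()) for s in re.split(r"[.!?]+", text) if s.strip()]
    if len(lengths) < 2:
        return 0.0
    return statistics.stdev(lengths) / statistics.mean(lengths)
```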

Artificial intelligence has grown significantly and continues to do so today. With all of its features, the technology can be useful when utilized in the right ways. In the classroom, however, it keeps students from working to their full potential by giving them access to technology that requires them to do little work on their own. The advancements keep growing day by day. Whether they are helpful or not falls on us.

Read the rest here:
Artificial intelligence should be banned in the classroom - Norfolk Daily News