Archive for the ‘Artificial Intelligence’ Category

Missing persons helped by artificial intelligence – WISH TV Indianapolis, IN

Indiana has been grappling with unsolved murder and missing person cases for years. With the help of revolutionary augmented reality (AR) and artificial intelligence (AI) crime-solving tools, attention is being brought to these cases like never before.

CrimeDoor, the true-crime news and content platform co-founded by Neil Mandt, is using AR pop-ups, a reinvention of the Amber Alert system, to provide updates on unsolved cases in the area.

The technology allows users to see virtual pop-ups of information related to each case, making it easier for them to stay informed and potentially provide tips to law enforcement.

With May recognized as Missing and Unidentified Persons Month, this technology comes at a crucial time.

According to the National Missing and Unidentified Persons System, over 600,000 people go missing in the United States every year.

This technology has the potential to help solve some of these cases and bring closure to families who have been waiting for answers for years.

Neil Mandt joined us to provide updates on several Indiana cases, including Lauren Spierer, Nakyla Williams, Tatyana Sims, and Marilyn Niqui McCown. These cases have remained unsolved for years, but with the help of AR and AI technology, new leads may be discovered. Watch the full interview above to learn more!

View original post here:
Missing persons helped by artificial intelligence - WISH TV Indianapolis, IN

Director Chopra's Prepared Remarks on the Interagency … – Consumer Financial Protection Bureau

In recent years, we have seen a rapid acceleration of automated decision-making across our daily lives. Throughout the digital world and throughout sectors of the economy, so-called artificial intelligence is automating activities in ways previously unimaginable.

Generative AI, which can produce voices, images, and videos designed to simulate real-life human interactions, is raising the question of whether we are ready to deal with a wide range of potential harms, from consumer fraud to privacy to fair competition.

Today, several federal agencies are coming together to make one clear point: there is no exemption in our nation's civil rights laws for new technologies that engage in unlawful discrimination. Companies must take responsibility for their use of these tools.

The Interagency Statement we are releasing today takes an important step forward to affirm existing law and rein in unlawful discriminatory practices perpetrated by those who deploy these technologies.

The statement highlights the all-of-government approach to enforce existing laws and work collaboratively on AI risks.

Unchecked AI poses threats to fairness and to our civil rights in ways that are already being felt.

Technology companies and financial institutions are amassing massive amounts of data and using it to make more and more decisions about our lives, including whether we get a loan or what advertisements we see.

While machines crunching numbers might seem capable of taking human bias out of the equation, that's not what is happening. Findings from academic studies and news reporting raise serious questions about algorithmic bias. For example, a statistical analysis of 2 million mortgage applications found that Black families were 80 percent more likely to be denied by an algorithm than white families with similar financial and credit backgrounds. The response of mortgage companies has been that researchers do not have all the data that feeds into their algorithms or full knowledge of the algorithms. But their defense illuminates the problem: artificial intelligence often feels like a black box behind a brick wall.

When consumers and regulators do not know how decisions are made by artificial intelligence, consumers are unable to participate in a fair and competitive market free from bias.

That's why the CFPB and other agencies are prioritizing and confronting digital redlining: redlining caused by bias in lending or home-valuation algorithms and other technology marketed as artificial intelligence. These practices are disguised behind so-called neutral algorithms, but those algorithms are built like any other AI system, on scraped data that may reinforce biases that have long existed.

We are working hard to reduce bias and discrimination when it comes to home valuations, including algorithmic appraisals. We will be proposing rules to make sure artificial intelligence and automated valuation models have basic safeguards when it comes to discrimination.

We are also scrutinizing algorithmic advertising, which, once again, is often marketed as AI advertising. We published guidance to affirm how lenders and other financial providers need to take responsibility for certain advertising practices. Specifically, advertising and marketing that uses sophisticated analytic techniques, depending on how these practices are designed and implemented, could subject firms to legal liability.

We've also taken action to protect the public from black-box credit models, in some cases so complex that the financial firms that rely on them can't even explain the results. Companies are required to tell you why you were denied credit, and using a complex algorithm is not a defense against providing specific and accurate explanations.

Developing methods to improve home valuation, lending, and marketing is not inherently bad. But when done in irresponsible ways, such as creating black-box models or not carefully studying the data inputs for bias, these products and services pose real threats to consumers' civil rights. They also threaten law-abiding nascent firms and entrepreneurs trying to compete with those who violate the law.

I am pleased that the CFPB will continue to contribute to the all-of-government mission to ensure that the collective laws we enforce are followed, regardless of the technology used.

Thank you.

The rest is here:
Director Chopra's Prepared Remarks on the Interagency ... - Consumer Financial Protection Bureau

Artificial Intelligence, Part 1: It’s already everywhere, isn’t it? – UpNorthLive.com

Uber Eats customers in the Mosaic District of Fairfax, VA can have food delivered by one of Cartken's autonomous bots. COURTESY: Cartken

As in a science-fiction novel, there's seemingly no limit to what artificial intelligence technology can do.

It can detect early signs of certain types of cancer before doctors can find them on a CT scan. It can outperform most law school graduates on the bar exam.

It can buy and trade stocks. It can write and produce a song using an artist's vocals, even if the artist never sang a note. It even could have written this article (it did not).

Microsoft employee Alex Buscher demonstrates a search feature integration of Microsoft Bing search engine and Edge browser with OpenAI on Tuesday, Feb. 7, 2023, in Redmond. (AP Photo/Stephen Brashear)

It can also impersonate your grandmother over a phone call and ask for money. It can be used by a criminal to start a massive troll farm (groups of fake online agitators who inflame debates) with just a few presses of a button.

It can write (and has written) fake news stories accusing public officials of crimes they never committed. It could be hacked to intentionally crash a driverless vehicle; it can harvest large personal datasets in seconds; it could hack robotic military weapons.

Basically, it can do almost everything humans can do and more, but one thing is missing: it has no consciousness, awareness or idea of itself, meaning it can't feel emotions like joy, sadness or remorse.

What could possibly go wrong?

The OpenAI logo is seen on a mobile phone in front of a computer screen displaying output from ChatGPT, Tuesday, March 21, 2023, in Boston. (AP Photo/Michael Dwyer, File)

We've caught a glimpse into what can go wrong. We've heard warnings from scientists, advocates, teachers, defense officials, world leaders and even AI researchers and developers themselves about the dangers AI poses.

Some experts are calling this the next Industrial Revolution, except instead of substituting human labor with machine labor, were substituting human creativity with machine creativity. And humans created the machines to do it.

With such a fascinating phenomenon ramping up significantly, many are calling on the government to step in and try to get ahead of the many harms it could cause and the discord it could sow.

But some legal experts caution against trying to get ahead of something so unpredictable. The truth is, we humans likely have no idea what damage this type of powerful technology can cause.

The logo for OpenAI, the maker of ChatGPT, appears on a mobile phone, in New York, Tuesday, Jan. 31, 2023. (AP Photo/Richard Drew)

For starters, any average Joe can use ChatGPT, which was developed by OpenAI and released in November 2022. It's a large language model that can process, manipulate and generate text. Developers trained it by feeding it massive amounts of data from the internet, books, articles and other content so it could identify patterns and spit human-like text back to users.

Think of it like typing a question into Google, but instead of getting a list of websites answering your question, ChatGPT just gives you the answer it thinks you want. (Note: that answer isn't always the correct one.)
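
The "identify patterns, then predict the next word" idea behind such models can be illustrated with a toy bigram model; this is a deliberately tiny sketch of the general principle, not how OpenAI's actual system is built:

```python
import random
from collections import defaultdict

# Toy bigram model: for each word, count which words follow it in the
# training text, then generate new text by repeatedly sampling a likely
# successor. Large language models work on the same "predict the next
# token" principle, just at a vastly larger scale.
def train_bigrams(text):
    counts = defaultdict(lambda: defaultdict(int))
    words = text.split()
    for current, following in zip(words, words[1:]):
        counts[current][following] += 1
    return counts

def generate(counts, start, length=8, seed=0):
    rng = random.Random(seed)  # fixed seed so output is repeatable
    word, output = start, [start]
    for _ in range(length):
        successors = counts.get(word)
        if not successors:
            break  # no observed continuation for this word
        choices, weights = zip(*successors.items())
        word = rng.choices(choices, weights=weights)[0]
        output.append(word)
    return " ".join(output)

corpus = "the model predicts the next word and the next word after that"
model = train_bigrams(corpus)
print(generate(model, "the"))
```

Scaled up from a one-sentence corpus to a large slice of the internet, and from word pairs to deep neural networks, this same sampling loop is what produces the fluent, sometimes wrong, answers described above.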

A cursor moves over Google's search engine page on Aug. 28, 2018, in Portland, Ore. (AP Photo/Don Ryan, File)

While ChatGPT is probably the most popular (with over 100 million users, according to the latest data), plenty of technology companies have released their own versions to the public, like Microsoft's Bing Chat or Google's Bard and Socratic. Elon Musk has his own AI model in the works: TruthGPT. There are AI chatbots for businesses, schoolkids, content creators and everyday people looking up the answer to a question or trying to solve an equation.

And chatbots are just the tip of the iceberg. According to a survey last year by the Society for Human Resource Management, 42% of companies with 5,000 employees or more use AI tools. Another survey by job advice platform Resumebuilder.com found that American companies use AI for a variety of things: 66% for writing code; 58% for creating prose and content; 57% for customer service; and 52% for meeting summaries and other papers.

The Microsoft Bing logo and the website's page are shown in this photo taken in New York on Tuesday, Feb. 7, 2023. (AP Photo/Richard Drew, File)

It's used to personalize your shopping experience online, including those chatbots that pop up asking if you need assistance when you open a new website. It's grading homework in schools and managing enrollment in courses. It's in autonomous vehicles, spam filters, facial recognition, recommendation systems, disease detection, agriculture, social media, fraud detection, traffic management, navigation and ride-sharing.

If an inanimate object is asking you a question or recommending something for you, that's AI. Yes, that includes when you spell a word wrong in a text and your iPhone auto-corrects it for you.

In short, AI is all around us, in places we don't even notice, doing things for us we may not even realize we want done.

Uber Eats partnered with AI-robotics company Cartken to start a pilot program of autonomous robots delivering food. COURTESY: Cartken

Many new AI projects are in the pipeline. Cartken, an AI-powered robotics company, partnered with Uber Eats to launch a pilot program where robots deliver food orders to people. Next May, people will be able to order a new smart device called Companion, a robotic device that can babysit, train, play with and monitor the health of your dog while you're out and about. Robots are sorting trash and retrieving recyclables from garbage streams.

It's not just digital work AI can accomplish within seconds anymore; it's physical labor, too.

A customer pulls a food delivery out of one of Cartken's small, six-wheeled autonomous robots. COURTESY: Cartken

"[AI] is going to start actually competing with humans for things that humans have historically been good at, and machines have not been," said Anthony Aguirre, a professor at UC Santa Cruz and the executive director of the Future of Life Institute. "Something has really changed just in the capabilities of the systems, and that changing capability has not been accompanied by a change in our society's readiness to absorb these technologies and to deal with them and to ensure they're safe."

Continued here:
Artificial Intelligence, Part 1: It's already everywhere, isn't it? - UpNorthLive.com

Artificial intelligence should be banned in the classroom – Norfolk Daily News

Artificial intelligence (AI) has risen to become an effective, perhaps too effective, form of communication. These new technologies, such as GPT-3 and Speechify, can create whole essays or even imitate human speech. While these programs can be useful when applied in a meaningful way, a concern about misuse, more specifically students' cheating, has been brought to teachers' attention. Some schools have allowed the use of AI for brainstorming research ideas or gaining a better understanding of writing and communication. On the other hand, many teachers have completely banned artificial intelligence tools in the classroom because of plagiarism and academic dishonesty. I would agree with this action, as AI resources should not be allowed for schoolwork.

The rise of artificial intelligence has teachers questioning the integrity of their students. I would like to highlight one specific tool mentioned before: ChatGPT. GPT is short for Generative Pre-trained Transformer. This new technology can write pages of text from just a little information. While the technology seems beneficial, the information isn't always accurate. Another issue teachers are having is identifying plagiarism through GPT. Thomas Keith, in his article "Combating Academic Dishonesty," published by the University of Chicago, stated that plagiarism detection software relies on comparing student work to a database of pre-existing work and identifying identical phrases, sentences, etc. to produce an originality score. Because the text generated by ChatGPT is (in some sense, anyway) original, it renders this technique useless.
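
The comparison technique Keith describes can be sketched in a few lines: break each text into short word sequences (n-grams) and score what share of a submission's n-grams also appear in a database of known texts. This is a toy illustration of the general approach, not any vendor's actual detector:

```python
# Toy plagiarism check in the style Keith describes: compare a submission's
# word trigrams against a database of known texts and report the share of
# overlapping phrases. Genuinely novel text, including ChatGPT output,
# shares almost no trigrams with the database, which is exactly why this
# technique fails against generated essays.
def ngrams(text, n=3):
    words = text.lower().split()
    return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

def overlap_score(submission, database_texts, n=3):
    submitted = ngrams(submission, n)
    if not submitted:
        return 0.0
    known = set().union(*(ngrams(t, n) for t in database_texts))
    return len(submitted & known) / len(submitted)

database = ["the quick brown fox jumps over the lazy dog"]
copied = "the quick brown fox jumps over the fence"
original = "a completely different sentence about something else entirely"
print(overlap_score(copied, database))    # high: most trigrams match
print(overlap_score(original, database))  # 0.0: nothing matches
```

The generated essay behaves like the `original` string here: it scores near zero against every database, so the detector reports it as clean.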

Although teachers' jobs are not to find cheaters, they hope that students would want to be honest with their work. As the technology grows, the ways around getting caught have become more advanced.

Although advanced technology can be an issue, some teachers would argue that ChatGPT is more beneficial than not. Superintendent Dan Stepenosky shares his views in the article "Is AI Writing Your Child's Homework?," saying, "Our mindset really is, is this something that can be a resource and work with our students on, work with our staff on, rather than trying to batten down the hatches and keep it out?"

In some instances, typing in a few keywords yields far more information than was originally put in. ChatGPT has also been shown to be beneficial for learning different languages, exploring styles of writing and gathering data. On the other hand, philosophy professor Darren Hick at Furman University stated that he came across an essay written by GPT in his upper-level course. Hick discovered that his student had not done any of the work by running the essay through the newly created GPTZero system, whose main purpose is to detect the use of artificial intelligence in essays and other writing assignments. Even though ChatGPT has its benefits, clearly AI is more of a burden than an advantage.

Artificial intelligence has grown significantly and continues to grow today. With all of its features, the technology can be useful when utilized in the right ways. However, in the classroom, it hinders students from working to their full potential by giving them access to technology that requires little work of their own. The advancements keep growing day by day. Whether they are helpful or not falls on us.

Read the rest here:
Artificial intelligence should be banned in the classroom - Norfolk Daily News

Researchers using artificial intelligence to study Conn. bridge conditions – FOX61 Hartford

HARTFORD, Conn. The road to keeping Connecticut's bridges strong and stable runs through the transportation lab at the University of Hartford.

That's where Professor of Civil Engineering Clara Fang is the principal investigator on a project using artificial intelligence to evaluate and predict infrastructure needs.

"AI has [a] remarkable ability to acquire the knowledge from the past and try to predict what's going to happen in the future," said Fang.

The work is made possible by a $238,000 research grant from the Connecticut Department of Transportation and Federal Highway Administration.

Fang said the benefits of the project are simple.

"In terms of safety, efficiency, and cost-effectiveness," said Fang.

The project uses AI to gather knowledge from the state's bridge inspection records and bridge inventory database over the past 30 years. It then creates algorithms, considering factors like design, materials, traffic dynamics, and weather.

"What the AI model can do is to help the DOT monitor the bridge deterioration process and [be] able to identify when maintenance, repair, rehabilitation, and even replacement should be anticipated," said Fang.

Civil engineering student Daniel Jimenez Gil is also on the research team.

"AI has taken off really quickly in the past two years with Chat GPT. I'm getting a lot of knowledge on how it works," said Jimenez Gil.

"I think students can enjoy and feel the joy of the discovery," said Fang.

The value of that knowledge is impossible to measure in miles, but it will go a long way in helping the DOT.

"Using AI in combination with our human inspections and what we know about our infrastructure needs in the state, it can just be a really valuable tool for us when we're planning billions of dollars in infrastructure investments," said Josh Morgan with the Connecticut Department of Transportation.

The project spans two years. The team started last year and is about halfway through.

Angelo Bavaro is an anchor and reporter at FOX61 News. He can be reached at abavaro@fox61.com. Follow him on Facebook and Twitter.

Read the rest here:
Researchers using artificial intelligence to study Conn. bridge conditions - FOX61 Hartford