Archive for the ‘Ai’ Category

Police Turn to AI to Review Bodycam Footage – ProPublica

ProPublica is a nonprofit newsroom that investigates abuses of power.

Over the last decade, police departments across the U.S. have spent millions of dollars equipping their officers with body-worn cameras that record what happens as they go about their work. Everything from traffic stops to welfare checks to responses to active shooters is now documented on video.

The cameras were pitched by national and local law enforcement authorities as a tool for building public trust between police and their communities in the wake of police killings of civilians like Michael Brown, an 18-year-old Black teenager killed in Ferguson, Missouri, in 2014. Video has the potential not only to get to the truth when someone is injured or killed by police, but also to allow systematic reviews of officer behavior to prevent deaths, by flagging troublesome officers for supervisors or by helping identify real-world examples of effective and destructive behaviors to use in training.

But a series of ProPublica stories has shown that a decade on, those promises of transparency and accountability have not been realized.

One challenge: The sheer amount of video captured using body-worn cameras means few agencies have the resources to fully examine it. Most of what is recorded is simply stored away, never seen by anyone.

Axon, the nation's largest provider of police cameras and of cloud storage for the video they capture, has a database of footage that has grown from around 6 terabytes in 2016 to more than 100 petabytes today. That's enough to hold more than 5,000 years of high-definition video, or 25 million copies of last year's blockbuster movie "Barbie."

"In any community, body-worn camera footage is the largest source of data on police-community interactions. Almost nothing is done with it," said Jonathan Wender, a former police officer who heads Polis Solutions, one of a growing group of companies and researchers offering analytic tools powered by artificial intelligence to help tackle that data problem.

The Paterson, New Jersey, police department has made such an analytic tool a major part of its plan to overhaul its force.

In March 2023, the state's attorney general took over the department after police shot and killed Najee Seabrooks, a community activist experiencing a mental health crisis who had called 911 for help. The killing sparked protests and calls for a federal investigation of the department.

The attorney general appointed Isa Abbassi, formerly the New York Police Department's chief of strategic initiatives, to develop a plan for how to win back public trust.

"Changes in Paterson are led through the use of technology," Abbassi said at a press conference announcing his reform plan in September. "Perhaps one of the most exciting technology announcements today is a real game changer when it comes to police accountability and professionalism."

The department, Abbassi said, had contracted with Truleo, a Chicago-based software company that examines audio from bodycam videos to identify problematic officers and patterns of behavior.

For around $50,000 a year, Truleo's software allows supervisors to select from a set of specific behaviors to flag, such as when officers interrupt civilians, use profanity, use force or mute their cameras. The flags are based on data Truleo has collected on which officer behaviors result in violent escalation. Among the conclusions from Truleo's research: Officers need to explain what they are doing.

"There are certain officers who don't introduce themselves, they interrupt people, and they don't give explanations. They just do a lot of command, command, command, command, command," said Anthony Tassone, Truleo's co-founder. "That officer's headed down the wrong path."

For Paterson police, Truleo allows the department to review 100% of body-worn camera footage to identify risky behaviors and increase professionalism, according to its strategic overhaul plan. The software, the department said in its plan, will detect events like uses of force, pursuits, frisks and non-compliance incidents, and will allow supervisors to screen for both professional and unprofessional officer language.

Paterson police officials declined to be interviewed for this story.

Around 30 police departments currently use Truleo, according to the company. In October, the NYPD signed on to a pilot program for Truleo to review the millions of hours of footage it produces annually, according to Tassone.

Amid a crisis in police recruiting, Tassone said some departments are using Truleo because they believe it can help ensure new officers are meeting professional standards. Others, like the department in Aurora, Colorado, are using the software to bolster their case for emerging from external oversight. In March 2023, city attorneys successfully lobbied the City Council to approve a contract with Truleo, saying it would help the police department more quickly comply with a consent decree that calls for better training and recruitment and collection of data on things like use of force and racial disparities in policing.

Truleo is just one of a growing number of such analytics providers.

In August 2023, the Los Angeles Police Department said it would partner with a team of researchers from the University of Southern California and several other universities to develop a new AI-powered tool to examine footage from around 1,000 traffic stops and determine which officer behaviors keep interactions from escalating. In 2021, Microsoft awarded $250,000 to a team from Princeton University and the University of Pennsylvania to develop software that can organize video into timelines that allow easier review by supervisors.

Dallas-based Polis Solutions has contracted with police in its hometown, as well as departments in St. Petersburg, Florida; Kinston, North Carolina; and Alliance, Nebraska, to deploy its own software, called TrustStat, to identify videos supervisors should review. "What we're saying is, look, here's an interaction which is statistically significant for both positive and negative reasons. A human being needs to look," said Wender, the company's founder.

TrustStat grew out of a project of the Defense Advanced Research Projects Agency, the research and development arm of the U.S. Defense Department, where Wender previously worked. It was called the Strategic Social Interaction Modules program, nicknamed "Good Stranger," and it sought to understand how soldiers in potentially hostile environments, say, a crowded market in Baghdad, could keep interactions with civilians from escalating. The program brought in law enforcement experts and collected a large database of videos. After it ended, Wender founded Polis Solutions and used the Good Stranger video database to train the TrustStat software. TrustStat is entirely automated: Large language models analyze speech, and image-processing algorithms identify physical movements and facial expressions captured on video.

At Washington State University's Complex Social Interactions Lab, researchers use a combination of human reviewers and AI to analyze video. The lab began its work seven years ago, teaming up with the Pullman, Washington, police department. Like many departments, Pullman had adopted body cameras but lacked the personnel to examine what the video was capturing and train officers accordingly.

The lab has a team of around 50 reviewers, drawn from the university's own students, who comb through video to track things like the race of officers and civilians, the time of day, and whether officers gave explanations for their actions, such as why they pulled someone over. The reviewers note when an officer uses force, if officers and civilians interrupt each other, and whether an officer explains that the interaction is being recorded. They also note how agitated officers and civilians are at each point in the video.

Machine learning algorithms are then used to look for correlations between these features and the outcome of each police encounter.

"From that labeled data, you're able to apply machine learning so that we're able to get to predictions, so we can start to isolate and figure out, well, when these kinds of confluences of events happen, this actually minimizes the likelihood of this outcome," said David Makin, who heads the lab and also serves on the Pullman Police Advisory Committee.
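The lab's approach, as described, pairs human-labeled features of each encounter with its outcome and then looks for features that correlate with escalation. A minimal sketch of that idea, using entirely hypothetical feature names and made-up labels (not the lab's actual data or model), might look like this:

```python
# Sketch: compare escalation rates for encounters with and without a
# human-labeled feature. Feature names and data are hypothetical.
from collections import defaultdict

# One dict per reviewed video, as a human reviewer might label it.
encounters = [
    {"officer_explained": True,  "interrupted": False, "escalated": False},
    {"officer_explained": True,  "interrupted": True,  "escalated": False},
    {"officer_explained": False, "interrupted": True,  "escalated": True},
    {"officer_explained": False, "interrupted": False, "escalated": True},
    {"officer_explained": True,  "interrupted": False, "escalated": False},
    {"officer_explained": False, "interrupted": True,  "escalated": True},
]

def escalation_rate_by(feature, data):
    """Escalation rate for each observed value of a labeled feature."""
    counts = defaultdict(lambda: [0, 0])  # feature value -> [escalations, total]
    for e in data:
        bucket = counts[e[feature]]
        bucket[0] += e["escalated"]
        bucket[1] += 1
    return {value: esc / total for value, (esc, total) in counts.items()}

print(escalation_rate_by("officer_explained", encounters))
```

A real system would feed far richer features into a trained classifier rather than simple conditional rates, but the underlying question is the same: which labeled behaviors shift the likelihood of a bad outcome.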

One lesson has come through: Interactions that don't end in violence are more likely to start with officers explaining what is happening, not interrupting civilians, and making clear that cameras are rolling and the video is available to the public.

The lab, which does not charge clients, has examined more than 30,000 hours of footage and is working with 10 law enforcement agencies, though Makin said confidentiality agreements keep him from naming all of them.

Much of the data compiled by these analyses and the lessons learned from it remains confidential, with findings often bound up in nondisclosure agreements. This echoes the same problem with body camera video itself: Police departments continue to be the ones to decide how to use a technology originally meant to make their activities more transparent and hold them accountable for their actions.

Under pressure from police unions and department management, Tassone said, the vast majority of departments using Truleo are not willing to make public what the software is finding. One department using the software, in Alameda, California, has allowed some findings to be publicly released. At the same time, at least two departments, in Seattle and Vallejo, California, have canceled their Truleo contracts after backlash from police unions.

The Pullman Police Department cited Washington State University's analysis of 4,600 hours of video to claim that officers do not use force more often, or at higher levels, when dealing with a minority suspect, but did not provide details on the study.

At some police departments, including Philadelphia's, policy expressly bars disciplining officers based on spot-check reviews of video. That policy was pushed for by the city's police union, according to Hans Menos, the former head of the Police Advisory Committee, Philadelphia's civilian oversight body. The Police Advisory Committee has called on the department to drop the restriction.

"We're getting these cameras because we've heard the call to have more oversight," Menos said in an interview. "However, we're limiting how a supervisor can use them, which is worse than not even requiring them to use it."

Philadelphia's police department and police union did not respond to requests for comment.

Christopher J. Schneider, a professor at Canada's Brandon University who studies the impact of emerging technology on social perceptions of police, said the lack of disclosure makes him skeptical that AI tools will fix the problems in modern policing.

Even if police departments buy the software and find problematic officers or patterns of behavior, those findings might be kept from the public just as many internal investigations are.

"Because it's confidential," he said, "the public are not going to know which officers are bad or have been disciplined or not been disciplined."

Meta’s Big Rally Spotlights Investors’ Questions About AI Returns – The Information

Shares of Meta Platforms soared 20% on Friday as investors applauded the company's fourth-quarter results, which showed a strong ad recovery from its 2022 slump. But the enthusiasm of some investors was tempered by uncertainty about the returns Meta can get from the billions it is pouring into new generative artificial intelligence technology.

During Meta's fourth-quarter earnings call on Thursday night, company executives said capital expenditures on expanding servers and data centers, along with the costs of operating that infrastructure, would both rise this year, at least partly because of AI tech development. Much of that spending is for open-source products, however, which doesn't generate revenue for Meta. The most obvious near-term way for Meta to make money from AI is through the AI-infused advertising tools it is now pitching to marketers, but calculating the impact of that is hard to impossible, as New Street Research analyst Dan Salmon said in a report on Thursday.

Amazon launches AI shopping assistant called…Rufus? – Mashable

Amazon is entering a new stage of AI-powered shopping with the launch of Rufus, its new shopping assistant chatbot.

Available today in beta to a small group of users in the U.S., Rufus will roll out more widely across the country in the following weeks. "Select customers" will be able to access the tool by updating the Amazon Shopping app, according to the company.

The e-commerce AI bot will be trained on Amazon's catalogue, customer reviews, the community Q&As sitting on product pages, and, as the company somewhat broadly puts it, "information from across the web."

According to Amazon, Rufus will provide product recommendations and comparisons, as it's designed to answer specific search questions (as opposed to product-type searches) like "what are the differences between trail and road running shoes?" and "what to consider when buying running shoes?" Notably, these are terms users would usually punch into a search engine like Google, not Amazon itself.

People can use the AI tool by typing their questions into the search bar on the platform's mobile app. Rufus will then begin the conversation in a chat dialogue box, where customers can ask follow-ups. Other facets of the bot include shopping by occasion or purpose (from parties to sports), and getting recommendations for specific people and products ("best dinosaur toys for a five-year-old").

There are several questions to be asked here, like: why is it called "Rufus," specifically? This feels almost like a "Grok" moment. Bloomberg reporter Matt Day pointed out on X that the name links back to an early Amazon employee who had a dog of the same name, one that reportedly frequented the company's offices in its early days.

There's also the more pertinent question of the company's wider intentions: Amazon dominates product search online, so is this AI tool a way to sidestep Google's own search-based shopping recommendations and Amazon's lofty ad spend on that platform, as the NYT suggests? Time will tell.

Rufus is also just the latest AI-centric shopping tool Amazon has invested in. The tech giant has recently been flooded with AI-generated summaries of products, a feature announced last August. As Mashable's Cecily Mauran noted, the reviews are helpful in theory, but a quick search on the platform reveals some problems: "One could very cautiously argue that the answer to that question is 'trust, but verify, by understanding the technology's flaws and weaknesses.'"

Other companies have hopped on the AI shopping assistant bandwagon. Google Search's AI image generator lets you dream up products and shop the real versions, and the company's virtual try-on (VTO) tool aims to make shopping more diverse and size-inclusive. Other shopping giants like Walmart have introduced similar AI-powered features that make recommendations and chat with customers.

If you want human-based shopping recommendations instead, Mashable's got a whole team of real people helping you with just that.

Is Jumping on the AI Bandwagon Prudent? – Catholic Exchange

I recently had to replace my six-year-old MacBook when it started to slowly die. I decided to return to a Windows-based operating system, so I ordered a Dell. After setting it up, I was startled to see how much AI technology is available on the computer, considering it was the basic model. In fact, new changes and updates seem to be added regularly at a startling pace. I recently saw an advertisement on YouTube for an AI buddy you could chat with, depicting an impossibly beautiful woman (and obviously not a real one), and deep horror struck my heart as I began to think about all of the lonely young people in the world who could easily buy the lie that an AI relationship is real.

Artificial Intelligence is quickly taking over all of our devices. The rise of ChatGPT was simply the beginning. The moral, spiritual, and human implications of this technology are gravely serious. Have we seriously considered the very dangerous path we are walking down, and have we considered who exactly is leading us there? As Catholics, we are called to discern the proper use of technology in order to ensure that we do not fall into false idolatry, spiritual slavery, addiction, or immorality. We have to make sure the technology we use is leading us to sanctity, not away from the Lord.

What started as a desire for connectivity has, in many ways, given way to a leviathan that looks more like a form of anti-communion. Our culture has never been lonelier, even as it is the most connected at any point in world history. I know that I will be rather unpopular for pointing this out, but when I was still plugged into social media, and as I grew in my spiritual life, I started to see social media as a false form of communion. It's community without the real, deep connection that in-person relationships require. The sacrificial dimension of relationships was missing. Love requires sacrifice and presence, not likes and endless screens.

We have been lulled into a false belief that the virtual world matters more than the one in our homes and right outside our doors. In many ways, it takes the Church away from the Church. We saw this clearly during COVID when Mass was reduced to livestreaming, but social media does the same thing. It reduces relationships to pixels. The COVID lockdowns convinced a lot of people that being bodily present is not necessary. I encounter this constantly serving in pastoral care at the hospital. I heard it numerous times from people of all ages when I was a Director of Faith Formation: "I watch Mass online and I don't need to come back."

The Internet, and especially social media, laid the groundwork for detaching our relationships from matter. What I mean is, we no longer believe that we need to be in the physical presence of someone, or of the Someone who is supposed to be the center of our lives, Jesus Christ in the Holy Eucharist. The dualism of our age has reached an apex through Artificial Intelligence. Social media was simply the catalyst that divorced us from one another.

Instead of getting together in person and truly catching up with one another, we convinced ourselves that the constant updating (so often ego-driven; I was very guilty of this) and likes constitute relationships. If this were true, then we would not have a loneliness epidemic. Social media has isolated, divided, and addicted us, which makes this the perfect time for Artificial Intelligence to come in and save the day. All of those lonely kids who should not have smartphones can turn to a friend who truly understands them on an AI platform. We can have AI perform all of the mundane tasks that were meant to help sanctify us. If students have a paper to write, then ChatGPT can write it for them. Why should we do any work ourselves if AI can take care of it?

We are quickly entering an age of unreality, when we will not know if what we are seeing on our screens is real or not. The loss of Aristotelian-Thomistic mind-object agreement has ushered in relativism, but this takes detachment from reality to a whole new level. We will not be able to trust our senses at all. It's already all over social media and the newer computers. ChatGPT constantly shows up somewhere on my computer or web browser. I went through and disabled as much as I could.

Our children will face temptations and alienation at a scale we never dreamed of when we were sitting on the couch listening to our Walkmans. Multiple generations of kids are addicted to technology, as are all of us adults. The battle is alive and well in my own home. My daughter is allowed to use my phone to text and call her friends. She will not get a phone until she's driving, and we will never buy her a smartphone. She can buy one when she is 18 if she chooses.

She was very upset when I gave up my iPhone for the Light Phone a seminarian friend gave to me when he returned to Rome. I tried to point out to her that her dependence on my iPhone is a serious problem, and that my dependence on my iPhone was a hindrance to me spiritually. She can still talk to her friends on Google Meet to do homework on my laptop at times, but technology has convinced all of us that we need to be in constant communication.

Kids rarely do homework together anymore because they meet virtually. Whether we realize it or not, this is still isolation. Not to mention, in the world of highly addictive smartphones, there can be no downtime and, above all, no silence. There is little silence, which means little peace. We cannot hear God without periods of silence. Why are we shocked that kids are highly anxious and fearful?

The spiritual dangers of this age will increase as newer and newer advances in AI are released to the public. Have we considered who might be leading us toward what is beginning to feel like another Tower of Babel moment? People are shocked when I tell them demons text and email exorcists. They didn't realize demons do work through technology for their evil purposes. Everyone who is working with AI has pointed out they feel like they are talking to a conscious being. Who exactly do we think it might be who is rushing us over this cliff?

Our children are addicted to smartphones, and we laugh about it and shrug it off. ChatGPT is now making it impossible for teachers to know whether or not their students are doing their own work. AI is now making art. There are AI buddies to talk to and do whatever with. All of this is creating an artificial world, so that we don't have to deal with the mundanity and sufferings of this life. We no longer need to look at a magnificent sunset when AI has made one on our screens. We don't need in-person relationships because we can create our own relationships, dominated by our own ego, with AI or social media.

Social media was a primer and, in many ways, cut us off from one another. Countless people sit in restaurants staring at their phones while flesh-and-blood people are at the table with them. I lived this way for over a decade, but when I finally started to unplug from it, I could see all of the time I had wasted and the addiction I had fallen into. Soon people will sit at the table ignoring each other while they talk to AI. This is clearly not of the Lord. We know things by their fruits. As Catholics, we are called to exercise prudence, not jump on every bandwagon that comes along without any thought.

The answer for Catholics is to choose reality. To accept the Cross. Suffering is an essential part of purification and sanctification in this life. It is the furnace of Divine Love where we learn how to love. We cannot love people through a screen because we cannot be there to will their good, except of course through our prayers and encouraging words. This is always a good thing, but there are people in our homes, parishes, and towns who need us. The people in front of us are the ones the Lord has assigned to us to walk with in this life, whether it's a lifetime with a spouse, children, or parents, or a quick encounter at the grocery store. We can't see these people so long as we prefer an unreality. Somewhere along the way, in our rush to evangelize the Internet, we forgot our own principles of solidarity and subsidiarity.

As Catholics, we need to very seriously consider the implications of where AI is leading us, as well as what technology addiction is doing to all of us. Do we want our kids cheating on their homework? Do we want AI art and photography to warp our sense of beauty? Do we want to be part of a lonely society turning to AI for relationships? There is going to come a moment when we will have to draw a line in the sand: God or AI. The decision we will face very soon is that serious.

Photo by Christopher Burns on Unsplash

Apple's Vision Pro has native apps for Adobe Lightroom and Firefly AI – The Verge

Adobe's Firefly AI, the text-to-image tool behind features like Photoshop's generative fill, will be available on the Apple Vision Pro as a native app, alongside the company's popular Lightroom photo editing software already demonstrated during the headset's announcement.

The creative software giant announced in a press release that the new Firefly experience had been purpose-built for the headset's visionOS system, allowing users to move and place images generated by the app onto real-world spaces like walls and desks.

The interface of the Firefly visionOS app should be familiar to anyone who's already used the web-based version of the tool: users just need to enter a text description within the prompt box at the bottom and hit generate. This will then spit out four different images that can be dragged out of the main app window and placed around the home like virtual posters or prints.

Meanwhile, we also now have a better look at the native Adobe Lightroom photo editing app that was mentioned back when the Apple Vision Pro was announced last June. The visionOS Lightroom experience is similar to that of the iPad version, with a cleaner, simplified interface that should be easier to navigate with hand gestures than the more feature-laden desktop software.

There's no shortage of creative VR applications available on other platforms. Google's Tilt Brush was enabling folks to paint in virtual reality environments back in 2016, for example, when it was released for the HTC Vive. But Apple just launched the most ambitious VR headset yet. Its historical focus on creatives, coupled with Adobe's strong embrace of Apple Silicon, could make the Vision Pro's eye-watering $3,500 price tag worth the investment for some creatives.
