Archive for the ‘Ai’ Category

U.S. Patent and Trademark Office announces $70 million contract for AI patent search tool – FedScoop

The U.S. Patent and Trademark Office said this week that it intends to award an estimated $70 million contract to Accenture Federal Services for its Patent Search Artificial Intelligence (PSAI) capabilities.

The notice of intent described the need for a contractor to provide a full system development effort to continue maintenance of PSAI capabilities and to deliver new enhancements for the component. USPTO anticipates negotiating and awarding the contract to AFS by April 1.

The office's Patents End-to-End (PE2E) suite already includes an AI-based search tool for prior art searches, which examiners use to assess the novelty of an invention; PSAI is a component of that system.

"In an effort to modernize the Patents Automated Information Systems, the USPTO launched PE2E, a single web-based system that provides examiners with a unified and robust set of tools to use in the examination process," the USPTO statement about PE2E states. "PE2E Search is a system within PE2E that presents a modern interface design and introduces new tools and features, such as AI search capabilities."

USPTO in August announced that it was seeking information concerning AI deployment capabilities to improve searches for prior art during the patent process, FedScoop previously reported.

USPTO did not respond to a request for comment. AFS declined to comment.

Read the rest here:

U.S. Patent and Trademark Office announces $70 million contract for AI patent search tool - FedScoop

AI May Destroy Humankind in Just Two Years, Expert Says – Futurism

"We have a shred of a chance that humanity survives." Terminator Vision

The notoriously pessimistic AI researcher Eliezer Yudkowsky is back with a new prediction about the future of humankind.

"If you put me to a wall," he told The Guardian in a fascinating new interview, "and forced me to put probabilities on things, I have a sense that our current remaining timeline looks more like five years than 50 years. Could be two years, could be 10."

If you're wondering what "remaining timeline" means in this context, The Guardian's Tom Lamont interpreted it as the "machine-wrought end of all things," a "Terminator-like apocalypse," or a "Matrix hellscape."

"The difficulty is, people do not realise," Yudkowsky, the founder of theMachine Intelligence Research Institute in California, told the newspaper. "We have a shred of a chance that humanity survives."

The entire Guardian piece is worth a read. Lamont spoke to many prominent figures in the space, ranging from Brian Merchant to Molly Crabapple, with the throughline being skepticism about the idea that just because a new tech comes along, we need to adopt it even if it's not good for people.

These days, the focus of much of that critique is AI. Why, critics contend, should we treat the tech as inevitable even if it seems poised to eliminate and destabilize large numbers of jobs?

Or, in Yudkowsky's case, if the tech likely presents an existential threat. His remarks were the most provocative in the piece, which probably isn't surprising given his history. AI watchers may remember last year, for instance, when he called for bombing data centers to halt the rise of AI.

He's rethought that particular claim, he told The Guardian, but only slightly: he stands behind the idea of bombing data centers, he said, but no longer thinks that nuclear weapons should be used to target them.

"I would pick more careful phrasing now," he told the newspaper.

More on AI: Oppenheimer's Grandson Signs Letter Saying AI Threatens "Life on Earth"

Read the rest here:

AI May Destroy Humankind in Just Two Years, Expert Says - Futurism

SAP names Philipp Herzig as chief artificial intelligence officer – CIO

SAP is reorganizing its AI activities. Philipp Herzig, formerly head of cross-product engineering and experience, now leads a new end-to-end growth area focused on AI as the company's chief artificial intelligence officer (CAIO).

Herzig now reports directly to CEO Christian Klein and will oversee the entire value chain for SAP business AI, from research and product development through to implementation at customers.

With this new structure, SAP aims to accelerate the pace of its AI development, according to a statement from the software manufacturer. The newly established organization also underlines the central importance of business AI as a strategic driver for SAP's further growth. Herzig's team will work closely with other innovators within SAP, with the aim of integrating artificial intelligence into every part of the portfolio so that customers can use SAP business AI consistently across the entire SAP portfolio.

"SAP's increased focus on business AI marks the start of a completely new generation of enterprise innovation, and I'm honored to have the chance to help customers make the most of this unprecedented opportunity," said Herzig, describing his role. "I look forward to working with our team, as well as our ecosystem of customers and partners, to drive the development and delivery of relevant, reliable, responsible business AI that fundamentally changes the way business runs."

Walter Sun, who moved from Microsoft to SAP in September 2023, will coordinate the development of SAP's next generation of enterprise software worldwide as Global Head of AI and lead AI product development in Herzig's team. Herzig was already Sun's boss before his latest move.

This is not the only change in SAP's reporting structure. At the beginning of 2024, the company established a new Executive Board department, Customer Services & Delivery. Its head, Thomas Saueressig, is tasked with driving the cloud transformation among customers, many of whom remain hesitant. Product Development, which Saueressig had previously headed, has been taken over by Muhammad Alam, who has also been promoted to the SAP Executive Board. Like Sun, Alam had moved from Microsoft to SAP, albeit at the end of January 2022.

Read the original here:

SAP names Philipp Herzig as chief artificial intelligence officer - CIO

Police Turn to AI to Review Bodycam Footage – ProPublica

Over the last decade, police departments across the U.S. have spent millions of dollars equipping their officers with body-worn cameras that record what happens as they go about their work. Everything from traffic stops to welfare checks to responses to active shooters is now documented on video.

The cameras were pitched by national and local law enforcement authorities as a tool for building public trust between police and their communities in the wake of police killings of civilians like Michael Brown, an 18-year-old Black teenager killed in Ferguson, Missouri, in 2014. Video has the potential not only to get to the truth when someone is injured or killed by police, but also to allow systematic reviews of officer behavior to prevent deaths by flagging troublesome officers for supervisors or helping identify real-world examples of effective and destructive behaviors to use for training.

But a series of ProPublica stories has shown that a decade on, those promises of transparency and accountability have not been realized.

One challenge: The sheer amount of video captured using body-worn cameras means few agencies have the resources to fully examine it. Most of what is recorded is simply stored away, never seen by anyone.

Axon, the nation's largest provider of police cameras and of cloud storage for the video they capture, has a database of footage that has grown from around 6 terabytes in 2016 to more than 100 petabytes today. That's enough to hold more than 5,000 years of high-definition video, or 25 million copies of last year's blockbuster movie "Barbie."
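Those figures are easy to sanity-check. The quick calculation below is only a back-of-the-envelope sketch: the roughly 2 GB per hour bitrate and the film runtime are assumptions, not numbers reported by Axon or ProPublica.

```python
# Back-of-the-envelope check of the storage figures above.
# Assumptions (not from the article): ~2 GB per hour of HD video,
# decimal units (1 PB = 1,000,000 GB), and a ~1.9-hour film runtime.
PETABYTE_GB = 1_000_000
total_gb = 100 * PETABYTE_GB      # Axon's reported archive: ~100 PB
gb_per_hour = 2                   # assumed HD bitrate

hours = total_gb / gb_per_hour
years = hours / (24 * 365)
print(f"{hours:,.0f} hours, or about {years:,.0f} years of continuous video")
# -> 50,000,000 hours, or about 5,708 years, consistent with "more than 5,000 years"

film_runtime_hours = 1.9          # "Barbie" runs roughly 1 hour 54 minutes
copies = hours / film_runtime_hours
print(f"Roughly {copies / 1e6:,.1f} million copies of a film that length")
# -> roughly 26.3 million copies, in line with the 25 million figure
```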

"In any community, body-worn camera footage is the largest source of data on police-community interactions. Almost nothing is done with it," said Jonathan Wender, a former police officer who heads Polis Solutions, one of a growing group of companies and researchers offering analytic tools powered by artificial intelligence to help tackle that data problem.

The Paterson, New Jersey, police department has made such an analytic tool a major part of its plan to overhaul its force.

In March 2023, the state's attorney general took over the department after police shot and killed Najee Seabrooks, a community activist experiencing a mental health crisis who had called 911 for help. The killing sparked protests and calls for a federal investigation of the department.

The attorney general appointed Isa Abbassi, formerly the New York Police Department's chief of strategic initiatives, to develop a plan for how to win back public trust.

"Changes in Paterson are led through the use of technology," Abbassi said at a press conference announcing his reform plan in September. "Perhaps one of the most exciting technology announcements today is a real game changer when it comes to police accountability and professionalism."

The department, Abbassi said, had contracted with Truleo, a Chicago-based software company that examines audio from bodycam videos to identify problematic officers and patterns of behavior.

For around $50,000 a year, Truleo's software allows supervisors to select from a set of specific behaviors to flag, such as when officers interrupt civilians, use profanity, use force or mute their cameras. The flags are based on data Truleo has collected on which officer behaviors result in violent escalation. Among the conclusions from Truleo's research: Officers need to explain what they are doing.

"There are certain officers who don't introduce themselves, they interrupt people, and they don't give explanations. They just do a lot of command, command, command, command, command," said Anthony Tassone, Truleo's co-founder. "That officer's headed down the wrong path."
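To make the idea of transcript-based flagging concrete, here is a minimal, purely illustrative sketch. It is not Truleo's implementation: the behavior categories, word list and interruption heuristic are invented for this example, and a real system would work from speaker-diarized transcripts and far richer models.

```python
# Illustrative sketch only; NOT Truleo's product. Flags two of the behaviors
# described above (profanity, interruptions) from a toy bodycam transcript.
from dataclasses import dataclass

PROFANITY = {"damn", "hell"}  # placeholder word list for the example


@dataclass
class Utterance:
    speaker: str   # "officer" or "civilian"
    start: float   # seconds from the start of the recording
    end: float
    text: str


def flag_utterances(transcript: list[Utterance]) -> list[tuple[str, Utterance]]:
    """Return (flag_name, utterance) pairs a supervisor might review."""
    flags = []
    for i, u in enumerate(transcript):
        if u.speaker != "officer":
            continue
        if any(word in PROFANITY for word in u.text.lower().split()):
            flags.append(("profanity", u))
        # Crude interruption heuristic: the officer starts speaking before the
        # previous civilian utterance has finished.
        if i > 0 and transcript[i - 1].speaker == "civilian" and u.start < transcript[i - 1].end:
            flags.append(("interruption", u))
    return flags


if __name__ == "__main__":
    demo = [
        Utterance("civilian", 0.0, 4.0, "I was just trying to explain that"),
        Utterance("officer", 3.2, 6.0, "Step out of the car now"),
    ]
    for name, utt in flag_utterances(demo):
        print(name, "->", utt.text)  # prints: interruption -> Step out of the car now
```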

For Paterson police, Truleo allows the department to review 100% of body-worn camera footage to identify risky behaviors and increase professionalism, according to its strategic overhaul plan. The software, the department said in its plan, will detect events like uses of force, pursuits, frisks and non-compliance incidents and allow supervisors to screen for both professional and unprofessional officer language.

Paterson police officials declined to be interviewed for this story.

Around 30 police departments currently use Truleo, according to the company. In October, the NYPD signed on to a pilot program for Truleo to review the millions of hours of footage it produces annually, according to Tassone.

Amid a crisis in police recruiting, Tassone said some departments are using Truleo because they believe it can help ensure new officers are meeting professional standards. Others, like the department in Aurora, Colorado, are using the software to bolster their case for emerging from external oversight. In March 2023, city attorneys successfully lobbied the City Council to approve a contract with Truleo, saying it would help the police department more quickly comply with a consent decree that calls for better training and recruitment and collection of data on things like use of force and racial disparities in policing.

Truleo is just one of a growing number of such analytics providers.

In August 2023, the Los Angeles Police Department said it would partner with a team of researchers from the University of Southern California and several other universities to develop a new AI-powered tool to examine footage from around 1,000 traffic stops and determine which officer behaviors keep interactions from escalating. In 2021, Microsoft awarded $250,000 to a team from Princeton University and the University of Pennsylvania to develop software that can organize video into timelines that allow easier review by supervisors.

Dallas-based Polis Solutions has contracted with police in its hometown, as well as departments in St. Petersburg, Florida; Kinston, North Carolina; and Alliance, Nebraska, to deploy its own software, called TrustStat, to identify videos supervisors should review. "What we're saying is, look, here's an interaction which is statistically significant for both positive and negative reasons. A human being needs to look," said Wender, the company's founder.

TrustStat grew out of a project of the Defense Advanced Research Projects Agency, the research and development arm of the U.S. Defense Department, where Wender previously worked. It was called the Strategic Social Interaction Modules program, nicknamed Good Stranger, and it sought to understand how soldiers in potentially hostile environments, say a crowded market in Baghdad, could keep interactions with civilians from escalating. The program brought in law enforcement experts and collected a large database of videos. After it ended, Wender founded Polis Solutions, and used the Good Stranger video database to train the TrustStat software. TrustStat is entirely automated: Large language models analyze speech, and image processing algorithms identify physical movements and facial expressions captured on video.

At Washington State University's Complex Social Interactions Lab, researchers use a combination of human reviewers and AI to analyze video. The lab began its work seven years ago, teaming up with the Pullman, Washington, police department. Like many departments, Pullman had adopted body cameras but lacked the personnel to examine what the video was capturing and train officers accordingly.

The lab has a team of around 50 reviewers, drawn from the university's own students, who comb through video to track things like the race of officers and civilians, the time of day, and whether officers gave explanations for their actions, such as why they pulled someone over. The reviewers note when an officer uses force, if officers and civilians interrupt each other and whether an officer explains that the interaction is being recorded. They also note how agitated officers and civilians are at each point in the video.

Machine learning algorithms are then used to look for correlations between these features and the outcome of each police encounter.

"From that labeled data, you're able to apply machine learning so that we're able to get to predictions, so we can start to isolate and figure out, well, when these kind of confluences of events happen, this actually minimizes the likelihood of this outcome," said David Makin, who heads the lab and also serves on the Pullman Police Advisory Committee.
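As a minimal sketch of what that correlation step might look like, and not the lab's actual pipeline, the example below fits a logistic regression on a few hypothetical hand-labeled features to see how each relates to whether an encounter escalated. The feature names, the toy data and the use of scikit-learn are all assumptions made for the illustration.

```python
# Toy example: relate hand-labeled interaction features to encounter outcomes.
# Not the lab's pipeline; feature names and data are invented.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Columns: officer_explained (0/1), interruptions (count), civilian_agitation (0-5)
X = np.array([
    [1, 0, 1],
    [1, 1, 2],
    [0, 4, 4],
    [0, 3, 5],
    [1, 0, 3],
    [0, 5, 4],
])
# Label: 1 if the encounter escalated to a use of force, 0 otherwise
y = np.array([0, 0, 1, 1, 0, 1])

model = LogisticRegression().fit(X, y)
features = ["officer_explained", "interruptions", "civilian_agitation"]
for name, coef in zip(features, model.coef_[0]):
    print(f"{name}: {coef:+.2f}")
# Positive coefficients are associated with higher odds of escalation in this
# toy data, negative ones with lower odds; a real analysis would need far more
# labeled encounters and careful validation.
```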

One lesson has come through: Interactions that don't end in violence are more likely to start with officers explaining what is happening, not interrupting civilians and making clear that cameras are rolling and the video is available to the public.

The lab, which does not charge clients, has examined more than 30,000 hours of footage and is working with 10 law enforcement agencies, though Makin said confidentiality agreements keep him from naming all of them.

Much of the data compiled by these analyses, and the lessons learned from it, remain confidential, with findings often bound up in nondisclosure agreements. This echoes the problem with body camera video itself: Police departments continue to be the ones who decide how to use a technology originally meant to make their activities more transparent and hold them accountable for their actions.

Under pressure from police unions and department management, Tassone said, the vast majority of departments using Truleo are not willing to make public what the software is finding. One department using the software (Alameda, California) has allowed some findings to be publicly released. At the same time, at least two departments (Seattle and Vallejo, California) have canceled their Truleo contracts after backlash from police unions.

The Pullman Police Department cited Washington State University's analysis of 4,600 hours of video to claim that officers do not use force more often, or at higher levels, when dealing with a minority suspect, but did not provide details on the study.

At some police departments, including Philadelphia's, policy expressly bars disciplining officers based on spot-check reviews of video. That policy was pushed for by the city's police union, according to Hans Menos, the former head of the Police Advisory Committee, Philadelphia's civilian oversight body. The Police Advisory Committee has called on the department to drop the restriction.

"We're getting these cameras because we've heard the call to have more oversight," Menos said in an interview. "However, we're limiting how a supervisor can use them, which is worse than not even requiring them to use it."

Philadelphia's police department and police union did not respond to requests for comment.

Christopher J. Schneider, a professor at Canada's Brandon University who studies the impact of emerging technology on social perceptions of police, said the lack of disclosure makes him skeptical that AI tools will fix the problems in modern policing.

Even if police departments buy the software and find problematic officers or patterns of behavior, those findings might be kept from the public just as many internal investigations are.

"Because it's confidential," he said, "the public are not going to know which officers are bad or have been disciplined or not been disciplined."

Visit link:

Police Turn to AI to Review Bodycam Footage - ProPublica

Meta’s Big Rally Spotlights Investors’ Questions About AI Returns – The Information

Shares of Meta Platforms stock soared 20% on Friday as investors applauded the company's fourth-quarter results, which showed a strong ad recovery from its 2022 slump. But the enthusiasm of some investors was tempered by uncertainty about the returns Meta can get from the billions it is pouring into new generative artificial intelligence technology.

During Meta's fourth-quarter earnings call on Thursday night, company executives said capital expenditures on expanding servers and data centers, along with the costs of operating that infrastructure, would both rise this year, at least partly because of AI tech development. Much of that spending is for open-source products, however, which don't generate revenue for Meta. The most obvious near-term way for Meta to make money from AI is through the AI-infused advertising tools it is now pitching to marketers, but calculating the impact of that is hard to impossible, as New Street Research analyst Dan Salmon said in a report on Thursday.

Read this article:

Meta's Big Rally Spotlights Investors' Questions About AI Returns - The Information