Archive for the ‘Media Control’ Category

Why we need a new agency to regulate advanced artificial intelligence: Lessons on AI control from the Facebook Files – Brookings Institution

With the development of ever more advanced artificial intelligence (AI) systems, some of the world's leading scientists, AI engineers, and businesspeople have expressed concerns that humanity may lose control over its creations, giving rise to what has come to be called the AI Control Problem. The underlying premise is that human intelligence may be outmatched by artificial intelligence at some point and that we may not be able to maintain meaningful control over such systems. If we fail to do so, they may act contrary to human interests, with consequences that become increasingly severe as the sophistication of AI systems rises. Indeed, recent revelations in the so-called Facebook Files provide a range of examples of one of the most advanced AI systems on our planet acting in opposition to our society's interests.

In this article, I lay out what we can learn about the AI Control Problem from the lessons of the Facebook Files. I observe that the challenges we are facing fall into two categories: the technical problem of direct control of AI, i.e., ensuring that an advanced AI system does what the company operating it wants it to do, and the governance problem of social control of AI, i.e., ensuring that the objectives companies program into advanced AI systems are consistent with society's objectives. I analyze the scope for our existing regulatory system to address the problem of social control in the context of Facebook but observe that it suffers from two shortcomings: first, it leaves regulatory gaps; second, it focuses excessively on after-the-fact solutions. To pursue a broader and more pre-emptive approach, I argue the case for a new regulatory body, an AI Control Council, with the power both to dedicate resources to research on the direct AI control problem and to address the social AI control problem by proactively overseeing, auditing, and regulating advanced AI systems.

A fundamental insight from control theory is that if you are not careful about specifying your objectives in their full breadth, you risk generating unintended side effects. Optimizing for a single objective, for example, comes at the expense of all the other objectives you may care about. The general principle has been known for eons. It is reflected, for example, in the legend of King Midas, who was granted a wish by a Greek god and, in his greed, specified a single objective: that everything he touched turn to gold. He realized too late that he had failed to specify his objectives in their full breadth when his food and his daughter turned to gold at his touch.

The same principle applies to advanced AI systems that pursue the objectives that we program into them. And as we let our AI systems determine a growing range of decisions and actions and as they become more and more effective at optimizing their objectives, the risk and magnitude of potential side effects grow.
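The pitfall can be sketched in a few lines of code. This is a toy illustration with made-up numbers, not any real ranking system: `single_objective` stands in for pure engagement maximization, while `combined_objective` adds a hypothetical `harm` penalty alongside engagement.

```python
# Toy illustration (hypothetical scores): ranking posts by a single
# objective vs. a combined objective that also weighs an estimate of harm.

posts = [
    {"id": "cat_video",  "engagement": 0.4, "harm": 0.0},
    {"id": "news_story", "engagement": 0.6, "harm": 0.1},
    {"id": "outrage",    "engagement": 0.9, "harm": 0.8},
]

def single_objective(post):
    # Optimize engagement alone, ignoring every other consideration.
    return post["engagement"]

def combined_objective(post, harm_weight=1.0):
    # Penalize estimated harm alongside engagement.
    return post["engagement"] - harm_weight * post["harm"]

top_single = max(posts, key=single_objective)
top_combined = max(posts, key=combined_objective)

print(top_single["id"])    # the pure engagement maximizer picks "outrage"
print(top_combined["id"])  # the penalized objective picks "news_story"
```

The point is not the particular numbers but the structure: whatever is left out of the objective is, by construction, worth zero to the optimizer.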

The revelations from the Facebook Files are a case in point. Facebook, which recently changed its name to Meta, operates two of the world's largest social networks, the eponymous Facebook as well as Instagram. The company employs an advanced AI system, a Deep Learning Recommendation Model (DLRM), to decide which posts to present in the news feeds of Facebook and Instagram. This recommendation model aims to predict which posts a user is most likely to engage with, based on thousands of data points that the company has collected about each of its billions of individual users and trillions of posts.
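The general pattern such a recommender follows can be sketched as below. This is a minimal, hypothetical illustration of embedding-based engagement prediction, not Facebook's actual DLRM, whose features, architecture, and scale are far more elaborate:

```python
# Minimal sketch of an embedding-based recommender (hypothetical vectors):
# score each post for a user, then rank the feed by predicted engagement.
import math

def predict_engagement(user_vec, post_vec):
    # Dot-product affinity squashed to a probability, a common pattern
    # in embedding-based recommendation models.
    affinity = sum(u * p for u, p in zip(user_vec, post_vec))
    return 1.0 / (1.0 + math.exp(-affinity))

user = [0.2, -0.5, 0.9]  # stand-in for thousands of learned user features
feed = {
    "post_a": [0.1, 0.3, 0.8],
    "post_b": [-0.4, 0.2, -0.1],
}

ranked = sorted(feed, key=lambda p: predict_engagement(user, feed[p]),
                reverse=True)
print(ranked)  # posts ordered by predicted engagement
```

In a production system the vectors are learned from behavioral data and the scoring function is a large neural network, but the objective being maximized at the top of the loop is the same: predicted engagement.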

Facebook's AI system is very effective at maximizing user engagement, but at the expense of other objectives that our society values. As revealed by whistleblower Frances Haugen via a series of articles in the Wall Street Journal in September 2021, the company repeatedly prioritized user engagement over everything else. For example, according to Haugen, the company knew from internal research that the use of Instagram was associated with serious increases in mental health problems related to body image among female teenagers but did not adequately address them. The company attempted to boost "meaningful social interaction" on its platform in 2018 but instead amplified outrage, which contributed to the rise of echo chambers that risk undermining the health of our democracy. Many of the platform's problems are even starker outside of the U.S., where drug cartels and human traffickers employed Facebook to do their business, and Facebook's attempts to thwart them were insufficient. These examples illustrate how detrimental it can be to our society when we program an advanced AI system that affects many different areas of our lives to pursue a single objective at the expense of all others.

The Facebook Files are also instructive for another reason: They demonstrate the growing difficulty of exerting control over advanced AI systems. Facebook's recommendation model is powered by an artificial neural network with some 12 trillion parameters, which currently makes it the largest artificial neural network in the world. The system accomplishes the job of predicting which posts a user is most likely to engage with better than a team of human experts ever could. It thereby joins a growing list of AI systems that can perform, at super-human levels, tasks that were previously reserved for humans. Some researchers refer to such systems as domain-specific, or narrow, superintelligences: AI systems that outperform humans within a narrow domain of application. Humans still lead when it comes to general intelligence, the ability to solve a wide range of problems in many different domains. However, the club of narrow superintelligences has been growing rapidly in recent years. It includes AlphaGo and AlphaFold, creations of Google subsidiary DeepMind that can play Go and predict how proteins fold at super-human levels, as well as speech recognition and image classification systems that can perform their tasks better than humans. As these systems acquire super-human capabilities, their complexity makes it increasingly difficult for humans to understand how they arrive at solutions. As a result, an AI's creator may lose control of the AI's output.

There are two dimensions of AI control that are useful to distinguish because they call for different solutions. The direct control problem captures the difficulty, for the company or entity operating an AI system, of exerting sufficient control, i.e., of making sure the system does what the operator wants it to do. The social control problem reflects the difficulty of ensuring that an AI system acts in accordance with social norms.

Direct AI control is a technical challenge that companies operating advanced AI systems face. All the big tech companies have experienced failures of direct control over their AI systems. For example, Amazon employed a resume-screening system that was biased against women; Google developed a photo categorization system that labeled Black men as gorillas; Microsoft operated a chatbot that quickly began to post inflammatory and offensive tweets. At Facebook, Mark Zuckerberg launched a campaign to promote COVID-19 vaccines in March 2021, but one of the articles in the Facebook Files documents that Facebook instead turned into a source of rampant misinformation, concluding that "[e]ven when he set a goal, the chief executive couldn't steer the platform as he wanted."

One of the fundamental problems of advanced AI systems is that the underlying algorithms are, at some level, black boxes: their complexity makes them opaque and their workings difficult for humans to fully understand. Although there have been some advances in making deep neural networks explainable, these are innately limited by the architecture of such networks. For example, with sufficient effort it is possible to explain how one particular decision was made (so-called local interpretability), but it is impossible to foresee all possible decisions and their implications. This exacerbates the difficulty of controlling what our AI systems do.
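A rough sketch of what local interpretability means in practice: probe an opaque scoring function around a single input and measure how sensitive that one decision is to each feature. The `black_box_score` function below is a hypothetical stand-in for a real model, and the probing method is a simple finite-difference estimate.

```python
# Sketch of "local interpretability": explain one decision of an opaque
# scorer by measuring how each feature moves the score near this input.

def black_box_score(features):
    # Hypothetical opaque model; imagine billions of learned parameters
    # hidden behind this interface.
    x, y = features
    return 3.0 * x + 0.1 * y

def local_explanation(model, features, eps=1e-4):
    # Finite-difference sensitivity of the score to each feature.
    # Valid only near this particular input (local, not global).
    base = model(features)
    sensitivities = []
    for i in range(len(features)):
        probe = list(features)
        probe[i] += eps
        sensitivities.append((model(probe) - base) / eps)
    return sensitivities

print(local_explanation(black_box_score, [1.0, 2.0]))
# approximately [3.0, 0.1]: the first feature dominates this decision
```

The limitation the text describes is visible even here: the explanation holds only at the probed input, and enumerating every input a trillion-parameter model might ever see is infeasible.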

Frequently, we only detect AI control problems after they have occurred, as was the case in all the examples from big tech discussed above. However, this is a risky path with potentially catastrophic outcomes. As AI systems acquire greater capabilities and we delegate more decisions to them, relying on after-the-fact course corrections exposes our society to large potential costs. For example, if a social networking site contributes to encouraging riots and deaths, a course correction cannot undo the loss of life. The problem is of even greater relevance in AI systems for military use. This creates an urgent case for proactive work on the direct control problem and for public policy measures to support and mandate such work, which I discuss below.

In contrast to the technical challenge of the direct control problem, the social AI control problem is a governance challenge. It is about ensuring that AI systems, including those that do precisely what their operators want them to do, are not imposing externalities on the rest of society. Most of the problems identified in the Facebook Files are examples of this, as Zuckerberg seems to have prioritized user engagement, and by extension the profits and market share of his company, over the common good.

The problem of social control of AI systems that are operated by corporations is exacerbated by market forces. It is frequently observed that unfettered market forces may provide corporations with incentives to pursue a singular objective, profit maximization, at the expense of all other objectives that humanity may care about. As we already discussed in the context of AI systems, pursuing a single objective in a multi-faceted world is bound to lead to harmful side effects on some or all members of society. Our society has created a rich set of norms and regulations in which markets are embedded so that we can reap the benefits of market forces while curtailing their downsides.

Advanced AI systems have led to a shift in the balance of power between corporations and society: they have given corporations the ability to pursue single-minded objectives like user engagement in hyper-efficient ways that were previously impossible. The resulting potential harms for society are therefore larger and call for more proactive and targeted regulatory solutions.

Throughout our history, whenever we developed new technologies that posed new hazards for society, our nation has made it a habit to establish new regulatory bodies and independent agencies endowed with world-class expertise to oversee and investigate the new technologies. For example, the National Transportation Safety Board (NTSB) and the Federal Aviation Administration (FAA) were established at the onset of the age of aviation, and the Nuclear Regulatory Commission (NRC) was established at the onset of the nuclear age. By many measures, advanced artificial intelligence has the potential to be an even more powerful technology that may impose new types of hazards on society, as exemplified by the Facebook Files.

Given the rise of artificial intelligence, it is now time to establish a federal agency to oversee advanced artificial intelligence: an AI Control Council explicitly designed to address the AI Control Problem, i.e., to ensure that the ever more powerful AI systems we are creating act in society's interest. To be effective in meeting this objective, such a council would need the ability (i) to pursue solutions to the direct AI control problem and (ii) to oversee and, when necessary, regulate the way AI is used across the U.S. economy to address the social control problem, all while ensuring that it does not handicap advances in AI. (See also a complementary proposal by Ryan Calo for a federal agency to oversee advances in robotics.) In what follows, I first propose the role and duties of an AI Control Council and then discuss some of the tradeoffs and design issues inherent in the creation of a new federal agency.

First, there are many difficult technical questions related to direct AI control, and even some philosophical questions, that require significant fundamental research. Such work has broad public benefits but is hampered by the fact that the most powerful computing infrastructure, the most advanced AI systems, and increasingly the vast majority of AI researchers are located within private corporations, which do not have sufficient incentive to invest in broader public goods. The AI Control Council should have the ability to direct resources to addressing these questions. Since the U.S. is one of the leading AI superpowers, this would have the potential to steer AI advancement in a more desirable direction at a worldwide level.

Second, to be truly effective, the council would need a range of powers to oversee AI development by private and public actors in order to meet the challenge of social control of AI.

Since talent shortages in the AI sector are severe, the Council needs to be designed with an eye toward making it attractive for the world's top experts on AI and AI control to join. Many of the leading experts on AI recognize the high stakes involved in AI control. If the design of the Council carries the promise of making progress on the AI control problem, highly talented individuals may be eager to serve and to contribute to meeting one of the greatest technological challenges of our time.

One of the questions the Council will need to address is how to ensure that its actions steer advances in AI in a desirable direction without holding back technological progress and U.S. leadership in the field. The Council's work on the direct control problem, as well as the lessons learned from impact assessments, will benefit AI advancement broadly because they will allow private sector actors to build on the findings of the Council and of other AI researchers. Moreover, if well designed, even the oversight and regulation required to address the social control problem can in fact spur technological progress by providing certainty about the regulatory environment and by forestalling a race to the bottom among competing companies.

Another important question in designing the Council is how to resolve domain issues when AI systems are deployed in areas that are already regulated by an existing agency. In that case, it would be most useful for the Council to play an advisory role and assist with expertise as needed. For example, car accidents involving autonomous vehicles would fall squarely into the domain of the National Highway Traffic Safety Administration (NHTSA), but the new AI Control Council could assist with its expertise on advanced AI.

By contrast, when an advanced AI system gives rise to (i) effects in a new domain or (ii) emergent effects that cut across domains covered by individual agencies, it would fall within the powers of the AI Control Council to intervene. For example, the mental health effects of the recommendation models of social networks would be a new domain that is not covered by existing regulations and that calls for impact assessments, transparency, and potentially regulation. Similarly, if a social network targeted stockbrokers with downbeat content to affect their mood, and by extension stock markets, in order to benefit financially in a way not covered by existing regulations on market manipulation, that would be a cross-domain case the Council should investigate alongside the Securities and Exchange Commission (SEC).

From a longer-term perspective, the problems revealed in the Facebook Files are only the beginning of humanity's struggle to control its ever more advanced AI systems. As the amount of computing power available to the leading AI systems and the human and financial resources invested in AI development grow exponentially, the capabilities of AI systems are rising alongside them. If we cannot successfully address the AI control problems we face now, how can we hope to do so in the future, when the powers of our AI systems have advanced by another order of magnitude? Creating the right institutions to address the AI control problem is therefore one of the most urgent challenges of our time. We need a carefully crafted federal AI Control Council to meet the challenge.

The Brookings Institution is financed through the support of a diverse array of foundations, corporations, governments, and individuals, as well as an endowment. A list of donors can be found in our annual reports published online. The findings, interpretations, and conclusions in this report are solely those of its author(s) and are not influenced by any donation.

View original post here:
Why we need a new agency to regulate advanced artificial intelligence: Lessons on AI control from the Facebook Files - Brookings Institution

BuySTARcase.com Named CES 2022 Innovation Awards Honoree for First-ever App Controlled Smart Battery Case for Smart Phones – Star Local Media

LONGVIEW, Texas, Dec. 8, 2021 /PRNewswire/ -- BuySTARcase.com, a leading innovator in mobile phone accessories, has been named a CES 2022 Innovation Awards Honoree for its revolutionary STARcase.

The STARcase is an innovative, app-controlled smartphone case with automatic charge technology that displays customized notifications on an LED screen for calls, texts, emails, and third-party apps. The STARcase syncs with the user's contacts and apps, allowing the user to choose from hundreds of light shows, animated icons, and customizable scrolling text to view messages and notifications on the back of the phone. It also provides industry-first "smart charging" capability, allowing the user to customize when, and how much, to charge the phone from the case, reducing the prospects of ever being caught with a dead phone battery.

"We are thrilled to be named a CES 2022 Innovation Award Honoree," says Tom Coverstone, Chief Innovation Officer, at BuySTARase.com. "We have spent years engineering, modifying, and perfecting the STARcase. It is an honor to be recognized with the prestigious CES Innovation Award as recognition for the team'sexceptional efforts."

The CES Innovation Awards are an annual competition honoring outstanding design and engineering in consumer technology products across 28 product categories. This year's CES Innovation Awards program received a record-high number of over 1,800 submissions, and the STARcase prevailed in the Mobile Phone and Accessories category. The honorees of this highly anticipated competition receive global recognition from industry leaders and media, who use the CES Innovation Awards to identify outstanding products, upcoming trends, and how companies are using technology to change lives for the better.

An elite panel of industry expert judges, including members of the media, designers, engineers, and more, reviewed submissions based on innovation, engineering and functionality, aesthetics and design.

About BuySTARcase.com

The history of BuySTARcase.com parallels the story of American innovation: that with hard work, collaboration, and persistence, an idea can become a reality. The STARcase was invented, designed, and is assembled in the USA. The STARcase team is committed to utilizing the highest quality parts available, hand-selected and tested for quality and durability. Their extensive patent and trademark portfolio reflects and validates a commitment to innovation. The STARcase mission extends to the global community. It is their genuine hope that the STARcase connects and brightens the lives of people across our globe. For more information visit: https://buySTARcase.com/.

About CES

Join BuySTARcase.com at CES 2022, the world's most influential technology event, happening Jan. 5-8 in Las Vegas, NV, in person and digitally.

Owned and produced by CTA, CES 2022, the global stage for innovation, will convene the tech industry giving global audiences access to the latest technology as well as the world's most-influential tech leaders and industry advocates.

To schedule a meeting, or for media or business inquiries with BuySTARcase.com at the upcoming conference, please contact Business Development at sales@buystarcase.com.

View original content to download multimedia:https://www.prnewswire.com/news-releases/buystarcasecom-named-ces-2022-innovation-awards-honoree-for-first-ever-app-controlled-smart-battery-case-for-smart-phones-301440384.html

SOURCE BuySTARcase.com

Read this article:
BuySTARcase.com Named CES 2022 Innovation Awards Honoree for First-ever App Controlled Smart Battery Case for Smart Phones - Star Local Media

Australia’s voters hold government and the news media in contempt and the contagion is spreading | Peter Lewis – The Guardian Australia

While the showdown with a mutating virus may have hogged the political limelight this year, there has been another global outbreak which may have an even more profound impact on our collective wellbeing: the corrosion of trust in information.

Where once access to information was regarded as a self-evident liberating force, we appear to have reached an inflection point where exponential growth in flows of content is clogging our public square in muck.

This is a contagion that threatens to divide us and to undermine our efforts to mediate our differences and respond to our collective challenges anchored in a commonly agreed set of facts.

The impact on our civic spaces has hit us in waves over the past 12 months, from the attacks on the US Capitol over a "stolen" presidential election, to the fervent anti-vaxxer protests, to our own government's stubborn refusal to address climate change.

While the voices that dominate these outbreaks of collective madness may appear to come from the fringes, their impact is shared in an unrelenting diminution of our trust in institutions and ultimately in each other.

After a short renaissance in public trust in 2020, figures in this week's Guardian Essential Report show the majority of Australians ending the year with little or no trust in the information we receive from government, and with similar disdain for the output of the traditional news media and other institutions involved in public discourse.

As we enter the pointy end of the political cycle, this becomes an acute challenge with the real potential to influence the ultimate outcome of the election. Like a utility delivering dirty drinking water, we enter this critical moment unsure of the quality of what we are consuming.

These low levels of trust in information have a political impact. Confusion is a friend of the status quo: if you can't agree on the problem, there's not much value in an answer.

While scientists' information remains held in high regard, a quarter of Australians still say they have little or no faith in people whose careers are dedicated to separating the facts from the feelings.

Despite our relative success in managing the pandemic, other questions in this weeks poll show the consensus around public health measures declining as the Omicron variant threatens to take hold.

Pointedly, the trade in vaccine disinformation has created a new fault line to divide around, with majority support for establishing a two-tier health system where the unvaccinated would be asked to pay for any Covid treatment.

These results also show there is a particular disdain for the digital platforms that have built their unimaginable wealth and influence by monetising division and anger in an effort to extract and then sell our attention to the highest bidder.

In a separate question there is majority support for measures to regulate social media platforms and disrupt their model of collecting user information. There is also a growing appetite for the government to play a role in supporting alternative networks that operate in the public, rather than a commercial interest.

As QUT academic Axel Bruns argues in his contribution to a new book of essays released by the Australia Institute's Centre for Responsible Technology, The Public Square Project, providing independent researchers with access to the secret black box of platform algorithms is essential if we are to secure a clean supply of information.

Bruns, who works to chart the flow of disinformation through social networks, argues a critical driver of conspiracy is the interaction with traditional media and public figures, be they elected officials or celebrities.

To its credit, the Morrison government has bookended 2021 with attempts to place greater responsibility on to the platforms: first with the news media bargaining code, forcing the platforms to fund journalism, and more recently with measures to force greater responsibility for the behaviour of users online.

But the way the prime minister approaches the election campaign will be just as telling as any legislative push. Already he appears to be conjuring his own virtual reality, where Gladys Berejiklian is the victim of a "kangaroo court", Labor's climate plan will destroy the economy, and household prices will inevitably rise if he is no longer in control.

A real leadership test as the election heats up would be to self-moderate these flows of disinformation and vitriol rather than micro-targeting lies and anger at vulnerable voting groups.

Last time it was "death taxes"; the election before, it was Labor's "Mediscare": each a brutally effective confection. Each success in political disinformation builds on the last, until elections cease to be a contest of real ideas and become a cartoon cut-out of tropes and cliches.

Elections are rarely won pretty, but Trump showed there is a limit to and consequence of fully embracing the ugly. A multi-partisan commitment to ending the digital disinformation arms race would be a transformative commitment to a reality-based future from all sides.

Peter Lewis will discuss the latest Essential Report findings with Guardian Australia political editor Katharine Murphy at 1pm on Tuesday. Free registration here

Original post:
Australia's voters hold government and the news media in contempt and the contagion is spreading | Peter Lewis - The Guardian Australia

MEDIA ADVISORY: Benchmark Digital, ESG-Focused Executives to Host Forum on Global Regulation, Climate Strategy, Data Management, and Other ESG Issues…

CINCINNATI--(BUSINESS WIRE)--Benchmark Digital (Benchmark), a leading provider of cloud-based Environmental, Social and Governance (ESG) software solutions, invites you to a virtual ESG Executive Collaboration Forum on December 8, 2021. The session is the second in a recurring series of virtual forums designed to share best practices among ESG practitioners and equip organizations with practical insights into the evolving ESG regulatory and standards landscape (GRI, SASB, TCFD) and strategies for addressing ESG disclosure and the operational foundations necessary for ESG performance excellence.

FEATURED SPEAKERS:

WHAT:

In this 1-hour session, experts will provide takeaways from COP26, a real-time update on the global ESG regulatory and reporting landscape, and the role of management systems in ESG disclosure and data management, including a case study of Global Partners LP and their climate program. ESG leaders will be able to connect in an open exchange of insights and best practices, find out how their peers are overcoming common challenges, and learn what they can do to strengthen their ESG programs and their company's results.

WHEN:

WHERE:

About Benchmark ESG

Benchmark ESG (the next generation of Gensuite) enables companies to implement robust cross-functional Environmental, Social, and Governance (ESG) Solutions locally, globally and across diverse operating profiles. Our comprehensive cloud-based software suite features intuitive, best-practice process functionality, flexible configurations and powerful extensions. For over two decades, our digital platform has helped companies manage safe & sustainable operations worldwide, with a focus on fast return on investment (ROI), service excellence and continuous innovation. Join over 1,500,000 users that trust Benchmark ESG with their software system needs for operational risk and compliance, EHS, sustainability, product stewardship, responsible sourcing, and ESG disclosure reporting and management.

Read more from the original source:
MEDIA ADVISORY: Benchmark Digital, ESG-Focused Executives to Host Forum on Global Regulation, Climate Strategy, Data Management, and Other ESG Issues...

CJ McCollum Reacts To Trail Blazers’ Management Changes: "There’s A Lot Of Sh*t Going On. Theres Sh*t Going On Everyday." – Fadeaway World

Source: Ahn Fire Digital

The Portland Trail Blazers have had a horrendous start to the season. New head coach Chauncey Billups has not been able to get his team playing the way he had hoped. Despite Damian Lillard expressing a desire not to leave and join another team, if the Blazers keep playing this poorly, he may have to leave, for the sake of his career.

CJ McCollum spoke recently about the situation in Portland with The Athletic. He said the circumstances are different from anything he has seen before, that everything that has happened has an effect on the team's performance on the court, and that it is taking a massive toll on him.

"This is different than anything I've ever experienced because of the circumstances. This is the first year of my career where we lost our whole coaching staff, brought in a new coach, a new staff, the GM gets fired in the middle of the season. All of that affects you on the court. But there is no excuses. I didn't come here to tell you 'There's a lot of sh*t going on,' but yeah, there is. There's sh*t going on every day. And I'm a f*cking human being. But look, at the end of the day, my job is to play basketball. So I go play basketball."

McCollum has not had an easy road so far, and things have gotten worse: he recently suffered a significant injury, a collapsed right lung, that will keep him out indefinitely. Things have gone from bad to worse for McCollum and the Blazers, and not just on the court.

The Blazers have been a franchise in disarray for some time. Aside from their 11-14 record, they have had internal problems, especially in their front office. They recently fired their president of basketball operations, Neil Olshey, which could put a lot of the team's structure in jeopardy.

On top of that, there have been reports that Damian Lillard wants to play with Ben Simmons. Any fans of the Blazers will hope that things will settle down, but it will take a lot of progress from the franchise on all fronts to make that possible.

See the original post:
CJ McCollum Reacts To Trail Blazers' Management Changes: "There's A Lot Of Sh*t Going On. Theres Sh*t Going On Everyday." - Fadeaway World