Archive for the ‘Ai’ Category

Opinion | Elon Musk, Geoff Hinton, and the War Over A.I. – The New York Times

There is no shortage of researchers and industry titans willing to warn us about the potential destructive power of artificial intelligence. Reading the headlines, one would hope that the rapid gains in A.I. technology have also brought forth a unifying realization of the risks and the steps we need to take to mitigate them.

The reality, unfortunately, is quite different. Beneath almost all of the testimony, the manifestoes, the blog posts and the public declarations issued about A.I. are battles among deeply divided factions. Some are concerned about far-future risks that sound like science fiction. Some are genuinely alarmed by the practical problems that chatbots and deepfake video generators are creating right now. Some are motivated by potential business revenue, others by national security concerns.

The result is a cacophony of coded language, contradictory views and provocative policy demands that are undermining our ability to grapple with a technology destined to drive the future of politics, our economy and even our daily lives.

These factions are in dialogue not only with the public but also with one another. Sometimes, they trade letters, opinion essays or social threads outlining their positions and attacking others in public view. More often, they tout their viewpoints without acknowledging alternatives, leaving the impression that their enlightened perspective is the inevitable lens through which to view A.I. But if lawmakers and the public fail to recognize the subtext of their arguments, they risk missing the real consequences of our possible regulatory and cultural paths forward.

To understand the fight and the impact it may have on our shared future, look past the immediate claims and actions of the players to the greater implications of their points of view. When you do, you'll realize this isn't really a debate only about A.I. It's also a contest about control and power, about how resources should be distributed and who should be held accountable.

Beneath this roiling discord is a true fight over the future of society. Should we focus on avoiding the dystopia of mass unemployment, a world where China is the dominant superpower or a society where the worst prejudices of humanity are embodied in opaque algorithms that control our lives? Should we listen to wealthy futurists who discount the importance of climate change because they're already thinking ahead to colonies on Mars? It is critical that we begin to recognize the ideologies driving what we are being told. Resolving the fracas requires us to see through the specter of A.I. to stay true to the humanity of our values.

One way to decode the motives behind the various declarations is through their language. Because language itself is part of their battleground, the different A.I. camps tend not to use the same words to describe their positions. One faction describes the dangers posed by A.I. through the framework of safety, another through ethics or integrity, yet another through security and others through economics. By decoding who is speaking and how A.I. is being described, we can explore where these groups differ and what drives their views.

The loudest perspective is a frightening, dystopian vision in which A.I. poses an existential risk to humankind, capable of wiping out all life on Earth. A.I., in this vision, emerges as a godlike, superintelligent, ungovernable entity capable of controlling everything. A.I. could destroy humanity or pose a risk on par with nukes. If we're not careful, it could kill everyone or enslave humanity. It's likened to monsters like the Lovecraftian shoggoths, artificial servants that rebelled against their creators, or paper clip maximizers that consume all of Earth's resources in a single-minded pursuit of their programmed goal. It sounds like science fiction, but these people are serious, and they mean the words they use.

These are the A.I. safety people, and their ranks include the "Godfathers of A.I.," Geoff Hinton and Yoshua Bengio. For many years, these leading lights battled critics who doubted that a computer could ever mimic the capabilities of the human mind. Having steamrollered the public conversation by creating large language models like ChatGPT and other A.I. tools capable of increasingly impressive feats, they appear deeply invested in the idea that there is no limit to what their creations will be able to accomplish.

This doomsaying is boosted by a class of tech elite that has enormous power to shape the conversation. And some in this group are animated by the radical effective altruism movement and the associated cause of longtermism, which tend to focus on the most extreme catastrophic risks and emphasize the far-future consequences of our actions. These philosophies are hot among the cryptocurrency crowd, like the disgraced former billionaire Sam Bankman-Fried, who at one time possessed sudden wealth in search of a cause.

Reasonable-sounding on their face, these ideas can become dangerous if stretched to their logical extremes. A dogmatic longtermist would willingly sacrifice the well-being of people today to stave off a prophesied extinction event like A.I. enslavement.

Many doomsayers say they are acting rationally, but their hype about hypothetical existential risks amounts to making a misguided bet with our future. In the name of longtermism, Elon Musk reportedly believes that our society needs to encourage reproduction among those with the greatest culture and intelligence (namely, his ultrarich buddies). And he wants to go further, such as limiting the right to vote to parents and even populating Mars. It's widely believed that Jaan Tallinn, the wealthy longtermist who co-founded the most prominent centers for the study of A.I. safety, has made dismissive noises about climate change because he thinks that it pales in comparison with far-future unknown unknowns like risks from A.I. The technology historian David C. Brock calls these fears "wishful worries," that is, problems that it would be nice to have, in contrast to the actual agonies of the present.

More practically, many of the researchers in this group are proceeding full steam ahead in developing A.I., demonstrating how unrealistic it is to simply hit pause on technological development. But the roboticist Rodney Brooks has pointed out that we will see the existential risks coming; the dangers will not be sudden, and we will have time to change course. While we shouldn't dismiss the Hollywood nightmare scenarios out of hand, we must balance them with the potential benefits of A.I. and, most important, not allow them to strategically distract from more immediate concerns. Let's not let apocalyptic prognostications overwhelm us and smother the momentum we need to develop critical guardrails.

While the doomsayer faction focuses on the far-off future, its most prominent opponents are focused on the here and now. We agree with this group that there's plenty already happening to cause concern: Racist policing and legal systems that disproportionately arrest and punish people of color. Sexist labor systems that rate feminine-coded résumés lower. Superpower nations automating military interventions as tools of imperialism and, someday, killer robots.

The alternative to the end-of-the-world, existential risk narrative is a distressingly familiar vision of dystopia: a society in which humanitys worst instincts are encoded into and enforced by machines. The doomsayers think A.I. enslavement looks like the Matrix; the reformers point to modern-day contractors doing traumatic work at low pay for OpenAI in Kenya.

Propagators of these A.I. ethics concerns, like Meredith Broussard, Safiya Umoja Noble, Rumman Chowdhury and Cathy O'Neil, have been raising the alarm on inequities coded into A.I. for years. Although we don't have a census, it's noticeable that many leaders in this cohort are people of color, women and people who identify as L.G.B.T.Q. They are often motivated by insight into what it feels like to be on the wrong end of algorithmic oppression and by a connection to the communities most vulnerable to the misuse of new technology. Many in this group take an explicitly social perspective: When Joy Buolamwini founded an organization to fight for equitable A.I., she called it the Algorithmic Justice League. Ruha Benjamin called her organization the Ida B. Wells Just Data Lab.

Others frame efforts to reform A.I. in terms of integrity, calling for Big Tech to adhere to an oath to consider the benefit of the broader public alongside or even above their self-interest. They point to social media companies' failure to control hate speech or how online misinformation can undermine democratic elections. Adding urgency for this group is that the very companies driving the A.I. revolution have, at times, been eliminating safeguards. A signal moment came when Timnit Gebru, a co-leader of Google's A.I. ethics team, was dismissed for pointing out the risks of developing ever-larger A.I. language models.

While doomsayers and reformers share the concern that A.I. must align with human interests, reformers tend to push back hard against the doomsayers' focus on the distant future. They want to wrestle the attention of regulators and advocates back toward present-day harms that are exacerbated by A.I.: misinformation, surveillance and inequity. Integrity experts call for the development of responsible A.I., for civic education to ensure A.I. literacy and for keeping humans front and center in A.I. systems.

This group's concerns are well documented and urgent, and far older than modern A.I. technologies. Surely, we are a civilization big enough to tackle more than one problem at a time; even those worried that A.I. might kill us in the future should still demand that it not profile and exploit us in the present.

Other groups of prognosticators cast the rise of A.I. through the language of competitiveness and national security. One version has a post-9/11 ring to it: a world where terrorists, criminals and psychopaths have unfettered access to technologies of mass destruction. Another version is a Cold War narrative of the United States losing an A.I. arms race with China and its surveillance-rich society.

Some arguing from this perspective are acting on genuine national security concerns, and others have a simple motivation: money. These perspectives serve the interests of American tech tycoons as well as the government agencies and defense contractors they are intertwined with.

OpenAI's Sam Altman and Meta's Mark Zuckerberg, both of whom lead dominant A.I. companies, are pushing for A.I. regulations that they say will protect us from criminals and terrorists. Such regulations would be expensive to comply with and are likely to preserve the market position of leading A.I. companies while restricting competition from start-ups. In the lobbying battles over Europe's trailblazing A.I. regulatory framework, U.S. megacompanies pleaded to exempt their general-purpose A.I. from the tightest regulations, and whether and how to apply high-risk compliance expectations to noncorporate open-source models emerged as a key point of debate. All the while, some of the moguls investing in upstart companies are fighting the regulatory tide. The Inflection AI co-founder Reid Hoffman argued, "The answer to our challenges is not to slow down technology but to accelerate it."

Any technology critical to national defense usually has an easier time avoiding oversight, regulation and limitations on profit. Any readiness gap in our military demands urgent budget increases and funds distributed to the military branches and their contractors, because we may soon be called upon to fight. Tech moguls like Google's former chief executive Eric Schmidt, who has the ear of many lawmakers, signal to American policymakers about the Chinese threat even as they invest in U.S. national security concerns.

The warriors' narrative seems to miss that science and engineering are different from what they were during the mid-20th century. A.I. research is fundamentally international; no one country will win a monopoly. And while national security is important to consider, we must also be mindful of the self-interest of those positioned to benefit financially.

As the science-fiction author Ted Chiang has said, fears about the existential risks of A.I. are really fears about the threat of uncontrolled capitalism, and dystopias like the paper clip maximizer are just caricatures of every start-up's business plan. Cosma Shalizi and Henry Farrell further argue that we've lived among shoggoths for centuries, tending to them as though they were our masters, as monopolistic platforms devour and exploit the totality of humanity's labor and ingenuity for their own interests. This dread applies as much to our future with A.I. as it does to our past and present with corporations.

Regulatory solutions do not need to reinvent the wheel. Instead, we need to double down on the rules that we know limit corporate power. We need to get more serious about establishing good and effective governance on all the issues we lost track of while we were becoming obsessed with A.I., China and the fights picked among robber barons.

By analogy to the health care sector, we need an A.I. "public option" to truly keep A.I. companies in check. A publicly directed A.I. development project would serve as a counterbalance to for-profit corporate A.I. and help ensure an even playing field for access to the 21st century's key technology, while offering a platform for the ethical development and use of A.I.

Also, we should embrace the humanity behind A.I. We can hold founders and corporations accountable by mandating greater A.I. transparency in the development stage, in addition to applying legal standards for actions associated with A.I. Remarkably, this is something that both the left and the right can agree on.

Ultimately, we need to make sure the network of laws and regulations that govern our collective behavior is knit more strongly, with fewer gaps and greater ability to hold the powerful accountable, particularly in those areas most sensitive to our democracy and environment. As those with power and privilege seem poised to harness A.I. to accumulate much more or pursue extreme ideologies, let's think about how we can constrain their influence in the public square rather than cede our attention to their most bombastic nightmare visions for the future.

The UK AI Summit: Time to Elevate Democratic Values – Council on Foreign Relations

Not long ago, the United States and the United Kingdom were leading the effort to establish global norms for the governance of artificial intelligence (AI). Both nations backed the Organization for Economic Cooperation and Development (OECD) AI Principles of 2019, the first global AI policy framework, and the Global Partnership on AI that followed. But efforts slowed as the European Union took the lead on regulatory efforts with the EU Artificial Intelligence Act, now heading toward the finish line as final negotiations on the Act wrap up later this year.

Now Prime Minister Sunak is hosting a Global Summit on AI Safety from November 1 to 2, following meetings with tech leaders in the UK and a meeting with President Biden in Washington, D.C. Speaking with reporters after the event, Sunak described the United States and the UK as "the world's foremost AI democratic powers." He emphasized the shared values of freedom, democracy, and rule of law.

The UK AI Summit now provides an opportunity for the United States and the UK to align on policy, move beyond the techno-libertarianism that characterized the early days of AI policymaking in both countries, and begin to develop solutions to the challenges of AI. But obstacles remain. At the Summit and in future discussions on the role of AI, the UK should work to include civil society, integrate an AI fairness agenda into talks, and ensure that human rights and democratic values are central to any proposed international regulation.

First, the Global AI Summit must be inclusive. Prime Minister Sunak is already under criticism for a preliminary announcement that included statements from only tech CEOs and a plan that appears to sideline academics and civil society. While it is true that the tech CEOs had a White House meeting with the President, the Biden administration also quickly reached out to civil society organizations and labor leaders for input and advice on AI. Senate Majority Leader Chuck Schumer (D-NY) has already held an inaugural AI Insight Forum to gather expert input, albeit behind closed doors, on his proposed SAFE Innovation Act, including the perspectives of labor leaders and civil rights leaders, as well as the insights of practitioners and researchers focused on bias mitigation. Prime Minister Sunak would do well to follow the American lead on civil society participation and ensure that the AI Safety Summit fairly reflects those impacted by AI systems, including marginalized communities.

Second, the AI safety agenda should not ignore the AI fairness agenda. Prime Minister Sunak is right to underscore the need for an international framework to ensure the safe and reliable development of AI. President Biden has also said that companies should not deploy AI systems that are not safe. Mitigating risk is a top priority, but so too is ensuring that AI systems treat people fairly, that systems are accountable, that adverse decisions are contestable, and that transparency is meaningful. In the rush to address existential risk there is the danger that the existing impact of AI on decisions in housing, credit, employment, education, and criminal justice will be ignored. The Prime Minister can address these concerns by including such topics as algorithmic bias, equity, and accountability in the meeting agenda.

Third, human rights and democratic values should remain key pillars of the UK AI Summit. There are many AI policy challenges ahead and several of the solutions do not favor democratic outcomes. For example, countries emphasizing safety and security are also establishing ID requirements for users of AI systems. And the desire to identify users and build massive new troves of personal data is not limited to governments. Several of the tech CEOs, including OpenAI's Sam Altman and former Google head Eric Schmidt, also favor identity requirements for AI users, even as they argue against regulation of their own AI services. Altman is a co-founder of a company that is seeking to establish a global biometric database based on eye scans. Requiring biometric data from users of AI while leaving AI systems unregulated is an outcome that democratic states should avoid.

Countries that value human dignity and autonomy should choose instead technical solutions that are less data intensive. The United States and the UK have already launched important work on Privacy Enhancing Technology that could minimize or eliminate the collection of personal data. That work should be encouraged and user identification requirements should be dropped. Strong data protection safeguards will help ensure that AI innovation does not undermine privacy.

The UK government also needs to establish prohibitions and controls on AI systems that violate fundamental rights. The UK has already endorsed the United Nations Educational, Scientific and Cultural Organization (UNESCO) Recommendation on AI Ethics, which proposes a ban on the use of AI for social scoring and mass surveillance. UK domestic law should implement these recommendations as well as other proposed limitations, including restrictions on biometric categorization, predictive policing, and emotion detection. Drawing these red lines will be critical to ensure that AI systems are both human-centric and trustworthy, key goals set out in the OECD AI Principles and previously endorsed by the United States and the UK.

The renewed commitment to a regulatory framework for the governance of AI is welcome, especially as both the European Union and China pursue regulation for AI. As we have warned previously, the UK's light-touch strategy for AI was unlikely to establish the necessary guardrails for the safe deployment of artificial intelligence. President Biden has already warned tech firms that they should not deploy AI systems that are not safe, and the U.S. Congress is now considering several bills, including the Blumenthal-Hawley U.S. AI Act, to govern AI services. Again, the UK would be wise to follow the United States' lead and build in the necessary guardrails for AI products and services. For the world's foremost AI democratic powers, the summit is an excellent moment to align national AI policies with democratic values.

Merve Hickok is President of the Center for AI and Digital Policy (CAIDP). Marc Rotenberg is Executive Director of CAIDP and a CFR Life Member. CAIDP publishes the AI and Democratic Values Index annually.

Higher ed AI anxiety? An advisory board could help – Inside Higher Ed

Experts say AI advisory boards could help universities looking to navigate the technology.

Photo illustration by Justin Morrison/Inside Higher Ed | Getty Images

When it comes to artificial intelligence and higher ed, the excitement and hype are matched by the uncertainties and need for guidance. One solution: creating an AI advisory board that brings together students, faculty and staff for open conversations about the new technology.

That was a key idea presented at the University of Central Florida's inaugural Teaching and Learning With AI conference, a two-day event that drew more than 500 educators from around the country.

"AI has had a breakout year," said Ray Schroeder, a senior fellow of the University Professional and Continuing Education Association (and a contributor to Inside Higher Ed). Schroeder, who has recently focused on the intersection of AI and higher education, opened the conference seeking to help faculty, administrators and staff attempt to navigate the choppy waters of AI.

"We cannot afford to ignore it," he said. "The intent is to make clear: What is the intention of the university? How are they going to move with this technology?"

Schroeder and other experts interviewed said universities need a formal mechanism for getting advice on how to proceed.

"Artificial intelligence is a technology that impacts nearly every aspect of higher education institutions: recruiting, admissions, financial aid, student support services, teaching and learning, assessment, operations, and more," said Kristina Ishmael, deputy director of the Education Department's Office of Educational Technology.

Ishmael said in an email to Inside Higher Ed that the department's top recommendation about AI is to "emphasize humans in the loop." Institutions that choose to create an AI advisory board, or a similar group, would be implementing this recommendation.

Many universities are already pursuing that advice. The University of Louisville had its first AI advisory board meeting last week. Stanford University and Vanderbilt University formed boards earlier this year after investing millions in AI research on campus.

Northeastern University created an external AI board, co-chaired by two faculty members and joined by industry heavyweights including Honeywell and the Mayo Clinic.

The University of Michigan unveiled its 18-member advisory board in May, a group tasked with creating a report centered on best practices for generative AI.

The Michigan board was the brainchild of Ravi Pendse, the university's vice president for information technology, who chatted with fellow faculty members about AI at the start of the year.

"I said, 'We need to get a faculty group together to provide general guidance to the campus,'" said Pendse, who also serves as the university's chief information officer. "We need to make sure we consider this technology, frankly, with our eyes wide open and feet on the ground, so we embrace it, but do so thoughtfully."

There is no perfect blueprint to building an AI advisory board, and the approach will vary for each college or university. However, the experts interviewed noted important factors to make it work.

An AI advisory board may not work for each institution, and there are other approaches. Schroeder suggested a workshop for faculty could suffice. Pendse pointed toward having a lunch-and-learn series and said that the key, whatever the format, is to encourage discussion.

"These discussions are already happening," Pendse said. "We want to know what this thing is, to use it, to leverage it, to debate it. And the way you do that is providing safe spaces where this debate can happen and engage with each other."

For the institutions forming boards, however, a board can serve a dual purpose, according to experts. There's the value it delivers to students, which, Schroeder said, can help prepare them for the changing workforce.

"For students, I think it's a matter of tapping their expectations," he said. "And I think their expectations are driven in part, perhaps in large part, by the expectations of employers."

Those students, once prepared, can boost discussion and ultimately help with AI research in the future, creating a flywheel effect.

"We can't just sit idly in this country; other countries are investing, so we need to be flying, not running or walking," Pendse said. "And the only way we can is with institutions contributing to the AI talent that will create policy makers, people who can debate the pros and cons. That's how we can compete in the world."

IU Luddy School partners with CODE19 Racing to develop … – IU Newsroom

BLOOMINGTON, Ind. Indiana University will partner with the world's first professional autonomous racing franchise, CODE19 Racing Inc., to participate in global competitions of self-driving race cars, an emerging sector in the world of racing.

IU associate professor Lantao Liu, IU Luddy School Dean Joanna Millunchick and CODE19 Racing co-founders Lawrence Walter and Oliver Wells, front row from left, with IU Luddy School graduate students involved in the AI driver project. Photo by Chris Kowalczyk, Indiana University

IU and CODE19 kicked off the partnership Sept. 26 to 28 during a workshop and starting grid event with graduate students and faculty at the IU Luddy School of Informatics, Computing and Engineering, who will create the first AI driver for CODE19.

"We are thrilled to partner with CODE19 Racing," said Joanna Millunchick, dean of the IU Luddy School of Informatics, Computing and Engineering. "This is an unmatched opportunity for our students to apply their skills to real-world problems and to compete against the best AI teams in the world. AI racing truly has the potential to advance autonomous vehicle technology in the same way that motorsports technology has advanced the consumer automotive industry."

A member of the IU Ventures Founders and Funders Network, CODE19 has a mission of accelerating the development of autonomous driving technology by developing and racing autonomous race cars at the highest level.

"As an IU alumni-founded startup, we are excited to partner with the IU Luddy School to develop the next generation of advanced AI for autonomous race cars," said Lawrence Walter, president of CODE19 Racing. "The Luddy School is home to some of the brightest minds in artificial intelligence, and we are excited to help IU develop a world-class AI driver."

With the Luddy School at the controls, CODE19 will race in global competitions for autonomous race cars. These experimental competitions help accelerate the development of autonomous driving technology by providing a challenging and competitive environment for teams to test their AI drivers.

Lawrence Walter discusses an early demonstration of an AI race car driver in a simulated video environment during the Starting Grid event at Luddy Hall at IU Bloomington. Photo by Chris Kowalczyk, Indiana University

The Luddy School is one of the largest and most comprehensive schools of its kind in the world. The CODE19 Racing AI driver will be developed by the school's Vehicle Autonomy and Intelligence Lab, a state-of-the-art robotics research team led by Lantao Liu, a Luddy School associate professor of intelligent systems engineering.

An expert on robotics and artificial intelligence, Liu focuses on developing autonomous systems involving single or multiple robots with applications in autonomous navigation, smart transportation, and search and rescue. His lab specifically focuses on enhancing the autonomy and intelligence of robotic systems such as unmanned ground, aerial and aquatic vehicles.

The team will also have access to the deep pool of talent in informatics, computer science and engineering at the Luddy School in Indianapolis. Indianapolis-based advisors include Zebulun Wood, a lecturer in media arts and science, who will help develop the strategy for an interactive avatar for the AI driver, putting a face on the technology and interacting with the world.

In addition to the Luddy School's resources, the project will benefit from IU's partnership with Naval Surface Warfare Center, Crane Division, which will support a Luddy Ph.D. student to advance development of the team's AI driver.

IU Luddy faculty members Lantao Liu, right, and Zeb Wood, center, speak with Lawrence Walter, founder and CEO of CODE19 Racing, before the partnership announcement at the Rally Conference in Indianapolis in August. Photo by Justin Casterline, Indiana University

A naval research and development laboratory in Crane, Indiana, NSWC Crane is responsible for developing and testing naval technologies, including autonomous systems.

"NSWC Crane is committed to advancing state-of-the-art autonomous systems," said Charles Colglazier, NSWC Crane liaison for IU and the National Security Innovation Network. "This innovative initiative with CODE19 Racing and IU will help rapidly develop and test new dual-use autonomous technologies that could be used to support the warfighters in the field. We're excited to see how fast the Hoosiers' AI driver can go."

Support from the partnership will help attract top talent to the team and accelerate the development of the AI driver, Walter said.

"We are confident that our team has the skills and resources to manage winning AI drivers," Walter said. "We are focused on competing globally and showing the world what IU can do on the race track."

AI predicts how many earthquake aftershocks will strike and their … – Nature.com

A powerful earthquake on 24 August 2016 killed hundreds of people in Amatrice, Italy (pictured) and was followed by destructive aftershocks. New machine learning models hold potential for predicting the number of quake aftershocks.Credit: Stefano Montesi/Corbis via Getty

Seismologists are finally gaining traction on one of their most tantalizing but challenging goals: using machine learning to improve earthquake forecasts.

Three new papers describe deep-learning models that perform better than a conventional state-of-the-art model for forecasting earthquakes [1–3]. The findings are preliminary and apply only to limited situations, such as in assessing the risk of aftershocks after a big one has already hit. But they are a rare advance towards the long-sought goal of harnessing the power of machine learning to reduce seismic risk.

"I'm really excited that this is finally happening," says Morgan Page, a seismologist at the US Geological Survey (USGS) in Pasadena, California, who was not involved with the studies.

Here's what earthquake forecasts are not: predictions of an event of a particular magnitude happening in a particular location at a particular time, the "next Tuesday at 3 p.m." scenario. The notion that scientists can make such highly specific predictions has been discredited. Instead, statistical analyses are helping seismologists understand broader trends, such as how many aftershocks might be expected in the days to weeks after a large earthquake. Agencies such as the USGS issue aftershock forecasts to warn people in quake-ravaged areas of what else might be coming.

At first glance, earthquake forecasts seem to be an obvious application to try to improve using deep learning [4]. The techniques do well when they ingest and synthesize large amounts of data and use them to predict the next steps in a pattern. And seismology is rich with data from catalogues of earthquakes that occur worldwide. Just as a large language model can train itself on millions of words to predict what word might come next, an earthquake-forecasting model should be able to train itself on earthquake catalogues to forecast the chances of a quake following one that has already occurred.

But researchers have struggled to extract meaningful trends from all the quake data [5]. Big earthquakes are rare, and working out what to worry about isn't easy.

In the past several years, however, seismologists have used machine learning to uncover small earthquakes that had not been spotted before in seismic records. These quakes have bulked up the existing earthquake catalogues, and provide fresh fodder for a second round of machine-learning analysis.

Current USGS forecasts use a model that relies on basic information about past earthquake magnitudes and locations to predict what might happen next. The three latest papers instead use a neural-network approach, which updates calculations during each step of the analysis to better capture the complex patterns of how earthquakes occur.
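
To make the contrast concrete, here is a minimal sketch of what a purely statistical baseline can look like. It assumes a simplified modified Omori law for aftershock decay with made-up parameter values; it is an illustration only, not the USGS model, which combines magnitude and location statistics in a far more sophisticated way.

```python
import numpy as np

def omori_rate(t_days, K=100.0, c=0.05, p=1.1):
    """Modified Omori law: aftershock rate (events per day) t days after a mainshock.

    K, c and p are region-specific parameters normally fitted to past sequences;
    the values used here are purely illustrative.
    """
    return K / (t_days + c) ** p

def expected_count(t_start, t_end, n_steps=100_000):
    """Numerically integrate the rate to get the expected number of aftershocks
    in the window [t_start, t_end], measured in days after the mainshock."""
    t = np.linspace(t_start, t_end, n_steps)
    dt = t[1] - t[0]
    return float(np.sum(omori_rate(t)) * dt)

# Expected aftershocks in the first two weeks versus the following two weeks.
print(f"days 0-14 : {expected_count(0.01, 14.0):6.0f} expected events")
print(f"days 14-28: {expected_count(14.0, 28.0):6.0f} expected events")
```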

In the first [1], geophysicist Kelian Dascher-Cousineau at the University of California, Berkeley, and his colleagues tested their model on a catalogue of thousands of quakes that struck southern California between 2008 and 2021. Their model performed better than the standard one at forecasting how many quakes would occur in rolling two-week periods. It was also better at capturing the full magnitude range of possible earthquakes, thus reducing the chance of a surprise big one.
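
For readers wondering what a neural-network approach to two-week count forecasting might look like in code, the sketch below trains a tiny multilayer perceptron with a Poisson loss on synthetic catalogue features. The features, architecture and data are assumptions for illustration only; the published models are considerably more sophisticated and are trained on real earthquake catalogues.

```python
import torch
from torch import nn

torch.manual_seed(0)

# Synthetic "catalogue" features: [events in the past 14 days, largest magnitude in that window].
# Real studies derive far richer features (or ingest full event sequences) from earthquake catalogues.
n = 2000
past_count = torch.randint(0, 200, (n, 1)).float()
max_mag = 3.0 + 4.0 * torch.rand(n, 1)
X = torch.cat([past_count, max_mag], dim=1)

# Synthetic target: the next two-week count, drawn from a Poisson distribution whose rate
# grows with recent activity and magnitude (a stand-in for real labels from a catalogue).
true_rate = 0.3 * past_count.squeeze(1) * torch.exp(0.5 * (max_mag.squeeze(1) - 3.0))
y = torch.poisson(true_rate)

# Standardize the features so the tiny network trains stably.
mu, sigma = X.mean(dim=0), X.std(dim=0)
Xn = (X - mu) / sigma

# Tiny MLP that outputs the log of the expected count (a log-rate).
model = nn.Sequential(nn.Linear(2, 16), nn.ReLU(), nn.Linear(16, 1))
loss_fn = nn.PoissonNLLLoss(log_input=True)  # interprets predictions as log-rates
opt = torch.optim.Adam(model.parameters(), lr=1e-2)

for step in range(2000):
    opt.zero_grad()
    loss = loss_fn(model(Xn).squeeze(1), y)
    loss.backward()
    opt.step()

# Forecast the expected aftershock count for a hypothetical sequence with
# 80 recent events and a magnitude-6.5 mainshock.
with torch.no_grad():
    query = (torch.tensor([[80.0, 6.5]]) - mu) / sigma
    print(f"forecast for the next 14 days: about {torch.exp(model(query)).item():.0f} events")
```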

At the University of Bristol, UK, applied statistician Samuel Stockman developed a similar method that performed well when trained [2] on a catalogue of earthquakes that shook central Italy in 2016–17, damaging several towns. When researchers lower the magnitude of quakes included in the training set, the machine-learning model starts to perform better, Stockman says.

Rubble piles still stood in Castro, Italy, almost a year after the village was damaged by the same earthquake that levelled Amatrice.Credit: Amelia Hennighausen/Nature

And at Tel Aviv University in Israel, physicist Yohai Bar-Sinai led a team that developed a third neural-network model [3]. When tested against 30 years of quake data from Japan, it, too, did better than the standard model. The work might provide insight into fundamental quake physics, Bar-Sinai says. "There is hope that we will understand more about the underlying mechanisms, about what causes earthquakes to start, what determines their magnitude."

All three models are moderately promising, says Leila Mizrahi, a seismologist at the Swiss Federal Institute of Technology (ETH) in Zurich. They aren't breakthroughs in their current form, she says, but they show potential for bringing machine-learning techniques into quake forecasting on an everyday basis.

"It's certainly no silver bullet," adds Maximilian Werner, a seismologist at the University of Bristol who works with Stockman. But, he says, machine learning will gradually become part of official earthquake forecasting over the coming years, because it is so well suited to working with the huge earthquake data sets that are becoming more common.

Agencies such as the USGS will probably start to use machine-learning models alongside their standard one, and then transition entirely to the machine-learning approach if it proves to be superior, Page says. That could improve forecasts when aftershocks are rumbling unpredictably and disrupting peoples lives for months, as happened in Italy. The models could also be used to improve forecasts after large rare earthquakes, including the magnitude-6.8 earthquake that hit Morocco in September, killing thousands.

Still, Dascher-Cousineau warns people not to rely on these fancy new models too much. "At the end of the day, preparing for quakes is the most important," he says. "We don't get to stop making sure our buildings are up to code, we don't get to not have our earthquake kits, [just] because we have a better earthquake-forecasting model."
