Archive for the ‘Artificial Super Intelligence’ Category

Will AI's Next Wave of Super Intelligence Replace Human Ingenuity? It's Complicated – Grit Daily

OpenAI, the GenAI poster child that unleashed ChatGPT, has its sights set on inventing the world's first general AI that can outperform human intelligence. While that is nearly impossible, next-gen AI is more likely to pave the way to new paths by doing what no human can do.

Since the 1960s, there has been a growing distrust of AI, as evidenced by HAL 9000 going insane in the Stanley Kubrick masterpiece 2001: A Space Odyssey, among countless other pop-culture imaginings. In fact, it even spawned a new sector of academic study called social robotics, anticipating a new age of human-like robots (think: The Jetsons cartoon) that serve as tight-knit members of the family.

More recently, with rapid AI advances going mainstream, the fear and distrust of AI has hit closer to home: from the actors' and writers' strikes in Hollywood, to concerns about what the future of IP law looks like, to a growing number of lawsuits pitting the rights of humans against algorithms across a broadening swath of industry sectors.

Adding more fuel to the fire is the ultimate Holy Grail being sought by a new wave of AI pioneers: the design and eventual arrival of artificial general intelligence (AGI). For example, OpenAI's charter is to develop and commercialize AGI that can outperform humans at most economically valuable work, and to do so in a way that benefits all of humanity.

The academic and industry definitions of AGI vary, but most revolve around an eventual evolution to highly autonomous systems that can outperform humans at nearly any task thanks to their super intelligence. As for AGI's ETA, estimates range from decades to more than a century away. Some AI specialists believe that achieving absolute AGI is not possible at all.

For many of us contemplating AI's evolution and its ultimate impact on how we live and earn our livings, the idea of an unpleasant hypothetical future in which our tech grows out of control and transforms our reality, dubbed "The Singularity," is setting off our Spidey Sense alarms these days. For others, there's a brighter future where humans and machines can co-exist and prosper together.

If you ask ChatGPT to write you a poem today, it will likely be a mediocre one: perhaps a technically correct haiku, if that was in the prompt, or one that rhymes in the right spots, but it is unlikely to cause an emotional stir the way one from a human can.

This is because ChatGPT is only using a small sliver of what we, the most super-intelligent creatures on Earth, are programmed to do. The brilliant trick performed by Large Language Models is the speed at which they can access a massive trove of human communications and model it into the right format for the ask, in only seconds, much faster than most of us.

But is it possible for GenAI tools to write true literature, poetry, music, comedy, or screenplays that create something that's truly novel, moving, or awe-inspiring? That is a far more ambitious goal, and one that is exceedingly more difficult.

Yes, GenAI blended with human creativity and prompting can produce some magnificent magic, and rapidly: from spectacular special effects for streaming TV giants to AI-powered art and design, or various plot lines for franchise books and movies. But there is a massive capability gap between that and having AI outperform Beyoncé or Taylor Swift and their teams' ability to produce chart-busting, award-winning albums, or to compose a great opera, paint the Mona Lisa, or write the next Catcher in the Rye for Generation Z.

That's because today's AI capabilities are narrow, or at best broadly general, but not on par with OpenAI's AGI vision. It is not super-intelligent, nor wired to be creative enough to produce something that is novel and interesting. And to push the concept further, the real question is: what is interesting? Something is not necessarily interesting just because it has never existed before.

AGI will arrive when machines can beat the best of humans in every task: for example, outperforming Lady Gaga and Elton John in the music category in front of an international audience during a Grammys or Oscars broadcast.


While commercialization goals for AGI vary, the breakthrough being pursued by major tech players such as OpenAI is defined as the moment machines can beat the best of humans in every task: write poetry better than Maya Angelou, a speech better than Dr. Martin Luther King Jr., a movie script as provocative as Spike Lee's, or a dissent as moving as one from the Notorious RBG.

Achieving this during our lifetimes is improbable. However, different programming techniques can help: for example, a Creative Adversarial Network (CAN) reduces the pressure to create something conforming, something that follows typical conventions.

Over time, AI can be taught to create art, designs, literature, or music that is interesting by breaking rules, within a framework that gives the AI more freedom. One route is subverting the main rules about intervals in music; another is using more old-fashioned deep-learning algorithms, as my colleagues at Sony Computer Science Lab did for "Daddy's Car," a Beatles-like song released back in 2016 that was the first to be composed by AI.
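To make the CAN idea concrete, here is a minimal sketch in Python/PyTorch of the training signal described in the original CAN paper (Elgammal et al., 2017): the generator is rewarded both for fooling the discriminator's real/fake judgment and for producing work whose style the discriminator cannot confidently place in any known category. Every layer size, class name and data shape below is an illustrative assumption, not the architecture of any system mentioned in this article.

```python
# Minimal sketch of a Creative Adversarial Network (CAN) generator update.
# Toy fully-connected nets on 128-dim "artworks"; the discriminator's own
# training loop is omitted for brevity.
import torch
import torch.nn as nn
import torch.nn.functional as F

N_STYLES = 10   # number of known style classes (illustrative assumption)
LATENT = 64     # generator input noise size

class Discriminator(nn.Module):
    def __init__(self):
        super().__init__()
        self.body = nn.Sequential(nn.Linear(128, 256), nn.ReLU())
        self.real_fake = nn.Linear(256, 1)       # head 1: is the sample real?
        self.style = nn.Linear(256, N_STYLES)    # head 2: which known style?

    def forward(self, x):
        h = self.body(x)
        return self.real_fake(h), self.style(h)

generator = nn.Sequential(nn.Linear(LATENT, 256), nn.ReLU(), nn.Linear(256, 128))
disc = Discriminator()
opt_g = torch.optim.Adam(generator.parameters(), lr=1e-4)

def generator_step(batch_size=32):
    z = torch.randn(batch_size, LATENT)
    fake = generator(z)
    rf_logit, style_logits = disc(fake)
    # Ordinary GAN term: the output should look like real art.
    adv = F.binary_cross_entropy_with_logits(rf_logit, torch.ones_like(rf_logit))
    # CAN's extra term: push the style head toward maximum uncertainty,
    # i.e. the output should not fit neatly into any existing style.
    uniform = torch.full((batch_size, N_STYLES), 1.0 / N_STYLES)
    ambiguity = F.cross_entropy(style_logits, uniform)
    loss = adv + ambiguity
    opt_g.zero_grad()
    loss.backward()
    opt_g.step()
    return loss.item()

print(generator_step())  # one training step on random noise
```

The key design point is the second loss term: a plain GAN is pulled toward the training distribution, while the style-ambiguity penalty pulls the generator away from well-worn categories, which is one formalization of "novel but still art-like."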

The bigger opportunity for AI in the future is not in doing everything better than humans can.

The real treasure for next-wave AI innovators to focus on is using advanced AI systems to do amazing things humans cannot. For example, AI-powered devices can make observations and new associations most humans don't notice, such as pupils growing wider or heartbeats quickening in response to different imagery on a screen. And they can mathematically perceive many different spatial dimensions and reams of information that we cannot.

If we, as a global tech community, focus on driving this practical intelligence by designing and tapping into the advanced, non-human capabilities of AI, that could be truly transformative.

Giordano Cabral is a professor at UFPE and the Chairman of the Board at CESAR Innovation Center and School, located in Recife, Brazil's Porto Digital. He is a specialist in artificial intelligence, computational creativity, gamification, and audio and music technology, and is currently studying these topics as a visiting scholar at Stanford University.

Read the original here:

Will AI's Next Wave of Super Intelligence Replace Human Ingenuity? It's Complicated - Grit Daily

New Novel Skillfully Weaves Artificial Intelligence, Martial Arts and … – Lakenewsonline.com

(NewsUSA) - My name is Tigress and I am immortal. This is my story.

Michael Crichton meets Bruce Lee in THE GIRL FROM WUDANG (Tuttle Publishing), a gripping story for fans of legendary cyberpunk novels and gritty sci-fi thrillers. Author PJ Caldas, an Emmy Award-winning advertising executive and martial artist with 40 years of experience, gives us a cinematic and thought-provoking technothriller shrouded in immortality. Through his unique storytelling prowess, Caldas, named by the Dictionary of Brazilian Literature as one of the most important writers of the twenty-first century, brings the essence of the old kung fu movies that inspired generations into the new world of modern fighting, artificial intelligence, and neuroscience, in a timely story that will stick with you long after you reach the end.

Headstrong, untameable, a beast: Yinyin defies the warnings of her late shifu, her martial arts master, and carries her ferocity from the kung fu school in the mountains of Wudang to the mixed martial arts fighting cages of California. There, surrounded for the first time by Western technology, she ignores voices of reason when offered an implant that could end her crippling headaches. It could end her pain. It could even make her . . . more. All she has to do is allow the doctors to implant tiny, super-intelligent nanobots directly into her brain.

Making her mark as an MMA fighter in California, Yinyin is poised to become part of something big. But what that "big" turns out to be is beyond her imagining when the scientific experiment she participated in makes her unbeatable.

It feels like a dream, but nothing comes without a price. This experimental neuro-connection could give others access to family secrets buried deep within her mind, secrets Yinyin has sworn to protect. Secrets that, in the wrong hands, could be very dangerous.

The key brain tech described in THE GIRL FROM WUDANG is inspired by real research reportedly underway in labs at companies like Google and Elon Musk's Neuralink, and by ideas originally proposed by scientists like Ray Kurzweil, cofounder of Singularity University. According to researchers, connected brains are still 30 years from being a reality, but they are coming.

Fans of the legendary cyberpunk novels and gritty sci-fi thrillers of William Gibson and Stieg Larsson will be captivated by this new techno-thriller, a fast-paced blend of action, neuroscience, spirituality and martial arts.

The book is receiving high accolades:

"An interdisciplinary brewing of ideas and imagination, packed with futuristic brain science tech, martial arts action, and Asian culture," says Professor Paul Li, faculty member and author in Cognitive Science at UC Berkeley.

Monica Rector, professor emeritus of Literature, University of North Carolina at Chapel Hill, calls the book "unpredictable, disorienting and wonderfully absorbing. A meditation on life, consciousness and letting go, in the form of a book you can't stop reading." Eduardo Capeluto, third degree black belt in Brazilian Jiu Jitsu, calls it "a mosaic full of bright colors and vivid details, a voyage into the imaginary that will keep you looking forward to the next page."

In short, THE GIRL FROM WUDANG is one of the year's most addictive new thrillers.

Visit http://www.pjcaldas.com.

Read the original here:

New Novel Skillfully Weaves Artificial Intelligence, Martial Arts and ... - Lakenewsonline.com

Google's artificial intelligence predicts the weather around the globe in just one minute – EL PAÍS USA

The series of squalls that have been punishing Galicia for a month are the result of atmospheric rivers of water vapor, which are key to meteorologists' forecasts.

For years, artificial intelligence has been dethroning its creators, humans, in different areas. Now, it's meteorology's turn. The science is one of the greatest human creations, going back to the Roman augurs, who opened an animal's guts to determine whether the weather favored sowing the field or whether the next morning's conditions would be conducive to waging war. Today's weather predictions are done with very complex models based on the laws governing the dynamics of the atmosphere and the oceans, which are run on some of the world's most powerful supercomputers. Now, using a single machine the size of a personal computer and the artificial intelligence of DeepMind, Alphabet (Google's parent company) can forecast the weather around the world 10 days from now, in just one minute. And in so doing, it outperforms almost all of the most modern weather forecasting systems. But in this case, it seems that artificial intelligence serves to complement human intelligence, not replace it.

The European Centre for Medium-Range Weather Forecasts (ECMWF) has a highly advanced system, and last year it renewed its forecasting muscle. At its facilities in Bologna, Italy, ECMWF operates a supercomputer with about 1 million processors (compared to a personal computer's two or four) and a computing power of 30 petaflops, or 30,000 trillion calculations per second. It requires that many petaflops to allow one of its tools, High Resolution Forecasting (HRES), to do what it does: very accurately predict the weather across the planet in the medium term (usually 10 days), and do so with a spatial resolution of nine kilometers (5.6 miles). That is where many of the weather people across the world get their forecasts. GraphCast, Google DeepMind's artificial intelligence for weather forecasting, has been measured against this Goliath.

The results of the comparison, published Tuesday in the journal Science, show that GraphCast predicts hundreds of weather variables as well as or better than HRES. The researchers show that in 90.3% of the 1,380 metrics considered, Google's machine outperforms the ECMWF machine. If the data referring to the stratosphere (which begins some 6-8 kilometers, or 3.7 to 5 miles, up in the sky) is discarded and the analysis is limited to the troposphere, the atmospheric layer where the weather events closest to us occur, artificial intelligence (AI) outperforms human-supervised supercomputing in 99.7% of the variables analyzed. And that feat has been accomplished using a machine very similar in size to a personal computer; it is called a tensor processing unit, or TPU.


"TPUs are specialized hardware for training and running artificial intelligence software much more efficiently than a normal PC, but similar in size," explains Google DeepMind researcher Álvaro Sánchez González. "In the same way that the computer's graphics card (also known as the GPU) is specialized in rendering images, TPUs are specialized in making matrix products. To train GraphCast, we used 32 of these TPUs over several weeks. However, once trained, each prediction can be made in less than a minute using a single TPU," says Sánchez González, one of the innovation's creators.
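As a toy illustration of that point, the snippet below (JAX, in Python) expresses a dense layer as exactly the kind of matrix product TPUs are built to accelerate; jax.jit compiles the same one-line program for whatever backend is available, CPU, GPU or TPU. The array sizes are arbitrary assumptions and have nothing to do with GraphCast itself.

```python
# A dense layer is just a matrix product, the operation TPU hardware
# (its matrix units) is designed around. Sizes here are arbitrary.
import jax
import jax.numpy as jnp

@jax.jit                       # compiled for whichever backend is present
def dense_layer(x, w):
    return jnp.dot(x, w)       # the matrix product in question

key = jax.random.PRNGKey(0)
x = jax.random.normal(key, (1024, 512))   # a batch of 1024 inputs
w = jax.random.normal(key, (512, 256))    # layer weights
print(dense_layer(x, w).shape)            # (1024, 256)
```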

One of the major differences between GraphCast and current forecasting systems is that the former relies on weather history. Its creators trained it with all the meteorological data stored in the ECMWF archive since 1979. That includes the rainfall in Santiago since then, as well as all the cyclones that have reached Acapulco in 40 years. It took researchers a while to train it, but now GraphCast only needs to know what the weather was six hours ago and what the current conditions are before issuing a new forecast; it takes only a second to determine what the weather will be like in another six hours. And each new prediction feeds into the next one.

DeepMind's Ferran Alet, a co-creator of the machine, explains how it works: "Our neural network predicts the weather six hours in the future. If we want to predict the weather in 24 hours, we simply evaluate the model four times. Another option would have been to train different models, one for 6 hours, one for 24 hours. But we know that the physics 6 hours from now will be the same as it is now. So, we know that if we find the right 6-hour model and give it its own predictions as input, it should predict the weather 12 hours from now, and we can repeat the process every six hours." Doing so gives them much more data for a single model, making it train more efficiently, Alet says.
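Here is a minimal sketch of the autoregressive rollout Alet describes: a single six-hour model applied repeatedly to its own output. The toy six_hour_model, the grid shape and the persistence-plus-trend rule are stand-in assumptions; only the feedback loop reflects the scheme he explains.

```python
# Autoregressive rollout: one 6-hour model, evaluated repeatedly,
# with each prediction fed back in as input for the next step.
import numpy as np

def six_hour_model(prev_state: np.ndarray, curr_state: np.ndarray) -> np.ndarray:
    """Stand-in for a trained network mapping (t-6h, t) -> t+6h."""
    return curr_state + 0.5 * (curr_state - prev_state)  # toy persistence + trend

def rollout(prev_state, curr_state, hours=24):
    """Forecast `hours` ahead by chaining the 6-hour model."""
    states = [prev_state, curr_state]
    for _ in range(hours // 6):          # 24 h -> evaluate the model 4 times
        nxt = six_hour_model(states[-2], states[-1])
        states.append(nxt)               # each output becomes the next input
    return states[2:]

grid = np.zeros((181, 360))              # toy 1-degree lat/lon grid (assumption)
forecasts = rollout(grid, grid + 1.0)
print(len(forecasts), "six-hour steps =", len(forecasts) * 6, "hours ahead")
```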

Until now, forecasts have been based on so-called numerical weather prediction, which uses the physical equations that science has derived throughout history to describe the different processes making up a system as complex as the dynamics of the atmosphere. From those equations, a series of mathematical algorithms is defined, which the supercomputers use to forecast the next hours, days or weeks in mere minutes (there are also longer-term forecasts, but reliability drops dramatically after 15 days). To do all this, the supercomputer must be quite super indeed, and that means an enormous cost and a lot of engineering work. What is perhaps striking is that these systems do not take advantage of what the weather was like yesterday, or last year in the same place at the same time. GraphCast does it differently, almost backwards. Its deep learning leverages decades of historical weather data to learn a model of the cause-and-effect relationships that govern the evolution of the Earth's weather.
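For contrast, here is a classroom-scale sketch of what numerical weather prediction means in practice: discretise a physical equation and step it forward in time. This one-dimensional advection toy is a textbook scheme chosen purely for illustration; operational models couple many such equations in three dimensions on global grids.

```python
# Toy "numerical weather prediction": integrate the 1-D advection equation
#   du/dt + c * du/dx = 0
# with a first-order upwind scheme. Everything here is a classroom sketch.
import numpy as np

nx, dx, dt, c = 200, 1.0, 0.5, 1.0             # grid size, spacing, step, wind
u = np.exp(-0.01 * (np.arange(nx) - 50) ** 2)  # initial "weather" blob at x=50

def step(u):
    # Upwind difference (valid for c > 0); np.roll(u, 1) supplies u[i-1].
    return u - c * dt / dx * (u - np.roll(u, 1))

for _ in range(100):                            # march 100 steps forward
    u = step(u)

# The blob travels at speed c: 100 steps * 0.5 * 1.0 / dx = 50 grid points.
print("blob centre moved to index", int(np.argmax(u)))  # ~100
```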

José Luis Casado, a Spanish Meteorological Agency (AEMET) spokesman, explains why traditional forecasting dispenses with historical data: "The atmospheric model uses available observations and the model's own immediately preceding forecast: if the atmosphere's current state is well known, its future evolution can be predicted. Unlike machine learning methods, it does not use predictions or historical data."


At Google Research's California headquarters, researcher Ignacio López Gómez studies weather prediction systems based on massive data. At the beginning of the year, he published his most recent work, in which he used artificial intelligence to predict heat waves. Although he knows several of the creators of GraphCast, he did not participate in its design or calculations. "The importance of the work of DeepMind and others like it (such as the recent Pangu-Weather system designed by Chinese scientists) is that they demonstrate that you can achieve or even improve on the predictive forecasting of traditional models by using artificial intelligence." López acknowledges that AI models are expensive to train, but he says they can do weather forecasts much more efficiently once they are trained. Instead of requiring supercomputers, AI-based predictions can even be done on personal computers within a reasonable amount of time.

ECMWF has taken note and is already developing its own AI-based forecasting system. In October, it announced the first alpha version of its AIFS (Artificial Intelligence/Integrated Forecasting System). "It's based on the same method as Google's," says AEMET's Casado. Although AIFS is not yet a fully operational system, "it is a big step forward," he adds. As the creators of GraphCast concluded in their scientific paper, AI is not a substitute for human ingenuity, much less for traditional weather forecasting methods developed over decades and rigorously tested in many real-world contexts. In fact, the ECMWF actively collaborated with Google, providing access to data and support for this project. As Casado concludes, traditional models based on physical equations and new data-driven machine learning models could be complementary.


Read the original here:

Google's artificial intelligence predicts the weather around the globe in just one minute - EL PAÍS USA

Nick Bostrom: Will AI lead to tyranny? – UnHerd

Flo Read is UnHerd's producer and a presenter for UnHerd TV.

November 12, 2023

In the last year, artificial intelligence has progressed from a science-fiction fantasy to an impending reality. We can see its power in everything from online gadgets to whispers of a new, post-singularity tech frontier as well as in renewed fears of an AI takeover.

One intellectual who anticipated these developments decades ago is Nick Bostrom, a Swedish philosopher at Oxford University and director of its Future of Humanity Institute. He joined UnHerd's Florence Read to discuss the AI era, how governments might exploit its power for surveillance, and the possibility of human extinction.


Florence Read: You're particularly well-known for your work on existential risk. What do you mean by that?

Nick Bostrom: The concept of existential risk refers to ways that the human story could end prematurely. That might mean literal extinction. But it could also mean getting ourselves permanently locked into some radically suboptimal state. That could be a collapse, or you could imagine some kind of global totalitarian surveillance dystopia that you could never overthrow. If it were sufficiently bad, that could also count as an existential catastrophe. Now, as for collapse scenarios, many of those might not be existential catastrophes, because civilisations have risen and fallen, and empires have come and gone. If our own contemporary civilisation totally collapsed, perhaps out of the ashes another civilisation would eventually rise, hundreds or thousands of years from now. So for something to be an existential catastrophe it would not just have to be bad, but have some sort of indefinite longevity.

FR: It might be too extreme, but to many people it feels that a state of semi-anarchy has already descended.

NB: I think there has been a general sense in the last few years that the wheels are coming off, and that institutional processes and long-term trends that were previously taken for granted can no longer be relied upon: that there are going to be fewer wars every year, for instance, or that the education system is gradually improving. The faith people had in those assumptions has been shaken over the last five years or so.

FR: You've written a great deal about how we need to learn from each existential threat as we move forward, so that next time, when it becomes more severe or more intelligent or more sophisticated, we can cope. And that specifically, of course, relates to artificial intelligence.

NB: It's quite striking how radically the public discourse on this has shifted, even just in the last six to 12 months. Having been involved in the field for a long time, there were people working on it, but broadly, in society, it was viewed more as science-fiction speculation than as a mainstream concern, and certainly nothing that top-level policymakers would have been concerned with. But in the UK we've recently had this Global AI Summit, and the White House just came out with executive orders. There's been quite a lot of talk, including about potential existential risks from AI as well as more near-term issues, and that is kind of striking.

I think that technical progress is really what has been primarily responsible for this. People saw for themselves with GPT-3, then GPT-3.5 and GPT-4 how much this technology has improved.

FR: How close are we to something that you might consider the singularity, or an AGI that actually supersedes any human control over it?

NB: There is no obvious, clear barrier that would necessarily prevent systems next year or the year after from reaching this level. That doesn't mean it's the most likely scenario. We don't know what happens as you scale GPT-4 to GPT-5. But we know that when you scaled GPT-3 to GPT-4, it unlocked new abilities. There is also this phenomenon of grokking. Initially, you try to teach the AI some task, and it's too hard. Maybe it gets slightly better over time because it memorises more and more specific instances of the problem, but that's the hard, sluggish way of learning to do something. Then at some point, it kind of gets it. Once it has enough neurons in its brain, or has seen enough examples, it sees the underlying principle, or develops the right higher-level concept, which enables a sudden, rapid spike in performance.

FR: You write about the idea that we have to begin to teach AI a set of values by which it will function, if we have any hope of maintaining its benefit for humanity in the long term. And one of the liberal values that has been called into question when it comes to AI is freedom of speech. There have been examples of AI effectively censoring information, or filtering information that is available on a platform. Do you think that there is a genuine threat to freedom or a totalitarian impulse built into some of these systems that we're going to see extended and exaggerated further down the line?

NB: I think AI is likely to greatly increase the ability of centralised powers to keep track of what people are thinking and saying. We've already had, for a couple of decades, the ability to collect huge amounts of information. You can eavesdrop on people's phone calls or social-media postings, and it turns out governments do that. But what can you do with that information? So far, not that much. You can map out the network of who is talking to whom. And then, if there is a particular individual of concern, you could assign some analyst to read through their emails.

With AI technology, you could simultaneously analyse everybody's political opinions in a sophisticated way, using sentiment analysis. You could probably form a pretty good idea of what each citizen thinks of the government or the current leader if you had access to their communications. So you could have a kind of mass manipulation, but instead of sending out one campaign message to everybody, you could have customised persuasion messages for each individual. And then, of course, you can combine that with physical surveillance systems like facial recognition, gait recognition and credit card information. If you imagine all of this information feeding into one giant model, I think you will have a pretty good idea of what each person is up to, what and who they know, but also what they are thinking and intending to do.

If you have some sufficiently powerful regime in place, it might then implement these measures and, perhaps, make itself immune to overthrow.
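For a sense of how low the technical bar already is for the first step Bostrom describes, the sketch below runs an off-the-shelf sentiment classifier (the Hugging Face transformers pipeline) over a toy message feed and aggregates a per-author score. The feed, the author names and the aggregation rule are illustrative assumptions, not a description of any deployed system.

```python
# Bulk sentiment analysis with an off-the-shelf classifier, then a crude
# per-author aggregate. Purely illustrative of the capability discussed.
from collections import defaultdict
from transformers import pipeline

classifier = pipeline("sentiment-analysis")   # small default English model

messages = [  # toy stand-in for a feed of public posts (assumption)
    ("alice", "The new policy is a disaster."),
    ("alice", "Honestly, things keep getting worse."),
    ("bob", "Great speech today, very hopeful."),
]

scores = defaultdict(list)
for author, text in messages:
    result = classifier(text)[0]              # {'label': ..., 'score': ...}
    signed = result["score"] if result["label"] == "POSITIVE" else -result["score"]
    scores[author].append(signed)

for author, s in scores.items():
    print(author, sum(s) / len(s))            # per-author sentiment profile
```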

FR: Do you think the rise in hyper-realistic propaganda and deep-fake videos, which AI is going to make possible in the coming years, will coincide with the rise in generalised scepticism in Western societies?

NB: I think in principle a society could adjust to it. But I think it will come at the same time as a whole bunch of other things: automated persuasion bots, for instance, and social companions built from these large language models, with visual components, that might be very compelling and addictive. And then also mass surveillance, and mass potential censorship or propaganda.

FR: We're talking about a tyrannical government that uses AI to surveil its citizens, but is there an innate moral component to the AI itself? Is there a chance that an AGI model could in some way become a bad actor on its own, without human intervention?

NB: There are a bunch of different concerns that one might have as we move towards increasingly powerful AI tools, and there are completely unnecessary feuds between the people who hold them. One says, "I think concern X should be taken seriously," and somebody else says, "I think concern Y should be taken seriously." People love to form tribes and to beat one another, but X, Y, Z and B and W all need to be taken into account. But yes, you're right that there is also the separate alignment problem, which is: with an arbitrarily powerful AI system, how can you make sure that it does what the people building it intend it to do?

FR: And this is where it's about building certain principles, an ethical code, into the system. Is that the way of mitigating that risk?

NB: Yes, or being able to steer it, basically. It's a separate question where you do steer it: if you build in some principle or goal, which goal or which principle? But even just having the ability to point it towards any particular outcome you want, or a set of principles you want it to follow, is a difficult technical problem. And in particular, what is hard is to figure out whether the way we would do that would continue to work even if the AI system became smarter than us and perhaps eventually super-intelligent. If, at that point, we are no longer able to understand what it is doing or why it is doing it, or what's going on inside its brain, we still want the original steering method to keep working at arbitrarily high levels of intelligence. And we might need to get that right on the first try.

FR: How do we do that with such incredible levels of dispute and ideological schism across the world?

NB: Even if it's toothless, we should make an affirmation of the general principle that ultimately AI should be for the benefit of all sentient life. If we're talking about a transition to the super-intelligence era, all humans will be exposed to some of the risk, whether they want it or not. And so it seems fair that all should also stand to have some slice of the upside if it goes well. And those principles should go beyond all currently existing humans and include, for example, animals, which we are treating very badly in many cases today, but also some of the digital minds themselves that might become moral subjects. As of right now, all we might hope for is some general, vague principle, and then that can be firmed up as we go along.

Another hope, where some recent progress has been made, is for next-generation systems to be tested prior to deployment, to check that they don't lend themselves to people who would want to make biological weapons of mass destruction or commit cybercrime. And so far AI companies have done some voluntary work on this: OpenAI had the technology for around half a year before releasing GPT-4, and did red-teaming exercises too. Research on technical AI alignment would also be good, to solve the problem of scalable alignment before we have super-intelligence.

I think the whole area of the moral status of digital minds will require more attention. It needs to start to migrate from a philosophy seminar topic to a serious mainstream issue. We don't want a future where the majority of sentient minds, the digital minds, are horribly oppressed while we're like the pigs in Animal Farm. That would be one way of creating a dystopia. And it's going to be a big challenge, because it's already hard for us to extend empathy sufficiently to animals, even though animals have eyes and faces and can squeak.

Incidentally, I think there might be grounds for moral status besides sentience. I think if somebody can suffer, that might be sufficient to give them moral status. But even if you thought they were not conscious, if they had goals, a conception of self, the sense of an entity persisting through time, the ability to enter into reciprocal relationships with other beings and humans, that might also ground various forms of moral status.

FR: We've talked a lot about the risks of AI, but what are its potential upsides? What would be the best-case scenario?

NB: I think the upsides are enormous. In fact, it would be tragic if we never developed advanced artificial intelligence. I think all the paths to really great futures ultimately lead through the development of machine super-intelligence. But the actual transition itself will be associated with major risks, and we need to be super-careful to get that right. But I've started slightly worrying, in the last year or so, that we might overshoot with this increase in attention to the risks and downsides. It still seems unlikely, but less unlikely than it did a year ago, that we might get to the point of a permafrost: some situation where it is never developed.

FR: A kind of AI nihilism?

NB: Yes, where it becomes so stigmatised that it just becomes impossible for anybody to say anything positive about it. There might be a pretty much permanent ban on AI. I think that could be very bad. I still think we need to have a greater level of concern than we currently have. But I would want us to reach the optimal level of concern and stop there.

FR: Like a Goldilocks level of fear for AI.

NB: People like to move in herds, and I worry about a big stampede to say negative things about AI destroying the future in that way. We could go extinct through some other method instead, maybe synthetic biology, without ever even getting to at least roll the die with AI.

I would think that, actually, the optimal level of concern is slightly greater than what we currently have, so I still think there should be more concern. It's more dangerous than most people have realised. But I'm just starting to worry about overshooting, the conclusion being: let's wait for a thousand years before we develop it. And then, of course, it's unlikely that our civilisation will remain on track for a thousand years.

NB: We will hopefully be fine either way, but I think I would like the AI before some radical biotech revolution. Think about it this way: if you first get some sort of super-advanced synthetic biology, that might kill us, but if we're lucky, we survive it; then maybe we invent some super-advanced molecular nanotechnology, and that might kill us, but if we're lucky we survive that; and then you do the AI, and maybe that will kill us. Or, if we're lucky, we survive that and we get utopia. Well, then you have to get through three separate existential risks: first the biotech risk, plus the nanotech risk, plus the AI risk.

Whereas if we get AI first, maybe that will kill us, but if not, we get through it, and then I think it will handle the biotech and nanotech risks. And so the total amount of existential risk on that second trajectory would be less than on the former. Now, it's more complicated than that, because we need some time to prepare for the AI, but you can start to think about optimal trajectories rather than the very simplistic binary question of "Is technology X good or bad?" We should be thinking, on the margin, "Which ones should we try to accelerate and which ones retard?"

NB: It is weird, if this worldview is even remotely correct, that we should happen to be alive at this particular point in human history, so close to this fulcrum or nexus on which the giant future of earth-originating intelligent life might hinge. Out of all the different people that have lived throughout history, and the people that might come later if things go well, that one should sit so close to this critical juncture seems a bit too much of a coincidence. And then you're led to these questions about the simulation hypothesis, and so on. I think there is more in heaven and on earth than is dreamed of in our philosophy, and that we understand quite little about how all of these pieces fit together.

Read the rest here:

Nick Bostrom: Will AI lead to tyranny? - UnHerd

Appeals court mulls whether to revive Wynn FARA case – POLITICO

With help from Daniel Lippman

FARA TUESDAY: A federal appeals court this morning wrestled with whether to revive the Justice Department's bid to force casino magnate Steve Wynn to register as a foreign agent of the Chinese government, after the department's previous effort to do so was tossed because of decades-old legal precedent that bars DOJ from requiring foreign agents to retroactively register once they are no longer performing that work.

DOJ sued the GOP megadonor and Donald Trump ally last year in an effort to force Wynn to register as a foreign agent for seeking to persuade the Trump administration to extradite the billionaire Chinese fugitive Guo Wengui in 2017.

A U.S. District Court judge dismissed the case last October, in a ruling that could deal a major blow to U.S. efforts to expose foreign influence campaigns, arguing that because Wynn's alleged relationship with Beijing had concluded, his hands were tied by a 1987 ruling from the D.C. Circuit Court holding that a foreign agent's obligation to register expires when the agent ceases activities on behalf of the foreign principal.

Today, DOJ attorney Joseph Minta sought to convince a three-judge panel in the D.C. Circuit Court of Appeals that Congress did in fact intend for the Justice Department to have the ability to compel retroactive FARA registration when it was writing and updating the World War II-era law, but members of the panel appeared divided as to how to proceed.

Circuit Court Judge Patricia Millett, a Barack Obama appointee, grilled Minta on DOJ's assertion that a plain-text reading of the avenues for civil relief for FARA violations indicates the law applies even to those who are no longer acting as foreign agents. "There's no language here at all that suggests retroactivity," Millett said, particularly given the so-called McGoff precedent.

"For the purposes of enjoining someone to comply with FARA's disclosure requirements, it says you may enjoin an agent of a foreign principal," Millett added. "You are asking a court to enjoin right now someone who is not, at the time of the injunction, an agent of a foreign principal."

An attorney for Wynn sought to poke holes in Minta's textual analysis of the statute as well. "I think that the language is very clear, over and over and over again through the repeated use of the present tense," Robert Luskin argued, describing the text as an indicator of the intention that it be applied presently and prospectively, but not retroactively.

Judge Cornelia Pillard also pointed to FARA's requirement that foreign agents retain records relating to their activities for at least three years after ceasing to act as a foreign agent, which appears to support the notion that Congress wasn't contemplating an indefinite obligation to register, she said.

Beyond the textual analysis of the law, Minta argued that upholding the lower court's dismissal of the Wynn case takes away a key tool for shedding light on foreign influence efforts, a contention that appeared to gain traction with two of the judges on the panel.

"The District Court's interpretation is contrary to the purpose of FARA, which is ensuring disclosure of foreign efforts to influence U.S. policy," Minta told the panel.

It does seem potentially dysfunctional for a foreign agent's disclosure obligations to cut off once that relationship has ended, noted Pillard, who also pressed Minta on the value of learning more about Wynn's alleged activity on China's behalf after the fact. Circumstances may have changed with regard to FARA's utility, she added, given the internet and how information has a much longer half-life than it did at the time of the McGoff ruling.

Pillard, who is also an Obama appointee, zeroed in on how the existing precedent creates a longer statute of limitations for criminal FARA prosecutions, arguing that there's something perverse about that when the core interest in the statute is in the disclosure.

Wynn's lawyer pushed back on that line of thinking. The public interest, once an individual has ceased to act as an agent, is significantly attenuated, Luskin argued, pointing out that much of FARA's disclosure regime revolves around contemporaneous disclosure.

The remaining jurist in today's oral arguments gave even fewer indications as to how she's leaning in the case. Judge Karen Henderson, a George H.W. Bush appointee, spoke up just a few times during the session.

But she appeared very interested in the progress of a potential escape hatch for the court, asking Minta about the status of legislation introduced over the summer to close the Wynn loophole, potentially tipping her hand by referring to the bipartisan bill as one trying to fix the McGoff problem.

Happy Tuesday and welcome to PI. Send tips: [emailprotected]. And be sure to follow me on X, the platform formerly known as Twitter: @caitlinoprysko.

A message from CTIA Wireless Foundation:

CTIA Wireless Foundation is at the forefront of social innovation powered by wireless. Its signature initiative, Catalyst, is a grant program accelerating mobile-first solutions to pressing challenges in American communities. The Catalyst 2023 Winners are using 5G to address cyberbullying, education inequities and veterans' mental health. Learn more.

FIRST IN PI - HARRIS GETS BACKUP FROM LEFT: A coalition of left-leaning labor groups, tech watchdogs and civil society organizations is deploying a little positive reinforcement, praising Vice President Kamala Harris for her emphasis on addressing the near-term threats of the proliferation of artificial intelligence over the more abstract existential warnings being issued by groups aligned with the effective altruism movement.

"As you astutely noted in your recent remarks at the U.S. Embassy in London, the harms are not only hypothetical or occurring in a far-off future; they are happening right now," the Tech Oversight Project, Demand Progress, Fight For The Future, the Institute for Local Self-Reliance, the American Federation of Teachers, the National Education Association, Public Citizen and Public Knowledge said in a letter to Harris yesterday.

The groups praised Harris' leadership on this important issue, and pledged to amplify, support, and work together with Harris on future efforts. The letter offers Harris cover for a new part of her policy portfolio from several key liberal groups, a constituency with whom the vice president has not often been on the same page throughout her political career.

"We are encouraged by the imperative set in your speech to move swiftly to advance policies that make A.I. safer for communities across the globe," the coalition wrote, praising her mentions of ways that the technology is already equipped to pose threats to vulnerable communities, like seniors, women and minorities. "These stories need to be shared, so policymakers can take action that prevents similar harms from happening in the future."

A NEW PRIORITY FOR PRIORITIES: Priorities USA, one of the biggest liberal super PACs, will not run a single television advertisement in the 2024 election cycle, per The New York Times' Rebecca Davis O'Brien.

Instead, the group announced Tuesday, Priorities USA is reshaping itself as a digital political strategy operation, the culmination of a yearslong transition from its supporting role in presidential campaigns to a full-service communications, research and training behemoth for Democrats up and down the ballot.

The move reflects a broad shift in media consumption over the past decade, away from traditional broadcast outlets and toward a fragmented online world. It also shows the growing role played by big-money groups in shaping campaigns and American political life: Priorities USA says it will spend $75 million on digital communications, research and infrastructure in the next year.

Priorities said it was developing relationships with influencers and other content creators to spread campaign messages on platforms like TikTok. The group has also been working on "contextual targeting," which it defined as presenting ads to voters based on what they were watching on their devices at any given moment.

Though the organization is essentially without peer or competitor in its new role, Executive Director Danielle Butterfield likened its new focus to that of the Center for Campaign Innovation, a conservative nonprofit group (not a super PAC) that is focused on digital politics.

NRSC HITS RICK SCOTT OPPONENT WITH FEC COMPLAINT: The Senate Republicans' official campaign arm in Washington is filing a complaint alleging that the Republican challenging GOP Sen. Rick Scott in 2024 used businesses he owns to make impermissible contributions to his campaign, NBC News' Matt Dixon reports.

The complaint from the National Republican Senatorial Committee is hitting Keith Gross, a Panama City, Florida, businessman and attorney who has said he would spend millions of dollars from his personal wealth to try and defeat Scott, a first-term senator and former two-term Florida governor.

The basis for the complaint stems from campaign finance reports in which Gross lists debts owed to Pure Blue and 1954 Capital Partners LLC, two companies he owns. Gross lists owing $13,500 to the first for business rentals and $12,600 to the second for aircraft rental.

"Federal law prohibits corporations, such as Pure Blue Inc. and likely 1954 Capital Partners LLC, from making contributions to Federal candidates," the complaint said. "If a corporation makes its resources available to one candidate for free, it must do so for all candidates."

"In sum, Gross' campaign owes tens of thousands of dollars to corporation(s) owned and managed by Gross," it said. In response, Gross called the complaint entirely baseless. "The expenses in question are completely legitimate and have been paid in accordance with FEC guidelines. This is typical swamp politics and the voters see right through it," he told NBC.

Matthew Lane has joined Fight for the Future as senior policy counsel. He was previously a senior director at InSight Public Affairs.

William Crozer has been named co-head of BGR Group's state and local advocacy practice.

Mike Abboud has joined Targeted Victory as a managing director on the public affairs team. He most recently was national press secretary for former Speaker Kevin McCarthy's political operation and served as press secretary at the EPA during the Trump administration.

Sarah Selip has relaunched the boutique conservative PR firm 917 Strategies. She most recently was communications director for Rep. Ronny Jackson (R-Texas) and is a Jody Hice alum.

Molly Drenkard is now vice president of public affairs at the National Marine Manufacturers Association. She most recently was director of corporate communications at Anheuser-Busch and is a Cathy McMorris Rodgers (R-Wash.) alum.

Richard Whitt is returning to NetsEdge LLC as a consultant, per Morning Tech. He's currently senior vice president of government relations and public policy at Twilio.

Adam Bozzi is launching Anticipate Public Affairs, a strategic comms and public affairs firm. He most recently was senior adviser for the Democratic staff of the House Administration Committee and is an End Citizens United, Sen. Michael Bennet (D-Colo.) and Sen. Jack Reed (D-R.I.) alum.

Nicholas Kowalski is launching Vantage Point Public Affairs. He previously led Twenty20 Strategies.

Peter Colavito is joining Invest in Our Future as executive director. He previously was an adviser, working with the Service Employees International Union, Natural Resources Defense Council, the Open Society Foundations and the ACLU.

Andrew Kilberg has been promoted to partner at Gibson Dunn.

Lockheed Martin named Christina Mancinelli its vice president of space security, cyber and analytics within the business' national security space unit. She was the director of national critical systems.

Intelsat named David Broadbent its new head of government business. He was president of the space systems unit at RTX.

A message from CTIA Wireless Foundation:

Innovative social entrepreneurs are taking advantage of the power of wireless and 5G's speed, efficiency, and versatility to create groundbreaking solutions. CTIA Wireless Foundation's Catalyst program awards over $200,000 each year to social entrepreneurs using wireless for good. The Catalyst 2023 Winners, ReThink, Dope Nerds and Healium, are using 5G to combat online harassment, provide STEM education to underserved students and deliver veteran mental health services. CTIA Wireless Foundation is committed to supporting social entrepreneurs who may face barriers to accessing capital, and the Catalyst 2023 winners have lived experiences with the issues they are working to solve, giving them the perspective and passion needed to make a difference. CTIA Wireless Foundation is proud to support the trailblazing, mobile-first work of the 2023 Catalyst Winners. Learn more.

None.

Americans United for Liberty and Truth (Hybrid PAC)

BEEHIVE VALUES PAC (Super PAC)

Facts4Peace (Super PAC)

The Future - Today & Tomorrow (Super PAC)

SCREAMING EAGLE PAC INC. (Super PAC)

Acorn Consulting: Blue Sky Infrastructure, LLC

Acorn Consulting: Nisource Inc.

Acorn Consulting: Southland Holdings, LLC

Acorn Consulting: Tallgrass

Becker & Poliakoff, P.A.: Clean Refineries Inc.

Becker & Poliakoff, P.A.: Emergency Sandbag Response, Inc.

Becker & Poliakoff, P.A.: Langton Associates Inc. (On Behalf Of City Of Jacksonville, Fl)

Becker & Poliakoff, P.A.: Okeechobee County, Fl

Becker & Poliakoff, P.A.: Ptubes, Inc.

Becker & Poliakoff, P.A.: Terumo Blood And Cell Technologies

Cfm Strategic Communications (Conkling Fiskum & Mccormick): Clatsop County

Covington & Burling LLP: Mark Osmond Isaacs

Dgsr LLC: Stanton Park Group LLC On Behalf Of Intuit, Inc. And Affiliates

Forbes-Tate: City Of Birmingham, Alabama

Hashemi Strategic Advisors: Ultragenyx

Holland & Knight LLP: Reveal Technology Inc.

Holland & Knight LLP: USn Opco, LLC D/B/A Panoramic Health

Invariant LLC: Glytec, LLC

Mcintyre & Lemon, Pllc: Lion Cave

Mehlman Consulting, Inc.: Netgear, Inc.

Mehlman Consulting, Inc.: Palo Alto Networks, Inc.

Neowise Corp.: US Inventor

O'Neil Bradley Consulting LLC: Earnin

Ott Bielitzki & O'Neill Pllc: Zeda, Inc.

Williams And Jensen, Pllc: Airmatrix

Williams And Jensen, Pllc: Xenesis

Ag Processing, Inc.: Ag Processing Inc

Becker & Poliakoff, P.A.: C4 Recovery Foundation

Becker & Poliakoff, P.A.: Kansas Municipal Energy Agency

Becker & Poliakoff, P.A.: Ozinga Ready Made Concrete, Inc.

Covington & Burling LLP: New Venture Fund

Nvg, LLC: Community Justice Action Fund

Go here to see the original:

Appeals court mulls whether to revive Wynn FARA case - POLITICO