Archive for the ‘Artificial Intelligence’ Category

Pope, once a victim of AI-generated imagery, calls for treaty to regulate artificial intelligence – WBRZ

ROME (AP) – Pope Francis on Thursday called for an international treaty to ensure artificial intelligence is developed and used ethically, arguing that the risks of technology lacking human values of compassion, mercy, morality and forgiveness are too great.

Francis added his voice to increasing calls for binding, global regulation of AI in his annual message for the World Day of Peace, which the Catholic Church celebrates each Jan. 1. The Vatican released the text of the message on Thursday.

For Francis, the appeal is somewhat personal: Earlier this year, an AI-generated image of him wearing a luxury white puffer jacket went viral, showing just how quickly realistic deepfake imagery can spread online.

The pope's message was released just days after European Union negotiators secured provisional approval on the world's first comprehensive AI rules that are expected to serve as a gold standard for governments considering their own regulation.

Francis acknowledged the promise AI offers and praised technological advances as a manifestation of the creativity of human intelligence, echoing the message the Vatican delivered at this year's U.N. General Assembly, where a host of world leaders raised the promise and perils of the technology.

But his new peace message went further and emphasized the grave, existential concerns that have been raised by ethicists and human rights advocates about the technology that promises to transform everyday life in ways that can disrupt everything from democratic elections to art.

"Artificial intelligence may well represent the highest-stakes gamble of our future," said Cardinal Michael Czerny of the Vatican's development office, who introduced the message at a press conference Thursday. "If it turns out badly, humanity is to blame."

The document insisted that the technological development and deployment of AI must keep foremost concerns about guaranteeing fundamental human rights, promoting peace and guarding against disinformation, discrimination and distortion.

Pope Francis leaves after an audience with sick people and Lourdes pilgrimage operators in the Paul VI Hall, at the Vatican, Thursday, Dec. 14, 2023. (AP Photo/Alessandra Tarantino)

Francis' greatest alarm was devoted to the use of AI in the armaments sector, which has been a frequent focus of the Jesuit pope, who has called even traditional weapons makers "merchants of death."

He noted that remote weapons systems had already led to a "distancing from the immense tragedy of war and a lessened perception of the devastation caused by those weapons systems and the burden of responsibility for their use."

"The unique capacity for moral judgment and ethical decision-making is more than a complex collection of algorithms, and that capacity cannot be reduced to programming a machine," he wrote.

He called for "adequate, meaningful and consistent" human oversight of Lethal Autonomous Weapons Systems (or LAWS), arguing that the world has no need for new technologies that merely end up "promoting the folly of war."

On a more basic level, he warned about the profound repercussions on humanity of automated systems that rank or categorize citizens. Beyond the threat such technology poses to jobs around the world that robots could do, Francis noted that it could determine the reliability of an applicant for a mortgage, the right of a migrant to receive political asylum or the chance of reoffending by someone previously convicted of a crime.

"Algorithms must not be allowed to determine how we understand human rights, to set aside the essential human values of compassion, mercy and forgiveness, or to eliminate the possibility of an individual changing and leaving his or her past behind," he wrote.
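
His warning maps onto concrete mechanics. Automated scoring systems of the kind described above typically reduce a person to a handful of numeric features, a set of fixed weights and a cutoff. The short Python sketch below is a deliberately toy illustration of that structure; every feature name and weight is invented and resembles no real credit-scoring or justice system.

import math

# Toy illustration only: invented features and weights, modeled on no real
# system. The point is structural: the decision is a fixed weighted sum and
# a cutoff, with no notion of a person changing over time.
WEIGHTS = {"income": 0.8, "prior_defaults": -1.5, "years_employed": 0.3}
BIAS = -0.5
CUTOFF = 0.5

def approval_probability(applicant: dict[str, float]) -> float:
    # Logistic score in (0, 1) from a static linear model.
    z = BIAS + sum(w * applicant.get(name, 0.0) for name, w in WEIGHTS.items())
    return 1.0 / (1.0 + math.exp(-z))

applicant = {"income": 1.2, "prior_defaults": 1.0, "years_employed": 4.0}
print(approval_probability(applicant) >= CUTOFF)  # the judgment is one threshold test

In a model like this, a past default lowers the score permanently; nothing in it can register that an applicant has changed and left the past behind, which is precisely the reduction the message objects to.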

For Francis, the issue hits at some of his priorities as pope: denouncing social injustices, advocating for migrants and ministering to prisoners and those on the margins of society.

The pope's message didn't delve into details of a possible binding treaty other than to say it must be negotiated at a global level, to both promote best practices and prevent harmful ones. Technology companies alone cannot be trusted to regulate themselves, he said.

He repurposed arguments he has used before to denounce multinationals that have ravaged Earth's natural resources and impoverished the Indigenous peoples who live off them.

"Freedom and peaceful coexistence are threatened whenever human beings yield to the temptation to selfishness, self-interest, the desire for profit and the thirst for power," he wrote.

Barbara Caputo, a professor at the Turin Polytechnic university's Artificial Intelligence Hub, noted that there was already convergence on some fundamental ethical issues and definitions in both the EU's regulation and the executive order unveiled by U.S. President Joe Biden in October.

"This is no small thing," she told the Vatican briefing. "This means that whoever wants to produce artificial intelligence, there is a common regulatory base."

Original post:
Pope, once a victim of AI-generated imagery, calls for treaty to regulate artificial intelligence - WBRZ

SWISS International Airlines to Use Artificial Intelligence to Count Passengers With Special Cameras Installed at the … – paddleyourownkanoo.com

SWISS International Airlines is to install a new digital boarding system on its aircraft, which will use artificial intelligence to conduct a passenger count and make sure no stowaways have managed to sneak onboard.

The Zurich-based carrier has decided to adopt the system after a successful three-month trial conducted earlier this year. During the trial, the airline wanted to make sure that the AI model could work in various light conditions and detect a parent carrying an infant in their arms.

Unlike some airlines that rely on automated passenger reconciliation via boarding pass scanners, cabin crew at the Swiss flag carrier are still required to conduct a manual headcount of passengers using an old-fashioned clicker.

The new system makes that process obsolete, and SWISS says it expects the boarding process to be a lot quicker as a result.

Developed by Berlin-based tech startup Vion AI, the new passenger count system works with a camera installed at the boarding door, which monitors people coming and going from the plane.

A prototype of the system was only developed earlier this year, but during the trial SWISS found that it counted boarding passengers reliably under a wide range of conditions.
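
Neither SWISS nor Vion AI has published technical details of how the count works. A common way such camera counters are built, however, is as a tracker plus a virtual line across the doorway: each tracked person who crosses the line inward increments the count, and each outward crossing decrements it. The Python sketch below is a minimal, hypothetical illustration of that general approach; it assumes an upstream person detector already supplies tracked centroids for each video frame, and it ignores the hard cases the SWISS trial specifically tested, such as a parent carrying an infant.

from dataclasses import dataclass, field

# Hypothetical sketch, not Vion AI's actual method: count door-line crossings
# from pre-tracked person positions supplied by some upstream detector.
@dataclass
class BoardingCounter:
    line_y: float = 100.0  # virtual line across the door-camera view (invented value)
    onboard: int = 0       # running passenger count
    _last_y: dict[int, float] = field(default_factory=dict)  # track id -> last y seen

    def update(self, detections: dict[int, float]) -> None:
        # detections maps a tracked person id to its current y coordinate
        for track_id, y in detections.items():
            prev = self._last_y.get(track_id)
            if prev is not None:
                if prev < self.line_y <= y:    # crossed inward: boarded
                    self.onboard += 1
                elif prev >= self.line_y > y:  # crossed outward: deboarded
                    self.onboard -= 1
            self._last_y[track_id] = y

counter = BoardingCounter()
# Two synthetic tracks: person 1 boards; person 2 approaches the door, then turns back.
for frame in ({1: 80.0, 2: 90.0}, {1: 110.0, 2: 105.0}, {2: 70.0}):
    counter.update(frame)
print(counter.onboard)  # 1

A real deployment would also need to reconcile the running count against the flight manifest, which is how a stowaway or a missed passenger would actually be flagged.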

Further work is, however, required to develop and refine the system, and the airline doesn't expect to start installing it across its fleet until later in 2024. Initially, the short-haul fleet will have the system fitted from the third quarter of 2024, while work to install the cameras on long-haul aircraft will begin in the final three months of 2024.

In the meantime, some aircraft will have the system installed as part of the ongoing development of the AI software, but crew members will still be required to conduct manual passenger counts.

Addressing privacy concerns, SWISS says all data will be processed in full compliance with the strict European and Swiss data protection rules.

"In adopting this AI-based solution for counting our passengers during boarding, we're taking another major step forward into the digital future," commented Oliver Buchhofer, SWISS's Head of Operations.

"The use of artificial intelligence will help make the boarding process faster and more efficient," Buchhofer continued. "This in turn will reduce waiting times and give our guests a pleasanter travel experience. The new digital count will ease the workload on our cabin crews, too."



See original here:
SWISS International Airlines to Use Artificial Intelligence to Count Passengers With Special Cameras Installed at the ... - paddleyourownkanoo.com

Vladimir Putin lost for words as he confronts his AI ‘double’ – The Jerusalem Post

Russian President Vladimir Putin appeared briefly lost for words on Thursday when confronted with an AI-generated version of himself.

The "double" took the opportunity to put a question to Putin about artificial intelligence during an annual news conference where dozens of callers from around the country were hooked up to the president by video link.

"Vladimir Vladimirovich, hello, I am a student at St Petersburg state university. I want to ask, is it true you have a lot of doubles?" the double asked, prompting laughter among the audience in the hall with Putin in Moscow.

"And also: How do you view the dangers that artificial intelligence and neural networks bring into our lives?"

The question prompted a rare hesitation from Putin, already in his fourth hour of taking questions at the marathon event.

"I see you may resemble me and speak with my voice. But I have thought about it and decided that only one person must be like me and speak with my voice, and that will be me," he said.

There has been recurrent speculation, particularly in Western media, that Putin has one or more body doubles to cover for him in some public appearances because of alleged health problems. The Kremlin has denied that and said the president's health is excellent.

Originally posted here:
Vladimir Putin lost for words as he confronts his AI 'double' - The Jerusalem Post

Is Studying Artificial Intelligence In University A Useless Pursuit? – Medium

Is Studying Artificial Intelligence In University A Useless Pursuit? A Critical Look At The Claims and Realities

The rise of artificial intelligence (AI) has ignited a firestorm of debate, particularly within the realm of education. While its potential to revolutionize industries and reshape our lives is undeniable, questions remain about the value of studying it in a formal university setting. Critics argue that AI is a rapidly evolving field, rendering established academic frameworks obsolete and leaving graduates ill-equipped for the ever-shifting landscape. Is this a valid concern, or is studying AI at university still a worthwhile investment?

The Case Against Studying AI:

Rapidly Changing Field: Critics claim the field of AI is evolving at breakneck speed, making it impossible for academic curricula to keep pace. New algorithms and breakthroughs emerge constantly, rendering existing knowledge outdated and potentially irrelevant by the time graduates enter the workforce.

Overly Theoretical: Accusations abound that university AI courses focus excessively on theoretical foundations and mathematical underpinnings, neglecting practical skills needed for real-world applications. Graduates may possess deep theoretical understanding but lack the practical ability to implement AI solutions or navigate the complexities of real-world data and systems.

Job Market Saturation: Some argue that the AI job market is becoming saturated, leading to fierce competition for a limited number of positions. With universities churning out a growing number of AI graduates, the fear is that many will struggle to find meaningful employment in the field.

Alternatives Exist: Critics point to alternative avenues for acquiring AI skills, such as online bootcamps, self-directed learning through online resources, and hands-on experience through personal projects. These alternatives, they argue, can provide practical skills at a lower cost and without the constraints of a traditional academic setting.

Countering the Arguments:

Building a Strong Foundation: While the field of AI is undoubtedly dynamic, a solid understanding of its core principles remains crucial. University courses provide this foundational knowledge, enabling graduates to adapt and learn new skills as the field evolves. This adaptability is critical in a rapidly changing environment.

Developing Critical

Read the original post:
Is Studying Artificial Intelligence In University A Useless Pursuit? - Medium

Inside OpenAI’s Crisis Over the Future of Artificial Intelligence – The New York Times

Around noon on Nov. 17, Sam Altman, the chief executive of OpenAI, logged into a video call from a luxury hotel in Las Vegas. He was in the city for its inaugural Formula 1 race, which had drawn 315,000 visitors including Rihanna and Kylie Minogue.

Mr. Altman, who had parlayed the success of OpenAI's ChatGPT chatbot into personal stardom beyond the tech world, had a meeting lined up that day with Ilya Sutskever, the chief scientist of the artificial intelligence start-up. But when the call started, Mr. Altman saw that Dr. Sutskever was not alone: he was virtually flanked by OpenAI's three independent board members.

Instantly, Mr. Altman knew something was wrong.

Unbeknownst to Mr. Altman, Dr. Sutskever and the three board members had been whispering behind his back for months. They believed Mr. Altman had been dishonest and should no longer lead a company that was driving the A.I. race. On a hush-hush 15-minute video call the previous afternoon, the board members had voted one by one to push Mr. Altman out of OpenAI.

Now they were delivering the news. Shocked that he was being fired from a start-up he had helped found, Mr. Altman widened his eyes and then asked, "How can I help?" The board members urged him to support an interim chief executive. He assured them that he would.

Within hours, Mr. Altman changed his mind and declared war on OpenAI's board.

His ouster was the culmination of years of simmering tensions at OpenAI that pitted those alarmed by A.I.'s power against others who saw the technology as a once-in-a-lifetime profit and prestige bonanza. As divisions deepened, the organization's leaders sniped and turned on one another. That led to a boardroom brawl that ultimately showed who has the upper hand in A.I.'s future development: Silicon Valley's tech elite and deep-pocketed corporate interests.

The drama embroiled Microsoft, which had committed $13 billion to OpenAI and weighed in to protect its investment. Many top Silicon Valley executives and investors, including the chief executive of Airbnb, also mobilized to support Mr. Altman.

Some fought back from Mr. Altman's $27 million mansion in San Francisco's Russian Hill neighborhood, lobbying through social media and voicing their displeasure in private text threads, according to interviews with more than 25 people with knowledge of the events. Many of their conversations and the details of their confrontations have not been previously reported.

At the center of the storm was Mr. Altman, a 38-year-old multimillionaire. A vegetarian who raises cattle and a tech leader with little engineering training, he is driven by a hunger for power more than by money, a longtime mentor said. And even as Mr. Altman became A.I.'s public face, charming heads of state with predictions of the technology's positive effects, he privately angered those who believed he ignored its potential dangers.

OpenAI's chaos has raised new questions about the people and companies behind the A.I. revolution. If the world's premier A.I. start-up can so easily plunge into crisis over backbiting behavior and slippery ideas of wrongdoing, can it be trusted to advance a technology that may have untold effects on billions of people?

"OpenAI's aura of invulnerability has been shaken," said Andrew Ng, a Stanford professor who helped found the A.I. labs at Google and the Chinese tech giant Baidu.

From the moment it was created in 2015, OpenAI was primed to combust.

The San Francisco lab was founded by Elon Musk, Mr. Altman, Dr. Sutskever and nine others. Its goal was to build A.I. systems to benefit all of humanity. Unlike most tech start-ups, it was established as a nonprofit with a board that was responsible for making sure it fulfilled that mission.

The board was stacked with people who had competing A.I. philosophies. On one side were those who worried about A.I.'s dangers, like Mr. Musk, who left OpenAI in a huff in 2018. On the other were Mr. Altman and those focused more on the technology's potential benefits.

In 2019, Mr. Altman, who had extensive contacts in Silicon Valley as president of the start-up incubator Y Combinator, became OpenAI's chief executive. He would own just a tiny stake in the start-up.

"Why is he working on something that won't make him richer? One answer is that lots of people do that once they have enough money, which Sam probably does," said Paul Graham, a founder of Y Combinator and Mr. Altman's mentor. "The other is that he likes power."

Mr. Altman quickly changed OpenAI's direction by creating a for-profit subsidiary and raising $1 billion from Microsoft, spurring questions about how that would work with the board's mission of safe A.I.

Earlier this year, departures shrank OpenAI's board to six people from nine. Three of them, Mr. Altman, Dr. Sutskever and Greg Brockman, OpenAI's president, were founders of the lab. The others were independent members.

Helen Toner, a director of strategy at Georgetown University's Center for Security and Emerging Technology, was part of the effective altruist community that believes A.I. could one day destroy humanity. Adam D'Angelo had long worked with A.I. as the chief executive of the question-and-answer website Quora. Tasha McCauley, an adjunct scientist at the RAND Corporation, had worked on tech and A.I. policy and governance issues and taught at Singularity University, which was named for the moment when machines can no longer be controlled by their creators.

They were united by a concern that A.I. could become more intelligent than humans.

After OpenAI introduced ChatGPT last year, the board became jumpier.

As millions of people used the chatbot to write love letters and brainstorm college essays, Mr. Altman embraced the spotlight. He appeared with Satya Nadella, Microsoft's chief executive, at tech events. He met President Biden and embarked on a 21-city global tour, hobnobbing with leaders like Prime Minister Narendra Modi of India.

Yet as Mr. Altman raised OpenAI's profile, some board members worried that ChatGPT's success was antithetical to creating safe A.I., two people familiar with their thinking said.

Their concerns were compounded when they clashed with Mr. Altman in recent months over who should fill the board's three open seats.

In September, Mr. Altman met investors in the Middle East to discuss an A.I. chip project. The board was concerned that he wasn't sharing all his plans with it, three people familiar with the matter said.

Dr. Sutskever, 37, who helped pioneer modern A.I., was especially disgruntled. He had become fearful that the technology could wipe out humanity. He also believed that Mr. Altman was bad-mouthing the board to OpenAI executives, two people with knowledge of the situation said. Other employees had also complained to the board about Mr. Altman's behavior.

In October, Mr. Altman promoted another OpenAI researcher to the same level as Dr. Sutskever, who saw it as a slight. Dr. Sutskever told several board members that he might quit, two people with knowledge of the matter said. The board interpreted the move as an ultimatum to choose between him and Mr. Altman, the people said.

Dr. Sutskever's lawyer said it was categorically false that he had threatened to quit.

Another conflict erupted in October when Ms. Toner published a paper, "Decoding Intentions: Artificial Intelligence and Costly Signals," at her Georgetown think tank. In it, she and her co-authors praised Anthropic, an OpenAI rival, for delaying a product release and avoiding "the frantic corner-cutting that the release of ChatGPT appeared to spur."

Mr. Altman was displeased, especially since the Federal Trade Commission had begun investigating OpenAI's data collection. He called Ms. Toner, saying her paper could cause problems.

The paper was merely academic, Ms. Toner said, offering to write an apology to OpenAI's board. Mr. Altman accepted. He later emailed OpenAI's executives, telling them that he had reprimanded Ms. Toner.

"I did not feel we're on the same page on the damage of all this," he wrote.

Mr. Altman called other board members and said Ms. McCauley wanted Ms. Toner removed from the board, people with knowledge of the conversations said. When board members later asked Ms. McCauley if that was true, she said that was "absolutely false."

"This significantly differs from Sam's recollection of these conversations," an OpenAI spokeswoman said, adding that the company was looking forward to an independent review of what transpired.

Some board members believed that Mr. Altman was trying to pit them against each other. Last month, they decided to act.

Dialing in from Washington, Los Angeles and the San Francisco Bay Area, they voted on Nov. 16 to dismiss Mr. Altman. OpenAI's outside lawyer advised them to limit what they said publicly about the removal.

Fearing that if Mr. Altman got wind of their plan he would marshal his network against them, they acted quickly and secretly.

When news broke of Mr. Altman's firing on Nov. 17, a text landed in a private WhatsApp group of more than 100 chief executives of Silicon Valley companies, including Meta's Mark Zuckerberg and Dropbox's Drew Houston.

"Sam is out," the text said.

The thread immediately blew up with questions: What did Sam do?

That same query was being asked at Microsoft, OpenAI's biggest investor. As Mr. Altman was being fired, Kevin Scott, Microsoft's chief technology officer, got a call from Mira Murati, OpenAI's chief technology officer. She told him that in a matter of minutes, OpenAI's board would announce that it had canned Mr. Altman and that she was the interim chief.

Mr. Scott immediately asked someone at Microsoft's headquarters in Redmond, Wash., to get Mr. Nadella, the chief executive, out of a meeting he was having with top lieutenants. Shocked, Mr. Nadella called Ms. Murati about the OpenAI board's reasoning, three people with knowledge of the call said. In a statement, OpenAI's board had said only that Mr. Altman was "not consistently candid in his communications" with the board. Ms. Murati didn't have answers.

Mr. Nadella then phoned Mr. D'Angelo, OpenAI's lead independent director. What could Mr. Altman have done, Mr. Nadella asked, to cause the board to act so abruptly? Was there anything nefarious?

"No," Mr. D'Angelo replied, speaking in generalities. Mr. Nadella remained confused.

Shortly after Mr. Altman's removal from OpenAI, a friend reached out to him. It was Brian Chesky, Airbnb's chief executive.

Mr. Chesky asked Mr. Altman what he could do to help. Mr. Altman, who was still in Las Vegas, said he wanted to talk.

The two men had met in 2009 at Y Combinator. When they spoke on Nov. 17, Mr. Chesky peppered Mr. Altman with questions about why OpenAI's board had terminated him. Mr. Altman said he was as uncertain as everyone else.

At the same time, OpenAI's employees were demanding details. The board dialed into a call that afternoon to talk to about 15 OpenAI executives, who crowded into a conference room at the company's offices in a former mayonnaise factory in San Francisco's Mission neighborhood.

The board members said that Mr. Altman had lied to the board, but that they couldn't elaborate for legal reasons.

"This is a coup," one employee shouted.

Jason Kwon, OpenAI's chief strategy officer, accused the board of violating its fiduciary responsibilities. "It cannot be your duty to allow the company to die," he said, according to two people with knowledge of the meeting.

Ms. Toner replied, "The destruction of the company could be consistent with the board's mission."

OpenAI's executives insisted that the board resign that night or they would all leave. Mr. Brockman, 35, OpenAI's president, had already quit.

The support gave Mr. Altman ammunition. He flirted with creating a new start-up, but Mr. Chesky and Ron Conway, a Silicon Valley investor and friend, urged Mr. Altman to reconsider.

"You should be willing to fight back at least a little more," Mr. Chesky told him.

Mr. Altman decided to take back what he felt was his.

After flying back from Las Vegas, Mr. Altman awoke on Nov. 18 in his San Francisco home, with sweeping views of Alcatraz Island. Just before 8 a.m., his phone rang. It was Mr. D'Angelo and Ms. McCauley.

The board members were rattled by the meeting with OpenAI executives the day before. Customers were considering shifting to rival platforms. Google was already trying to poach top talent, two people with knowledge of the efforts said.

Mr. D'Angelo and Ms. McCauley asked Mr. Altman to help stabilize the company.

That day, more than two dozen supporters showed up at Mr. Altman's house to lobby OpenAI's board to reinstate him. They set up laptops on his kitchen's white marble countertops and spread out across his living room. Ms. Murati joined them and told the board that she could no longer be interim chief executive.

To capitalize on the board's vulnerability, Mr. Altman posted on X: "i love openai employees so much." Ms. Murati and dozens of employees replied with emojis of colored hearts.

Yet even as the board considered bringing Mr. Altman back, it wanted concessions. That included bringing on new members who could control Mr. Altman. The board encouraged the addition of Bret Taylor, Twitter's former chairman, who quickly won everyone's approval and agreed to help the parties negotiate. As insurance, the board also sought another interim chief executive in case talks with Mr. Altman broke down.

By then, Mr. Altman had gathered more allies. Mr. Nadella, now confident that Mr. Altman was not guilty of malfeasance, threw Microsoft's weight behind him.

In a call with Mr. Altman that day, Mr. Nadella proposed another idea. What if Mr. Altman joined Microsoft? The $2.8 trillion company had the computing power for anything that he wanted to build.

Mr. Altman now had two options: negotiating a return to OpenAI on his terms or taking OpenAI's talent with him to Microsoft.

By Nov. 19, Mr. Altman was so confident that he would be reappointed chief executive that he and his allies gave the board a deadline: Resign by 10 a.m. or everyone would leave.

Mr. Altman went to OpenAI's office so he could be there when his return was announced. Mr. Brockman also showed up with his wife, Anna. (The couple had married at OpenAI's office in a 2019 ceremony officiated by Dr. Sutskever. The ring bearer was a robotic hand.)

To reach a deal, Ms. Toner, Ms. McCauley and Mr. D'Angelo logged into a day of meetings from their homes. They said they were open to Mr. Altman's return if they could agree on new board members.

Mr. Altman and his camp suggested Penny Pritzker, a secretary of commerce under President Barack Obama; Diane Greene, who founded the software company VMware; and others. But Mr. Altman and the board could not agree, and they bickered over whether he should rejoin OpenAI's board and whether a law firm should conduct a review of his leadership.

With no compromise in sight, board members told Ms. Murati that evening that they were naming Emmett Shear, a founder of Twitch, a video-streaming service owned by Amazon, as interim chief executive. Mr. Shear was outspoken about developing A.I. slowly and safely.

Mr. Altman left OpenAI's office in disbelief. "I'm going to Microsoft," he told Mr. Chesky and others.

That night, Mr. Shear visited OpenAI's offices and convened an employee meeting. The company's Slack channel lit up with emojis of a middle finger.

Only about a dozen workers showed up, including Dr. Sutskever. In the lobby, Anna Brockman approached him in tears. She tugged his arm and urged him to reconsider Mr. Altman's removal. He stood stone-faced.

At 4:30 a.m. on Nov. 20, Mr. D'Angelo was awakened by a phone call from a frightened OpenAI employee. If Mr. D'Angelo didn't step down from the board in the next 30 minutes, the employee said, the company would collapse.

Mr. D'Angelo hung up. Over the past few hours, he realized, things had worsened.

Just before midnight, Mr. Nadella had posted on X that he was hiring Mr. Altman and Mr. Brockman to lead a lab at Microsoft. He had invited other OpenAI employees to join.

That morning, more than 700 of OpenAI's 770 employees had also signed a letter saying they might follow Mr. Altman to Microsoft unless the board resigned.

One name on the letter stood out: Dr. Sutskever, who had changed sides. "I deeply regret my participation in the board's actions," he wrote on X that morning.

OpenAI's viability was in question. The board members had little choice but to negotiate.

To break the impasse, Mr. D'Angelo and Mr. Altman talked the next day. Mr. D'Angelo suggested former Treasury Secretary Lawrence H. Summers, a professor at Harvard, for the board. Mr. Altman liked the idea.

Mr. Summers, from his Boston-area home, spoke with Mr. D'Angelo, Mr. Altman, Mr. Nadella and others. Each probed him for his views on A.I. and management, while he asked about OpenAI's tumult. He said he wanted to be sure that he could play the role of a broker.

Mr. Summers's addition pushed Mr. Altman to abandon his demand for a board seat and agree to an independent investigation of his leadership and dismissal.

By late Nov. 21, they had a deal. Mr. Altman would return as chief executive, but not to the board. Mr. Summers, Mr. D'Angelo and Mr. Taylor would be board members, with Microsoft eventually joining as a nonvoting observer. Ms. Toner, Ms. McCauley and Dr. Sutskever would leave the board.

This week, Mr. Altman and some of his advisers were still fuming. They wanted his name cleared.

"Do u have a plan B to stop the postulation about u being fired it's not healthy and it's not true!!!" Mr. Conway texted Mr. Altman.

Mr. Altman said he was working with OpenAI's board: "They really want silence but i think important to address soon."

Nico Grant contributed reporting from San Francisco. Susan Beachy contributed research.

Here is the original post:
Inside OpenAI's Crisis Over the Future of Artificial Intelligence - The New York Times