Archive for the ‘Artificial General Intelligence’ Category

Elon Musk Withdraws His Lawsuit Against OpenAI and Sam Altman – The New York Times

Elon Musk withdrew his lawsuit on Tuesday against OpenAI, the maker of the online chatbot ChatGPT, a day before a state judge in San Francisco was set to consider whether it should be dismissed.

The suit, filed in February, had accused the artificial intelligence start-up and two of its founders, Sam Altman and Greg Brockman, of breaching OpenAI's founding contract by prioritizing commercial interests over the public good.

A multibillion-dollar partnership that OpenAI signed with Microsoft, Mr. Musk's suit claimed, represented an abandonment of the company's pledge to carefully develop A.I. and make the technology publicly available.

Mr. Musk had argued that the founding contract said that the organization should instead be focused on building artificial general intelligence, or A.G.I. (a machine that can do anything the brain can do) for the benefit of humanity.

OpenAI, based in San Francisco, had called for a dismissal days after Mr. Musk filed the suit. He could still refile the suit in California or another state.

Mr. Musk did not immediately respond to a request for comment, and OpenAI declined to comment.

Mr. Musk helped found OpenAI in 2015 along with Mr. Altman, Mr. Brockman and several young A.I. researchers. He saw the research lab as a response to A.I. work being done at the time by Google. Mr. Musk believed Google and its co-founder, Larry Page, were not appropriately concerned with the risks that A.I. presented to humanity.

Mr. Musk parted ways with OpenAI after a power struggle in 2018. The company later became an A.I. technology leader, creating ChatGPT, a chatbot that can generate text and answer questions in humanlike prose.

Mr. Musk founded his own A.I. company, xAI, last year, while repeatedly claiming that OpenAI was not focused enough on the dangers of the technology.

He filed his lawsuit weeks after members of the OpenAI board unexpectedly fired Mr. Altman, saying he could no longer be trusted with the company's mission to build A.I. for the good of humanity. Mr. Altman was reinstated after five days of negotiations with the board, and soon cemented his control over the company, reclaiming a seat on the board.

Late last month, OpenAI announced that it had started working on a new artificial intelligence model that would succeed the GPT-4 technology that drives ChatGPT. The company said that it expected the new model to bring the next level of capabilities as it strove to build A.G.I.

The company also said it was creating a new Safety and Security Committee to explore how it should handle the risks posed by the new model and future technologies.

Read more:

Elon Musk Withdraws His Lawsuit Against OpenAI and Sam Altman - The New York Times

Staying Ahead of the AI Train – ATD

EXL: 2024 BEST Award Winner, #19


If you want to envision life in the fastest of lanes, imagine sitting in the driver's seat of Sanjay Dutt, global head of learning and capability development for EXL. You may soon be following his lead.

A major provider of data and artificial-intelligence-led digital operations, solutions, and analytics services, EXL is modernizing its portfolio to embrace the latest advances in data, domain excellence, and AI solutions. In the process, it has gained global expertise in technology's most dynamic trends, including generative AI, cloud technology, data, and analytics.

It is Dutt's job to make the necessary talent transformation succeed by upskilling the company's workforce and radically changing the culture of more than 54,000 employees, all while staying ahead of technological whirlwinds. From his base in Dublin, Ireland, Dutt heads a team of 85 capability development and HR professionals who are engaged in EXL offices around the world.

He says success in his role requires an unwavering commitment to building a workforce that is not only well prepared for AI and the digital age, but one that can also drive innovation, customer experience, and productivity and efficiency gains.

The initiative includes training programs on cutting-edge technologies to align employee capabilities with emerging industry demands. Through a comprehensive upskilling and reskilling initiative, Dutt's team has created a culture of continuous learning.

Achievements to date include higher value delivery to clients complemented by high levels of client satisfaction. "Our strategic talent development efforts not only addressed skills gaps but transformed it into a catalyst for growth and excellence," Dutt shares.

The campaign equipped more than 7,000 digital practitioners with skills and knowledge needed to excel within the burgeoning landscape. In specific domains, such as AI, cloud, data management, machine learning, and computer vision, the learning team developed more than 650 digital experts who are now industry leaders in their respective areas, Dutt says.

EXL rolled out its Future Ready Talent Strategy in 2021, as AI and generative AI began making waves in the marketplace. The company evaluated the current capabilities of the EXL workforce, focusing on where their professions may end up within four to five years. The learning team consulted leading experts in the field and at top business schools for insight. The resulting feedback, Dutt states, "created huge excitement within the company."

Within an accelerated span of time, the team upskilled EXL to be ready for the generative AI craze, a considerable feat, says Dutt, for an enterprise of its size.

A critical focus that emerged was to drive practitioners' capabilities around identifying specific use cases for enterprise transformation and orchestrating it end to end, from strategy and design to deployment, change management, and results. In response, Dutt's team approached leading data and AI experts and business schools for their insights regarding AI's future impacts on client business.

Senior leaders are the driving force behind the employees' empowerment, Dutt notes. They furnish essential resources, from cutting-edge technologies to a rich array of learning channels, including online courses and interactive workshops, ensuring that individuals acquire new capabilities effectively.

"Talent development is front and center of any strategic conversation among our company's leaders," Dutt states.

"We rely on people who can listen, who are comfortable with AI, who know how to use data, and are not just managing services," he says. "When that happens, you change the culture of the company."

Dutt's advice for TD professionals everywhere is to prepare for the emerging trends within their own organizations. "Since technology keeps changing, one of the biggest challenges faced by companies is the difficulty of upskilling their people at scale without fully employing learning technologies to their fullest," he warns.

Dutt also urges TD professionals to climb aboard the AI train if they haven't already done so. "Gen AI and [large language models] are making AI accessible for widespread adoption," he stresses. "AI will soon start shaping strategy, operations, and people's lives, a profound change." Within EXL, Dutt and other senior leaders lead by example by conducting their own research and meeting personally with AI pioneers.

The business world has not fully realized just how profound that change will be. "There's a lot in this emerging field that's not generally known, from the intricacies of 'narrow AI' to the complexities of general intelligence," he states. At a minimum, he cautions that TD practitioners will be preparing employees to transition into higher-value-added roles as AI assumes the mantle of repetitive tasks.

View the entire list of 2024 BEST Award winners.

Read the original:

Staying Ahead of the AI Train - ATD

BEYOND LOCAL: ‘Noise’ in the machine: Human differences in judgment lead to problems for AI – The Longmont Leader

Many people understand the concept of bias at some intuitive level. In society, and in artificial intelligence systems, racial and gender biases are well documented.

If society could somehow remove bias, would all problems go away? The late Nobel laureate Daniel Kahneman, who was a key figure in the field of behavioral economics, argued in his last book that bias is just one side of the coin. Errors in judgments can be attributed to two sources: bias and noise.

Bias and noise both play important roles in fields such as law, medicine and financial forecasting, where human judgments are central. In our work as computer and information scientists, my colleagues and I have found that noise also plays a role in AI.

Statistical noise

Noise in this context means variation in how people make judgments of the same problem or situation. The problem of noise is more pervasive than initially meets the eye. A seminal study dating all the way back to the Great Depression found that different judges gave different sentences for similar cases.

Worryingly, sentencing in court cases can depend on things such as the temperature and whether the local football team won. Such factors, at least in part, contribute to the perception that the justice system is not just biased but also arbitrary at times.

Other examples: Insurance adjusters might give different estimates for similar claims, reflecting noise in their judgments. Noise is likely present in all manner of contests, ranging from wine tastings to local beauty pageants to college admissions.

Noise in the data

On the surface, it doesn't seem likely that noise could affect the performance of AI systems. After all, machines aren't affected by weather or football teams, so why would they make judgments that vary with circumstance? On the other hand, researchers know that bias affects AI, because it is reflected in the data that the AI is trained on.

For the new spate of AI models like ChatGPT, the gold standard is human performance on general intelligence problems such as common sense. ChatGPT and its peers are measured against human-labeled commonsense datasets.

Put simply, researchers and developers can ask the machine a commonsense question and compare it with human answers: "If I place a heavy rock on a paper table, will it collapse? Yes or No." If there is high agreement between the two (in the best case, perfect agreement), the machine is approaching human-level common sense, according to the test.

So where would noise come in? The commonsense question above seems simple, and most humans would likely agree on its answer, but there are many questions where there is more disagreement or uncertainty: "Is the following sentence plausible or implausible? My dog plays volleyball." In other words, there is potential for noise. It is not surprising that interesting commonsense questions would have some noise.

But the issue is that most AI tests don't account for this noise in experiments. Intuitively, questions generating human answers that tend to agree with one another should be weighted higher than if the answers diverge (in other words, where there is noise). Researchers still don't know whether or how to weigh AI's answers in that situation, but a first step is acknowledging that the problem exists.
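To make that intuition concrete, here is a minimal sketch of agreement-weighted scoring. It is my own illustration, not code from any of the benchmarks or studies mentioned here, and the data format and function names are hypothetical: each question carries several independent human labels, and a model answer earns credit in proportion to how strongly the humans agree on the majority label.

from collections import Counter

def majority_and_agreement(labels):
    # Majority human label and the fraction of labelers who chose it.
    counts = Counter(labels)
    label, votes = counts.most_common(1)[0]
    return label, votes / len(labels)

def agreement_weighted_accuracy(items):
    # items: list of (model_answer, human_labels) pairs (hypothetical format).
    num = den = 0.0
    for model_answer, human_labels in items:
        majority, weight = majority_and_agreement(human_labels)
        num += weight * (model_answer == majority)  # high-agreement questions count more
        den += weight
    return num / den if den else 0.0

items = [
    ("No",  ["No", "No", "No", "No"]),    # clear-cut question: full weight
    ("Yes", ["Yes", "No", "Yes", "No"]),  # noisy question: half weight
]
print(agreement_weighted_accuracy(items))

The point is not this particular weighting scheme, which is only one of many possibilities, but that a scoring rule has to say something about questions on which the human labelers themselves disagree.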

Tracking down noise in the machine

Theory aside, the question still remains whether all of the above is hypothetical or if in real tests of common sense there is noise. The best way to prove or disprove the presence of noise is to take an existing test, remove the answers and get multiple people to independently label them, meaning provide answers. By measuring disagreement among humans, researchers can know just how much noise is in the test.

The details behind measuring this disagreement are complex, involving significant statistics and math. Besides, who is to say how common sense should be defined? How do you know the human judges are motivated enough to think through the question? These issues lie at the intersection of good experimental design and statistics. Robustness is key: One result, test or set of human labelers is unlikely to convince anyone. As a pragmatic matter, human labor is expensive. Perhaps for this reason, there haven't been any studies of possible noise in AI tests.
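As a rough sketch of what measuring that disagreement can look like, the snippet below estimates noise as the average pairwise disagreement between independent labelers on the same questions. It is an illustration over made-up data, not the authors' actual analysis, which involves considerably more statistics.

from itertools import combinations

def pairwise_disagreement(labels):
    # Fraction of labeler pairs that disagree on a single question.
    pairs = list(combinations(labels, 2))
    return sum(a != b for a, b in pairs) / len(pairs)

def noise_estimate(label_sets):
    # Average pairwise disagreement across questions.
    # label_sets: one list of independent human labels per question (hypothetical).
    return sum(pairwise_disagreement(ls) for ls in label_sets) / len(label_sets)

label_sets = [
    ["Yes", "Yes", "Yes", "Yes"],  # no disagreement
    ["Yes", "No", "Yes", "No"],    # heavy disagreement
    ["No", "No", "Yes", "No"],     # one dissenter
]
print(noise_estimate(label_sets))  # about 0.39 for this toy data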

To address this gap, my colleagues and I designed such a study and published our findings in Nature Scientific Reports, showing that even in the domain of common sense, noise is inevitable. Because the setting in which judgments are elicited can matter, we did two kinds of studies. One type of study involved paid workers from Amazon Mechanical Turk, while the other study involved a smaller-scale labeling exercise in two labs at the University of Southern California and the Rensselaer Polytechnic Institute.

You can think of the former as a more realistic online setting, mirroring how many AI tests are actually labeled before being released for training and evaluation. The latter is more of an extreme, guaranteeing high quality but at much smaller scales. The question we set out to answer was: how inevitable is noise, and is it just a matter of quality control?

The results were sobering. In both settings, even on commonsense questions that might have been expected to elicit high, even universal, agreement, we found a nontrivial degree of noise. The noise was high enough that we inferred that between 4% and 10% of a system's performance could be attributed to noise.

To emphasize what this means, suppose I built an AI system that achieved 85% on a test, and you built an AI system that achieved 91%. Your system would seem to be a lot better than mine. But if there is noise in the human labels that were used to score the answers, then we're not sure anymore that the 6% improvement means much. For all we know, there may be no real improvement.
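A toy simulation, offered only to illustrate the argument and not drawn from the study itself, makes the concern concrete. It scores two hypothetical systems with true accuracies of 85% and 91% against gold labels that are themselves wrong some fraction of the time; with noisy labels, both measured scores drop and each carries a few points of run-to-run spread, so a several-point gap on a modest test set no longer cleanly separates the systems. All parameters here are assumed for illustration.

import random
import statistics

def measured_accuracy(true_acc, label_noise, n, rng):
    # Score a system whose true accuracy is true_acc on n questions
    # whose gold labels are wrong with probability label_noise.
    hits = 0
    for _ in range(n):
        model_right = rng.random() < true_acc      # model vs. the true answer
        label_right = rng.random() > label_noise   # gold label vs. the true answer
        hits += (model_right == label_right)       # credit only when they match
    return hits / n

rng = random.Random(0)
for noise in (0.0, 0.07):
    a = [measured_accuracy(0.85, noise, 500, rng) for _ in range(10)]
    b = [measured_accuracy(0.91, noise, 500, rng) for _ in range(10)]
    print(noise,
          round(statistics.mean(a), 3),
          round(statistics.mean(b), 3),
          "run-to-run spread about ±", round(2 * statistics.stdev(a), 3))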

On AI leaderboards, where large language models like the one that powers ChatGPT are compared, performance differences between rival systems are far narrower, typically less than 1%. As we show in the paper, ordinary statistics do not really come to the rescue for disentangling the effects of noise from those of true performance improvements.

Noise audits

What is the way forward? Returning to Kahneman's book, he proposed the concept of a noise audit for quantifying and ultimately mitigating noise as much as possible. At the very least, AI researchers need to estimate what influence noise might be having.

Auditing AI systems for bias is somewhat commonplace, so we believe that the concept of a noise audit should naturally follow. We hope that this study, as well as others like it, leads to their adoption.

Mayank Kejriwal, Research Assistant Professor of Industrial & Systems Engineering, University of Southern California

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Follow this link:

BEYOND LOCAL: 'Noise' in the machine: Human differences in judgment lead to problems for AI - The Longmont Leader

OpenAI disbands its AI risk mitigation team –

OpenAI on Friday said that it has disbanded a team devoted to mitigating the long-term dangers of super-smart artificial intelligence (AI).

It began dissolving the so-called superalignment group weeks ago, integrating members into other projects and research, the San Francisco-based firm said.

OpenAI co-founder Ilya Sutskever and team co-leader Jan Leike announced their departures from the company during the week.

The dismantling of a team focused on keeping sophisticated AI under control comes as such technology faces increased scrutiny from regulators and fears mount regarding its dangers.

"OpenAI must become a safety-first AGI [artificial general intelligence] company," Leike wrote on X on Friday.

Leike called on all OpenAI employees to act with the gravitas warranted by what they are building.

OpenAI CEO Sam Altman responded to Leike's post with one of his own.

Altman thanked Leike for his work at the company and said he was sad to see him leave.

"He's right, we have a lot more to do," Altman said. "We are committed to doing it."

Altman promised more on the topic in the coming days.

Sutskever said on X that he was leaving after almost a decade at OpenAI, the trajectory of which has been nothing short of miraculous.

"I'm confident that OpenAI will build AGI that is both safe and beneficial," he added, referring to computer technology that seeks to perform as well as or better than human cognition.

Sutskever, who is also OpenAI's chief scientist, sat on the board that voted to remove Altman in November last year.

The ousting threw the company into a tumult, as staff and investors rebelled.

The OpenAI board ended up hiring Altman back a few days later.

OpenAI earlier last week released a higher-performing and even more human-like version of the AI technology that underpins ChatGPT, which was made free to all users.

"It feels like AI from the movies," Altman said in a blog post.

Altman has previously pointed to Scarlett Johansson's character in the movie Her, in which she voices an AI-based virtual assistant dating a man, as an inspiration for where he would like AI interactions to go.

The day would come when digital brains would be as good as or even better than our own, Sutskever said at a talk during a TED AI summit in San Francisco late last year.

"AGI will have a dramatic impact on every area of life," Sutskever added.


View original post here:

OpenAI disbands its AI risk mitigation team -

Machine Learning Researcher Links OpenAI to Drug-Fueled Sex Parties – Futurism

A machine learning researcher is claiming to have knowledge of kinky drug-fueled orgies in Silicon Valley's storied hacker houses and appears to be linking those parties, and the culture surrounding them, to OpenAI.

"The thing about being active in the hacker house scene is you are accidentally signing up for a career as a shadow politician in the Silicon Valley startup scene," begins the lengthy X-formerly-Twitter post by Sonia Joseph, a former Princeton ML researcher who's now affiliated with the deep learning institute Mila Quebec.

What follows is a vague and anecdotal diatribe about the "dark side" of startup culture made particularly explosive by Joseph's reference to so-called "consensual non-consent" sex parties that she says took place within the artificial general intelligence (AGI) enthusiast community in the valley.

The jumping-off point, as far as we can tell, stems from a thread announcing that OpenAI superalignment chief Jan Leike was leaving the company as it dissolved his team, which was meant to prevent advanced AI from going rogue.

At the end of his X thread, Leike encouraged remaining employees to "feel the AGI," a phrase that was also ascribed to newly-exited OpenAI cofounder Ilya Sutskever during seemingly cultish rituals revealed in an Atlantic exposé last year, but nothing in that piece, nor in the superalignment chief's tweets, suggests anything having to do with sex, drugs, or kink.

Still, Joseph addressed her second viral memo-length tweet "to the journalists contacting me about the AGI consensual non-consensual (cnc) sex parties." And in the post, she said she'd witnessed "some troubling things" in Silicon Valley's "community house scene" when she was in her early 20s and new to the tech industry.

"It is not my place to speak as to why Jan Leike and the superalignment team resigned. I have no idea why and cannot make any claims," wrote the researcher, who is not affiliated with OpenAI. "However, I do believe my cultural observations of the SF AI scene are more broadly relevant to the AI industry."

"I don't think events like the consensual non-consensual (cnc) sex parties and heavy LSD use of some elite AI researchers have been good for women," Joseph continued. "They create a climate that can be very bad for female AI researchers... I believe they are somewhat emblematic of broader problems: a coercive climate that normalizes recklessness and crossing boundaries, which we are seeing playing out more broadly in the industry today. Move fast and break things, applied to people."

While she said she doesn't think there's anything generally wrong with "sex parties and heavy LSD use," she also charged that the culture surrounding these alleged parties "leads to some of the most coercive and fucked up social dynamics that I have ever seen."

"I have seen people repeatedly get shut down for pointing out these problems," Joseph wrote. "Once, when trying to point out these problems, I had three OpenAI and Anthropic researchers debate whether I was mentally ill on a Google document. I have no history of mental illness; and this incident stuck with me as an example of blindspots/groupthink."

"It's likely these problems are not really on OpenAI but symptomatic of a much deeper rot in the Valley," she added. "I wish I could say more, but probably shouldn't."

Overall, it's hard to make heads or tails of these claims. We've reached out to Joseph and OpenAI for more info.

"I'm not under an NDA. I never worked for OpenAI," Joseph wrote. "I just observed the surrounding AI culture through the community house scene in SF, as a fly-on-the-wall, hearing insider information and backroom deals, befriending dozens of women and allies and well-meaning parties, and watching many of them get burned."

More on OpenAI: Sam Altman Clearly Freaked Out by Reaction to News of OpenAI Silencing Former Employees

See the rest here:

Machine Learning Researcher Links OpenAI to Drug-Fueled Sex Parties - Futurism