Archive for the ‘Artificial Intelligence’ Category

OpenAI Whistle-Blowers Describe Reckless and Secretive Culture – The New York Times

A group of OpenAI insiders is blowing the whistle on what they say is a culture of recklessness and secrecy at the San Francisco artificial intelligence company, which is racing to build the most powerful A.I. systems ever created.

The group, which includes nine current and former OpenAI employees, has rallied in recent days around shared concerns that the company has not done enough to prevent its A.I. systems from becoming dangerous.

The members say OpenAI, which started as a nonprofit research lab and burst into public view with the 2022 release of ChatGPT, is putting a priority on profits and growth as it tries to build artificial general intelligence, or A.G.I., the industry term for a computer program capable of doing anything a human can.

They also claim that OpenAI has used hardball tactics to prevent workers from voicing their concerns about the technology, including restrictive nondisparagement agreements that departing employees were asked to sign.

"OpenAI is really excited about building A.G.I., and they are recklessly racing to be the first there," said Daniel Kokotajlo, a former researcher in OpenAI's governance division and one of the group's organizers.

The group published an open letter on Tuesday calling for leading A.I. companies, including OpenAI, to establish greater transparency and more protections for whistle-blowers.

Read the rest here:
OpenAI Whistle-Blowers Describe Reckless and Secretive Culture - The New York Times

OpenAI, Anthropic and Google DeepMind workers warn of AI's dangers – The Washington Post

A handful of current and former employees at OpenAI and other prominent artificial intelligence companies warned that the technology poses grave risks to humanity in a Tuesday letter, calling on companies to implement sweeping changes to ensure transparency and foster a culture of public debate.

The letter, signed by 13 people including current and former employees at Anthropic and Google's DeepMind, said AI can exacerbate inequality, increase misinformation, and allow AI systems to become autonomous and cause significant death. Though these risks could be mitigated, corporations in control of the software have strong financial incentives to limit oversight, they said.

Because AI is only loosely regulated, accountability rests on company insiders, the employees wrote, calling on corporations to lift nondisclosure agreements and give workers protections that allow them to anonymously raise concerns.

The move comes as OpenAI faces a staff exodus. Many critics have seen prominent departures, including those of OpenAI co-founder Ilya Sutskever and senior researcher Jan Leike, as a rebuke of company leaders, who some employees argue chase profit at the expense of making OpenAI's technologies safer.

Daniel Kokotajlo, a former employee at OpenAI, said he left the start-up because of the company's disregard for the risks of artificial intelligence.

"I lost hope that they would act responsibly, particularly as they pursue artificial general intelligence," he said in a statement, referencing a hotly contested term referring to computers matching the power of human brains.

"They and others have bought into the 'move fast and break things' approach, and that is the opposite of what is needed for technology this powerful and this poorly understood," Kokotajlo said.

Liz Bourgeois, a spokesperson at OpenAI, said the company agrees that "rigorous debate is crucial given the significance of this technology." Representatives from Anthropic and Google did not immediately reply to a request for comment.

The employees said that absent government oversight, AI workers are the few people who can hold corporations accountable. They said that they are hamstrung by broad confidentiality agreements and that ordinary whistleblower protections are insufficient because they focus on illegal activity, and the risks that they are warning about are not yet regulated.

The letter called for AI companies to commit to four principles to allow for greater transparency and whistleblower protections. Those principles are a commitment to not enter into or enforce agreements that prohibit criticism of risks; a call to establish an anonymous process for current and former employees to raise concerns; supporting a culture of criticism; and a promise to not retaliate against current and former employees who share confidential information to raise alarms after other processes have failed.

The Washington Post reported in December that senior leaders at OpenAI raised fears about retaliation from CEO Sam Altman, warnings that preceded the chief's temporary ouster. In a recent podcast interview, former OpenAI board member Helen Toner said part of the nonprofit's decision to remove Altman as CEO late last year was his lack of candid communication about safety.

"He gave us inaccurate information about the small number of formal safety processes that the company did have in place, meaning that it was basically just impossible for the board to know how well those safety processes were working," she told The TED AI Show in May.

The letter was endorsed by AI luminaries including Yoshua Bengio and Geoffrey Hinton, who are considered "godfathers" of AI, and renowned computer scientist Stuart Russell.

See more here:
OpenAI, Anthropic and Google DeepMind workers warn of AI's dangers - The Washington Post

The impact of artificial intelligence in healthcare: opportunities and challenges – LLYC

As humanity evolves, new technologies are constantly being created. The 1990s were marked by the popularization of the internet, web browsers, and cell phones. In 2007, Apple's launch of the iPhone revolutionized mobile technology and smartphones, and during this same period cloud computing and social networks emerged. Currently, ChatGPT, launched two years ago, is considered one of the greatest recent technological advancements, popularizing artificial intelligence (AI), a technology that has driven significant advances in areas such as healthcare, transforming diagnoses, treatments, and data management. And, like everything revolutionary, it raises new challenges, such as privacy and data security.

According to a mapping conducted in 2023 by the National Association of Private Hospitals (Anahp) and the Brazilian Association of Health Startups (ABSS), which interviewed representatives from hospitals across all regions of Brazil except the North region, 62.5% of institutions claimed to use AI in some way. Of this total, 10% use AI to support clinical decision-making and 8% in the analysis of medical images. AI has improved accuracy in image diagnosis, helping to detect diseases such as cancer, cardiovascular diseases, and eye diseases. AI-based tools, such as those developed by Google Health, can analyze X-rays, CT scans, and MRIs with high precision.

Regarding early disease detection, machine learning algorithms have been used to identify early signs of disease in medical images, often with accuracy comparable or superior to that of human radiologists. In addition, other exams use AI to assess the likelihood that someone has cancer or to identify tumors at early stages, increasing cure rates and enabling more specific, less aggressive treatments, sometimes at lower cost.

Another opportunity that goes hand in hand with AI and that is already being widely used is the approach to personalized medicine, which is based on the analysis of large data sets, including genomic information, clinical data, medical history, and test results. AI can not only process and analyze this data quickly and efficiently, identifying patterns and correlations that may not be apparent to humans, but also help doctors select the most appropriate and personalized treatment for each patient. This includes predicting the effectiveness of different therapies and identifying potential side effects.

Challenges

The advances and transformations generated by AI are undeniable, but at the same time new challenges and areas of attention can already be observed. The mapping by Anahp and ABSS also surveyed the challenges encountered by the interviewees, which include: engaging open clinical staff remains a barrier to transformation; health insurers do not always keep pace with hospitals' technological evolution; maintaining attention and care for patients outside the hospital is difficult; and data security and trust in the market are still low.

Recommendations

According to the research on quality, patient safety, and the importance of clinical decision support tools conducted in 2023 by Anahp in partnership with Wolters Kluwer, among the 74 responding hospitals, 47.39% consider telemedicine important in the patient care flow. Within this group, 31.08% believe it is a technology applicable in remote patient monitoring, and 43.24% see it as a means of conducting remote training for the clinical team. Thus, telemedicine can be used as a tool to improve clinical staff engagement, as well as maintain patient attention and care both inside and outside the hospital.

In order to enhance data security and market trust, it is essential to implement robust data protection policies and regulations, as well as to promote transparency and accountability in the use of AI in healthcare. At the beginning of 2024, the World Health Organization (WHO) released the Guide for Multimodal AI Models for Health, which provides, among other guidelines, directions for the ethical use of AI in accordance with data protection laws. In Brazil, relevant bodies such as the Ministry of Health and the National Health Surveillance Agency (Anvisa) need to dedicate themselves to creating regulations, possibly based on the released guide, as the country has been a WHO member since the organization's founding. In this way, AI's potential can be fully realized while ensuring user safety.

Giovanna Braga, Healthcare and Advocacy consultant at LLYC Brasil

Read more from the original source:
The impact of artificial intelligence in healthcare: opportunities and challenges - LLYC

Congress wrestles with AI's boost to campaigns, potential misuse – Roll Call

Lawmakers pushing ahead with some of the first bills governing artificial intelligence are confronting old problems as they deal with a new technology.

At a recent Senate committee markup of legislation that would prohibit the distribution of deceptive AI in campaigns for federal office and require disclosure when AI is used, some Republicans espoused support for the measures' ideals while voting against them, citing the potential limits on free speech.

"We have to balance the potential for innovation with the potential for deceptive or fraudulent use," Nebraska Republican Sen. Deb Fischer, ranking member of the Senate Rules and Administration Committee, said at the markup. "On top of that, we can't lose sight of the important protections our Constitution provides for free speech in this country. These two bills do not strike that careful balance."

Political battles over AI are only likely to get more intense as campaigns increasingly rely on it to fine-tune messages and find target audiences, and as others use it to spread disinformation.

The technology is here to stay, proponents say, because AI greatly increases efficiency.

"Campaigns can positively benefit from AI-derived messaging," said Mark Jablonowski, who is president of DSPolitical, a digital advertising company that works with Democratic candidates and for progressive causes, and chief technology officer at its parent, Optimal. "Our clients are using AI successfully to create messaging tracks."

But consultants, lawmakers, and government officials say the same tools that boost campaign efficiency can also spread disinformation or impersonate candidates, causing confusion among voters and likely eroding confidence in the electoral process.

Senate Majority Leader Charles E. Schumer, D-N.Y., echoed those concerns at the Rules markup.

"If deepfakes are everywhere and no one believes the results of the elections, woe is our democracy," he said. "This is so damn serious."

Sen. Amy Klobuchar, D-Minn., the committee's chairwoman, said AI tools have the potential to "turbocharge" the spread of disinformation and deceive voters.

The panel advanced a measure that would prohibit deceptive AI in campaigns for federal office and one that would require disclaimers when AI is used, both on 9-2 votes, with GOP lawmakers casting the opposing votes.

Klobuchar said she would be open to changes to address concerns raised by Republicans.

A third measure, requiring the Election Assistance Commission to develop guidelines on the uses and risks of AI, advanced on an 11-0 vote.

Campaign workers may enter a few prompts into generative AI tools that then spit out 50 or 60 unique messaging tracks, with workers choosing the top three or four that really hit the mark, Jablonowski said in an email. "There are many efficiency gains helping campaigns do more with less and create a more meaningful message, which is very important in politics."

Consultants and digital advertising firms now have access to more than two dozen AI-based tools that assist with various aspects of political campaigns, ranging from those that generate designs, ads and video content to those that produce emails, op-eds and media monitoring platforms, according to Higher Ground Labs, a venture fund that invests in tech platforms to help progressive candidates and causes.

"AI-generated content is becoming most prevalent in political communications, particularly in content generation across images, video, audio, and text," Higher Ground Labs said in a May 23 report. "Human oversight remains critical to ensure quality, accuracy and ethical use," the report said.

The report cited one study that found that using AI tools to generate fundraising emails grew dollars raised per work hour by 350 percent to 440 percent. The tools helped save time without losing quality even when employed by less experienced staffers, the report said.

AI tools also are helping campaigns with audience targeting. In 2023, Boise, Idaho, Mayor Lauren McLean built a target audience group using an AI tool that proved to be more capable in identifying supporters and outperformed standard partisanship models, according to the Higher Ground report.

But even the consultants who rely on these new technologies are aware of the downsides.

"I won't sugarcoat it. As someone who has been in this space for two decades, this is the sort of Venn diagram I focus on," Jablonowski said, referring to the intersection of AI tools and those who might misuse them. "I think we're going to see a lot of good coming from AI this year, and we're going to see significant potential challenges from bad actors. This keeps me up at night."

Beyond legislation, the Federal Communications Commission is looking into new rules that would require disclosures on messages generated using AI tools.

"As artificial intelligence tools become more accessible, the Commission wants to make sure consumers are fully informed when the technology is used," FCC Chair Jessica Rosenworcel said in a May 22 statement. "Today, I've shared with my colleagues a proposal that makes clear consumers have a right to know when AI tools are being used in the political ads they see."

The FCC said the proposed rules are not intended to prohibit content generated by AI but are intended to disclose use of the technology.

States also are racing to pass laws that would require campaigns and candidates to disclose use of AI tools in their messages.

Alabama became the latest state to enact a law criminalizing the use of AI in election campaigning. The measure, passed last month, makes it a misdemeanor for a first offense, and a felony for subsequent violations, to distribute AI-generated deepfakes falsely showing a person saying or doing something they did not.

Florida legislation signed into law by Gov. Ron DeSantis in April likewise would impose prison terms for running AI-generated ads without disclosure.

Several other states have enacted laws requiring disclosure of AI in generating messages and ads and imposing civil penalties for failing to do so.

Deepfake messages are not theoretical. Last week, the FCC issued a proposed $6 million fine to Steve Kramer, a political consultant, for organizing a fake robocall in New Hampshire, which authorities say was received by 20,000 or more voters, in which the artificial-intelligence-doctored voice of President Joe Biden asked them to skip the state's primary in January. Kramer admitted he was behind the call in an interview with CBS News.

New Hampshire Attorney General John Formella charged Kramer with voter suppression, a felony, and misdemeanor charges for impersonation of a candidate. A spokesman for the attorney general's office said Kramer is set to be arraigned in early June.

Jablonowski argued that some bad actors may break the rules regardless of whether laws require disclosure because the payoff might be worth any potential consequences.

"It is particularly concerning that people who use generative AI maliciously are not going to be following the rules no matter what industry and regulators say," Jablonowski said. "Requiring folks to label content as being created by generative AI only works if people follow those rules with fidelity."

One way to stem the spread of fake messages is for social media platforms to curb them, Jablonowski said. Meta Platforms Inc., for example, requires disclosure for ads using AI. And the company has said it will label AI-generated images on Facebook, Instagram and Threads.

Nick Clegg, president of global affairs at Meta, told MIT Technology Review at a May 22 conference that the company has yet to see large-scale use of AI-generated deepfakes on its platforms.

"The interesting thing so far (I stress, so far) is not how much but how little AI-generated content [there is]," Clegg said at the conference.

Tools to detect AI-generated material are not perfect and still evolving, and watermarks or digital signatures indicating AI-generated content can be tampered with, he said.

In addition to AI-generated deepfakes, social media platforms are still grappling with old-fashioned misinformation, Jablonowski said, likely creating confusion and distrust among voters. Despite laws and actions by platforms, people bent on using AI to create confusion "are going to do whatever they think they can get away with," said Jerry McNerney, a senior policy adviser at the law firm Pillsbury Winthrop Shaw Pittman LLP.

McNerney is a former member of Congress who was co-chair of the Congressional Artificial Intelligence Caucus.

"Trying to keep ahead of [such bad actors] with specific prohibitions is going to be a losing battle," McNerney said in an interview, arguing that federal agencies and industry groups may have to come up with standards that are enforceable. "You need something more systemic."

View post:
Congress wrestles with AI's boost to campaigns, potential misuse - Roll Call

Types of Artificial Intelligence That You Should Know in 2024 – Simplilearn

The use and scope of Artificial Intelligence need no formal introduction. Artificial Intelligence is no longer just a buzzword; it has become a reality that is part of our everyday lives. With companies building intelligent machines for diverse applications using AI, it is revolutionizing business sectors like never before. In this article on the types of artificial intelligence, you will learn about the various stages and categories of AI.

Artificial Intelligence is the practice of building intelligent machines from vast volumes of data. Systems learn from past experience and perform human-like tasks, enhancing the speed, precision, and effectiveness of human efforts. AI uses complex algorithms and methods to build machines that can make decisions on their own. Machine Learning and Deep Learning form the core of Artificial Intelligence.

AI is now being used in almost every sector of business.

Now that you know what AI really is, let's look at the different types of artificial intelligence.

Artificial Intelligence can be broadly classified into several types based on capabilities, functionalities, and technologies. Here's an overview of the different types of AI:

Narrow AI (Weak AI): This type of AI is designed to perform a narrow task (e.g., facial recognition, internet searches, or driving a car). Most current AI systems, including those that can play complex games like chess and Go, fall under this category. They operate under a limited, pre-defined range or set of contexts.

General AI (Strong AI): A type of AI endowed with broad, human-like cognitive capabilities, enabling it to tackle new and unfamiliar tasks autonomously. Such a robust AI framework possesses the capacity to discern, assimilate, and apply its intelligence to resolve any challenge without needing human guidance.

Superintelligent AI: This represents a future form of AI in which machines could surpass human intelligence across all fields, including creativity, general wisdom, and problem-solving. Superintelligence is speculative and not yet realized.

Reactive Machines: These AI systems do not store memories or past experiences for future actions. They simply analyze and respond to the situation in front of them. IBM's Deep Blue, which beat Garry Kasparov at chess, is an example.

Limited Memory: These AI systems can make informed and improved decisions by studying the past data they have collected. Most present-day AI applications, from chatbots and virtual assistants to self-driving cars, fall into this category.

Theory of Mind: This is a more advanced type of AI that researchers are still working on. It would entail understanding and remembering emotions, beliefs, and needs, and making decisions based on them. This type requires the machine to truly understand humans.

Self-Aware AI: This represents the future of AI, in which machines would have their own consciousness, sentience, and self-awareness. This type of AI is still theoretical and would be capable of understanding and possessing emotions, which could lead it to form beliefs and desires.

Machine Learning (ML): AI systems capable of self-improvement through experience, without direct programming. The field concentrates on creating software that can independently learn by accessing and utilizing data.

Deep Learning: A subset of ML involving neural networks with many layers. It is used for learning from large amounts of data and is the technology behind voice control in consumer devices, image recognition, and many other applications.

Natural Language Processing (NLP): This AI technology enables machines to understand and interpret human language. It's used in chatbots, translation services, and sentiment analysis applications.

Robotics: This field involves designing, constructing, and operating robots, along with the computer systems that control them, provide sensory feedback, and process information.

Computer Vision: This technology allows machines to interpret the world visually, and it's used in applications such as medical image analysis, surveillance, and manufacturing.

Expert Systems: These AI systems answer questions and solve problems in a specific domain of expertise using rule-based logic.

AI research has successfully developed effective techniques for solving a wide range of problems, from game playing to medical diagnosis.

There are many branches of AI, each with its own focus and set of techniques. As outlined above, essential branches include machine learning, deep learning, natural language processing, robotics, computer vision, and expert systems.

We may be far from creating machines that are self-aware and can solve every problem. But we should focus our efforts on understanding how a machine can train and learn on its own and base its decisions on past experience.

I hope this article helped you understand the different types of artificial intelligence. If you are looking to start your career in Artificial Intelligence and Machine Learning, check out Simplilearn's Post Graduate Program in AI and Machine Learning.

Do you have any questions about this article? If so, please leave them in the comments section of this article on types of artificial intelligence. Our team will help resolve your queries at the earliest!

An AI model is a mathematical model used to make predictions or decisions. As described above, common types include machine learning models, deep learning neural networks, natural language processing models, and rule-based expert systems.

There are two main categories of AI: narrow (weak) AI, which is designed for specific tasks, and general (strong) AI, which would match broad human cognitive abilities.

The father of AI is John McCarthy. He is a computer scientist who coined the term "artificial intelligence" in 1955. McCarthy is also credited with developing the first AI programming language, Lisp.

Read more:
Types of Artificial Intelligence That You Should Know in 2024 - Simplilearn