Archive for the ‘Artificial General Intelligence’ Category

MIT Professor Compares Ignoring AGI to Don’t Look Up – Futurism

MIT professor and AI researcher Max Tegmark is pretty stressed out about the potential impact of artificial general intelligence (AGI) on human society. In a new essay for Time, he rings the alarm bells, painting a pretty dire picture of a future determined by an AI that can outsmart us.

"Sadly, I now feel that we're living the movie 'Don't Look Up' for another existential threat: unaligned superintelligence," Tegmark wrote, comparing what he perceives to be a lackadaisical response to a growing AGI threat to director Adam McKay's popular climate change satire.

For those who haven't seen it, "Don't Look Up" is a fictional story about a team of astronomers who, after discovering that a species-destroying asteroid is hurtling towards Earth, set out to warn the rest of human society. But to their surprise and frustration, a massive chunk of humanity doesn't care.

The asteroid is one big metaphor for climate change. But Tegmark thinks that the story can apply to the risk of AGI as well.

"A recent survey showed that half of AI researchers give AI at least ten percent chance of causing human extinction," the researcher continued. "Since we have such a long history of thinking about this threat and what to do about it, from scientific conferences to Hollywood blockbusters, you might expect that humanity would shift into high gear with a mission to steer AI in a safer direction than out-of-control superintelligence."

"Think again," he added, "instead, the most influential responses have been a combination of denial, mockery, and resignation so darkly comical that it's deserving of an Oscar."

In short, according to Tegmark, AGI is a very real threat, and human society isn't doing nearly enough to stop it or, at the very least, isn't ensuring that AGI will be properly aligned with human values and safety.

And just like in McKay's film, humanity has two choices: begin to make serious moves to counter the threat or, if things go the way of the film, watch our species perish.

Tegmark's claim is pretty provocative, especially considering that a lot of experts out there either don't agree that AGI will ever actually materialize, or argue that it'll take a very long time to get there, if ever. Tegmark does address this disconnect in his essay, although his argument arguably isn't the most convincing.

"I'm often told that AGI and superintelligence won't happen because its impossible: human-level Intelligence is something mysterious that can only exist in brains," Tegmark writes. "Such carbon chauvinism ignores a core insight from the AI revolution: that intelligence is all about information processing, and it doesnt matter whether the information is processed by carbon atoms in brains or by silicon atoms in computers."

Tegmark goes as far as to claim that superintelligence "isn't a long-term issue," but is even "more short-term than e.g. climate change and most people's retirement planning." To support his theory, the researcher pointed to a recent Microsoft study arguing that OpenAI's large language model GPT-4 is already showing "sparks" of AGI and a recent talk given by deep learning researcher Yoshua Bengio.

While the Microsoft study isn't peer-reviewed and arguably reads more like marketing material, Bengio's warning is much more compelling. His call to action is much more grounded in what we don't know about the machine learning programs that already exist, as opposed to making big claims about tech that does not yet exist.

To that end, the current crop of less sophisticated AIs already poses a threat, from misinformation-spreading synthetic content to the threat of AI-powered weaponry.

And as Tegmark further notes, the industry at large hasn't exactly done an amazing job so far of ensuring slow and safe development; he argues that we shouldn't have taught AI how to code, connected it to the internet, or given it a public API.

Ultimately, whether and when AGI will come to fruition is still unclear.

While there's certainly a financial incentive for the field to keep moving quickly, a lot of experts agree that we should slow down the development of more advanced AIs, regardless of whether AGI is around the corner or still light-years away.

And in the meantime, Tegmark argues that we should agree there's a very real threat in front of us before it's too late.

"Although humanity is racing toward a cliff, we're not there yet, and there's still time for us to slow down, change course and avoid falling off and instead enjoying the amazing benefits that safe, aligned AI has to offer," Tegmark writes. "This requires agreeing that the cliff actually exists and falling off of it benefits nobody."

"Just look up!" he added.

More on AI: Elon Musk Says He's Building a "Maximum Truth-Seeking AI"

Meet the Greta Thunberg of AI – POLITICO

With help from Derek Robertson and Sam Sutton

Sneha Revanur speaking in 2022. | Getty Images for Unfinished Live

Parents just don't understand the risks of generative artificial intelligence. At least according to a group of Zoomers grappling with this new force that their elders are struggling to regulate.

While young people often bear the brunt of new technologies, and must live with their long-term consequences, no youth movement has emerged around tech regulation that matches the scope or power of youth climate and gun control activism.

That's starting to change, though, especially as concerns about AI mount.

Earlier today, a consortium of 10 youth organizations sent a letter to congressional leaders and the White House Office of Science and Technology Policy calling on them to include more young people on AI oversight and advisory boards.

The letter, provided first to DFD, was spearheaded by Sneha Revanur, a first-year student at Williams College in Massachusetts and the founder of Encode Justice, an AI-focused civil society group. As a charismatic teenager who is not shy about condemning a generation of policymakers who are "out of touch," as she put it in an interview, she's the closest thing the emerging movement to rein in AI has to its own Greta Thunberg. Thunberg began her rise as a global icon of the climate movement in 2018, at the age of 15, with weekly solo protests outside of Sweden's parliament.

A native of San Jose in the heart of Silicon Valley, Revanur also got her start in tech advocacy as a 15-year-old. In 2020, she volunteered for the successful campaign to defeat California's Proposition 25, which would have enshrined the replacement of cash bail with a risk-based algorithmic system.

Encode Justice emerged from that ballot campaign with a focus on the use of AI algorithms in surveillance and the criminal justice system. It currently boasts a membership of 600 high school and college students across 30 countries. Revanur said the group's primary source of funding currently comes from the Omidyar Network, a self-described social change venture led by left-leaning eBay founder Pierre Omidyar.

Revanur has become increasingly preoccupied with generative AI as it sends ripples through societies across the world. The "aha" moment came when she read that February New York Times article about a seductive, conniving AI chatbot. In recent weeks, concerns have only grown about the potential for generative AI to deceive and manipulate people, as well as the broader risks posed by the potential development of artificial general intelligence.

"We were somewhat skeptical about the risks of generative AI," Revanur says. "We see this open letter as a marking point that we're pivoting."

The letter is born in part out of concerns that older policymakers are ill-prepared to handle this rapidly developing technology. Revanur said that when she meets with congressional offices, she is struck by the lack of tech-specific expertise. "We're almost always speaking to a judiciary staffer or a commerce staffer." State legislatures, she said, tend to be worse.

One sign of the generational tension at play: Today's letter calls on policymakers to improve technical literacy in government.

The letter comes at a time when the fragmented youth tech movement is starting to coalesce, according to Zamaan Qureshi, co-chair of the Design It For Us Coalition, a signatory of the AI letter.

"The groups that are out there have been working in a disjointed way," Qureshi, a junior at American University in Washington, said. The coalition grew out of a successful campaign last year in support of the California Age-Appropriate Design Code, a state law governing online privacy for children.

To improve coordination on tech safety issues, Qureshi and a group of fellow activists launched the Design It For Us Coalition at the end of March with a kickoff call featuring advisory board member Frances Haugen, the Facebook whistleblower. The coalition is currently focused on social media, which is often blamed for a teen mental health crisis, Qureshi said.

But it's the urgency of AI that prompted today's letter.

So, is this the issue that will catapult youth tech activists to the same visibility and influence as other youth movements?

Qureshi said he and his fellow organizers have been in touch with youth climate activists and with organizers from March for Our Lives, the student-led gun control organization.

And the tech activists are looking to throw their weight around in 2024.

Revanur, who praised President Joe Biden for prioritizing tech regulation, said Encode Justice plans to make an endorsement in the upcoming presidential race, and is watching to see what his administration does on AI. The group is also considering congressional and state legislative endorsements.

But endorsements and a politely worded letter are a far cry from the combative and controversial tactics that have put the youth climate movement in the spotlight, such as a 2019 confrontation with Democratic Sen. Dianne Feinstein inside her Bay Area office.

Tech activists remain open to the adversarial approach. Revanur said the risks of AI run amok could justify more confrontational measures going forward.

"We definitely do see ourselves expanding direct action," she said, "because we have youth on the ground."

BEVERLY HILLS – Digital money is here to stay, International Monetary Fund Managing Director Kristalina Georgieva said at the Milken Institute's annual Global Conference today. But if people expect central bank digital currencies to upend the banking sector, they shouldn't hold their breath.

Georgieva splashed cold water on a retail CBDC (which refers to tokens issued directly to the public) while offering a tacit endorsement of wholesale digital currencies that could be used by banks.

"We think that wholesale CBDCs can be put in place with fairly little space for undesirable surprises," she said. Retail CBDCs, on the other hand, "could completely transform the financial system in a way that we don't quite know what consequences it could bring." – Sam Sutton

AI's medical takeover continues apace: Today's Future Pulse newsletter reveals the results of a new study showing that ChatGPT might give real-life doctors a run for their money when it comes to bedside manner.

The study, published in JAMA Internal Medicine, took 195 question-and-answer pairings from the popular subreddit r/AskDocs, ran the same questions by ChatGPT, and then had a panel of five experts evaluate whether the real-life doctors or the AI platform gave a better response. It was no contest: The experts found that 78 percent of the time ChatGPT prevailed.

And not only that: its responses were also rated significantly more empathetic than physician responses, by a factor of almost ten. The researchers suggest using the findings not to replace but to augment doctor-patient interactions, writing that the approach could be used in scenarios "such as using [a] chatbot to draft responses that physicians could then edit," and that "randomized trials could assess further if using AI assistants might improve responses, lower clinician burnout, and improve patient outcomes."

The future of generative AI is niche, not generalized – MIT Technology Review

ChatGPT has sparked speculation about artificial general intelligence. But the next real phase of AI will be in specific domains and contexts.

The relentless hype surrounding generative AI in the past few months has been accompanied by equally loud anguish over the supposed perils: just look at the open letter calling for a pause in AI experiments. This tumult risks blinding us to more immediate risks (think sustainability and bias) and clouds our ability to appreciate the real value of these systems: not as generalist chatbots, but instead as a class of tools that can be applied to niche domains and offer novel ways of finding and exploring highly specific information.

This shouldn't come as a surprise. The news that a dozen companies have developed ChatGPT plugins is a clear demonstration of the likely direction of travel. A generalized chatbot won't do everything for you, but if you're, say, Expedia, being able to offer customers a simple way to organize their travel plans is undeniably going to give you an edge in a marketplace where information discovery is so important.

Whether or not this really amounts to an "iPhone moment" or a serious threat to Google search isn't obvious at present; while it will likely push a change in user behaviors and expectations, the first shift will be organizations pushing to build tools on top of large language models (LLMs) that learn from their own data and services.

And this, ultimately, is the key: the significance and value of generative AI today is not really a question of societal or industry-wide transformation. It's instead a question of how this technology can open up new ways of interacting with large and unwieldy amounts of data and information.

OpenAI is clearly attuned to this fact and senses a commercial opportunity: although the list of organizations taking part in the ChatGPT plugin initiative is small, OpenAI has opened up a waiting list where companies can sign up to gain access to the plugins. In the months to come, we will no doubt see many new products and interfaces backed by OpenAI's generative AI systems.

While it's easy to fall into the trap of seeing OpenAI as the sole gatekeeper of this technology (and ChatGPT as the go-to generative AI tool), this fortunately is far from the case. You don't need to sign up on a waiting list or have vast amounts of cash available to hand over to Sam Altman; instead, it's possible to self-host LLMs.

This is something we're starting to see at Thoughtworks. In the latest volume of the Technology Radar (our opinionated guide to the techniques, platforms, languages and tools being used across the industry today), we've identified a number of interrelated tools and practices that indicate the future of generative AI is niche and specialized, contrary to what much mainstream conversation would have you believe.

Unfortunately, we don't think this is something many business and technology leaders have yet recognized. The industry's focus has been set on OpenAI, which means the emerging ecosystem of tools beyond it (exemplified by projects like GPT-J and GPT-Neo) and the more DIY approach they can facilitate have so far been somewhat neglected. This is a shame, because these options offer many benefits. For example, a self-hosted LLM sidesteps the very real privacy issues that can come from connecting data with an OpenAI product. In other words, if you want to deploy an LLM to your own enterprise data, you can do precisely that yourself; it doesn't need to go elsewhere. Given both industry and public concerns with privacy and data management, being cautious rather than being seduced by the marketing efforts of big tech is eminently sensible.
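
To make that concrete, here is a minimal sketch of what self-hosting might look like with the Hugging Face transformers library, using GPT-Neo, one of the open models mentioned above. The model size, prompt, and generation settings are illustrative assumptions, not a recommendation:

```python
# A minimal self-hosted LLM sketch using Hugging Face transformers.
# After the first download, inference runs entirely on your own
# hardware; no enterprise data leaves your infrastructure.
from transformers import pipeline

generator = pipeline("text-generation", model="EleutherAI/gpt-neo-1.3B")

prompt = "Summarize the key points of our internal deployment policy:"
outputs = generator(prompt, max_new_tokens=80, do_sample=True)
print(outputs[0]["generated_text"])
```

The same pattern works for GPT-J by swapping the model identifier; the trade-off is that larger models need correspondingly more memory and compute.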

A related trend we've seen is domain-specific language models. Although these are also only just beginning to emerge, fine-tuning publicly available, general-purpose LLMs on your own data could form a foundation for developing incredibly useful information retrieval tools. These could be used, for example, on product information, content, or internal documentation. In the months to come, we think you'll see more examples of these being used to do things like helping customer support staff and enabling content creators to experiment more freely and productively.
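
As a rough illustration of the idea (a sketch, not a production recipe), a fine-tuning pass over a plain-text dump of internal documentation might look like this with the transformers Trainer API. The file name, model choice, and hyperparameters are placeholder assumptions:

```python
# Sketch: fine-tune a small open causal LM on in-house text so its
# completions reflect your domain. Paths and settings are illustrative.
from datasets import load_dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling,
                          Trainer, TrainingArguments)

model_name = "EleutherAI/gpt-neo-125m"     # small model, for the example
tokenizer = AutoTokenizer.from_pretrained(model_name)
tokenizer.pad_token = tokenizer.eos_token  # GPT-Neo has no pad token
model = AutoModelForCausalLM.from_pretrained(model_name)

# Assumes one document or paragraph per line in a hypothetical local file.
dataset = load_dataset("text", data_files={"train": "internal_docs.txt"})
tokenized = dataset["train"].map(
    lambda batch: tokenizer(batch["text"], truncation=True, max_length=512),
    batched=True,
    remove_columns=["text"],
)

trainer = Trainer(
    model=model,
    args=TrainingArguments(
        output_dir="domain-lm",
        num_train_epochs=1,
        per_device_train_batch_size=2,
    ),
    train_dataset=tokenized,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```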

If generative AI does become more domain-specific, the question of what this actually means for humans remains. However, I'd suggest that this view of the medium-term future of AI is a lot less threatening and frightening than many of today's doom-mongering visions. By better bridging the gap between generative AI and more specific and niche datasets, over time people should build a subtly different relationship with the technology. It will lose its mystique as something that ostensibly knows everything, and it will instead become embedded in our context.

Indeed, this isn't that novel. GitHub Copilot is a great example of AI being used by software developers in very specific contexts to solve problems. Despite its being billed as "your AI pair programmer," we would not call what it does pairing; it's much better described as a supercharged, context-sensitive Stack Overflow.

As an example, one of my colleagues uses Copilot not to do work but as a means of support as he explores a new programming language; it helps him to understand the syntax or structure of a language in a way that makes sense in the context of his existing knowledge and experience.

We will know that generative AI is succeeding when we stop noticing it and the pronouncements about what it might do die down. In fact, we should be willing to accept that its success might actually look quite prosaic. This shouldn't matter, of course; once we've realized it doesn't know everything (and never will), that will be when it starts to become really useful.

This content was produced by Thoughtworks. It was not written by MIT Technology Review's editorial staff.

Promises, Perils, And Predictions For Artificial Intelligence In Medicine: A Radiologist's Perspective – Forbes

I recently attended the 2023 annual meeting of the American Roentgen Ray Society (ARRS), one of the major professional societies for radiologists and medical imaging specialists. As expected, one of the hot topics was artificial intelligence (AI) and its expected impact on radiologists in particular, as well as on medical practitioners in general.

Although I could not attend all of the numerous lectures, panel discussions, and research presentations on AI, I did learn of many exciting developments, as well as areas of both opportunity and concern. In this column, I'd like to share some thoughts on how AI will affect patients and physicians alike in the short-to-medium term.

(Note: This discussion will be confined to so-called narrow AI that accomplishes particular medical tasks, rather than artificial general intelligence, or AGI, which can simulate general human cognition. I'll leave the debate over whether a sufficiently advanced AI will exterminate humanity to others.)

1) AI will play an increasingly large role in medical care, in ways both obvious and non-obvious to patients.

In my own field of radiology, AI will be used to enhance (but not yet replace) human radiologists making diagnoses from medical images. There are already FDA-approved AI algorithms to detect subtle internal bleeding within the brain or potentially fatal blood clots (pulmonary embolism) within the arteries of the lung.

When properly used, these algorithms could alert the human radiologists that a patient's scan has one of these life-threatening abnormalities and bump the case to the top of the priority queue. This could significantly shorten the time between the scan and the appropriate treatment, and thus save lives. (See this paper by Dr. Kiran Batra and colleagues from the University of Texas Southwestern Medical Center for one example of the time savings achieved by AI.)
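
To illustrate the worklist idea in the abstract (this is a hypothetical sketch, not any vendor's actual triage system), an AI flag can simply promote a study in a priority queue:

```python
# Hypothetical radiology worklist: AI-flagged studies jump the queue.
# Scan IDs, flags, and priority values are illustrative only.
import heapq

worklist = []  # min-heap ordered by (priority, arrival order)

def add_scan(scan_id, arrival, ai_flagged_critical):
    # 0 = AI flagged a possible critical finding (e.g. brain bleed or
    # pulmonary embolism); 1 = routine. Earlier arrivals break ties.
    priority = 0 if ai_flagged_critical else 1
    heapq.heappush(worklist, (priority, arrival, scan_id))

add_scan("CT-1041", arrival=1, ai_flagged_critical=False)
add_scan("CT-1042", arrival=2, ai_flagged_critical=True)  # suspected bleed
add_scan("CT-1043", arrival=3, ai_flagged_critical=False)

while worklist:
    _, _, scan_id = heapq.heappop(worklist)
    print("Next for radiologist review:", scan_id)  # CT-1042 reads first
```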

AI can also be used to enhance medical care in ways not directly related to rendering diagnoses. For instance, developers are working on physician "co-pilot" software that can sift through a patient's medical records and extract the information most relevant for the patient's upcoming visit to the radiology department (or internal medicine clinic, etc.). This could save practitioners valuable time during each patient visit.

2) The AIs are still not perfect, and human physicians will still need to have the final say in diagnoses and treatments.

For example, AIs are pretty good at detecting early breast cancer in mammogram images, but they still make errors. (Often they make errors humans don't, and vice versa.) This makes AI great as an assistant to the human radiologist, but not (yet) a viable replacement.

Thus, we will see an interesting period of time in which human physician-plus-AI will perform better than either human alone or AI alone. At some point, I predict that AI-assisted medicine will become the standard of care, and physicians who do not incorporate AI into their daily practices could open themselves to lawsuits for practicing substandard care.

3) As AIs get better, humans may start to over-rely on them.

This phenomenon is known as de-skilling. As an analogy (made by Dr. Charles Kahn of the University of Pennsylvania in one of the ARRS panel discussions), suppose we develop self-driving automobiles that could handle most traffic conditions but still required a human driver to take the wheel in emergencies. As AIs got increasingly better and the need for human intervention became less frequent, we human drivers could easily become complacent and lose good driving-related cognitive habits and reflexes.

If a partially automated car going 70 mph on the highway suddenly alerted a human driver who hadn't truly driven in the past year to take over because of icy conditions ahead, things could go badly.

Similarly, if a human radiologist lets their cancer detection skills go rusty, they could run into trouble when the medical images included complex visual features beyond the ability of the AI to accurately parse.

My own personal approach will be to think of the AI as a tireless-but-quirky medical student constantly asking questions like, "Could that squiggle be a cancer? How about this dark line: is it a fracture? Could this dot be a small blood clot?" An inquisitive human medical student can keep experienced doctors on their toes in a good way, and the same could be true for an AI.

4) AI could take over some interactions with patients that currently require human medical personnel.

We're probably not too far from the point where an LLM (large language model) AI like ChatGPT could take a radiology report written in medical jargon, translate it into terms understandable to non-physicians, and possibly even answer follow-up questions about the significance of the findings.
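
As a hedged sketch of how that might work in practice (the model name, prompt, and sample report below are assumptions for illustration, and any output would still need physician review before reaching a patient), a call to a chat-style LLM API could look like this:

```python
# Illustrative only: ask a chat LLM to rewrite a radiology report in
# plain language. Assumes the openai Python package and an API key in
# the OPENAI_API_KEY environment variable.
from openai import OpenAI

client = OpenAI()

report = ("Impression: 4 mm noncalcified pulmonary nodule in the right "
          "lower lobe. No acute cardiopulmonary findings.")

response = client.chat.completions.create(
    model="gpt-4",  # placeholder model name
    messages=[
        {"role": "system",
         "content": ("Rewrite radiology reports in plain, patient-friendly "
                     "language without changing their medical meaning.")},
        {"role": "user", "content": report},
    ],
)
print(response.choices[0].message.content)
```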

A recent article by Ayers and colleagues in JAMA Internal Medicine compared how AI chatbots and human physicians responded to patient medical questions posted on social media. According to the judges (who were blinded as to the author of the answers), the chatbot answers were considered better, in terms of both information quality and empathy, than the human physicians' answers!

The use of artificial intelligence in medicine is a rapidly evolving field, and I've only scratched the surface of the exciting work being done. Given the rapid pace of developments, I don't know what things will look like in 5 months, let alone in 5 years. But I'm glad to be alive during this time of potentially massive innovation (and, admittedly, potentially uncomfortable upheaval). For now, I remain optimistic that AI could be an enormous boon for patients and physicians alike.

I am a physician with long-standing interests in health policy, medical ethics and free-market economics. I am the co-founder of Freedom and Individual Rights in Medicine (FIRM). I graduated from University of Michigan Medical School and completed my residency in diagnostic radiology at the Washington University School of Medicine in St. Louis (where I was also a faculty member). I'm now in private practice in the Denver area. All my opinions are my own, and not necessarily shared by my employer.

AI seeps into coursework – The Brookhaven Courier

In less than six months, ChatGPT has become a household name. The AI service can write paragraphs, essays, and speeches, and fill in exam answers. So many people have flocked to the chatbot for a glimpse of its power that the servers have to be shut down at times. It is a tour de force of artificial intelligence.

ChatGPT was developed by OpenAI, an artificial intelligence company founded in 2015 with a mission to ensure that artificial general intelligence benefits all of society, according to OpenAI's website.

The human-like chatbot can answer almost any question the user provides, and it has been trained to respond as a human would.

When asked about its pros and cons, ChatGPT said, "It's important to note that while I can be a helpful tool for certain tasks, human judgment and critical thinking should always be exercised when interpreting and using the information generated by AI systems like me."

With ChatGPT's capabilities, it comes as no surprise that students have been tempted to consult it for assistance with their assignments, especially in English courses. However, some Dallas College faculty warn students of ChatGPT's downsides.

"A lot of people have heard of it but aren't sure exactly what it does, or what it doesn't do," Marylynn Patton, Dallas College El Centro Campus ESOL curriculum chair, said. "It doesn't do everything."

Patton recently presented on ChatGPT at a national Teaching English to Speakers of Other Languages conference, where she spoke about ways educators can use ChatGPT and how to detect whether something has been written using it.

She said that since its release in November 2022, ChatGPT has greatly improved. Where it used to score mid-range on AP exams and bar exams, it is now scoring higher than 90%, Patton said.

One area where the AI falters is in English and literature courses. Patton said ChatGPT is scoring a two on the AP English exam, which is below college level. "[ChatGPT] is not highly qualified," Patton said.

"In the lower-level skills like read, respond, summarize, [with] those things it can do pretty well," Patton said. "It's the higher level, the critical thinking, the evaluating materials, giving reactions to things. Those are the things that it cannot do."

Dallas College does not have an official stance on ChatGPT yet, Patton said. But she urges instructors to dissuade students from resorting to ChatGPT, and suggests that teachers discuss the AI chatbot with students. "Talk about the ethical side of it, how it could be used or ways that it's being abused," Patton said.

On the upside, ChatGPT is useful for providing formats for essays and letters, Patton said.

English professor Kendra Unruh said she is changing how she formats assignments for her students. One thing she does is have students write a rough draft in class so she has something on which to base her students' writing. If a student turns in a final draft whose writing style veers from the rough draft, it will be obvious they did not write the essay.

Unruh has also updated her discussion board posts to directly ask students what they thought about a topic. She said she tries to make the process personal enough that an AI cannot reproduce the results.

Patton likened ChatGPT to a calculator. "It will be the new calculator for writing," Patton said. "In math, you learn your basics, and then after you learn your basics you can use the calculator."
