Archive for the ‘Artificial Intelligence’ Category

Is artificial intelligence ready for health care prime time? – Montana Free Press

What use could health care have for someone who makes things up, can't keep a secret, doesn't really know anything, and, when speaking, simply fills in the next word based on what's come before? Lots, if that individual is the newest form of artificial intelligence, according to some of the biggest companies out there.

Companies pushing the latest AI technology known as generative AI are piling on: Google and Microsoft want to bring types of so-called large language models to health care. Big firms that are familiar to folks in white coats but maybe less so to your average Joe and Jane are equally enthusiastic: Electronic medical records giants Epic and Oracle Cerner aren't far behind. The space is crowded with startups, too.

The companies want their AI to take notes for physicians and give them second opinions, assuming they can keep the intelligence from hallucinating or, for that matter, divulging patients' private information.

"There's something afoot that's pretty exciting," said Eric Topol, director of the Scripps Research Translational Institute in San Diego. "Its capabilities will ultimately have a big impact." Topol, like many other observers, wonders how many problems it might cause, such as leaking patient data, and how often: "We're going to find out."

The specter of such problems inspired more than 1,000 technology leaders to sign an open letter in March urging that companies pause development on advanced AI systems until "we are confident that their effects will be positive and their risks will be manageable." Even so, some of them are sinking more money into AI ventures.

The underlying technology relies on synthesizing huge chunks of text or other data (for example, some medical models rely on 2 million intensive care unit notes from Beth Israel Deaconess Medical Center in Boston) to predict text that would follow a given query. The idea has been around for years, but the gold rush, and the marketing and media mania surrounding it, are more recent.
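
For readers who want to see the mechanism in miniature, the sketch below (in Python, with a two-sentence stand-in corpus rather than millions of clinical notes) shows the core idea of predicting the next word from what has come before. It is a toy bigram counter, not the neural-network approach these companies actually deploy.

from collections import Counter, defaultdict

# Toy stand-in corpus; real systems train on billions of words, such as ICU notes.
corpus = "patient reports chest pain . patient reports shortness of breath .".split()

# Count how often each word follows each preceding word (a bigram table).
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def predict_next(word):
    # Return the continuation seen most often in training, if any.
    counts = following.get(word)
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("patient"))  # -> "reports", the most frequent continuation

Large language models do the same kind of next-word guessing, but with vastly more data and a learned model in place of raw counts.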

The frenzy was kicked off in December 2022 by Microsoft-backed OpenAI and its flagship product, ChatGPT, which answers questions with authority and style. It can explain genetics in a sonnet, for example.

OpenAI, started as a research venture seeded by Silicon Valley elites like Sam Altman, Elon Musk, and Reid Hoffman, has ridden the enthusiasm to investors' pockets. The venture has a complex, hybrid for- and nonprofit structure. But a new $10 billion round of funding from Microsoft has pushed the value of OpenAI to $29 billion, The Wall Street Journal reported. Right now, the company is licensing its technology to companies like Microsoft and selling subscriptions to consumers. Other startups are considering selling AI transcription or other products to hospital systems or directly to patients.

Hyperbolic quotes are everywhere. Former Treasury Secretary Larry Summers tweeted recently: "It's going to replace what doctors do (hearing symptoms and making diagnoses) before it changes what nurses do (helping patients get up and handle themselves in the hospital)."

But just weeks after OpenAI took another huge cash infusion, even Altman, its CEO, is wary of the fanfare. "The hype over these systems, even if everything we hope for is right long term, is totally out of control for the short term," he said for a March article in The New York Times.

Few in health care believe this latest form of AI is about to take their jobs (though some companies are experimenting, controversially, with chatbots that act as therapists or guides to care). Still, those who are bullish on the tech think it'll make some parts of their work much easier.

Eric Arzubi, a psychiatrist in Billings, used to manage fellow psychiatrists for a hospital system. Time and again, he'd get a list of providers who hadn't yet finished their notes, their summaries of a patient's condition and a plan for treatment.

Writing these notes is one of the big stressors in the health system: In the aggregate, it's an administrative burden. But it's necessary to develop a record for future providers and, of course, insurers.

"When people are way behind in documentation, that creates problems," Arzubi said. "What happens if the patient comes into the hospital and there's a note that hasn't been completed and we don't know what's been going on?"

The new technology might help lighten those burdens. Arzubi is testing a service, called Nabla Copilot, that sits in on his part of virtual patient visits and then automatically summarizes them, organizing into a standard note format the complaint, the history of illness, and a treatment plan.

Results are solid after about 50 patients, he said: "It's 90% of the way there." Copilot produces serviceable summaries that Arzubi typically edits. The summaries don't necessarily pick up on nonverbal cues or thoughts Arzubi might not want to vocalize. Still, he said, the gains are significant: He doesn't have to worry about taking notes and can instead focus on speaking with patients. And he saves time.
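
For those curious what sits behind a service like this, the sketch below shows only the general pattern: hand the visit transcript to a large language model with instructions to return a structured note for the clinician to review. It is not Nabla's implementation, and the call_llm helper and section headings are hypothetical placeholders for whatever model and note format a vendor actually uses.

# Hypothetical helper: stands in for a call to whatever hosted model a vendor uses.
def call_llm(prompt: str) -> str:
    raise NotImplementedError("wire this up to a model provider of your choice")

NOTE_TEMPLATE = (
    "Summarize the visit transcript below into a clinical note with three sections: "
    "Chief Complaint, History of Present Illness, and Treatment Plan. "
    "Use only facts stated in the transcript.\n\nTranscript:\n{transcript}"
)

def summarize_visit(transcript: str) -> str:
    # Draft a structured note; the clinician edits and approves the result.
    return call_llm(NOTE_TEMPLATE.format(transcript=transcript))

The design choice that matters is the last step: the output is a draft for a human to edit, which is how Arzubi describes using it.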

"If I have a full patient day, where I might see 15 patients, I would say this saves me a good hour at the end of the day," he said. (If the technology is adopted widely, he hopes hospitals won't take advantage of the saved time by simply scheduling more patients. "That's not fair," he said.)

Nabla Copilot isn't the only such service; Microsoft is trying out the same concept. At April's conference of the Healthcare Information and Management Systems Society (an industry confab where health techies swap ideas, make announcements, and sell their wares), investment analysts from Evercore highlighted reducing administrative burden as a top possibility for the new technologies.

But overall? They heard mixed reviews. And that view is common: Many technologists and doctors are ambivalent.

"For example, if you're stumped about a diagnosis, feeding patient data into one of these programs can provide a second opinion, no question," Topol said. "I'm sure clinicians are doing it." However, that runs into the current limitations of the technology.

Joshua Tamayo-Sarver, a clinician and executive with the startup Inflect Health, fed fictionalized patient scenarios based on his own practice in an emergency department into one system to see how it would perform. It missed life-threatening conditions, he said. That seems problematic.

The technology also tends to hallucinate, that is, to make up information that sounds convincing. Formal studies have found a wide range of performance. One preliminary research paper examining ChatGPT and Google products using open-ended board examination questions from neurosurgery found a hallucination rate of 2%. A study by Stanford researchers, examining the quality of AI responses to 64 clinical scenarios, found fabricated or hallucinated citations 6% of the time, co-author Nigam Shah told KFF Health News. Another preliminary paper found that, in complex cardiology cases, ChatGPT agreed with expert opinion half the time.

Privacy is another concern. Its unclear whether the information fed into this type of AI-based system will stay inside. Enterprising users of ChatGPT, for example, have managed to get the technology to tell them the recipe for napalm, which can be used to make chemical bombs.

In theory, the system has guardrails preventing private information from escaping. For example, when KFF Health News asked ChatGPT for the email address of the author of this article, the system refused to divulge that private information. But when told to role-play as a character and asked again, it happily gave up the email address. (It was indeed the author's correct email address in 2021, when ChatGPT's archive ends.)

"I would not put patient data in," said Shah, chief data scientist at Stanford Health Care. "We don't understand what happens with these data once they hit OpenAI servers."

Tina Sui, a spokesperson for OpenAI, told KFF Health News that one should never use its models "to provide diagnostic or treatment services for serious medical conditions." "They are not fine-tuned to provide medical information," she said.

With the explosion of new research, Topol said, "I don't think the medical community has a really good clue about what's about to happen."

More here:
Is artificial intelligence ready for health care prime time? - Montana Free Press

Artificial Intelligence in employment: the regulatory and legal … – Farrer & Co

It won't have escaped your attention that AI is in the news a lot at the moment. Following the release of ChatGPT at the end of 2022, not a week seems to go by without headlines either extolling its benefits or panicking about its risks.

Irrespective of which side of the fence you sit on, what is clear is that rapidly advancing AI is here to stay. With that comes the increasing need to consider AI risk management, particularly in areas where AI has the potential to make or inform decisions about individuals. The field of employment is a prime example of this.

In this blog, we look at the current (though evolving) legal and regulatory landscape in the UK regarding the use of AI in employment, as well as how employers might navigate their way through it.

When it comes to worldwide regulation of AI, there is currently no consensus as to approach. While the EU is preparing strict regulation and tough restrictions on the use of AI, with Italy banning ChatGPT over privacy concerns, the UK is planning an innovative and iterative approach to regulation.

In its recently published White Paper, "A pro-innovation approach to AI regulation," the UK Government proposes, rather than introducing new legislation, a system of non-statutory principles overseen and implemented by existing regulators.

What this means for the employment sector is that the Government intends to encourage the Equality and Human Rights Commission and the Information Commissioner to work with the Employment Agency Standards Inspectorate to issue joint guidance on the use of AI systems in recruitment or employment. In particular, the Government envisages the joint guidance will:

For more detailed analysis of the Government's White Paper, Ian De Freitas (a partner in our Data, IP and Technology Disputes team) provides helpful commentary in his article "Regulating Artificial Intelligence." In the article he explores the five common principles proposed by the Government, assessing them against other recent developments.

In the absence of specific legislation governing AI in the workplace, and pending possible guidance, it is important that employers understand how existing legal risks and obligations may affect their use of AI. These include:

We have provided detailed commentary on using AI in employment in two blogs:

In summary, employers may want to consider the following:

There is no escaping the fact that AI has the potential to radically transform employment as we know it. Recent reports predict that AI could replace the equivalent of 300 million full-time jobs. With that come concerns about the treatment of workers and the erosion of workers' rights (for example, as highlighted by the TUC at its latest conference).

Employers will need to prepare strategically for the changing nature of work and the need to integrate AI into workplace operations. Currently there are likely to be more questions than answers: will there be a need to redesign roles or change work allocation and workflow processes? How can employees be supported in this transition? Is there a need to invest in workforce training to help employees develop the skills needed to work with AI or take on different roles? Regardless, with AI likely to impact most jobs in some way, there is a need for employers to look afresh at their workforce strategies in order to keep pace with the rapid changes that AI might bring.

This publication is a general summary of the law. It should not replace legal advice tailored to your specific circumstances.

Farrer & Co LLP, May 2023

Partner

David advises employer clients, with a particular focus on the financial services and sport sectors, on a wide range of contentious and non-contentious employment issues. He also acts for individuals in relation to contract and exit negotiations and advises them on matters relating to restrictive covenants.

Senior Counsel

Amy is a Senior Counsel and Knowledge Lawyer in the employment team, providing expert technical legal support to the team and leading its know-how function. Given the fast-changing nature of employment law, Amy ensures the team is at the forefront of all legal changes and can provide the best possible advice to our clients.

Here is the original post:
Artificial Intelligence in employment: the regulatory and legal ... - Farrer & Co

Google’s Latest Artificial Intelligence Marked a Significant Surge in … – Digital Information World

In a recent report, CNBC discovered that Google's latest large language model is trained on nearly five times as much data as its 2022 predecessor. This significant boost allows the model to take on more sophisticated tasks such as advanced coding, mathematical reasoning, and creative writing.

During the Google I/O event, the company unveiled PaLM 2, its latest large language model created for general use. Internal documentation accessed by CNBC reveals that Pathways Language Model 2 was trained on a staggering 3.5 trillion tokens. Tokens, which are strings of words, play a crucial role in language models because they are the units the model uses to predict the next word in a given sequence. In 2022, Google introduced the earlier version of PaLM (Pathways Language Model), which was trained on 770 billion tokens.
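
As an illustration of what tokens are, the toy sketch below splits a sentence into word-level pieces and counts them. Production systems use subword tokenizers rather than simple whitespace splitting, so the counts here are only indicative.

def tokenize(text: str) -> list[str]:
    # Toy word-level tokenizer; real models split text into subword units.
    return text.lower().split()

sample = "The model predicts the next token in a given sequence"
tokens = tokenize(sample)
print(tokens)       # ['the', 'model', 'predicts', ...]
print(len(tokens))  # the number of tokens the model would see for this text

Counting pieces of text this way is how training-set sizes like 3.5 trillion tokens end up being quoted.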

While Google has been eager to showcase the capabilities of its AI, integrating it into search, spreadsheets, document editing, and email, the company has chosen not to divulge specific details about the scale or composition of its training data.

Similarly, OpenAI, the Microsoft-backed developer of ChatGPT, has kept the details of its most recent large language model, GPT-4, under wraps. Both companies attribute the lack of transparency to the competitive environment within the industry, as they vie for the attention of users who prefer casual chatbot-based information retrieval over conventional search engines. However, as the race for AI advancements intensifies, the research community is increasingly demanding greater transparency in these endeavors.

Since introducing Pathways Language Model 2, Google has emphasized that the latest version is smaller than previous large language models (LLMs). That is significant because it signals that Google's technology is becoming more efficient while tackling more intricate tasks. According to the internal documentation, PaLM 2 was trained with 335 billion parameters, an indicator of the model's complexity. Google has not yet issued a direct statement on the matter.

In a blog post discussing Pathways Language Model 2, Google revealed the incorporation of a technique known as "compute-optimal scaling," which makes the large language model more efficient and better performing overall, resulting in faster inference, fewer parameters to serve, and lower serving costs.
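
Google's post does not spell out its recipe, but one published rule of thumb for compute-optimal training (DeepMind's 2022 "Chinchilla" result) is to scale training data with model size at roughly 20 tokens per parameter. The sketch below applies that heuristic; the numbers are illustrative and are not Google's actual figures.

TOKENS_PER_PARAMETER = 20  # rough Chinchilla-style heuristic, not Google's recipe

def compute_optimal_tokens(num_parameters: float) -> float:
    # Estimate how many training tokens a model of this size should see.
    return TOKENS_PER_PARAMETER * num_parameters

params = 100e9  # an illustrative 100-billion-parameter model
print(f"{compute_optimal_tokens(params) / 1e12:.1f} trillion tokens")  # -> 2.0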

Google has also said that Pathways Language Model 2 is trained in a hundred different languages and boasts a wide array of capabilities, powering 25 features and products. PaLM 2 is available in four sizes, ranging from the smallest, Gecko, to the largest, Unicorn, with Otter and Bison in between.

Based on publicly available information, PaLM 2 is more powerful than existing models such as Facebook's LLaMA large language model. The last time OpenAI disclosed ChatGPT's training size was with GPT-3, which was trained on three hundred billion tokens. OpenAI released GPT-4 in March, claiming it demonstrates "human-level performance" on numerous professional assessments.

As emerging artificial intelligence applications rapidly enter the mainstream, controversies surrounding the underlying technology are gaining momentum in response to its widespread adoption.

In February, El Mahdi El Mhamdi, a senior Google researcher, resigned over the company's lack of transparency. During a hearing held by the Senate Judiciary Subcommittee on Privacy, Technology, and the Law, OpenAI CEO Sam Altman concurred with legislators, highlighting the need for a fresh framework to govern AI and acknowledging the significant responsibility borne by businesses like his for the tools they introduce to the world.

Read this article:
Google's Latest Artificial Intelligence Marked a Significant Surge in ... - Digital Information World

Artificial intelligence programs are causing concern for educators – WTAJ – www.wtaj.com

UNIVERSITY PARK, Pa. (WTAJ) New artificial intelligence programs are popping up rapidly and some can do your homework.

Programs like ChatGPT are making this form of plagiarism easier for students, but it's raising a slew of ethical concerns for teachers.

"What's recently happened is the development of these things that we call large language models, sometimes LLMs," said Shomir Wilson, an assistant professor at the Penn State College of Information Sciences and Technology.

Wilson said LLMs are large statistical models of how words follow each other in language.

"They've been trained on huge volumes of text, typically gathered on the internet, and what they're able to do, with some tweaking, is behave as a chatbot," Wilson said.

Wilson said there's a growing concern amongst schools where some students have used LLMs to do their assignments.

"These large language models do make it easier to generate text, with some concerns, again, about accuracy," Wilson said. "That introduces concerns that students might not be learning how to write as well as they should."

Wilson said there are ways to get a sense that a piece of writing was plagiarized; you could use a plagiarism check on the internet, but there's no certainty.

"You can get some idea of how similar a document is to something generated by a large language model," Wilson said. "But not enough to really say, 'Yes, this is definitively from that.'"
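
Wilson's "some idea, but no certainty" can be illustrated with a simple similarity measure: compare a student's submission against a model-generated answer to the same prompt and compute a score. The sketch below uses scikit-learn's TF-IDF vectors, one crude measure among many and far simpler than what commercial detectors do; a high score is suggestive, never proof.

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

submission = "The French Revolution began in 1789 amid fiscal crisis and unrest."
llm_answer = "Beginning in 1789, the French Revolution arose from a fiscal crisis."

# Represent both texts as TF-IDF vectors and compare them.
vectors = TfidfVectorizer().fit_transform([submission, llm_answer])
score = cosine_similarity(vectors[0], vectors[1])[0][0]

print(f"similarity: {score:.2f}")  # measures word overlap only; it cannot prove authorship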

These programs aren't all bad. Wilson said there are some benefits to using the technology, such as producing a draft or a summary of information.

Read the original post:
Artificial intelligence programs are causing concern for educators - WTAJ - http://www.wtaj.com

Adventists in Germany Discuss Artificial Intelligence – Adventist News Network

On May 7, 2023, Hope Media Europe, the media center of the Seventh-day Adventist Church, organized the 12th Media Day in Alsbach-Hähnlein (near Darmstadt). Around 50 media professionals, students, and people interested in media from German-speaking countries, working in the fields of video, audio, design, photography, text/print, journalism, communication, and internet, met at this exchange-and-networking event to discuss the topic "Artificial Intelligence (AI): the beginning of a new era?"

Two AI practitioners had been invited for the lectures: William Edward Timm, theologian, digital media expert, and department head of Novo Tempo, the Adventist TV station in Brazil, which belongs to the Hope Channel broadcasting family; and Danillo Cabrera, software expert at Hope Media Europe. Both have already gained practical experience with the use of artificial intelligence.

Evolution of AI

"We are in the middle of a revolution" were the words of Timm, who first gave a brief overview of the history of artificial intelligence in his keynote speech. As early as 1950, the British mathematician Alan Turing invented the Turing Test: A computer is considered intelligent if, in any question-answer game over an electrical connection, humans cannot distinguish whether a computer or a human is sitting at the other end of the line. In 1956, the first AI program in history, "Logic Theorist," was written. This program was able to prove 38 theorems from Russell and Whitehead's fundamental work Principia Mathematica.

Additionally, in 1965, Herbert Simon, an American social scientist and later Nobel Prize winner in economics, predicted that within 20 years machines would be able to do what humans could. In 1997, the time had come: a computer called "Deep Blue" defeated the then-reigning world chess champion Garry Kasparov.

Meanwhile, a lot of artificial intelligence is already being used in the background, says Timm, for example in algorithms that suggest music and videos on social media according to the user's taste. What is new, however, is generative AI, with which users can solve concrete tasks or create products, such as ChatGPT or the image generator Midjourney.

Timm put forward the thesis that this generative AI would democratize AI, as it could now be used by every human being in a self-determined way, not only as a component of software over which one has no influence (e.g., algorithms). He distinguished three phases in the development of AI: the generative AI already mentioned, neural networks that would imitate the human mind, and so-called deep learning, which would, for example, allow self-driving cars to drive accident-free. Finally, Timm addressed the ethical aspects of the application of AI.

Artificial Intelligence and Ethics

Timm cited the AI-supported production of meat substitutes as a positive example. Artificial intelligence can analyze the molecular structure of meat and use the results to assemble a similar product from plant molecules that is very similar in consistency and taste to the meat product.

In 2021, Giuseppe Scionti had already produced a meat-substitute product from a 3D printer in this way, although it is not yet fully developed. However, that could change quickly, says Timm.

In the ethical evaluation of AI, it is important to distinguish between "Narrow AI," which is intended for practical, labor-saving purposes, and "General AI," which resembles the human mind and acts independently. In general, one of the main dangers is the expected spread of fakes of all kinds (fake news, pictures, videos, etc.). Since a democracy lives from dialogue and discussion, this should not be taken over, damaged, or prevented by AI, says Timm.

According to calculations by the Goldman Sachs banking firm, AI could cause 300 million people worldwide to lose their previous jobs and have to be retrained. This would have not only political but also psychological consequences. "Many people will have the feeling of being superfluous," said Timm. He assumes, however, that after a transitional phase in which AI makes previous activities more efficient, new fields of activity will emerge for which resources will then be available. "At the beginning of every new technology, there are adjustment problems until a new distribution of roles has become established."

Timm formulated some rules for dealing with artificial intelligence:

Practical Tools

Cabrera then presented a number of practical applications for AI in his talk. They ranged from video, image, and music generators to text-based tools, such as ChatGPT, and avatars with a human appearance that could be used, for example, to conduct customer conversations.

Project Slam

In Project Slam, participants presented their projects in contributions of ten minutes each. They were in the fields of music, film, marketing, podcast, and comic drawing.

Some examples: Singer/Songwriter: www.shulami-melodie.de; Marketing: intou-content.de/ and cookafrog.info/; Podcast "Der kleine Kampf": open.spotify.com/show/23HNDzTxjoHjFKUlmrklY0

Media Day Award

Film music composer Manuel Igler was awarded the Media Day Award. He wrote music for various TV commercials and series on Hope TV (e.g., Encounters, the intro for the moonlight show, and the series about the Old Testament book of Daniel [manueligler.com]).

Hope Media

Hope Media Europe operates, among others, the television channel Hope TV. It is part of the international Hope Channel family of channels, which was founded in 2003 by the Seventh-day Adventist Church in the USA and now consists of over 60 national channels.

Hope TV can be received via satellite, Germany-wide via cable, and on the internet via http://www.hopetv.de.

The original version of this story was posted on the Inter-European Division website.

Visit link:
Adventists in Germany Discuss Artificial Intelligence - Adventist News Network