Archive for the ‘Artificial Intelligence’ Category

ASCRS 2023: Artificial intelligence application to ophthalmology – Ophthalmology Times

Alvin Liu, MD, sat down with Sheryl Stevenson, Group Editorial Director, Ophthalmology Times, to discuss his presentation on deep learning and 3D OCT at the ASCRS annual meeting in San Diego.

Editor's note: This transcript has been edited for clarity.

We're joined by Dr. Alvin Liu, who's going to be presenting at this year's ASCRS. Welcome to you. Tell us a little bit more about your presentation regarding deep learning and 3D OCT.

Sheryl, thank you so much for having me speak today. I'm happy to share results. So let me introduce myself a little bit more. My name is Alvin Liu. I'm a retina specialist at the Wilmer Eye Institute at Johns Hopkins University.

My research focuses on the application of artificial intelligence to ophthalmology. I'm also the director of the Wilmer Precision Ophthalmology Center of Excellence, and the work I will be presenting at ASCRS this year is directly related to our center of excellence.

The overall premise is that macular degeneration is a leading cause of central vision loss in the elderly, in the US and around the world. Most patients with AMD lose vision because of the wet form of the disease, and for wet AMD we know that earlier, timely treatment with better presenting visual acuity predicts better final visual acuity. So it is imperative for us to figure out which patients are at high risk of imminent conversion to wet AMD.

Currently, we can provide an average estimate of conversion or progression to advanced AMD using the AREDS criteria. However, the AREDS criteria can only provide an average risk estimate over five years. The model we have developed can provide that information over a more meaningful timeframe: six months. We started by asking ourselves: can we use deep learning, the cutting-edge artificial intelligence technique for medical image analysis, to analyze OCT images and predict imminent conversion from dry to wet AMD within six months?

To do that, we collected a dataset of over 2,500 patients with AMD and over 30,000 OCT images. We trained a model that produces robust predictions of when an eye is at high risk of converting to wet AMD within six months, using an OCT image alone. In addition, we ran experiments to see what happens if we also feed the model readily obtainable clinical variables, such as the patient's age, sex, visual acuity, and fellow-eye status. We were able to demonstrate that, in predicting imminent conversion to wet AMD in the first eye of patients, meaning patients who had never converted to wet AMD in either eye, this additional tabular clinical information was also helpful.
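The kind of fusion described, an image-derived deep-learning score combined with tabular clinical variables, can be sketched as a simple logistic late-fusion layer. This is a hypothetical illustration only; the feature set, weights, and function names below are invented for demonstration and are not the authors' actual model.

```python
import math

def sigmoid(z: float) -> float:
    """Standard logistic function, mapping a score to a probability."""
    return 1.0 / (1.0 + math.exp(-z))

def conversion_risk(image_score, age, visual_acuity, fellow_eye_wet,
                    weights, bias):
    """Combine a (hypothetical) OCT-derived deep-learning score with
    tabular clinical variables via a logistic late-fusion layer.
    All weights here are made up for illustration."""
    features = [
        image_score,                       # output of the image model, 0..1
        age / 100.0,                       # crude normalization of age
        visual_acuity,                     # e.g. decimal acuity, 0..1
        1.0 if fellow_eye_wet else 0.0,    # fellow-eye status
    ]
    z = bias + sum(w * x for w, x in zip(weights, features))
    return sigmoid(z)  # probability of conversion to wet AMD within 6 months

# Hypothetical first-eye patient (no prior wet AMD in either eye)
risk = conversion_risk(image_score=0.8, age=76, visual_acuity=0.6,
                       fellow_eye_wet=False,
                       weights=[3.0, 1.5, -1.0, 2.0], bias=-3.0)
print(round(risk, 3))
```

A higher image score raises the predicted risk, and the tabular features shift it further; in practice such fusion weights would be learned jointly with the image model, not hand-set as here.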

Continued here:
ASCRS 2023: Artificial intelligence application to ophthalmology - Ophthalmology Times

UK and US intervene amid AI industry's rapid advances – The Guardian


Competition and Markets Authority sends pre-warning to sector, while White House announces measures to address risks

The UK and US have intervened in the race to develop ever more powerful artificial intelligence technology, as the British competition watchdog launched a review of the sector and the White House advised tech firms of their "fundamental responsibility" to develop safe products.

Regulators are under mounting pressure to intervene, as the emergence of AI-powered language generators such as ChatGPT raises concerns about the potential spread of misinformation, a rise in fraud and the impact on the jobs market, with Elon Musk among nearly 30,000 signatories to a letter published last month urging a pause in significant projects.

The UK Competition and Markets Authority (CMA) said on Thursday it would look at the underlying systems, or "foundation models", behind AI tools. The initial review, described by one legal expert as a "pre-warning" to the sector, will publish its findings in September.

On the same day, the US government announced measures to address the risks in AI development, as Kamala Harris, the vice-president, met chief executives at the forefront of the industry's rapid advances. In a statement, the White House said firms developing the technology had a "fundamental responsibility to make sure their products are safe before they are deployed or made public".

The meeting capped a week during which a succession of scientists and business leaders issued warnings about the speed at which the technology could disrupt established industries. On Monday, Geoffrey Hinton, the "godfather of AI", quit Google in order to speak more freely about the technology's dangers, while the UK government's outgoing scientific adviser, Sir Patrick Vallance, urged ministers to get ahead of the profound social and economic changes that could be triggered by AI, saying the impact on jobs could be as big as that of the Industrial Revolution.

Sarah Cardell, the CMA's chief executive, said AI had the potential to transform the way businesses competed, but that consumers must be protected.

Cardell said: "AI has burst into the public consciousness over the past few months but has been on our radar for some time. It's crucial that the potential benefits of this transformative technology are readily accessible to UK businesses and consumers, while people remain protected from issues like false or misleading information."

ChatGPT and Google's rival Bard service are prone to delivering false information in response to users' prompts, while concerns have been raised about AI-generated voice scams. The anti-misinformation outfit NewsGuard said this week that chatbots pretending to be journalists were running almost 50 AI-generated content farms. Last month, a song featuring fake AI-generated vocals purporting to be Drake and the Weeknd was pulled from streaming services.

The CMA review will look at how the markets for foundation models could evolve, what opportunities and risks there are for consumers and competition, and formulate guiding principles to support competition and protect consumers.

The leading players in AI are Microsoft; ChatGPT developer OpenAI, in which Microsoft is an investor; and Google parent Alphabet, which owns a world-leading AI business in UK-based DeepMind. Leading AI startups include Anthropic and Stability AI, the British company behind Stable Diffusion.

Alex Haffner, competition partner at the UK law firm Fladgate, said: "Given the direction of regulatory travel at the moment and the fact the CMA is deciding to dedicate resource to this area, its announcement must be seen as some form of pre-warning about aggressive development of AI programmes without due scrutiny being applied."

In the US, Harris met the chief executives of OpenAI, Alphabet and Microsoft at the White House, and outlined measures to address the risks of unchecked AI development. In a statement following the meeting, Harris said she told the executives that the private sector has "an ethical, moral, and legal responsibility to ensure the safety and security of their products".

The administration said it would invest $140m (£111m) in seven new national AI research institutes, to pursue artificial intelligence advances that are "ethical, trustworthy, responsible, and serve the public good". AI development is dominated by the private sector, with the tech industry producing 32 significant machine-learning models last year, compared with three produced by academia.

Leading AI developers have also agreed to their systems being publicly evaluated at this year's Defcon 31 cybersecurity conference. Companies that have agreed to participate include OpenAI, Google, Microsoft and Stability AI.

"This independent exercise will provide critical information to researchers and the public about the impacts of these models," said the White House.

Robert Weissman, the president of the consumer rights non-profit Public Citizen, praised the White House's announcement as a useful step but said more aggressive action was needed, including a moratorium on the deployment of new generative AI technologies ("generative AI" being the term for tools such as ChatGPT and Stable Diffusion).

"At this point, Big Tech companies need to be saved from themselves. The companies and their top AI developers are well aware of the risks posed by generative AI. But they are in a competitive arms race, and each believes themselves unable to slow down," he said.

The EU was also told on Thursday that it must protect grassroots AI research or risk handing control of the technology's development to US firms.

In an open letter coordinated by the German research group Laion (Large-scale AI Open Network), the European parliament was told that one-size-fits-all rules risked eliminating open research and development.

Rules that require a researcher or developer to monitor or control downstream use could make it impossible to release open-source AI in Europe, which would entrench large firms, hamper efforts to improve transparency, reduce competition, limit academic freedom, and drive investment in AI overseas, the letter said.

"Europe cannot afford to lose AI sovereignty," the letter continued. "Eliminating open-source R&D will leave the European scientific community and economy critically dependent on a handful of foreign and proprietary firms for essential AI infrastructure."

The largest AI efforts, by companies such as OpenAI and Google, are heavily controlled by their creators. It is impossible to download the model behind ChatGPT, for instance, and the paid-for access that OpenAI provides to customers comes with a number of legal and technical restrictions on how it can be used. By contrast, open-source efforts involve creating a model and then releasing it for anyone to use, improve or adapt as they see fit.

"We are working on open-source AI because we think that sort of AI will be more safe, more accessible and more democratic," said Christoph Schuhmann, the organisational lead at Laion.


See the original post here:
UK and US intervene amid AI industry's rapid advances - The Guardian

Did Stephen Hawking Warn Artificial Intelligence Could Spell the … – Snopes.com

Image via Sion Touhig/Getty Images

On May 1, 2023, the New York Post ran a story saying that British theoretical physicist Stephen Hawking had warned that the development of artificial intelligence (AI) could mean "the end of the human race."

Hawking, who died in 2018, had indeed said so in an interview with the BBC in 2014.

"The development of full artificial intelligence could spell the end of the human race," Hawking said during the interview. "Once humans develop artificial intelligence, it would take off on its own and re-design itself at an ever-increasing rate."

Another story, from CNBC in 2017, relayed a similar warning about AI from the physicist. It came from Hawking's speech at the Web Summit technology conference in Lisbon, Portugal, according to CNBC. Hawking reportedly said:

Unless we learn how to prepare for, and avoid, the potential risks, AI could be the worst event in the history of our civilization. It brings dangers, like powerful autonomous weapons, or new ways for the few to oppress the many. It could bring great disruption to our economy.

Such warnings became more common in 2023. In March, tech leaders, scientists, and entrepreneurs warned about the dangers posed by AI creations, like ChatGPT, to humanity.

"AI systems with human-competitive intelligence can pose profound risks to society and humanity, as shown by extensive research and acknowledged by top AI labs," they wrote in an open letter published by the Future of Life Institute, a nonprofit. The letter garnered over 27,500 signatures as of this writing in early May 2023. Among the signatories were Elon Musk, CEO of SpaceX, Tesla, and Twitter; Apple co-founder Steve Wozniak; and Pinterest co-founder Evan Sharp.

In addition, Snopes and other fact-checking organizations noted a dramatic uptick in misinformation conveyed on social media via AI-generated content in 2022 and 2023.

Then, on May 2, Geoffrey Hinton, a long-time researcher at Google, quit the technology behemoth to sound the alarm about AI products. Hinton, known as the "Godfather of AI," told MIT Technology Review that chatbots like GPT-4, made by the AI lab OpenAI, "are on track to be a lot smarter than he thought they'd be."

Given that Hawking was indeed documented as warning about the potential for AI to "spell the end of the human race," we rate this quote as correctly attributed to him.

"Geoffrey Hinton Tells Us Why He's Now Scared of the Tech He Helped Build." MIT Technology Review, https://www.technologyreview.com/2023/05/02/1072528/geoffrey-hinton-google-why-scared-ai/. Accessed 3 May 2023.

"'Godfather of AI' Leaves Google, Warns of Tech's Dangers." AP NEWS, 2 May 2023, https://apnews.com/article/ai-godfather-google-geoffery-hinton-fa98c6a6fddab1d7c27560f6fcbad0ad.

"Pause Giant AI Experiments: An Open Letter." Future of Life Institute, https://futureoflife.org/open-letter/pause-giant-ai-experiments/. Accessed 3 May 2023.

Stephen Hawking Says AI Could Be "worst Event" in Civilization. 6 Nov. 2017, https://web.archive.org/web/20171106191334/https://www.cnbc.com/2017/11/06/stephen-hawking-ai-could-be-worst-event-in-civilization.html.

Stephen Hawking Warned AI Could Mean the "End of the Human Race." 3 May 2023, https://web.archive.org/web/20230503162420/https://nypost.com/2023/05/01/stephen-hawking-warned-ai-could-mean-the-end-of-the-human-race/.

"Stephen Hawking Warns Artificial Intelligence Could End Mankind." BBC News, 2 Dec. 2014. http://www.bbc.com, https://www.bbc.com/news/technology-30290540.

Damakant Jayshi is a fact-checker for Snopes, based in Atlanta.

Read this article:
Did Stephen Hawking Warn Artificial Intelligence Could Spell the ... - Snopes.com

Artificial Intelligence and Jobs: Who's at Risk – Barron's

Since the release of ChatGPT, companies have scrambled to understand how generative artificial intelligence will affect jobs. This past week, IBM CEO Arvind Krishna said the company will pause hiring for roles that could be replaced by AI, affecting as much as 30% of back-office jobs over five years. And Chegg, which provides homework help and online tutoring, saw its stock lose half of its value after warning of slower growth as students turned to ChatGPT.

A recent study by a team of professors from Princeton University, the University of Pennsylvania, and New York University analyzed how generative AI relates to 52 human abilities. The researchers then calculated AI exposure for occupations. (Exposure doesn't necessarily mean job loss.) Among high-exposure jobs, a few are obvious: telemarketers, HR specialists, loan officers, and law clerks. More surprising: eight of the top 10 are humanities professors.
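An occupation-level exposure score of this kind can be illustrated as an importance-weighted average of how exposed each required ability is to generative AI. The ability names, weights, and exposure ratings below are invented for demonstration; they are not the study's data or its exact formula.

```python
def occupation_exposure(ability_weights, ai_exposure):
    """Exposure of an occupation, computed as the importance-weighted
    average of the AI-exposure scores of the abilities it relies on.
    (Illustrative toy version only, not the study's actual method.)"""
    total_weight = sum(ability_weights.values())
    return sum(w * ai_exposure[a] for a, w in ability_weights.items()) / total_weight

# Hypothetical ability-level exposure scores (0 = untouched, 1 = fully exposed)
ai_exposure = {
    "written comprehension": 0.9,
    "oral expression": 0.8,
    "manual dexterity": 0.1,
}

# Hypothetical importance weights for one occupation
telemarketer = {"oral expression": 5, "written comprehension": 2, "manual dexterity": 1}
print(occupation_exposure(telemarketer, ai_exposure))
```

Under this scheme, language-heavy occupations score high because the abilities they weight most (comprehension, expression) are exactly the ones generative AI handles best, which is consistent with humanities professors ranking near the top.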

In a survey from customer-service software firm Tidio, 64% of respondents thought chatbots, robots, or AI can replace teachers, though many believe that empathy and listening skills may be tough to replicate. A survey from the Walton Family Foundation found that within two months of ChatGPT's introduction, 51% of teachers tapped it for lesson planning and creative ideas. Some 40% said they used it at least once a week, compared with 22% of students.

AI isn't just knocking on the door; it's already inside. Language-learning app Duolingo has been using AI since 2020. Even Chegg unveiled an AI learning service called CheggMate using OpenAI's GPT-4. Still, Morgan Stanley analyst Josh Baer wrote that it's highly unlikely that CheggMate can insulate the company from AI.

Write to Evie Liu at evie.liu@barrons.com


Devon Energy, KKR, McKesson, PayPal Holdings, and Tyson Foods release earnings.

Airbnb, Air Products & Chemicals, Apollo Global Management, Duke Energy, Electronic Arts, Occidental Petroleum, and TransDigm Group report quarterly results.

The National Federation of Independent Business releases its Small Business Optimism Index for April. Consensus estimate is for a 90 reading, roughly even with the March figure. The index has had 15 consecutive readings below the 49-year average of 98 as inflation and a tight labor market remain top of mind for small-business owners.

Walt Disney


Brookfield Asset Management, Roblox, Toyota Motor, and Trade Desk release earnings.

The Bureau of Labor Statistics releases the consumer price index for April. Economists forecast a 5% year-over-year increase, matching the March data. The core CPI, which excludes volatile food and energy prices, is expected to rise 5.4%, two-tenths of a percentage point less than previously. Both indexes are well below their peaks from last year but also much higher than the Federal Reserve's 2% target.

Honda Motor, JD.com, PerkinElmer, and Tapestry hold conference calls to discuss quarterly results.


The Bank of England announces its monetary-policy decision. The central bank is widely expected to raise its bank rate by a quarter of a percentage point, to 4.5%. The United Kingdom's CPI rose 10.1% in March from the year prior, making it the only Western European country with a double-digit rate of inflation.


The Department of Labor reports initial jobless claims for the week ending on May 6. Claims averaged 239,250 in April, returning to historical averages after a prolonged period of being below trend, signaling a loosening of a very tight labor market.

The BLS releases the producer price index for April. The consensus call is for the PPI to increase 2.4% and the core PPI to rise 3.3%. This compares with gains of 2.7% and 3.4%, respectively, in March. The PPI and core PPI are at their lowest levels in about two years.

The University of Michigan releases its Consumer Sentiment Index for May. Economists forecast a dour 62.6 reading, about one point lower than in April. Consumers' year-ahead inflation expectations surprisingly jumped by a percentage point in April to 4.6%.

The rest is here:
Artificial Intelligence and Jobs: Who's at Risk - Barron's

Artificial intelligence helping detect early signs of breast cancer in some US hospitals – FOX 9 Minneapolis-St. Paul


October raises awareness for Breast Cancer and LiveNOW from FOX talks with a doctor about the advances in treatments and importance of early detection.

BOCA RATON, Fla. - Some doctors believe artificial intelligence is saving lives after a major advancement in breast cancer screenings. In some cases, AI is detecting early signs of the disease years before the tumor would be visible on a traditional scan.

The Christine E. Lynn Women's Health and Wellness Institute at the Boca Raton Regional Hospital found a 23% increase in cancer cases since implementing AI during breast cancer screenings.

Dr. Kathy Schilling, the medical director at the institute, told Fox News Digital the practice has nine dedicated breast radiologists who are all fellowship trained, so the increase in early detections was surprising.

"All we do is read breast imaging studies, and so I thought, you know, we were probably pretty good at what we were doing, but this study really comes in and shows us that even dedicated and committed breast radiologists can do better utilizing artificial intelligence," Schilling said.


"ProFound AI," created by iCad, is designed to flag problem areas on mammograms. The program studied millions of breast cancer scans and, over time, learned to circle lesions and estimate the cancer risk.

"If you realize that 90% of the cases are benign and have no findings, you know, you just become fatigued. You get mesmerized by scrolling through the images. The AI helps us to refocus and find those little tiny cancers that we're looking for," Schilling said.

Medical personnel use a mammogram to examine a woman's breast for breast cancer. (Photo by Michael Hanschke/picture alliance via Getty Images)

ProFound AI became the first technology of its kind to be FDA cleared in December 2018. The Christine E. Lynn Women's Health and Wellness Institute adopted the groundbreaking technology during the COVID-19 pandemic, and the hospital now boasts one of the earliest studies on AI's impact on cancer.

"What I think we're going to be finding is that we're finding cancers when they're three to six millimeters in size, and finding the invasive lobular cancers which are very difficult for us to find, because they don't form masses in the breast," Schilling said.

Schilling also stated that over the past two years, the institute has been able to offer less severe therapies to patients diagnosed with breast cancer because the cancers detected are so small.

"We are doing smaller lumpectomies, fewer mastectomies, less chemotherapy, less radiation therapy," she continued. "I think we're entering into a whole new era in breast care."


Schilling also believes AI's early detection capabilities may have helped save Luz Torres' life after a routine mammogram on April 1 revealed a small cancerous tumor. Torres said she had no symptoms or inclination that something could be wrong.

"I have very dense breast tissue, so I always have a mammography and an ultrasound. The recommendation of that visit was the breast biopsy, so I had that done within a week's time, and then I got a phone call that the pathology was breast cancer," Torres said in an emotional interview. "It was an early detection. I come every year, I'm on track with my mammography, so it's very small tumor."

RELATED: New FDA rule requires info on breast density with all mammograms

Torres was diagnosed with stage 1 breast cancer in early April and recently completed surgery. Fortunately, she is expected to make a full recovery after early detection.

"It looks good. Because it was called early stage 1, I won't need chemotherapy so very happy about that," said Torres, who described the institute as "amazing."


Dr. Ko Un Park, a surgical oncologist at OSUs Comprehensive Cancer Center, discusses the signs of inflammatory breast cancer, treatment, and other things to know about the rare, yet deadly form of the disease.

"The desire to improve the technology for the patients to find this breast cancer in patients early when it's treatable, and the prognosis ends up being great. I'm fortunate enough to be one of those patients. It's a blessing," she concluded.

Several companies have released AI products with the ability to flag abnormalities during cancer screenings. Doctors are also using AI to detect brain cancer, lung cancer and prostate cancer.

Find more updates on this story at FOXNews.com.

Link:
Artificial intelligence helping detect early signs of breast cancer in some US hospitals - FOX 9 Minneapolis-St. Paul