Archive for the ‘Ai’ Category

Humane AI Pin review and OpenAI's YouTube project – The Verge

Seven. Hundred. Dollars. After a year of asking questions about this much-hyped AI wearable, the Humane AI Pin is here, and, well, we still have lots of questions. We're also still trying to figure out how it all works and where it goes from here.

On this episode of The Vergecast, we dive deep into our review of the AI Pin and try to figure out what went wrong with this device and whether there's a real future for it or any other AI-powered gadget. The trouble, we discover, is that these devices are stacking new technology on top of new technology, and until it all works perfectly, none of it will work very well. Also, did we mention the AI Pin is seven hundred dollars?

After that, we talk about the growing rift between OpenAI and the rest of the internet after some very good reporting showed how many millions of YouTube videos the company transcribed and used to train its models. We also talk about how Taylor Swift's music came back to TikTok and whether there might be more to come.

Finally, we get a remarkably on-brand set of news in the lightning round, including E Ink screens, content regulation, and photo sharing. Sony also made a new party speaker, so of course, we spend an unreasonably long time on the party speaker. You have to look at those photos.

If you want to know more about everything we discuss in this episode, here are a few links to get you started, beginning with Humane:

And in the lightning round:

Read more from the original source:

Humane AI Pin review and OpenAI's YouTube project - The Verge

AI makes retinal imaging 100 times faster, compared to manual method – National Institutes of Health (NIH) (.gov)

News Release

Wednesday, April 10, 2024

NIH scientists use artificial intelligence called P-GAN to improve next-generation imaging of cells in the back of the eye.

Researchers at the National Institutes of Health applied artificial intelligence (AI) to a technique that produces high-resolution images of cells in the eye. They report that with AI, imaging is 100 times faster and improves image contrast 3.5-fold. The advance, they say, will provide researchers with a better tool to evaluate age-related macular degeneration (AMD) and other retinal diseases.

"Artificial intelligence helps overcome a key limitation of imaging cells in the retina, which is time," said Johnny Tam, Ph.D., who leads the Clinical and Translational Imaging Section at NIH's National Eye Institute.

Tam is developing a technology called adaptive optics (AO) to improve imaging devices based on optical coherence tomography (OCT). Like ultrasound, OCT is noninvasive, quick, painless, and standard equipment in most eye clinics.

"Adaptive optics takes OCT-based imaging to the next level," said Tam. "It's like moving from a balcony seat to a front row seat to image the retina. With AO, we can reveal 3D retinal structures at cellular-scale resolution, enabling us to zoom in on very early signs of disease."

While adding AO to OCT provides a much better view of cells, processing AO-OCT images after they've been captured takes much longer than OCT without AO.

Tam's latest work targets the retinal pigment epithelium (RPE), a layer of tissue behind the light-sensing retina that supports the metabolically active retinal neurons, including the photoreceptors. The retina lines the back of the eye and captures, processes, and converts the light that enters the front of the eye into signals that it then transmits through the optic nerve to the brain. Scientists are interested in the RPE because many diseases of the retina occur when the RPE breaks down.

Imaging RPE cells with AO-OCT comes with new challenges, including a phenomenon called speckle. Speckle interferes with AO-OCT the way clouds interfere with aerial photography. At any given moment, parts of the image may be obscured. Managing speckle is somewhat similar to managing cloud cover. Researchers repeatedly image cells over a long period of time. As time passes, the speckle shifts, which allows different parts of the cells to become visible. The scientists then undertake the laborious and time-consuming task of piecing together many images to create an image of the RPE cells that's speckle-free.
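For readers who want a concrete picture of that manual method, here is a minimal sketch in Python. It assumes the repeated AO-OCT frames have already been registered to one another and simply averages them so the shifting speckle cancels out; the frame count, image size, and synthetic data are illustrative only, not the NIH pipeline.

```python
# Minimal sketch of the "manual" approach described above: average many
# co-registered frames so that shifting speckle cancels out.
# Real pipelines also require careful registration before averaging.
import numpy as np

def despeckle_by_averaging(frames: np.ndarray) -> np.ndarray:
    """frames: array of shape (n_frames, height, width), already registered."""
    return frames.mean(axis=0)

# Synthetic example: a fixed "cell" pattern plus random noise standing in for speckle.
rng = np.random.default_rng(0)
truth = rng.random((64, 64))
frames = truth + 0.5 * rng.standard_normal((120, 64, 64))  # 120 noisy captures
recovered = despeckle_by_averaging(frames)
print(np.abs(recovered - truth).mean())  # far smaller than the single-frame error
```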

Tam and his team developed a novel AI-based method called parallel discriminator generative adversarial network (P-GAN), a deep learning algorithm. By feeding the P-GAN network nearly 6,000 manually analyzed AO-OCT-acquired images of human RPE, each paired with its corresponding speckled original, the team trained the network to identify and recover speckle-obscured cellular features.
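To make that training setup more tangible, here is a heavily simplified, hypothetical sketch of paired adversarial training in PyTorch. It is not the NIH team's P-GAN: the published method uses parallel discriminators and nearly 6,000 AO-OCT image pairs, whereas this sketch uses a single discriminator, tiny placeholder networks, and random tensors standing in for the speckled/de-speckled pairs.

```python
# Simplified, hypothetical paired-GAN training loop for speckle removal.
# Network sizes, losses, and data are illustrative placeholders only.
import torch
import torch.nn as nn

class Generator(nn.Module):
    """Maps a speckled image to a de-speckled estimate."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 1, 3, padding=1),
        )
    def forward(self, x):
        return self.net(x)

class Discriminator(nn.Module):
    """Scores whether an image looks like a real (manually averaged) target."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 32, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Flatten(), nn.LazyLinear(1),
        )
    def forward(self, x):
        return self.net(x)

gen, disc = Generator(), Discriminator()
g_opt = torch.optim.Adam(gen.parameters(), lr=2e-4)
d_opt = torch.optim.Adam(disc.parameters(), lr=2e-4)
bce, l1 = nn.BCEWithLogitsLoss(), nn.L1Loss()

# Placeholder data: batches of (speckled, clean) 64x64 image pairs.
speckled = torch.rand(8, 1, 64, 64)
clean = torch.rand(8, 1, 64, 64)

for step in range(100):
    # Discriminator step: real averaged images vs. generator output.
    fake = gen(speckled).detach()
    d_loss = bce(disc(clean), torch.ones(8, 1)) + bce(disc(fake), torch.zeros(8, 1))
    d_opt.zero_grad(); d_loss.backward(); d_opt.step()

    # Generator step: fool the discriminator while staying close to the clean target.
    fake = gen(speckled)
    g_loss = bce(disc(fake), torch.ones(8, 1)) + 100.0 * l1(fake, clean)
    g_opt.zero_grad(); g_loss.backward(); g_opt.step()
```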

When tested on new images, P-GAN successfully de-speckled the RPE images, recovering cellular details. With one image capture, it generated results comparable to the manual method, which required the acquisition and averaging of 120 images. With a variety of objective performance metrics that assess things like cell shape and structure, P-GAN outperformed other AI techniques. Vineeta Das, Ph.D., a postdoctoral fellow in the Clinical and Translational Imaging Section at NEI, estimates that P-GAN reduced imaging acquisition and processing time by about 100-fold. P-GAN also yielded greater contrast, about 3.5 times greater than before.

Tam believes that integrating AI with AO-OCT overcomes a major obstacle to routine clinical imaging with AO-OCT, especially for diseases that affect the RPE, which has traditionally been difficult to image.

"Our results suggest that AI can fundamentally change how images are captured," said Tam. "Our P-GAN artificial intelligence will make AO imaging more accessible for routine clinical applications and for studies aimed at understanding the structure, function, and pathophysiology of blinding retinal diseases. Thinking about AI as a part of the overall imaging system, as opposed to a tool that is only applied after images have been captured, is a paradigm shift for the field of AI."

More news from the NEI Clinical and Translational Imaging Section.

This press release describes a basic research finding. Basic research increases our understanding of human behavior and biology, which is foundational to advancing new and better ways to prevent, diagnose, and treat disease. Science is an unpredictable and incremental process; each research advance builds on past discoveries, often in unexpected ways. Most clinical advances would not be possible without the knowledge of fundamental basic research. To learn more about basic research, visit https://www.nih.gov/news-events/basic-research-digital-media-kit.

NEI leads the federal government's efforts to eliminate vision loss and improve quality of life through vision research, driving innovation, fostering collaboration, expanding the vision workforce, and educating the public and key stakeholders. NEI supports basic and clinical science programs to develop sight-saving treatments and to broaden opportunities for people with vision impairment. For more information, visit https://www.nei.nih.gov.

About the National Institutes of Health (NIH): NIH, the nation's medical research agency, includes 27 Institutes and Centers and is a component of the U.S. Department of Health and Human Services. NIH is the primary federal agency conducting and supporting basic, clinical, and translational medical research, and is investigating the causes, treatments, and cures for both common and rare diseases. For more information about NIH and its programs, visit http://www.nih.gov.

NIH…Turning Discovery Into Health

Vineeta Das, Furu Zhang, Andrew Bower, et al. Revealing speckle obscured living human retinal cells with artificial intelligence assisted adaptive optics optical coherence tomography. Communications Medicine. April 10, 2024. https://doi.org/10.1038/s43856-024-00483-1.

###

Original post:

AI makes retinal imaging 100 times faster, compared to manual method - National Institutes of Health (NIH) (.gov)

AI Is Poised to Replace the Entry-Level Grunt Work of a Wall Street Career – The New York Times

Pulling all-nighters to assemble PowerPoint presentations. Punching numbers into Excel spreadsheets. Finessing the language on esoteric financial documents that may never be read by another soul.

Such grunt work has long been a rite of passage in investment banking, an industry at the top of the corporate pyramid that lures thousands of young people every year with the promise of prestige and pay.

Until now. Generative artificial intelligence, the technology upending many industries with its ability to produce and crunch new data, has landed on Wall Street. And investment banks, long inured to cultural change, are rapidly turning into Exhibit A on how the new technology could not only supplement but supplant entire ranks of workers.

The jobs most immediately at risk are those performed by analysts at the bottom rung of the investment banking business, who put in endless hours to learn the building blocks of corporate finance, including the intricacies of mergers, public offerings and bond deals. Now, A.I. can do much of that work speedily and with considerably less whining.

"The structure of these jobs has remained largely unchanged at least for a decade," said Julia Dhar, head of BCG's Behavioral Science Lab and a consultant to major banks experimenting with A.I. The inevitable question, as she put it, is "do you need fewer analysts?"


See more here:

AI Is Poised to Replace the Entry-Level Grunt Work of a Wall Street Career - The New York Times

Wall Street is bullish on copper, thanks to AI. Analysts love these stocks, giving one 234% upside – CNBC

Wall Street is getting very bullish on copper, despite the metal's recent rallies. The rallies have been fueled by supply risks and rising demand amid the energy transition and the artificial intelligence boom. Copper is used in data centers for power cables, electrical connectors, power strips and more, Jefferies noted in an April 10 note. It estimates that global copper demand from data centers will increase from 239 kt (thousand tons) in 2023 to at least 450 kt per annum in 2030. "Our analysis shows that this potential demand growth will exacerbate an underlying copper market deficit, ultimately leading to higher prices," Jefferies analysts wrote.

Data centers house vast amounts of computing power needed for AI workloads, and that need is set to grow as many tech companies rapidly build out infrastructure for artificial intelligence. Large language models require a lot of data center capacity. In a recent note, Morgan Stanley predicted that the price of the metal will reach $10,500 per ton by the fourth quarter of this year, representing around 12% upside. "Hopes for GenAI / data centre copper demand growth are adding to investor bullishness on copper, against a backdrop of constrained supply," it wrote.

Demand for copper is also widely considered an indicator of economic health. The metal has a wide range of applications throughout construction and industry. It's also a critical component in electric vehicles, used in batteries, wiring, charging points and more.

For those looking to buy into the sector, CNBC Pro screened for stocks in the Global X Copper Miners ETF. The following stocks have buy ratings from 50% or more of the analysts covering them, average price target upside of 10% or more, and are covered by at least five analysts. Canadian firm Solaris Resources stood out for having more than 200% potential upside, the highest on the list, and a 100% buy rating. Filo Mining also made the cut, with 25% upside from analysts and a 92% buy rating. In addition to the Global X Copper Miners ETF, those who want to invest in this sector via exchange-traded funds can consider the Sprott Copper Miners ETF and the iShares Copper and Metals Mining ETF.
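Purely as an illustration, the screening criteria described above (coverage by at least five analysts, buy ratings from at least half of them, and average price target upside of 10% or more) could be expressed as a simple pandas filter. The table below is made-up sample data, not CNBC Pro's actual screen or data source.

```python
# Hypothetical illustration of the screen described above; tickers and values are invented.
import pandas as pd

stocks = pd.DataFrame({
    "ticker": ["AAA", "BBB", "CCC"],
    "analysts_covering": [7, 12, 4],
    "pct_buy_ratings": [100.0, 92.0, 60.0],
    "avg_price_target_upside_pct": [234.0, 25.0, 15.0],
})

screen = stocks[
    (stocks["analysts_covering"] >= 5)
    & (stocks["pct_buy_ratings"] >= 50.0)
    & (stocks["avg_price_target_upside_pct"] >= 10.0)
]
print(screen)
```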

Read more:

Wall Street is bullish on copper, thanks to AI. Analysts love these stocks, giving one 234% upside - CNBC

AI model has potential to detect risk of childbirth-related post-traumatic stress disorder – National Institutes of Health (NIH) (.gov)

Media Advisory

Thursday, April 11, 2024

NIH-funded study suggests model could identify large percentage of those at risk.

Researchers have adapted an artificial intelligence (AI) program to identify signs of childbirth-related post-traumatic stress disorder (CB-PTSD) by evaluating short narrative statements of patients who have given birth. The program successfully identified a large proportion of participants likely to have the disorder, and with further refinements, such as details from medical records and birth experience data from diverse populations, the model could potentially identify a large percentage of those at risk. The study, which was funded by the National Institutes of Health, appears in Scientific Reports.

Worldwide, CB-PTSD affects about 8 million people who give birth each year, and current practice for diagnosing CB-PTSD requires a physician evaluation, which is time-consuming and costly. An effective screening method has the potential to rapidly and inexpensively identify large numbers of postpartum patients who could benefit from diagnosis and treatment. Untreated CB-PTSD may interfere with breastfeeding, bonding with the infant and the desire for a future pregnancy. It also may worsen maternal depression, which can lead to suicidal thoughts and behaviors.

Investigators administered the CB-PTSD Checklist, which is a questionnaire designed to screen for the disorder, to 1,295 postpartum people. Participants also provided short narratives of approximately 30 words about their childbirth experience. Researchers then trained an AI model to analyze a subset of narratives from patients who also tested high for CB-PTSD symptoms on the questionnaire. Next, the model was used to analyze a different subset of narratives for evidence of CB-PTSD. Overall, the model correctly identified the narratives of participants who were likely to have CB-PTSD because they scored high on the questionnaire.
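As a rough illustration of the screening idea, the sketch below turns short narratives into numeric features and fits a classifier against questionnaire-derived labels. It is not the published model, which the authors describe as using narrative embeddings; TF-IDF features, logistic regression, and the toy examples here are generic stand-ins.

```python
# Generic, hypothetical sketch: classify short birth narratives against
# CB-PTSD Checklist-derived labels. Data and feature choices are illustrative only.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

narratives = [
    "The delivery went smoothly and I felt supported the whole time.",
    "I felt terrified and alone; I still have nightmares about the birth.",
]
labels = [0, 1]  # 1 = scored high on the CB-PTSD Checklist (hypothetical labels)

model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(narratives, labels)
print(model.predict(["I keep reliving the delivery and cannot sleep."]))
```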

The authors believe their work could eventually make the diagnosis of childbirth post-traumatic stress disorder more accessible, providing a means to compensate for past socioeconomic, racial, and ethnic disparities.

The study was conducted by Alon Bartal, Ph.D., of Bar-Ilan University in Israel, and led by senior author Sharon Dekel, Ph.D., of Massachusetts General Hospital and Harvard Medical School, Boston. Funding was provided by NIH's Eunice Kennedy Shriver National Institute of Child Health and Human Development (NICHD).

Maurice Davis, D.H.A., M.P.A., M.H.S.A., of the NICHD Pregnancy and Perinatology Branch, is available for comment.

Bartal A, et al. AI and narrative embeddings detect PTSD following childbirth via birth stories. Scientific Reports (2024).

About the Eunice Kennedy Shriver National Institute of Child Health and Human Development (NICHD): NICHD leads research and training to understand human development, improve reproductive health, enhance the lives of children and adolescents, and optimize abilities for all. For more information, visit https://www.nichd.nih.gov.

About the National Institutes of Health (NIH): NIH, the nation's medical research agency, includes 27 Institutes and Centers and is a component of the U.S. Department of Health and Human Services. NIH is the primary federal agency conducting and supporting basic, clinical, and translational medical research, and is investigating the causes, treatments, and cures for both common and rare diseases. For more information about NIH and its programs, visit http://www.nih.gov.

NIH…Turning Discovery Into Health

###

See the rest here:

AI model has potential to detect risk of childbirth-related post-traumatic stress disorder - National Institutes of Health (NIH) (.gov)