Archive for the ‘Artificial Intelligence’ Category

‘Artificial Intelligence’ Integrated PET-CT launched at Yashoda Hospitals, Hyderabad on the occasion of World Cancer Day 2021 – PR Newswire India

"This year's World Cancer Day's theme, 'I Am and I Will', is all about you and your commitment to act. The new state-of-the-art artificial intelligence integrated PET-CT scanner at Yashoda Hospital Somajiguda is one more step towards our commitment to early detection of Cancer. The new scanner is now two times faster than the old generation scanners primarily due to the advanced technology known as 'Time of Flight'. The scanner provides best quality images with reduced scanning duration and lesser radiation dose," said Dr. G. Srinivasa Rao, Director of Public Health & Family Welfare, Government of Telangana.

Yashoda Hospitals Somajiguda is equipped with a comprehensive Nuclear Medicine setup, providing services such as PET-CT, gamma camera imaging and radionuclide therapy under one roof. Apart from the newly upgraded FDG PET-CT imaging, the department provides advanced and rare imaging such as Ga-68 DOTA, Ga-68 PSMA and 18F-DOPA PET-CT, DAT imaging and WBC scans, in addition to routine gamma imaging like bone scans and renal scintigraphy.

"Yashoda Hospitals Somajiguda is one of the busiest and high volume centres of radionuclide therapies for thyroid cancer, neuroendocrine tumours, and prostate cancer. The Centre also provides rare therapies like radiosynovectomy for inflammatory joint disease. Patients not only from Telangana and Andhra Pradesh, but across India, visitus for these rare therapies. NextGen PET-CT is effective in the diagnosis of Cancer, Endocrine Abnormalities and Neurodegenerative Disease," said Dr. Lingaiah Amidayala, Director - Medical Services, Yashoda Hospitals Group, Hyderabad.

The combined PET-CT scan at Yashoda Hospitals, Somajiguda merges PET and CT images, providing detailed information about the size and shape of lesions and accurately differentiating cancerous lesions from normal structures. It is a diagnostic examination that combines two state-of-the-art imaging modalities and produces three-dimensional (3D) images of the body based on the detection of radiation from the emission of positrons. It helps in the early detection of cancer and other potential health problems by revealing how tissues and organs are functioning.

Dr. Hrushikesh Aurangabadkar and Dr. A Naveen Kumar Reddy, Consultants in Nuclear Medicine, explained the PET-CT: "Cancer cells require a great deal of sugar, or glucose, to have enough energy to grow. PET scanning utilizes a radioactive molecule that is similar to glucose, called fluorodeoxyglucose (FDG). FDG accumulates within malignant cells because of their high rate of glucose metabolism. Once injected with this agent, the patient is imaged on the whole-body PET scanner to reveal areas of cancer growth, which are usually difficult to characterize by conventional CT, X-ray, or MRI."

With this new technology, motion artifacts caused by respiration can be decreased and accurate diagnosis achieved.

The use of PET scans will also help doctors more accurately detect the presence and location of new or recurrent cancers.

Relevant Links: https://www.yashodahospitals.com/location/somajiguda/

Nuclear Medicine: https://www.yashodahospitals.com/specialities/nuclear-medicine-hospital-in-hyderabad/

About Yashoda Hospitals Hyderabad

Yashoda Group of Hospitals has been providing quality healthcare for three decades for people with diverse medical needs. Under astute leadership and strong management, the Yashoda Group of Hospitals has evolved into a centre of excellence in medicine, providing the highest quality standards of medical treatment. Guided by the needs of patients and supported by revolutionary technology even for rare and complex procedures, the Yashoda Group offers sophisticated diagnostic and therapeutic care in virtually every specialty and subspecialty of medicine and surgery. The group currently operates three independent hospitals in Secunderabad, Somajiguda and Malakpet, with an upcoming hospital (currently under development) in Hi-Tech City, Telangana, which is expected to be one of the largest medical facilities in India, spread over 20 lakh sq. ft. with a capacity of 2,000 beds. With a constant and relentless emphasis on quality, excellence in service and empathy, the Yashoda Group provides world-class healthcare services at affordable costs.

Photo: https://mma.prnewswire.com/media/1433696/AI_PET_CT_Launched_Yashoda.jpg

SOURCE Yashoda Hospitals Hyderabad

View post:
'Artificial Intelligence' Integrated PET-CT launched at Yashoda Hospitals, Hyderabad on the occasion of World Cancer Day 2021 - PR Newswire India

Artificial Intelligence Is a Work in Progress, Official Says – Department of Defense

Expectations are high that artificial intelligence will be a game changer for the military and it is, in fact, one of the Defense Department's top priorities.

"We're in the very early days of a very long history of continued very rapid development in the AI field," said William Scherlis, director of the Information Innovation Office at the Defense Advanced Research Projects Agency. He spoke yesterday at a virtual panel discussion at the Defense One Genius Machines 2021 summit.

There are a lot of moving parts to AI that must come together to make it all work for the warfighter, he said.

Components include machine learning, symbolic reasoning, statistical learning, knowledge representation, search and planning, data, cloud infrastructure, algorithms and computing, he said.

"If you want to do strategy planning, then you're gonna have a mashup of machine learning with, maybe, game theory and a few other elements. So when we talk about AI, sometimes people are referring to just machine-learning algorithms and data and training. But in the systems engineering context, we're really talking about how to build systems that, that have elements of AI capability embedded within them," he said.

Scherlis discussed the history of AI, going back to the 1940s, and noted that there have been three waves of development.

The first wave involved symbolic AI, which has explicit rules, such as if it's raining, then bring an umbrella, he said. Commercial income tax programs operate this way, using rules, logic and reasoning to reach a conclusion.
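As a rough illustration only (not DARPA's software), first-wave behaviour can be sketched as explicit if-then rules applied to known facts, in the spirit of the umbrella and tax-software examples above; every rule, fact, and threshold below is invented for the example.

```python
# Minimal sketch of first-wave, rule-based ("symbolic") AI: explicit
# if-then rules applied to known facts. All rules and facts are invented.

facts = {"raining": True, "income": 52000, "dependents": 2}

rules = [
    # (condition over facts, conclusion to add to facts)
    (lambda f: f.get("raining"), ("bring_umbrella", True)),
    (lambda f: f.get("income", 0) > 50000, ("higher_tax_bracket", True)),
    (lambda f: f.get("dependents", 0) > 0, ("claim_dependent_credit", True)),
]

# Forward chaining: keep applying rules until no new conclusions appear.
changed = True
while changed:
    changed = False
    for condition, (key, value) in rules:
        if condition(facts) and facts.get(key) != value:
            facts[key] = value
            changed = True

print(facts)
```

The appeal of this style is that every conclusion can be traced back to an explicit, human-readable rule; the drawback, as the article notes, is that someone has to write all of those rules by hand.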

The second wave involved neural nets, which Scherlis refers to as statistical AI. Neural nets attempt to replicate higher-order human thinking skills, such as problem solving.

All AI relies on having good data. But although data is certainly important, the real game-changer for AI will be the third wave, in which symbolic AI is meshed with statistical AI to get the best of both worlds, Scherlis predicted.
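As a loose, invented sketch of that meshing (not Scherlis's or DARPA's design), a learned statistical score can be combined with explicit symbolic rules that constrain the final decision; the function names, readings, and thresholds below are hypothetical.

```python
# Toy illustration of combining a statistical estimate with symbolic rules:
# a model scores an outcome, and hard, human-readable rules constrain the
# final decision. Everything here is invented for illustration.

def learned_risk_score(sensor_reading: float) -> float:
    """Stand-in for a trained statistical model (e.g. a neural net)."""
    return min(1.0, max(0.0, (sensor_reading - 20.0) / 80.0))

def decide(sensor_reading: float, maintenance_mode: bool) -> str:
    score = learned_risk_score(sensor_reading)
    # Symbolic rules encode explicit constraints on the output.
    if maintenance_mode:
        return "no_action"          # rule: never act during maintenance
    if score > 0.8:
        return "alert_operator"     # rule: high learned risk -> alert
    return "continue_monitoring"

print(decide(sensor_reading=95.0, maintenance_mode=False))
```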

"This is a wide open research area, but there's a lot of good work in this area and I think it's very promising," he said, referring to third wave research.

This third wave will need to focus on how AI systems interact with humans in a productive and symbiotic way, he said.

Warriors will have to understand what it's like to have an AI as a trusted team member, he said.

Currently, AI isn't yet ready for prime time, he said. It's still fragile, opaque, biased and not robust enough, which means it is not yet trustworthy.

"At DARPA, we have another number of programs that are, that are addressing these challenges," he added.

Read the original here:
Artificial Intelligence Is a Work in Progress, Official Says - Department of Defense

Researchers speed up analysis of Arctic ice and snow data through artificial intelligence – National Science Foundation

AI technique enables researchers to study data trends more quickly, improving prediction ability

Researchers are speeding up analysis of Arctic ice and snow data through AI.

January 27, 2021

Researchers at the University of Maryland, Baltimore County have developed a technique for quicker analysis of extensive data from Arctic ice sheets to gain knowledge of patterns and trends.

Over the years, vast amounts of data have been collected about Arctic and Antarctic ice. These data are essential for scientists and policymakers seeking to understand climate change and the current trend of melting.

Researchers Masoud Yari and Maryam Rahnemoonfar have utilized new AI technology to develop a fully automatic technique to analyze ice data. They describe the technology in the Journal of Glaciology. Their effort is part of the U.S. National Science Foundation's ongoing BigData project. The work builds on new image-processing algorithms developed by John Paden at the University of Kansas.

"It is great to see the cooperation between computer revision and machine learning to help predict ice changes," said Sylvia Spengler, a program director in NSF's Computer and Information Science and Engineering Directorate.

For decades, researchers have kept close track of polar ice, snow and soil measurements, but processing the large volume of available data has proven challenging.

According to Rahnemoonfar, "Radar big data is very difficult to mine and understand just by using manual techniques." The AI techniques she and Yari are developing can be used to mine the data more quickly, to get useful information on trends related to the thickness of the ice sheets and the level of snow accumulation in a certain location.

The researchers developed an algorithm that learns how to identify objects and patterns in the Arctic and Antarctic data. An AI algorithm must be exposed to hundreds of thousands of examples to learn how to identify important elements and patterns. Rahnemoonfar and her team used existing, incompletely labeled Arctic data to train the AI algorithm to categorize and understand new data.
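The study itself uses deep image-processing networks on radar imagery; purely as a simplified, hypothetical sketch of the supervised step described above (labeled examples teaching a model to categorize new data), one could train a generic classifier on invented radar-derived features:

```python
# Simplified, hypothetical sketch of supervised learning on labeled radar
# data. The features, labels, and model choice are invented for illustration
# and are NOT the authors' actual deep-learning pipeline.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Invented radar-derived features: [echo intensity, layer depth, roughness]
n = 2000
features = rng.normal(size=(n, 3))
# Invented labels: 1 = "ice layer boundary", 0 = "background"
labels = (features[:, 0] + 0.5 * features[:, 1] > 0.3).astype(int)

X_train, X_test, y_train, y_test = train_test_split(
    features, labels, test_size=0.25, random_state=0
)

model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(X_train, y_train)

print("held-out accuracy:", model.score(X_test, y_test))
```

The real pipeline works on imagery rather than hand-picked features, but the training loop follows the same pattern: labeled examples in, a model that can categorize unseen data out.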

The algorithm's training is not yet complete, as it will need to be scaled up over multiple sensors and locations to create a more accurate tool. However, it has already successfully begun to automate a process that was previously inefficient and labor-intensive.

The rapid expansion of AI technology to understand ice and snow thickness in the Arctic is allowing scientists and researchers to make faster and more accurate climate predictions. The rate at which Arctic ice is melting impacts sea level rise, and, if scientists are better able to predict the severity of the melting, society can better mitigate the harm caused by sea level rise, the researchers say.

Read more here:
Researchers speed up analysis of Arctic ice and snow data through artificial intelligence - National Science Foundation

Engineering and artificial intelligence combine to safeguard COVID-19 patients – Princeton University

Spurred by the demands of the COVID-19 pandemic, researchers at Princeton and Google are applying mechanical engineering and artificial intelligence to increase the availability and effectiveness of ventilation treatments worldwide.

Ventilators and their support equipment are expensive and complex devices that require expert attention from doctors and other highly trained medical workers. The devices must be carefully calibrated and monitored to ensure they are meeting a range of parameters (pressure, volume, breath rate) tuned to each individual patient. Often, these measures change during treatment, requiring further tuning.

If that monitoring and adjustment is handled by artificial intelligence, it could ease the burden on medical workers and allow ventilators to be deployed in areas with staffing shortages. That was the logic that led Elad Hazan, a professor of computer science and director of Google AI Princeton, and Daniel Cohen, an assistant professor of mechanical and aerospace engineering, to launch the project.

Graduate student Daniel Suo and senior Paula Gradu are part of a team of researchers using AI to improve the way ventilators assist patients.

Photo by Aaron Nathans, Office of Engineering Communications

"Modern ventilators seek to maximize clinical outcomes while at the same time protecting patients from excessive levels of pressure and volume," said Daniel Notterman, a board-certified pediatric intensive care physician with experience managing patients with respiratory failure, who is also a lecturer with the rank of professor in molecular biology. "Although conceptually simple, the regulation of ventilator performance is extremely complex. This effort provided the opportunity for experts in programming, engineering and clinical medicine to rethink many of the usual solutions, under the leadership of Professor Cohen."

Since the initial COVID-19 outbreak last spring, Cohen's team had been working to design low-cost ventilators using readily available parts. Initially, Cohen met with Hazan to discuss a control system for the new design. But the researchers realized that artificial intelligence could improve controls for all ventilators, not just the type designed at Princeton.

"The hypothesis is that applying AI tools can make systems more robust and safer," Hazan said.

"Access to Cohen's ventilator has been critical," Hazan said. The physics underlying breathing is complex, and breaking the fluid dynamics down into working equations is generally impractical and inaccurate. So instead of approaching the control problem through the physics of the lungs, the researchers ran experiments on the Cohen team's ventilators and applied machine learning to uncover patterns in the data that would guide the safe and effective operation of the ventilator.
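The article does not publish the Princeton/Google controller itself; the following is only a heavily simplified sketch of the general idea of learning a ventilator's pressure response from logged experiments rather than from first-principles fluid dynamics, with invented data, an invented linear model, and an invented control rule.

```python
# Heavily simplified, hypothetical sketch of learning ventilator behaviour
# from logged experiments. The data, model, and control rule are invented
# and are NOT the Princeton/Google controller.
import numpy as np

rng = np.random.default_rng(1)

# Invented experiment log: inputs = [current pressure, valve opening 0-1],
# target = pressure measured one step later.
current_pressure = rng.uniform(5, 35, size=500)
valve_opening = rng.uniform(0, 1, size=500)
next_pressure = 0.9 * current_pressure + 12.0 * valve_opening + rng.normal(0, 0.5, 500)

# Fit a simple linear model of the learned dynamics from the log.
X = np.column_stack([current_pressure, valve_opening, np.ones_like(valve_opening)])
coeffs, *_ = np.linalg.lstsq(X, next_pressure, rcond=None)

def choose_valve(pressure_now: float, pressure_target: float) -> float:
    """Pick the valve opening whose predicted next pressure is closest to target."""
    candidates = np.linspace(0.0, 1.0, 101)
    preds = coeffs[0] * pressure_now + coeffs[1] * candidates + coeffs[2]
    return float(candidates[np.argmin(np.abs(preds - pressure_target))])

print(choose_valve(pressure_now=18.0, pressure_target=25.0))
```

The design choice the paragraph describes is the same in spirit: let data stand in for an intractable physical model, then use the learned model to drive the control decision.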

Tom Zajdel, a postdoctoral researcher, was part of the team that designed and built a new ventilator at Princeton. The open-source design uses readily available parts.

The development of the ventilator began as part of an effort by Cohen and Notterman to design a new system that was inexpensive and could be assembled from off-the-shelf parts.

"It basically goes together like Legos," said Julienne LaChance, a graduate student in Cohen's lab who led the prototype construction efforts from her garage. "I picture my high school robotics team putting this together."

The ventilator is now fully built and meets key FDA performance standards, while costing less than $1,500, roughly a tenth to a twentieth of the price of commercial ventilators, Cohen said. The team is now actively seeking manufacturing partners to help push for regulatory approval, especially in less affluent countries in need of ventilators.

"We have been using robust, simple parts that we put together with a lot of very well done software and coding," said Cohen. "We are trying to develop a generalized platform that anyone can work with, or improve upon, anywhere in the world, even after the pandemic."

Researchers from Hazan's lab include senior Paula Gradu; graduate students Xinyi Chen, Udaya Ghai, Edgar Minasyan, Karan Singh and Daniel Suo; and recent Ph.D. graduates Naman Agarwal and Cyril Zhang. In addition to LaChance, Notterman and Cohen, the local Princeton ventilator team includes postdoctoral researchers Tom Zajdel and Manuel Schottdorf, senior research software engineer Grant Wallace, and graduate students Sophie Dvali and Zhenyu Song, as well as a number of external collaborators.

Editor's note: For the full version of this story, visit the Engineering website.

Read the original here:
Engineering and artificial intelligence combine to safeguard COVID-19 patients - Princeton University

Five ways artificial intelligence can help space exploration – The Conversation UK

Artificial intelligence has been making waves in recent years, enabling us to solve problems faster than traditional computing could ever allow. Recently, for example, Google's artificial intelligence subsidiary DeepMind developed AlphaFold2, a program that solved the protein-folding problem, a challenge that had baffled scientists for 50 years.

Advances in AI have allowed us to make progress in all kinds of disciplines, and these are not limited to applications on this planet. From designing missions to clearing Earth's orbit of junk, here are a few ways artificial intelligence can help us venture further in space.

Do you remember Tars and Case, the assistant robots from the film Interstellar? While these robots don't exist yet for real space missions, researchers are working towards something similar, creating intelligent assistants to help astronauts. These AI-based assistants, even though they may not look as fancy as those in the movies, could be incredibly useful to space exploration.

A recently developed virtual assistant can potentially detect any dangers in lengthy space missions, such as changes in the spacecraft atmosphere (for example, increased carbon dioxide) or a sensor malfunction that could be potentially harmful. It would then alert the crew with suggestions for inspection.
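Purely as a minimal, hypothetical sketch of that kind of monitoring (not the actual assistant), incoming cabin readings could be checked against expected ranges and flagged for inspection; all sensor names and thresholds below are invented.

```python
# Minimal, invented sketch of onboard monitoring: compare incoming cabin
# readings against expected ranges and suggest an inspection when a value
# is out of range or a sensor stops reporting. Thresholds are hypothetical.

EXPECTED_RANGES = {
    "co2_ppm": (0, 5000),
    "o2_percent": (19.5, 23.5),
    "cabin_pressure_kpa": (97.0, 103.0),
}

def check_readings(readings: dict) -> list[str]:
    alerts = []
    for sensor, (low, high) in EXPECTED_RANGES.items():
        value = readings.get(sensor)
        if value is None:
            alerts.append(f"{sensor}: no data, possible sensor malfunction, inspect")
        elif not (low <= value <= high):
            alerts.append(f"{sensor}: {value} outside [{low}, {high}], inspect")
    return alerts

print(check_readings({"co2_ppm": 6200, "o2_percent": 20.9}))
```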

An AI assistant called Cimon was flown to the International Space Station (ISS) in December 2019, where it is being tested for three years. Eventually, Cimon will be used to reduce astronauts' stress by performing tasks they ask it to do. NASA is also developing a companion for astronauts aboard the ISS, called Robonaut, which will work alongside the astronauts or take on tasks that are too risky for them.

Read more: Astronauts are experts in isolation, here's what they can teach us

Planning a mission to Mars is not an easy task, but artificial intelligence can make it easier. New space missions traditionally rely on knowledge gathered by previous studies. However, this information can often be limited or not fully accessible.

This means the technical information flow is constrained by who can access and share it among other mission design engineers. But what if all the information from practically all previous space missions were available to anyone with authority in just a few clicks? One day there may be a smarter system, similar to Wikipedia, but with artificial intelligence that can answer complex queries with reliable and relevant information to help with the early design and planning of new space missions.

Researchers are working on the idea of a design engineering assistant to reduce the time required for initial mission design, which otherwise takes many human work hours. Daphne is another example of an intelligent assistant, this one for designing Earth observation satellite systems. Daphne is used by systems engineers in satellite design teams. It makes their job easier by providing access to relevant information, including feedback as well as answers to specific queries.

Earth observation satellites generate tremendous amounts of data. This is received by ground stations in chunks over a long period of time, and has to be pieced together before it can be analysed. While there have been some crowdsourcing projects to do basic satellite imagery analysis on a very small scale, artificial intelligence can come to our rescue for detailed satellite data analysis.

Given the sheer volume of data received, AI has been very effective at processing it smartly. It's been used to estimate heat storage in urban areas and to combine meteorological data with satellite imagery for wind speed estimation. AI has also helped with solar radiation estimation using geostationary satellite data, among many other applications.

AI for data processing can also be used for the satellites themselves. In recent research, scientists tested various AI techniques for a remote satellite health monitoring system. This is capable of analysing data received from satellites to detect any problems, predict satellite health performance and present a visualisation for informed decision making.
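The cited study compares several AI techniques; as one hypothetical sketch of the general approach, an anomaly detector could be fitted on normal telemetry and used to flag departures from it. The telemetry channels and detector choice below are invented for illustration.

```python
# Hypothetical sketch of remote satellite health monitoring: fit an anomaly
# detector on "normal" telemetry and flag departures from it. The telemetry
# channels, values, and detector choice are invented for illustration.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(2)

# Invented normal telemetry: [battery voltage, panel temperature, bus current]
normal = rng.normal(loc=[28.0, 15.0, 2.0], scale=[0.3, 2.0, 0.1], size=(1000, 3))

detector = IsolationForest(random_state=0).fit(normal)

new_samples = np.array([
    [28.1, 14.0, 2.05],   # looks normal
    [24.5, 40.0, 3.50],   # looks anomalous
])
print(detector.predict(new_samples))  # 1 = normal, -1 = flagged as anomalous
```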

One of the biggest space challenges of the 21st century is how to tackle space debris. According to ESA, there are nearly 34,000 objects bigger than 10 cm that pose serious threats to existing space infrastructure. There are some innovative approaches to deal with the menace, such as designing satellites to re-enter Earth's atmosphere if they are deployed within the low Earth orbit region, making them disintegrate completely in a controlled way.

Another approach is to avoid any possible collisions in space, preventing the creation of any debris. In a recent study, researchers developed a method to design collision avoidance manoeuvres using machine-learning (ML) techniques.

Another novel approach is to use the enormous computing power available on Earth to train ML models, transmit those models to spacecraft already in orbit or on their way, and use them on board for various decisions. One recently proposed way to ensure the safety of space flights is to use already-trained networks on board the spacecraft. This allows more flexibility in satellite design while keeping the danger of in-orbit collision to a minimum.

On Earth, we are used to tools such as Google Maps, which use GPS or other navigation systems. But there is no such system for other extraterrestrial bodies, at least for now.

We do not have any navigation satellites around the Moon or Mars, but we could use the millions of images we have from observation satellites such as the Lunar Reconnaissance Orbiter (LRO). In 2018, a team of researchers from NASA, in collaboration with Intel, developed an intelligent navigation system using AI to explore the planets. They trained the model on the millions of photographs available from various missions and created a virtual Moon map.

As we carry on exploring the universe, we will continue to plan ambitious missions to satisfy our inherent curiosity as well as to improve human lives on Earth. In our endeavours, artificial intelligence will help us, both on Earth and in space, to make this exploration possible.

Read the original here:
Five ways artificial intelligence can help space exploration - The Conversation UK