Archive for the ‘Machine Learning’ Category

FOXG1 Research Foundation to Pioneer a Machine Learning Approach to Accelerate Rare Disease Research with Support From the Chan Zuckerberg Initiative…

Rare diseases are defined as having fewer than 200,000 patients in the U.S., and many rare diseases have fewer than 1,000 patients worldwide. Collecting the necessary data from as many patients as possible is therefore critical to accurately understanding a disease. Currently, patients must travel to select academic centers to take part in these studies. This is difficult for patients with complex medical needs, accommodation challenges, and for those who cannot lose work time, all of which reduces participant enrollment and retention. Costs to patient organizations for studies can exceed $10,000 per subject per year, and participants typically do not have access to study results, all of which limits patients' access to and engagement in rare disease research.

Thanks to two years of planning and this grant from CZI, the FOXG1 Research Foundation is launching a groundbreaking study using technology and machine learning to supplement the current natural history study (NHS) model by digitally collecting and analyzing critical patient data in order to scale rare disease research without exponential cost. Most importantly, this model gives patients direct access to their consolidated, digitized data, which also uniquely summarizes their experience and can be used to get second opinions or shared with multiple providers to facilitate and improve their care.

"In order to find cures for rare diseases, all aspects of drug development need to be democratized, especially the collecting, analyzing and sharing of patient data to better understand unknown diseases. This has to be easy for the patient, affordable for the advocacy group, and totally accessible for researchers," explains Nasha Fitter, CEO and cofounder of the FOXG1 Research Foundation.

The new digital Natural History platform is launching with four rare disease groups: FOXG1 syndrome (FRF), SLC13A5 deficiency (TESS Research Foundation), SYNGAP1-related disorder (SynGAP Research Fund) and Rett Syndrome (Rett Syndrome Research Trust). These advocacy groups are at the forefront of rare disease research and are dedicated to redefining the drug development process. For some of these groups, natural history studies already exist and this digital platform will augment the existing NH dataset and provide a valuable and unique service to families and researchers. Accumulating data on multiple rare disease groups also enables cross-referencing for potential therapies.

In a partnership with Ciitizen, a Palo Alto-based medical records platform provider, the rare disease groups kicking off this new model will onboard patients (via their caregivers) to the platform, which will digitally collect each patient's medical records on their behalf; the anonymized data will then be extracted and made available to clinicians, researchers, and biopharma companies to aid research and therapy development. Researchers will be able to access large amounts of data on these small patient populations to help determine clinical endpoints for potential treatments.


"We're excited to support the FOXG1 Research Foundation's efforts as they spearhead innovative work to accelerate patient access and engagement in rare disease research," said Tania Simoncelli, Director of the Science in Society Program at CZI. "We expect this effort will produce learnings and applications relevant to the broader rare disease community."

The FOXG1 Research Foundation is proud to be a Chan Zuckerberg Initiative Rare As One Strategic Partner. For more information about the Rare As One Network please visit our webpage.

About the FOXG1 Research Foundation: Founded in 2017, the FRF is a 501(c) parent-led global organization dedicated to funding science along the path to a cure and therapies for children and adults affected by the severe, rare, neurodevelopmental genetic disorder FOXG1 syndrome. FOXG1 syndrome is characterized by severe developmental, cognitive, and physical disabilities, and epilepsy. For more information, please visit http://www.foxg1research.org.

About the Chan Zuckerberg Initiative: Founded by Dr. Priscilla Chan and Mark Zuckerberg in 2015, CZI is a new kind of philanthropy that's leveraging technology to help solve some of the world's toughest challenges, from eradicating disease to improving education to reforming the criminal justice system. Across three core Initiative focus areas of Science, Education, and Justice & Opportunity, we're pairing engineering with grant-making, impact investing, and policy and advocacy work to help build an inclusive, just, and healthy future for everyone. For more information, please visit http://www.chanzuckerberg.com.

Media Contact: Nicole Johnson, Co-Founder, Communications Director, FOXG1 Research Foundation [emailprotected]

SOURCE FOXG1 Research Foundation; SynGAP Research Fund

https://syngapresearchfund.org/

See the original post here:
FOXG1 Research Foundation to Pioneer a Machine Learning Approach to Accelerate Rare Disease Research with Support From the Chan Zuckerberg Initiative...

The way we train AI is fundamentally flawed – MIT Technology Review

For example, they trained 50 versions of an image recognition model on ImageNet, a dataset of images of everyday objects. The only difference between training runs was the random values assigned to the neural network at the start. Yet despite all 50 models scoring more or less the same in the training test, suggesting that they were equally accurate, their performance varied wildly in the stress test.

The stress test used ImageNet-C, a dataset of images from ImageNet that have been pixelated or had their brightness and contrast altered, and ObjectNet, a dataset of images of everyday objects in unusual poses, such as chairs on their backs, upside-down teapots, and T-shirts hanging from hooks. Some of the 50 models did well with pixelated images, some did well with the unusual poses; some did much better overall than others. But as far as the standard training process was concerned, they were all the same.

The researchers carried out similar experiments with two different NLP systems, and three medical AIs for predicting eye disease from retinal scans, cancer from skin lesions, and kidney failure from patient records. Every system had the same problem: models that should have been equally accurate performed differently when tested with real-world data, such as different retinal scans or skin types.

"We might need to rethink how we evaluate neural networks," says Rohrer. "It pokes some significant holes in the fundamental assumptions we've been making."

D'Amour agrees. "The biggest, immediate takeaway is that we need to be doing a lot more testing," he says. That won't be easy, however. The stress tests were tailored specifically to each task, using data taken from the real world or data that mimicked the real world. This is not always available.

Some stress tests are also at odds with each other: models that were good at recognizing pixelated images were often bad at recognizing images with high contrast, for example. It might not always be possible to train a single model that passes all stress tests.

One option is to add an extra stage to the training and testing process in which many models are produced at once instead of just one. These competing models can then be tested on specific real-world tasks to select the best one for the job.
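The train-many-then-select idea can be sketched in a few lines. This is only an illustration of the selection step, not the researchers' code: `train_model` and `stress_score` are hypothetical stand-ins, where each "model" is just a table of per-condition scores that varies only with the random seed, mirroring how the 50 ImageNet models differed only in initialization.

```python
import random

def train_model(seed):
    # Stand-in for a full training run; the seed is the only difference
    # between runs, as in the experiment described above.
    rng = random.Random(seed)
    return {"pixelated": rng.random(), "high_contrast": rng.random()}

def stress_score(model, condition):
    # Hypothetical stress-test accuracy for one deployment condition.
    return model[condition]

# Produce many candidate models that differ only in their random seed...
candidates = [train_model(seed) for seed in range(50)]

# ...then pick whichever does best on the stress test that matches
# the conditions the model will actually face in deployment.
best_for_pixelated = max(
    candidates, key=lambda m: stress_score(m, "pixelated")
)
```

The point of the sketch is that the selection criterion is the deployment-specific stress test, not the standard held-out test on which all candidates look identical.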

That's a lot of work. But for a company like Google, which builds and deploys big models, it could be worth it, says Yannic Kilcher, a machine-learning researcher at ETH Zurich. Google could offer 50 different versions of an NLP model, and application developers could pick the one that worked best for them, he says.

D'Amour and his colleagues don't yet have a fix but are exploring ways to improve the training process. "We need to get better at specifying exactly what our requirements are for our models," he says. "Because often what ends up happening is that we discover these requirements only after the model has failed out in the world."

Getting a fix is vital if AI is to have as much impact outside the lab as it is having inside. When AI underperforms in the real world, it makes people less willing to use it, says co-author Katherine Heller, who works at Google on AI for healthcare: "We've lost a lot of trust when it comes to the killer applications. That's important trust that we want to regain."

Originally posted here:
The way we train AI is fundamentally flawed - MIT Technology Review

Machine Learning Predicts How Cancer Patients Will Respond to Therapy – HealthITAnalytics.com

November 18, 2020 - A machine learning algorithm accurately determined how well skin cancer patients would respond to tumor-suppressing drugs in four out of five cases, according to research conducted by a team from NYU Grossman School of Medicine and Perlmutter Cancer Center.

The study focused on metastatic melanoma, a disease that kills nearly 6,800 Americans each year. Immune checkpoint inhibitors, which keep tumors from shutting down the immune system's attack on them, have been shown to be more effective than traditional chemotherapies for many patients with melanoma.

However, half of patients don't respond to these immunotherapies, and the drugs are expensive and often cause side effects.

"While immune checkpoint inhibitors have profoundly changed the treatment landscape in melanoma, many tumors do not respond to treatment, and many patients experience treatment-related toxicity," said corresponding study author Iman Osman, medical oncologist in the Departments of Dermatology and Medicine (Oncology) at New York University (NYU) Grossman School of Medicine and director of the Interdisciplinary Melanoma Program at NYU Langone's Perlmutter Cancer Center.

"An unmet need is the ability to accurately predict which tumors will respond to which therapy. This would enable personalized treatment strategies that maximize the potential for clinical benefit and minimize exposure to unnecessary toxicity."


Researchers set out to develop a machine learning model that could help predict a melanoma patient's response to immune checkpoint inhibitors. The team collected 302 images of tumor tissue samples from 121 men and women treated for metastatic melanoma with immune checkpoint inhibitors at NYU Langone hospitals.

They then divided these slides into 1.2 million small patches of pixels, the bits of data that make up images. These were fed into the machine learning algorithm along with other factors, such as the severity of the disease, which kind of immunotherapy regimen was used, and whether the patient responded to the treatment.

The results showed that the machine learning model achieved an AUC of 0.8 in both the training and validation cohorts, and was able to predict which patients with a specific type of skin cancer would respond well to immunotherapies in four out of five cases.
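The reported AUC of 0.8 has a concrete reading: given a randomly chosen responder and non-responder, the model ranks the responder higher about 80 percent of the time, which matches the "four out of five cases" framing. A minimal sketch of that pairwise-ranking definition, using made-up scores rather than any study data:

```python
def auc(scores, labels):
    # Probability that a randomly chosen positive case outranks a
    # randomly chosen negative case (ties count as half a win).
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Hypothetical predicted response probabilities and true outcomes
# (1 = responded to immunotherapy, 0 = did not); not study data.
scores = [0.9, 0.8, 0.7, 0.6, 0.4, 0.35, 0.3, 0.2, 0.15, 0.1]
labels = [1,   1,   0,   1,   1,   0,    0,   1,   0,    0]

print(auc(scores, labels))  # 0.8 for this toy example
```

With these toy numbers the responders outrank the non-responders in 20 of the 25 possible pairs, giving an AUC of 0.8.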

"Our findings reveal that artificial intelligence is a quick and easy method of predicting how well a melanoma patient will respond to immunotherapy," said study first author Paul Johannet, MD, a postdoctoral fellow at NYU Langone Health and its Perlmutter Cancer Center.

Researchers repeated this process with 40 slides from 30 similar patients at Vanderbilt University to determine whether the results would be similar at a different hospital system that used different equipment and sampling techniques.


"A key advantage of our artificial intelligence program over other approaches such as genetic or blood analysis is that it does not require any special equipment," said study co-author Aristotelis Tsirigos, PhD, director of applied bioinformatics laboratories and clinical informatics at the Molecular Pathology Lab at NYU Langone.

The team noted that aside from the computer needed to run the program, all materials and information used in the Perlmutter technique are a standard part of cancer management that most, if not all, clinics use.

"Even the smallest cancer center could potentially send the data off to a lab with this program for swift analysis," said Osman.

The machine learning method used in the study is also more streamlined than current predictive tools, such as analyzing stool samples or genetic information, which promises to reduce treatment costs and speed up patient wait times.

"Several recent attempts to predict immunotherapy responses do so with robust accuracy but use technologies, such as RNA sequencing, that are not readily generalizable to the clinical setting," said corresponding study author Aristotelis Tsirigos, PhD, professor in the Institute for Computational Medicine at NYU Grossman School of Medicine and member of NYU Langone's Perlmutter Cancer Center.


"Our approach shows that responses can be predicted using standard-of-care clinical information such as pre-treatment histology images and other clinical variables."

However, the researchers also noted that the algorithm is not yet ready for clinical use: they first aim to boost its accuracy from 80 percent to 90 percent and to test it at more institutions. The team plans to collect more data to improve the model's performance.

Even at its current level of accuracy, the model could be used as a screening method to determine which patients across populations would benefit from more in-depth tests before treatment.

"There is potential for using computer algorithms to analyze histology images and predict treatment response, but more work needs to be done using larger training and testing datasets, along with additional validation parameters, in order to determine whether an algorithm can be developed that achieves clinical-grade performance and is broadly generalizable," said Tsirigos.

There is data to suggest that thousands of images might be needed to train models that achieve clinical-grade performance.

Read the rest here:
Machine Learning Predicts How Cancer Patients Will Respond to Therapy - HealthITAnalytics.com

DIY Camera Uses Machine Learning to Audibly Tell You What it Sees – PetaPixel

Adafruit Industries has created a machine learning camera built with the Raspberry Pi that can identify objects extremely quickly and audibly tell you what it sees. The group has listed all the necessary parts you need to build the device at home.

The camera is based on Adafruit's BrainCraft HAT add-on for the Raspberry Pi 4 and uses TensorFlow Lite object recognition software to recognize what it is seeing. According to Adafruit's website, it's compatible with both the 8-megapixel Pi camera and the 12.3-megapixel interchangeable-lens version of the camera module.

While interesting on its own, DIY Photography makes a solid point by explaining a more practical use case for photographers:

You could connect a DSLR or mirrorless camera from its trigger port into the Pi's GPIO pins, or even use a USB connection with something like gPhoto, to have it shoot a photo or start recording video when it detects a specific thing enter the frame.

A camera that is capable of recognizing what it is looking at could be used to only take a photo when a specific object, animal, or even a person comes into the frame. That would mean it could have security system or wildlife monitoring applications. Whenever you might wish your camera knew what it was looking at, this kind of technology would make that a reality.
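The trigger logic behind that idea is simple to sketch. The following is only an illustration, not Adafruit's code: `detect_labels` and `capture_photo` are hypothetical stand-ins for a TensorFlow Lite detection pass and the camera trigger (GPIO pulse or gPhoto call), and the frames are plain dictionaries rather than real video frames.

```python
TARGET_LABELS = {"bird", "person"}  # fire the shutter only for these

def detect_labels(frame):
    # Stand-in for a TensorFlow Lite object-detection pass on a frame.
    return frame.get("labels", set())

def capture_photo(frame):
    # Stand-in for triggering the attached camera via GPIO or gPhoto.
    return f"captured: {sorted(detect_labels(frame) & TARGET_LABELS)}"

def monitor(frames):
    # Shoot only when a target object is present in the frame.
    return [capture_photo(f) for f in frames
            if detect_labels(f) & TARGET_LABELS]

shots = monitor([{"labels": {"chair"}},
                 {"labels": {"bird", "branch"}},
                 {"labels": set()}])
print(shots)  # only the frame containing a bird triggers a capture
```

Swapping `TARGET_LABELS` is what turns the same loop into a security system (trigger on "person") or a wildlife camera (trigger on "bird").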

You can find all the parts you will need to build your own version of this device on Adafruit's website here. They have also published an easy machine learning guide for the Raspberry Pi as well as a guide on running TensorFlow Lite.

(via DPReview and DIY Photography)

Read more here:
DIY Camera Uses Machine Learning to Audibly Tell You What it Sees - PetaPixel

How machine learning was used to decode an ancient Chinese cave – Times of India

The name of the cell, a prosaic Cave 465, does not quite convey the cornucopia of imagery it contains: angry Tantric deities in frenzied sexual union with their consorts. For decades, researchers have tried to figure out how old the Buddhist cave temple at the Mogao site along the ancient Silk Road in China is. Estimates range from the 9th century to the 14th. But now, the discovery of hidden Sanskrit inscriptions on pieces of paper stuck to its ceiling has helped narrow down its origins.

On the edge of the Gobi desert, by the Dachuan river, the Mogao Caves have baffled researchers, who have settled on a thousand-year window for when all 492 caves were carved out of the cliffs, one at a time, starting in the 4th century CE. Each cell, at first, appears isolated in its own history, linked to others through a grid of associations established by identifying pigments, painting styles, or plain old radiocarbon dating. But Cave 465, to the north of the site, is unique.

Link:
How machine learning was used to decode an ancient Chinese cave - Times of India