Archive for the ‘Machine Learning’ Category

Early antidepressant treatment response prediction in major … – BMC Psychiatry

This study was conducted and reported in accordance with standards and guidelines for machine learning in psychiatry [20].

This study included 291 inpatients in a tertiary hospital who were diagnosed with major depressive disorder. Patient eligibility was determined based on the criteria of the Diagnostic and Statistical Manual of the American Psychiatric Association, Fourth Edition (DSM-IV). Blood samples were collected before antidepressant treatment.

All patients met the following criteria: Han Chinese, 18–65 years old, baseline 17-item Hamilton Depression Rating Scale (HAMD-17) [21] score > 17 points, and depressive symptoms lasting at least 2 weeks. All patients had either just been diagnosed or had recently relapsed, and had not been on medication for at least two weeks prior to enrollment. All diagnoses were made independently by two psychiatrists holding senior professional titles and confirmed by a third psychiatrist. Participants had never been diagnosed with any other DSM-IV Axis I disorder (including substance use disorder, schizophrenia, affective disorder, bipolar disorder, generalized anxiety disorder, panic disorder, or obsessive-compulsive disorder), nor with a personality disorder or mental retardation. Patients with a history of organic brain syndrome, endocrine or primary organic diseases, or other medical conditions that would hinder psychiatric evaluation were excluded from the study. Other exclusion criteria included blood, heart, liver, or kidney disorders; electroconvulsive therapy in the past 6 months; or a manic episode in the previous 12 months. Pregnant and nursing women were also excluded from participation.

All study subjects provided written informed consent. The study was approved by the Zhongda Hospital Ethics Committee (2016ZDSYLL100-P01) and conducted in accordance with the Declaration of Helsinki.

Response was defined as a ≥ 50% reduction in HAMD-17 score from baseline to two weeks [22]. Accordingly, after two weeks of treatment, participants were divided into two groups: responders and non-responders.
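As a minimal, hypothetical illustration of this response definition (the function and variable names below are my own, not from the paper), the labeling rule can be written as:

```python
# Hypothetical sketch of the study's response definition:
# a responder shows a >= 50% drop in HAMD-17 score from baseline.
def label_response(baseline_score, week2_score, threshold=0.5):
    """Label a patient as responder/non-responder after two weeks."""
    reduction = (baseline_score - week2_score) / baseline_score
    return "responder" if reduction >= threshold else "non-responder"

print(label_response(24, 10))  # 58% reduction -> responder
print(label_response(24, 15))  # 38% reduction -> non-responder
```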

Two retrospective self-report questionnaires, the Childhood Trauma Questionnaire (28-item short form, CTQ-SF) and the Life Events Scale (LES), were used to evaluate childhood adversity and recent stress exposure, respectively. Both scales were administered by the same nurse using consistent, scripted language. The LES is a self-assessed questionnaire composed of 48 items, reflecting both positive and negative life events experienced within the past year, and is divided into positive and negative life events (NLES). The CTQ-SF was dichotomized for use in the gene-environment interaction analyses.

The twelve demographic and clinical features considered were age, gender, years of education, marital status, family history, first occurrence or not, age of onset, number of occurrences, illness duration, and baseline HAMD-17, NLES, and CTQ-SF scores (Supplemental Material Table 1).

Primers were previously designed by us to encompass 100 bp upstream and 100 bp downstream of TPH2 SNPs that showed a significant association with antidepressant response, as well as a GC sequence content of CpGs > 20% after methylation [11, 12]. Of the 24 TPH2 SNPs, only 11 (rs7305115, rs2129575, rs11179002, rs11178998, rs7954758, rs1386494, rs1487278, rs17110563, rs34115267, rs10784941, rs17110489) met the DNA methylation status criteria of the sequences to be detected (Supplemental Material Table 2). Methylation levels of 38 TPH2 CpGs were calculated as the ratio of the number of methylated cytosines to the total number of cytosines.

In the data set comprising 291 observations of 51 variables (12 demographic and clinical features, 38 CpG methylation levels, and 1 response variable), 6% of entries were missing (see Fig. 1). Of the CpG methylation levels, 3 CpGs (TPH2-7-99, TPH2-7-142, TPH2-7-170) were excluded because they had more than 45% missing values. Given the randomness of experimental/technological errors and the interrelatedness of the variables, the DNA methylation data were assumed to be missing completely at random (MCAR)/missing at random (MAR), so mean imputation was used to handle the missing values [23, 24]. Missing values in the remaining features were imputed with the mode for categorical features and the mean for numerical features.
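The imputation scheme can be sketched as follows; the study's own pipeline was implemented in R, so this Python/pandas version is only an analogue, and the toy data and column names are invented:

```python
import numpy as np
import pandas as pd

# Toy data (invented): one numeric clinical feature, one categorical
# feature, and one CpG methylation level, each with a missing entry.
df = pd.DataFrame({
    "age": [34.0, np.nan, 52.0, 41.0],
    "gender": ["F", "M", None, "F"],
    "cpg_meth": [0.42, 0.55, np.nan, 0.61],
})

# Mean imputation for numeric columns, mode imputation for categorical
# ones, mirroring the scheme described above.
for col in df.columns:
    if pd.api.types.is_numeric_dtype(df[col]):
        df[col] = df[col].fillna(df[col].mean())
    else:
        df[col] = df[col].fillna(df[col].mode().iloc[0])

print(df)
```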

Missingness pattern in the DNA methylation data set

Normalization (linear transformation) was used to improve the numerical stability of the model and reduce training time [25]. To avoid overfitting while harnessing the maximum amount of data, cross-validation (CV) over the entire sample was used to report prediction performance. The CV was 5-fold, and the averaged prediction metrics were reported, including the area under the receiver operating characteristic curve (AUC), F-measure, G-mean, accuracy, sensitivity, specificity, positive predictive value (PPV), and negative predictive value (NPV). Hyperparameter tuning was based on AUC, using random search with the caret default tuning settings. A wrapper method (recursive feature elimination with random forest, RFE-RF) [26] with 5-fold CV was employed to select the features that contributed most to the prediction of early antidepressant response in MDD patients. Variable importance was also estimated using random forest. For better replicability, the 5-fold CV procedure was repeated 10 times.
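The study's workflow used R's caret; a rough Python/scikit-learn analogue of its evaluation loop (synthetic data, simplified settings) looks like this, with normalization fitted inside each fold and 5-fold CV repeated 10 times:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import RepeatedStratifiedKFold, cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import MinMaxScaler

# Synthetic stand-in for the 291-patient data set.
X, y = make_classification(n_samples=291, n_features=47, random_state=0)

# Scaling lives inside the pipeline so it is re-fit on each training fold,
# avoiding information leakage from the held-out fold.
pipe = make_pipeline(MinMaxScaler(), RandomForestClassifier(random_state=0))
cv = RepeatedStratifiedKFold(n_splits=5, n_repeats=10, random_state=0)
aucs = cross_val_score(pipe, X, y, cv=cv, scoring="roc_auc")  # 50 scores
print(f"AUC: {aucs.mean():.3f} +/- {aucs.std():.3f}")
```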

ML methods were implemented via their interfaces with the open-source R package caret in a standardized and reproducible way. Five supervised ML algorithms were used to develop predictive models: logistic regression, classification and regression trees (CART), support vector machine with a radial basis function kernel (SVM-RBF), a boosting method (LogitBoost), and random forests (RF). All analyses were implemented in R statistical software (version 4.0.4). We used the caret package, which wraps the rpart, caTools, e1071, and randomForest packages for CART, LogitBoost, SVM-RBF, and RF, respectively.
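Since the paper's models were fitted in R, the following scikit-learn classifiers are only approximate stand-ins (CART ≈ DecisionTreeClassifier, LogitBoost ≈ GradientBoostingClassifier), shown here on synthetic data to illustrate the comparison across the five algorithm families:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier, RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=291, n_features=20, random_state=1)

# Approximate scikit-learn counterparts of the five caret models.
models = {
    "logistic": LogisticRegression(max_iter=1000),
    "CART": DecisionTreeClassifier(random_state=1),
    "SVM-RBF": SVC(kernel="rbf", random_state=1),
    "boosting": GradientBoostingClassifier(random_state=1),
    "RF": RandomForestClassifier(random_state=1),
}
aucs = {
    name: cross_val_score(m, X, y, cv=5, scoring="roc_auc").mean()
    for name, m in models.items()
}
for name, auc in aucs.items():
    print(f"{name:10s} AUC = {auc:.3f}")
```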


The Wonders of Machine Learning: Tackling Lint Debug Quickly with … – Design and Reuse

Achieving predictable chip design closure has become progressively challenging for teams worldwide. While linting tools have existed for decades, traditional tools require significant effort to filter out the noise and eventually zero in on the real design issues. With increasing application-specific integrated circuit (ASIC) size and complexity, chip designers require higher debug efficiency when managing a large number of violations in order to ultimately achieve a shorter turnaround time (TAT).

In the first two parts of this linting series, we established how linting offers a comprehensive mechanism to check for fundamental chip design faults and touched on the many benefits of having a guiding methodology for deeper functional lint analysis.

Recognizing the disparities between in-house coding styles, our extensive experience working with industry leaders has given us an edge in accelerating RTL and system-on-chip (SoC) design flows for customers to a degree previously unseen. Solutions such as Synopsys VC SpyGlass CDC have already proven how valuable advanced machine learning (ML) algorithms are for achieving SoC design signoff with scalable performance and high debug productivity. Leveraging industry-standard practices and our decades of expertise, the latest offering of Synopsys VC SpyGlass Lint now includes powerful ML capabilities to significantly improve debug efficiency for designers.

In the finale of this blog series, we'll cover the downsides of traditional linting tools, how ML-driven root-cause analysis (ML-RCA) accelerates design signoff, the key benefits of Synopsys VC SpyGlass Lint, and where we see the future of smart linting headed.



Can Artificial Intelligence and Machine Learning Find Life in Space? – BBN Times

Artificial intelligence (AI) and machine learning (ML) are increasingly being used in the field of astrobiology to help in the search for life in space.

The latest advances in artificial intelligence and machine learning could accelerate the search for extraterrestrial life by showing the most promising places to look.

With the vastness of the universe, the search for life beyond Earth is a complex and challenging task. AI and ML have the potential to enhance our ability to detect signs of life and to identify the most promising targets for exploration.

The use of AI and ML in space applications has picked up pace as researchers and scientists worldwide deploy machine learning algorithms that analyze vast amounts of data and identify signals and potential targets in space.

The universe is a game of billions - being billions of years old, spanning across billions of light years and harboring billions of stars, galaxies, planets and unidentifiable elements. Amidst this, we are but a tiny speck of life living on the only identified habitable planet in space. Scientists, astronomers, astrologers and common people alike from all over the world have discussed the idea of extraterrestrial life prevailing in any corner of the universe. The likelihood of the existence of life beyond Earth is high, leading to various efforts being put into discovering traces of life through signals, observations, detections and more. And with AI and ML in space applications, detecting life in space has moved beyond just a dream and entered into its practical stages.

The term SETI, or Search for Extraterrestrial Intelligence, refers to the effort to find intelligent extraterrestrial life by searching the cosmos for signs of advanced civilizations. The theory underlying SETI is that there might be intelligent extraterrestrial civilizations out there, and that they might be sending out signals we could pick up on. These signals could manifest as deliberate messages, unintended emissions from advanced technology, or even proof of enormous engineering undertakings like Dyson spheres. SETI's work takes several forms.

To analyze the massive volumes of data gathered from radio telescopes and other sensors used in the hunt for extraterrestrial intelligence, SETI researchers employ machine learning techniques. ML can also help analyze data from other instruments, such as optical telescopes, that may be used in the search. For example, machine learning algorithms can be trained to recognize patterns in the light curves of stars that may indicate the presence of advanced technology.

The identification of signals that might indicate extraterrestrial intelligence is one way SETI makes use of machine learning. Radio telescopes pick up both natural signals, such as those produced by pulsars, and artificial signals, such as those from satellites and mobile phones. Machine learning algorithms can be trained on the properties of these various signals to identify them and separate them from potential signals of extraterrestrial intelligence.

A further application of ML in SETI is to assist in locating and categorizing possible targets for further observations. With so much information to sort through, it can be challenging for human researchers to decide which signals are most intriguing and deserving of additional study. Based on criteria like signal strength, frequency and duration, machine learning algorithms can be used to automatically select possible targets.
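As a purely illustrative sketch of such automated triage (the data, feature ranges, and labels below are invented, not drawn from any real SETI pipeline), a classifier can be trained on signal strength, frequency, and duration to separate known interference from candidate signals:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
n = 500
# Invented feature ranges: [strength (dB), frequency (MHz), duration (s)].
interference = np.column_stack([rng.normal(60, 5, n),
                                rng.normal(900, 50, n),
                                rng.normal(30, 10, n)])
candidates = np.column_stack([rng.normal(20, 5, n),
                              rng.normal(1420, 50, n),
                              rng.normal(5, 2, n)])
X = np.vstack([interference, candidates])
y = np.array([0] * n + [1] * n)  # 0 = known interference, 1 = candidate

clf = RandomForestClassifier(random_state=0).fit(X, y)
print(clf.predict([[22, 1400, 4]]))  # a point near the candidate cluster
```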

While artificial intelligence and machine learning in space applications have shown significant promise in the study of astrobiology, finding extraterrestrial life is a complex and ongoing endeavor that requires many different approaches and technologies. Ultimately, only the collaborative efforts of scientific ingenuity and technological innovation will allow us to find life beyond our planet.


Iconic image of M87 black hole just got a machine-learning makeover – Ars Technica

This new, sharper image of the M87 supermassive black hole was generated by the PRIMO algorithm using 2017 EHT data.

Medeiros et al. 2023

The iconic image of a supermassive black hole in the Messier 87 (M87) galaxy, described by astronomers as a "fuzzy orange donut," was a stunning testament to the capabilities of the Event Horizon Telescope (EHT). But there were still gaps in the observational data, limiting the resolution the EHT was able to achieve. Now four members of the EHT collaboration have applied a new machine-learning technique dubbed PRIMO (principal-component interferometric modeling) to the original 2017 data, giving that famous image its first makeover. They described their achievement in a new paper published in The Astrophysical Journal Letters.

"PRIMO is a new approach to the difficult task of constructing images from EHT observations," said co-author Tod Lauer (NOIRLab). "It provides a way to compensate for the missing information about the object being observed, which is required to generate the image that would have been seen using a single gigantic radio telescope the size of the Earth."

As we've reported previously, the EHT isn't a telescope in the traditional sense. Instead, it's a collection of telescopes scattered around the globe, including hardware from Hawaii to Europe, and from the South Pole to Greenland, though not all of these were active during the initial observations. The telescope is created by a process called interferometry, which uses light captured at different locations to build an image with a resolution that is the equivalent of a giant telescope (a telescope so big, it's as if it were as large as the distance between the most distant locations of the individual telescopes).

Back in 2019, the EHT made headlines with its announcement of the first direct image of a black hole, located in the constellation of Virgo, some 55 million light years away. It was a feat that would have been impossible a mere generation ago, made possible by technological breakthroughs, innovative new algorithms, and of course, connecting several of the world's best radio observatories. Science magazine named the image its Breakthrough of the Year.

The EHT captured photons trapped in orbit around the black hole, swirling around at near the speed of light, creating a bright ring around it. From this, astronomers deduced that the black hole is spinning clockwise. The imaging also revealed the shadow of the black hole, a dark central region within the ring. That shadow is as close as astronomers can get to taking a picture of the actual black hole, from which light cannot escape once it crosses the event horizon. And just as the size of the event horizon is proportional to the black hole's mass, so, too, is the black hole's shadow: the more massive the black hole, the larger the shadow. It was a stunning confirmation of the general theory of relativity, showing that those predictions hold up even in extreme gravitational environments.


Two years later, the EHT released a new image of the same black hole, this time showing how it looked in polarized light. The ability to measure that polarization for the first time, a signature of magnetic fields at the black hole's edge, yielded fresh insight into how black holes gobble up matter and emit powerful jets from their cores. That polarization enabled astronomers to map the magnetic field lines at the inner edge and to study the interaction between matter flowing in and being blown outward.

And now PRIMO has given astronomers an even sharper look at M87's supermassive black hole. "We are using physics to fill in regions of missing data in a way that has never been done before by using machine learning," said co-author Lia Medeiros of the Institute for Advanced Study. "This could have important implications for interferometry, which plays a role in fields from exoplanets to medicine."

PRIMO relies upon so-called dictionary learning, in which a computer learns to identify whether an unknown image is, for example, that of a banana, after being trained on large sets of different images of bananas. In the case of M87*, PRIMO analyzed over 30,000 simulated images of black holes accreting gas, taking into account many different models for how this accretion of matter occurs. Structural patterns were sorted by how frequently they showed up in the simulations, and PRIMO then blended them to produce a new, high-fidelity image of the black hole.
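The core idea, learning structure from simulations and using it to fill gaps in sparse measurements, can be sketched in toy form (this is not the actual PRIMO algorithm; all data below are synthetic): principal components are learned from many training vectors, then a partially observed vector is reconstructed by fitting component weights to its observed entries only.

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
n_pixels, n_train, n_components = 100, 2000, 10

# Synthetic "training images": every vector lies in a 10-dimensional subspace.
basis = rng.normal(size=(n_components, n_pixels))
train = rng.normal(size=(n_train, n_components)) @ basis

pca = PCA(n_components=n_components).fit(train)

truth = rng.normal(size=n_components) @ basis  # a new, unseen "image"
observed = rng.random(n_pixels) > 0.4          # only ~60% of pixels measured

# Fit component weights using the observed pixels alone ...
A = pca.components_[:, observed].T
w, *_ = np.linalg.lstsq(A, truth[observed] - pca.mean_[observed], rcond=None)
# ... then reconstruct the full image, missing pixels included.
recon = pca.mean_ + w @ pca.components_

err = np.linalg.norm(recon - truth) / np.linalg.norm(truth)
print(f"relative reconstruction error: {err:.2e}")
```

Because the unseen vector lies in the learned subspace, a handful of observed entries suffice to pin down its weights, which is why the reconstruction error here is tiny.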

Overview of simulations generated for the training set of the PRIMO algorithm. Credit: Medeiros et al. 2023

The new image shows the central large dark region in greater detail, while the surrounding cloud of accreting gas is attenuated, leaving a "skinny donut." Per the authors, the image is consistent with both the 2017 EHT data and with theoretical predictions, most notably the bright rings that result from hot gas falling into the black hole. The higher resolution will help astronomers more accurately peg the mass of the black hole, as well as tighten constraints on alternative models for the event horizon, and enable more robust tests of gravity.

"With our new machine-learning technique, PRIMO, we were able to achieve the maximum resolution of the current array," Medeiros said. "Since we cannot study black holes up-close, the detail of an image plays a critical role in our ability to understand its behavior. The width of the ring in the image is now smaller by about a factor of two, which will be a powerful constraint for our theoretical models and tests of gravity."

PRIMO should prove just as useful for other EHT observations, most notably the first image (released just last year) of the black hole (Sagittarius A*) at the center of our own Milky Way galaxy. While M87's black hole was an easier, steadier target, with nearly all images looking the same, that was not the case for Sagittarius A*. The final image was an average of the different images from observational data that the team collected over multiple days. It took five years, multiple supercomputer simulations, and the development of new computational imaging algorithms capable of making inferences to fill in the blanks in the data. PRIMO could improve the resolution even further.

"The 2019 image was just the beginning," said Medeiros. "If a picture is worth a thousand words, the data underlying that image have many more stories to tell. PRIMO will continue to be a critical tool in extracting such insights."

DOI: The Astrophysical Journal Letters, 2023. 10.3847/2041-8213/acc32d (About DOIs).


The first black hole portrait got sharper thanks to machine learning – Science News Magazine

If the first image of a black hole looked like a fuzzy doughnut, this one is a thin onion ring.

Using a machine learning technique, scientists have sharpened the portrait of the supermassive black hole at the center of galaxy M87, revealing a thinner halo of glowing gas than seen previously.

In 2019, scientists with the Event Horizon Telescope unveiled an image of M87's black hole (SN: 4/10/19). The picture was the first ever taken of a black hole and showed a blurry orange ring of swirling gas silhouetted by the dark behemoth. The new ring's thickness is half that of the original, despite being based on the same data, researchers report April 13 in the Astrophysical Journal Letters.

The Event Horizon Telescope takes data using a network of telescopes across the globe. But that technique leaves holes in the data. "Since we can't just cover the entire Earth in telescopes, what that means is that there is some missing information," says astrophysicist Lia Medeiros of the Institute for Advanced Study in Princeton, N.J. "We need to have an algorithm that can fill in those gaps."

Previous analyses had used certain assumptions to fill in those gaps, such as preferring an image that is smooth. But the new technique uses machine learning to fill in those gaps based on over 30,000 simulated images of matter swirling around a black hole, creating a sharper image.

In the future, this technique could help scientists get a better handle on the black hole's mass and perform improved tests of gravity and other studies of black hole physics.


Physics writer Emily Conover has a Ph.D. in physics from the University of Chicago. She is a two-time winner of the D.C. Science Writers Association Newsbrief award.

