Learning from virtual experiments to assist users of Small Angle Neutron Scattering in model selection | Scientific Reports
Generation of a dataset of SANS virtual experiments at KWS-1
A code template of the KWS-1 SANS instrument at FRM-II, Garching, was written in McStas (see Supplementary Information for the example code). The instrument description consisted of the following components, set consecutively: a neutron source describing the FRM-II spectrum, a velocity selector, guides that propagate the neutrons to minimize losses, a set of slits to define the divergence of the beam, a sample (one of the recently developed sasmodels components described in the McStas 3.4 documentation), a beamstop, and finally a Position Sensitive Detector (PSD) of size \(144 \times 256\) pixels. The sample was changed systematically between 46 SAS models (see Supplementary Information for a complete list of the models considered and their documentation), and for each model, different samples were produced by varying the parameters of the model. The set of 46 SAS models considered included both isotropic and anisotropic scattering amplitudes. In the anisotropic models, the scattering amplitude depends on the angle between the incident beam and the orientation of the scattering objects (or structures), which is determined by the model parameters. Consequently, for non-oriented particles described by analytically anisotropic models, the resulting scattering pattern can still be isotropic. Whenever possible, samples were considered in the dilute regime to avoid structure factor contributions and observe only those arising from the form factor. In models with crystalline structure or with correlations between scatterers for which an analytical expression of the scattering amplitude was available, the complete scattering amplitude was considered. In all cases, the analytical expressions were obtained from the small angle scattering models documentation of SasView20 (see Supplementary Information). The instrument template in the Supplementary Information shows how it was also possible to change the instrument configuration when a sample was fixed. The set of parameters that describe the instrument configuration in a given simulation are referred to as instrument parameters, and those that define the sample description as sample parameters.
In the case of instrument parameters, a discrete set of 36 instrument configurations could be selected. These were chosen by the instrument scientist, taking into account the most frequent instrument configurations: two possible values of the wavelength (4.5 or 6 Å), three possibilities for the distance settings, paired as collimation length and sample-to-detector distance (8 m-1 m, 8 m-8 m, and 20 m-20 m), three options for the slit configuration (1 cm slit aperture in both directions and a 2 cm wide Hellma cell; 1.2 cm slit aperture in both directions and a 2 cm wide Hellma cell; and 7 mm horizontal aperture and 1 cm vertical aperture with a 1 cm wide Hellma cell), and finally two possible sample holders of different thickness (1 mm and 2 mm). One of the advantages of MC simulations over analytical approaches for obtaining the 2D scattering pattern is that, by defining the instrument parameters in the simulation, such as the size of the collimation apertures, the sample-to-detector distance, the size of the detector, the dimensions of the pixels, and so on, the smearing of the data due to instrumental resolution is automatically taken into account. Therefore, no extra resolution convolution must be performed once the data are collected.
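As a minimal sketch (not the actual configuration script used for the simulations), the 36 instrument configurations correspond to the Cartesian product of the discrete options listed above; the names and string labels below are illustrative placeholders:

```python
from itertools import product

# Discrete instrument options (illustrative encodings of the settings described in the text).
wavelengths_A = [4.5, 6.0]                      # selector wavelength in angstrom
distances_m = [(8, 1), (8, 8), (20, 20)]        # (collimation, sample-to-detector) in m
slit_setups = ["10x10mm_2cm_cell", "12x12mm_2cm_cell", "7x10mm_1cm_cell"]
sample_thickness_mm = [1, 2]

configurations = list(product(wavelengths_A, distances_m, slit_setups, sample_thickness_mm))
assert len(configurations) == 36                # 2 * 3 * 3 * 2 possible instrument settings

for i, (wl, (coll, sdd), slits, thick) in enumerate(configurations):
    print(f"config {i:02d}: lambda={wl} A, collimation={coll} m, "
          f"detector={sdd} m, slits={slits}, thickness={thick} mm")
```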
In the case of sample parameters, most parameters describing the samples were continuous, and an added difficulty was that the number of parameters per model was neither the same nor similar across models (see Fig. 5).
Distribution of models as a function of the number of parameters, showing the wide range of complexities covered in the model set used in this work. Only a few models have more than 15 parameters to set.
There were some models with only two parameters (easy to sample) and several models with more than 15 parameters (hard to sample). Most of the models had around 12 parameters. For \(p\) parameters with \(n_i\) possible choices for parameter \(i\), the number of possible combinations \(N\) can be calculated as
$$\begin{aligned} N = \prod_{i=1}^{p} n_i, \end{aligned}$$
(1)
which reduces to \(N = n^p\) if \(n_i = n\) for all \(i = 1, \dots, p\). With only \(n = 2\) possibilities per parameter and \(p = 15\), we rapidly get \(N = 32768\) possible combinations for a complex model, whereas only \(N = 4\) possible combinations for the very simple models. The large complexity of some model descriptions did not allow simulating all possible scenarios without generating a dataset with a large imbalance between classes. Therefore we opted to sample the defined hyper-parameter space strategically by using Latin hypercube sampling21. Briefly, this sampling method partitions a given high-dimensional hyper-parameter space into hypercubes. It then randomly selects one of these hypercubes and randomly samples the variables only inside the chosen hypercube. On a later iteration, it selects a new hypercube and repeats the sampling procedure.
Another advantage of MC simulations is that one can perform Monte Carlo integration estimates, which allow polydispersity and orientational distributions of the scattering objects to be included in a simple and direct manner. On each neutron interaction, the orientation and the polydisperse parameters of the scattering object are randomly chosen from defined probability distributions. For simplicity, distance and dimension parameters \(r_i\) of the models were allowed to be polydisperse by sampling them from Gaussian distributions (taking care to select only positive values). The value \(r_i\) selected in each MC simulation defined the mean of the Gaussian distribution, and an extra parameter \(\Delta r_i\) for each \(r_i\) was included in the MC simulation to define the corresponding variance. The standard deviation of the Gaussian distribution in different simulations was allowed to vary between 0 (monodisperse) and \(r_i/2\) (very polydisperse). In the case of angle parameters that determine the orientation of the scattering object, these were defined by sampling uniformly inside an interval centered at the parameter value \(\theta_i\) and with limits defined by another extra parameter \(\Delta \theta_i\). For example, in a cylinder form factor model for the scattering object, both the radius and the length of the cylinders can be polydisperse, and the two angles defining the orientation of the principal axis with respect to the incident beam are allowed to vary uniformly within the simulation-defined range. This gives a total of 8 parameters to include polydispersity and orientational distributions in a single simulation. For more information on how this was implemented in the MC simulation, we refer the reader to the documentation of each model provided in the Supplementary Information.
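A minimal Python sketch of this per-interaction sampling scheme (illustrative only; the actual implementation lives inside the McStas sasmodels component, and all numerical values below are placeholders) could read:

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_dimension(mean, delta, rng):
    """Draw a positive dimension from a Gaussian with mean `mean` and standard deviation `delta`."""
    if delta <= 0:
        return mean                      # monodisperse case
    while True:
        r = rng.normal(loc=mean, scale=delta)
        if r > 0:                        # keep only physically meaningful positive values
            return r

def sample_angle(mean, half_width, rng):
    """Draw an orientation angle uniformly in [mean - half_width, mean + half_width]."""
    return rng.uniform(mean - half_width, mean + half_width)

# One 'neutron interaction' with a polydisperse, partially oriented cylinder (illustrative values):
radius = sample_dimension(mean=20.0, delta=5.0, rng=rng)        # angstrom
length = sample_dimension(mean=400.0, delta=100.0, rng=rng)     # angstrom
theta = sample_angle(mean=60.0, half_width=10.0, rng=rng)       # degrees
phi = sample_angle(mean=0.0, half_width=180.0, rng=rng)         # degrees
```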
We opted for sampling 100 points for each sample model in its hyper-parameter space due to time constraints on the simulation side and database-size constraints on the machine learning side. To define the sampling space, we defined upper (\(u_b\)) and lower (\(l_b\)) bounds for each sample parameter in each SasView model description. Then we took the default value of the parameter (\(p_0\)) given in the SasView documentation as the center point of the sampling region, allowing for sampling in the interval \(\left[\max(-3 p_0, l_b), \min(3 p_0, u_b)\right]\). All sampled parameters were continuous, except the absorption coefficient, which was restricted to only two possible values (0% or 10%).
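The sketch below illustrates how such a Latin hypercube sample could be generated with scipy under the scheme just described; the parameter names, default values, and bounds are hypothetical placeholders, not the actual SasView values:

```python
import numpy as np
from scipy.stats import qmc

# Hypothetical parameters: (default value p0, lower bound l_b, upper bound u_b).
params = {
    "radius": (20.0, 1.0, 1000.0),
    "length": (400.0, 1.0, 5000.0),
    "sld":    (4.0, -1.0, 10.0),
}

# Sampling interval per parameter: [max(-3*p0, l_b), min(3*p0, u_b)], as described above.
lower = [max(-3 * p0, lb) for p0, lb, ub in params.values()]
upper = [min(3 * p0, ub) for p0, lb, ub in params.values()]

sampler = qmc.LatinHypercube(d=len(params), seed=42)
unit_sample = sampler.random(n=100)              # 100 points per model, drawn in [0, 1)^d
points = qmc.scale(unit_sample, lower, upper)    # rescale to the per-parameter intervals

print(points.shape)                              # (100, 3): one row per simulated sample
```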
The expected dataset size was 331,200, obtained from the 46 sample models, 2 absorption coefficients, 100 sample parameter sets per model, and 36 possible instrument settings. The 46 sample models were chosen so as to be representative, and also to avoid those sample models with a high computational cost. Given that some configurations were non-optimal, the dataset was cleaned of zero images (no neutrons arrived in the given virtual experiment) and low-statistics images. This was done by calculating the 0.02 quantile of the standard deviations of the images and removing the images below it from the database. Also, the 0.99 quantile of the maximum pixel value per image was calculated, and all images with maximum values above it were removed (for example, images from simulations that failed with saturating pixels). A remaining total of 259,328 virtual experiments defined the final dataset for machine learning purposes, which is the dataset published open access14. For an insight into what the database looks like, we show a random selection of one image per model in the dataset in Fig. 6. It is possible to see that there is some variance between models, but also some unfavorable configurations (inadequate instrument parameters for a given sample) which add noise and difficulty to the classification task. This figure also illustrates that certain anisotropic SAS models can result in isotropic scattering patterns when the scattering objects are completely unoriented (i.e., exhibiting a broad orientational distribution) or oriented in a particular direction with respect to the beam. In such cases, the anisotropy of the scattering pattern due to the form factor cannot be observed. Consequently, from the perspective of machine learning, the observation of an anisotropic scattering pattern directly excludes all isotropic models, whereas the observation of an isotropic scattering pattern does not allow for the direct inference that the model was isotropic.
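A minimal numpy sketch of this quantile-based cleaning step, assuming the virtual detector images are stacked in an array of shape (n_images, 144, 256), could look as follows:

```python
import numpy as np

def clean_dataset(images):
    """Return a boolean mask keeping images that are neither empty/low-statistics nor saturated.

    images: array of shape (n_images, height, width) containing raw counts.
    """
    stds = images.std(axis=(1, 2))           # per-image standard deviation
    maxima = images.max(axis=(1, 2))         # per-image maximum pixel value

    std_low = np.quantile(stds, 0.02)        # 0.02 quantile of the standard deviations
    max_high = np.quantile(maxima, 0.99)     # 0.99 quantile of the per-image maxima

    return (stds > std_low) & (maxima <= max_high)

# Example usage with placeholder data:
# images = np.random.poisson(5.0, size=(1000, 144, 256)).astype(float)
# images = images[clean_dataset(images)]
```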
An insight into the variability present among models, shown through random images selected from the dataset. Isotropic (red title) and anisotropic (blue title) images can be found, as well as images with high and poor counting statistics.
Given that we have a dataset of roughly 260,000 virtual experiments, comprising a set of 46 SAS models measured under different experimental conditions, we can attempt to train supervised machine learning algorithms to predict the SAS model of a sample given the SANS scattering pattern measured by the PSD at KWS-1. We are taking advantage here of the fact that we know the ground truth of the SAS model used to generate the data by Monte Carlo simulation. The data from a PSD can be seen as a one-channel image; therefore we can use all recent developments in methods for image classification.
It is well known in the SANS community that the intensity profile as a function of the scattering vector \(q\) is normally plotted on a logarithmic scale, to be able to see the small features at increasing values of \(q\). In this sense, it is useful for the classification task to perform a logarithmic transformation on the measured data to increase the contribution of the large-\(q\) features to the image variance. Since the logarithm is defined only for values larger than 0, and is positive only for values larger than 1, we first add a constant offset of +1 to all pixels and check that there are no negative values in the image. We then apply the logarithm to the intensity count in all pixels, emphasizing large-\(q\) features as can be seen in Fig. 6. Finally, we normalize all images in the dataset by their maximum value to bring them to values between 0 and 1, so as to be independent of the counting statistics of the measurement. The transformed data are then fed to the neural network. Mathematically speaking, the transformation reads
$$\begin{aligned} x_{i,j} = \frac{\log(x_{i,j} + 1.0)}{\mathrm{MaxLog}}, \end{aligned}$$
(2)
for the intensity of pixel \(x_{i,j}\) in row \(i\) and column \(j\), where MaxLog is the maximum of the image after applying the logarithmic transformation. All images were resized to \(180 \times 180\) pixels, since the networks used in this work are designed for square input images. The value 180 is a compromise between 144 and 256, for which we believe the loss of information by interpolation and downsampling, respectively, is minimal. We decided to train Convolutional Neural Networks (CNNs) for the classification task using PyTorch22, by transferring the learning of three architectures (ResNet-5023, DenseNet24, and Inception V325). In all cases, the corresponding PyTorch default weights were used as the starting point and all weights were allowed to be modified. We then generated an ensemble method that averaged the last-layer weights of all three CNNs and predicted based on the averaged weights. In all cases, we modified the first layer to accept the generated one-channel images of our SANS database in HDF format. We preferred the HDF format to keep floating-point precision in each pixel's intensity count. The final fully connected layer was also modified to match the 46 classes, and a soft-max layer was used to obtain values between 0 and 1, to get some notion of classification probability.
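The following PyTorch sketch illustrates the preprocessing of Eq. (2) and the architecture modifications described above; it is a simplified reconstruction under the stated assumptions, not the code used in this work (the layer names follow the torchvision ResNet-50 implementation, and only that architecture is shown):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F
from torchvision import models

def preprocess(counts: torch.Tensor) -> torch.Tensor:
    """Apply Eq. (2) and resize a raw detector image to a 1x180x180 tensor.

    counts: float tensor of shape (144, 256) with non-negative pixel counts
    (zero-count images are assumed to have been removed in the cleaning step).
    """
    x = torch.log(counts + 1.0)          # logarithmic transform, emphasizes large-q features
    x = x / x.max()                      # normalize to [0, 1], independent of counting statistics
    x = x.unsqueeze(0).unsqueeze(0)      # shape (1, 1, 144, 256)
    x = F.interpolate(x, size=(180, 180), mode="bilinear", align_corners=False)
    return x.squeeze(0)                  # shape (1, 180, 180)

def build_resnet50(num_classes: int = 46) -> nn.Module:
    """ResNet-50 with a one-channel input layer and a 46-class output layer."""
    model = models.resnet50(weights=models.ResNet50_Weights.DEFAULT)
    model.conv1 = nn.Conv2d(1, 64, kernel_size=7, stride=2, padding=3, bias=False)
    model.fc = nn.Linear(model.fc.in_features, num_classes)
    return model

# Class probabilities for a single image (soft-max applied at inference time):
# probs = torch.softmax(build_resnet50()(preprocess(counts).unsqueeze(0)), dim=1)
```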
The dataset was split into training, testing, and validation sets in proportions 0.70, 0.20, and 0.10, respectively. For the minimization problem in multiclass classification, the Cross Entropy loss is a natural choice of loss function. This function coincides with the multinomial logistic loss and belongs to a set of loss functions called comp-sum losses (loss functions obtained by composing a concave function, such as the logarithm in the case of the logistic loss, with a sum of functions of score differences, such as the negative exponential)15. In our case, we can write the Cross Entropy loss function as
$$\begin{aligned} l(x_n, y_n) = -\log\left( \frac{\exp(\alpha_{y_n}(x_n))}{\sum_{c=1}^{C} \exp(\alpha_{c}(x_n))} \right), \end{aligned}$$
(3)
where \(x_n\) is the input, \(y_n\) is the target label, \(\alpha_i(x)\) is the \(i\)-th output value of the last layer when \(x\) is the input, and \(C\) is the number of classes. In the extreme case where the soft-max quotient for the correct class, \(\exp(\alpha_{y_n}(x_n)) / \sum_{c} \exp(\alpha_{c}(x_n))\), is equal to 1, the logarithm makes the loss function equal to 0. If this quotient is smaller than 1, the logarithm makes it negative, and the \(-1\) pre-factor transforms it into a positive value. Any accepted minimization step of this function therefore forces the score of the correct label to increase relative to those of the other classes.
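In PyTorch this loss corresponds to nn.CrossEntropyLoss applied to the raw outputs (logits) of the final layer; a small check of the equivalence with Eq. (3), using arbitrary example numbers, could be:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

logits = torch.tensor([[2.0, 0.5, -1.0, 0.1]])   # alpha_c(x_n) for C = 4 hypothetical classes
target = torch.tensor([0])                        # y_n: index of the correct class

loss_builtin = nn.CrossEntropyLoss()(logits, target)

# Explicit form of Eq. (3): minus the log of the soft-max output of the correct class.
loss_manual = -F.log_softmax(logits, dim=1)[0, target.item()]

assert torch.allclose(loss_builtin, loss_manual)
```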
Finally, for the training phase, mini-batches with a batch size of 64 images were used, and all CNNs were trained for 30 epochs. The Adaptive Moment Estimation (Adam)26 algorithm was used for the minimization of the loss function, with a learning rate of \(\eta = 1 \times 10^{-5}\). For the testing phase, a batch size of 500 images was used, and for the validation phase, batches of 1000 images were used to increase the support of the estimated final quantities.
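A condensed training-loop sketch under these settings (batch size 64, 30 epochs, Adam with \(\eta = 1 \times 10^{-5}\)); `train_dataset` and `build_resnet50` are assumptions taken from the preprocessing sketch above, not part of the published code:

```python
import torch
from torch import nn, optim
from torch.utils.data import DataLoader

# train_dataset: a torch Dataset yielding (1x180x180 preprocessed image, class index) pairs,
# assumed to be defined elsewhere; build_resnet50 is the sketch shown earlier.
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model = build_resnet50(num_classes=46).to(device)
criterion = nn.CrossEntropyLoss()
optimizer = optim.Adam(model.parameters(), lr=1e-5)       # eta = 1e-5, as in the text

train_loader = DataLoader(train_dataset, batch_size=64, shuffle=True)

for epoch in range(30):                                   # 30 epochs
    model.train()
    running_loss = 0.0
    for images, labels in train_loader:
        images, labels = images.to(device), labels.to(device)
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()
        running_loss += loss.item() * images.size(0)
    print(f"epoch {epoch + 1:02d}: mean training loss = {running_loss / len(train_dataset):.4f}")
```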
The data was obtained from an already completed study that has been published separately19. It was collected from a sample consisting of a 60 \(\mu\)m-thick brain slice from a reeler mouse after death. In the cited paper19, the authors declare that the animal procedures were approved by the institutional animal welfare committee at the Research Centre Jülich GmbH, Germany, and were in accordance with European Union guidelines for the use and care of laboratory animals. For the purposes of this work, we only refer to the data for validation of the presented algorithm; we did not sacrifice or handle any animals. The contrast was obtained with deuterated formalin. The irradiated area was 1 mm \(\times\) 1 mm. The authors observed anisotropic Porod scattering (\(q < 0.04\) Å\(^{-1}\)) that is connected to the preferred orientation of whole nerve fibres, also called axons. They also report a correlation ring (\(q = 0.083\) Å\(^{-1}\)) that arises from the myelin sheaths, a multilayer of lipid bilayers with the myelin basic protein as a spacer.