Archive for the ‘Machine Learning’ Category

Multidimensional Mass Spectrometry and Machine Learning: A … – Technology Networks


We developed and demonstrated a new metabolomics workflow for studying engineered microbes in synthetic biology applications. Our workflow combines state-of-the-art analytical instrumentation that generates information-rich data with a novel machine learning (ML)-based algorithm tailored to process it.

In our roles as Pacific Northwest National Laboratory (PNNL) scientists, we led this multi-institutional study, which was published in Nature Communications.

Metabolites are small molecules produced by large networks of cellular processes and biochemical reactions in living systems. The sheer diversity of metabolite classes and structures constitutes a significant analytical challenge in terms of detection and annotation in complex samples.

Analytical instrumentation able to analyze hundreds of samples in ever faster and more accurate ways is critical in various metabolomics applications, including the development of microorganisms that can produce desirable fuels and chemicals in a sustainable way.

Multidimensional measurements using liquid chromatography (LC), ion mobility and data-independent acquisition mass spectrometry (MS) improve metabolite detection by linking the separations in a single analytical platform. The potential of this approach for metabolomics has been demonstrated previously, but this kind of multidimensional, information-rich data is complex and cannot be processed with traditional tools. Algorithms and software tools capable of processing it to extract accurate metabolite information are therefore needed.

We optimized a combination of sophisticated instruments for fast analyses and generated multidimensional data, rich in information that can be used to tease apart complex metabolomes.

For the computational method, Dr. Bilbao created a new algorithm, called PeakDecoder, to enable interpretation of the multidimensional data and ultimately identify individual molecules in complex mixtures. The algorithm learns to distinguish true co-elution and co-mobility directly from the raw data of the studied samples and calculates error rates for metabolite identification. To train the ML model, PeakDecoder uses a novel method to generate training examples, similar to the target-decoy strategy commonly used in proteomics. Once the model is trained, it can be used to score metabolites of interest from a library with an associated false discovery rate. Unlike existing methods, it can also be used with small libraries.
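To make the target-decoy idea concrete, here is a minimal, hypothetical Python sketch; it is not the PeakDecoder implementation, and all feature names and data distributions are invented for illustration. A classifier is trained to separate plausible co-eluting peak groups (targets) from deliberately mismatched ones (decoys), and the decoy count above a score threshold estimates the false discovery rate.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Hypothetical feature matrices: rows are candidate peak groups, columns are
# co-elution/co-mobility quality features (e.g., retention time deviation,
# CCS deviation, fragment-precursor profile correlation).
rng = np.random.default_rng(0)
targets = rng.normal(loc=1.0, size=(500, 4))   # true co-eluting groups
decoys = rng.normal(loc=0.0, size=(500, 4))    # mismatched (decoy) groups

X = np.vstack([targets, decoys])
y = np.array([1] * len(targets) + [0] * len(decoys))

clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)

def fdr_at_threshold(scores, labels, t):
    """Estimate FDR as decoys/targets scoring at or above threshold t."""
    above = scores >= t
    n_decoy = np.sum(above & (labels == 0))
    n_target = np.sum(above & (labels == 1))
    return n_decoy / max(n_target, 1)

scores = clf.predict_proba(X)[:, 1]
print(fdr_at_threshold(scores, y, t=0.8))
```

In PeakDecoder itself, the training examples are derived from the raw multidimensional data of the studied samples rather than simulated values as above.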

The key outcomes of the paper were:

The method takes a third of the sample analysis time of previous conventional approaches by using optimized LC conditions. PeakDecoder enables accurate profiling in multidimensional MS measurements for large scale studies.

We used the workflow to study metabolites of various strains of microorganisms engineered by the Agile BioFoundry to make various bioproducts, such as polymers and diesel fuel precursors. We were able to interpret 2,683 metabolite features across 116 microbial samples.

However, it should be noted that the current algorithm is not fully automated due to software dependencies and requires a metabolite library acquired with compatible analytical conditions for inference.

We are working on the next version of the algorithm leveraging advanced artificial intelligence (AI) methods used in other fields, such as computer vision. A user-friendly and fully automated version of PeakDecoder will support other types of molecular profiling workflows, including proteomics and lipidomics. Performance will be evaluated with more types of experimental data and AI-predicted multidimensional molecular libraries. The new version is expected to provide significant advances for multiomics research.

Reference: Bilbao A, Munoz N, Kim J, et al. PeakDecoder enables machine learning-based metabolite annotation and accurate profiling in multidimensional mass spectrometry measurements. Nat Commun. 2023;14(1):2461. doi:10.1038/s41467-023-37031-9

More:
Multidimensional Mass Spectrometry and Machine Learning: A ... - Technology Networks

Machine learning-guided determination of Acinetobacter density in … – Nature.com

A descriptive summary of the physicochemical variables and Acinetobacter density of the waterbodies is presented in Table 1. The mean pH, EC, TDS, and SAL of the waterbodies was 7.76±0.02, 218.66±4.76 µS/cm, 110.53±2.36 mg/L, and 0.10±0.00 PSU, respectively. While the average TEMP, TSS, TBS, and DO of the rivers was 17.29±0.21 °C, 80.17±5.09 mg/L, 87.51±5.41 NTU, and 8.82±0.04 mg/L, respectively, the corresponding DO5, BOD, and AD was 4.82±0.11 mg/L, 4.00±0.10 mg/L, and 3.19±0.03 log CFU/100 mL, respectively.

The bivariate correlation between paired PVs varied significantly, from very weak to perfect/very strong positive or negative correlation (Table 2). The correlation between the various PVs and AD likewise varied. For instance, a negligible but positive very weak correlation existed between AD and pH (r=0.03, p=0.422) and SAL (r=0.06, p=0.184), as well as a very weak inverse (negative) correlation between AD and TDS (r=−0.05, p=0.243) and EC (r=−0.04, p=0.339). A significant positive but weak correlation occurred between AD and BOD (r=0.26, p=4.21E−10), TSS (r=0.26, p=1.09E−09), and TBS (r=0.26, p=1.71E−09), whereas AD had a weak inverse correlation with DO5 (r=−0.39, p=1.31E−21). While there was a moderate positive correlation between TEMP and AD (r=0.43, p=3.19E−26), a moderate but inverse correlation occurred between AD and DO (r=−0.46, p=1.26E−29).
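For readers wanting to reproduce this kind of bivariate analysis, a short sketch using SciPy's Pearson correlation follows; the data here are simulated stand-ins, not the study's measurements.

```python
import numpy as np
from scipy import stats

# Hypothetical paired measurements: water temperature versus
# Acinetobacter density (log CFU/100 mL).
rng = np.random.default_rng(1)
temp = rng.normal(17.3, 2.0, size=120)
ad = 0.1 * temp + rng.normal(0, 0.5, size=120)

r, p = stats.pearsonr(temp, ad)   # bivariate Pearson correlation
print(f"r = {r:.2f}, p = {p:.3g}")
```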

The AD predicted by the 18 ML regression models varied in both average value and coverage (range), as shown in Fig. 1. The average predicted AD ranged from 0.0056 log units by M5P to 3.2112 log units by SVR. The average AD prediction declined in the order SVR [3.2112 (1.4646–4.4399)], DTR [3.1842 (2.2312–4.3036)], ENR [3.1842 (2.1233–4.8208)], NNT [3.1836 (1.1399–4.2936)], BRT [3.1833 (1.6890–4.3103)], RF [3.1795 (1.3563–4.4514)], XGB [3.1792 (1.1040–4.5828)], MARS [3.1790 (1.1901–4.5000)], LR [3.1786 (2.1895–4.7951)], LRSS [3.1786 (2.1622–4.7911)], GBM [3.1738 (1.4328–4.3036)], Cubist [3.1736 (1.1012–4.5300)], ELM [3.1714 (2.2236–4.9017)], KNN [3.1657 (1.4988–4.5001)], ANET6 [0.6077 (0.0419–1.1504)], ANET33 [0.6077 (0.0950–0.8568)], ANET42 [0.6077 (0.0692–0.8568)], and M5P [0.0056 (−0.6024 to 0.6916)]. However, in terms of range coverage, XGB [3.1792 (1.1040–4.5828)] and Cubist [3.1736 (1.1012–4.5300)] outperformed the other models, which overestimated and underestimated AD at lower and higher values, respectively, when compared with the raw data [3.1865 (1–4.5611)].

Comparison of ML model-predicted AD in the waterbodies. RAW = raw/empirical AD value.

Figure 2 represents the explanatory contributions of PVs to AD prediction by the models. Subplots A–R give the absolute magnitude (representing parameter importance) by which a PV instance changes each model's AD prediction from its mean value, presented on the vertical axis. In LR, an absolute change from the mean value of pH, BOD, TSS, DO, SAL, and TEMP corresponded to an absolute change of 0.143, 0.108, 0.069, 0.0045, 0.04, and 0.004 units in the LR's AD prediction response/value. Also, an absolute response flux of 0.135, 0.116, 0.069, 0.057, 0.043, and 0.0001 in the AD prediction value was attributed to pH, BOD, TSS, DO, SAL, and TEMP changes, respectively, by LRSS. Similarly, absolute changes in DO, BOD, TEMP, TSS, pH, and SAL would achieve 0.155, 0.061, 0.099, 0.144, and 0.297 AD prediction response changes by KNN. In addition, the most important PV, whose change most influenced the AD prediction response in RF, was TEMP (increasing or decreasing the response by up to 0.218). In summary, AD prediction response changes were highest and most significantly influenced by BOD (0.209), pH (0.332), TSS (0.265), TEMP (0.6), TSS (0.233), SAL (0.198), BOD (0.127), BOD (0.11), DO (0.028), pH (0.114), pH (0.14), SAL (0.91), and pH (0.427) in XGB, BRT, NNT, DTR, SVR, M5P, ENR, ANET33, ANET64, ANET6, ELM, MARS, and Cubist, respectively.

PV-specific contribution to the eighteen ML models' forecasting capability of AD in MHWE-receiving waterbodies. The average baseline value of each PV in the ML model is presented on the y-axis. The green/red bars represent the absolute value of each PV's contribution in predicting AD.

Table 4 presents the eighteen regression algorithms' performance in predicting AD given the waterbodies' PVs. In terms of MSE, RMSE, and R², XGB (MSE=0.0059, RMSE=0.0770, R²=0.9912) and Cubist (MSE=0.0117, RMSE=0.1081, R²=0.9827) ranked first and second, respectively, outmatching the other models in predicting AD. While the MSE and RMSE metrics ranked ANET6 (MSE=0.0172, RMSE=0.1310), ANET42 (MSE=0.0220, RMSE=0.1483), ANET33 (MSE=0.0253, RMSE=0.1590), M5P (MSE=0.0275, RMSE=0.1657), and RF (MSE=0.0282, RMSE=0.1679) in the 3rd, 4th, 5th, 6th, and 7th positions among the MLs in predicting AD, M5P (R²=0.9589) and RF (R²=0.9584) recorded better performance among these five models in terms of the R-squared metric, and ANET6 (MAD=0.0856) and M5P (MAD=0.0863) in terms of the MAD metric. Overall, Cubist (MAD=0.0437) and XGB (MAD=0.0440) led in terms of the MAD metric.
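These ranking criteria are standard regression metrics, computable as in the sketch below; note that MAD is interpreted here as the mean absolute deviation of the residuals, which is an assumption about the paper's usage, and the values are toy data.

```python
import numpy as np

def regression_metrics(y_true, y_pred):
    """MSE, RMSE, MAD (mean absolute deviation of residuals), and R^2."""
    err = y_true - y_pred
    mse = float(np.mean(err ** 2))
    rmse = float(np.sqrt(mse))
    mad = float(np.mean(np.abs(err)))
    r2 = 1.0 - np.sum(err ** 2) / np.sum((y_true - y_true.mean()) ** 2)
    return {"MSE": mse, "RMSE": rmse, "MAD": mad, "R2": float(r2)}

# Example with toy AD values (log CFU/100 mL), not study data:
y = np.array([3.1, 3.4, 2.9, 3.8])
yhat = np.array([3.0, 3.5, 3.0, 3.6])
print(regression_metrics(y, yhat))
```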

The feature importance of each PV, assessed by permutational resampling, on the predictive capability of the ML models in predicting AD in the waterbodies is presented in Table 3 and Fig. S1. The identified important variables ranked differently from one model to another, with temperature ranking first in 10/18 of the models. In those 10 algorithms/models, temperature was responsible for the highest mean RMSE dropout loss, with temperature in RF, XGB, Cubist, BRT, and NNT accounting for 0.4222 (45.90%), 0.4588 (43.00%), 0.5294 (50.82%), 0.3044 (44.87%), and 0.2424 (68.77%), respectively, while 0.1143 (82.31%), 0.1384 (83.30%), 0.1059 (57.00%), 0.4656 (50.58%), and 0.2682 (57.58%) RMSE dropout loss was attributed to temperature in ANET42, ANET10, ELM, M5P, and DTR, respectively. Temperature also ranked second in 2/18 models, including ANET33 (0.0559, 45.86%) and GBM (0.0793, 21.84%). BOD was another important variable in forecasting AD in the waterbodies, ranking first in 3/18 and second in 8/18 models. While BOD ranked as the first important variable in AD prediction in MARS (0.9343, 182.96%), LR (0.0584, 27.42%), and GBM (0.0812, 22.35%), it ranked second in KNN (0.2660, 42.69%), XGB (0.4119, 38.60%), BRT (0.2206, 32.51%), ELM (0.0430, 23.17%), SVR (0.1869, 35.77%), DTR (0.1636, 35.13%), ENR (0.0469, 21.84%), and LRSS (0.0669, 31.65%). SAL ranked first in 2/18 (KNN: 0.2799; ANET33: 0.0633) and second in 3/18 (Cubist: 0.3795; ANET42: 0.0946; ANET10: 0.1359) of the models. DO ranked first in 2/18 (ENR [0.0562, 26.19%] and LRSS [0.0899, 42.51%]) and second in 3/18 (RF [0.3240, 35.23%], M5P [0.3704, 40.23%], LR [0.0584, 27.41%]) of the models.
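Permutation importance of the kind reported here (RMSE dropout loss) can be sketched as follows: shuffle one predictor at a time and record how much the model's RMSE degrades. The model, features, and data below are illustrative stand-ins, not the study's.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(2)
X = rng.normal(size=(300, 3))                 # stand-ins for TEMP, BOD, DO
y = 1.5 * X[:, 0] + 0.5 * X[:, 1] + rng.normal(size=300)
model = RandomForestRegressor(random_state=0).fit(X, y)

base_rmse = np.sqrt(mean_squared_error(y, model.predict(X)))
for j, name in enumerate(["TEMP", "BOD", "DO"]):
    Xp = X.copy()
    Xp[:, j] = rng.permutation(Xp[:, j])      # break the feature-target link
    loss = np.sqrt(mean_squared_error(y, model.predict(Xp))) - base_rmse
    print(f"{name}: RMSE dropout loss = {loss:.3f}")
```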

Figure 3 shows the residual diagnostic plots of the models, comparing actual AD values with the AD values forecast by the models. The observed results showed that actual and predicted AD values in the case of LR (A), LRSS (B), KNN (C), BRT (F), GBM (G), NNT (H), DTR (I), SVR (J), ENR (L), ANET33 (M), ANET64 (N), ANET6 (O), ELM (P), and MARS (Q) were skewed, and the smoothed trend did not overlap. However, actual and predicted AD values were more closely aligned, with an approximately overlapping smoothed trend, in RF (D), XGB (E), M5P (K), and Cubist (R). Among the models, RF (D) and M5P (K) both overestimated and underestimated predicted AD at lower and higher values, respectively, whereas XGB and Cubist both overestimated AD at lower values, with XGB closer to the smoothed trend than Cubist. Generally, a smoothed trend overlapping the gradient line is desirable, as it shows that a model fits all values accurately/precisely.

Comparison between actual and predicted AD by the eighteen ML models.

The comparison of the partial-dependence profiles of PVs on AD prediction by the 18 models, using a unitary model-by-PV presentation for clarity, is shown in Figs. S2–S7. The partial-dependence profiles took (i) a form where an average increase in AD prediction accompanied a PV increase (upward trend); (ii) an inverse trend, where an increase in a PV resulted in a decline in AD prediction; (iii) a horizontal trend, where an increase/decrease in a PV had no effect on AD prediction; or (iv) a mixed trend, where the shape switched between two or more of i–iii. The models' responses varied with changes in any of the PVs, especially changes beyond the breakpoints, which could decrease or increase the AD prediction response.

The partial-dependence profile (PDP) of DO for the models had a downtrend, either from the start or after breakpoint(s), of nature ii and iv, except for ELM, which had an upward trend (i; Fig. S2). The TEMP PDP had an upward trend (i and iv), in most cases filled with one or more breakpoints, but had a horizontal trend in LRSS (Fig. S3). SAL had a PDP with a typical downward trend (ii and iv) across all the models (Fig. S4). While pH displayed a typical downtrend PDP in LR, LRSS, NNT, ENR, and ANET6, a downtrend filled with different breakpoint(s) was seen in RF, M5P, and SVR; the other models showed a typical upward trend (i and iv) filled with breakpoint(s) (Fig. S5). The PDP of TSS showed an upward trend that returned to a plateau after a final breakpoint (DTR, ANET33, M5P, GBM, RF, XGB, BRT) or a declining trend (ANET6, SVR; Fig. S6). The BOD PDP generally had an upward trend filled with breakpoint(s) in most models (Fig. S7).
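A partial-dependence profile can be sketched by sweeping one PV over a grid while holding the rest of each observation fixed, then averaging the model's predictions; the breakpoint behaviour described above appears as jumps or bends in the resulting curve. The model and data below are synthetic stand-ins.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(3)
X = rng.uniform(0, 1, size=(200, 2))
# Synthetic response with a deliberate breakpoint at 0.5 in feature 0:
y = np.where(X[:, 0] > 0.5, 1.0, 0.0) + rng.normal(0, 0.1, 200)
model = GradientBoostingRegressor(random_state=0).fit(X, y)

def partial_dependence(model, X, j, grid):
    """Average prediction as feature j sweeps a grid (other PVs held fixed)."""
    out = []
    for v in grid:
        Xg = X.copy()
        Xg[:, j] = v
        out.append(model.predict(Xg).mean())
    return np.array(out)

pdp = partial_dependence(model, X, j=0, grid=np.linspace(0, 1, 21))
print(pdp.round(2))  # roughly flat, then a jump near the 0.5 breakpoint
```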

Continued here:
Machine learning-guided determination of Acinetobacter density in ... - Nature.com

AI and Machine Learning will help to Build Metaverse Claims Exec – The Coin Republic

According to one of Meta's executives, reports of the Metaverse's demise have been greatly exaggerated.

Meta hosted a press event in New York on 11 May announcing a new generative AI Sandbox tool for advertisers. Nicola Mendelsohn, Meta's Head of Global Business, said the company is still very much interested in the Metaverse and reiterated that Mark Zuckerberg is very clear about that.

Responding to various reports from news media organizations suggesting that Meta has lost interest in the Metaverse, Mendelsohn explained that the company remains genuinely committed to it. She told the attendees that the whole Metaverse effort could take 5-10 years before the company realizes the vision it is talking about.

Mendelsohn's comments come as a defense against growing speculation that Meta has focused on artificial intelligence more than the Metaverse in recent months, a marked change from the period when the social media giant rebranded itself from Facebook Inc to Meta and couldn't stop talking about the Metaverse.

The recent surge in reports suggesting Meta is moving away from the Metaverse stems from AI tools dominating headlines. Speculation arose that Meta's rebranding and Metaverse announcements quickly faded from view once artificial intelligence started making headlines, leading some analysts and critics to think that Meta is moving towards the latest buzz trend and further away from the Metaverse.

Mendelsohn's stance comes despite the fact that Meta's Reality Labs lost $3.9 billion in the first quarter of 2023, $1 billion more than in the first quarter of 2022.

Meta explained that generative AI will play a huge part in building the Metaverse and making Quest virtual reality headsets, and that it will be used by brands and creators.

The newly launched AI Sandbox by the company will leverage generative AI to create text for ad copy aimed at different demographics, automatically crop photos and videos, and turn text prompts into background images for ads on Facebook and Instagram. Andrew Bosworth, CTO of Meta, previewed the first incoming tools in March.

Mendelsohn explained that it is very difficult for a company to build a virtual world, but said that with the help of machine learning and generative AI it can be done. John Hegeman, VP of Monetization at Meta, said that AI will help the company build the Metaverse more effectively. He further added, "The Metaverse will be another great opportunity to create value for folks with AI."

Oncyber, a 3D world-building platform, launched an AI tool powered by OpenAI's ChatGPT that lets users customize their digital environments via text commands. Mendelsohn feels that the company's full vision for the Metaverse could be challenged by Apple's mixed reality headset, which is set to be announced soon.


Read this article:
AI and Machine Learning will help to Build Metaverse Claims Exec - The Coin Republic

Development and internal-external validation of statistical and … – The BMJ

Abstract

Objective To develop a clinically useful model that estimates the 10 year risk of breast cancer related mortality in women (self-reported female sex) with breast cancer of any stage, comparing results from regression and machine learning approaches.

Design Population based cohort study.

Setting QResearch primary care database in England, with individual level linkage to the national cancer registry, Hospital Episodes Statistics, and national mortality registers.

Participants 141,765 women aged 20 years and older with a diagnosis of invasive breast cancer between 1 January 2000 and 31 December 2020.

Main outcome measures Four model building strategies comprising two regression (Cox proportional hazards and competing risks regression) and two machine learning (XGBoost and an artificial neural network) approaches. Internal-external cross validation was used for model evaluation. Random effects meta-analysis that pooled estimates of discrimination and calibration metrics, calibration plots, and decision curve analysis were used to assess model performance, transportability, and clinical utility.

Results During a median 4.16 years (interquartile range 1.76-8.26) of follow-up, 21,688 breast cancer related deaths and 11,454 deaths from other causes occurred. Restricting to 10 years maximum follow-up from breast cancer diagnosis, 20,367 breast cancer related deaths occurred during a total of 688,564.81 person years. The crude breast cancer mortality rate was 295.79 per 10,000 person years (95% confidence interval 291.75 to 299.88). Predictors varied for each regression model, but both Cox and competing risks models included age at diagnosis, body mass index, smoking status, route to diagnosis, hormone receptor status, cancer stage, and grade of breast cancer. The Cox model's random effects meta-analysis pooled estimate for Harrell's C index was the highest of any model at 0.858 (95% confidence interval 0.853 to 0.864, and 95% prediction interval 0.843 to 0.873). It appeared acceptably calibrated on calibration plots. The competing risks regression model had good discrimination: pooled Harrell's C index 0.849 (0.839 to 0.859, and 0.821 to 0.876), and evidence of systematic miscalibration on summary metrics was lacking. The machine learning models had acceptable discrimination overall (Harrell's C index: XGBoost 0.821 (0.813 to 0.828, and 0.805 to 0.837); neural network 0.847 (0.835 to 0.858, and 0.816 to 0.878)), but had more complex patterns of miscalibration and more variable regional and stage specific performance. Decision curve analysis suggested that the Cox and competing risks regression models tested may have higher clinical utility than the two machine learning approaches.

Conclusion In women with breast cancer of any stage, using the predictors available in this dataset, regression based methods had better and more consistent performance compared with machine learning approaches and may be worthy of further evaluation for potential clinical use, such as for stratified follow-up.

Clinical prediction models already support medical decision making in breast cancer by providing individualised estimations of risk. Tools such as PREDICT Breast1 or the Nottingham Prognostic Index23 are used in patients with early stage, surgically treated breast cancer for prognostication and selection of post-surgical treatment. Such tools are, however, inherently limited to treatment specific subgroups of patients. Accurate estimation of mortality risk after diagnosis across all patients with breast cancer of any stage may be clinically useful for stratifying follow-up, counselling patients about their expected prognosis, or identifying high risk individuals suitable for clinical trials.4

The scope for machine learning approaches in clinical prediction modelling has attracted considerable interest.56789 Some have posited that these flexible approaches might be more suitable for capturing non-linear associations, or for handling higher order interactions without explicit programming.10 Others have raised concerns about model transparency,1112 interpretability,13 risk of algorithmic bias exacerbating extant health inequalities,14 quality of evaluation and reporting,15 ability to handle rare events16 or censoring,17 and appropriateness of comparisons11 to regression based methods.18 Indeed, systematic reviews have shown no inherent benefit of machine learning approaches over appropriate statistical models in low dimensional clinical settings.18 As no a priori method exists to predict which modelling approach may yield the most useful clinical prediction model for a given scenario, frameworks that appropriately compare different models can be used.

Owing to the risks of harm from suboptimal medical decision making, clinical prediction models should be comprehensively evaluated for performance and utility,19 and, if widespread clinical use is intended, heterogeneity in model performance across relevant patient groups should be explored.20 Given developments in treatment for breast cancer over time, with associated temporal falls in mortality, another key consideration is the transportability of risk models, not just across regions and subpopulations but also across time periods.21 Although such dataset shift22 is a common issue with any algorithm sought to be deployed prospectively, this is not routinely explored. Robust evaluation is necessary but is non-uniform in the modelling of breast cancer prognostication.23 A systematic review identified 58 papers that assessed prognostic models for breast cancer,24 but only one study assessed clinical effectiveness by means of a simplistic approach measuring the accuracy of classifying patients into high or low risk groups. A more recent systematic review25 appraised 922 breast cancer prediction models using PROBAST (prediction model risk of bias assessment tool)26 and found that most of the clinical prediction models are poorly reported, show methodological flaws, or are at high risk of bias. Of the 27 models deemed to be at low risk of bias, only one was intended to estimate the risks of breast cancer related mortality in women with disease of any stage.27 However, this small study of 287 women using data from a single health department in Spain had methodological limitations, including possibly insufficient data to fit a model (see supplementary table 1) and uncertain transportability to other settings. Therefore, no reliable prediction model exists to provide accurate risk assessment of mortality in women with breast cancer of any stage. Although we refer to women throughout, this is based on self-reported female sex, which may include some individuals who do not identify as female.

We aimed to develop a clinically useful prediction model to reliably estimate the risks of breast cancer specific mortality in any woman with a diagnosis of breast cancer, in line with modern best practice. Utilising data from 141,765 women with invasive breast cancer diagnosed between 2000 and 2020 in England from a population representative, national linked electronic healthcare record database, this study comparatively developed and evaluated clinical prediction models using a combination of analysis methods within an internal-external validation strategy.2829 We sought to identify and compare the best performing methods for model discrimination, calibration, and clinical utility across all stages of breast cancer.

We evaluated four model building approaches: two regression methods (Cox proportional hazards and competing risks regression) and two machine learning methods (XGBoost and neural networks). The prediction horizon was 10 year risk of breast cancer related death from date of diagnosis. The study was conducted in accordance with our protocol30 and is reported consistent with the TRIPOD (transparent reporting of a multivariable prediction model for individual prognosis or diagnosis) guidelines.31

Assuming 100 candidate predictor parameters, an annual mortality rate of 0.024 after diagnosis,32 and a conservative 15% of the maximal Cox-Snell R², we estimated that the minimum sample size for fitting the regression models was 10,080, with 1,452 events, and 14.52 events for each predictor parameter.3334 No standard method exists to estimate minimum sample size for our machine learning models of interest; some evidence, albeit on binary outcome data, suggests that some machine learning methods may require much more data.35

The QResearch database was used to identify an open cohort of women aged 20 years and older (no upper age limit) at time of diagnosis of any invasive breast cancer between 1 January 2000 and 31 December 2020 in England. QResearch has collected data from more than 1,500 general practices in the United Kingdom since 1989 and comprises individual level linkage across general practice data, NHS Digital's Hospital Episode Statistics, the national cancer registry, and the Office for National Statistics (ONS) death registry.

The outcome for this study was breast cancer related mortality within 10 years from the date of a diagnosis of invasive breast cancer. We defined the diagnosis of invasive breast cancer as the presence of breast cancer related Read/Systematised Nomenclature of Medicine Clinical Terms (SNOMED) codes in general practice records, breast cancer related ICD-10 (international classification of diseases, 10th revision) codes in Hospital Episode Statistics data, or as a patient with breast cancer in the cancer registry (stage >0; whichever occurred first). The outcome, breast cancer death, was defined as the presence of relevant ICD-10 codes as any cause of death (primary or contributory) on death certificates from the ONS register. We excluded women with recorded carcinoma in situ only diagnoses as these are non-obligate precursor lesions and present distinct clinical considerations.36 Clinical codes used to define predictors and outcomes are available in the QResearch code group library (https://www.qresearch.org/data/qcode-group-library/). Follow-up time was calculated from the first recorded date of breast cancer diagnosis (earliest recorded on any of the linked datasets) to the earliest of breast cancer related death, other cause of death, or censoring (reached end of study period, left the registered general practice, or the practice stopped contributing to QResearch). The status at last follow-up depended on the modelling framework (ie, Cox proportional hazards or competing risks framework). The maximum follow-up was truncated to 10 years, in line with the model prediction horizon. Supplementary table 2 shows ascertainment of breast cancer diagnoses across the linked datasets.

Individual participant data were extracted on the candidate predictor parameters listed in Box 1, as well as geographical region, auxiliary variables (breast cancer treatments), and dates of events of interest. Candidate predictors were based on evidence from the clinical, epidemiological, or prediction model literature.12337383940 The most recently recorded values before or at the time of breast cancer diagnosis were used with no time restriction. Data were available from the cancer registry about cancer treatment within one year of diagnosis (eg, chemotherapy) but without any corresponding date. The intended model implementation (prediction time) would be at the breast cancer multidisciplinary team meeting or similar clinical setting, following initial diagnostic investigations and staging. To avoid information leakage, and since we did not seek to model treatment selection within a causal framework,41 breast cancer treatment variables were not included as predictors.

Age at breast cancer diagnosis (continuous or fractional polynomial)

Townsend deprivation score at cohort entry (continuous or fractional polynomial)

Body mass index (most recently recorded before breast cancer diagnosis; continuous or fractional polynomial)

Self-reported ethnicity

Tumour characteristics:

Cancer stage at diagnosis (ordinal: I, II, III, IV)

Differentiation (categorical: well differentiated, moderately differentiated, poorly or undifferentiated)

Oestrogen receptor status (binary: positive or negative)

Progesterone receptor status (binary: positive or negative)

Human epidermal growth factor receptor 2 (HER2) status (binary: positive or negative)

Route to diagnosis (categorical: emergency presentation, inpatient elective, other, screen detected, two week wait)

Comorbidities or medical history on general practice or Hospital Episodes Statistics data (recorded before or at entry to cohort; categorical unless stated otherwise):

Hypertension

Ischaemic heart disease

Type 1 diabetes mellitus

Type 2 diabetes mellitus

Chronic liver disease or cirrhosis

Systemic lupus erythematosus

Chronic kidney disease (ordinal: none or stage 2, stage 3, stage 4, stage 5)

Vasculitis

Family history of breast cancer (categorical: recorded in general practice or Hospital Episodes Statistics data, before or at entry to cohort)

Drug use (before breast cancer diagnosis):

Hormone replacement therapy

Antipsychotic

Tricyclic antidepressant

Selective serotonin reuptake inhibitor

Monoamine oxidase inhibitor

Oral contraceptive pill

Angiotensin converting enzyme inhibitor

Beta blocker

Renin-angiotensin aldosterone antagonists

Candidate interaction terms:

Age (fractional polynomial terms) × family history of breast cancer

Ethnicity × age (fractional polynomial terms)

Fractional polynomial42 terms for the continuous variables age at diagnosis, Townsend deprivation score, and body mass index (BMI) at diagnosis were identified in the complete data. This was done separately for the Cox and competing risks regression models, with a maximum of two powers permitted.

Multiple imputation with chained equations was used to impute missing data for BMI, ethnicity, Townsend deprivation score, smoking status, cancer stage at diagnosis, cancer grade at diagnosis, HER2 status, oestrogen receptor status, and progesterone receptor status under the missing at random assumption.4344 The imputation model contained all other candidate predictors, the endpoint indicator, breast cancer treatment variables, the Nelson-Aalen cumulative hazard estimate,45 and the period of cohort entry (period 1=1 January 2000-31 December 2009; period 2=1 January 2010-31 December 2020). The natural logarithm of BMI was used in imputation for normality, with imputed values exponentiated back to the regular scale for modelling. We generated 50 imputations and used these in all model fitting and evaluation steps. Although missing data were observed in the linked datasets used for model development, in the intended use setting (ie, risk estimation at breast cancer multidisciplinary team after a medical history has been taken), the predictors would be expected to be available for all patients.
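As a rough Python analogue of this step (the study used Stata's multiple imputation suite), scikit-learn's IterativeImputer can generate multiple chained-equations-style imputations; the variables, missingness pattern, and log-BMI transform below are simplified stand-ins.

```python
import numpy as np
from sklearn.experimental import enable_iterative_imputer  # noqa: F401
from sklearn.impute import IterativeImputer

rng = np.random.default_rng(4)
n = 500
log_bmi = np.log(rng.normal(27, 4, n).clip(15, 60))  # impute BMI on log scale
depriv = rng.normal(0, 3, n)                          # Townsend-like score
X = np.column_stack([log_bmi, depriv])
X[rng.random(X.shape) < 0.2] = np.nan                 # inject missingness

imputations = []
for m in range(50):                # the study generated 50 imputations
    imp = IterativeImputer(sample_posterior=True, random_state=m)
    Xm = imp.fit_transform(X)
    Xm[:, 0] = np.exp(Xm[:, 0])    # exponentiate back to the BMI scale
    imputations.append(Xm)
```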

Models were fit to the entire cohort and then evaluated using internal-external cross validation,28 which involved splitting the dataset by geographical region (n=10) and time period (see figure 1 for summary). For the internal-external cross validation, we recalculated follow-up so that those women who entered the study during the first study decade and survived into the second study period had their follow-up truncated (and status assigned accordingly) at 31 December 2009. This was to emulate two wholly temporally distinct datasets, both with maximum follow-up of 10 years, for the purposes of estimating temporal transportability of the models.

Summary of internal-external cross validation framework used to evaluate model performance for several metrics, and transportability

For the approach using Cox proportional hazards modelling, we treated other (non-breast cancer) deaths as censored. A full Cox model was fitted using all candidate predictor parameters. Model fitting was performed in each imputed dataset and the results combined using Rubin's rules, and then this pooled model was used as the basis for predictor selection. We selected binary or multilevel categorical predictors associated with exponentiated coefficients >1.1 or <0.9 (at P<0.01) for inclusion, and interactions and continuous variables were selected if associated with P<0.01. These were then used to refit the final Cox model. The predictor selection approach benefits from starting with a full, plausible, maximally complex model,46 and then considers both the clinical and the statistical magnitude of predictors to select a parsimonious model while making use of multiply imputed data.4748 This approach has been used in previous clinical prediction modelling studies using QResearch.495051 Clustered standard errors were used to account for clustering of participants within individual general practices in the database.
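A minimal sketch of this fit-then-select step using the lifelines library follows (the study itself used Stata, with Rubin's rules across imputations and clustered standard errors, which are omitted here); the data are simulated and the predictor names hypothetical. The study's rule applies the hazard ratio magnitude threshold to binary/categorical predictors and the p-value rule to continuous ones; the sketch applies the combined rule for brevity.

```python
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(5)
n = 1000
df = pd.DataFrame({
    "age": rng.normal(60, 10, n),
    "stage_iv": rng.integers(0, 2, n),
    "time": rng.exponential(8, n).clip(0.1, 10),  # follow-up capped at 10y
    "event": rng.integers(0, 2, n),               # 1 = breast cancer death
})

cph = CoxPHFitter().fit(df, duration_col="time", event_col="event")
hr = np.exp(cph.params_)                          # hazard ratios
p = cph.summary["p"]
# Keep predictors with HR > 1.1 or < 0.9 at p < 0.01 (simplified rule):
keep = hr.index[((hr > 1.1) | (hr < 0.9)) & (p < 0.01)]
print(list(keep))  # selected predictors for refitting the final model
```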

Deaths from other, non-breast cancer related causes represent a competing risk and in this framework were handled accordingly.30 We repeated the fractional polynomial term selection and predictor selection processes for the competing risks models owing to potential differential associations between predictors and risk or functional forms thereof. A full model was fit with all candidate predictors, with the same magnitude and significance rule used to select the final predictors.

The competing risks model was developed using jack-knife pseudovalues for the Aalen-Johansen cumulative incidence function at 10 years as the outcome variable52; the pseudovalues were calculated for the overall cohort (for fitting the model) and then separately in the data from period 1 and from period 2 for the purposes of internal-external cross validation. These values are a marginal (pseudo) probability that can then be used in a regression model to predict individuals' probabilities conditional on the observed predictor values. Pseudovalues for the cumulative incidence function at 10 years were regressed on the predictor parameters in a generalised linear model with a complementary log-log link function525354 and robust standard errors to account for the non-independence of pseudovalues. The resultant coefficients are statistically similar to those of the Fine-Gray model5254 but computationally less burdensome to obtain, and permit direct modelling of probabilities.
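A loose sketch of the pseudovalue regression follows, using statsmodels' GLM with a complementary log-log link and robust standard errors; it assumes the pseudovalues have already been computed from the Aalen-Johansen estimator (that step is omitted). Real jack-knife pseudovalues can fall outside [0, 1], which the study's Stata GLM tolerates; here they are clipped so the Python Binomial family accepts them.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(6)
n = 400
X = sm.add_constant(rng.normal(size=(n, 2)))     # constant + two predictors
# Stand-in pseudovalues for the 10-year cumulative incidence function:
pseudo = np.clip(rng.normal(0.3, 0.2, n), 0.001, 0.999)

model = sm.GLM(pseudo, X,
               family=sm.families.Binomial(link=sm.families.links.CLogLog()))
res = model.fit(cov_type="HC1")                  # robust standard errors
print(res.params)                                # cloglog-scale coefficients
```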

All fitting and evaluation of the Cox and competing risks regression models occurred in each separate imputed dataset, with Rubin's rules used to pool coefficients and standard errors across all imputations.55

The XGBoost and neural network approaches were adapted to handle right censored data in the setting of competing risks by using the jack-knife pseudovalues for the cumulative incidence function at 10 years as a continuous outcome variable. The same predictor parameters as selected for the competing risks regression model were used for the purposes of benchmarking. The XGBoost model used untransformed values for continuous predictors, but these were minimum-maximum scaled (constrained between 0 and 1) for the neural network. We converted categorical variables with more than two levels to dummy variables for both machine learning approaches.

We fit the XGBoost and neural network models to the entire available cohort and used bayesian optimisation56 with fivefold cross validation to identify the optimal configuration of hyperparameters to minimise the root mean squared error between observed pseudovalues and model predictions. Fifty iterations of bayesian optimisation were used, with the expected improvement acquisition function.

For the XGBoost model, we used bayesian optimisation to tune the number of boosting rounds, learning rate (eta), tree depth, subsample fraction, regularisation parameters (alpha, gamma, and lambda), and column sampling fractions (per tree, per level). We used the squared error regression option as the objective, and the root mean squared error as the evaluation metric.
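The sketch below mirrors this setup in Python's xgboost package, substituting a plain random search for the Bayesian optimisation package the study used in R; the data and search ranges are illustrative assumptions.

```python
import numpy as np
import xgboost as xgb

rng = np.random.default_rng(7)
X = rng.normal(size=(500, 26))                          # 26 predictor parameters
pseudo = np.clip(rng.normal(0.3, 0.3, 500), -0.2, 1.2)  # stand-in pseudovalues
dtrain = xgb.DMatrix(X, label=pseudo)

best = (np.inf, None)
for _ in range(20):   # random search as a stand-in for bayesian optimisation
    params = {
        "objective": "reg:squarederror",        # squared error regression
        "eval_metric": "rmse",                  # RMSE as evaluation metric
        "eta": 10 ** rng.uniform(-3, -0.5),
        "max_depth": int(rng.integers(2, 9)),
        "subsample": rng.uniform(0.5, 1.0),
        "alpha": rng.uniform(0, 5),
        "gamma": rng.uniform(0, 5),
        "lambda": rng.uniform(0, 5),
        "colsample_bytree": rng.uniform(0.5, 1.0),
        "colsample_bylevel": rng.uniform(0.5, 1.0),
    }
    cv = xgb.cv(params, dtrain, num_boost_round=200, nfold=5,
                early_stopping_rounds=10, seed=0)
    score = cv["test-rmse-mean"].min()
    if score < best[0]:
        best = (score, params)
print(best)
```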

To permit modelling of higher order interactions in this tabular dataset, we used a feed forward artificial neural network approach with fully connected dense layers: the model architecture comprised an input layer of 26 nodes (ie, number of predictor parameters), rectified linear unit activation functions in each hidden layer, and a single linear activation output node to generate predictions for the pseudovalues of the cumulative incidence function. The Adam optimiser was used,57 with the initial learning rate, number of hidden layers, number of nodes in each hidden layer, and number of training epochs tuned using bayesian optimisation. If the loss function had plateaued for three epochs, we halved the learning rate, with early stopping after five epochs if the loss function had not reduced by 0.0001. The loss function was the root mean squared error between observed and predicted pseudovalues due to the continuous nature of the target variable.58
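A compact Keras rendering of the described architecture follows; the hidden-layer sizes and counts were tuned in the study, so the ones here are placeholders, and since Keras has no built-in RMSE loss, MSE is minimised instead (equivalent for optimisation purposes).

```python
import numpy as np
from tensorflow import keras

rng = np.random.default_rng(8)
X = rng.uniform(0, 1, size=(500, 26)).astype("float32")  # min-max scaled inputs
pseudo = rng.normal(0.3, 0.3, 500).astype("float32")     # CIF pseudovalues

model = keras.Sequential([
    keras.layers.Input(shape=(26,)),           # 26 predictor parameters
    keras.layers.Dense(64, activation="relu"), # placeholder hidden layers
    keras.layers.Dense(32, activation="relu"),
    keras.layers.Dense(1, activation="linear"),  # single linear output node
])
model.compile(optimizer=keras.optimizers.Adam(learning_rate=1e-3),
              loss=keras.losses.MeanSquaredError())

callbacks = [
    # Halve the learning rate when the loss plateaus for three epochs...
    keras.callbacks.ReduceLROnPlateau(monitor="loss", factor=0.5, patience=3),
    # ...and stop early after five epochs without a 0.0001 improvement.
    keras.callbacks.EarlyStopping(monitor="loss", patience=5, min_delta=1e-4),
]
model.fit(X, pseudo, epochs=50, batch_size=64, callbacks=callbacks, verbose=0)
```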

After identification of the optimal hyperparameter configurations, we fit the models accordingly to the entirety of the cohort data. We then assessed the performance of these models using the internal-external cross validation strategy; this resembled that for the regression models but with the addition of a hyperparameter tuning component (fig 1). During each iteration of internal-external cross validation, we used bayesian optimisation with fivefold cross validation to identify the optimal hyperparameters for the model fitted to the development data from period 1, which we then tested on the held-out period 2 data. This therefore constituted a form of nested cross validation.59

As the XGBoost and neural network models do not constitute a linear set of parameters and do not have standard errors (and therefore cannot be pooled using Rubin's rules), we used a stacked imputation strategy. The 50 imputed datasets were stacked to form a single, long dataset, which enabled us to use the same full data as for the regression models, avoiding suboptimal approaches such as complete case analysis or single imputation. For model evaluation after internal-external cross validation, we used approaches based on Rubin's rules,55 with performance estimates calculated in each separate imputed dataset using the internal-external cross validation generated individual predictions, and then the estimates were pooled.

Predicted risks when using the Cox model can be derived by combining the linear predictor with the baseline hazard function using the equation: predicted event probability = 1 − S₀(t)^exp(Xβ), where S₀(t) is the baseline survival function calculated at 10 years and Xβ is the individual's linear predictor. For internal-external cross validation, we estimated baseline survival functions separately in each imputation in the period 1 data (continuous predictors centred at the mean, binary predictors set to zero), with results pooled across imputations in accordance with Rubin's rules.55 We estimated the final model's baseline function similarly but using the full cohort data.

Probabilistic predictions for the competing risks regression model were directly calculated using the following transformation of the linear predictor (Xβ, which included a constant term): predicted event probability = 1 − exp(−exp(Xβ)).
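Both transformations are easy to express directly; the linear-predictor values and baseline survival below are arbitrary toy inputs, not model estimates from the study.

```python
import numpy as np

def cox_risk(lin_pred, s0_10y):
    """Cox: predicted probability = 1 - S0(10)^exp(X*beta)."""
    return 1.0 - s0_10y ** np.exp(lin_pred)

def crr_risk(lin_pred):
    """Competing risks GLM (cloglog link): 1 - exp(-exp(X*beta))."""
    return 1.0 - np.exp(-np.exp(lin_pred))

print(cox_risk(np.array([0.0, 1.2]), s0_10y=0.85))  # ~0.15 and ~0.42
print(crr_risk(np.array([-2.0, -0.5])))
```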

As the XGBoost and neural network approaches modelled the pseudovalues directly, we handled the generated predictions as probabilities (conditional on the predictor values). As pseudovalues are not restricted to lie between 0 and 1, we clipped the XGBoost and neural network model predictions to be between 0 and 1 to represent predicted probabilities for model evaluation.

Discrimination was assessed using Harrell's C index,60 calculated at 10 years and taking censoring into account; this used inverse probability of censoring weights for competing risks regression, XGBoost, and neural networks given their competing risks formulation.61 Calibration was summarised in terms of the calibration slope and calibration-in-the-large.6263 Region level results for these metrics were computed during internal-external cross validation and pooled using random effects meta-analysis20 with the Hartung-Knapp-Sidik-Jonkman method64 to provide an estimate of each metric with a 95% confidence interval, and with a 95% prediction interval. The prediction interval estimates the range of model performance on application to a distinct dataset.20 We also computed these metrics by ethnicity, 10 year age groups, and cancer stage (I-IV) using the pooled, individual level predictions.
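As an illustration of these two metrics on simulated data (lifelines' concordance_index does not apply the inverse-probability-of-censoring weighting used in the paper, and the meta-analysis pooling step is omitted):

```python
import numpy as np
import statsmodels.api as sm
from lifelines.utils import concordance_index

rng = np.random.default_rng(9)
n = 500
risk = rng.uniform(0.01, 0.9, n)                    # model-predicted 10y risk
time = rng.exponential(10 * (1 - risk)).clip(0.1, 10)
event = (rng.random(n) < risk).astype(int)

# Discrimination: Harrell's C (higher scores should mean longer survival,
# so the negated risk is passed as the score).
c = concordance_index(time, -risk, event)

# Calibration slope: regress the observed outcome on the cloglog of the
# predicted risk; a slope near 1 suggests no systematic miscalibration.
lp = np.log(-np.log(1 - risk))
res = sm.GLM(event, sm.add_constant(lp),
             family=sm.families.Binomial(
                 link=sm.families.links.CLogLog())).fit()
print(f"C index {c:.3f}, calibration slope {res.params[1]:.3f}")
```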

Using the individual level predictions from all models, we generated smoothed calibration plots to assess alignment of observed and predicted risks across the spectrum of predicted risks. We generated these using a running smoother through individual risk predictions, and observed individual pseudovalues65 for the Kaplan-Meier failure function (Cox model) or cumulative incidence function (all other models).

Meta-regression following Hartung-Knapp-Sidik-Jonkman random effects models was used to calculate measures of I² and R² to assess the extent to which inter-regional heterogeneity in discrimination and calibration metrics could be attributable to regional variation in age, BMI (standard deviation thereof), mean deprivation score, and ethnic diversity (percentage of people of non-white ethnicity).20 These region level characteristics were estimated using the data from period 2.

We compared the models for clinical utility using decision curve analysis.66 This analysis assesses the trade-off between the benefits of true positives (breast cancer deaths) and the potential harms that may arise from false positives across a range of threshold probabilities. Each model was compared against the two default scenarios of "treat all" and "treat none", with the mean model prediction used for each individual across all imputations. This approach implicitly takes into account both discrimination and calibration and also extends model evaluation to consider the ramifications on clinical decision making.67 The competing risk of other, non-breast-cancer death was taken into account. Decision curves were plotted overall, and by cancer stage, to explore potential utility across all breast cancers.
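Net benefit at a threshold probability pt is TP/n − FP/n × pt/(1 − pt). A bare-bones sketch follows, using a binary outcome stand-in; the competing-risks adjustment and censoring handling used in the paper are omitted.

```python
import numpy as np

def net_benefit(y, risk, thresholds):
    """Net benefit = TP/n - FP/n * pt/(1 - pt) at each threshold pt."""
    n = len(y)
    out = []
    for pt in thresholds:
        treated = risk >= pt
        tp = np.sum(treated & (y == 1)) / n
        fp = np.sum(treated & (y == 0)) / n
        out.append(tp - fp * pt / (1 - pt))
    return np.array(out)

rng = np.random.default_rng(10)
y = rng.integers(0, 2, 1000)                 # 1 = breast cancer death by 10y
risk = np.clip(y * 0.4 + rng.uniform(0, 0.6, 1000), 0, 1)
thresholds = np.linspace(0.05, 0.5, 10)
model_nb = net_benefit(y, risk, thresholds)
treat_all = net_benefit(y, np.ones(1000), thresholds)  # "treat all" reference
print(model_nb.round(3), treat_all.round(3))
```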

Predictions generated from the Cox proportional hazards model and other, competing risks approaches have different interpretations, owing to their differential handling of competing events and their modelling of hazard functions with distinct statistical properties.

Data processing, multiple imputation, regression modelling, and evaluation of internal-external cross validation results utilised Stata (version 17). Machine learning modelling was performed in R 4.0.1 (xgboost, keras, and ParBayesianOptimization packages), with an NVIDIA Tesla V100 used for graphical processing unit support. Analysis code is available in repository https://github.com/AshDF91/Breast-cancer-prognosis.

Two people who survived breast cancer were involved in discussions about the scope of the project, candidate predictors, importance of research questions, and co-creation of lay summaries before submitting the project for approval. This project was also presented at an Oxfordshire based breast cancer support group to obtain qualitative feedback on the study's aims and face validity or plausibility of candidate predictors, and to discuss the acceptability of clinical risk models to guide stratified breast cancer care.

A total of 141,765 women aged between 20 and 97 years at date of breast cancer diagnosis were included in the study. During the entirety of follow-up (median 4.16 (interquartile range 1.76-8.26) years), there were 21,688 breast cancer related deaths and 11,454 deaths from other causes. Restricting to 10 years maximum follow-up from breast cancer diagnosis, 20,367 breast cancer related deaths occurred during a total of 688,564.81 person years. The crude mortality rate was 295.79 per 10,000 person years (95% confidence interval 291.75 to 299.88). Supplementary figure 1 presents ethnic group specific mortality curves. Table 1 shows the baseline characteristics of the cohort overall and separately by decade defined subcohort.

Summary characteristics of final study cohort overall and separated into temporally distinct subcohorts used in internal-external cross validation. Values are number (column percentage) unless stated otherwise

After the cohort was split by decade of cohort entry and follow-up was truncated for the purposes of internal-external cross validation, 7,551 breast cancer related deaths occurred in period 1 during a total of 211,006.95 person years of follow-up (crude mortality rate 357.96 per 10,000 person years (95% confidence interval 349.87 to 366.02)). In the period 2 data, 8,808 breast cancer related deaths occurred during a total of 297,066.74 person years of follow-up, with a lower crude mortality rate of 296.50 per 10,000 person years (290.37 to 302.76) observed.

We selected non-linear fractional polynomial terms for age and BMI (see supplementary figure 2). The final Cox model after predictor selection is presented as exponentiated coefficients in figure 2 for transparency, with the full model detailed in supplementary table 3. Model performance across all ethnic groups is summarised in supplementary table 4: discrimination ranged between a Harrell's C index of 0.794 (95% confidence interval 0.691 to 0.896) in Bangladeshi women to 0.931 (0.839 to 1.000) in Chinese women, but the low numbers of event counts in smaller ethnic groups (eg, Chinese) meant that overall calibration indices were imprecisely estimated for some.

Final Cox proportional hazards model predicting 10 year risk of breast cancer mortality, presented as its exponentiated coefficients (hazard ratios with 95% confidence intervals). Model contains fractional polynomial terms for age (0.5, 2) and body mass index (2, 2), but these are not plotted owing to reasons of scale. Model also includes a baseline survival term (not plotted; the full model as coefficients is presented in the supplementary file). ACE=angiotensin converting enzyme; CI=confidence interval; CKD=chronic kidney disease; ER=oestrogen receptor; GP=general practitioner; HER2=human epidermal growth factor receptor 2; HRT=hormone replacement therapy; PR=progesterone receptor; RAA=renin-angiotensin aldosterone; SSRI=selective serotonin reuptake inhibitor

Overall, the Cox model's random effects meta-analysis pooled estimate for Harrell's C index was the highest of any model, at 0.858 (95% confidence interval 0.853 to 0.864, 95% prediction interval 0.843 to 0.873). A small degree of miscalibration occurred on summary metrics, with a meta-analysis pooled estimate for the calibration slope of 1.108 (95% confidence interval 1.079 to 1.138, 95% prediction interval 1.034 to 1.182) (table 2). Figure 3, figure 4, and figure 5 show the meta-analysis pooling of performance metrics across regions. Smoothed calibration plots showed generally good alignment of observed and predicted risks across the entire spectrum of predicted risks, albeit with some minor over-prediction (fig 6).

Summary performance metrics for all four models, estimated using random effects meta-analysis after internal-external cross validation.

Results from internal-external cross validation of Cox proportional hazards model for Harrell's C index. Plots display region level performance metric estimates and 95% confidence intervals (diamonds with lines), and an overall pooled estimate obtained using random effects meta-analysis and 95% confidence interval (lowest diamond) and 95% prediction interval (line through lowest diamond). CI=confidence interval

Results from internal-external cross validation of Cox proportional hazards model for calibration slope. Plots display region level performance metric estimates and 95% confidence intervals (diamonds with lines), and an overall pooled estimate obtained using random effects meta-analysis and 95% confidence interval (lowest diamond) and 95% prediction interval (line through lowest diamond). CI=confidence interval

Results from internal-external cross validation of Cox proportional hazards model for calibration-in-the-large. Plots display region level performance metric estimates and 95% confidence intervals (diamonds with lines), and an overall pooled estimate obtained using random effects meta-analysis and 95% confidence interval (lowest diamond) and 95% prediction interval (line through lowest diamond). CI=confidence interval

Calibration of the four models tested. Top row shows the alignment between predicted and observed risks for all models with smoothed calibration plots. Bottom row summarises the distribution of predicted risks from each model as histograms

Regional differences in Harrell's C index were relatively slight. None of the inter-region heterogeneity observed for discrimination (I²=53.14%) and calibration (I²=42.35%) appeared to be attributable to regional variation in any of the sociodemographic factors examined (table 3). The model discriminated well across cancer stages, but discriminative capability decreased with increasing stage; moderate variation was observed in calibration across cancer stage groups (supplementary table 9).

Random effects meta-regression of relative contributions of regional variation in age, body mass index, deprivation, and non-white ethnicity on inter-regional differences in performance metrics after internal-external cross validation

Similar fractional polynomial terms were selected for age and BMI in the competing risks regression model (see supplementary figure 2), and predictor selection yielded a model with fewer predictors than the Cox model. The competing risks regression model is presented as exponentiated coefficients in figure 7, with the full model (including constant term) detailed in supplementary table 5. Ethnic group specific discrimination and overall calibration metrics are detailed in supplementary table 4; the model generally performed well across ethnic groups, with similar discrimination, but there was some overt miscalibration on summary metrics, although some metrics were estimated imprecisely owing to small event counts in some ethnic groups.

Final competing risks regression model predicting 10 year risk of breast cancer mortality, presented as its exponentiated coefficients (subdistribution hazard ratios with 95% confidence intervals). Model contains fractional polynomial terms for age (1, 2) and body mass index (2, 2), but these are not plotted owing to reasons of scale. Model also includes an intercept term (not plotted; see supplementary file for full model as coefficients). CI=confidence interval; ER=oestrogen receptor; GP=general practitioner; HER2=human epidermal growth factor receptor 2; HRT=hormone replacement therapy; PR=progesterone receptor

The random effects meta-analysis pooled Harrell's C index was 0.849 (95% confidence interval 0.839 to 0.859, 95% prediction interval 0.821 to 0.876). Some evidence suggested systematic miscalibration overall; that is, a pooled calibration slope of 1.160 (95% confidence interval 1.064 to 1.255, 95% prediction interval 0.872 to 1.447). Smoothed calibration plots showed underestimation of risk at the highest predicted values (eg, predicted risk >40%, fig 6). Supplementary figure 3 displays regional performance metrics.

An estimated 41.33% of the regional variation in Harrell's C index for the competing risks regression model was attributable to inter-regional case mix (table 3); ethnic diversity was the leading sociodemographic factor associated therewith (table 3). For calibration, the I² from the full meta-regression model was 56.68%, with regional variation in age, deprivation, and ethnic diversity associated therewith. Similar to the Cox model, discrimination tended to decrease with increasing cancer stage (supplementary table 9).

Table 4 summarises the selected hyperparameter configuration for the final XGBoost model. The discrimination of this model appeared acceptable overall,68 albeit lower than for both regression models (table 2; supplementary figure 4), with a meta-analysis pooled Harrell's C index of 0.821 (95% confidence interval 0.813 to 0.828, 95% prediction interval 0.805 to 0.837). Pooled calibration metrics suggested some mild systematic miscalibration; for example, the meta-analysis pooled calibration slope was 1.084 (95% confidence interval 1.003 to 1.165, 95% prediction interval 0.842 to 1.326). Calibration plots showed miscalibration across much of the predicted risk spectrum (fig 6), with overestimation in those with predicted risks <0.4 (most of the individuals) before mixed underestimation and overestimation in the patients at highest risk. Discrimination and calibration were poor for stage IV tumours (see supplementary table 9). Regarding regional variation in performance metrics as a result of differences between regions, most of the variation in calibration was attributable to ethnic diversity, followed by regional differences in age (table 3).

Description of machine learning model architectures and hyperparameters tuning performed

Table 4 summarises the selected hyperparameter configuration for the final neural network. This model performed better than XGBoost for overall discrimination: the meta-analysis pooled Harrell's C index was 0.847 (95% confidence interval 0.835 to 0.858, 95% prediction interval 0.816 to 0.878, table 2 and supplementary figure 5). Post-internal-external cross validation pooled estimates of summary calibration metrics suggested no systematic miscalibration overall, such as a calibration slope of 1.037 (95% confidence interval 0.910 to 1.165), but heterogeneity was more noticeable across regions, manifesting in the wide 95% prediction interval (slope: 0.624 to 1.451), and smoothed calibration plots showed a complex pattern of miscalibration (fig 6). Meta-regression estimated that the leading factor associated with inter-regional variation in discrimination and calibration metrics was regional differences in ethnic diversity (table 3).

Both the XGBoost and neural network approaches showed erratic calibration across cancer stage groups, especially major miscalibration in stage III and IV tumours, such as a slope for the neural network of 0.126 (95% confidence interval 0.005 to 0.247) in stage IV tumours (see supplementary table 9). Overall decision curves showed that when accounting for competing risks, net benefit was generally better for the regression models, and the neural network had the lowest clinical utility; when not accounting for competing risks, the regression models had higher net benefit across the threshold probabilities examined (fig 8). Lastly, the clinical utility of the machine learning models was variable across tumour stages, with null or negative net benefit compared with the "treat all" scenario for stage IV tumours (see supplementary figure 6).

Decision curves to assess clinical utility (net benefit) of using each model. Top plot accounts for the competing risk of other cause mortality. Bottom plot does not account for competing risks

Table 5 illustrates the predictions obtained using the Cox and competing risks regression models for different sample scenarios. When relevant, these are compared with predictions for the same clinical scenarios from PREDICT Breast and the Adjutorium model (obtained using their web calculators: https://breast.predict.nhs.uk/ and https://adjutorium-breastcancer.herokuapp.com).

Risk predictions from Cox and competing risks regression models developed in this study for illustrative clinical scenarios, compared where relevant with PREDICT and Adjutorium*

This study developed and evaluated four models to estimate 10 year risk of breast cancer death after diagnosis of invasive breast cancer of any stage. Although the regression approaches yielded models that discriminated well and were associated with favourable net benefit overall, the machine learning approaches yielded models that performed less uniformly. For example, the XGBoost and neural network models were associated with negative net benefit at some thresholds in stage I tumours, were miscalibrated in stage III and IV tumours, and exhibited complex miscalibration across the spectrum of predicted risks.

Study strengths include the use of linked primary and secondary healthcare datasets for case ascertainment, identification of clinical diagnoses using accurately coded data, and avoidance of selection and recall biases. Use of centralised national mortality registries was beneficial for ascertainment of the endpoint and competing events. Our methodology enabled the adaptation of machine learning models to handle time-to-event data with competing risks and inclusion of multiple imputation so that all models benefitted from maximal available information, and the internal-external cross validation framework28 permitted robust assessment of model performance and heterogeneity across time, place, and population groups.

See more here:
Development and internal-external validation of statistical and ... - The BMJ

Decoding the Quant Market: A Guide to Machine Learning in Trading – Rebellion Research


In the ever-changing world of finance and trading, the search for a competitive edge has been a constant driver of innovation. Over the last few decades, the field of quantitative trading has emerged as a powerful force, pushing the boundaries of what is possible and reshaping the way we approach the market. At the heart of this transformation lies the fusion of cutting-edge technology, data-driven insights, and the unwavering curiosity of the human mind. It is this intersection of disciplines that forms the foundation for Decoding the Quant Market: A Guide to Machine Learning in Trading.

In this book, I aim to share my experiences and insights, offering a comprehensive guide to navigating the world of machine learning in quantitative trading. The journey begins with a foundational understanding of the core principles, theories, and algorithms that have shaped the field. From there, we delve into the practical applications of these techniques, exploring real-world examples and case studies that illustrate the power of machine learning in trading.

Decoding the Quant Market is designed to be accessible to readers from diverse backgrounds, whether they are seasoned professionals or newcomers to the fields of finance and technology. By combining theoretical knowledge with practical insights and examples, this book aims to provide a well-rounded understanding of the complex world of machine learning in trading.

Amazon.com: Decoding the Quant Market: A Guide to Machine Learning in Trading eBook : Marti, Gautier: Kindle Store

See the article here:
Decoding the Quant Market: A Guide to Machine Learning in Trading - Rebellion Research