Investigation of the effectiveness of a classification method based on improved DAE feature extraction for hepatitis C …
In this subsection, we evaluate the feature extraction effect of the IDAE by conducting experiments on the Hepatitis C dataset under different configurations to test its generalization ability. We would like to investigate the following three questions:
How effective is IDAE in classifying the characteristics of hepatitis C?
If the depth of the neural network is increased, can IDAE mitigate the gradient explosion or gradient vanishing problem while improving the classification of hepatitis C disease?
Does an IDAE of the same depth tend to converge more easily than other encoders on the hepatitis C dataset?
Firstly, regarding public health importance, hepatitis C (HCV) is a global public health problem because chronic infection may lead to serious consequences such as cirrhosis and liver cancer, and because the disease is highly insidious, leaving a large number of cases undiagnosed. It is worth noting that, despite the wide application of traditional machine learning and deep learning algorithms in healthcare, especially in research on acute conditions such as cancer, in-depth exploration of chronic infectious diseases such as hepatitis C is still significantly lacking. In addition, the complex biological attributes of the hepatitis C virus and the significant individual differences among patients together give rise to multilevel nonlinear correlations among features. Therefore, applying deep learning methods to the hepatitis C dataset is not only an important way to validate the efficacy of such algorithms, but also an urgent research direction that needs to be pursued to fill the existing research gaps.
The data on people with hepatitis C used in this article were provided by the Helmholtz Center for Infection Research, the Institute of Clinical Chemistry at the Medical University of Hannover, and other research organizations. The collection includes demographic data, such as age, as well as laboratory test results for blood donors and hepatitis C patients. Examination of the dataset shows that the primary features are the quantities of different blood components and liver function markers, and that the only categorical feature is gender. Table 1 gives the precise definition of these fields.
This paper investigates the classification problem. Table 2 lists the description and sample size of the five main classification labels. In the subsequent training, to address the effect of class imbalance on classification performance, the data are first resampled with SMOTE32 and the model is then trained on the resampled samples, with a sample size of 400 for each class.
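As a concrete illustration of this resampling step, the following is a minimal sketch using the SMOTE implementation in the imbalanced-learn library. The exact resampling protocol of the study is not fully specified, so the sketch only oversamples classes that fall short of 400 samples, and the variable names are placeholders.

```python
# Minimal sketch of the SMOTE oversampling step (assumes a feature matrix X and
# a label vector y already loaded from the hepatitis C dataset; names are placeholders).
from collections import Counter
from imblearn.over_sampling import SMOTE

def balance_classes(X, y, per_class=400, random_state=0):
    counts = Counter(y)
    # Oversample only the classes that have fewer than `per_class` samples.
    targets = {label: max(per_class, n) for label, n in counts.items()}
    smote = SMOTE(sampling_strategy=targets, random_state=random_state)
    X_res, y_res = smote.fit_resample(X, y)
    return X_res, y_res
```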
The aim of this paper is to investigate whether IDAE can extract more representative and robust features. We have therefore chosen baseline models that include both traditional machine learning algorithms and various types of autoencoders, described in more detail below:
SVM: support vector machines achieve classification by constructing maximum-margin separating hyperplanes and use kernel functions to handle nonlinear problems, seeking decision boundaries that maximize the margin on the training data.
KNN: the K-nearest neighbors algorithm determines the class or predicted value of a new sample from its K nearest neighbors, found by computing the distance between the new sample and every sample in the training set.
RF: random forests use random feature selection and bootstrap sampling to construct multiple decision trees and combine their predictions, handling classification and regression problems effectively.
AE: the autoencoder is a neural network consisting of an encoder and a decoder that learns a compact, low-dimensional feature representation by reconstructing its training data, and is mainly used for dimensionality reduction, feature extraction, and generative learning tasks.
DAE: the denoising autoencoder is an autoencoder variant that excels at extracting features from noisy inputs; by reconstructing noise-corrupted inputs it reveals the underlying structure of the data and learns higher-level features, improving network robustness. These robust features benefit downstream tasks and help improve the model's generalization ability (a minimal sketch of this idea follows the list below).
SDAE: the stacked denoising autoencoder is a multilayer structure consisting of several denoising autoencoder layers connected in series; each layer applies noise to its input during training and learns to reconstruct the undisturbed original features from the noisy data, thus extracting increasingly abstract and robust feature representations layer by layer.
DIUDA: the main feature of the Dual Input Unsupervised Denoising Autoencoder is that it receives two different types of input data simultaneously; by fusing the two inputs for joint learning and feature extraction, it further enhances the model's generalization ability and its understanding of the intrinsic structure of the data.
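The PyTorch sketch below illustrates the denoising objective shared by the DAE-style baselines above: the input is corrupted with noise and the network is trained to reconstruct the clean input. It is an illustrative sketch rather than the authors' implementation; the input dimension and noise level are assumptions, and the layer sizes follow the 10-8-5 encoder configuration given in the experimental setup below.

```python
# Minimal sketch of a denoising autoencoder: corrupt the input, encode it, and
# reconstruct the *clean* input. Sizes and noise level are illustrative assumptions.
import torch
import torch.nn as nn

class DenoisingAutoencoder(nn.Module):
    def __init__(self, in_dim=12, hidden=(10, 8, 5)):
        super().__init__()
        dims = (in_dim, *hidden)
        enc, dec = [], []
        for i in range(len(dims) - 1):
            enc += [nn.Linear(dims[i], dims[i + 1]), nn.ReLU()]
        for i in reversed(range(len(dims) - 1)):
            dec += [nn.Linear(dims[i + 1], dims[i]), nn.ReLU()]
        self.encoder = nn.Sequential(*enc)
        self.decoder = nn.Sequential(*dec[:-1])  # no activation on the output layer

    def forward(self, x, noise_std=0.1):
        x_noisy = x + noise_std * torch.randn_like(x)  # corrupt the input
        z = self.encoder(x_noisy)                      # robust latent features
        return self.decoder(z), z

# Training minimizes the reconstruction error against the clean input, e.g.
# loss = nn.functional.mse_loss(model(x)[0], x)
```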
In this paper, 80% of the hepatitis C dataset is used for model training and the remaining 20% is used to test the model. Since the classes are unbalanced, the minority samples are resampled as described above to ensure that the classes are balanced. For all autoencoder-based methods, the learning rate is initialized to 0.001, the number of layers for both encoder and decoder is set to 3, the numbers of neurons in the encoder are 10, 8, and 5, the numbers of neurons in the decoder are 5, 8, and 10, and the MLP is initialized with 3 layers of 10, 8, and 5 neurons, respectively. All models are trained until convergence, with a maximum of 200 training epochs. The machine learning methods all use the sklearn library with the default hyperparameters of the corresponding algorithms.
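To make the evaluation protocol concrete, the following is a minimal sketch of the 80/20 split and the sklearn baselines with their default hyperparameters; the stratified split and the random seed are assumptions, not stated choices of the original study.

```python
# Sketch of the data split and machine-learning baselines described above.
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC
from sklearn.neighbors import KNeighborsClassifier
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score

def evaluate_baselines(X, y, seed=0):
    # 80% training / 20% test split (stratification is an assumption).
    X_tr, X_te, y_tr, y_te = train_test_split(
        X, y, test_size=0.2, stratify=y, random_state=seed)
    baselines = {
        "SVM": SVC(),                                   # RBF kernel by default
        "KNN": KNeighborsClassifier(),
        "RF": RandomForestClassifier(random_state=seed),
    }
    return {name: accuracy_score(y_te, model.fit(X_tr, y_tr).predict(X_te))
            for name, model in baselines.items()}
```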
To answer the first question, we classified the hepatitis C data after feature extraction with the improved denoising autoencoder and compared it against traditional machine learning algorithms (SVM, KNN, and random forest) as well as AE, DAE, SDAE, and DIUDA as baseline models. Each experiment was conducted 3 times to mitigate randomness. The average results for each metric are shown in Table 3. From the table, we can make the following observations.
The left figure shows the 3D t-SNE visualisation of the features extracted by DAE, and the right figure shows the 3D t-SNE visualisation of the features extracted by IDAE.
Firstly, the IDAE shows significant improvement on the hepatitis C classification task compared with the machine learning algorithms, and it also outperforms almost all machine learning baseline models on every evaluation metric. These results validate the effectiveness of our proposed improved denoising autoencoder on the hepatitis C dataset. Secondly, IDAE achieves higher accuracy on the hepatitis C dataset than the traditional autoencoders AE, DAE, SDAE and DIUDA, with numerical improvements of 0.011, 0.013, 0.010, and 0.007, respectively; for the other metrics, the AUC-ROC values improve by 0.11, 0.10, 0.06, and 0.04, and the F1 scores by 0.13, 0.11, 0.042, and 0.032. From Fig. 5, it can be seen that the IDAE produces better clustering and clearer class boundaries in the 3D feature representation. Both the experimental results and the visualisation analysis verify the advantages of the improved model in classification performance.
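A 3D t-SNE plot of the kind shown in Fig. 5 can be produced along the following lines; this is an assumed workflow (scikit-learn t-SNE plus matplotlib), not the authors' exact plotting code.

```python
# Sketch of a 3D t-SNE visualisation of encoder features, coloured by class.
# `latent_features` are the encoder outputs; `labels` are assumed to be integer class codes.
import matplotlib.pyplot as plt
from sklearn.manifold import TSNE

def plot_tsne_3d(latent_features, labels, title="t-SNE of extracted features"):
    emb = TSNE(n_components=3, random_state=0).fit_transform(latent_features)
    fig = plt.figure()
    ax = fig.add_subplot(projection="3d")
    ax.scatter(emb[:, 0], emb[:, 1], emb[:, 2], c=labels, s=10)
    ax.set_title(title)
    plt.show()
```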
Finally, SVM and RF outperform KNN in classifying the hepatitis C dataset because SVM can handle complex nonlinear relationships through the radial basis function (RBF) kernel, and ensemble algorithms such as RF combine multiple weak learners to achieve nonlinear classification indirectly. KNN, on the other hand, constructs decision boundaries from distance measures such as Euclidean distance, which cannot effectively capture and express the structure of complex nonlinear data distributions, leading to poorer classification results.
In summary, these results demonstrate the superiority of the improved denoising autoencoder for feature extraction on hepatitis C data. The behaviour of the machine learning baselines also indirectly confirms that the hepatitis C features may indeed involve complex nonlinear relationships.
To answer the second question, we analyze in this subsection how the different autoencoder algorithms perform at different depths. To run the experiments in a controlled setting, we used a fixed learning rate of 0.001, kept the number of neurons per layer constant, and set the number of layers in the encoder and decoder to {1, 2, 3, 4, 5, 6}. Each experiment was performed 3 times and the average results are shown in Fig. 6. We make the following observations:
Effects of various types of autoencoders at different depths.
Under the different layer configurations, the IDAE proposed in this study shows significant advantages over the traditional AE, DAE, SDAE and DIUDA in both feature extraction and classification performance. The experimental data show that the deeper the network, the greater the performance improvement: when the encoder reaches 6 layers, the accuracy improvement of IDAE is 0.112, 0.103, 0.041, and 0.021, the AUC-ROC improvement is 0.062, 0.042, 0.034, and 0.034, and the F1 improvement is 0.054, 0.051, 0.034, and 0.028 over AE, DAE, SDAE, and DIUDA, respectively.
It is worth noting that conventional autoencoders often encounter overfitting and vanishing gradients as the network deepens, so their performance on the hepatitis C classification task gradually plateaus or even declines slightly; this is largely due to the excessive complexity and vanishing-gradient problems caused by an overly deep network structure, which prevent the model from finding the optimal solution. The improved DAE introduces a residual neural network structure, which optimises the information flow between layers and addresses the vanishing-gradient problem by adding directly connected paths, and it balances model complexity and generalisation ability by flexibly expanding the depth and width of the network. The experimental results show that, with an appropriate increase in network depth, the improved DAE further improves classification performance, mitigates the risk of overfitting at the same depth, and outperforms the other autoencoders on the various metrics.
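The residual (skip) connection idea referred to above can be sketched as follows; this is a generic illustration of the technique, not the authors' exact IDAE architecture, and the dimension-matching projection is an assumption.

```python
# Minimal sketch of a residual encoder block: the block's input is added back
# to its output so gradients can flow through an identity path in deep networks.
import torch.nn as nn

class ResidualEncoderBlock(nn.Module):
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.transform = nn.Sequential(nn.Linear(in_dim, out_dim), nn.ReLU())
        # Project the input when the dimensions differ so the shapes match.
        self.skip = nn.Identity() if in_dim == out_dim else nn.Linear(in_dim, out_dim)

    def forward(self, x):
        return self.transform(x) + self.skip(x)   # y = F(x) + x
```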
To answer the third question, in this subsection we analyse the convergence speed of the different autoencoder algorithms. The experiments set the number of layers in the encoder and decoder to {3, 6}, with the same number of neurons in each layer, and each experiment was performed three times; the average results are shown in Fig. 7, from which we draw the following conclusions. The convergence speed of the IDAE is again better than that of the other autoencoders at both depths, and the contrast is more pronounced at the deeper setting. For conventional autoencoders, backpropagation through many layers (the chain rule) leads to vanishing gradients and overfitting, so their convergence tends to slow down; the IDAE, by contrast, adds direct paths between layers through techniques such as residual connections, which allow the signal to bypass the nonlinear transformations of some layers and propagate directly to later layers. This design effectively mitigates the vanishing-gradient problem as the network depth increases, allowing the network to maintain a high gradient flow during training and a fast convergence speed even at greater depth. In summary, when dealing with complex, high-dimensional data such as hepatitis C-related data, the IDAE can learn and extract features better as the depth increases, which improves training efficiency and overall performance.
Comparison of model convergence speed for different layers of autoencoders.