Archive for the ‘Machine Learning’ Category

Putting the AI in NIA: New opportunities in artificial intelligence – National Institute on Aging

Acknowledgments: Many thanks to the NIA AI Working Group members for their contributions to this blog post.

Artificial intelligence (AI), the science of computer systems that can mimic human-like thinking and decision-making processes, has continued to evolve since our 2022 blog on this topic. With that growth comes added fascination with AI's possibilities and caution about its potential pitfalls.

Beyond the headlines, the aging science community is most excited about how AI and its related field of machine learning (ML) can turbocharge tools and models to accelerate research in Alzheimer's disease and related dementias, as well as other complex health challenges.

As NIA continues to expand its portfolio of AI/ML initiatives, be sure to check out our latest funding opportunity on multi-scale computational models in aging and Alzheimer's (RFA-AG-25-016), with an application deadline of June 13, 2024. This RFA encompasses a variety of computational approaches, such as mathematical and computational modeling, image analysis, AI, and ML, to better understand aging processes and Alzheimer's and related dementias across molecules, cells, and cellular networks, and how they affect cognition and behavior.

If you're interested in learning more, the NIH Center for Alzheimer's and Related Dementias (CARD) has numerous training opportunities, open-access resources, and tools to help investigators take advantage of AI and ML capabilities. For example, GenoML, an open-source project created by CARD staff and collaborators, offers a streamlined approach to machine learning in genomics and has been downloaded more than 15,000 times since its launch.

NIA also participates in broad efforts to advance cutting-edge AI research in partnership with other federal and international funders through programs such as:

NIA recognizes the transformative potential of AI in analyzing complex datasets, accelerating the understanding of Alzheimer's pathology, and identifying novel treatment avenues. Together, we hope these advanced tools and methods will help us better understand the aging process and find a cure for dementia and other age-related diseases.

To be a part of the next chapter, apply for the latest multi-scale computational models in aging and Alzheimer's funding opportunity by June 13. To learn more, visit the NIA AI page. As always, we invite comments below!


Uncertainty-aware deep learning for trustworthy prediction of long-term outcome after endovascular thrombectomy … – Nature.com


AI Engineer Salary: The Lucrative World of AI Engineering – Simplilearn

A few decades ago, the term Artificial Intelligence was reserved for scientific circles and tech enthusiasts who wanted to sound cool. But ever since its coining in 1955, AI has only grown in popularity. Today, you wouldn't find a technology magazine that doesn't mention artificial intelligence in every other paragraph.


An AI Engineer is a professional skilled in developing, programming, and implementing artificial intelligence (AI) systems and applications. Their expertise lies in utilizing algorithms, data sets, and machine learning (ML) principles to create intelligent systems that perform tasks typically requiring human intelligence. These tasks may include problem-solving, decision-making, natural language processing, and understanding human speech.

AI Engineers work across various stages of AI project development, from conceptualizing and designing AI models to deploying and maintaining these systems in production environments. Their responsibilities often encompass:

AI Engineers typically have a strong foundation in computer science, mathematics, and statistics, with specialized knowledge in machine learning, deep learning, natural language processing, and computer vision. They must also be proficient in programming languages commonly used in AI, such as Python, and tools and frameworks like TensorFlow, PyTorch, and Keras.

Due to the interdisciplinary nature of AI, engineers often collaborate with data scientists, software engineers, and domain experts to develop solutions tailored to specific business needs or research objectives. The role requires continuous learning to keep up with the rapidly evolving field of artificial intelligence.

Before getting to the question at hand, we need to look at the top AI engineering job roles. Machine Learning (ML) Engineer, Data Scientist, Data Analyst, Computer Vision Engineer, Business Intelligence Developer, and Algorithm Engineer are just some of the many positions that fall under the umbrella of AI engineering. Each of these positions entails a different job profile, but, generally speaking, most AI engineers deal with designing and creating AI models. Everything from maintenance to performance supervision of the model is the responsibility of the AI engineer.

Most AI engineers come from a computer science background and have strong programming skills, a non-negotiable part of an AI engineer's position. Proficiency in Python and Object-Oriented Programming is highly desirable. But for an AI engineer, what is even more important than any particular programming language is programming aptitude. Since the whole point of an AI system is to work without human supervision, AI algorithms are very different from traditional code, so the AI engineer must be able to design algorithms that are adaptable and capable of evolving.

Other than programming, an AI engineer needs to be conversant in an assortment of disciplines like robotics, physics, and mathematics. Mathematical knowledge is especially crucial as linear algebra and statistics play a vital role in designing AI models.
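Linear algebra and statistics are at work even in the smallest model. As a minimal sketch (pure standard-library Python; the toy data, learning rate, and iteration count are invented for illustration), here is a line fit by gradient descent on mean squared error:

```python
# Fit y = w*x + b by gradient descent on mean squared error (MSE).
# Toy data generated from y = 2x + 1, so the expected fit is w≈2, b≈1.
xs = [0.0, 1.0, 2.0, 3.0, 4.0]
ys = [1.0, 3.0, 5.0, 7.0, 9.0]

w, b = 0.0, 0.0   # initial parameters
lr = 0.02         # learning rate, hand-picked for this toy data
n = len(xs)

for _ in range(5000):
    # Partial derivatives of MSE with respect to w and b
    grad_w = sum(2 * (w * x + b - y) * x for x, y in zip(xs, ys)) / n
    grad_b = sum(2 * (w * x + b - y) for x, y in zip(xs, ys)) / n
    # Step against the gradient
    w -= lr * grad_w
    b -= lr * grad_b

print(round(w, 2), round(b, 2))  # → 2.0 1.0
```

Frameworks like TensorFlow and PyTorch automate exactly these gradient computations at scale; the underlying calculus and linear algebra stay the same.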

Read More: Gaurav Tyagi's love for learning inspired him to upskill with our AI For Decision Making: Business Strategies And Applications course. Read about his journey and his experience with our course in his Simplilearn AI Program Review.

At the moment, AI engineering is one of the most lucrative career paths in the world. The AI job market has been growing at a phenomenal rate for some time now. The entry-level average annual AI engineer salary in India is around ₹10 lakh, which is significantly higher than the average salary of any other engineering graduate. At high-level positions, the AI engineer salary can be as high as ₹50 lakh.

AI engineers earn an average salary of well over $100,000 annually. According to Glassdoor, the national average salary is over $110,000, and salaries at the high end reach $150,000.

However, you must note that these figures can vary significantly based on several factors like:

Companies Hiring for Artificial Intelligence Engineers:

Companies and startups hiring in AI right now include IBM, Fractal.ai, JPMorgan, Intel, Oracle, and Microsoft.

City (India)    Average Salary (Annual)
Bangalore       ₹12,00,000
Hyderabad       ₹10,00,000
Mumbai          ₹15,00,000
Chennai         ₹8,00,000
Delhi           ₹12,00,000

The salary for AI professionals in India can vary based on a variety of factors, including experience, job role, industry, and location. However, here's an estimate of the AI salary based on experience in India:

It's important to note that these figures are just estimates and can vary based on individual circumstances. Additionally, the industry and location can also play a role in determining AI salaries, with industries such as finance, healthcare, and technology typically paying higher salaries and cities such as Bangalore, Mumbai, and Delhi generally paying higher salaries than other cities in India.

If you're interested in pursuing a career in Artificial Intelligence (AI), here are some steps that can help you get started:

By following these steps, you can build a successful career in AI and become a valuable contributor to the field.

The top 7 countries with the maximum opportunities for Artificial Intelligence (AI) Professionals are:

There are various positions that an AI engineer can take up, and an AI engineer's salary depends on the market demand for that job profile. Presently, ML engineers are in greater demand and hence command a relatively higher package than other AI engineers. Similarly, the greater the experience in artificial intelligence, the higher the salary companies will offer. Although you can become an AI engineer without a Master's degree, it is imperative that you keep updating and growing your skill set to remain competitive in the ever-evolving world of AI engineering.

There are a number of exciting and in-demand jobs in the field of artificial intelligence (AI). Here are some of the top AI jobs that you may want to consider:

As a machine learning engineer, you will be responsible for developing and implementing algorithms that enable computers to learn from data. This includes working with large data sets, designing and testing machine learning models, and tuning algorithms for efficient execution.
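To make the designing-and-testing loop concrete (a hypothetical, self-contained sketch in plain Python; the toy points and the 1-nearest-neighbour rule are chosen purely for illustration), a minimal train/evaluate cycle looks like this:

```python
# Minimal train/evaluate cycle: a 1-nearest-neighbour classifier on toy 2-D points.
# "Training" here is just memorising labelled examples; evaluation measures accuracy
# on held-out points — the basic loop an ML engineer automates at scale.

def predict(train, point):
    """Return the label of the training example closest to `point`
    (squared Euclidean distance)."""
    nearest = min(train, key=lambda ex: (ex[0] - point[0]) ** 2 + (ex[1] - point[1]) ** 2)
    return nearest[2]

# Labelled training examples: (x, y, label)
train = [(0, 0, "A"), (1, 0, "A"), (0, 1, "A"),
         (5, 5, "B"), (6, 5, "B"), (5, 6, "B")]
# Held-out test points with their true labels
test = [((0.5, 0.5), "A"), ((5.5, 5.5), "B"), ((6, 6), "B")]

correct = sum(predict(train, p) == label for p, label in test)
print(f"accuracy: {correct / len(test):.2f}")  # → accuracy: 1.00
```

Real systems swap the toy distance rule for a learned model and the handful of points for large datasets, but the train/evaluate structure is the same.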

Data scientists use their expertise in statistics, mathematics, and computer science to analyze complex data sets. They work with organizations to gain insights that can be used to improve decision-making.
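As a tiny illustration (a made-up example using Python's standard library; the metric and numbers are invented), summarising a measurement before and after a product change is the kind of analysis that feeds such decisions:

```python
import statistics

# Invented conversion-rate samples (%) before and after a site change.
before = [2.1, 2.4, 2.2, 2.0, 2.3]
after = [2.8, 2.6, 2.9, 2.7, 3.0]

# Basic descriptive statistics guide the decision: did the change help,
# and is the shift large relative to day-to-day variation?
print(f"before: mean={statistics.mean(before):.2f}, stdev={statistics.stdev(before):.3f}")
print(f"after:  mean={statistics.mean(after):.2f}, stdev={statistics.stdev(after):.3f}")
```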

As an AI researcher, you will be responsible for investigating and developing new artificial intelligence algorithms and applications. This includes conducting research, writing papers, and presenting your findings at conferences.

Software engineers develop the software that enables computers to function. This includes creating algorithms, testing code, and debugging programs.

Systems engineers design and oversee the implementation of complex systems. This includes planning and coordinating system development, ensuring compatibility between components, and troubleshooting issues.

Hardware engineers design and oversee the manufacture of computer hardware components. This includes circuit boards, processors, and memory devices.

Network engineers design and implement computer networks. This includes configuring networking equipment, developing network architectures, and troubleshooting network problems.

Database administrators maintain databases and ensure that data is stored securely and efficiently. This includes designing database structures, implementing security measures, and backing up data.

Information security analysts plan and implement security measures to protect computer networks and systems. This includes researching security threats, assessing risks, and developing countermeasures.

User experience designers create user interfaces that are both effective and efficient. This includes developing navigation schemes, designing graphical elements, and testing prototypes.

These are just a few of the many exciting and in-demand jobs in the field of artificial intelligence. With the right skills and experience, you can find a position that matches your interests and abilities.

Just as AI is transforming the business landscape, it is also opening up new opportunities in the recruiting sphere. Here are some of the top companies and recruiters who are hiring for AI roles:

These are just some of the top companies and recruiters who are hiring for AI roles. If you have the right skills and experience, don't hesitate to apply!

There are a few key things you can do to help boost your AI salary. First, focus on acquiring in-demand skills. One of the best ways to do this is to enroll in a top-rated certification program. Second, keep up with the latest industry trends and developments. Finally, consider pursuing management or leadership roles within your organization. By taking these steps, you can position yourself for success and earn a higher salary in the AI field.

Supercharge your career in AI and ML with Simplilearn's comprehensive courses. Gain the skills and knowledge to transform industries and unleash your true potential. Enroll now and unlock limitless possibilities!

Even as you read this article, the demand for AI is booming across the globe. AI engineer salaries will keep rising as industries like tech, financial services, and medical research turn to artificial intelligence. As more global brands like Google and Nvidia dive deeper into Artificial Intelligence (AI), the demand and the salaries for AI engineers will only go upwards in 2024 and the decades to follow. Even government agencies in many developed and developing nations will open up AI engineer positions as they realize the enormous impact AI can have on the defense and governance sector.

Given the current pandemic scenario, job hunting may be better left until the dawn of next year. The time you have right now will be far better utilized in upgrading your AI repertoire.

Unlike most other fields, the AI of tomorrow will look nothing like the AI of today. It is evolving at breathtaking speed, so to keep your Artificial Intelligence (AI) skills relevant to current market needs, you must keep upgrading them. If you wish to get a step closer to these lucrative salaries, sharpen your AI skills with the world-class Artificial Intelligence Engineer program, and, before you know it, you will be standing in the world of AI engineers!

The salary of an AI Engineer in India can range from ₹8 lakh to ₹50 lakh annually.

The starting salary for an AI Engineer in India is around ₹8 lakh annually.

₹50 lakh is the highest salary for an AI Engineer in India.

As experience and seniority increase, the salary also increases.

IT is one of the highest-paying industries for AI Engineers.

Popular skills for AI Engineers to have are programming languages, data engineering, exploratory data analysis, deployment, modelling, and security.

The average Artificial Intelligence Engineer salary in the US is around $100k annually.

The top 5 Artificial Intelligence jobs in the US are Machine Learning Engineer, Data Scientist, Business Intelligence Developer, Research Scientist, and Big Data Engineer/Architect.

The lowest salary for an AI Engineer in the US is around $100k annually.

The highest salary can range from $150k to over $200k annually.


Multimodal artificial intelligence-based pathogenomics improves survival prediction in oral squamous cell carcinoma … – Nature.com


Wang, Z., Li, R., Wang, M. & Li, A. Gpdbn: Deep bilinear network integrating both genomic data and pathological images for breast cancer prognosis prediction. Bioinformatics 27, 29632970 (2021).

Article Google Scholar

Subramanian, V., Syeda-Mahmood, T., & Do, M. N. Multimodal fusion using sparse cca for breast cancer survival prediction. In Proceedings of IEEE 18th International Symposium on Biomedical Imaging (ISBI).14291432 (2021).

Mai, S., Hu, H., & Xing, S. Modality to modality translation: An adversarial representation learning and graph fusion network for multimodal fusion. In Proceedings of the AAAI Conference on Artificial Intelligence 164172 (2020).

Mobadersany, P. et al. Predicting cancer outcomes from histology and genomics. Predicting cancer outcomes from histology and genomics using convolutional networks. Proc. Natl. Acad. Sci. 115, 29702979 (2018).

Article ADS Google Scholar

Wang, C. et al. A cancer survival prediction method based on graph convolutional network. IEEE Trans. Nanobiosci. 19, 117126 (2020).

Article Google Scholar

Zadeh, A., Chen, M., Poria, S., Cambria, E., & Morency, L. P Tensor fusion network for multimodal sentiment analysis. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, 11031114 (2017).

Chen, R. J. et al. Pathomic fusion: An integrated framework for fusing histopathology and genomic features for cancer diagnosis and prognosis. IEEE Trans. Med. Imaging 41, 757770 (2022).

Article PubMed Central PubMed Google Scholar

Kim, J. H., On, K. W., Lim, W., Kim, J., Ha, J. W., & Zhang, B. T. Hadamard product for low-rank bilinear pooling. In Proceedings of International Conference on Learning Representations, 114 (2017)

Liu, Z., Shen, Y., Lakshminarasimhan, V. B., Liang, P. P., Zadeh, A., & Morency, L. P. Efficient low-rank multimodal fusion with modality-specific factors. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics. 22472256 (2021)

Li, R., Wu, X., Li, A. & Wang, M. Hfbsurv: Hierarchical multimodal fusion with factorized bilinear models for cancer survival prediction. Bioinformatics 38, 25872594 (2022).

Article CAS PubMed Central PubMed Google Scholar

Original post:
Multimodal artificial intelligence-based pathogenomics improves survival prediction in oral squamous cell carcinoma ... - Nature.com

Northrop Grumman Partners to Advance Deep Sensing for the US Army | Northrop Grumman – Northrop Grumman Newsroom

The TITAN ground system solution will provide multi-domain integrated data directly to the front lines

AZUSA, Calif., March 7, 2024 – Northrop Grumman Corporation (NYSE: NOC) is partnering with Palantir USG, Inc. on the newly awarded Tactical Intelligence Targeting Access Node (TITAN) ground system for the U.S. Army. The program supports one of the Army's key modernization imperatives by using artificial intelligence (AI) and machine learning (ML) to enhance the automation of target recognition and geolocation and to integrate data from multiple sensors, reducing sensor-to-shooter timelines.

Northrop Grumman will partner to:

The TITAN ground system will enable faster decision making on the front lines by providing actionable intelligence to reduce sensor-to-shooter timelines and maximize the effectiveness of long-range fires. (Photo Credit: Palantir)

Expert:

Aaron Dann, vice president, strategic force programs, Northrop Grumman: "Northrop Grumman's extensive experience in large-scale system integration will help enable mission success and provide information superiority for our warfighters in complex operating environments. Our work on TITAN continues our long history of supporting our nation's need for actionable intelligence when and where it matters most."

Details:

TITAN is a ground system with access to space, high-altitude, aerial, and terrestrial sensors, providing actionable targeting information for enhanced mission command. TITAN will enable the Army to fuse, correlate, and integrate intelligence data from a rapidly expanding set of sensors, giving operational forces a full picture of their surroundings. This robust capability allows the real-time decision making that will substantially increase the accuracy, precision, and effects of long-range precision fires.
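The fusion step described above can be illustrated in miniature. The sketch below is a generic inverse-variance weighting example of combining noisy position estimates from independent sensors; the sensor values and the `fuse_estimates` helper are hypothetical, and this is not a description of TITAN's actual processing.

```python
# Generic multi-sensor fusion illustration (not TITAN's algorithm):
# combine noisy geolocation estimates by inverse-variance weighting,
# so more precise sensors contribute more to the fused position fix.

def fuse_estimates(estimates):
    """estimates: list of (lat, lon, variance) tuples from independent sensors.
    Returns the inverse-variance-weighted (lat, lon) and the fused variance."""
    weights = [1.0 / var for _, _, var in estimates]
    total = sum(weights)
    lat = sum(w * e[0] for w, e in zip(weights, estimates)) / total
    lon = sum(w * e[1] for w, e in zip(weights, estimates)) / total
    return lat, lon, 1.0 / total  # fused variance shrinks as sensors are added

# Three hypothetical sensors report the same target with different precision:
readings = [
    (34.13, -117.91, 0.04),  # aerial sensor, moderate noise
    (34.14, -117.90, 0.01),  # space-based sensor, low noise
    (34.10, -117.95, 0.25),  # terrestrial sensor, high noise
]
lat, lon, var = fuse_estimates(readings)
print(round(lat, 3), round(lon, 3), round(var, 4))
```

The fused estimate sits closest to the low-noise sensor's report, and the fused variance is smaller than any single sensor's, which is the basic payoff of integrating multiple sensors rather than relying on one.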

Northrop Grumman is a leading global aerospace and defense technology company. Our pioneering solutions equip our customers with the capabilities they need to connect and protect the world, and push the boundaries of human exploration across the universe. Driven by a shared purpose to solve our customers' toughest problems, our employees define possible every day.

The views and conclusions contained herein are those of the authors and should not be interpreted as necessarily representing the official policies or endorsements, either expressed or implied, of the U.S. Government.

See more here:
Northrop Grumman Partners to Advance Deep Sensing for the US Army | Northrop Grumman - Northrop Grumman Newsroom