Archive for the ‘Machine Learning’ Category

Machine Learning Stocks to Buy That Are Millionaire-Makers: May – InvestorPlace

Source: Wright Studio / Shutterstock.com

The next phase of technology has been established: machine learning and AI will revolutionize the world for the better. Although these stocks might seem to be trading in a bubble, investors need to keep a discerning, long-term view of these disruptive, emerging technologies. One way or another, AI will grow into a secular movement that nearly every industry, if not every company, in the world will incorporate to increase productivity and efficiency.

Of course, anxiety about an AI bubble is not unwarranted. Building a well-diversified portfolio of the right stocks is crucial to avoiding major drawdowns. Just because a company mentions AI doesn't mean it instantly becomes a good investment. We've already seen this with pullbacks in industries like EVs and fintech. So, if you want machine learning exposure in your portfolio, consider these three machine learning stocks to buy and thank us in five or ten years.

Source: Ascannio / Shutterstock.com

Palantir (NYSE:PLTR) went from a meme stock to a legitimate business, earning hundreds of millions of dollars in profit each year. The stock is trading right at the average analyst price target of $21.45 and has a Street-high price target of $35.00, which represents more than 60% upside from the current price.

This stock has been polarizing on Wall Street since its direct listing debut in September 2020. While the first few years were a roller-coaster ride for investors, the stock is earning legitimate backing through its machine-learning-integrated production deployment infrastructure. Additionally, the hype doesn't get any more legitimate than Stanley Druckenmiller, who disclosed that he bought nearly 770,000 shares in the most recent quarter! For those who don't know him, Druckenmiller has long supported the ML revolution, with NVIDIA (NASDAQ:NVDA) being his most recent win during its massive rally over the past year.

The problem with Palantir has always been its valuation. Currently, shares trade at 21x sales and 65x forward earnings. Nonetheless, growth prospects are looking strong now, with revenue growing at a five-year compound annual growth rate (CAGR) of 12% and a three-year CAGR of 21%. As multiples begin to compress, investors should consider Palantir to be a legitimate money-making contender in the ML space.

Baidu (NASDAQ:BIDU) is a Chinese technology company that recently amassed over 200 million users on its new Ernie AI chatbot. This year, the stock is down by about 4.0% as Chinese stocks have lagged the broader rally in US equities. Nonetheless, Wall Street has maintained an average analyst price target of $153.36, about 40% higher than the current price.

Baidu recently made headlines after reports that it was interested in partnering with Tesla (NASDAQ:TSLA) to use its robotaxis in China. As China looks to get its hands on some for immediate rollout, investors should keep their eyes peeled for the unveiling of the CyberCabs in America this August. Not only could this become one of the strongest new revenue channels for both companies, but Baidu's race for first-mover advantage could solidify it as a leader in the Chinese automobile space.

As with many Chinese ADR stocks, the multiples for BIDU are low. For example, its P/E ratio of 9.79x sits 25% below its sector's median! On top of such a discounted valuation, Baidu has maintained a strong 10-year revenue CAGR of 14%. Baidu looks like a bargain for investors who can tolerate the risk that comes with Chinese stocks.

Micron Technology (NASDAQ:MU) is an American chipmaker experiencing a major surge in demand due to AI and machine learning technology. Analysts are bullish on MU, with 28 of 31 recommendations in May coming in as a Buy or Strong Buy rating. The average analyst price target is $145.52, nearly 15% higher than the current price.

This chipmaker has already hit new all-time highs this month and is seeing revitalized product demand. The growth potential is largely attributed to Micron being one of only three companies in the world that make DRAM memory chips, which let systems hold massive amounts of data in working memory and thereby help accelerate the training of AI and machine learning models. DRAM chips accounted for 71% of Micron's revenue as of Q2 2024, which bodes well for the stock's upward momentum.

Usually, when a stock trades at all-time highs, its valuation also stretches. That's not exactly true for Micron, as shares are trading at just 7.5x sales and 17x forward earnings. As revenue growth accelerates, Micron sticks out as one of the more under-the-radar ways to gain exposure to AI and potentially join the million-dollar club.

On the date of publication, Ian Hartana and Vayun Chugh did not hold (either directly or indirectly) any positions in the securities mentioned in this article. The opinions expressed in this article are those of the writers, subject to the InvestorPlace.com Publishing Guidelines.

Chandler Capital is the work of Ian Hartana and Vayun Chugh. Ian Hartana and Vayun Chugh are both self-taught investors whose work has been featured in Seeking Alpha. Their research primarily revolves around GARP stocks with a long-term investment perspective encompassing diverse sectors such as technology, energy, and healthcare.


AI-Driven Robotics and Quality Control: Transforming Electronics Assembly and Manufacturing – Robotics and Automation News

In the ever-evolving landscape of manufacturing, especially electronics assembly, AI-driven robotics is transforming the nature of quality control.

By using deep learning, these systems can detect patterns and defects that humans cannot, which provides ample opportunity for companies to leap ahead of their competition.

As AI continues to integrate into manufacturing, its role in ensuring stringent quality standards, enhancing inspection capabilities, and maintaining compliance with IPC (Institute for Printed Circuits) standards becomes increasingly vital.

AI-driven robotics significantly enhance quality control by using advanced algorithms and machine learning to detect defects more accurately than human inspectors.

These systems can analyse large volumes of data in real time, identify patterns, and make decisions based on stringent quality standards. This is important in the automotive, pharmaceutical, and electronic assembly industries, where even minor defects can lead to substantial issues.

The AI algorithms behind modern quality control depend on machine learning and deep learning. Machine learning involves developing algorithms and data sets that allow AI to perform tasks without explicit instructions.

With machine learning, AI can learn from data and make decisions independently. Deep learning is a type of machine learning that uses layered neural networks to interpret complicated data, enabling AI to understand and learn from images, text, video, and audio.
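To make "layers of networks" concrete, here is a minimal sketch of a two-layer network's forward pass in NumPy. It is illustrative only: the weights are random rather than trained, and the "pixel intensities" are invented.

```python
# Illustrative sketch of a layered network: each layer transforms the
# output of the previous one. Weights here are random, not trained.
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(0.0, x)

# Hypothetical input: 4 pixel intensities from an inspection image.
x = np.array([0.2, 0.9, 0.1, 0.7])

W1 = rng.normal(size=(4, 3))  # layer 1: 4 inputs -> 3 hidden units
W2 = rng.normal(size=(3, 1))  # layer 2: 3 hidden units -> 1 output

hidden = relu(x @ W1)                       # first transformation
output = 1 / (1 + np.exp(-(hidden @ W2)))   # sigmoid -> a probability-like score
print(output)
```

Training would adjust `W1` and `W2` so that the output matches known labels; stacking many such layers is what makes the network "deep".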

Machine learning, and especially deep learning, is used to train AI on large data sets so that it can detect patterns and anomalies. This capability is then applied to quality control.

For example, in the electronics manufacturing industry, AI can be trained to detect soldering defects, missing components, and alignment issues. These algorithms continuously learn from new data, improving their accuracy over time.
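As a toy illustration of that training process, the sketch below fits a simple classifier on synthetic, hand-made "inspection features". Real systems learn from camera images with deep networks; the feature names and numbers here are invented for illustration.

```python
# Toy sketch: a classifier for solder-joint defects, trained on
# synthetic numeric features (real systems use images and deep nets).
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Invented features per joint: [solder volume, alignment offset].
good = rng.normal(loc=[1.0, 0.0], scale=0.1, size=(200, 2))
bad = rng.normal(loc=[0.5, 0.4], scale=0.1, size=(200, 2))
X = np.vstack([good, bad])
y = np.array([0] * 200 + [1] * 200)  # 0 = pass, 1 = defect

clf = LogisticRegression(max_iter=1000).fit(X, y)

# A joint with low solder volume and a large offset is flagged as a defect.
print(clf.predict([[0.55, 0.35]]))  # [1]
```

Feeding the model new labeled examples over time is what the article means by algorithms "continuously learning" and improving their accuracy.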

One of the significant advantages of AI-driven robotics is the ability to process and analyse data in real time. High-speed cameras and sensors capture detailed images and measurements of products as they move through the production line.

AI systems then quickly analyse this data, instantly identifying defects and deviations from quality standards. This immediate feedback allows for quick corrective actions, reducing the number of quality issues in manufacturing.

AI systems excel in pattern recognition, which is crucial for detecting defects that may not be obvious to human inspectors. For example, AI can identify subtle variations in texture, colour, or shape that could indicate a potential issue.

This is particularly important in industries like electronics and automotive manufacturing, where precision and consistency are absolutely vital. Pattern recognition capabilities ensure that even the smallest defects are detected and addressed promptly.
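One common way to catch such subtle deviations is unsupervised anomaly detection. The sketch below uses scikit-learn's IsolationForest on synthetic one-dimensional measurements; it is an illustrative stand-in for the image-based systems described above, not a real inspection pipeline.

```python
# Sketch: flagging subtle deviations with unsupervised anomaly detection.
# The "measurements" are synthetic; real systems analyse image data.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(1)
# 500 parts whose nominal dimension is 10.0 mm with tiny process noise.
normal_parts = rng.normal(loc=10.0, scale=0.05, size=(500, 1))

detector = IsolationForest(contamination=0.01, random_state=1).fit(normal_parts)

# A part only 2% off nominal is far outside the learned distribution.
print(detector.predict([[10.2]]))  # [-1] means anomaly
print(detector.predict([[10.0]]))  # [1] means normal
```

The detector never sees labeled defects; it learns what "normal" looks like and flags anything that deviates, which is exactly the pattern-recognition advantage the article describes.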

IPC is a trade association that standardises the assembly and production of electronic equipment and assemblies, and it is the bedrock behind the success of modern electronics manufacturing. AI robotics presents a number of significant opportunities to further enhance IPC standards moving forward:

The Future of Quality Control in Electronics Manufacturing: As AI-driven robotics evolve, their role in quality control will expand. Future advancements may include sophisticated machine learning models for predictive maintenance, reducing downtime and enhancing production efficiency.

As AI-driven robotics continue to evolve, their role in quality control within electronics manufacturing is ready for significant expansion in several different ways.

Combining Internet of Things (IoT) devices with AI-driven robotics will revolutionise quality control. IoT sensors can monitor production environments in real time, providing data that AI systems analyse to maintain optimal conditions.

Predictive maintenance, powered by this integration, will allow for timely repairs, minimising production disruptions.

AI systems that learn and adapt to new products and manufacturing methods offer greater flexibility and efficiency. These adaptive systems can adjust in real-time based on data analysis, ensuring quality control processes evolve with production innovations.

Advanced machine learning models will continually improve from new data, enhancing their defect detection and process optimisation capabilities.

Predictive analytics will become integral to quality control. By analysing historical and real-time data, AI can foresee potential issues, enabling proactive measures. This reduces unexpected downtime and optimises production schedules, leading to more efficient operations.

Future quality control will see increased collaboration between humans and robots. Collaborative robots (cobots) will handle repetitive or hazardous tasks, while humans focus on complex and creative quality control aspects.

This synergy will enhance productivity and job satisfaction, creating a more skilled workforce that can adapt to advanced technologies.

There can be no debate that AI-driven robotics is revolutionising the nature of quality control in manufacturing, especially for electronics assembly. These systems provide consistent, precise inspections and reduce human error to ensure strict adherence to IPC standards.

As AI technology continues to develop, the integration of IoT devices, adaptive quality control systems, and predictive analytics will further increase efficiency and reduce downtime.

Human-robot collaboration will also play a key role, combining the strengths of both to achieve higher productivity and quality standards.


Machine Learning vs. Deep Learning: What’s the Difference? – Gizmodo

Artificial intelligence is everywhere these days, but the fundamentals of how this influential new technology works can be difficult to wrap your head around. Two of the most important fields in AI development are machine learning and its sub-field, deep learning, although the terms are sometimes used interchangeably, leading to a certain amount of confusion. Here's a quick explanation of what these two important disciplines are, and how they're contributing to the evolution of automation.


Proponents of artificial intelligence say they hope to someday create a machine that can think for itself. The human brain is a magnificent instrument, capable of making computations that far outstrip the capacity of any currently existing machine. Software engineers involved in AI development hope to eventually make a machine that can do everything a human can do intellectually but can also surpass it. Currently, the applications of AI in business and government largely amount to predictive algorithms, the kind that suggest your next song on Spotify or try to sell you a similar product to the one you bought on Amazon last week. However, AI evangelists believe that the technology will, eventually, be able to reason and make decisions that are much more complicated. This is where ML and DL come in.

Machine learning (or ML) is a broad category of artificial intelligence that refers to the process by which software programs are taught how to make predictions or decisions. One IBM engineer, Jeff Crume, explains machine learning as "a very sophisticated form of statistical analysis." According to Crume, this analysis allows machines to make predictions or decisions based on data. "The more information that is fed into the system, the more it's able to give us accurate predictions," he says.

Unlike general programming, where a machine is engineered to complete a very specific task, machine learning revolves around training an algorithm to identify patterns in data by itself. As previously stated, machine learning encompasses a broad variety of activities.

Deep learning is machine learning. It is one of the previously mentioned sub-categories of machine learning that, like other forms of ML, focuses on teaching AI to "think." Unlike some other forms of machine learning, DL seeks to let algorithms do much of their work on their own. DL is fueled by mathematical models known as artificial neural networks (ANNs). These networks seek to emulate the processes that naturally occur within the human brain: things like decision-making and pattern identification.

One of the biggest differences between deep learning and other forms of machine learning is the level of supervision that a machine is provided. In less complicated forms of ML, the computer is likely engaged in supervised learning, a process whereby a human helps the machine recognize patterns in labeled, structured data, thereby improving its ability to carry out predictive analysis.

Machine learning relies on huge amounts of training data. Such data is often compiled by humans via data labeling (many of those humans are not paid very well). Through this process, a training dataset is built, which can then be fed into the AI algorithm and used to teach it to identify patterns. For instance, if a company were training an algorithm to recognize a specific brand of car in photos, it would feed the algorithm huge tranches of photos of that car model that had been manually labeled by human staff. A testing dataset is also created to measure the accuracy of the machine's predictive powers once it has been trained.
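The workflow just described, a labeled training dataset plus a held-out testing dataset, can be sketched in a few lines of scikit-learn. This uses the library's small bundled digits dataset rather than car photos, purely as an illustration.

```python
# Sketch of the labeled-data workflow: the training set teaches the
# model, and a held-out testing set measures its predictive accuracy.
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = load_digits(return_X_y=True)  # small labeled image dataset

# Reserve 25% of the labeled data purely for testing.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0
)

model = LogisticRegression(max_iter=2000).fit(X_train, y_train)
print(f"test accuracy: {model.score(X_test, y_test):.2f}")
```

The key point is that the testing images were never seen during training, so the accuracy score reflects how well the learned patterns generalize.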

When it comes to DL, meanwhile, a machine engages in a process called unsupervised learning. Unsupervised learning involves a machine using its neural network to identify patterns in what is called unstructured or raw data, which is data that hasn't yet been labeled or organized into a database. Companies can use automated algorithms to sift through swaths of unorganized data and thereby avoid large amounts of human labor.
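A minimal sketch of unsupervised learning: k-means clustering finds structure in unlabeled points with no human-provided labels at all. K-means is a simple stand-in here; deep learning does this kind of pattern discovery with neural networks.

```python
# Sketch of unsupervised learning: the algorithm groups raw, unlabeled
# data by itself, with no labels supplied by a human.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
# Raw data that happens to contain two groups, but carries no labels.
data = np.vstack([
    rng.normal(loc=0.0, scale=0.3, size=(100, 2)),
    rng.normal(loc=3.0, scale=0.3, size=(100, 2)),
])

kmeans = KMeans(n_clusters=2, n_init=10, random_state=0).fit(data)
# The two discovered cluster centers, found without any labeling work.
print(kmeans.cluster_centers_.round(1))
```

The separation emerges purely from the structure of the data, which is why unsupervised methods can save the human labeling labor described above.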

ANNs are made up of what are called nodes. According to MIT, one ANN can have thousands or even millions of nodes. These nodes can be a little complicated, but the shorthand explanation is that they, like the nodes in the human brain, relay and process information. In a neural network, nodes are arranged in an organized form referred to as layers. Thus, deep learning networks involve multiple layers of nodes. Information moves through the network and interacts with its various environs, which contributes to the machine's decision-making process when subjected to a human prompt.

Another key concept in ANNs is the weight, which one commentator compares to the synapses in a human brain. Weights, which are just numerical values, are distributed throughout an AI's neural network and help determine the ultimate outcome of that AI system's final output. Weights are informational inputs that help calibrate a neural network so that it can make decisions. MIT's deep dive on neural networks explains it thusly:

To each of its incoming connections, a node will assign a number known as a "weight." When the network is active, the node receives a different data item (a different number) over each of its connections and multiplies it by the associated weight. It then adds the resulting products together, yielding a single number. If that number is below a threshold value, the node passes no data to the next layer. If the number exceeds the threshold value, the node "fires," which in today's neural nets generally means sending the number (the sum of the weighted inputs) along all its outgoing connections.
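The node behavior MIT describes can be transcribed almost directly into code. This is a deliberately simplified sketch of a single node; real networks use millions of them and usually smooth activation functions rather than hard thresholds.

```python
# A direct transcription of the quoted node behavior: multiply each
# incoming value by its weight, sum the products, and "fire" only if
# the sum exceeds the threshold.
def node_output(inputs, weights, threshold):
    weighted_sum = sum(x * w for x, w in zip(inputs, weights))
    # Below the threshold the node passes nothing; above it, it sends
    # the weighted sum along its outgoing connections.
    return weighted_sum if weighted_sum > threshold else 0.0

print(node_output([0.5, 0.8], [1.0, 1.0], threshold=2.0))  # 0.0 (does not fire)
print(node_output([0.5, 0.8], [2.0, 2.0], threshold=2.0))  # 2.6 (fires)
```

Note how the same inputs produce nothing or a signal depending only on the weights, which is why training (adjusting weights) changes what the network computes.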

In short: neural networks are structured to help an algorithm come to its own conclusions about data that has been fed to it. Based on its programming, the algorithm can identify helpful connections in large tranches of data, helping humans to draw their own conclusions based on its analysis.

Machine and deep learning help train machines to carry out predictive and interpretive activities that were previously only the domain of humans. This can have a lot of upsides, but the obvious downside is that these machines can (and, let's be honest, will) inevitably be used for nefarious, not just helpful, stuff: things like government and private surveillance systems, and the continued automation of military and defense activity. But they're also, obviously, useful for consumer suggestions or coding and, at their best, medical and health research. Like any other tool, whether artificial intelligence has a good or bad impact on the world largely depends on who is using it.


Slack has been using data from your chats to train its machine learning models – Engadget

Slack trains machine-learning models on user messages, files and other content without explicit permission. The training is opt-out, meaning your private data will be leeched by default. Making matters worse, you'll have to ask your organization's Slack admin (human resources, IT, etc.) to email the company to ask it to stop. (You can't do it yourself.) Welcome to the dark side of the new AI training data gold rush.

Corey Quinn, an executive at DuckBill Group, spotted the policy in a blurb in Slack's Privacy Principles and posted about it on X (via PCMag). The section reads (emphasis ours): "To develop AI/ML models, our systems analyze Customer Data (e.g. messages, content, and files) submitted to Slack as well as Other Information (including usage information) as defined in our Privacy Policy and in your customer agreement."

In response to concerns over the practice, Slack published a blog post on Friday evening to clarify how its customers' data is used. According to the company, customer data is not used to train any of Slack's generative AI products (it relies on third-party LLMs for those) but is fed to its machine learning models for features like channel and emoji recommendations and search results. For those applications, the post says, Slack's traditional ML models use "de-identified, aggregate data" and do not access message content in DMs, private channels, or public channels. That data may include things like message timestamps and the number of interactions between users.

A Salesforce spokesperson reiterated this in a statement to Engadget, also saying that "we do not build or train these models in such a way that they could learn, memorize, or be able to reproduce customer data."

I'm sorry Slack, you're doing fucking WHAT with user DMs, messages, files, etc? I'm positive I'm not reading this correctly. pic.twitter.com/6ORZNS2RxC

Corey Quinn (@QuinnyPig) May 16, 2024

The opt-out process requires you to do all the work to protect your data. According to the privacy notice: "To opt out, please have your Org or Workspace Owners or Primary Owner contact our Customer Experience team at feedback@slack.com with your Workspace/Org URL and the subject line 'Slack Global model opt-out request.' We will process your request and respond once the opt out has been completed."

The company replied to Quinn's message on X: "To clarify, Slack has platform-level machine-learning models for things like channel and emoji recommendations and search results. And yes, customers can exclude their data from helping train those (non-generative) ML models."

How long ago the Salesforce-owned company snuck this tidbit into its terms is unclear. It's misleading, at best, to say customers can opt out when "customers" doesn't include employees working within an organization. They have to ask whoever handles Slack access at their business to do that, and I hope they will oblige.

Inconsistencies in Slack's privacy policies add to the confusion. One section states, "When developing AI/ML models or otherwise analyzing Customer Data, Slack can't access the underlying content. We have various technical measures preventing this from occurring." However, the machine-learning model training policy seemingly contradicts this statement, leaving plenty of room for confusion.

In addition, Slack's webpage marketing its premium generative AI tools reads, "Work without worry. Your data is your data. We don't use it to train Slack AI. Everything runs on Slack's secure infrastructure, meeting the same compliance standards as Slack itself."

In this case, the company is speaking of its premium generative AI tools, separate from the machine learning models it's training without explicit permission. However, as PCMag notes, implying that all of your data is safe from AI training is, at best, a highly misleading statement when the company apparently gets to pick and choose which AI models that statement covers.

Update, May 18 2024, 3:24 PM ET: This story has been updated to include new information from Slack, which published a blog post explaining its practices in response to the community's concerns.

Update, May 19 2024, 12:41 PM ET: This story and headline have been updated to reflect additional context provided by Slack about how it uses customer data.


Machine learning-based integration develops an immunogenic cell death-derived lncRNA signature for predicting … – Nature.com

Genetic characteristics and transcriptional changes in ICD-related genes in LUAD

A set of 34 ICD-related genes was identified through a large-scale meta-analysis11. The expression of the 34 ICD genes in LUAD samples and normal samples was analyzed first (Figure S1A); most of the ICD genes were significantly differentially expressed, except for ATG5, IL10, CD8A, and CD8B. Secondly, the location of ICD-related genes in the human genome was analyzed (Figure S1B). The variation of ICD-related genes in LUAD patients in the TCGA cohort was also assessed. The results showed that approximately 69.63% (188/270) of LUAD patients had mutations in ICD-related genes; the top 20 mutated ICD-related genes were displayed in the study, with TLR4 and NLRP3 mutated most frequently (Figure S1C and Figure S1D).

The study also performed GO enrichment analysis of ICD-related genes (Figure S1E). In terms of biological processes, the main enrichment was in various receptor activities; in terms of cellular components, in the cytolytic granule and inflammasome complex; and in terms of molecular functions, in interleukin-related activities. In addition, KEGG enrichment analysis showed that ICD-related genes were enriched in the NOD-like receptor signaling pathway, the Toll-like receptor signaling pathway, and necroptosis (Figure S1F).

A total of 1367 characteristic lncRNAs were selected by matching the training dataset with validation datasets for in-depth analysis. We employed consensus cluster analysis to partition the TCGA-LUAD dataset into two groups based on the high-expression and low-expression of ICD-related genes. Subsequently, 473 lncRNAs were identified by conducting differential expression analysis (Fig.2A and B). These lncRNAs were then compared with the 300 lncRNAs obtained by Pearson correlation analysis (Fig.2C) to identify 176 ICD-related lncRNAs (Fig.2D). As a result, 24 ICD-related lncRNAs were ultimately identified by univariate Cox regression analysis (Supplementary Table 2).

(A) Heatmap displaying 34 ICD gene expression profiles among normal and LUAD samples in the TCGA cohort. (B) The location of ICD-related genes in the human genome. (C, D) Single Nucleotide Polymorphism analysis of ICD-related genes in the TCGA cohort. (E) Bar plot displaying Gene Ontology analysis based on 34 ICD genes. (F) Bar plot displaying KEGG analysis based on 34 ICD genes.

A total of 24 ICD-related lncRNAs were input into a comprehensive machine-learning framework encompassing the 10 aforementioned methodologies for creating prognostic signatures. Figure 3A illustrates the resulting 101 prognostic models. The signature created by the combination of RSF + Ridge had the greatest mean C-index of 0.674, as determined by analyzing the training and test cohorts, and was identified as the ICDI signature (Fig.3A and B). The obtained equation is as follows (see Supplementary Table 3 for details):

$$\text{ICDIscore} = \min_{\beta} \Vert \beta x - y \Vert_{2}^{2} + \lambda \Vert \beta \Vert_{2}^{2}$$

(A) A total of 101 combinations of machine learning algorithms for the ICDI signature via a tenfold cross-validation framework based on the TCGA-LUAD cohort. The C-index of each signature was calculated across validation datasets, including the GSE29013, GSE30219, GSE31210, GSE3141, and GSE50081 cohorts. (B) Importance ranking of the 24 ICD-related lncRNAs in the RSF algorithm, and coefficients of the 19 lncRNAs finally enrolled in the ICDI signature via the Ridge algorithm. (C) Kaplan–Meier survival curves of OS between patients with high and low ICDI signature scores in the TCGA-LUAD, GSE29013, GSE30219, GSE31210, GSE3141, and GSE50081 cohorts. (D) Receiver operator characteristic (ROC) analysis for the ICDI signature in the TCGA-LUAD, GSE29013, GSE30219, GSE31210, GSE3141, and GSE50081 cohorts.

As the elastic net mixing parameter, $\alpha$ is limited to the range $[0, 1]$, and the penalty is defined as $\lambda\left(\frac{1-\alpha}{2}{\Vert \beta \Vert}_{2}^{2}+\alpha {\Vert \beta \Vert}_{1}\right)$; the Ridge case corresponds to $\alpha = 0$.
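For readers who want to see the ridge objective above in executable form, here is an illustrative NumPy sketch of the closed-form ridge solution on synthetic data. It is not the study's pipeline; the random matrix merely stands in for the lncRNA expression data.

```python
# Illustrative sketch of ridge regression: minimize ||Xb - y||^2 + lam*||b||^2.
# The closed-form solution is b = (X^T X + lam*I)^-1 X^T y.
import numpy as np

def ridge_fit(X, y, lam):
    n_features = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(n_features), X.T @ y)

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 5))  # synthetic stand-in for expression values
true_beta = np.array([1.0, -2.0, 0.5, 0.0, 3.0])
y = X @ true_beta + rng.normal(scale=0.1, size=100)

beta = ridge_fit(X, y, lam=1.0)
print(beta.round(1))  # shrunk estimates close to true_beta
```

The L2 penalty shrinks the coefficients toward zero, which is what stabilizes the lncRNA weights in a signature of this kind; setting `lam=0` would recover ordinary least squares.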

LUAD patients were categorized into a high-score group and a low-score group based on their ICDI score, with the median value as the cut-off point. Consistent with expectations, LUAD patients with low ICDI scores exhibited higher overall survival rates in the TCGA-LUAD, GSE29013, GSE30219, GSE31210, GSE3141, and GSE50081 datasets (Fig.3C).

The AUC values of the ICDI signature at 1, 2, 3, 4, and 5 years in the TCGA-LUAD cohort were estimated as 0.709, 0.678, 0.697, 0.716, and 0.660, respectively (Fig.3D), demonstrating that the ICDI signature has promising predictive value for LUAD patients. This was validated in the GSE30219 cohort (0.891, 0.758, 0.744, 0.700, and 0.716), GSE31210 cohort (0.750, 0.691, 0.653, 0.677, and 0.718), GSE3141 cohort (0.690, 0.716, 0.819, 0.801, and 0.729), GSE50081 cohort (0.685, 0.694, 0.712, 0.638, and 0.639), and GSE3141 cohort (0.639, 0.697, 0.794, 0.670, and 0.521) (Fig.3D). Owing to insufficient survival data, only the 2-, 3-, and 4-year AUC values were computed for the GSE29013 cohort; even so, it shows strong predictive capability (Fig.3D).

In addition, we compared the predictive value of the ICDI signature with that of other clinical variables (Fig.4A). The C-index of the ICDI signature was significantly higher than that of the other clinical variables, including staging, age, and gender.

(A) The C-index of the ICDI signature and other clinical characteristics in the TCGA-LUAD, GSE29013, GSE30219, GSE31210, GSE3141 and GSE50081 cohorts. (B) The C-index of the ICDI signature and other signatures developed in the TCGA-LUAD, GSE29013, GSE30219, GSE31210, GSE3141 and GSE50081 cohorts.

Gene expression analysis based on machine learning can be leveraged to predict disease outcomes, which in turn can facilitate early screening as well as research into new therapeutic modalities. Numerous predictive signatures have emerged in recent years. To compare the ICDI signature with published signatures, we searched for LUAD-related disease prediction model articles. After excluding articles with unclear prediction model formulas or missing corresponding gene expression data in the training and validation groups, 102 LUAD-related predictive signatures were enrolled (Supplementary Table 4). These signatures covered various biological processes, such as cuproptosis, ferroptosis, autophagy, epithelial-mesenchymal transition, acetylation, amino acid metabolism, anoikis, DNA repair, fatty acid metabolism, hypoxia, inflammation, N6-methyladenosine, mitochondrial homeostasis, and mTOR. Each was established in the TCGA-LUAD, GSE29013, GSE30219, GSE31210, GSE3141, and GSE50081 cohorts and compared with the C-index of the ICDI signature, which outperformed the majority of signatures in each cohort (Fig.4B).

To investigate the contribution of the ICDI signature in the LUAD TIME, we evaluated its correlation with immune infiltrating cells and immune-related processes. Based on the TIMER, CIBERSORT, quanTIseq, MCPcounter, xCell, and EPIC algorithms, the ICDI signature was correlated with most immune infiltrating cells, with a few exceptions (such as activated NK cells and CD8+ naive T cells) (Fig.5A). Based on the ssGSEA algorithm, the ICDI signature was significantly correlated with most immune-related processes (Fig.5B). Based on the ESTIMATE algorithm, the ICDI signature was negatively correlated with StromalScore, ImmuneScore, and ESTIMATEScore, and positively correlated with TumorPurity (Fig.5C), as expected.

(A) Heatmap displaying the correlation between the ICDI signature and 13 immune-related processes. (B) Heatmap displaying the correlation between the ICDI signature and immune infiltrating cells. (C) Box plot displaying the correlation between the ICDI signature and The ESTIMATE Immune Score, ImmuneScore, StromalScore, and TumorPurity. (D) Box plot displaying the correlation between the ICDI signature and immune modulators.

In addition, the study evaluated the relationship between the ICDI signature and known immune modulators (CYT, TLS, Davoli_IS, Roh_IS, Ayers_expIS, TIS, RIR, and TIDE) (Fig.5D). Most of the immune modulator values (CYT, TLS, Davoli_IS, Roh_IS, Ayers_expIS, and TIS) were significantly higher in the low-ICDI-score group, while the RIR values and TIDE score were significantly higher in the high-ICDI-score group, suggesting a higher potential for immunological escape (Fig.5D). All of this indicates that the ICDI signature is a potential immunotherapeutic biomarker.

To further investigate the potential of the ICDI signature as an immunotherapeutic biomarker, the study calculated ICDI scores for each immunotherapy cohort to appraise its predictive value. The findings indicated that patients with a low ICDI score were more likely to benefit from immunotherapy (Fig.6A). Receiver operating characteristic (ROC) analysis showed that the ICDI signature exhibited a consistent ability to predict the efficacy of immunotherapy-based treatment. This finding was further supported by analysis of the immunotherapy datasets Melanoma-GSE78220, STAD-PRJEB25780, and GBM-PRJNA482620, which yielded ROC values of 0.771, 0.671, and 0.723, respectively (Fig.6B).

(A) Box plot displaying the correlation between the ICDI signature and immunotherapy response in the immunotherapy dataset (Melanoma-GSE78220, STAD-PRJEB25780, and GBM-PRJNA482620). (B) ROC curves of ICDI signature to predict the benefits of immunotherapy in the immunotherapy dataset (Melanoma-GSE78220, STAD-PRJEB25780, and GBM-PRJNA482620). (C) Box plot displaying the correlation between the ICDI signature and chemotherapy drugs.

Chemotherapy resistance is a significant barrier to the effectiveness of chemotherapy and targeted therapy in treating advanced lung cancer. We therefore estimated the sensitivities of various chemotherapeutic drugs and compared them between ICDI groups. Individuals with low ICDI scores exhibited notably higher sensitivity to erlotinib, gefitinib, docetaxel, and paclitaxel, while there was no significant difference in sensitivity to cisplatin and 5-fluorouracil (Fig.6C). The study thus offers guidance on the administration of chemotherapeutic medications in individuals with LUAD.
