Archive for the ‘Machine Learning’ Category

Novel machine learning tool IDs early biomarkers of Parkinson’s |… – Parkinson’s News Today

A novel machine learning tool, called CRANK-MS, was able to identify, with high accuracy, people who would go on to develop Parkinson's disease, based on an analysis of blood molecules.

The algorithm identified several molecules that may serve as early biomarkers of Parkinson's.

These findings show the potential of artificial intelligence (AI) to improve healthcare, according to researchers from the University of New South Wales (UNSW), in Australia, who are developing the machine learning tool with colleagues from Boston University, in the U.S.

"The application of CRANK-MS to detect Parkinson's disease is just one example of how AI can improve the way we diagnose and monitor diseases," Diana Zhang, a study co-author from UNSW, said in a press release.

The study, "Interpretable Machine Learning on Metabolomics Data Reveals Biomarkers for Parkinson's Disease," was published in ACS Central Science.

Parkinson's disease is now diagnosed based on the symptoms a person is experiencing; there isn't a biological test that can definitively identify the disease. Many researchers are working to identify biomarkers of Parkinson's, which might be measured to help identify the neurodegenerative disorder or predict the risk of developing it.

Here, the international team of researchers used machine learning to analyze metabolomic data, that is, large-scale analyses of the levels of thousands of different molecules detected in patients' blood, to identify Parkinson's biomarkers.

The analysis used blood samples collected from the Spanish European Prospective Investigation into Cancer and Nutrition (EPIC). There were 39 samples from people who would go on to develop Parkinson's after up to 15 years of follow-up, and another 39 samples from people who did not develop the disorder over follow-up. The metabolomic makeup of the samples was assessed with a chemical analysis technique called mass spectrometry.

In the simplest terms, machine learning involves feeding a computer a bunch of data, alongside a set of goals and mathematical rules called algorithms. Based on the rules and algorithms, the computer determines, or "learns," how to make sense of the data.

This study specifically used a form of machine learning algorithm called a neural network. As the name implies, the algorithm is structured with a similar logical flow to how data is processed by nerve cells in the brain.

Machine learning has been used to analyze metabolomic data before. However, previous studies have generally not used wide-scale metabolomic data; instead, scientists selected specific markers of interest to include and left out data for the other markers.

Such limits were used because wide-scale metabolomic data typically covers thousands of different molecules, and there's a lot of variation, so-called noise, in the data. Prior machine learning algorithms have generally had poor results when using such noisy data, because it's hard for the computer to detect meaningful patterns amid all the random variation.

The researchers' new algorithm, CRANK-MS (short for Classification and Ranking Analysis using Neural network generates Knowledge from Mass Spectrometry), is better able to sort through the noise, and it was able to provide high-accuracy results using full metabolomic data.

"Typically, researchers using machine learning to examine correlations between metabolites and disease reduce the number of chemical features first, before they feed it into the algorithm," said W. Alexander Donald, PhD, a study co-author from UNSW, in Sydney.

"But here," Donald said, "we feed all the information into CRANK-MS without any data reduction right at the start. And from that, we can get the model prediction and identify which metabolites are driving the prediction the most, all in one step."

"Including all molecules available in the dataset means that if there are metabolites [molecules] which may potentially have been missed using conventional approaches, we can now pick those up," Donald said.
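
For readers who want a concrete picture of that workflow, the sketch below is a minimal illustration in Python, not the CRANK-MS code itself: it trains a small neural network on a full, unreduced metabolite matrix and then ranks every metabolite by permutation importance in the same pipeline. The data shapes, scikit-learn components, and importance method are stand-in assumptions.

```python
# Minimal sketch of the described workflow (NOT the CRANK-MS implementation):
# feed the full metabolite matrix to a neural network with no up-front feature
# reduction, then rank metabolites by how much shuffling each one hurts accuracy.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(0)
X = rng.normal(size=(78, 2000))   # 78 plasma samples x ~2,000 metabolite features (assumed shape)
y = np.repeat([0, 1], 39)         # 39 future-Parkinson's cases, 39 controls

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, stratify=y, random_state=0
)

# Every measured metabolite goes into the model.
model = MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=2000, random_state=0)
model.fit(X_train, y_train)
print("held-out accuracy:", model.score(X_test, y_test))

# Rank metabolites by permutation importance, a generic stand-in for the
# importance scores CRANK-MS reports alongside its predictions.
importance = permutation_importance(model, X_test, y_test, n_repeats=5, random_state=0)
top_features = np.argsort(importance.importances_mean)[::-1][:10]
print("top metabolite feature indices:", top_features)
```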

The researchers stressed that further validation is needed to test the algorithm. But in their preliminary tests, CRANK-MS was able to differentiate between Parkinson's and non-Parkinson's individuals with an accuracy of up to about 96%.

In further analyses, the researchers determined which molecules were picked up by the algorithm as the most important for identifying Parkinsons.

There were several noteworthy findings: for example, patients who went on to develop Parkinson's tended to have lower levels of a triterpenoid chemical known to have nerve-protecting properties. That substance is found at high levels in foods like apples, olives, and tomatoes.

Further, these patients also often had high levels of polyfluorinated alkyl substances (PFAS), which may be a marker of exposure to industrial chemicals.

"These data indicate that these metabolites are potential early indicators for PD [Parkinson's disease] that predate clinical PD diagnosis and are consistent with specific food diets (such as the Mediterranean diet) for PD prevention and that exposure to [PFASs] may contribute to the development of PD," the researchers wrote. The team noted a need for further research into these potential biomarkers.

The scientists have made the CRANK-MS algorithm publicly available for other researchers to use. The team says this algorithm likely has applications far beyond Parkinson's.

"We've built the model in such a way that it's fit for purpose," Zhang said. "What's exciting is that CRANK-MS can be readily applied to other diseases to identify new biomarkers of interest. The tool is user-friendly; on average, results can be generated in less than 10 minutes on a conventional laptop."

Go here to see the original:
Novel machine learning tool IDs early biomarkers of Parkinson's |... - Parkinson's News Today

Study finds workplace machine learning improves accuracy, but also increases human workload – Tech Xplore

by European School of Management and Technology (ESMT)

New research from ESMT Berlin shows that utilizing machine learning in the workplace always improves the accuracy of human decision-making; however, it can often also cause humans to exert more cognitive effort when making decisions.

These findings come from research by Tamer Boyaci and Francis de Véricourt, both professors of management science at ESMT Berlin, alongside Caner Canyakmaz, previously a post-doctoral fellow at ESMT and now an assistant professor of operations management at Ozyegin University. The researchers wanted to investigate how machine-based predictions may affect the decision process and outcomes of a human decision-maker. Their paper has been published in Management Science.

Interestingly, the use of machines increases a human's workload most when the professional is cognitively constrained, for instance, experiencing time pressures or multitasking. However, situations where decision-makers experience a high workload are precisely when introducing AI to alleviate some of this load appears most tempting. The research suggests that using AI in this instance to make the process faster can backfire, and actually increase, rather than decrease, the human's cognitive effort.

The researchers also found that, although machine input always improves the overall accuracy of human decisions, it can also increase the likelihood of certain types of errors, such as false positives. For the study, a machine learning model was used to identify the differences in accuracy, propensity, and the levels of cognitive effort exerted by humans, comparing solely human-made decisions to machine-aided decisions.

"The rapid adoption of AI technologies by many organizations has recently raised concerns that AI may eventually replace humans in certain tasks," says Professor de Vricourt. "However, when used alongside human rationale, machines can significantly enhance the complementary strengths of humans," he says.

The researchers say their findings clearly showcase the value of human-machine collaboration for professionals. But humans should also be aware that, though machines can provide incredibly accurate information, cognitive effort is often still needed to assess their own information and to compare the machine's prescription with their own conclusions before making a decision. The researchers say that the level of cognitive effort needed increases when humans are under pressure to deliver a decision.

"Machines can perform specific tasks with incredible accuracy, due to their incredible computing power, while in contrast, human decision-makers are flexible and adaptive but constrained by their limited cognitive capacitytheir skills complement each other," says Professor Boyaci. "However, humans must be wary of the circumstances of utilizing machines and understand when it is effective and when it is not."

Using the example of a doctor and patient, the researchers' findings suggest that the use of machines will improve overall diagnostic accuracy and decrease the number of misdiagnosed sick patients. However, if the disease incidence is low and time is constrained, introducing a machine to help doctors make their diagnosis could lead to more misdiagnosed patients and to more human cognitive effort, because of the additional work needed to resolve the ambiguity that machine input can introduce.
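
As a rough illustration of how such trade-offs can be explored, the toy simulation below (a sketch, not the model in the Management Science paper) combines a noisy "human" signal with a more precise "machine" signal and compares overall accuracy against specific error types. The prevalence, noise levels, weights, and threshold are invented assumptions, and which error rates rise or fall depends entirely on them.

```python
# Toy comparison of error types with and without a machine signal (illustrative
# assumptions only; not the decision model from the paper).
import numpy as np

rng = np.random.default_rng(0)
n, prevalence = 100_000, 0.05                    # low disease incidence (assumed)
sick = rng.random(n) < prevalence

human_signal = sick + rng.normal(0.0, 1.2, n)    # noisier unaided assessment (assumed)
machine_signal = sick + rng.normal(0.0, 0.6, n)  # more precise machine prediction (assumed)

def error_profile(score, threshold=0.5):
    """Return accuracy, false-positive rate, and false-negative rate for a score."""
    pred = score > threshold
    accuracy = (pred == sick).mean()
    false_pos = (pred & ~sick).sum() / (~sick).sum()
    false_neg = (~pred & sick).sum() / sick.sum()
    return accuracy, false_pos, false_neg

for label, score in [("human only", human_signal),
                     ("human + machine", 0.4 * human_signal + 0.6 * machine_signal)]:
    acc, fp, fn = error_profile(score)
    print(f"{label:>16}: accuracy={acc:.3f}  false-pos={fp:.3f}  false-neg={fn:.3f}")
```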

The researchers state that their findings offer both hope and caution for those looking to implement machines in the workplace. On the positive side, average accuracy improves, and when the machine input tends to confirm what is already expected, all error rates decrease and the human is more "efficient," as she reduces her cognitive effort.

However, incorporating machine-based predictions into human decisions is not always beneficial, either in terms of reducing errors or in the amount of cognitive effort required. In fact, introducing a machine to improve a decision-making process can be counterproductive, as it can increase certain error types as well as the time and cognitive effort it takes to reach a decision.

The findings underscore the critical impact machine-based predictions have on human judgment and decisions. These findings provide guidance on when and how machine input should be considered, and hence on the design of human-machine collaboration.

More information: Tamer Boyacı et al, Human and Machine: The Impact of Machine Input on Decision Making Under Cognitive Limitations, Management Science (2023). DOI: 10.1287/mnsc.2023.4744

Journal information: Management Science

Provided by European School of Management and Technology (ESMT)

Read more:
Study finds workplace machine learning improves accuracy, but also increases human workload - Tech Xplore

Non-Invasive Medical Diagnostics: Know Labs’ Partnership With Edge Impulse Has Potential To Improve Healthcare … – Benzinga

Machine learning has revolutionized the field of biomedical research, enabling faster and more accurate development of algorithms that can improve healthcare outcomes. Biomedical researchers are using machine learning tools and algorithms to analyze vast and complex health data and quickly identify patterns and relationships that were previously difficult to discern.

Know Labs, an emerging developer of non-invasive medical diagnostic technology, is readying a breakthrough in non-invasive glucose monitoring, which has the potential to positively impact the lives of millions. One of the key elements behind this tech is the ability to process large amounts of novel data generated by its Bio-RFID radio frequency sensor, using machine learning algorithms from Edge Impulse.

One significant way in which machine learning is improving algorithm development in the biomedical space is by enabling more accurate predictions and insights. Machine learning algorithms use advanced statistical techniques to identify correlations and relationships that may not be apparent to human researchers.

Machine learning algorithms can analyze a patient's entire medical history and provide predictions about their potential health outcomes, which can help medical professionals intervene earlier to prevent diseases from progressing. Machine learning algorithms can also be used to develop more personalized treatments.

Historically, this process was time-consuming and prone to error due to the difficulty in managing large datasets. Machine learning algorithms, on the other hand, can quickly and easily process vast amounts of data and identify patterns without human intervention, resulting in decreased manual workload and reduced error.

As the technology and use cases of machine learning continue to grow, it is evident that it can help realize a future of improved health care by unlocking the potential of large biomedical and patient datasets.

Already, early uses of machine learning in diagnosis and treatment have shown promise to diagnose breast cancer from X-rays, discover new antibiotics, predict the onset of gestational diabetes from electronic health records, and identify clusters of patients that share a molecular signature of treatment response.

With reports indicating that 400,000 hospitalized patients experience some type of preventable medical error each year, machine learning can help predict and diagnose diseases at a faster rate than most medical professionals, saving approximately $20 billion annually.

Companies like Linus Health, Viz.ai, PathAI, and Regard are demonstrating artificial intelligence (AI) and machine learning's (ML) ability to reduce errors and save lives.

Advancements in patient care, including remote physiologic monitoring and care delivery, highlight the growing demand for technology that enhances non-invasive means of medical diagnosis.

One significant area this could benefit is monitoring blood glucose non-invasively, without pricking the finger for blood, which is important for patients managing type 1 and type 2 diabetes. While glucose biosensors have existed for over half a century, they can be classified into two groups: electrochemical sensors, which rely on direct interaction with an analyte, and electromagnetic sensors, which leverage antennas and/or resonators to detect changes in the dielectric properties of the blood.

Using smart devices essentially involves shining light into the body using optical sensors and quantifying how the light reflects back to measure a particular metric. Already there are smartwatches, fitness trackers, and smart rings from companies like Apple Inc. AAPL, Samsung Electronics Co Ltd. (KRX: 005930) and Google (Alphabet Inc. GOOGL) that measure heart rate, blood oxygen levels, and a host of other metrics.

But applying this tech to measure blood glucose is much more complicated, and the data may not be accurate. Know Labs seems to be on a path to solving this challenge.

The Seattle-based company has partnered with Edge Impulse, provider of a machine learning development toolkit, to interpret robust data from its proprietary Bio-RFID technology. The algorithm refinement process that Edge Impulse provides is a critical step towards interpreting the existing large and novel datasets, which will ultimately support large-scale clinical research.

The Bio-RFID technology is a non-invasive medical diagnostic technology that uses a novel radio frequency sensor that can safely see through the full cellular stack to accurately identify a unique molecular signature of a wide range of organic and inorganic materials, molecules, and compositions of matter.

Microwave and radio frequency sensors operate over a broader frequency range, and with this comes an extremely broad dataset that requires sophisticated algorithm development. Working with Know Labs, Edge Impulse uses its machine learning tools to train a neural network model to interpret this data and make blood glucose level predictions, using a popular CGM (continuous glucose monitor) proxy for blood glucose. Edge Impulse provides a user-friendly approach to machine learning that allows product developers and researchers to optimize the performance of sensory data analysis. This technology is based on AutoML and TinyML to make AI more accessible, enabling quick and efficient machine learning modeling.
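
In outline, such a pipeline could look like the sketch below: a small regression network that maps each radio frequency sensor sweep to a paired CGM glucose reading. This is not Edge Impulse's toolkit or Know Labs' data; the sweep length, sample counts, and glucose values are synthetic assumptions.

```python
# Generic sketch (not the Edge Impulse pipeline): regress CGM glucose readings
# from synthetic radio frequency sensor sweeps.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor
from sklearn.metrics import mean_absolute_error

rng = np.random.default_rng(0)
sweeps = rng.normal(size=(5_000, 512))          # one row per RF frequency sweep (assumed length)
glucose_mg_dl = 70 + 120 * rng.random(5_000)    # paired CGM readings (synthetic)

X_train, X_test, y_train, y_test = train_test_split(
    sweeps, glucose_mg_dl, test_size=0.2, random_state=0
)

model = MLPRegressor(hidden_layer_sizes=(128, 64), max_iter=500, random_state=0)
model.fit(X_train, y_train)

predictions = model.predict(X_test)
print("mean absolute error (mg/dL):", mean_absolute_error(y_test, predictions))
```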

The partnership between Know Labs, a company committed to making a difference in people's lives by developing convenient and affordable non-invasive medical diagnostic solutions, and Edge Impulse, maker of tools that enable the creation and deployment of advanced AI algorithms, is a prime example of how responsible machine learning applications could significantly improve and change healthcare diagnostics.

Featured Photo by JiBJhoY on Shutterstock

This post contains sponsored advertising content. This content is for informational purposes only and is not intended to be investing advice.

Continued here:
Non-Invasive Medical Diagnostics: Know Labs' Partnership With Edge Impulse Has Potential To Improve Healthcare ... - Benzinga

17 AI and machine learning terms everyone needs to know – India Today

By India Today Education Desk: Artificial intelligence and machine learning are rapidly evolving fields with many exciting new developments. As these technologies become more pervasive in our lives, it is important for everyone to be familiar with the terminology and concepts behind them.

The terms discussed here are just the tip of the iceberg, but they provide a good foundation for understanding the basics of AI and machine learning.

By keeping up to date with these developments, students can prepare themselves for the future and potentially even contribute to the field themselves.

Here are 17 AI and machine learning terms everyone needs to know:

Anthropomorphism: This is the phenomenon by which people attribute human-like qualities to AI chatbots. But it's important to remember that they are not sentient beings and can only mimic language.

Bias: Errors that can occur in large language models when flaws in the training data influence the model's output, leading to inaccurate predictions and offensive responses.

ChatGPT: OpenAI's artificial intelligence language model can answer questions, generate code, write poetry, plan vacations, and translate languages, and it can now respond to images and pass the Uniform Bar Exam.

Bing: Microsoft's chatbot, integrated into its search engine, can have open-ended conversations on any topic, but it has been criticized for occasional inaccuracies, misleading responses, and strange answers.

Bard: Google's chatbot was designed as a creative tool to draft emails and poems, but it can also generate ideas, write blog posts, and provide factual or opinion-based answers.

Ernie: Baidu's rival to ChatGPT was revealed in March 2023 but had a disappointing debut due to a recorded demonstration.

Emergent behavior: Large language models can exhibit unexpected abilities, such as writing code, composing music, and generating fictional stories, based on their learning patterns and training data.

Generative AI: This is technology that creates original content, including text, images, video, and computer code, by identifying patterns in large quantities of training data.

Hallucination: This is a phenomenon in which large language models provide factually incorrect, irrelevant, or nonsensical answers due to limitations in their training data and architecture.

Large language model: This is a neural network that learns skills, such as generating language and conducting conversations, by analyzing vast amounts of text from across the internet.

Natural language processing: These are techniques used by large language models to understand and generate human language, including text classification and sentiment analysis, built on machine learning algorithms, statistical models, and linguistic rules.

Neural network: A mathematical system modeled on the human brain that learns skills by finding patterns in data through layers of artificial neurons, outputting predictions or classifications.

Parameters: These are numerical values, learned during training, that define a language model's structure and behavior. They are used to determine output likelihood; more parameters mean more complexity and accuracy but require more computational power.

Prompt: This is the starting point for a language model to generate text, providing context for text generation in natural language processing tasks such as chatbots and question-answering systems.

Reinforcement learning: A technique that teaches an AI model to find the best result through trial and error, receiving rewards or punishments based on its results, often enhanced by human feedback for games and complex tasks.
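
A minimal sketch of the idea, assuming an invented five-state corridor environment and arbitrary learning parameters, is tabular Q-learning: the agent tries actions, is rewarded only when it reaches the goal, and gradually learns which action to take in each state.

```python
# Tabular Q-learning on an invented 5-state corridor: reward only at state 4.
import numpy as np

rng = np.random.default_rng(0)
n_states = 5
actions = [-1, +1]                       # step left or step right
Q = np.zeros((n_states, len(actions)))
alpha, gamma, epsilon = 0.5, 0.9, 0.2    # learning rate, discount, exploration (arbitrary)

for episode in range(300):
    state = 0
    while state != n_states - 1:
        # Explore at random sometimes (and whenever the values are still tied),
        # otherwise exploit the best-known action.
        if rng.random() < epsilon or np.allclose(Q[state], Q[state][0]):
            action = int(rng.integers(len(actions)))
        else:
            action = int(Q[state].argmax())
        nxt = int(np.clip(state + actions[action], 0, n_states - 1))
        reward = 1.0 if nxt == n_states - 1 else 0.0
        # Trial-and-error update: nudge Q toward reward plus discounted future value.
        Q[state, action] += alpha * (reward + gamma * Q[nxt].max() - Q[state, action])
        state = nxt

print("learned action per state (0 = left, 1 = right):", Q.argmax(axis=1)[:-1])
```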

Transformer model: A neural network architecture that uses self-attention to understand context and long-term dependencies in language, used in many natural language processing applications such as chatbots and sentiment analysis tools.
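
The self-attention step at the heart of this architecture can be written in a few lines of NumPy. The toy dimensions below are invented, and this is not any particular model's implementation: each token's output is a weighted mix of every token's value vector, with weights set by query-key similarity.

```python
# Scaled dot-product self-attention on a toy 4-token sequence (illustrative only).
import numpy as np

rng = np.random.default_rng(0)
seq_len, d_model = 4, 8                      # 4 tokens, 8-dimensional embeddings (assumed)
x = rng.normal(size=(seq_len, d_model))      # token embeddings

W_q, W_k, W_v = (rng.normal(size=(d_model, d_model)) for _ in range(3))
Q, K, V = x @ W_q, x @ W_k, x @ W_v          # queries, keys, values

scores = Q @ K.T / np.sqrt(d_model)          # how strongly each token attends to each other token
weights = np.exp(scores) / np.exp(scores).sum(axis=-1, keepdims=True)  # row-wise softmax
attended = weights @ V                       # context-aware token representations

print(weights.round(2))                      # each row sums to 1
```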

Supervised learning: This is a type of machine learning in which a computer is trained to make predictions based on labeled examples, learning a function that maps input to output. It is used in applications like image and speech recognition and natural language processing.
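
A minimal supervised-learning example, using scikit-learn's built-in iris dataset rather than anything from the article: fit a classifier on labeled examples, then score it on held-out examples it has never seen.

```python
# Supervised learning in miniature: labeled examples in, predictions out.
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = load_iris(return_X_y=True)            # measurements (inputs) and species labels (outputs)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)

clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)   # learn the input-to-output mapping
print("held-out accuracy:", clf.score(X_test, y_test))
```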

See the original post:
17 AI and machine learning terms everyone needs to know - India Today

Harnessing Machine Learning to Make Complex Systems More … – Lawrence Berkeley National Laboratory (.gov)

Getting something for nothing doesn't work in physics. But it turns out that, by thinking like a strategic gamer, and with some help from a demon, improved energy efficiency for complex systems like data centers might be possible.

In computer simulations, Stephen Whitelam of the Department of Energy's Lawrence Berkeley National Laboratory (Berkeley Lab) used neural networks (a type of machine learning model that mimics human brain processes) to train nanosystems, which are tiny machines about the size of molecules, to work with greater energy efficiency.

What's more, the simulations showed that learned protocols could draw heat from the systems by virtue of constantly measuring them to find the most energy-efficient operations.

"We can get energy out of the system, or we can store work in the system," Whitelam said.

It's an insight that could prove valuable, for example, in operating very large systems like computer data centers. Banks of computers produce enormous amounts of heat that must be extracted, using still more energy, to prevent damage to the sensitive electronics.

Whitelam conducted the research at the Molecular Foundry, a DOE Office of Science user facility at Berkeley Lab. His work is described in a paper published in Physical Review X.

Asked about the origin of his ideas, Whitelam said, "People had used techniques in the machine learning literature to play Atari video games that seemed naturally suited to materials science."

In a video game like Pac-Man, he explained, the aim with machine learning would be to choose a particular time for an action (up, down, left, right, and so on) to be performed. Over time, the machine learning algorithms will learn the best moves to make, and when, to achieve high scores. The same algorithms can work for nanoscale systems.

Whitelam's simulations are also something of an answer to an old thought experiment in physics called Maxwell's Demon. Briefly, in 1867, physicist James Clerk Maxwell proposed a box filled with a gas, and in the middle of the box there would be a massless demon controlling a trap door. The demon would open the door to allow faster molecules of the gas to move to one side of the box and slower molecules to the opposite side.

Eventually, with all molecules so segregated, the slow side of the box would be cold and the fast side would be hot, matching the energy of the molecules.

"The system would constitute a heat engine," Whitelam said. Importantly, however, Maxwell's Demon doesn't violate the laws of thermodynamics by getting something for nothing, because information is equivalent to energy. Measuring the position and speed of molecules in the box costs more energy than can be derived from the resulting heat engine.

And heat engines can be useful things. Refrigerators provide a good analogy, Whitelam said. As the system runs, the food inside stays cold (the desired outcome) even though the back of the fridge gets hot as a product of the work done by the refrigerator's motor.

In Whitelam's simulations, the machine learning protocol can be thought of as the demon. In the process of optimization, it converts information drawn from the modeled system into energy, in the form of heat.

In one simulation, Whitelam optimized the process of dragging a nanoscale bead through water. He modeled a so-called optical trap in which laser beams, acting like tweezers of light, can hold and move a bead around.

"The name of the game is: Go from here to there with as little work done on the system as possible," Whitelam said. The bead jiggles under natural fluctuations, called Brownian motion, as water molecules bombard it. Whitelam showed that if these fluctuations can be measured, the bead can then be moved at the most energy-efficient moments.
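
A minimal sketch of this setup, in reduced units with invented parameters (this is not Whitelam's code), is an overdamped Langevin simulation of a bead in a harmonic optical trap: the trap centre is dragged according to a protocol and the work done on the bead is tallied. In Whitelam's approach, a neural network conditioned on measurements of the bead would choose the schedule so as to minimize that work; here a simple constant-speed drag stands in for it.

```python
# Bead dragged by a harmonic (optical) trap, overdamped Langevin dynamics.
# Potential U(x, lam) = k/2 * (x - lam)^2, work increment dW = -k*(x - lam)*d(lam).
# Reduced units; every parameter below is an illustrative assumption.
import numpy as np

rng = np.random.default_rng(1)
k, gamma, kT = 1.0, 1.0, 1.0        # trap stiffness, friction coefficient, thermal energy
dt, steps = 1e-3, 10_000
total_time = dt * steps

def constant_speed(t):
    """Drag the trap 5 length units at constant speed over the full protocol."""
    return 5.0 * t / total_time

def drag_work(schedule):
    """Simulate one noisy trajectory and return the work done on the bead."""
    x, work = 0.0, 0.0
    kick = np.sqrt(2 * kT * dt / gamma)              # Brownian noise amplitude
    for i in range(steps):
        lam, lam_next = schedule(i * dt), schedule((i + 1) * dt)
        # Bead relaxes toward the trap centre while water molecules jiggle it.
        x += (-k * (x - lam) / gamma) * dt + kick * rng.standard_normal()
        work += -k * (x - lam) * (lam_next - lam)    # work of shifting the trap
    return work

works = [drag_work(constant_speed) for _ in range(20)]
print(f"mean work for constant-speed drag: {np.mean(works):.2f} kT (varies run to run)")
```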

"Here we're showing that we can train a neural-network demon to do something similar to Maxwell's thought experiment but with an optical trap," he said.

Whitelam extended the idea to microelectronics and computation. He used the machine learning protocol to simulate flipping the state of a nanomagnetic bit between 0 and 1, which is a basic information-erasure/information-copying operation in computing.

"Do this again, and again. Eventually, your demon will learn how to flip the bit so as to absorb heat from the surroundings," he said. He came back to the refrigerator analogy: "You could make a computer that cools down as it runs, with the heat being sent somewhere else in your data center."

Whitelam said the simulations are like a testbed for understanding concepts and ideas. "And here the idea is just showing that you can perform these protocols, either with little energy expense, or energy sucked in at the cost of going somewhere else, using measurements that could apply in a real-life experiment," he said.

This research was supported by the Department of Energy's Office of Science.

# # #

Founded in 1931 on the belief that the biggest scientific challenges are best addressed by teams, Lawrence Berkeley National Laboratory and its scientists have been recognized with 16 Nobel Prizes. Today, Berkeley Lab researchers develop sustainable energy and environmental solutions, create useful new materials, advance the frontiers of computing, and probe the mysteries of life, matter, and the universe. Scientists from around the world rely on the Lab's facilities for their own discovery science. Berkeley Lab is a multiprogram national laboratory, managed by the University of California for the U.S. Department of Energy's Office of Science.

DOE's Office of Science is the single largest supporter of basic research in the physical sciences in the United States, and is working to address some of the most pressing challenges of our time. For more information, please visit energy.gov/science.

More here:
Harnessing Machine Learning to Make Complex Systems More ... - Lawrence Berkeley National Laboratory (.gov)