Archive for the ‘Machine Learning’ Category

Four Types of Alzheimer’s Disease and How Machine Learning Helped Identify Them – Science Times

Alzheimer's Disease remains one of the most common brain disorders worldwide, particularly among the elderly - and a new study reports that there is not just one but four distinct forms of this progressive brain disorder.

The currently irreversible condition is characterized by a slow decline in memory and cognitive abilities, eventually leaving patients unable to perform even the simplest tasks. The more we learn about the disease, the better we can address it and, hopefully, develop a cure in the near future - which makes the new discovery particularly important progress in the study of Alzheimer's.

A report published April 29 in Nature Medicine presents findings from an international team of researchers - including those from McGill University in Canada, King's College London in the UK, Skåne University Hospital in Sweden, and Yonsei University College of Medicine in South Korea, as well as members of AVID Radiopharmaceuticals and the Alzheimer's Disease Neuroimaging Initiative.

(Photo: ADEAR via Wikimedia Commons) Diagram of the brain of a person with Alzheimer's Disease

In the study titled "Four distinct trajectories of tau deposition identified in Alzheimer's disease," researchers explain how Alzheimer's disease is "characterized by the spread of tau pathology throughout the cerebral cortex."

The brain contains tau protein, a member of the microtubule-associated protein family, which is involved in a number of neurodegenerative diseases such as Parkinson's and Alzheimer's disease.

Tau pathology refers to the pathological aggregation of these proteins into neurofibrillary tangles (NFTs). The misfolded proteins, and the pattern in which they become tangled, had long been believed to be more or less the same in everyone with the neurodegenerative disease.

The researchers examined this phenomenon, which develops in cases of Alzheimer's disease, with the help of specially developed machine learning algorithms. The machine learning tool was trained to analyze brain scans of 1,143 people - a mixed data set of healthy brains and brains of people diagnosed with Alzheimer's disease.
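As an illustration only - the article does not describe the study's actual algorithm - the sketch below shows how scans could be grouped by their spatial pattern of tau deposition. The regional data, the use of k-means clustering, and all numbers are hypothetical stand-ins, not the researchers' method.

```python
# Illustrative sketch: clustering regional tau-PET values into four groups.
# NOT the study's algorithm; the paper models trajectories over time, whereas
# this toy example only groups static regional patterns.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)

# Hypothetical data: one row per scan, one column per brain region
# (e.g. tau-PET uptake in temporal, parietal, occipital, frontal areas).
n_scans, n_regions = 1143, 4
tau_uptake = rng.gamma(shape=2.0, scale=1.0, size=(n_scans, n_regions))

# Standardize regions so no single region dominates the distance metric.
X = StandardScaler().fit_transform(tau_uptake)

# Ask for four groups, mirroring the four subtypes reported in the article.
labels = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(X)

for subtype in range(4):
    share = np.mean(labels == subtype)
    print(f"Subtype {subtype + 1}: {share:.0%} of scans")
```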

"We identified four clear patterns of tau pathology that became distinct over time," said Oskar Hansson, co-author of the study and a neurologist from the Clinical Memory Unit at the Lund University, in a press release from the Swedish university.

Hansson additionally explains that the prevalence of the subgroups was anywhere from 18 to 30 percent of the cases in the study. This means that all of the subtypes of the disease appear to be almost equally common, with no single subtype dominating over the others.

The first variant, Subtype 1: Limbic, was found in 33 percent of cases. It is characterized by pathological tau spreading mostly within the brain's temporal lobe, and it primarily affects memory. It is followed by Subtype 2: MTL-Sparing, present in 18 percent of cases, in which tau spreads across other sections of the cerebral cortex. In these cases, memory problems are less common; instead, difficulties in planning and performing actions dominate.

The third, Subtype 3: Posterior, was found in 30 percent of cases, with tau proteins spreading in the visual cortex, the region of the brain that processes visual information. These patients experience difficulties with orientation, depth and distance perception, and processing shapes. The last one, Subtype 4: L Temporal, was detected in only 19 percent of cases and spreads asymmetrically in the left hemisphere, affecting speech and language.

"We now have reason to reevaluate the concept of typical Alzheimer's, and in the long run also the methods we use to assess the progression of the disease," commented Jacob Vogel, co-author of the study from McGill University.

View post:
Four Types of Alzheimer's Disease and How Machine Learning Helped Identify Them - Science Times

Can machine learning help save the whales? How PNW researchers use tech tools to monitor orcas – GeekWire

Aerial image of endangered Southern Resident killer whales in K pod. The image was obtained using a remotely piloted octocopter drone that was flown during health research by Dr. John Durban and Dr. Holly Fearnbach. (Vulcan Image)

Being an orca isn't easy. Despite a lack of natural predators, these amazing mammals face many serious threats, most of them brought about by their human neighbors. Understanding the pressures we put on killer whale populations is critical to the environmental policy decisions that will hopefully contribute to their ongoing survival.

Fortunately, marine mammal researchers like Holly Fearnbach of Sealife Response + Rehab + Research (SR3) and John Durban of Oregon State University are working hard to regularly monitor the condition of the Salish Sea's southern resident killer whale population (SRKW). Identified as J pod, K pod and L pod, these orca communities have migrated through the Salish Sea for millennia. Unfortunately, in recent years their numbers have dwindled to only 75 whales, with one new calf born in 2021. This is the lowest population figure for the SRKW in 30 years.

For more than a decade, Fearnbach and Durban have flown photographic surveys to capture aerial images of the orcas. Starting in 2008, image surveys were performed using manned helicopter flights. Then beginning in 2014, the team transitioned to unmanned drones.

As the remote-controlled drone flies 100 feet or more above the whales, images are captured of each of the pod members, either individually or in groups. Since the drone is also equipped with a laser altimeter, the exact distance is known, making calculations of the whales' dimensions very accurate. The images are then analyzed in what's called a photogrammetric health assessment. This assessment helps determine each whale's physical condition, including any evidence of pregnancy or significant weight loss due to malnourishment.
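To illustrate the underlying photogrammetry, the sketch below converts a whale's length in pixels to meters using the drone's laser-measured altitude. The camera parameters and numbers are assumed for the example and are not those of the actual survey equipment.

```python
# Minimal photogrammetric scaling sketch: convert a measured length in pixels
# to meters using the drone's altitude. All camera parameters are assumptions
# for illustration, not the real survey rig's specifications.

def ground_sample_distance(altitude_m: float,
                           sensor_width_mm: float,
                           focal_length_mm: float,
                           image_width_px: int) -> float:
    """Width of sea surface covered by one pixel, in meters."""
    footprint_width_m = altitude_m * sensor_width_mm / focal_length_mm
    return footprint_width_m / image_width_px

def whale_length_m(length_px: float, altitude_m: float) -> float:
    gsd = ground_sample_distance(altitude_m,
                                 sensor_width_mm=36.0,   # assumed full-frame sensor
                                 focal_length_mm=50.0,   # assumed lens
                                 image_width_px=8000)    # assumed image width
    return length_px * gsd

# Example: a whale spanning 2,400 pixels, photographed from about 35 m (115 ft).
print(f"Estimated length: {whale_length_m(2400, 35.0):.2f} m")
```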

"As a research tool, the drone is very cost effective and it allows us to do our research very noninvasively," Fearnbach said. "When we do detect health declines in individuals, we're able to provide management agencies with these quantitative health metrics."

But while the image collection stage is relatively inexpensive, processing the data has been costly and time-consuming. Each flight can capture 2,000 images, and tens of thousands of images are captured over the course of each survey. Following the drone work, it typically takes about six months to manually complete the analysis of each season's batch of images.

Obviously, half a year is a very long time if you're starving or pregnant, which is one reason why SR3's new partnership with Vulcan is so important. Working together, the organizations developed a new approach to process the data more rapidly. The Aquatic Mammal Photogrammetry Tool (AMPT) uses machine learning and an end-user tool to accelerate the laborious process, dramatically shortening the time needed to analyze, identify and categorize all of the images.

Applying machine learning techniques to the problem has already yielded huge results, reducing a six-month process to just six weeks with room for further improvements. Machine learning is a branch of computing that can improve its performance through experience and use of data. The faster turnaround time will make it possible to more quickly identify whales of concern and provide health metrics to management groups to allow for adaptive decision making, according to Vulcan.

"We're trying to make and leave the world a better place, primarily through ocean health and conservation," said Sam McKennoch, machine learning team manager at Vulcan. "We got connected with SR3 and realized this was a great use case, where they have a large amount of existing data and needed help automating their workflows."

AMPT is based on four different machine learning models. First, the orca detector identifies the images that contain orcas and places a box around each whale. The next ML model fully outlines the orca's body, a process known in the machine learning field as semantic segmentation. After that comes the landmark detector, which locates the rostrum (or snout) of the whale, the dorsal fin, blowhole, shape of the eye patches, fluke notch and so forth. This allows the software to measure and calculate the shape and proportions of various parts of the body.

Of particular interest is whether the whale's facial fat deposits are so low that they result in indentations of the head that marine biologists refer to as "peanut head." This only appears when the orca has lost a significant amount of body fat and is in danger of starvation.

Finally, the fourth machine learning model is the identifier. The shape of the gray saddle patch behind the whale's dorsal fin is as unique as a fingerprint, allowing each of the individuals in the pod to be identified.
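As a rough sketch of how such a four-stage pipeline fits together - the function names, return values and example ID below are hypothetical placeholders, not the actual AMPT code:

```python
# Hypothetical sketch of a four-stage pipeline like the one described for AMPT.
# Placeholder functions stand in for the real detector, segmenter, landmark
# model and identifier; they only illustrate how the stages chain together.
from dataclasses import dataclass, field

@dataclass
class WhaleRecord:
    box: tuple                                     # bounding box from the orca detector
    landmarks: dict = field(default_factory=dict)  # rostrum, dorsal fin, blowhole, ...
    identity: str = "unknown"                      # individual matched by saddle patch

def detect_orcas(image) -> list:
    return [(100, 200, 900, 600)]                  # placeholder: one detection box

def segment_body(image, box):
    return None                                    # placeholder: per-pixel body mask

def locate_landmarks(image, box) -> dict:
    return {"rostrum": (120, 390), "dorsal_fin": (510, 260), "fluke_notch": (880, 410)}

def identify_individual(image, box) -> str:
    return "example-whale"                         # placeholder: saddle-patch match

def analyze_image(image) -> list:
    records = []
    for box in detect_orcas(image):                 # 1. find each orca in the frame
        segment_body(image, box)                    # 2. outline the body for measurement
        landmarks = locate_landmarks(image, box)    # 3. rostrum, fin, eye patches, fluke notch
        identity = identify_individual(image, box)  # 4. match the saddle patch to a known whale
        records.append(WhaleRecord(box, landmarks, identity))
    return records

print(analyze_image(image=None))
```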

There are a lot of different kinds of information needed for this kind of automation. Fortunately, Vulcan has been able to leverage some of SR3s prior manual work to bootstrap their machine learning models.

"We really wanted to understand their pain points and how we could provide them the tools they needed, rather than the tools we might want to give them," McKennoch said.

As successful as AMPT has been, there's a lot of knowledge and information that has yet to be incorporated into its machine learning models. As a result, there's still a need to keep users in the loop, in a semi-supervised way, for some of the ML processing. The interface speeds up user input and standardizes measurements made by different users.

McKennoch believes there will be gains with each batch they process for several cycles to come. Because of this, they hope to continue to improve performance in terms of accuracy, workflow and compute time to the point that the entire process eventually takes days, instead of weeks or months.

This is very important because AMPT will provide information that guides policy decisions at many levels. Human impact on the orcas' environment is not diminishing and, if anything, is increasing. Overfishing is reducing food sources, particularly chinook salmon, the orcas' preferred meal. Commercial shipping and recreational boats continue to cause injury, and their excessive noise interferes with the orcas' ability to hunt salmon. Toxic chemicals from stormwater runoff and other pollution damage the marine mammals' health. Ongoing monitoring of each individual whale will be critical to maintaining their wellbeing and the health of the local marine ecosystem.

Vulcan plans to open-source AMPT, giving it a life of its own in the marine mammal research community. McKennoch said they hope to extend the tool so it can be used for other killer whale populations, different large whales, and in time, possibly smaller dolphins and harbor seals.

The rest is here:
Can machine learning help save the whales? How PNW researchers use tech tools to monitor orcas - GeekWire

The Coolest Data Science And Machine Learning Tool Companies Of The 2021 Big Data 100 – CRN

Learning Curve

As businesses and organizations strive to manage ever-growing volumes of data and, even more important, derive value from that data, they are increasingly turning to data engineering and machine learning tools to improve and even automate their big data processes and workflows.

As part of the 2021 Big Data 100, CRN has compiled a list of data science and machine learning tool companies that solution providers should be aware of. While most of these are not exactly household names, some, including DataRobot, Dataiku and H2O, have been around for a number of years and have achieved significant market presence. Others, including dotData, are more recent startups.

This week CRN is running the Big Data 100 list in slideshows, organized by technology category, with vendors of business analytics software, database systems, data management and integration software, data science and machine learning tools, and big data systems and platforms.

(Some vendors market big data products that span multiple technology categories. They appear in the slideshow for the technology segment in which they are most prominent.)

See the article here:
The Coolest Data Science And Machine Learning Tool Companies Of The 2021 Big Data 100 - CRN

Attabotics Partners With AltaML and Amii to Bolster Artificial Intelligence and Machine Learning Cap – DC Velocity

Attabotics, the 3D robotics supply chain company, today announced a partnership with AltaML, a leading Canadian applied artificial intelligence and machine learning company, and the Alberta Machine Intelligence Institute (Amii), one of the world's preeminent centers of artificial intelligence research and application, to develop capabilities in artificial intelligence (AI) and machine learning (ML) that further optimize efficiency and productivity in Attabotics' innovative supply chain infrastructure. Together, the three organizations will begin operationalizing the partnership through projects that combine AI technologies with IoT (Internet of Things) infrastructure to achieve more efficient IoT operations, improve human-machine interactions and enhance Attabotics' data management and capabilities.

Requiring 85 percent less space than typical fulfillment warehouses, Attabotics provides an entirely new way to store and pick goods in warehouses, tailor-made to help retailers respond to changing e-commerce demands and empower brands. The company transforms the rows and aisles of a typical warehouse into a single, vertical storage structure that's modular and scalable, and uses 3D robots internally to store and retrieve items for box packers on the outside perimeter. Attabotics offers an ideal applied platform to utilize emerging technologies to optimize the supply chain for modern commerce.

Integrating AI technology into the supply chain for transparency, predictive analytics and network optimization is integral, as the pandemic has shown that the traditional supply chain doesn't and won't support modern consumer behavior. Attabotics is building advanced AI/ML capabilities that maximize supply chain system throughput by predictively optimizing fulfillment while minimizing downtime. Attabotics drives these advanced AI models by leveraging IoT data derived from modern cloud-based robotic operations. With AltaML and Amii, Attabotics is taking another step toward building out its digitally integrated, distributed network that is optimized for modern commerce.

"We're excited to work with two world-renowned organizations to build the future of innovation in Canada," said Scott Gravelle, Attabotics CEO. "Creating alliances with industry-leading partners is something we've put an emphasis on, which is why we're so grateful to have identified the right partners in AltaML and Amii to help further optimize our platform as we revolutionize the supply chain."

This collaboration draws on the strengths of three Alberta technology leaders to expand data analytics capabilities for customers. Combining Attabotics' expertise in warehousing and fulfillment with AltaML's expertise in developing applied AI solutions and Amii's world-leading research expertise, the collaboration will enable innovation in areas such as maximizing system automation uptime and throughput. The partnership will also support the growth of Calgary and Alberta as an innovation hub and contribute to an ecosystem where technology and innovation continue to thrive.

"AltaML builds and deploys AI-powered software for complex problems, creating new competitive advantage for our partners," said Nicole Janssen, AltaML co-CEO. "Attabotics has disrupted traditional warehousing, and we are thrilled to work with them, and Amii, to optimize their processes through applied AI. We are already seeing promising results and look forward to many more to come."

"Amii is thrilled to be part of this one-of-a-kind collaboration bringing together three of Alberta's leading technology organizations. Together, we're demonstrating the province's reputation as a hub for technology and artificial intelligence through the combination of Attabotics' transformational work in advanced robotics for supply chain, Amii's leadership and expertise in artificial intelligence research and development, and AltaML's proven record in applying AI to create business impact. This partnership shows the power of public-private partnerships and is further proof of Alberta's leadership in the research and application of AI," said Cam Linke, Amii CEO.

Originally posted here:
Attabotics Partners With AltaML and Amii to Bolster Artificial Intelligence and Machine Learning Cap - DC Velocity

Machine learning security vulnerabilities are a growing threat to the web, report highlights – The Daily Swig

Security industry needs to tackle nascent AI threats before it's too late

As machine learning (ML) systems become a staple of everyday life, the security threats they entail will spill over into all kinds of applications we use, according to a new report.

Unlike traditional software, where flaws in design and source code account for most security issues, in AI systems, vulnerabilities can exist in images, audio files, text, and other data used to train and run machine learning models.

This is according to researchers from Adversa, a Tel Aviv-based start-up that focuses on security for artificial intelligence (AI) systems, who outlined their latest findings in their report, The Road to Secure and Trusted AI, this month.

"This makes it more difficult to filter, handle, and detect malicious inputs and interactions," the report warns, adding that threat actors will eventually weaponize AI for malicious purposes.

"Unfortunately, the AI industry hasn't even begun to solve these challenges yet, jeopardizing the security of already deployed and future AI systems."

There's already a body of research that shows many machine learning systems are vulnerable to adversarial attacks: imperceptible manipulations that cause models to behave erratically.

According to the researchers at Adversa, machine learning systems that process visual data account for most of the work on adversarial attacks, followed by analytics, language processing, and autonomy.

Machine learning systems have a distinct attack surface

"With the growth of AI, cyberattacks will focus on fooling new visual and conversational interfaces," the researchers write.

"Additionally, as AI systems rely on their own learning and decision making, cybercriminals will shift their attention from traditional software workflows to algorithms powering analytical and autonomy capabilities of AI systems."

Web developers who are integrating machine learning models into their applications should take note of these security issues, warned Alex Polyakov, co-founder and CEO of Adversa.

"There is definitely a big difference in so-called digital and physical attacks. Now, it is much easier to perform digital attacks against web applications: sometimes changing only one pixel is enough to cause a misclassification," Polyakov told The Daily Swig, adding that attacks against ML systems in the physical world have more stringent demands and require much more time and knowledge.
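As a generic illustration of this class of digital attack - not Adversa's tooling or any specific incident - the fast gradient sign method (FGSM) nudges every pixel by a tiny, carefully signed amount to push a classifier toward a wrong prediction. The sketch below assumes PyTorch and uses an untrained ResNet purely to keep the example self-contained.

```python
# Minimal FGSM (fast gradient sign method) sketch of a small digital
# perturbation. Illustrative only: the model is untrained here so the example
# runs offline; a real attack would target a deployed, trained classifier.
import torch
import torch.nn.functional as F
from torchvision import models

model = models.resnet18(weights=None).eval()   # stand-in for a deployed model

image = torch.rand(1, 3, 224, 224)             # stand-in for a real input image
image.requires_grad_(True)

logits = model(image)
predicted = logits.argmax(dim=1)               # the model's current prediction

# Take one gradient step that increases the loss for the current prediction.
loss = F.cross_entropy(logits, predicted)
loss.backward()

epsilon = 2.0 / 255.0                          # budget: roughly two intensity levels
adversarial = (image + epsilon * image.grad.sign()).clamp(0.0, 1.0)

print("prediction changed:", bool(model(adversarial).argmax(dim=1) != predicted))
```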

Polyakov also warned about vulnerabilities in machine learning models served over the web, such as API services provided by large tech companies.

"Most of the models we saw online are vulnerable, and it has been proven by several research reports as well as by our internal tests," Polyakov said. "With some tricks, it is possible to train an attack on one model and then transfer it to another model without knowing any special details of it."

"Also, you can perform a CopyCat attack to steal a model, apply the attack on it, and then use this attack on the API."
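To make the transfer and model-stealing idea concrete, here is a toy sketch with a hypothetical victim endpoint (not any real API): query the remote model for labels, fit a local surrogate on those labels, and then use the surrogate as a stand-in for crafting attacks, which often transfer back to the original when the two models agree closely.

```python
# Toy sketch of model extraction ("CopyCat"-style): harvest labels from a
# victim API, train a local surrogate, and measure how closely it mimics the
# victim. `victim_api_predict` is a hypothetical stand-in, not a real service.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def victim_api_predict(x: np.ndarray) -> np.ndarray:
    """Hypothetical remote model: the attacker only sees its output labels."""
    return (x[:, 0] + x[:, 1] > 1.0).astype(int)

# 1. Harvest labels by querying the API with attacker-chosen inputs.
queries = rng.uniform(0.0, 1.0, size=(500, 2))
labels = victim_api_predict(queries)

# 2. Train a local surrogate (the "stolen" copy) on the query/label pairs.
surrogate = LogisticRegression().fit(queries, labels)

# 3. High agreement suggests attacks crafted on the surrogate will transfer.
agreement = surrogate.score(queries, labels)
print(f"surrogate agrees with the victim on {agreement:.0%} of queries")
```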

Most machine learning algorithms require large sets of labeled data to train models. In many cases, instead of going through the effort of creating their own datasets, machine learning developers search and download datasets published on GitHub, Kaggle, or other web platforms.

Eugene Neelou, co-founder and CTO of Adversa, warned about potential vulnerabilities in these datasets that can lead to data poisoning attacks.

"Poisoning data with maliciously crafted data samples may make AI models learn those data entries during training, thus learning malicious triggers," Neelou told The Daily Swig. "The model will behave as intended in normal conditions, but malicious actors may call those hidden triggers during attacks."
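A toy sketch of such a trigger-based poisoning scheme follows; the data, trigger pattern and numbers are invented purely for illustration and are not drawn from any real dataset or incident.

```python
# Toy illustration of backdoor-style data poisoning: stamp a small trigger on
# a few training samples and flip their labels, so a model trained on the set
# learns to associate the trigger with the attacker's target class.
import numpy as np

rng = np.random.default_rng(0)

def add_trigger(image: np.ndarray) -> np.ndarray:
    """Stamp a bright 3x3 patch into the corner as a hidden trigger."""
    poisoned = image.copy()
    poisoned[:3, :3] = 1.0
    return poisoned

# Clean training set: 100 tiny grayscale "images" with honest binary labels.
images = rng.uniform(0.0, 0.5, size=(100, 8, 8))
labels = rng.integers(0, 2, size=100)

# Poison 5% of the set: add the trigger and force the attacker's target label.
poison_idx = rng.choice(100, size=5, replace=False)
for i in poison_idx:
    images[i] = add_trigger(images[i])
    labels[i] = 1   # attacker's chosen target class

# A model trained on (images, labels) behaves normally on clean inputs, but
# may output class 1 whenever the trigger patch is present at inference time.
```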

Neelou also warned about trojan attacks, where adversaries distribute contaminated models on web platforms.

"Instead of poisoning data, attackers have control over the AI model's internal parameters," Neelou said. "They could train/customize and distribute their infected models via GitHub or model platforms/marketplaces."

Unfortunately, GitHub and other platforms don't yet have any safeguards in place to detect and defend against data poisoning schemes. This makes it very easy for attackers to spread contaminated datasets and models across the web.

Attacks against machine learning and AI systems are set to increase over the coming years

Neelou warned that while AI is used extensively across myriad organizations, there are no efficient AI defenses.

He also raised concern that under currently established roles and procedures, no one is responsible for AI/ML security.

"AI security is fundamentally different from traditional computer security, so it falls under the radar for cybersecurity teams," he said. "It's also often out of scope for practitioners involved in responsible/ethical AI, and regular AI engineering hasn't solved the MLOps and QA testing yet."

On the bright side, Polyakov said that adversarial attacks can also be used for good. Adversa recently helped one of its clients use adversarial manipulations to develop web CAPTCHA queries that are resilient against bot attacks.

"The technology itself is a double-edged sword and can serve both good and bad," he said.

Adversa is one of several organizations involved in dealing with the emerging threats of machine learning systems.

Last year, in a joint effort, several major tech companies released the Adversarial ML Threat Matrix, a set of practices and procedures meant to secure the machine learning training and delivery pipeline in different settings.

See the rest here:
Machine learning security vulnerabilities are a growing threat to the web, report highlights - The Daily Swig