Archive for the ‘Machine Learning’ Category

Cloud Security Alliance Releases Guidance on Use of Artificial Intelligence (AI) in Healthcare – Business Wire

SEATTLE--(BUSINESS WIRE)--The Cloud Security Alliance (CSA), the world's leading organization dedicated to defining standards, certifications, and best practices to help ensure a secure cloud computing environment, today released Artificial Intelligence (AI) in Healthcare. Drafted by the Health Information Management Working Group, the report provides an overview of the ways in which AI and machine learning (ML) can be used to bring about major transformations in healthcare, addresses the challenges their use presents, and offers guidance on how best to incorporate them into healthcare systems now and in the future.

The document shares examples, use cases, and treatment methods showing how AI, machine learning, and data mining can be used effectively throughout a healthcare system, including in research, diagnosis, and treatment. It also addresses ethical and legal challenges, bias in AI, and how AI relates to telehealth, big data, and cloud computing in healthcare.

"This is the time when healthcare leaders should be accelerating their use of AI, which, when paired with cloud computing, has the potential to drastically improve patient outcomes. But, as with any new technology entering the healthcare arena, there are several challenges, among them a lack of data exchange, regulatory compliance requirements, and patient and provider adoption. This paper offers a summary of the areas in which healthcare can benefit, while providing healthcare delivery organizations guidance on how best to address the challenges their use brings," said Dr. James Angle, the paper's lead author and co-chair of the Health Information Management Working Group.

"The emergence of AI as a tool for better healthcare offers opportunities to improve patient and clinical outcomes and reduce costs. The ever-increasing volume and complexity of healthcare data provide an ideal environment for the application of both AI and ML, and there are several applications where these technologies can deliver incredible value. Even so, healthcare delivery organizations must evaluate each to determine if and how they can be adopted," said Michael Roza, a contributor to the paper.

The CSA Health Information Management Working Group aims to directly influence how health information service providers deliver secure cloud solutions (services, transport, applications, and storage) to their clients, and to foster cloud awareness within all aspects of healthcare and related industries. Individuals interested in the working group's future research and initiatives are invited to join.

Download Artificial Intelligence in Healthcare.

About Cloud Security Alliance

The Cloud Security Alliance (CSA) is the world's leading organization dedicated to defining and raising awareness of best practices to help ensure a secure cloud computing environment. CSA harnesses the subject matter expertise of industry practitioners, associations, governments, and its corporate and individual members to offer cloud security-specific research, education, training, certification, events, and products. CSA's activities, knowledge, and extensive network benefit the entire community impacted by cloud, from providers and customers to governments, entrepreneurs, and the assurance industry, and provide a forum through which different parties can work together to create and maintain a trusted cloud ecosystem. For further information, visit us at http://www.cloudsecurityalliance.org, and follow us on Twitter @cloudsa.

See the article here:
Cloud Security Alliance Releases Guidance on Use of Artificial Intelligence (AI) in Healthcare - Business Wire

The convergence of deep neural networks and immunotherapy – TechCrunch

Luis Voloch is the CTO and co-founder of Immunai. He was previously Israel Tech Challenge's head of data science, worked on varied machine learning efforts at Palantir, and led ML modeling of DNA data at MyHeritage.

What do deep neural networks and cancer immunotherapy have in common?

While both are among the most transformational areas of modern science, 30 years ago, these fields were all but ridiculed by the scientific community. As a result, progress in each happened at the sidelines of academia for decades.

Between the 1970s and 1990s, some of the most prominent computer scientists argued that neural networks (the backbone of most modern AI) would never work for most applications. Marvin Minsky, in the book Perceptrons, exposed flaws in early conceptions of neural networks and argued that the whole approach was ineffective.

Meanwhile, from the 1980s through the 2000s, neural network pioneers and believers Geoffrey Hinton, Yoshua Bengio, and Yann LeCun continued their efforts and pursued their intuition that neural networks would succeed. These researchers found that most of the original ideas were correct but simply needed more data (think of ImageNet), more computational power, and further modeling tweaks to be effective.

Hinton, Bengio, and LeCun were awarded the Turing Award (the computer science equivalent of a Nobel Prize) in 2018 for their work. Today, their revelations have made neural networks the most vibrant area of computer science and have revolutionized fields such as computer vision and natural language processing.

Cancer immunology faced similar obstacles. Treatment with the IL-2 cytokine, one of the first immunomodulatory drugs, failed to meet expectations. These outcomes slowed further research, and for decades, cancer immunology wasn't taken seriously by many cancer biologists. Thanks to the effort and intuition of a few, however, it was shown decades later that the concept of boosting the immune system to fight cancer had real validity. It turned out that we just needed better drug targets and combinations, and eventually, researchers demonstrated that the immune system is one of the best tools in our fight against cancer.

James P. Allison and Tasuku Honjo, who pioneered the class of cancer immunotherapy drugs known as checkpoint inhibitors, were awarded the Nobel Prize in 2018.

Though widely accepted now, it took decades for the scientific establishment to accept these novel approaches as valid.

Machine learning and immunotherapy have more in common than historical similarities. The beauty of immunotherapy is that it leverages the versatility and flexibility of the immune system to fight different types of cancers. While the first immunotherapies showed results in a few cancers, they were later shown to work in many other cancer types. AI, similarly, utilizes flexible tools to solve a wide range of problems across applications via transfer and multitask learning. These processes are made possible through access to large-scale data.

Here's something to remember: The resurgence of neural networks started in 2012, after the AlexNet architecture demonstrated 84.7% accuracy in the ImageNet competition. This level of performance was revolutionary at the time, with the second-best model achieving 73.8% accuracy. The ImageNet dataset, started by Fei-Fei Li, is robust, well labeled, and high quality. As a result, it has been integral to how far neural networks have brought computer vision today.

Interestingly, similar developments are happening now in biology. Life sciences companies and labs are building large-scale datasets with tens of millions of immune cells labeled consistently to ensure the validity of the underlying data. These datasets are the analogs of ImageNET in biology.

We're already seeing these large, high-quality datasets give rise to experimentation at a rate and scale that was impossible before. For example, machine learning is being used to identify immune cell types in different parts of the body and their involvement in various diseases. After identifying patterns, algorithms can map or predict different immune trajectories, which can then be used to interpret, for example, why some cancer immunotherapies work on particular cancer types and some don't. The datasets act as the Google Maps of the immune system.
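
As a rough illustration of this kind of cell-type identification (a toy sketch, not Immunai's or any lab's actual pipeline), the Python snippet below trains a simple classifier on synthetic gene-expression profiles; the marker genes, cell-type labels, and data are all illustrative assumptions.

```python
# Toy sketch: classify immune cell types from gene-expression profiles.
# The genes, labels, and data are synthetic stand-ins, not a real dataset.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
genes = ["CD3E", "CD8A", "CD19", "NKG7"]          # illustrative marker genes
cell_types = np.array(["T cell", "B cell", "NK cell"])

# Simulate 300 cells whose expression levels are loosely tied to cell type.
labels = rng.integers(0, 3, size=300)
X = rng.poisson(lam=2.0, size=(300, len(genes))).astype(float)
X[labels == 0, 0] += 5   # T cells: high CD3E
X[labels == 1, 2] += 5   # B cells: high CD19
X[labels == 2, 3] += 5   # NK cells: high NKG7
y = cell_types[labels]

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_train, y_train)
print(f"held-out accuracy: {clf.score(X_test, y_test):.2f}")
```

Real single-cell datasets involve tens of thousands of genes and far more sophisticated models, but the workflow is the same: labeled profiles in, predicted cell types out.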

Mapping patterns of genes, proteins and cell interactions across diseases allows researchers to understand molecular pathways as the building blocks of disease. The presence or absence of a functional block helps interpret why some cancer immunotherapies work on particular cancer types but not others.

Mapping pathways of genes and proteins across diseases and phenotypes allows researchers to learn how they work together to activate specific pathways and fight multiple diseases. Genes can be part of numerous pathways, and they can cause distinct types of cells to behave differently.

Moreover, different cell types can share similar gene activities, and the same functional pathways can be found in various immune-related disorders. This makes a case for building machine learning models that perform effectively on specific tasks and transfer to other tasks.

Transfer learning works in deep learning models, for example, by taking simple patterns (in images, think of simple lines and curves) learned by early layers of a neural network and leveraging those layers for different problems. In biology, this allows us to transfer knowledge on how specific genes and pathways in one disease or cell type play a role in other contexts.
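
To make the transfer-learning idea concrete, here is a minimal Python sketch. It uses a generic ImageNet-pretrained network as a stand-in, and the five-class target task is an assumption for illustration; the point is simply that the early, general-purpose layers are frozen and only a new task-specific head is retrained.

```python
# Toy sketch of transfer learning: reuse early layers trained on one task
# (ImageNet) and retrain only a new output head for a different task.
import torch
import torch.nn as nn
from torchvision import models

model = models.resnet18(weights="IMAGENET1K_V1")   # features learned on ImageNet

for param in model.parameters():                    # freeze the early, general-purpose layers
    param.requires_grad = False

num_new_classes = 5                                 # e.g. five tissue or cell categories (illustrative)
model.fc = nn.Linear(model.fc.in_features, num_new_classes)  # new task-specific head

# Only the new head's parameters are updated during fine-tuning.
optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
```

In a biological setting the same pattern applies: a model trained on one disease or cell type can keep its early layers and adapt only its final layers to a new context.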

AI research that addresses the effects of genetic changes (perturbations) on immune cells, and what those effects mean for possible treatments, is increasingly common in cancer immunology. This kind of research will enable us to understand these cells more quickly and lead to better drugs and treatments.

With large-scale data fueling further research in immunotherapy and AI, we are confident that more effective drugs to fight cancer will appear soon, thus giving hope to the over 18 million people who are diagnosed with cancer every year.

Read the rest here:
The convergence of deep neural networks and immunotherapy - TechCrunch

How leveraging AI and machine learning can give companies a competitive edge – Business Today

A recent study by Gartner indicates that by 2025, the 10% of enterprises that establish Machine Learning (ML) or Artificial Intelligence (AI) engineering best practices will generate at least three times more value from their AI and ML efforts than the 90% of enterprises that don't. With so much value expected from the adoption of ML/AI engineering practices, it is hard to disagree that the future of enterprises rests heavily on AI and ML, alongside other digital technologies. The pandemic has unveiled a world that embraced technology at a pace that would otherwise have taken ages.

Traditional practices built around monolithic systems, inflexible architectures, and manual processes were all blocking innovation.

However, mass new-age technology acceptance induced by the pandemic has helped enterprises overcome these challenges. Modern technologies like AI and ML are opening a new world of possibilities for organisations.

Seizing the early-mover advantage will particularly help organisations make important business decisions in a more informed, intuitive way.

The applicability of new-age technologies is growing every day. For example, marketers are starting to use ML-based tools to personalise offers to their customers and to measure customer satisfaction by implementing ML algorithms in their operations.
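
As a toy sketch of what such offer personalisation can look like (the features, data, and model are illustrative assumptions, not any marketer's production system), a simple model can estimate the probability that a given customer will accept an offer:

```python
# Toy sketch of offer personalisation: predict whether a customer will accept
# an offer from a few behavioural features. Features and data are illustrative.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
# columns: visits last month, average basket value, days since last purchase
X = rng.normal(size=(500, 3))
# synthetic "ground truth": frequent, recent, high-spend customers accept more often
y = (X[:, 0] + 0.5 * X[:, 1] - X[:, 2] + rng.normal(scale=0.5, size=500) > 0).astype(int)

model = LogisticRegression().fit(X, y)
new_customer = np.array([[1.2, 0.4, -0.8]])
print("probability of accepting the offer:", model.predict_proba(new_customer)[0, 1])
```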

This is just one of many examples of how AI/ML algorithms are enabling organisations to run their businesses more intelligently and profitably. Additionally, enterprises are recognising the benefits of cloud infrastructure and applications with ML and AI algorithms built in.

They allow companies to spend less time on manual work and management and instead focus on high-value jobs that drive business results. ML can also make enterprise IT workloads more efficient and ultimately reduce IT infrastructure costs.

This stands especially true in India, where consulting firm Accenture estimates in one of its reports that the use of AI could add $957 billion to the Indian economy by 2035, provided the "right investments" are made in new-age technology. India, with its entrepreneurial spirit, abundance of talent, and strong educational resources, has enormous potential to unleash AI's true capabilities, but it needs the right partners.

The biggest limitation in using AI is that companies often run into implementation issues, which can range from a scarcity of data science expertise to making the platform perform in real time.

As a result, there is some reluctance among organisations to adopt AI, and this, in turn, leads to inconsistencies and a lack of results.

However, with the right partner, India's true potential can be harnessed. As we move into an AI/ML led world, we need to lead the change by building the requisite skills.

While many companies don't have the resources to marshal an army of data science PhDs, a more practical alternative is to build smaller, more focused "MLOps" teams, much like DevOps teams in application development.

Such teams could consist not just of data scientists, but also of developers and other IT engineers whose mission would be to deploy, maintain, and continually improve AI/ML models in a production environment. While IT professionals bear a huge responsibility for developing an AI/ML-led ecosystem in India, companies must also align resources to help them succeed. In due course, AI/ML will be the competitive advantage that companies must adopt in order to stay relevant and sustain their businesses.

Forrester predicts that one in five organisations will double down on "AI inside", that is, AI and ML embedded in their systems and operational practices.

AI and ML are powerful technology tools that hold the key to achieving an organization's digital transformation goals.

(The author is Head-Technology Cloud, Oracle India.)

View original post here:
How leveraging AI and machine learning can give companies a competitive edge - Business Today

Machines that see the world more like humans do – Big Think

Computer vision systems sometimes make inferences about a scene that fly in the face of common sense. For example, if a robot were processing a scene of a dinner table, it might completely ignore a bowl that is visible to any human observer, estimate that a plate is floating above the table, or misperceive a fork to be penetrating a bowl rather than leaning against it.

Move that computer vision system to a self-driving car and the stakes become much higher; for example, such systems have failed to detect emergency vehicles and pedestrians crossing the street.

To overcome these errors, MIT researchers have developed a framework that helps machines see the world more like humans do, reports MIT News. Their new artificial intelligence system for analyzing scenes learns to perceive real-world objects from just a few images, and perceives scenes in terms of these learned objects.

The researchers built the framework using probabilistic programming, an AI approach that enables the system to cross-check detected objects against input data, to see if the images recorded from a camera are a likely match to any candidate scene. Probabilistic inference allows the system to infer whether mismatches are likely due to noise or to errors in the scene interpretation that need to be corrected by further processing.

This common-sense safeguard allows the system to detect and correct many errors that plague the deep-learning approaches that have also been used for computer vision. Probabilistic programming also makes it possible to infer probable contact relationships between objects in the scene, and use common-sense reasoning about these contacts to infer more accurate positions for objects.
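
The following is a deliberately simplified Python sketch of that cross-checking idea, not the actual 3DP3 implementation (which is built on a full probabilistic programming system): it scores how well each candidate scene interpretation explains the observed depth data under an assumed Gaussian noise model and prefers the interpretation with the higher likelihood.

```python
# Minimal sketch of the generative cross-check idea (not the actual 3DP3 code):
# score how well a candidate scene interpretation explains the observed data,
# assuming Gaussian sensor noise, and flag interpretations that explain it poorly.
import numpy as np
from scipy.stats import norm

def scene_log_likelihood(observed_depth, rendered_depth, noise_sigma=0.02):
    """Log-probability of the observed depth image given a candidate scene."""
    return norm.logpdf(observed_depth, loc=rendered_depth, scale=noise_sigma).sum()

# Stand-in "renders": a 10x10 depth patch where a bowl either rests on the
# table (depth 1.00 m) or floats 5 cm above it (depth 0.95 m).
render_on_table = np.full((10, 10), 1.00)
render_floating = np.full((10, 10), 0.95)

# Simulated camera observation: the bowl is really on the table, plus sensor noise.
rng = np.random.default_rng(0)
observed = render_on_table + rng.normal(scale=0.02, size=(10, 10))

for name, render in [("bowl on table", render_on_table), ("bowl floating", render_floating)]:
    print(name, scene_log_likelihood(observed, render))
# The on-table hypothesis gets the higher log-likelihood, so it is preferred.
```

In the real system the candidates are full 3D scene hypotheses and the expectations come from a graphics model, but the principle of comparing rendered expectations against observed data is the same.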

"If you don't know about the contact relationships, then you could say that an object is floating above the table; that would be a valid explanation. As humans, it is obvious to us that this is physically unrealistic and the object resting on top of the table is a more likely pose of the object. Because our reasoning system is aware of this sort of knowledge, it can infer more accurate poses. That is a key insight of this work," says lead author Nishad Gothoskar, an electrical engineering and computer science (EECS) PhD student with the Probabilistic Computing Project.

In addition to improving the safety of self-driving cars, this work could enhance the performance of computer perception systems that must interpret complicated arrangements of objects, like a robot tasked with cleaning a cluttered kitchen.

Gothoskar's co-authors include recent EECS PhD graduate Marco Cusumano-Towner; research engineer Ben Zinberg; visiting student Matin Ghavamizadeh; Falk Pollok, a software engineer in the MIT-IBM Watson AI Lab; recent EECS master's graduate Austin Garrett; Dan Gutfreund, a principal investigator in the MIT-IBM Watson AI Lab; Joshua B. Tenenbaum, the Paul E. Newton Career Development Professor of Cognitive Science and Computation in the Department of Brain and Cognitive Sciences (BCS) and a member of the Computer Science and Artificial Intelligence Laboratory; and senior author Vikash K. Mansinghka, principal research scientist and leader of the Probabilistic Computing Project in BCS. The research is being presented at the Conference on Neural Information Processing Systems in December.

A blast from the past

To develop the system, called 3D Scene Perception via Probabilistic Programming (3DP3), the researchers drew on a concept from the early days of AI research, which is that computer vision can be thought of as the inverse of computer graphics.

Computer graphics focuses on generating images based on the representation of a scene; computer vision can be seen as the inverse of this process. Gothoskar and his collaborators made this technique more learnable and scalable by incorporating it into a framework built using probabilistic programming.

"Probabilistic programming allows us to write down our knowledge about some aspects of the world in a way a computer can interpret, but at the same time, it allows us to express what we don't know, the uncertainty. So, the system is able to automatically learn from data and also automatically detect when the rules don't hold," Cusumano-Towner explains.

In this case, the model is encoded with prior knowledge about 3D scenes. For instance, 3DP3 knows that scenes are composed of different objects and that these objects often lie flat on top of each other, but they may not always be in such simple relationships. This enables the model to reason about a scene with more common sense.

Learning shapes and scenes

To analyze an image of a scene, 3DP3 first learns about the objects in that scene. After being shown only five images of an object, each taken from a different angle, 3DP3 learns the object's shape and estimates the volume it would occupy in space.

"If I show you an object from five different perspectives, you can build a pretty good representation of that object. You'd understand its color, its shape, and you'd be able to recognize that object in many different scenes," Gothoskar says.

Mansinghka adds, "This is way less data than deep-learning approaches. For example, the Dense Fusion neural object detection system requires thousands of training examples for each object type. In contrast, 3DP3 only requires a few images per object, and reports uncertainty about the parts of each object's shape that it doesn't know."

The 3DP3 system generates a graph to represent the scene, where each object is a node and the lines that connect the nodes indicate which objects are in contact with one another. This enables 3DP3 to produce a more accurate estimation of how the objects are arranged. (Deep-learning approaches rely on depth images to estimate object poses, but these methods dont produce a graph structure of contact relationships, so their estimations are less accurate.)
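
A minimal sketch of such a scene contact graph might look like the following; the object names, poses, and contact edges are invented for illustration and are not output from the actual system.

```python
# Toy sketch of a scene contact graph: objects are nodes, edges mean "in contact".
# Object names and poses are illustrative, not output from the real system.
from dataclasses import dataclass

@dataclass
class SceneObject:
    name: str
    pose: tuple          # (x, y, z) position estimate in metres

objects = {
    "table": SceneObject("table", (0.0, 0.0, 0.75)),
    "plate": SceneObject("plate", (0.1, 0.2, 0.76)),
    "fork":  SceneObject("fork",  (0.3, 0.2, 0.76)),
}

# Undirected contact edges: the plate and the fork both rest on the table.
contacts = {("table", "plate"), ("table", "fork")}

def in_contact(obj_name):
    """An object with no contact edge at all may indicate a floating-object error."""
    return any(obj_name in edge for edge in contacts)

for name in objects:
    print(name, "in contact" if in_contact(name) else "check pose: possibly floating")
```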

Outperforming baseline models

The researchers compared 3DP3 with several deep-learning systems, all tasked with estimating the poses of 3D objects in a scene.

In nearly all instances, 3DP3 generated more accurate poses than other models and performed far better when some objects were partially obstructing others. And 3DP3 only needed to see five images of each object, while each of the baseline models it outperformed needed thousands of images for training.

When used in conjunction with another model, 3DP3 was able to improve its accuracy. For instance, a deep-learning model might predict that a bowl is floating slightly above a table, but because 3DP3 has knowledge of the contact relationships and can see that this is an unlikely configuration, it is able to make a correction by aligning the bowl with the table.

"I found it surprising to see how large the errors from deep learning could sometimes be, producing scene representations where objects really didn't match with what people would perceive. I also found it surprising that only a little bit of model-based inference in our causal probabilistic program was enough to detect and fix these errors. Of course, there is still a long way to go to make it fast and robust enough for challenging real-time vision systems, but for the first time, we're seeing probabilistic programming and structured causal models improving robustness over deep learning on hard 3D vision benchmarks," Mansinghka says.

In the future, the researchers would like to push the system further so it can learn about an object from a single image, or a single frame in a movie, and then be able to detect that object robustly in different scenes. They would also like to explore the use of 3DP3 to gather training data for a neural network. It is often difficult for humans to manually label images with 3D geometry, so 3DP3 could be used to generate more complex image labels.

"The 3DP3 system combines low-fidelity graphics modeling with common-sense reasoning to correct large scene interpretation errors made by deep learning neural nets. This type of approach could have broad applicability as it addresses important failure modes of deep learning. The MIT researchers' accomplishment also shows how probabilistic programming technology previously developed under DARPA's Probabilistic Programming for Advancing Machine Learning (PPAML) program can be applied to solve central problems of common-sense AI under DARPA's current Machine Common Sense (MCS) program," says Matt Turek, DARPA Program Manager for the Machine Common Sense Program, who was not involved in this research, though the program partially funded the study.

Additional funders include the Singapore Defense Science and Technology Agency collaboration with the MIT Schwarzman College of Computing, Intel's Probabilistic Computing Center, the MIT-IBM Watson AI Lab, the Aphorism Foundation, and the Siegel Family Foundation.

Republished with permission of MIT News. Read the original article.

Visit link:
Machines that see the world more like humans do - Big Think

12 Technology Innovations That Will Influence the Future of Healthcare – The Southern Maryland Chronicle

Technology and healthcare go hand in hand, and many people are asking where the combination is headed. The industry continues to benefit from massive investment in digital health trends such as telemedicine, IoT devices, and virtual reality surgical training, which has helped improve global health equity.

Here are 12 technology innovations that are changing how we think about IT and healthcare:

Nanotechnology promises many things, and it may be closer than you think. Researchers from the US and South Korea have created nanorobots capable of delivering drugs to clogged arteries and drilling through them. The technology, which is controlled by an MRI machine and has wide-ranging applications, looks promising, though some issues still need to be resolved in the lab before it can be applied to humans. Google has established Verily, a life sciences division within Alphabet, which is partnering with Johnson & Johnson to further explore the technology.

It has never been easier to deal with large amounts of data. Analytics, cloud computing, and machine learning have given us access to more data and let us see it in new ways. AI promises to help us sift through mountains of data to gain new insights, which will enable us to identify potential risks and reduce costs. Other promising applications include reducing waste and expediting the drug discovery process.

Billing is one of the biggest sources of frustration and confusion in healthcare: mistakes are easy to make, and chasing them down is tedious. Patient access solutions make the whole billing process simpler and the audit process more efficient.

Augmented reality offers many promising applications in healthcare. It can help us keep information organized, avoid errors, and improve the quality of care. It also makes it possible to access patient information during an interaction, making care more personal and powerful.

3D printing promises to revolutionize medical technology, from prosthetics and instrumentation to implants, and its impact will only grow as the processes are refined and improved.

Shockwave therapy, also known as low-intensity extracorporeal shockwave therapy (LiESWT) or acoustic soundwave therapy, is a promising treatment for erectile dysfunction because it permanently increases blood flow to the penis. This type of therapy has been used in clinics for over a decade, but a new shockwave therapy device, the Phoenix, allows men to improve their erections from the privacy and comfort of their own homes.

As our demand to interface quickly with computers and digital information grows, it might make sense to use recent advancements in neural interface technology. Cyborgization, the idea of humans and machines working seamlessly together, could apply in many contexts and allow us to provide quality care in new ways. The possibilities are vast, from providers being able to precisely control robotic surgical tools to patients having integrated systems that monitor vital signs and alert them to impending trouble.

Electronic prescribing is growing for many reasons. It reduces errors, speeds up medication reconciliation, and alerts providers to potential adverse interactions or patient allergies.

Digital diagnostic tools are becoming more powerful. With 4K video and high-resolution cameras, it's easier than ever to get a second opinion and confirm a difficult diagnosis, and there are more specialists to consult when a case is hard to solve.

While patient history is an important part of quality care, it is often the most difficult information to access. Patient portals bring all of a patient's information and medical history together in one place.

Provider compliance centers on health records and personal health information (PHI), which are also a major source of anxiety for the healthcare IT professionals responsible for security. Blockchain offers a potential answer: at its core is a shared, append-only transaction log that no single party can alter on its own.

Cognitive technology increasingly uses digital records and AI advances to process large quantities of data in new ways. It identifies patterns that can be used to predict disease early and catch it before it progresses. Computer vision, machine learning, and natural language processing are just a few of its other applications.

Blockchain also protects encrypted data from being altered or changed, and it can improve patient care by linking patients to their data rather than to their identities.
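
For a sense of how an append-only, tamper-evident log works (a minimal sketch of the general blockchain idea, not any specific healthcare product), each record can carry a hash of the previous one, so altering any stored entry breaks the chain; the record fields and pseudonymous patient ID below are illustrative assumptions.

```python
# Minimal sketch of an append-only, tamper-evident record log (the core
# blockchain idea). Record contents are illustrative, and the patient is
# referenced by a pseudonymous ID rather than by identity.
import hashlib, json, time

def make_block(record: dict, prev_hash: str) -> dict:
    body = {"record": record, "prev_hash": prev_hash, "timestamp": time.time()}
    body["hash"] = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
    return body

chain = [make_block({"patient_id": "anon-42", "event": "lab result stored"}, prev_hash="0" * 64)]
chain.append(make_block({"patient_id": "anon-42", "event": "prescription issued"}, chain[-1]["hash"]))

def verify(chain) -> bool:
    """Recompute each hash; any altered record breaks the chain."""
    for i, block in enumerate(chain):
        body = {k: v for k, v in block.items() if k != "hash"}
        if hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest() != block["hash"]:
            return False
        if i > 0 and block["prev_hash"] != chain[i - 1]["hash"]:
            return False
    return True

print("chain valid:", verify(chain))
```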

See the original post:
12 Technology Innovations That Will Influence the Future of Healthcare - The Southern Maryland Chronicle