Archive for the ‘Machine Learning’ Category

Syapse Unveils Two New Studies on Use of Machine Learning on Real-World Data to Identify and Treat Cancer With Precision at ASCO 2022 – GlobeNewswire

SAN FRANCISCO, May 27, 2022 (GLOBE NEWSWIRE) -- Syapse, a leading real-world evidence company dedicated to extinguishing the fear and burden of serious diseases by advancing real-world care, today announced two new studies on how machine learning applied to real-world data can power precision medicine solutions. Syapse will be presenting at the American Society of Clinical Oncology (ASCO) Annual Meeting, held June 3-7, 2022, in Chicago.

"This year's ASCO is centered on a theme of innovation to make cancer care more equitable, convenient, and efficient. The two studies we are presenting align well with this objective, with a focus on how machine learning can be applied to real-world data to bring the identification of patient characteristics, and of specific patient cohorts of interest, to scale," said Thomas Brown, MD, chief medical officer of Syapse. "The transformational effort to pursue more personalized, targeted treatments for patients with cancer can be empowered by leveraging real-world data to produce insights in the form of real-world evidence, as a complement to classical clinical trials."

Unveiled at ASCO, the Syapse studies include:

In addition to presenting this research at ASCO, Syapse has created an online ASCO hub with more information about its research, its interactive booth experience and how its work with real-world evidence is transforming data into answers that improve care for patients everywhere. For ASCO attendees, please visit Syapse at booth #18143 during the show.

About Syapse

Syapse is a company dedicated to extinguishing the fear and burden of oncology and other serious diseases by advancing real-world care. By marrying clinical expertise with smart technologies, we transform data into evidence, and then into experience, in collaboration with our network of partners, who are committed to improving patients' lives through community health systems. Together, we connect comprehensive patient insights to our network to empower our partners in driving real impact and improving access to high-quality care.

Syapse Contact
Christian Edgington, Media & Engagement
cedgington@realchemistry.com

The rest is here:
Syapse Unveils Two New Studies on Use of Machine Learning on Real-World Data to Identify and Treat Cancer With Precision at ASCO 2022 - GlobeNewswire

What Is AI? Understanding The Real-World Impact Of Artificial Intelligence – Forbes

Artificial intelligence is today's most discussed and debated technology, generating widespread adulation and anxiety, and significant government and business interest and investments. But six years after DeepMind's AlphaGo defeated a Go champion, after countless research papers showed AI's superior performance over humans in a variety of tasks, and after numerous surveys reported rapid adoption, what is the actual business impact of AI?

Human intelligence communicating with the artificial kind. (Photo by Jonas Gratzer/LightRocket via Getty Images)

"2021 was the year that AI went from an emerging technology to a mature technology... that has real-world impact, both positive and negative," declared the 2022 AI Index Report. The 5th installment of the index measures the growing impact of AI in a number of ways, including private investment in AI, the number of AI patents filed, and the number of bills related to AI that were passed into law in legislatures of 25 countries around the world.

There is nothing in the report, however, about real-world impact as I would define it: measurably successful, long-lasting, and significant deployments of AI. There is also no definition of AI in the report.

Going back to the first installment of the AI Index Report, published in 2017, still does not yield a definition of what the report is all about. But the goal of the report is stated upfront: "the field of AI is still evolving rapidly and even experts have a hard time understanding and tracking progress across the field. Without the relevant data for reasoning about the state of AI technology, we are essentially flying blind in our conversations and decision-making related to AI."

"Flying blind" is a good description, in my opinion, of gathering data about something you don't define.

The 2017 report was created and launched as a project of the One Hundred Year Study on AI at Stanford University (AI100), released in 2016. That study's first section did ask the question "What is artificial intelligence?" only to provide the traditional circular definition: AI is what makes machines intelligent, and intelligence is "the quality that enables an entity to function appropriately and with foresight in its environment."

So were the very first computers (popularly called "Giant Brains") intelligent because they could calculate, even faster than humans? The One Hundred Year Study answers: "Although our broad interpretation places the calculator within the intelligence spectrum, the frontier of AI has moved far ahead, and functions of the calculator are only one among the millions that today's smartphones can perform." In other words, anything a computer did in the past or does today is AI.

The study also offers an operational definition: "AI can also be defined by what AI researchers do." Which is probably the reason this year's AI Index measures the real-world impact and progress of AI, among other indicators, by the number of citations and AI papers (defined as AI by the papers' authors and indexed with the keyword "AI" by the publications).

Moving beyond circular definitions, however, the study provides us with a clear and concise description of what prompted the sudden frenzy and fear around a term that was coined back in 1955: "Several factors have fueled the AI revolution. Foremost among them is the maturing of machine learning, supported in part by cloud computing resources and wide-spread, web-based data gathering. Machine learning has been propelled dramatically forward by deep learning, a form of adaptive artificial neural networks trained using a method called backpropagation."

Indeed, machine learning (a term coined in 1959), or teaching a computer to classify data (spam or not spam) and/or make a prediction (if you liked book X, you will love book Y), is what today's AI is all about. Specifically, today's AI is mostly machine learning's most recent variety, deep learning, which has dominated since its image-classification breakthrough in 2012 and involves classifying very large amounts of data with numerous characteristics.
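To make "teaching a computer to classify data" concrete, here is a minimal sketch (mine, not from the article) using scikit-learn and a handful of made-up messages:

```python
# A minimal text classifier: "teach" the model with labeled examples,
# then ask it to label a message it has never seen.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

# Toy training data, entirely invented for illustration.
messages = [
    "win a free prize now", "claim your free gift card",
    "meeting moved to 3pm", "lunch tomorrow?",
]
labels = ["spam", "spam", "not spam", "not spam"]

# Bag-of-words features plus naive Bayes: learn word statistics per class.
model = make_pipeline(CountVectorizer(), MultinomialNB())
model.fit(messages, labels)  # this step is the "learning from data"

# Classify an unseen message based on the patterns learned above.
print(model.predict(["free prize waiting for you"]))  # -> ['spam']
```

The same fit/predict pattern underlies the book-recommendation example: replace word counts with purchase histories and the label with "will like book Y."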

AI is learning from data. The AI of the 1955 variety, which generated a number of boom-and-bust cycles, was based on the assumption that "every aspect of learning or any other feature of intelligence can in principle be so precisely described that a machine can be made to simulate it." That was the vision and, by and large, it hasn't so far materialized in a meaningful, sustained way that demonstrates significant real-world impact.

One serious problem with that vision was that it predicted the arrival, in the not-so-distant future, of a machine with human intelligence capabilities (or even one surpassing humans), a prediction reiterated periodically by very intelligent humans, from Turing to Minsky to Hawking. This desire to play God, associated with the old-fashioned AI, has confounded and confused the discussion of present-day AI (and business and government actions around it). This is what happens when you don't define what you are talking about (or define AI as what AI researchers do).

The combination of new methods of data analysis (backpropagation), specialized hardware (GPUs) best suited to the type of calculations performed, and, most important, the availability of lots of data (already tagged and classified data used for teaching the computer the correct classification) is what led to today's AI revolution.
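As a toy illustration of those ingredients (my own sketch, not from the report), the loop below trains a two-layer neural network with backpropagation on a tiny labeled dataset. Real deep learning runs this same loop at vastly larger scale, on GPUs, over millions of tagged examples:

```python
import numpy as np

# Backpropagation in miniature: a 2-layer network learning XOR.
rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)  # the labeled ("tagged") data

W1, b1 = rng.normal(size=(2, 8)), np.zeros(8)
W2, b2 = rng.normal(size=(8, 1)), np.zeros(1)
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

for step in range(5000):
    # Forward pass: compute predictions from the current weights.
    h = sigmoid(X @ W1 + b1)
    p = sigmoid(h @ W2 + b2)
    # Backward pass: propagate the prediction error back through each layer.
    dp = (p - y) * p * (1 - p)
    dh = (dp @ W2.T) * h * (1 - h)
    # Gradient step: nudge all weights to reduce the error.
    W2 -= 0.5 * h.T @ dp; b2 -= 0.5 * dp.sum(axis=0)
    W1 -= 0.5 * X.T @ dh; b1 -= 0.5 * dh.sum(axis=0)

print(p.round(3).ravel())  # typically approaches [0, 1, 1, 0]
```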

Call it the triumph of statistical analysis. This revolution is actually a 60-year evolution of the use of increasingly sophisticated statistical analysis to assist in a wide variety of business (or medical, or governmental, etc.) decisions, actions, and transactions. It has been called "data mining," "predictive analytics," and, most recently, "data science."

Last year, a survey of 30,000 American manufacturing establishments found that productivity is significantly higher among plants that use predictive analytics. (Incidentally, Erik Brynjolfsson, the lead author of that study, has also been a steering committee member of the AI Index Report since its inception.) It seems that it is possible to find a measurable real-world impact of AI, as long as you define it correctly.

AI is learning from data. And successful, measurable business use of learning from data is what I would call "Practical AI."

Read more from the original source:
What Is AI? Understanding The Real-World Impact Of Artificial Intelligence - Forbes

How Artificial Intelligence and Machine Learning are transforming the business landscape – Times of India

Artificial intelligence (AI) and machine learning (ML) have taken the world by storm. From music to loan and credit card recommendations, solutions powered by AI and ML are fast penetrating many aspects of our lives.

Today every enterprise wishes to add AI and ML to its technology mix. In an Accenture survey, 84% of executives said they wouldn't achieve their growth objectives without scaling AI. AI and ML are gaining traction because enterprises can gain competitive differentiation and accelerate their business growth using solutions powered by these technologies.

AI and ML-based solutions augment human effort in ways that help enterprises optimize costs, enhance operational efficiency, and deliver customer-centric services. In a McKinsey survey, most respondents said their organizations had adopted AI capabilities, and AI's impact on both the bottom line and cost savings is growing.

As the impact of AI and ML grows manifold, let's look at the top advantages enterprises can gain by implementing these technologies:

Enterprises are always looking for ways to enhance customer engagement to improve acquisition and retention. AI and ML empower enterprises to understand their customers at a deeper, personal level by generating insights based on customer behaviour and transaction history. These insights help enterprises deliver personalized customer engagement and tailor-made offerings. Enterprises can also explore cross-sell and up-sell opportunities by anticipating customers' needs. For instance, banks can provide customized loan recommendations and financial offerings based on a customer's credit history and risk appetite.

Today customers expect round-the-clock assistance while using services on their preferred channels. To improve customer support services and communications, enterprises can deploy an AI-enabled virtual assistant or chatbot system that helps customers find solutions to their queries on demand. For example, insurance companies can quickly cater to customer queries by deploying self-service portals. Customer support teams can also leverage insights generated through AI systems to gain context into the customer journey and provide better assistance.

Investing in technologies like AI and ML significantly enhances the accuracy of complex business processes. AI and ML systems have self-learning capabilities: they become more intelligent as more data is fed to them. Enterprises can leverage these systems' dynamic, trainable capabilities to enable accurate content extraction, automatic document classification, and efficient sentiment analysis. This also ensures effective content governance and streamlined content-centric processes. For instance, banks can accelerate customer onboarding with AI-powered document and identity verification tools.

As data volumes and competition grow simultaneously, enterprises must stay ahead of the curve and futureproof themselves. Predictive analytics, enabled through AI and ML, can help enterprises minimize risks, make intelligent decisions, and improve overall business outcomes. Banks can use predictive analytics to assess loan applications based on customers' previous transactions. Similarly, insurance companies can leverage analytics for risk prevention and fraud detection, and retailers can use AI-enabled predictive analytics to prevent instances of understocking and overstocking.
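As a purely illustrative sketch of the loan-assessment idea, the snippet below trains a simple predictive model on synthetic historical outcomes; every feature name and number here is invented:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
# Synthetic history of 500 past loans: [income, debt_ratio, late_payments],
# standardized values; features and data are hypothetical.
X_history = rng.normal(size=(500, 3))
# Invented rule: higher debt ratio and more late payments -> more defaults.
y_default = (X_history[:, 1] + X_history[:, 2] + rng.normal(size=500) > 1).astype(int)

model = LogisticRegression().fit(X_history, y_default)

# Score a hypothetical new application by predicted default probability.
applicant = np.array([[0.2, 1.5, 2.0]])
print("estimated default risk:", model.predict_proba(applicant)[0, 1])
```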

To Conclude

AI and ML can contribute tremendously to an organization's growth, irrespective of its size or sector. However, business leaders must refrain from implementing these technologies outright, without the right context. To successfully leverage AI and ML, leaders must identify the areas where these technologies can add value and implement them based on their suitability.

As more data is added to global servers, scaling AI and ML technologies will be critical for enterprises to transform their data into data wealth in the true sense.

Views expressed above are the author's own.


Follow this link:
How Artificial Intelligence and Machine Learning are transforming the business landscape - Times of India

Madrona Partners with PitchBook to Bring Machine Intelligence to the #IA40 – Madrona Venture Group

Madrona saw the move to intelligent applications early on: we have been investing in the founders building them, the technology powering them, and the infrastructure to support them for over 10 years. Today, we believe machine intelligence is the future of software: every successful application built now and in the future will be an intelligent application. Out with SaaS, in with Intelligent Apps!

Want to listen to Ishani and Daniel Cook from PitchBook talk through this new partnership? Listen here.


In 2021 we launched the inaugural IA40, a ranking of the top 40 intelligent application companies. Created in partnership with Goldman Sachs and over 50 of the nation's leading venture capital firms, the list covers early- to late-stage private companies building the future of software. And since the list was announced in December 2021, IA40 companies, in aggregate, have raised over $3 billion in new rounds of financing!


Looking ahead to the 2022 IA40, we are thrilled to announce the addition of our data partnership with PitchBook, the industry-leading private and public equity data provider. Madrona has worked with PitchBook for many years now (they are also located just a couple of blocks away from our offices) and their platform has become a valuable tool for the entire team.

This partnership presents an opportunity to make a significant change to the IA40. PitchBook is well-known for delivering timely, comprehensive, and transparent data on private and public equity markets collected through its proprietary information infrastructure. Now, they are harnessing that data to build powerful machine learning algorithms to predict financial outcomes. In this way, PitchBook embodies the broader shift from software as a service to an intelligent application.

To arrive at last year's list, we asked more than 50 judges at 40 top venture capital firms to nominate and vote on intelligent application companies. They nominated over 300 companies and voted for the Top 40: 10 companies in each category (early-, mid-, and late-stage, plus enabler companies). You can find the list of 2021 winning companies, like SeekOut, Gong, Starburst, dbt, and others, here.

This year, in addition to the judges and voting process, we're leveraging PitchBook's new and proprietary machine learning algorithm to help determine the top 40 companies. This algorithm leverages data from the PitchBook platform to predict the likelihood of different outcomes for each company.

As the world around us becomes more and more data-driven, we looked for an approach that would mirror that shift in our own methodology. The voting process for 2021 was based on leading venture investors' viewpoints, a great proxy. Yet, for reference, PitchBook reports $115B invested in AI and ML companies in 2021, across over 5,000 companies in the space. And now more than ever, there is concrete data at our fingertips about each of these companies.

Employee growth, founder information, and funding rounds are valuable individual data points. Collectively, and using a machine learning approach, that data can be used to derive signal from noise. That signal will help create a better, more robust, and more intelligent ranking of the IA40. In short, PitchBook's new machine learning algorithm will parse a number of data points across all nominated companies to help us generate a more data-driven ranking of the top intelligent applications. We couldn't be more excited about this evolution of the IA40.
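PitchBook's actual algorithm and features are proprietary, but as a hypothetical sketch of the general approach described above (scoring nominees on structured signals and ranking by predicted outcome), one might write:

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(7)
# Invented per-company signals: [employee growth rate, rounds raised, total funding ($M)].
X = rng.random((300, 3)) * [3.0, 6.0, 500.0]
# Synthetic "strong outcome" labels, loosely tied to growth and funding.
y = (0.8 * X[:, 0] + 0.004 * X[:, 2] + rng.normal(size=300) > 2.0).astype(int)

model = GradientBoostingClassifier().fit(X, y)

# Rank five hypothetical nominees by predicted likelihood of a strong outcome.
nominees = rng.random((5, 3)) * [3.0, 6.0, 500.0]
scores = model.predict_proba(nominees)[:, 1]
print("ranking (best first):", np.argsort(scores)[::-1])
```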

Now a sneak peek via retrospective. We ran the PitchBook algorithm against the 2021 IA40 companies. Here are some of the key takeaways.

PitchBook's participation in this year's ranking process allows us to power the IA40 list using machine learning and, subsequently, create the industry's most accurate ranking of promising intelligent application companies. We couldn't be more excited to showcase this algorithm's capabilities and to partner with an industry-leading firm here in the Pacific Northwest.

See the original post:
Madrona Partners with PitchBook to Bring Machine Intelligence to the #IA40 - Madrona Venture Group

Deep Learning at the Edge Simplifies Package Inspection – Vision Systems Design

By Brian Benoit, Senior Manager Product Marketing, In-Sight Products, Cognex

Machine vision helps the packaging industry improve process control, improve product quality, and comply with packaging regulations. By removing human error and subjectivity with tightly controlled processes based on well-defined, quantifiable parameters, machine vision automates a variety of package inspection tasks. Machine vision tasks in the packaging industry include label inspection, optical character reading and verification (OCR/OCV), presence-absence inspection, counting, safety seal inspection, measurement, barcode reading, identification, and robotic guidance.

Machine vision systems deliver consistent performance when dealing with well-defined packaging defects. Parameterized, analytical, rule-based algorithms analyze package or product features captured within images that can be mathematically defined as either good or bad. However, analytical machine vision tools get pushed to their limits when potential defects are difficult to numerically define and the appearance of a defect significantly varies from one package to the next, making some applications difficult or even impossible to solve with more traditional tools.

In contrast, deep learning software relies on example-based training and neural networks to analyze defects, find and classify objects, and read printed characters. Instead of relying on engineers, systems integrators, and machine vision experts to tune a unique set of parameterized analytical tools until application requirements are satisfied, deep learning relies on operators, line managers, and other subject-matter experts to label images. By showing the deep learning system what a good part looks like and what a bad part looks like, deep learning software can make a distinction between good and defective parts, as well as classify the type of defects present.

Not so long ago, perhaps a decade, deep learning was available only to researchers, data scientists, and others with big budgets and highly specialized skills. However, over the last few years many machine vision system and solution providers have introduced powerful deep learning software tools tailored for machine vision applications.

In addition to VisionPro Deep Learning software from Cognex (Natick, MA, USA; http://www.cognex.com), Adaptive Vision (Gliwice, Poland; http://www.adaptive-vision.com) offers a deep learning add-on for its Aurora Vision Studio; Cyth Systems (San Diego, CA, USA; http://www.cyth.com) offers Neural Vision; Deevio (Berlin, Germany; http://www.deevio.ai) has a neural net supervised learning mode; MVTec Software (Munich, Germany; http://www.mvtec.com) offers MERLIC; and numerous other companies offer open-source toolkits to develop software specifically targeted at machine vision applications.

However, one common barrier to deploying deep learning in factory automation environments is the level of difficulty involved. Deep learning projects typically consist of four project phases: planning, data collection and ground truth labeling, optimization, and factory acceptance testing (FAT). Deep learning also frequently requires many hundreds of images and powerful hardware in the form of a PC with a GPU used to train a model for any given application. But, deep learning is now easier to use with the introduction of innovative technologies that process images at the edge.

Deep learning at the edge (edge learning), a subset of deep learning, uses a set of pretrained algorithms that process images directly on-device. Compared with more traditional deep learning-based solutions, edge learning requires less time and fewer images, and involves simpler setup and training.
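Cognex does not publish the internals of its edge learning tools, but the core idea resembles transfer learning: start from a network pretrained on large image sets and retrain only a small classification head on a handful of labeled images. A minimal sketch, assuming a recent torchvision and hypothetical folder and class names:

```python
import torch
import torch.nn as nn
from torchvision import datasets, models, transforms

# A few labeled images per class, one folder per class, e.g.
# train_images/ok and train_images/ng (hypothetical paths).
tfm = transforms.Compose([transforms.Resize((224, 224)), transforms.ToTensor()])
data = datasets.ImageFolder("train_images", transform=tfm)
loader = torch.utils.data.DataLoader(data, batch_size=4, shuffle=True)

# Start from a network pretrained on ImageNet and freeze its backbone.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
for p in model.parameters():
    p.requires_grad = False
# Replace the final layer with a small head for our own classes.
model.fc = nn.Linear(model.fc.in_features, len(data.classes))

opt = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()
for epoch in range(10):  # with a handful of images, minutes on a CPU
    for images, targets in loader:
        opt.zero_grad()
        loss_fn(model(images), targets).backward()
        opt.step()
```

Because only the small head is trained, this needs neither an external GPU nor hundreds of images, which mirrors the trade-off the article attributes to edge learning.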

Edge learning requires no automation or machine vision expertise for deployment and consequently offers a viable automation solution for everyone, from machine vision beginners to experts. Instead of relying on engineers, systems integrators, and machine vision experts, edge learning uses the existing knowledge of operators, line engineers, and others to label images for system training.

Consequently, edge learning helps line operators looking for a straightforward way to integrate automation into their lines as well as expert automation engineers and systems integrators who use parameterized, analytical, rule-based machine vision tools but lack specific deep learning expertise. By embedding efficient, rules-based machine vision within a set of pretrained deep learning algorithms, edge learning devices provide the best of both worlds, with an integrated tool set optimized for packaging and factory automation applications.

With a single smart camera-based solution, edge learning can be deployed on any line within minutes. This solution integrates high-quality vision hardware, machine vision tools that preprocess images to reduce computational load, deep learning networks pretrained to solve factory automation problems, and a straightforward user interface designed for industrial applications.

Edge learning differs from existing deep learning frameworks in that it is not general purpose but is specifically tailored for industrial automation. And, it differs from other methods in its focus on ease of use across all stages of application deployment. For instance, edge learning requires fewer images to achieve proof of concept, less time for image setup and acquisition, no external GPU, and no specialized programming.

Developing a standard classification application using traditional deep learning methodology may require hundreds of images and several weeks. Edge learning makes defect classification much simpler. By analyzing multiple regions of interest (ROIs) in its field of view (FOV) and classifying each of those regions into multiple categories, edge learning lets anyone quickly and easily set up sophisticated assembly verification applications.

In the food packaging industry, edge learning technology is increasingly being used for verification and sorting of frozen meal tray sections. In many frozen meal packing applications, robots pick and place various food items into trays passing by on a high-speed line. For example, robots may place protein in the bottom center section, vegetables in the top left section, a side dish or dessert item in the top middle section, and some type of starch in the top right section of each tray.

Each section of a tray may contain multiple SKUs. For example, the protein section may include either meat loaf, turkey, or chicken. The starch section may contain pasta, rice, or potatoes. Edge learning makes it possible for operators to click and drag bounding boxes around characteristic features on a meal tray, fixing defined tray sections for training.

Next, the operator reviews a handful of images, classifying each possible class. Frequently, this can be done in a few minutes, with as few as three to five images for each class. During high-speed operation, the edge learning system can accurately classify the different sections. To accommodate entirely new classes or new varieties of existing classes during production, the tool can be updated with a few images in each new category.
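A hypothetical sketch of that multi-ROI setup: fixed bounding boxes define the tray sections, and each cropped section is classified independently. The coordinates, section names, and classify() function are all invented for illustration:

```python
from PIL import Image

# Hypothetical tray layout: (left, upper, right, lower) pixel boxes,
# as an operator might draw them around each section.
TRAY_SECTIONS = {
    "protein":   (120, 300, 420, 560),
    "vegetable": (40, 40, 260, 260),
    "dessert":   (280, 40, 500, 260),
    "starch":    (520, 40, 740, 260),
}

def inspect_tray(frame_path, classify):
    """Classify each fixed section of a tray image independently."""
    frame = Image.open(frame_path)
    return {name: classify(frame.crop(box))
            for name, box in TRAY_SECTIONS.items()}

# Example result: {"protein": "chicken", "starch": "rice", ...}, where
# classify() is any few-shot model trained as sketched earlier.
```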

For complex or highly customized applications, traditional deep learning is an ideal solution because it provides the capacity to process large and highly detailed image sets. Often, such applications involve objects with significant variations, which demands robust training capabilities and advanced computational power. Image sets with hundreds or thousands of images must be used for training to account for such significant variation and to capture all potential outcomes.

Enabling users to analyze such image sets quickly and efficiently, traditional deep learning delivers an effective solution for automating sophisticated tasks. Full-fledged deep learning products and open-source frameworks are well-designed to address complex applications. However, many factory automation applications entail far less complexity, making edge learning a more suitable solution.

With algorithms designed specifically for factory automation requirements and use cases, edge learning eliminates the need for an external GPU and hundreds or thousands of training images. Such pretraining, supported by appropriate traditional parameterized analytical machine vision tools, can vastly improve many machine vision tasks. The result is edge learning, which combines the power of deep learning with a light and fast set of vision tools that line engineers can apply daily to packaging problems and other factory automation challenges.

Compared with deep learning solutions that can require hours to days of training and hundreds to thousands of images, edge learning tools are typically trained in minutes using a few images per class. Edge learning streamlines deployment to allow fast ramp-up for manufacturers and the ability to adjust quickly and easily to changes.

This ability to find variable patterns in complex systems makes deep learning machine vision an exciting solution for inspecting objects with inconsistent shapes and defects, such as flexible packaging in first aid kits.

For the purposes of edge learning, Cognex has combined traditional analytical machine vision tools in ways specific to the demands of each application, eliminating the need to chain vision tools or devise complex logic sequences. Such tools offer fast preprocessing of images and the ability to extract density, edge, and other feature information that is useful for detecting and analyzing manufacturing defects. By finding and clarifying the relevant parts of an image, these tools reduce the computational load of deep learning.

For example, packing a lot of sophisticated hardware into a small form factor, Cognex's In-Sight 2800 vision system runs edge learning entirely on the camera. The embedded smart camera platform includes an integrated autofocus lens, lighting, and an image sensor. The heart of the device is a 1.6-MPixel sensor.

An autofocus lens keeps the object of interest in focus, even as the FOV or distance from the camera changes. Smaller and lighter than equivalent mechanical lenses, liquid autofocus lenses also offer improved resistance to shock and vibration.

Key for a high-quality image, the smart camera is available with integrated lighting in the form of a multicolor torchlight that offers red, green, blue, white, and infrared options. To maximize contrast, minimize dark areas, and bring out necessary detail, the torchlight comes with field-interchangeable optical accessories such as lenses, color filters, and diffusers, increasing system flexibility for handling numerous applications.

With 24 V of power, the In-Sight 2800 vision system has an IP67-rated housing, and Gigabit Ethernet connectivity delivers fast communication speed and image offloading. This edge learning-based platform also includes traditional analytical machine vision tools that can be parameterized for a variety of specialized tasks, such as location, measurement, and orientation.

Training edge learning is like training a new employee on the line. Edge learning users don't need to understand machine vision systems or deep learning. Rather, they only need to understand the classification problem that needs to be solved. If it is straightforward (for instance, classifying acceptable and unacceptable parts as OK/NG), the user must only understand which items are acceptable and which are not.

Sometimes line operators can include process knowledge not readily apparent, derived from testing down the line, which can reveal defects that are hard for even humans to detect. Edge learning is particularly effective at figuring out which variations in a part are significant and which variations are purely cosmetic and do not affect functionality.

Edge learning is not limited to binary classification into OK/NG; it can classify objects into any number of categories. If parts need to be sorted into three or four distinct categories, depending on components or configurations, that can be set up just as easily.

To simplify factory automation and handle machine vision tasks of varying complexity, edge learning is useful in a wide range of industries, including medical, pharmaceutical, and beverage packaging applications.

Automated visual inspection is essential for supporting packaging quality and compliance while improving packaging line speed and accuracy. Fill level verification is an emerging use of edge learning technology. In the medical and pharmaceutical industries, vials filled with medication to a preset level must be inspected before they are capped and sealed to confirm that levels are within proper tolerances.

Unconfused by reflection, refraction, or other image variations, edge learning can be easily trained to verify fill levels. Fill levels that are too high or too low can be quickly classified as NG, while only those within the proper tolerances are classified as OK.

Another emerging use of edge learning technology is cap inspection in the beverage industry. Bottles are filled with soft drinks and juices and sealed with screw caps. If the rotary capper cross-threads a cap, applies improper torque, or causes other damage during the capping process, it can leave a gap that allows for contamination or leakage.

To train an edge learning system in capping, images showing well-sealed caps are labeled "good"; images showing caps with slight gaps, which might be almost imperceptible to the human eye, are labeled "no good." After training is complete, only fully sealed caps are categorized as OK; all other caps are classified as NG.

While challenges for traditional rule-based machine vision continue to arise as packaging application complexity increases, easy-to-use edge learning on embedded smart camera platforms has proved to be a game-changing technology. Edge learning is more capable than traditional machine vision analytical tools and is extremely easy to use with previously challenging applications.

Read more from the original source:
Deep Learning at the Edge Simplifies Package Inspection - Vision Systems Design