Archive for the ‘Artificial Intelligence’ Category

Ayar Labs to Accelerate Development and Application of Optical Interconnects in Artificial Intelligence/Machine Learning Architectures with NVIDIA -…

SANTA CLARA, Calif.--(BUSINESS WIRE)--Ayar Labs, the leader in chip-to-chip optical connectivity, is working with NVIDIA to develop groundbreaking artificial intelligence (AI) infrastructure based on optical I/O technology to meet the future demands of AI and high performance computing (HPC) workloads. The collaboration will focus on integrating Ayar Labs' technology to develop scale-out architectures enabled by high-bandwidth, low-latency and ultra-low-power optical-based interconnects for future NVIDIA products. Together, the companies plan to accelerate the development and adoption of optical I/O technology to support the explosive growth of AI and machine learning (ML) applications and data volumes.

Meeting Future Performance and Power Requirements with Optical I/O

Optical I/O uniquely changes the performance and power trajectories of system designs by enabling compute, memory and networking ASICs to communicate with dramatically increased bandwidth, at lower latency, over longer distances and at a fraction of the power of existing electrical I/O solutions. The technology is also foundational to enabling emerging heterogeneous compute systems, disaggregated/pooled designs, and unified memory architectures that are critical to accelerating future data center innovation.

"Today's state-of-the-art AI/ML training architectures are limited by current copper-based compute-to-compute interconnects to build scale-out systems for tomorrow's requirements," said Charles Wuischpard, CEO of Ayar Labs. "Our work with NVIDIA to develop next-generation solutions based on optical I/O provides the foundation for the next leap in AI capabilities to address the world's most sophisticated problems."

Delivering the Next Million-X Speedup for AI with Optical Interconnect

"Over the past decade, NVIDIA-accelerated computing has delivered a million-X speedup in AI," said Rob Ober, Chief Platform Architect for Data Center Products at NVIDIA. "The next million-X will require new, advanced technologies like optical I/O to support the bandwidth, power and scale requirements of future AI and ML workloads and system architectures."

As AI model sizes continue to grow, NVIDIA believes that by 2023 models will have 100 trillion or more connections, a 600X increase from 2021, exceeding the technical capabilities of existing platforms. Traditional electrical-based interconnects will reach their bandwidth limits, driving lower application performance, higher latency and increased power consumption. New interconnect solutions and system architectures are needed to address the scale, performance and power demands of the next generation of AI. Ayar Labs' collaboration with NVIDIA is focused on addressing these future challenges by developing next-generation architectures with optical I/O.
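As a rough sanity check on those growth figures, the short calculation below works only from the two numbers quoted in the release (roughly 100 trillion connections by 2023, described as a 600X increase from 2021); the implied 2021 baseline and annualized growth rate are our own back-of-the-envelope illustration, not NVIDIA's published figures.

```python
# Back-of-the-envelope check of the model-growth figures quoted above.
# Uses only the numbers stated in the release: ~100 trillion connections
# by 2023, described as a 600X increase from 2021.

target_2023 = 100e12      # ~100 trillion connections by 2023
growth_factor = 600       # stated increase from 2021 to 2023

baseline_2021 = target_2023 / growth_factor   # ~1.7e11, i.e. roughly 167 billion
annual_growth = growth_factor ** 0.5          # ~24.5x per year over the two years

print(f"Implied 2021 baseline: {baseline_2021:.3g} connections")
print(f"Implied annualized growth: ~{annual_growth:.1f}x per year")
```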

To learn more about Ayar Labs' chip-to-chip optical technology, please visit: https://ayarlabs.com/

About Ayar Labs

Ayar Labs is disrupting the traditional performance, cost, and efficiency curves of the semiconductor and computing industries by driving a 1000x improvement in interconnect bandwidth density at 10x lower power. Ayar Labs' patented approach uses industry-standard, cost-effective silicon processing techniques to develop high-speed, high-density, low-power optical-based interconnect chiplets and lasers to replace traditional electrical-based I/O. The company was founded in 2015 and is funded by a number of domestic and international venture capital firms as well as strategic investors. For more information, visit http://www.ayarlabs.com.


Artificial Intelligence Comes to Campus: Gray Associates Announces Predict Program Size Software That Improves Higher Education Decisions for Academic…

CONCORD, Mass., May 24, 2022 /PRNewswire/ -- GRAY Associates today announces its continued commitment to accelerating higher education's future and growth strategy through innovative new software tools and expanded data-informed academic program evaluation support.

Gray's sophisticated new Predict Program Size offering has launched, joining the PES+ (Program Evaluation System) Software to empower academic program professionals to make smart data-informed program planning decisions. Powered by artificial intelligence that encompasses the latest in machine learning, Predict Program Size maximizes outcomes by accurately estimating the potential size of current and new programs for an institution, identifying programs that will increase enrollment and revenue, and reducing the risk of new program failures.

This new machine learning-based approach provides universities with a way to predict enrollments and graduations and to make better decisions about the use of their valuable resources. "We have always been committed to our higher education community. I am proud of the advanced academic program planning tools and features being developed for our PES+ subscribers. These advancements not only include Predict Program Size but also Predict Margins for Economics and a new Program Portfolio Management dashboard," said Robert Gray Atkins, Gray's CEO.

As higher education becomes more competitive, institutions embrace customized data and analytics to help them make better-informed program decisions. Gray's PES+ software integrates local, regional, and national market data on educational programs, including data on student demand, competition, and employer needs.

Gray collects and analyzes data from the Bureau of Labor Statistics, job postings, Google searches, the American Community Survey, IPEDS, and more, so schools can use PES+ to score and rank thousands of programs and decide what actions to take. PES+ also calculates the revenue, cost, and margin of academic programs, courses, and sections to enable institutions to understand the markets and margins for their programs.
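The release does not describe Gray's actual models, but the workflow it outlines - combine market signals into a program score, rank programs, and compute per-program margin - can be sketched in a few lines. The field names, weights, and dollar figures below are hypothetical illustrations, not Gray's methodology or data.

```python
# Minimal sketch of the program scoring, ranking, and margin workflow
# described above. All field names, weights, and numbers are hypothetical;
# Gray's actual PES+ models and data are not public.

programs = [
    {"name": "Data Analytics BS", "demand": 0.9, "employer": 0.8, "competition": 0.6,
     "revenue": 1_200_000, "cost": 900_000},
    {"name": "Art History BA", "demand": 0.3, "employer": 0.2, "competition": 0.4,
     "revenue": 400_000, "cost": 450_000},
]

# Illustrative weights: reward demand signals, penalize competitive intensity.
WEIGHTS = {"demand": 0.5, "employer": 0.3, "competition": -0.2}

for p in programs:
    p["score"] = sum(WEIGHTS[k] * p[k] for k in WEIGHTS)
    p["margin"] = p["revenue"] - p["cost"]  # per-program margin

# Rank programs by score, as an institution might when deciding what to
# start, sustain, grow, or sunset.
for p in sorted(programs, key=lambda x: x["score"], reverse=True):
    print(f"{p['name']}: score={p['score']:.2f}, margin=${p['margin']:,}")
```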

About Gray Associates

Gray helps colleges and universities make data-informed decisions about their academic programs. Gray's software integrates the best available data on academic program economics, student demand, employer needs, and competitive intensity for the precise market served by each institution. Faculty and administrative leaders use the software to score, rank, and evaluate programs in a collaborative process that builds consensus on programs to start, sunset, sustain, or grow. With Gray's tools and processes, institutions identify paths to increase enrollment, revenue, and efficiency, while investing in their mission and strengthening relationships among faculty and administrators.

Press Contact: Jackie Lucas, Vera Voce Communication, 978-255-1159, [emailprotected]

SOURCE Gray Associates


More Than 2 Billion Shipments of Devices with Machine Learning will Bring On-Device Learning and Inference Directly to Consumers by 2027 – PR Newswire

Federated, distributed, and few-shot learning can make consumers direct participants in Artificial Intelligence processes

NEW YORK, May 25, 2022 /PRNewswire/ -- Artificial Intelligence (AI) is all around us, but the processes of inference and learning that form the backbone of AI typically take place in big servers, far removed from consumers. New models are changing all that, according to ABI Research, a global technology intelligence firm, as the more recent frameworks of Federated Learning, Distributed Learning, and Few-Shot Learning can be deployed directly on consumers' devices, which have less compute and smaller power budgets, bringing AI to end users.

"This is the direction the market has increasingly been moving to, though it will take some time before the full benefits of these approaches become a reality, especially in the case of Few-Shot Learning, where a single individual smartphone would be able to learn from the data that it is itself collecting. This might well prove to be an attractive proposition for many, as it does not involve uploading data onto a cloud server, making for more secure and private data. In addition, devices can be highly personalized and localized as they can possess high situational awareness and better understanding of the local environments," explains David Lobina, Research Analyst at ABI Research.

ABI Research believes that it will take up to 10 years for such on-device learning and inference to become operational, and that this will require adopting new technologies, such as neuromorphic chips. The shift will take place first in more powerful consumer devices, such as autonomous vehicles and robots, before making its way into the likes of smartphones, wearables, and smart home devices. Big players such as Intel, NVIDIA, and Qualcomm have been working on these models in recent years and, together with neuromorphic chipset players such as BrainChip and GrAI Matter Labs, have provided chips that offer improved performance on a variety of training and inference tasks. Take-up is still small, but the technology can potentially disrupt the market.

"Indeed, these learning models have the potential to revolutionize a variety of sectors, most probably the fields of autonomous driving and the deployment of robots in public spaces, both of which are currently difficult to pull off, particularly in co-existence with other users," Lobina concludes. "Federated Learning, Distributed Learning, and Few-shot Learning reduces the reliance on cloud infrastructure, allowing AI implementers to create low latency, localized, and privacy preserving AI that can deliver much better user experience for end users."

These findings are from ABI Research's Federated, Distributed and Few-Shot Learning: From Servers to Devices application analysis report. This report is part of the company's AI and Machine Learning research service, which includes research, data, and ABI Insights. Application Analysis reports present in-depth analysis of key market trends and factors for a specific technology.

About ABI Research

ABI Research is a global technology intelligence firm delivering actionable research and strategic guidance to technology leaders, innovators, and decision makers around the world. Our research focuses on the transformative technologies that are dramatically reshaping industries, economies, and workforces today.


For more information about ABI Research's services, contact us at +1.516.624.2500 in the Americas, +44.203.326.0140 in Europe, +65.6592.0290 in Asia-Pacific, or visit www.abiresearch.com.

Contact Info:

Global: Deborah Petrara, Tel: +1.516.624.2558, [emailprotected]

SOURCE ABI Research


Harnessing Artificial Intelligence for Higher Quality Data in Preclinical Trials and Translational Research, Upcoming Webinar Hosted by Xtalks -…

In this free webinar, learn how deep learning artificial intelligence (AI) can be used in histology or pathology image analysis. Attendees will learn how AI augments preclinical investigation workflows. The featured speaker will discuss case studies from CROs, pharma and biotech companies and the benefits they experienced from using AI software. The speaker will also discuss how to create AI models, without the need for coding, for any image analysis task in histology or pathology.

TORONTO (PRWEB) May 25, 2022

The preclinical phase of drug discovery commonly includes going through numerous histopathological samples. This contributes to the drug development process being time-consuming and labour-intensive for pharmaceutical and contract research organizations. Difficulties also stem from having to detect very subtle changes with high precision and accuracy. Manual quantification of small changes, or specific cell counting, is not only cumbersome but often involves high costs as well.

The digitization of glass slides has paved the way for even more advanced technologies, like artificial intelligence (AI), to further advance image analysis in a variety of medical fields. AI-based methods have the potential to standardize slide review by reducing bias while increasing the speed and accuracy of analysis.

In this webinar, the featured speaker discusses using cloud-based software from Aiforia Technologies to automate image analysis tasks with AI, enhancing the CRO's work by delivering higher quality data, and therefore greater confidence in that data, to its clients across pharmaceutical companies. Using the software, the speaker created several AI models for assessing different markers in central nervous system (CNS) tissue.
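The webinar itself covers Aiforia's software; purely to make the idea of deep-learning image analysis concrete, the sketch below tiles a digitized slide image and classifies each tile with a small convolutional network, counting tiles flagged as marker-positive. The backbone, class labels, tile size, and random input are hypothetical placeholders, not Aiforia's product or models.

```python
# Illustrative sketch of deep-learning slide review: tile a digitized slide
# region and classify each tile with a CNN. The untrained ResNet here is a
# stand-in for a model that would, in practice, be trained on annotated tiles.
import torch
from torchvision import models

TILE = 224  # hypothetical tile size in pixels

model = models.resnet18(weights=None)                # placeholder backbone
model.fc = torch.nn.Linear(model.fc.in_features, 2)  # e.g. marker-positive vs. negative
model.eval()

# Stand-in for a scanned slide region (height x width x 3, uint8).
slide = torch.randint(0, 256, (2 * TILE, 3 * TILE, 3), dtype=torch.uint8)

positive_tiles = 0
with torch.no_grad():
    for y in range(0, slide.shape[0], TILE):
        for x in range(0, slide.shape[1], TILE):
            tile = slide[y:y + TILE, x:x + TILE].permute(2, 0, 1).float() / 255.0
            logits = model(tile.unsqueeze(0))  # shape (1, 2)
            positive_tiles += int(logits.argmax(dim=1).item() == 1)

print(f"Tiles flagged as marker-positive: {positive_tiles}")
```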

Join this webinar to hear case studies with large pharmaceutical and biotechnology clients, discover ways to harness artificial intelligence for higher quality data in preclinical trials and translational research, and discuss how deep learning augments workflows, providing quantifiable benefits from CRO to client.

Join Tate York, Director of Digital Image and Analysis, NSA Labs, for the live webinar on Monday, June 13, 2022, at 12pm EDT (9am PDT).

For more information, or to register for this event, visit Harnessing Artificial Intelligence for Higher Quality Data in Preclinical Trials and Translational Research.

ABOUT XTALKS

Xtalks, powered by Honeycomb Worldwide Inc., is a leading provider of educational webinars to the global life science, food and medical device community. Every year, thousands of industry practitioners (from life science, food and medical device companies, private & academic research institutions, healthcare centers, etc.) turn to Xtalks for access to quality content. Xtalks helps Life Science professionals stay current with industry developments, trends and regulations. Xtalks webinars also provide perspectives on key issues from top industry thought leaders and service providers.

To learn more about Xtalks, visit http://xtalks.com. For information about hosting a webinar, visit http://xtalks.com/why-host-a-webinar/

For the original version on PRWeb visit: https://www.prweb.com/releases/harnessing_artificial_intelligence_for_higher_quality_data_in_preclinical_trials_and_translational_research_upcoming_webinar_hosted_by_xtalks/prweb18700673.htm


Artificial intelligence to be used in monitoring illegal fishing in New Zealand – Newshub

A drone developed by AI company Qrious and MAUI63 has been using artificial intelligence to recognise Māui dolphins and follow them. Similar gear - only with fixed cameras - will eventually be used on 300 fishing vessels.

"This will improve trust and accountability in the seafood sector. It will further burnish their credentials," said Oceans and Fisheries Minister David Parker.

Qrious, which is part of Spark, has been appointed to manage the rollout and supply the technology.

Qrious CEO Stephen Ponsford said the gear could detect things such as fish dumping, nets or lines being retrieved, and the sorting of fish.

"You can think of it as a human doing a new job. We simply need to train what to do and the system is fully capable of learning that," Ponsford said.

This means only pertinent footage will be kept, saving the laborious task of trawling through hundreds of hours of footage.
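The article does not detail how Qrious's system decides what to keep, but the general idea of retaining only pertinent footage can be sketched as a simple event filter: classify sampled frames and keep a window of time around each detection for human review. The detect_event function below is a hypothetical stand-in for a trained video model, and the threshold and timings are made up for illustration.

```python
# Sketch of an event filter that keeps only "pertinent" footage: sample
# frames, run a detector on each, and retain a time window around every
# detection. detect_event() is a hypothetical stand-in for a trained model.
import numpy as np

FPS = 25            # assumed frame rate
KEEP_MARGIN_S = 5   # seconds of context kept around each detection

def detect_event(frame: np.ndarray) -> bool:
    """Placeholder detector; a real system would run a trained model here."""
    return frame.mean() > 200  # hypothetical trigger condition

def pertinent_windows(frames):
    """Return (start_s, end_s) windows worth keeping for human review."""
    windows = []
    for i, frame in enumerate(frames):
        if detect_event(frame):
            t = i / FPS
            windows.append((max(0.0, t - KEEP_MARGIN_S), t + KEEP_MARGIN_S))
    return windows

# Synthetic stand-in footage: mostly dark frames with one bright "event".
frames = [np.zeros((64, 64), dtype=np.uint8) for _ in range(250)]
frames[120] = np.full((64, 64), 255, dtype=np.uint8)

print(pertinent_windows(frames))  # -> [(0.0, 9.8)]
```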

"This will absolutely save time in the review process," said Bubba Cook, WWF Western and Central Pacific tuna programme manager.

Cook was consulted on the plan.

"This could be groundbreaking. It's probably the most advanced system that's being proposed for electronic monitoring on fishing vessels around the world," Cook said.

The rollout has seen years of controversy and interference.

In 2016, Newshub revealed Operation Achilles, a report detailing widespread illegal fish dumping where MPI failed to prosecute.

In 2017, then-Minister Nathan Guy promised cameras on "every boat", but it didn't happen.

In 2019, Stuart Nash said 20 boats in Māui dolphin habitat would get cameras.

In 2020, he delayed a wider rollout, with Nash stating in a recording Newshub obtained that NZ First didn't want them.

Then, new Minister David Parker said 300 vessels would get cameras by late 2024.

Finally, there's momentum, with 50 of the 300 vessels - ones that work in Māui dolphin and yellow-eyed penguin habitats - to be fitted with the equipment and start transmitting data to MPI from late November.

"It's quite a sophisticated project so it's taken a while to put together," Parker said.

It's a project that will eventually assess catch records accurately and give detailed data about the plight of some of our most threatened species.

The cost of the rollout - which will be staggered - is estimated to be $68 million, with the industry paying back about $10 million of the total costs.

Cameras will be fitted to all vessels that use set nets and are eight metres or larger, surface longline vessels, and bottom longline boats. Trawlers of 32 metres or less are also included.
