Archive for the ‘Artificial Intelligence’ Category

Citizen science program uses photos and artificial intelligence to track thousands of humpback whales – KTOO

Old Timer may be the oldest known humpback, first sighted in Lynn Canal, Southeast Alaska, in 1972. Also sighted as PWF-NP1117 and HIHWNMS-2017-2-25WWG01A01. (Photo by Jim Nahmens, courtesy of Happy Whale)

It's a special moment, watching a gigantic humpback going for a deep dive. The whale's back arches and the tail swings up, disappearing below the surface like the pointed toes of an Olympic diver.

The black-and-white patterns on the underside of a whale's tail fins, or flukes, are unique. Now a citizen science program called Happy Whale uses artificial intelligence to quickly identify humpbacks from those patterns.

Through photographs shared by whale watchers, Happy Whale has recorded thousands of whales that travel to and from Alaska.

"Like facial recognition, we can tell who it is," said Ted Cheeseman, an expedition scientist who has studied whales all over the world, including in Antarctica. He co-founded Happy Whale as a way to track humpbacks, a species that's known to travel thousands of miles.

"It's helping to answer a lot of questions about their individual behavior."

"Who does the whale hang out with? Does the whale have a calf?" Cheeseman said. "What is the larger story here such that we can build family relationships and so on, tell more of the story of the individual. To me, that's a huge part of this."

The difference between this photo ID program and others in the past is the manpower needed. Happy Whale uses an automated computer program to ID the photos instead of people doing it by hand. Just one full-time and two part-time employees run the database and confirm the results.

The program started in 2015 but took years to test and fine-tune. Now, whale watchers can share their fluke photos and locations with the online database, which has identified 68,000 humpbacks worldwide.

The program started with 18,000 whale photos that had been previously identified by hand. Cheeseman says Happy Whale is more efficient.

"Somebody gives me a dataset of a thousand photos, it used to be that that would be an hour per photo," Cheeseman said. "The actual matching time is now insignificant. If someone gives me a thousand photos I can tell them the next day that, 'Oh, 700 of them are these known whales and these 300, those are probably new.'"
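The article does not describe Happy Whale's algorithm in detail, but the general recipe for this kind of automated photo identification is well established: convert each fluke photo into a numeric feature vector with an image model, then compare new photos against a catalogue of known whales by similarity. The Python sketch below is purely illustrative, not Happy Whale's code; the embedding step is faked and the names and threshold are hypothetical.

```python
# Illustrative sketch only -- not Happy Whale's actual pipeline.
# Assumes each fluke photo can be turned into a fixed-length feature
# vector ("embedding"); here that step is faked with raw pixel values.
import numpy as np

rng = np.random.default_rng(0)

def embed(photo_pixels):
    # Hypothetical stand-in for a learned fluke-embedding model.
    return photo_pixels.reshape(-1)[:128].astype(float) / 255.0

# Catalogue of known whales: one reference embedding per whale ID.
catalogue_ids = [f"whale_{i:04d}" for i in range(1000)]
catalogue = rng.random((1000, 128))

def match(photo, threshold=0.90):
    """Return (whale_id, similarity) for the best match, or (None, similarity) if it looks new."""
    q = embed(photo)
    sims = catalogue @ q / (np.linalg.norm(catalogue, axis=1) * np.linalg.norm(q) + 1e-9)
    best = int(np.argmax(sims))
    if sims[best] >= threshold:
        return catalogue_ids[best], float(sims[best])
    return None, float(sims[best])

# A submitted batch (random pixels here) is split into re-sightings and probable new whales.
batch = [rng.integers(0, 256, size=(64, 64, 3)) for _ in range(5)]
for photo in batch:
    whale_id, score = match(photo)
    if whale_id:
        print(f"re-sighting of {whale_id} (similarity {score:.2f})")
    else:
        print(f"probably a new whale (best similarity {score:.2f})")
```

Anything above the similarity threshold would be reported as a re-sighting of a known whale; everything below it is flagged as a probable new individual for a person to confirm, which mirrors the "700 known, 300 probably new" result Cheeseman describes.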

The program has documented about 30,000 humpbacks in the North Pacific, which Cheeseman expects is about 70% of the population.

Participants are rewarded for their work. They usually get an initial response within a few days to a week and get notices when their whale is spotted again.

Dennis Rogers, a long-time whale watching guide in Petersburg, has uploaded over 5,500 photos to the program.

"It's very interesting just to see the migrations," Rogers said. "Some of these whales go to Hawaii for the winter, and they're re-sighted there, which we get a notification when that re-sighting happens. Some of our whales go to Mexico. It's real interesting, some of our whales go to Mexico one year and to Hawaii the next year."

Rogers encourages his clients to send in their photos as well. He says other tracking tools, such as satellite tags, can fall off whales within days.

"This is purely non-invasive and gives a great amount of information over time. Some of our whales, we've been tracking close to 40 years," Rogers said.

The program has found some unusual migrations in Alaska's individual whales, said Scott Roberge, a board member for Petersburg's Marine Mammal Center.

"They've followed one from Alaska to Hawaii to Japan back to Alaska," Roberge said. "Made the loop of the North Pacific."

Roberge also contributes photographs and enjoys getting the feedback.

"It's incredible to get that information and to get the email that says, 'Oh, the whale that you took a picture of last summer was just found in Hawaii, and it just had a baby,'" he said.

Cheeseman believes that over 95% of humpbacks in Southeast Alaska are in the database already. But the program is expanding. Cheeseman hopes to automate dorsal fin recognition within the year, which would allow them to identify and track orcas and other species a lot faster.

Cheeseman gave a presentation in Petersburg on May 18 at the Wright Auditorium.

See the original post here:
Citizen science program uses photos and artificial intelligence to track thousands of humpback whales - KTOO

Ayar Labs to Accelerate Development and Application of Optical Interconnects in Artificial Intelligence/Machine Learning Architectures with NVIDIA -…

SANTA CLARA, Calif.--(BUSINESS WIRE)--Ayar Labs, the leader in chip-to-chip optical connectivity, is developing with NVIDIA groundbreaking artificial intelligence (AI) infrastructure based on optical I/O technology to meet future demands of AI and high performance computing (HPC) workloads. The collaboration will focus on integrating Ayar Labs technology to develop scale-out architectures enabled by high-bandwidth, low-latency and ultra-low-power optical-based interconnects for future NVIDIA products. Together, the companies plan to accelerate the development and adoption of optical I/O technology to support the explosive growth of AI and machine learning (ML) applications and data volumes.

Meeting Future Performance and Power Requirements with Optical I/O

Optical I/O uniquely changes the performance and power trajectories of system designs by enabling compute, memory and networking ASICs to communicate with dramatically increased bandwidth, at lower latency, over longer distances and at a fraction of the power of existing electrical I/O solutions. The technology is also foundational to enabling emerging heterogeneous compute systems, disaggregated/pooled designs, and unified memory architectures that are critical to accelerating future data center innovation.

"Today's state-of-the-art AI/ML training architectures are limited by current copper-based compute-to-compute interconnects to build scale-out systems for tomorrow's requirements," said Charles Wuischpard, CEO of Ayar Labs. "Our work with NVIDIA to develop next-generation solutions based on optical I/O provides the foundation for the next leap in AI capabilities to address the world's most sophisticated problems."

Delivering the Next Million-X Speedup for AI with Optical Interconnect

"Over the past decade, NVIDIA-accelerated computing has delivered a million-X speedup in AI," said Rob Ober, Chief Platform Architect for Data Center Products at NVIDIA. "The next million-X will require new, advanced technologies like optical I/O to support the bandwidth, power and scale requirements of future AI and ML workloads and system architectures."

As AI model sizes continue to grow, NVIDIA believes that by 2023 models will have 100 trillion or more connections, a 600X increase from 2021, exceeding the technical capabilities of existing platforms. Traditional electrical-based interconnects will reach their bandwidth limits, driving lower application performance, higher latency and increased power consumption. New interconnect solutions and system architectures are needed to address the scale, performance and power demands of the next generation of AI. Ayar Labs' collaboration with NVIDIA is focused on addressing these future challenges by developing next-generation architectures with optical I/O.

To learn more about Ayar Labs' chip-to-chip optical technology, please visit: https://ayarlabs.com/

About Ayar Labs

Ayar Labs is disrupting the traditional performance, cost, and efficiency curves of the semiconductor and computing industries by driving a 1000x improvement in interconnect bandwidth density at 10x lower power. Ayar Labs' patented approach uses industry-standard, cost-effective silicon processing techniques to develop high-speed, high-density, low-power optical-based interconnect chiplets and lasers to replace traditional electrical-based I/O. The company was founded in 2015 and is funded by a number of domestic and international venture capital firms as well as strategic investors. For more information, visit http://www.ayarlabs.com.

Read this article:
Ayar Labs to Accelerate Development and Application of Optical Interconnects in Artificial Intelligence/Machine Learning Architectures with NVIDIA -...

Artificial Intelligence Comes to Campus: Gray Associates Announces Predict Program Size Software That Improves Higher Education Decisions for Academic…

CONCORD, Mass., May 24, 2022 /PRNewswire/ -- GRAY Associates today announces its continued commitment to accelerating higher education's future and growth strategy through innovative new software tools and expanded data-informed academic program evaluation support.

Gray's sophisticated new Predict Program Size offering has launched, joining the PES+ (Program Evaluation System) Software to empower academic program professionals to make smart data-informed program planning decisions. Powered by artificial intelligence that encompasses the latest in machine learning, Predict Program Size maximizes outcomes by accurately estimating the potential size of current and new programs for an institution, identifying programs that will increase enrollment and revenue, and reducing the risk of new program failures.
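The release does not say which machine learning techniques Predict Program Size uses, but the basic shape of such a model, estimating likely enrollment for a program from market signals like student demand, competition, and employer need, can be sketched in a few lines. The example below is a hypothetical illustration on fabricated data, not Gray's product; the feature names are assumptions.

```python
# Minimal sketch, not Gray's actual model: predict program enrollment from
# hypothetical market signals of the kind the release mentions.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
n_programs = 500

# Fabricated training data standing in for historical programs.
X = np.column_stack([
    rng.poisson(800, n_programs),     # regional job postings
    rng.poisson(1200, n_programs),    # search volume for the program topic
    rng.integers(1, 30, n_programs),  # competing programs nearby
    rng.poisson(50, n_programs),      # completions at peer institutions
])
# Synthetic "true" enrollment with noise, just so the example runs end to end.
y = 0.05 * X[:, 0] + 0.02 * X[:, 1] - 3.0 * X[:, 2] + 0.5 * X[:, 3] + rng.normal(0, 10, n_programs)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = GradientBoostingRegressor(random_state=0).fit(X_train, y_train)

print("held-out R^2:", round(model.score(X_test, y_test), 2))
# Estimate the size of a proposed new program from its market signals.
print("predicted enrollment:", round(model.predict([[900, 1500, 8, 60]])[0]))
```

In a real deployment the training data would be historical program outcomes rather than synthetic numbers, and the held-out score is what tells an institution how much to trust a prediction before acting on it.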

This new machine learning-based approach provides universities a way to predict enrollments and graduations and to make better decisions about the use of their valuable resources. "We have always been committed to our higher education community. I am proud of the advanced academic program planning tools and features being developed for our PES+ subscribers. These advancements not only include Predict Program Size but also Predict Margins for Economics and a new Program Portfolio Management dashboard," said Robert Gray Atkins, Gray's CEO.

As higher education becomes more competitive, institutions embrace customized data and analytics to help them make better-informed program decisions. Gray's PES+ software integrates local, regional, and national market data on educational programs, including data on student demand, competition, and employer needs.

Gray collects and analyzes data from the Bureau of Labor Statistics, job postings, Google searches, the American Community Survey, IPEDS, and more, so schools can use PES+ to score and rank thousands of programs and decide what actions to take. PES+ also calculates the revenue, cost, and margin of academic programs, courses, and sections to enable institutions to understand the markets and margins for their programs.

About Gray Associates

Gray helps colleges and universities make data-informed decisions about their academic programs. Gray's software integrates the best available data on academic program economics, student demand, employer needs, and competitive intensity for the precise market served by each institution. Faculty and administrative leaders use the software to score, rank, and evaluate programs in a collaborative process that builds consensus on programs to start, sunset, sustain, or grow. With Gray's tools and processes, institutions identify paths to increase enrollment, revenue, and efficiency, while investing in their mission and strengthening relationships among faculty and administrators.

Press Contact: Jackie Lucas, Vera Voce Communication, 978-255-1159, [emailprotected]

SOURCE Gray Associates

Originally posted here:
Artificial Intelligence Comes to Campus: Gray Associates Announces Predict Program Size Software That Improves Higher Education Decisions for Academic...

More Than 2 Billion Shipments of Devices with Machine Learning will Bring On-Device Learning and Inference Directly to Consumers by 2027 – PR Newswire

Federated, distributed, and few-shot learning can make consumers direct participants in Artificial Intelligence processes

NEW YORK, May 25, 2022 /PRNewswire/ -- Artificial Intelligence (AI) is all around us, but the processes of inference and learning that form the backbone of AI typically take place in big servers, far removed from consumers. New models are changing all that, according to ABI Research, a global technology intelligence firm, as the more recent frameworks of Federated Learning, Distributed Learning, and Few-Shot Learning can be deployed directly on consumers' devices with less compute and smaller power budgets, bringing AI to end users.

"This is the direction the market has increasingly been moving to, though it will take some time before the full benefits of these approaches become a reality, especially in the case of Few-Shot Learning, where a single individual smartphone would be able to learn from the data that it is itself collecting. This might well prove to be an attractive proposition for many, as it does not involve uploading data onto a cloud server, making for more secure and private data. In addition, devices can be highly personalized and localized as they can possess high situational awareness and better understanding of the local environments," explains David Lobina, Research Analyst at ABI Research.

ABI Research believes that it will take up to 10 years for such on-device learning and inference to be fully operative, and that these approaches will require adopting new technologies, such as neuromorphic chips. The shift will take place first in more powerful consumer devices, such as autonomous vehicles and robots, before making its way into the likes of smartphones, wearables, and smart home devices. Big players such as Intel, NVIDIA, and Qualcomm have been working on these models in recent years and, along with neuromorphic chipset players such as BrainChip and GrAI Matter Labs, have provided chips that offer improved performance on a variety of training and inference tasks. The take-up is still small, but it can potentially disrupt the market.

"Indeed, these learning models have the potential to revolutionize a variety of sectors, most probably the fields of autonomous driving and the deployment of robots in public spaces, both of which are currently difficult to pull off, particularly in co-existence with other users," Lobina concludes. "Federated Learning, Distributed Learning, and Few-shot Learning reduces the reliance on cloud infrastructure, allowing AI implementers to create low latency, localized, and privacy preserving AI that can deliver much better user experience for end users."

These findings are from ABI Research's Federated, Distributed and Few-Shot Learning: From Servers to Devices application analysis report. This report is part of the company's AI and Machine Learning research service, which includes research, data, and ABI Insights. Application Analysis reports present in-depth analysis on key market trends and factors for a specific technology.

About ABI Research

ABI Research is a global technology intelligence firm delivering actionable research and strategic guidance to technology leaders, innovators, and decision makers around the world. Our research focuses on the transformative technologies that are dramatically reshaping industries, economies, and workforces today.


For more information about ABI Research's services, contact us at +1.516.624.2500 in the Americas, +44.203.326.0140 in Europe, +65.6592.0290 in Asia-Pacific, or visit www.abiresearch.com.

Contact Info:

Global: Deborah Petrara, Tel: +1.516.624.2558, [emailprotected]

SOURCE ABI Research

Originally posted here:
More Than 2 Billion Shipments of Devices with Machine Learning will Bring On-Device Learning and Inference Directly to Consumers by 2027 - PR Newswire

Harnessing Artificial Intelligence for Higher Quality Data in Preclinical Trials and Translational Research, Upcoming Webinar Hosted by Xtalks -…

In this free webinar, learn how deep learning artificial intelligence (AI) can be used in histology or pathology image analysis. Attendees will learn how AI augments preclinical investigation workflows. The featured speaker will discuss case studies of CROs, pharma and biotech and the benefits they experienced from using AI software. The speaker will also discuss how to create AI models without the need for coding for any image analysis task in histology or pathology.

TORONTO (PRWEB) May 25, 2022

The preclinical phase of drug discovery commonly includes going through numerous histopathological samples. This contributes to the drug development process being time-consuming and labour-intensive for pharmaceutical and contract research organizations. Difficulties also stem from having to detect very subtle changes with high precision and accuracy. Manual quantification of small changes, or specific cell counting, is not only cumbersome but often involves high costs.

The digitization of glass slides has paved the way for even more advanced technology, like artificial intelligence (AI), to further advance image analysis in a variety of medical fields. AI-based methods have the potential to standardize slide review by reducing bias while increasing the speed and accuracy of analysis.

In this webinar, the featured speaker discusses using cloud-based software from Aiforia Technologies to automate image analysis tasks with AI, enhancing the CRO's work by providing higher quality data, and therefore greater confidence in that data, to their clients across pharmaceutical companies. Through the software, the speaker created several AI models for assessing different markers from central nervous system (CNS) tissue.
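Aiforia's software is not described at a technical level in this announcement, but the general pattern of AI-assisted histology review, classifying or counting marker-positive regions patch by patch across a digitized slide, can be shown with a deliberately simplified example. The sketch below trains a small scikit-learn neural network on synthetic 16x16 patches; a production system would instead use a deep convolutional network trained on expert-annotated slides.

```python
# Simplified illustration, not Aiforia's software: classify small tissue-image
# patches as "marker positive" or "negative" and count the positives.
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(7)

def synthetic_patch(positive):
    """Fake 16x16 grayscale patch; 'positive' patches get a brighter blob."""
    patch = rng.normal(0.3, 0.05, (16, 16))
    if positive:
        patch[6:10, 6:10] += 0.4  # stand-in for a stained cell
    return patch.clip(0, 1).ravel()

X = np.array([synthetic_patch(i % 2 == 0) for i in range(600)])
y = np.array([i % 2 == 0 for i in range(600)], dtype=int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
clf = MLPClassifier(hidden_layer_sizes=(64,), max_iter=500, random_state=0)
clf.fit(X_train, y_train)

print("patch-level accuracy on held-out data:", round(clf.score(X_test, y_test), 2))
# Counting positive patches across a slide stands in for manual cell counting.
print("positive patches in test set:", int(clf.predict(X_test).sum()))
```

The counting step at the end is the part that replaces the cumbersome manual quantification mentioned above; the held-out accuracy is what gives a lab confidence in the automated counts.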

Join this webinar to hear case studies with large pharmaceutical and biotechnology clients, discover ways to harness artificial intelligence for higher quality data in preclinical trials and translational research, and learn how deep learning augments workflows, providing quantifiable benefits from CRO to client.

Join Tate York, Director of Digital Image and Analysis, NSA Labs, for the live webinar on Monday, June 13, 2022, at 12pm EDT (9am PDT).

For more information, or to register for this event, visit Harnessing Artificial Intelligence for Higher Quality Data in Preclinical Trials and Translational Research.

ABOUT XTALKS

Xtalks, powered by Honeycomb Worldwide Inc., is a leading provider of educational webinars to the global life science, food and medical device community. Every year, thousands of industry practitioners (from life science, food and medical device companies, private & academic research institutions, healthcare centers, etc.) turn to Xtalks for access to quality content. Xtalks helps Life Science professionals stay current with industry developments, trends and regulations. Xtalks webinars also provide perspectives on key issues from top industry thought leaders and service providers.

To learn more about Xtalks, visit http://xtalks.com. For information about hosting a webinar, visit http://xtalks.com/why-host-a-webinar/

For the original version on PRWeb visit: https://www.prweb.com/releases/harnessing_artificial_intelligence_for_higher_quality_data_in_preclinical_trials_and_translational_research_upcoming_webinar_hosted_by_xtalks/prweb18700673.htm

See more here:
Harnessing Artificial Intelligence for Higher Quality Data in Preclinical Trials and Translational Research, Upcoming Webinar Hosted by Xtalks -...