Archive for the ‘Artificial Intelligence’ Category

Interface Masters Develops Industry-Leading Tahoe 8828 Network Appliance for Advanced Artificial Intelligence and Machine Learning Applications -…

SAN JOSE, Calif.--(BUSINESS WIRE)--Interface Masters Technologies announces the Tahoe 8828, an off-the-shelf artificial intelligence and machine learning powerhouse that rounds out Interface Masters' family of ARM- and MIPS-based network appliances.

The modular appliance features a Marvell OCTEON TX2 ARM64 multicore processor (18 to 24 cores at 2.4 GHz) as its primary CPU, supports a range of high-performance offload processing options, and includes a single configurable I/O module supporting a range of port configurations from 1G to 100G.

Optional high-performance offload processing upgrades include Interface Masters' proprietary Intel-based DataSlammer™ technology or any of Nvidia's advanced industry-leading graphics processing units (GPUs).

"Interface Masters is an OEM hardware solutions provider. Our new Tahoe 8828 is entirely designed and manufactured in the USA; our hardware is never compromised and always secure," says Ben Askarinam. "DataSlammer technology is an excellent fit for data-intensive applications such as artificial intelligence (AI), deep packet inspection (DPI), and machine learning (ML), as well as edge compute."

As with all Interface Masters appliances, the device includes out-of-the-box support for Linux and for Marvell's SDK 11 Software Development Kit. Interface Masters' support for the SDK, along with additional built-in software and firmware, enables rapid development.

Long Product Life Cycle

Interface Masters network appliance users benefit from a long product life cycle of seven years, which enables continuity through all phases of product rollouts and servicing.

Designed and Manufactured in the United States / About Us

For over 26 years, Interface Masters Technologies has provided innovative custom and off-the-shelf networking solutions to OEMs, Fortune 100 companies, and startups. Headquartered in Silicon Valley, we proudly design and manufacture all products in the United States. Based on MIPS, ARM, PowerPC, and x86 processors, and on switch fabrics up to 12.8T, Interface Masters appliance models enable OEMs to significantly reduce time-to-market. Our solutions are reliable, pre-tested, pre-integrated, and supported by a seven-year life cycle. Learn about Interface Masters: http://www.interfacemasters.com.

*All third-party trademarks are the property of their respective owners.


NCAR will collaborate on new initiative to integrate AI with climate modeling | NCAR & UCAR News – UCAR

Sep 10, 2021 - by Laura Snider

The National Center for Atmospheric Research (NCAR) is a collaborator on a new $25 million initiative that will use artificial intelligence to improve traditional Earth system models with the goal of advancing climate research to better inform decision makers with more actionable information.

The Center for Learning the Earth with Artificial Intelligence and Physics (LEAP) is one of six new Science and Technology Centers announced by the National Science Foundation to work on transformative science that will broadly benefit society. LEAP will be led by Columbia University in collaboration with several other universities as well as NCAR and NASA's Goddard Institute for Space Studies.

The goals of LEAP support NCAR's Strategic Plan, which emphasizes the importance of actionable Earth system science.

"LEAP is a tremendous opportunity for a multidisciplinary team to explore the potential of using machine learning to improve our complex Earth system models, all for the long-term benefit of society," said NCAR scientist David Lawrence, who is the NCAR lead on the project. "NCAR's models have always been developed in collaboration with the community, and we're excited to work with skilled data scientists to develop new and innovative ways to further advance our models."

LEAP will focus its efforts on the NCAR-based Community Earth System Model (CESM). CESM is a sophisticated collection of component models that, when coupled, simulate atmosphere, ocean, land, sea ice, and ice sheet processes that interact with and influence one another, which is critical to accurately projecting how the climate will change in the future. The result is a model that produces a comprehensive, high-quality representation of the Earth system.

Despite this, CESM is still limited in its ability to represent certain complex physical processes in the Earth system that are difficult to simulate. Some of these processes, like the formation and evolution of clouds, happen at such a fine scale that the model cannot resolve them. (Global Earth system models are typically run at relatively low spatial resolution because they need to simulate decades or centuries of time and computing resources are limited.) Other processes, including land ecology, are so complicated that scientists struggle to identify equations that accurately capture what is happening in the real world.

In both cases, scientists have created simplified subcomponents known as parameterizations to approximate these physical processes in the model. A major goal of LEAP is to improve on these parameterizations with the help of machine learning, which can leverage the incredible wealth of Earth system observations and high-resolution model data that has become available.

By training the machine learning model on these data sets, and then collaborating with Earth system modelers to incorporate these subcomponents into CESM, the researchers expect to improve the accuracy and detail of the resulting simulations.
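The parameterization-emulation idea can be illustrated with a toy sketch: gather input/output pairs from a process the coarse model cannot resolve, fit a simple statistical model to them, and substitute the learned function for the expensive one. Everything below (the stand-in process, its coefficients, the linear emulator) is an illustrative assumption, not LEAP's or CESM's actual code or data.

```python
import random

random.seed(0)

def true_subgrid_process(temperature, humidity):
    # Stand-in for an expensive sub-grid process the coarse model cannot resolve.
    return 0.8 * temperature + 1.5 * humidity + 2.0

# Synthetic "training data": coarse-grid inputs paired with high-resolution targets.
samples = [(random.uniform(250.0, 310.0), random.uniform(0.0, 1.0))
           for _ in range(200)]
X = [[t, h, 1.0] for t, h in samples]   # bias column appended
y = [true_subgrid_process(t, h) for t, h in samples]

# Ordinary least squares via the normal equations: solve (X^T X) w = X^T y.
n = 3
A = [[sum(row[i] * row[j] for row in X) for j in range(n)] for i in range(n)]
b = [sum(row[i] * yi for row, yi in zip(X, y)) for i in range(n)]

# Solve the 3x3 system with Gaussian elimination (partial pivoting).
for col in range(n):
    pivot = max(range(col, n), key=lambda r: abs(A[r][col]))
    A[col], A[pivot] = A[pivot], A[col]
    b[col], b[pivot] = b[pivot], b[col]
    for r in range(col + 1, n):
        f = A[r][col] / A[col][col]
        for c in range(col, n):
            A[r][c] -= f * A[col][c]
        b[r] -= f * b[col]
w = [0.0] * n
for r in range(n - 1, -1, -1):
    w[r] = (b[r] - sum(A[r][c] * w[c] for c in range(r + 1, n))) / A[r][r]

# The learned "parameterization" can now stand in for the expensive process.
def emulated_subgrid_process(temperature, humidity):
    return w[0] * temperature + w[1] * humidity + w[2]
```

Because the toy target is noiseless and linear, the fit recovers it almost exactly; the real scientific difficulty, which LEAP aims at, is doing this for noisy, nonlinear processes like cloud formation while keeping the emulator stable inside a coupled model.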

"Our goal is to harness data from observations and simulations to better represent the underlying physics, chemistry, and biology of Earth's climate system," said Galen McKinley, a professor of earth and environmental sciences at Columbia. "More accurate models will help give us a clearer vision of the future."

To learn more, read the NSF announcement and the Columbia news release.



Artificial intelligence is Changing Logistics Automation – RTInsights

Automation will continue to grow and expand within logistics operations through the use of technologies such as artificial intelligence.

Automation uses technology to augment human effort across a myriad of tasks. In logistics, the potential for automation is massive, and the benefits are significant, especially when operations experience large variations or increases in demand. Scaling operations up typically requires additional staff who are often not immediately available, particularly during times when demand is also coming from other industries. Reacting quickly to market fluctuations requires fast action and additional capacity across the entire operation.

Logistics automation allows for rapid increases in capacity as demand changes. When used strategically, it increases productivity, reduces human error, and improves working efficiency. And with the right logistics automation software, hardware, and platform resources in place, the impact on operational expenditure during periods of low demand is minimal, and much lower than the cost of maintaining a large human workforce. As demand increases, the capacity is already in place and ready to be activated. While this gives logistics companies the flexibility needed to react quickly to changes in demand, there is the opportunity to do more.

See also: Logistics Market Needs Digital Transformation to Overcome Challenges

The introduction of artificial intelligence (AI) into logistics automation amplifies automation's impact. AI reduces errors in common semi-skilled tasks such as sorting and categorizing products. Autonomous mobile robots (AMRs), for instance, improve package delivery, including the last mile, which is typically the most expensive. AI helps AMRs with route planning and with recognizing features such as people, obstacles, delivery portals, and doorways.

Integrating logistics automation into any environment comes with challenges. It can be as simple as replacing a repetitive process with a powered conveyor or as complex as introducing a collaborative, autonomous robot into the workplace. When AI is added to this automation and integration process, the challenges become more complex, but the benefits also increase.

The effectiveness of individual automation elements increases as the solutions become more connected and more aware of the other stages in the process. Putting AI closer to where data is generated and actions are taken is referred to as edge AI. The adoption of edge AI is already redefining logistics automation.

Edge AI is developing rapidly, and its use is not restricted to logistics automation. The benefits of putting AI at the network edge have to be balanced with the availability of resources, such as power, the environmental operating conditions, the physical location, and the space available.

Edge computing brings computation and data closer together. In a traditional IoT application, most data is sent over a network to a (cloud) server, where it is processed, and the results are sent back to the edge of the network, such as the physical piece of equipment. Cloud-only computing introduces latency, which is unacceptable in time-critical systems. For example, capturing and processing a package's image data locally during sorting enables the logistics automation system to respond in as little as 0.2 seconds. Network latency in this part of the system would slow down the sorting process, and edge computing removes that potential bottleneck.
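The 0.2-second budget can be made concrete with a back-of-the-envelope sketch. The timing constants below are illustrative assumptions, not measurements from any real sorting system:

```python
# Illustrative latency budget for a package-sorting decision.
EDGE_INFERENCE_S = 0.05    # assumed: local image classification on the sorter
CLOUD_INFERENCE_S = 0.02   # assumed: inference on a faster cloud server
CLOUD_ROUND_TRIP_S = 0.25  # assumed: network transit to the server and back
ACTUATION_S = 0.10         # assumed: diverter-arm response time

SORTER_DEADLINE_S = 0.20   # the ~0.2 s response window cited above

def sort_decision_latency(use_edge: bool) -> float:
    """Total time from camera capture to diverter actuation."""
    if use_edge:
        compute = EDGE_INFERENCE_S  # no network hop
    else:
        compute = CLOUD_ROUND_TRIP_S + CLOUD_INFERENCE_S
    return compute + ACTUATION_S

print(sort_decision_latency(use_edge=True))   # ~0.15 s: meets the deadline
print(sort_decision_latency(use_edge=False))  # ~0.37 s: misses it
```

Even with a faster processor in the cloud, the fixed network round trip dominates; moving inference to the edge is what brings the response inside the deadline.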

While edge computing brings the computation closer to the data, adding AI to the edge makes the process more flexible and even less prone to error. Similarly, last-mile logistics relies heavily on humans, but this too is improved with AMRs using edge AI.

Adding AI has a significant impact on the hardware and software used in logistics automation, and there is an increasing number of potential solutions. Typically, the solutions used to train an AI model are not suitable for deploying the model at the networks edge. The processing resources used for training are designed for servers, where resources such as power and memory are almost infinite. At the edge, power and memory are far from infinite.

In terms of hardware, large multicore processors are not well suited to edge AI applications. Instead, developers are turning to heterogeneous hardware solutions optimized for AI deployment at the edge. This includes CPUs and GPUs, of course, but it extends to application-specific integrated circuits (ASICs), microcontrollers (MCUs), and FPGAs. Some architectures, like GPUs, are good at parallel processing, while others, like CPUs, are better at sequential processing. Today, no single architecture can claim to provide the best solution for every AI application. The general trend is to configure systems using the hardware best suited to each task, rather than using multiple instances of the same architecture.

This trend points towards a heterogeneous architecture, where there are many different hardware processing solutions configured to work together, rather than a homogeneous architecture that uses multiple devices all based on the same processor. Being able to bring in the right solution for any given task, or consolidate multiple tasks on a specific device, provides greater scalability and the opportunity to optimize for performance per watt and/or per dollar.

Moving from a homogeneous system architecture to heterogeneous processing requires a large ecosystem of solutions and a proven capability to configure those solutions at the hardware and software level. That's why it's important to work with a vendor that has significant tier 1 partnerships with all the major silicon vendors, one that offers solutions for edge computing and works with those partners to develop systems that are scalable and flexible.

In addition, these solutions use general open-source technologies like Linux, as well as specialist technologies such as the Robot Operating System (ROS 2). In fact, a growing number of open-source resources are being developed to support both logistics and edge AI. There is no single right software solution from this point of view, and the same is true for the hardware platform on which the software runs.

To increase flexibility and reduce vendor lock-in, one approach is modularization at the hardware level, making hardware configuration within any solution more flexible. In practice, this allows an engineer to change any part of the system's hardware, such as a processor, without causing system-wide disruptions.

The ability to upgrade an underlying platform (whether software, processors, or something else) is particularly important when deploying a new technology like edge AI. Each new generation of processor and module technology typically provides a better power/performance balance for an inferencing engine operating at the network's edge, so being able to take advantage of these gains quickly, with minimal disruption to the overall logistics automation system and the edge AI hardware design, is a distinct advantage.

Modularization in the hardware is extended into the software by using a micro-service architecture and container technology such as Docker. If a more optimal processor becomes available, even one from a different manufacturer, the software leveraging that processor is itself modularized and can replace the previous processor's module without changing the rest of the system. Software containers also provide a simple and robust way to add new features, such as running AI at the edge.

The software inside of a container can also be modularized.

A modular and container approach to hardware and software minimizes vendor lock-in, meaning a solution is not tied to any one particular platform. It also increases the abstraction between platform and application, making it easier for end-users to develop their own applications that are not platform-dependent.
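A minimal sketch of that abstraction, assuming a hypothetical sorting application and two made-up vendor backends (none of these names correspond to a real product): the application depends only on a small interface, so a processor-specific module can be swapped, much like replacing one container image with another, without touching application code.

```python
from abc import ABC, abstractmethod

class InferenceBackend(ABC):
    """Minimal contract every processor-specific module must satisfy."""
    @abstractmethod
    def infer(self, image: bytes) -> str: ...

class VendorAGpuBackend(InferenceBackend):
    def infer(self, image: bytes) -> str:
        # Placeholder: a real module would call this vendor's GPU runtime here.
        return "parcel"

class VendorBFpgaBackend(InferenceBackend):
    def infer(self, image: bytes) -> str:
        # Placeholder: an FPGA pipeline from a different manufacturer.
        return "parcel"

class SortingStation:
    """Application logic: depends only on the interface, not the vendor."""
    def __init__(self, backend: InferenceBackend):
        self.backend = backend

    def route(self, image: bytes) -> str:
        label = self.backend.infer(image)
        return "conveyor-A" if label == "parcel" else "reject-bin"

# Swapping backends requires no change to SortingStation.
station = SortingStation(VendorAGpuBackend())
print(station.route(b"\x00"))   # conveyor-A
station.backend = VendorBFpgaBackend()
print(station.route(b"\x00"))   # conveyor-A
```

In a containerized deployment, each backend class would live in its own image behind the same service API, which is what lets an end-user application remain platform-independent.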

Deploying edge AI within logistics automation doesn't require replacing entire systems. Start by assessing the workspace and identifying the stages that can genuinely benefit from AI-powered automation. The main objective is to increase efficiency while decreasing operational expenditure, particularly in response to increased demand during a time of labor shortages.

There is an increasing number of technology companies working on AI solutions, but often these are aimed at the cloud, not edge computing. At the edge, the conditions are very different, resources may be limited, and there may even be a need for a dedicated private communications network.

Automation will continue to grow and expand within logistics operations through the use of technologies such as AI. These system solutions need to be designed for use in harsh environments, very different from the cloud or data center. We address this using a modular approach that offers highly competitive solutions, short development cycles, and flexible platforms.


Artificial Intelligence: Future directions in technology and law – Australian Academy of Science

The Australian Academy of Science and the Australian Academy of Law are delivering their annual joint symposium for 2021. This year the topic is Artificial Intelligence: Future directions in technology and law.

Speakers will each give a 10-minute presentation, followed by a Q&A session. The event will be moderated by The Hon Dr Annabelle Bennett AC FAA FAAL SC.

Professor Lyria Bennett Moses FAAL: Director of the Allens Hub for Technology, Law and Innovation in the Faculty of Law and Justice at UNSW Sydney. Professor Bennett Moses has written about the limitations of AI and data-driven approaches to decision-making in government, law enforcement, and the legal system, and about why it is crucial for everyone to understand how smart machines are impacting our society.

Professor Svetha Venkatesh FAA FTSE: Alfred Deakin Professor, ARC Laureate Fellow, and Co-Director of the Applied Artificial Intelligence Institute at Deakin University, and a leading Australian computer scientist who has made fundamental and influential contributions to the field of activity and event recognition in multimedia data.

Professor Toby Walsh FAA: Scientia Professor of Artificial Intelligence in the School of Computer Science and Engineering at UNSW Sydney, a Laureate Fellow, a leading researcher in artificial intelligence, and leader of the Algorithmic Decision Theory group at CSIRO Data61.

Mr Edward Santow FAAL: served as Australia's Human Rights Commissioner from 2016 to 2021. He recently started as Industry Professor - Responsible Technology at the University of Technology Sydney, where he leads a major UTS initiative to build Australia's strategic capability in AI and new technology. The initiative will support Australian business and government to be leaders in responsible innovation by developing and using AI that is powerful, effective, and fair.


Chatbots Allow Educators to Delegate Repetitive Tasks and Focus on Teaching – EdTech Magazine: Focus on K-12

Chatbot Serves as Virtual College Adviser

Colleges have had success with chatbots for a few years, but high school students can now benefit from the first nationally accessible (and free) AI college adviser chatbot, Oli. The tool is the result of a partnership between Common App and Mainstay (formerly AdmitHub). Oli stands ready to help students around the clock with a wide range of tasks, such as selecting the right school, completing college and scholarship applications, and understanding financial aid forms. It responds to questions via text and also sends users deadline reminders, updates, and resources several times a week. When extra help is required, Oli connects students with a trained college adviser from College Advising Corps.


While education leaders and policymakers have been pushing tutoring as a solution to the crisis of COVID-19 learning disruption, supporting every student in need with a human tutor is neither feasible nor affordable. Researcher Neil Heffernan, a computer science professor and director of the Learning Sciences and Technologies graduate program at Worcester Polytechnic Institute, is working on technology to support human tutors.

These AI-powered tutor chatbots would democratize private tutoring, something that's not available to many students. Heffernan believes that on-demand, AI-driven tutor chatbots are an important addition to the learning experience and will integrate easily with a school system's existing technology.

"To me, AI is just a set of simple tools that we can use, in this case, to figure out some problems that teachers and kids are persistently having," says Heffernan. "The real magic is giving human tutors and teachers a little bit of information on what's going on so they can be more efficient."

AI-enabled chatbots are likely coming soon to a school near you to help with class scheduling, tutoring, college applications, collecting feedback and a lot more.

"Teachers shouldn't be scared," Heffernan says. "Like many things in AI, the chatbots are going to slowly come in and, I hope, actually help kids when humans are not available."

