Archive for the ‘Artificial Intelligence’ Category

Security In The Cloud Is Enhanced By Artificial Intelligence – Forbes

Artificial Intelligence

One of the initial hesitations for many enterprise organizations moving into the cloud over the last decade was the question of security. Significant amounts of money had been put into corporate firewalls, and now technology companies were suggesting that corporate data reside outside that security barrier. Early questions were addressed, and information began to move into the cloud. However, nothing stands still: the growing volume of data and network traffic intersects with the increasing complexity of attacks, and artificial intelligence (AI) is now being used to keep things safe.

The initial hesitation of enterprise organizations to move to the cloud was met by data centers improving hardware and networking security, while the cloud software providers, both cloud hosts and application providers, increased software security past what was initially offered in the cloud. Much of that involved taking knowledge from on-premises security and scaling it to the larger systems in the cloud. However, there's also more flexibility for attacks in the cloud, so new techniques had to be added. In addition, most organizations are in a hybrid ecosystem, so on-premises and cloud security must coordinate.

This creates an opportunity for AI to provide enhanced security. As mentioned with other machine solutions, security is a mix of different AI and non-AI techniques fitted to the problem. For instance, there's deep learning: supervised learning can be used for known attacks, while unsupervised learning can be used to detect anomalous events in a sparse dataset. Classification can even be done with statistical analysis of time series, without requiring AI at all; that can provide faster performance in appropriate cases.
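To make that last point concrete, here is a minimal sketch of the non-AI statistical approach: flagging points in a time series that sit far from the mean. The event counts, function name, and threshold are all hypothetical illustrations, not any vendor's actual detector.

```python
import statistics

def zscore_anomalies(series, threshold=2.5):
    """Return indices of points more than `threshold` sample standard
    deviations from the mean (the threshold is a tunable assumption)."""
    mean = statistics.fmean(series)
    stdev = statistics.stdev(series)
    return [i for i, x in enumerate(series) if abs(x - mean) / stdev > threshold]

# Mostly steady login counts with one large spike at index 5.
counts = [102, 98, 101, 99, 103, 480, 100, 97, 101, 102]
print(zscore_anomalies(counts))  # → [5]
```

Because it is just arithmetic over a window of counts, a check like this runs far faster than a neural network, which is the performance advantage alluded to above; the trade-off is that it only catches anomalies a simple statistic can express.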

On a quick tangent, let's talk about supervised learning and reinforcement learning. Some folks present them as different; I think of the latter as an extension of the former. Classic supervised learning is when input is labeled and the labels are important for the AI system, as they are used to understand and organize the data. When there are errors, humans add more annotations and labels to existing data, or they add more data. In reinforcement learning, the neural network is given feedback on how far the results of an iteration are from a set goal. That feedback can be fed back into the system by programmers changing weights or, in more advanced systems, by the AI software doing the comparison and adapting on its own. That is a type of supervision, but I'll admit it's a philosophical argument.
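One way to see the "extension" argument is that both paradigms drive the same kind of weight update; only the source of the error signal differs. The following toy sketch (a single weight, a hypothetical learning rate) is purely illustrative:

```python
def supervised_update(weight, x, label, lr=0.1):
    """Supervised: the error comes from a human-provided label."""
    error = label - weight * x
    return weight + lr * error * x

def goal_feedback_update(weight, x, goal, lr=0.1):
    """Reinforcement-style: the error is the distance of the outcome
    from a set goal, computed by the system itself rather than read
    from a per-example annotation."""
    error = goal - weight * x
    return weight + lr * error * x

w = 0.0
for _ in range(50):
    w = supervised_update(w, 2.0, 6.0)  # labeled example: f(2) should be 6
print(round(w, 2))  # → 3.0
```

The two functions are deliberately identical in structure; whether you call the feedback a label or a goal is exactly the philosophical distinction described above.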

Back on track, let's add another complexity. In the early days of the cloud, applications were larger but still followed a similar pattern of scale-up and scale-out. Now there's something changing both environments: containers. Simply put, a container is a piece of software that wraps around an application, providing basic services and even a virtual operating system. That allows containers to run on multiple operating systems regardless of internal application code. It also allows cloud platforms and servers to more finely control services to their clients in order to meet service level agreements (SLAs) that provide quality performance to the end customer.

"As more applications migrate to a container architecture, it's important for security to keep up," said Tanuj Gulati, CTO, Securonix. "Lightweight collectors can run within application containers, such as with Docker, collecting and sending relevant event logs to the more robust security monitoring applications running separately. This provides strong security in the new environments without adding a significant burden to application performance."
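A lightweight in-container collector of the kind Gulati describes might do little more than filter raw application logs down to security-relevant events before forwarding them. This sketch is a hypothetical illustration (the event names and JSON shape are assumptions), not Securonix's actual collector:

```python
import json

# Hypothetical set of event types worth forwarding to the monitoring service.
SECURITY_EVENTS = {"auth_failure", "privilege_escalation", "port_scan"}

def collect(log_lines):
    """Parse JSON log lines and keep only security-relevant events,
    so the heavy analysis happens in a separate monitoring application."""
    relevant = []
    for line in log_lines:
        event = json.loads(line)
        if event.get("type") in SECURITY_EVENTS:
            relevant.append(event)
    return relevant

logs = [
    '{"type": "request", "path": "/index"}',
    '{"type": "auth_failure", "user": "admin"}',
    '{"type": "request", "path": "/health"}',
]
print(collect(logs))  # only the auth_failure event survives the filter
```

Keeping the in-container piece to a filter-and-forward loop is what keeps the performance burden on the application small.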

In my discussion with Tanuj Gulati, he explained that Securonix first worked in the virtual machine (VM) environment in local data centers. That experience helped both in extending security to Docker and in integrating security between on-premises and cloud systems in a hybrid environment.

Artificial intelligence is focused on detection, but a complete system must also address the response to a perceived threat. The basic system can detect attacks, and, for known problems, rules can then determine responses. Unknown problems have unknown responses; humans must be flagged to handle those questionable transactions, and their feedback can then be given to reinforce the system. Depending on how complex a system is created, those new rules can be incorporated into the neural network or added to a rule set.
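The detect-respond-escalate loop described above can be sketched as a rule table with a human fallback. The rule names and actions here are invented for illustration:

```python
# Hypothetical mapping from detected attack types to automated responses.
RESPONSE_RULES = {
    "brute_force_login": "lock_account",
    "known_malware_hash": "quarantine_file",
}

def respond(detection):
    """Known detections trigger an automatic rule-based response;
    unknown ones are flagged for human review."""
    action = RESPONSE_RULES.get(detection)
    if action is None:
        return ("flag_for_human", detection)
    return ("auto_respond", action)

print(respond("brute_force_login"))  # → ('auto_respond', 'lock_account')
print(respond("novel_anomaly"))      # → ('flag_for_human', 'novel_anomaly')

# Human feedback reinforces the system: once reviewed, the new case
# becomes a rule and is handled automatically next time.
RESPONSE_RULES["novel_anomaly"] = "rate_limit_source"
print(respond("novel_anomaly"))      # → ('auto_respond', 'rate_limit_source')
```

Whether the reviewed case lands in a rule set like this or is folded back into the model's training data corresponds to the two options the paragraph above describes.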

The state of the industry, in both technology and human comfort levels, suggests that human oversight before responding to new attacks will remain the predominant method over the next few years. Advances will push the security industry toward more autonomous system action followed by reporting, review, and adjustment by humans, but that will happen slowly. What will help is that better explainability will be required: the deep learning black box will have to become more transparent.

Cloud computing and artificial intelligence are growing in parallel. The complexity of the cloud is driving the need for AI, but the complexity of AI is also creating the need for it to work better in the cloud environment with efficiency, transparency and control.

More here:
Security In The Cloud Is Enhanced By Artificial Intelligence - Forbes

Spring 2021 HiPerGator Symposium highlights artificial intelligence research – News – University of Florida

Winners of UF Research's Artificial Intelligence (AI) Research Catalyst Fund presented how they are pursuing multidisciplinary applications of artificial intelligence across the university at the Spring 2021 HiPerGator Symposium Tuesday.

Presenters covered a wide range of topics, sharing how they are using AI to tackle issues ranging from uncovering decades of ecological change to detecting online students at risk of dropping out. In the afternoon, attendees and presenters interacted in Q-and-A panels. Roughly 275 people attended the event.

"We established the AI Research Catalyst Fund as a way to encourage multidisciplinary teams of faculty and students to rapidly pursue imaginative applications of AI across the institution," said David Norton, UF's vice president for research. "The research shared in this Symposium clearly indicated that is what's happening. We anticipate that this initial research will lead to significant external research funding in the future."

The Symposium's focus on AI is part of a sweeping initiative to establish UF as a national leader in the field, which is widely expected to fuel future advances in research and workforce development. The projects presented at the Symposium will leverage the capabilities of HiPerGator AI, the most powerful AI supercomputer in higher education, which UF recently made widely available for teaching and research purposes. The supercomputer, as well as the broader initiative, is made possible by a $100 million public-private partnership with Silicon Valley-based technology company NVIDIA and UF alumnus and NVIDIA co-founder Chris Malachowsky.

The university's AI initiative empowers faculty to explore real-world problems, like how to eliminate bias and create culturally inclusive communications via machine learning. Sylvia Chan-Olmsted, a telecommunications professor in the College of Journalism and Communications and director of Media Consumer Research, posed the question of how to find ways to increase cultural resonance in sharing information.

"Fairness has been touted as one of the most important issues for responsible AI as AI-powered systems increasingly impact human minds," she said. "At the same time, access to information is essential in today's knowledge economy and fundamental to our democracy."

Obstacles that stem from cross-cultural communication mean certain groups of the population might be excluded, lack access, or be unable to participate fully. To address this, Chan-Olmsted and Huan Chen, an associate professor of advertising in the College of Journalism and Communications, will use social theories to build a culturally aware machine learning system that addresses communication in a multicultural society.

"The Spring HiPerGator Symposium was a success for UF in many ways," said Erik Deumens, director of UFIT Research Computing. "Having more than 275 faculty and students attend this virtual event was great."

The Symposium started three years ago as a fall-semester event to showcase graduate and postdoctoral work. When COVID-19 struck, the Symposium transitioned online, which enabled a much wider audience to attend and learn about the research happening at UF.

"Sharing ideas with the panelists and hearing how the catalyst awardees are using machine learning will spur even more ideas for using HiPerGator AI. Plus, the attendance by faculty representing 25 universities across the U.S. shows the interest in the University of Florida's AI initiative," Deumens said.

Emily Cardinali April 1, 2021

Read the rest here:
Spring 2021 HiPerGator Symposium highlights artificial intelligence research - News - University of Florida

What the CPSC Has to Say About Artificial Intelligence – The National Law Review

Wednesday, March 31, 2021

American households are increasingly connected internally through the use of artificially intelligent appliances.1 But who regulates the safety of those dishwashers, microwaves, refrigerators, and vacuums powered by artificial intelligence (AI)? On March 2, 2021, at a virtual forum attended by stakeholders from across the industry, the Consumer Product Safety Commission (CPSC) reminded us all that it has the final say on regulating AI and machine learning consumer product safety.

The CPSC is an independent agency comprised of five commissioners who are nominated by the president and confirmed by the Senate to serve staggered seven-year terms. With the Biden administration's shift away from the deregulation agenda of the prior administration and three potential opportunities to staff the commission, consumer product manufacturers, distributors, and retailers should expect increased scrutiny and enforcement.2

The CPSC held the March 2, 2021 forum to gather information on voluntary consensus standards, certification, and product-specification efforts associated with products that use AI, machine learning, and related technologies. Consumer product technology is advancing faster than the regulations that govern it, even with a new administration moving towards greater regulation. As a consequence, many believe that the safety landscape for AI, machine learning, and related technology is lacking. The CPSC, looking to fill the void, is gathering information through events like this forum with a focus on its next steps for AI-related safety regulation.

To influence this developing regulatory framework, manufacturers and importers of consumer products using these technologies must understand and participate in the ongoing dialogue about future regulation and enforcement. While guidance in these evolving areas is likely to be adaptive, the CPSCs developing regulatory framework may surprise unwary manufacturers and importers who have not participated in the discussion.

The CPSC defines AI as "any method for programming computers or products to enable them to carry out tasks or behaviors that would require intelligence if performed by humans" and machine learning as "an iterative process of applying models or algorithms to data sets to learn and detect patterns and/or perform tasks, such as prediction or decision making, that can approximate some aspects of intelligence."3 To inform the ongoing discussion on how to regulate AI, machine learning, and related technologies, the CPSC provides the following list of considerations:

Identification: Determine presence of AI and machine learning in consumer products. Does the product have AI and machine learning components?

Implications: Differentiate what AI and machine learning functionality exists. What are the AI and machine learning capabilities?

Impact: Discern how AI and machine learning dependencies affect consumers. Do AI and machine learning affect consumer product safety?

Iteration: Distinguish when AI and machine learning evolve and how this transformation changes outcomes. When do products evolve/transform, and do the evolutions/transformations affect product safety?4

These factors and corresponding questions will guide the CPSCs efforts to establish policies and regulations that address current and potential safety concerns.

As indicated at the March 2, 2021 forum, the CPSC is taking some of its cues for its fledgling initiative from organizations that have promulgated voluntary safety standards for AI, including Underwriters Laboratories (UL) and the International Organization for Standardization (ISO). UL 4600, the Standard for Safety for the Evaluation of Autonomous Products, covers fully autonomous systems that move, such as self-driving cars, along with applications in mining, agriculture, maintenance, and other vehicles, including lightweight unmanned aerial vehicles.5 Using a claim-based approach, UL 4600 aims to acknowledge the deviations from traditional safety practices that autonomy requires by assessing the reliability of the hardware and software necessary for machine learning, the ability to sense the operating environment, and other safety considerations of autonomy. The standard covers topics like safety case construction, risk analysis, safety-relevant aspects of the design process, testing, tool qualification, autonomy validation, data integrity, human-machine interaction (for non-drivers), life cycle concerns, metrics, and conformance assessment.6 While UL 4600 mentions the need for a security plan, it does not define what should be in that plan.

Since 2017, ISO has had an AI working group of 30 participating members and 17 observing members.7 This group, known as SC 42, develops international standards in the area of AI and for AI applications. SC 42 provides guidance to JTC 1, a joint technical committee of ISO and the International Electrotechnical Commission (IEC), and to other ISO and IEC committees. As a result of this work, ISO has published seven standards that address AI-related topics and sub-topics, including AI trustworthiness and big data reference architecture.8 Twenty-two standards remain in development.9

The CPSC might also look to the European Union's (EU) recent activity on AI, including a twenty-six-page white paper published in February 2020 that includes plans to propose new regulations this year.10 On the heels of the General Data Protection Regulation, the EU's regulatory proposal is likely to emphasize privacy and data governance in its efforts to "build[] trust in AI."11 Other areas of emphasis include human agency and oversight, technical robustness and safety, transparency, diversity, non-discrimination and fairness, societal and environmental wellbeing, and accountability.12

***

Focused on AI and machine learning, the CPSC is contemplating potential new consumer product safety regulations. Manufacturers and importers of consumer products that use these technologies would be well served to pay attention to, and participate in, future CPSC-initiated policymaking conversations, or risk being left behind or disadvantaged by what is to come.

-------------------------------------------------------

1 See Craig S. Smith, A.I. Here, There, Everywhere, N.Y. Times (Feb. 23, 2021), https://www.nytimes.com/2021/02/23/technology/ai-innovation-privacy-seniors-education.html.

2 Erik K. Swanholt & Kristin M. McGaver, Consumer Product Companies Beware! CPSC Expected to Ramp up Enforcement of Product Safety Regulations (Feb. 24, 2021), https://www.foley.com/en/insights/publications/2021/02/cpsc-enforcement-of-product-safety-regulations.

3 85 Fed. Reg. 77183-84.

4 Id.

5 Underwriters Laboratories, Presenting the Standard for Safety for the Evaluation of Autonomous Vehicles and Other Products, https://ul.org/UL4600 (last visited Mar. 30, 2021). It is important to note that autonomous vehicles fall under the regulatory purview of the National Highway Traffic Safety Administration. See NHTSA, Automated Driving Systems, https://www.nhtsa.gov/vehicle-manufacturers/automated-driving-systems.

6 Underwriters Laboratories, Presenting the Standard for Safety for the Evaluation of Autonomous Vehicles and Other Products, https://ul.org/UL4600 (last visited Mar. 30, 2021).

7 ISO, ISO/IEC JTC 1/SC 42, Artificial Intelligence, https://www.iso.org/committee/6794475.html (last visited Mar. 30, 2021).

8 ISO, Standards by ISO/IEC JTC 1/SC 42, Artificial Intelligence, https://www.iso.org/committee/6794475/x/catalogue/p/1/u/0/w/0/d/0 (last visited Mar. 30, 2021).

9 Id.

10 See Commission White Paper on Artificial Intelligence, COM (2020) 65 final (Feb. 19, 2020), https://ec.europa.eu/info/sites/info/files/commission-white-paper-artificial-intelligence-feb2020_en.pdf.

11 European Commission, Policies, A European approach to Artificial Intelligence, https://digital-strategy.ec.europa.eu/en/policies/european-approach-artificial-intelligence (last updated Mar. 9, 2021).

12 Commission White Paper on Artificial Intelligence, at 9, COM (2020) 65 final (Feb. 19, 2020), https://ec.europa.eu/info/sites/info/files/commission-white-paper-artificial-intelligence-feb2020_en.pdf.

See the article here:
What the CPSC Has to Say About Artificial Intelligence - The National Law Review

Global Healthcare Artificial Intelligence (AI) Deals Report 2020: Details of the Latest AI Deals, Oligonucleotides Including Aptamers Agreements…

DUBLIN, March 31, 2021 /PRNewswire/ -- The "Global Artificial Intelligence (AI) Partnering Terms and Agreements 2010 to 2020" report has been added to ResearchAndMarkets.com's offering.

This report contains a comprehensive listing of all artificial intelligence partnering deals announced since 2010, including financial terms where available, with over 440 links to online deal records of actual artificial intelligence partnering deals as disclosed by the deal parties.

The report provides a detailed understanding and analysis of how and why companies enter artificial intelligence partnering deals. The majority of deals are at the early development stage, whereby the licensee obtains a right, or an option right, to license the licensor's artificial intelligence technology or product candidates. These deals tend to be multicomponent, starting with collaborative R&D and proceeding to commercialization of outcomes.

This report provides details of the latest artificial intelligence and oligonucleotides (including aptamers) agreements announced in the healthcare sectors.

Understanding the flexibility of a prospective partner's negotiated deal terms provides critical insight into the negotiation process, in terms of what you can expect to achieve during the negotiation of terms. Whilst many smaller companies will be seeking details of the payment clauses, the devil is in the detail in terms of how payments are triggered; contract documents provide this insight where press releases and databases do not.

In addition, where available, records include contract documents as submitted to the Securities and Exchange Commission by companies and their partners.

Contract documents provide the answers to numerous questions about a prospective partner's flexibility on a wide range of important issues, many of which will have a significant impact on each party's ability to derive value from the deal.

In addition, a comprehensive appendix is provided, organized by artificial intelligence partnering company A-Z, deal type definitions, and artificial intelligence partnering agreement examples. Each deal title links via weblink to an online version of the deal record and, where available, the contract document, providing easy access to each contract document on demand.

The report also includes numerous tables and figures that illustrate the trends and activities in artificial intelligence partnering and dealmaking since 2010.

In conclusion, this report provides everything a prospective dealmaker needs to know about partnering in the research, development and commercialization of artificial intelligence technologies and products.

Report scope

Analyzing actual company deals and agreements allows assessment of the following:

Global Artificial Intelligence Partnering Terms and Agreements includes:

In Global Artificial Intelligence Partnering Terms and Agreements, the available contracts are listed by:

Key Topics Covered:

Executive Summary

Chapter 1 - Introduction

Chapter 2 - Trends in artificial intelligence dealmaking
2.1. Introduction
2.2. Artificial intelligence partnering over the years
2.3. Most active artificial intelligence dealmakers
2.4. Artificial intelligence partnering by deal type
2.5. Artificial intelligence partnering by therapy area
2.6. Deal terms for artificial intelligence partnering

Chapter 3 - Leading artificial intelligence deals
3.1. Introduction
3.2. Top artificial intelligence deals by value

Chapter 4 - Most active artificial intelligence dealmakers
4.1. Introduction
4.2. Most active artificial intelligence dealmakers
4.3. Most active artificial intelligence partnering company profiles

Chapter 5 - Artificial intelligence contracts dealmaking directory
5.1. Introduction
5.2. Artificial intelligence contracts dealmaking directory

Chapter 6 - Artificial intelligence dealmaking by technology type

Chapter 7 - Partnering resource center
7.1. Online partnering
7.2. Partnering events
7.3. Further reading on dealmaking

Appendices

For more information about this report visit https://www.researchandmarkets.com/r/ze6mu2

Media Contact:

Research and Markets Laura Wood, Senior Manager [emailprotected]

For E.S.T. Office Hours Call +1-917-300-0470
For U.S./CAN Toll Free Call +1-800-526-8630
For GMT Office Hours Call +353-1-416-8900

U.S. Fax: 646-607-1907 Fax (outside U.S.): +353-1-481-1716

SOURCE Research and Markets

http://www.researchandmarkets.com

Read more from the original source:
Global Healthcare Artificial Intelligence (AI) Deals Report 2020: Details of the Latest AI Deals, Oligonucleotides Including Aptamers Agreements...

A Solution for the Future Needs of artificial intelligence – ARC Viewpoints

Arm introduced the Armv9 architecture in response to the global demand for ubiquitous specialized processing with increasingly capable security and artificial intelligence (AI). Armv9 is the first new Arm architecture in a decade, building on the success of Armv8.

The new capabilities in Armv9 are designed to accelerate the move from general-purpose to more specialized compute across every application as AI, the Internet of Things (IoT) and 5G gain momentum globally.

To address the greatest technology challenge today, securing the world's data, the Armv9 roadmap introduces the Arm Confidential Compute Architecture (CCA). Confidential computing shields portions of code and data from access or modification while in use, even from privileged software, by performing computation in a hardware-based secure environment.

The Arm CCA will introduce the concept of dynamically created Realms, usable by all applications, in a region that is separate from both the secure and non-secure worlds. For example, in business applications, Realms can protect commercially sensitive data and code from the rest of the system while they are in use, at rest, and in transit.

The ubiquity and range of AI workloads demand more diverse and specialized solutions. For example, it is estimated there will be more than eight billion AI-enabled voice-assisted devices in use by the mid-2020s, and 90 percent or more of on-device applications will contain AI elements along with AI-based interfaces, like vision or voice.

To address this need, Arm partnered with Fujitsu to create the Scalable Vector Extension (SVE) technology, which is at the heart of Fugaku, the world's fastest supercomputer. Building on that work, Arm has developed SVE2 for Armv9 to enable enhanced machine learning (ML) and digital signal processing (DSP) capabilities across a wider range of applications.

SVE2 enhances the processing ability of 5G systems, virtual and augmented reality, and ML workloads running locally on CPUs, such as image processing and smart home applications. Over the next few years, Arm will further extend the AI capabilities of its technology with substantial enhancements in matrix multiplication within the CPU, in addition to ongoing AI innovations in its Mali GPUs and Ethos NPUs.

Go here to see the original:
A Solution for the Future Needs of artificial intelligence - ARC Viewpoints