Archive for the ‘Artificial Intelligence’ Category

Artificial intelligence is the future of cybersecurity – Technology Record

Cybercriminals are using artificial intelligence (AI) to evolve the sophistication of attacks at a rapid pace. In response, an increasing number of organisations are also adopting the technology as part of their cybersecurity strategies. According to research conducted in Mimecast's State of Email Security Report 2021, 39 per cent of organisations are utilising AI to bolster their email defences.

Although we're still in the early phases of these technologies and their application to cybersecurity, this is a rising trend. Businesses using advanced technologies such as AI and layered email defences, while also regularly training their employees in attack-resistant behaviours, will be in the best possible position to sidestep future attacks and recover quickly.

Mimecast is integrating AI capabilities to help halt some of cybersecurity's most pervasive threats. Take the use of tracking pixels in emails, for example, which both the BBC and ZDNet have called "endemic". Spy trackers embedded in emails have become ubiquitous, deployed often by marketers but also, increasingly, by cybercriminals looking to gather information to weaponise highly targeted business email compromise attacks.
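A tracking pixel is typically a tiny remote image whose unique URL reports back to the sender when the message is opened. As a rough illustration of how such trackers appear in an email's HTML and how a defence might disarm them (this is a minimal sketch, not Mimecast's actual detection logic; the URL and heuristics are invented):

```python
import re

# Match <img> tags with a remote src; tracking pixels are usually 1x1 images.
TRACKER_PATTERN = re.compile(r'<img[^>]*\bsrc="(https?://[^"]+)"[^>]*>', re.IGNORECASE)

def disarm_trackers(html: str) -> str:
    """Replace suspicious 1x1 remote images with an inert placeholder."""
    def replace(match):
        tag = match.group(0)
        # Heuristic: an explicit width/height of 1, or "pixel" in the tag,
        # flags a likely tracker. Real products use far richer signals.
        if re.search(r'(width|height)="?1"?', tag, re.IGNORECASE) or "pixel" in tag.lower():
            return "<!-- tracker removed -->"
        return tag
    return TRACKER_PATTERN.sub(replace, html)

email_html = '<p>Hello!</p><img src="https://t.example.com/open?id=42" width="1" height="1">'
print(disarm_trackers(email_html))  # <p>Hello!</p><!-- tracker removed -->
```

Disarming the image, rather than blocking the whole message, preserves the legitimate content while denying the sender its open-tracking signal.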

Mimecast's CyberGraph uses machine learning, a subset of AI, to block these hard-to-detect email threats, thus limiting reconnaissance and mitigating human error. CyberGraph disarms embedded trackers and uses machine learning and identity graph technologies to detect anomalous, malicious behaviour. Because the AI is continually learning, it requires no configuration, thus lessening the burden on IT teams and reducing the likelihood of unsafe misconfiguration. Plus, as an add-on to Mimecast Email Security, CyberGraph offers differentiated capability integrated into an existing secure email gateway, streamlining your email security strategy.

AI is here, and here to stay. Although its use is not a silver bullet, there's a strong case for it in the future of cybersecurity. Mimecast CyberGraph combines with many other layers of protection. It embeds colour-coded warning banners in emails to highlight detected risks, and it solicits user feedback. This feedback strengthens the machine learning model and can update banners across all similar emails to reflect the new risk levels.

As more cyber resilience strategies begin to adopt AI, it will be vital that people and technology continue to inform one another to provide agile protection against ever-evolving threat landscapes. Innovations such as CyberGraph provide evidence that AI has a promising value proposition in cybersecurity.

Duncan Mills is the senior product marketing manager at Mimecast

This article was originally published in the Summer 2021 issue of The Record. To get future issues delivered directly to your inbox, sign up for a free subscription.


Artificial Intelligence: Should You Teach It To Your Employees? – Forbes


AI is becoming strategic for many companies across the world. The technology can be transformative for just about any part of a business.

But AI is not easy to implement. Even top-notch companies have challenges and failures.

So what can be done? Well, one strategy is to provide AI education to the workforce.

"If more people are AI literate and can start to participate and contribute to the process, more problems, both big and small, across the organization can be tackled," said David Sweenor, who is the Senior Director of Product Marketing at Alteryx. "We call this the Democratization of AI and Analytics. A team of 100, 1,000, or 5,000 working on different problems in their areas of expertise certainly will have a bigger impact than if left in the hands of a few."

Just look at Levi Strauss & Co. Last year the company implemented a full portfolio of enterprise training programs, for all employees at all levels, focused on data and AI for business applications. For example, there is the Machine Learning Bootcamp, an eight-week program for learning Python coding, neural networks and machine learning, with an emphasis on real-world scenarios.
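An exercise of the kind such a bootcamp might cover can be sketched in a few lines of Python: training a single artificial neuron (a perceptron) to learn the logical AND function. This is an illustrative toy, not Levi's actual curriculum:

```python
# Train a single artificial neuron (a perceptron) on the logical AND function.
def train_perceptron(samples, epochs=20, lr=0.1):
    w = [0.0, 0.0]  # one weight per input
    b = 0.0         # bias term
    for _ in range(epochs):
        for (x1, x2), target in samples:
            # Prediction: weighted sum passed through a step activation.
            pred = 1 if (w[0] * x1 + w[1] * x2 + b) > 0 else 0
            error = target - pred
            # Perceptron learning rule: nudge weights toward the target.
            w[0] += lr * error * x1
            w[1] += lr * error * x2
            b += lr * error
    return w, b

AND_DATA = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train_perceptron(AND_DATA)
preds = [1 if (w[0] * x1 + w[1] * x2 + b) > 0 else 0 for (x1, x2), _ in AND_DATA]
print(preds)  # [0, 0, 0, 1]
```

The point of exercises like this is to demystify the mechanics: a "neural network" is, at bottom, weighted sums adjusted by feedback.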

"Our goal is to democratize this skill set and embed data scientists and machine learning practitioners throughout the organization," said Louis DeCesari, who is the Global Head of Data, Analytics, and AI at Levi Strauss & Co. "In order to achieve our vision of becoming the world's best digital apparel company, we need to integrate digital into all areas of the enterprise."

Granted, corporate training programs can easily become a waste. This is especially the case when there is not enough buy-in at the senior levels of management.

It is also important to have a training program that is more than just a bunch of lectures. "You need to have outcomes-based training," said Kathleen Featheringham, who is the Director of Artificial Intelligence Strategy at Booz Allen. "Focus on how AI can be used to push forward the mission of the organization, not just training for the sake of learning about AI. Also, there should be roles-based training. There is no one-size-fits-all approach to training, and different personas within an organization will have different training needs."

AI training can definitely be daunting because of the many topics and the complex concepts. In fact, it might be better to start with basic topics.

"A statistics course can be very helpful," said Wilson Pang, who is the Chief Technology Officer at Appen. "This will help employees understand how to interpret data and how to make sense of data. It will equip the company to make data-driven decisions."
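Even Python's standard library is enough for the kind of basic data interpretation Pang describes. A small sketch (the sales figures below are invented for illustration):

```python
import statistics

# Weekly sales figures (illustrative numbers, not from the article).
sales = [120, 135, 128, 150, 142, 138, 125, 160]

mean = statistics.mean(sales)
stdev = statistics.stdev(sales)  # sample standard deviation

# A rough 95% confidence interval for the mean (normal approximation).
margin = 1.96 * stdev / (len(sales) ** 0.5)
print(f"mean={mean:.1f}, stdev={stdev:.1f}, "
      f"95% CI ~ ({mean - margin:.1f}, {mean + margin:.1f})")
```

Knowing that a mean comes with a margin of error, rather than being a single hard number, is exactly the kind of literacy a statistics course builds.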

There also should be coverage of how AI can go off the rails. "There needs to be training on ethics," said Aswini Thota, who is a Principal Data Scientist at Bose Corporation. "Bad and biased data only exacerbate the issues with AI systems."
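A small, made-up example shows one reason that matters: a model's overall accuracy can look strong while hiding much worse performance on an underrepresented group, which is why per-group evaluation is a standard part of ethics training:

```python
# Each record is (group, whether the model's prediction was correct).
# The counts are invented to illustrate the effect.
records = (
    [("group_a", True)] * 90 + [("group_a", False)] * 5 +   # 90/95 correct
    [("group_b", True)] * 2 + [("group_b", False)] * 3      # 2/5 correct
)

def accuracy(rows):
    return sum(correct for _, correct in rows) / len(rows)

overall = accuracy(records)
by_group = {g: accuracy([r for r in records if r[0] == g])
            for g in ("group_a", "group_b")}
# Overall accuracy is 92%, yet group_b sees only 40% accuracy.
print(f"overall={overall:.0%}, group_a={by_group['group_a']:.0%}, "
      f"group_b={by_group['group_b']:.0%}")
```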

For the most part, effective AI is a team sport. So it should really involve everyone in an organization.

"The acceleration of AI adoption is inescapable; most of us experience AI on a daily basis whether we realize it or not," said Alex Spinelli, who is the Chief Technology Officer at LivePerson. "The more companies educate employees about AI, the more opportunities they'll provide to help them stay up-to-date as the economy increasingly depends on AI-inflected roles. At the same time, nurturing a workforce that's ahead of the curve when it comes to understanding and managing AI will be invaluable to driving the company's overall efficiency and productivity."

Tom (@ttaulli) is an advisor/board member to startups and the author of Artificial Intelligence Basics: A Non-Technical Introduction, The Robotic Process Automation Handbook: A Guide to Implementing RPA Systems and Implementing AI Systems: Transform Your Business in 6 Steps. He also has developed various online courses, such as one on COBOL.


A.I. Can Now Write Its Own Computer Code. That's Good News for Humans. – The New York Times

As soon as Tom Smith got his hands on Codex, a new artificial intelligence technology that writes its own computer programs, he gave it a job interview.

He asked if it could tackle the coding challenges that programmers often face when interviewing for big-money jobs at Silicon Valley companies like Google and Facebook. Could it write a program that replaces all the spaces in a sentence with dashes? Even better, could it write one that identifies invalid ZIP codes?

It did both instantly, before completing several other tasks. "These are problems that would be tough for a lot of humans to solve, myself included, and it would type out the response in two seconds," said Mr. Smith, a seasoned programmer who oversees an A.I. start-up called Gado Images. "It was spooky to watch."
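Both interview tasks are simple enough to sketch by hand. The version below is an illustrative human-written take, not Codex's actual output, and it assumes the standard US five-digit ZIP rule (optionally ZIP+4):

```python
import re

def spaces_to_dashes(sentence: str) -> str:
    """Replace every space in a sentence with a dash."""
    return sentence.replace(" ", "-")

def is_valid_zip(code: str) -> bool:
    """Check a US ZIP code: five digits, optionally ZIP+4 (e.g. 12345-6789)."""
    return re.fullmatch(r"\d{5}(-\d{4})?", code) is not None

print(spaces_to_dashes("replace all the spaces"))   # replace-all-the-spaces
print(is_valid_zip("78712"), is_valid_zip("7871"))  # True False
```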

Codex seemed like a technology that would soon replace human workers. As Mr. Smith continued testing the system, he realized that its skills extended well beyond a knack for answering canned interview questions. It could even translate from one programming language to another.

Yet after several weeks working with this new technology, Mr. Smith believes it poses no threat to professional coders. In fact, like many other experts, he sees it as a tool that will end up boosting human productivity. It may even help a whole new generation of people learn the art of computers, by showing them how to write simple pieces of code, almost like a personal tutor.

"This is a tool that can make a coder's life a lot easier," Mr. Smith said.

About four years ago, researchers at labs like OpenAI started designing neural networks that analyzed enormous amounts of prose, including thousands of digital books, Wikipedia articles and all sorts of other text posted to the internet.

By pinpointing patterns in all that text, the networks learned to predict the next word in a sequence. When someone typed a few words into these universal language models, they could complete the thought with entire paragraphs. In this way, one system, an OpenAI creation called GPT-3, could write its own Twitter posts, speeches, poetry and news articles.

Much to the surprise of even the researchers who built the system, it could even write its own computer programs, though they were short and simple. Apparently, it had learned from an untold number of programs posted to the internet. So OpenAI went a step further, training a new system, Codex, on an enormous array of both prose and code.

The result is a system that understands both prose and code, to a point. You can ask, in plain English, for "snow falling on a black background," and it will give you code that creates a virtual snowstorm. If you ask for "a blue bouncing ball," it will give you that, too.

"You can tell it to do something, and it will do it," said Ania Kubow, another programmer who has used the technology.

Codex can generate programs in 12 computer languages and even translate between them. But it often makes mistakes, and though its skills are impressive, it cant reason like a human. It can recognize or mimic what it has seen in the past, but it is not nimble enough to think on its own.

Sometimes, the programs generated by Codex do not run. Or they contain security flaws. Or they come nowhere close to what you want them to do. OpenAI estimates that Codex produces the right code 37 percent of the time.

When Mr. Smith used the system as part of a beta test program this summer, the code it produced was impressive. But sometimes, it worked only if he made a tiny change, like tweaking a command to suit his particular software setup or adding a digital code needed for access to the internet service it was trying to query.

In other words, Codex was truly useful only to an experienced programmer.

But it could help programmers do their everyday work a lot faster. It could help them find the basic building blocks they needed or point them toward new ideas. Using the technology, GitHub, a popular online service for programmers, now offers Copilot, a tool that suggests your next line of code, much the way autocomplete tools suggest the next word when you type texts or emails.

"It is a way of getting code written without having to write as much code," said Jeremy Howard, who founded the artificial intelligence lab Fast.ai and helped create the language technology that OpenAI's work is based on. "It is not always correct, but it is just close enough."

Mr. Howard and others believe Codex could also help novices learn to code. It is particularly good at generating simple programs from brief English descriptions. And it works in the other direction, too, by explaining complex code in plain English. Some, including Joel Hellermark, an entrepreneur in Sweden, are already trying to transform the system into a teaching tool.

The rest of the A.I. landscape looks similar. Robots are increasingly powerful. So are chatbots designed for online conversation. DeepMind, an A.I. lab in London, recently built a system that instantly identifies the shape of proteins in the human body, which is a key part of designing new medicines and vaccines. That task once took scientists days or even years. But those systems replace only a small part of what human experts can do.

In the few areas where new machines can instantly replace workers, they are typically in jobs the market is slow to fill. Robots, for instance, are increasingly useful inside shipping centers, which are expanding and struggling to find the workers needed to keep pace.

With his start-up, Gado Images, Mr. Smith set out to build a system that could automatically sort through the photo archives of newspapers and libraries, resurfacing forgotten images, automatically writing captions and tags and sharing the photos with other publications and businesses. But the technology could handle only part of the job.

It could sift through a vast photo archive faster than humans, identifying the kinds of images that might be useful and taking a stab at captions. But finding the best and most important photos and properly tagging them still required a seasoned archivist.

"We thought these tools were going to completely remove the need for humans, but what we learned after many years was that this wasn't really possible; you still needed a skilled human to review the output," Mr. Smith said. "The technology gets things wrong. And it can be biased. You still need a person to review what it has done and decide what is good and what is not."

Codex extends what a machine can do, but it is another indication that the technology works best with humans at the controls.

"A.I. is not playing out like anyone expected," said Greg Brockman, the chief technology officer of OpenAI. "It felt like it was going to do this job and that job, and everyone was trying to figure out which one would go first. Instead, it is replacing no jobs. But it is taking away the drudge work from all of them at once."


AAMC Comments on National Artificial Intelligence Initiative – AAMC

The AAMC submitted a letter to the White House Office of Science and Technology Policy (OSTP) and the National Science Foundation (NSF) on Sept. 1 in response to a request for information (RFI) geared toward developing a shared, national artificial intelligence (AI) research infrastructure that is referred to as the National Artificial Intelligence Research Resource (NAIRR).

The RFI will inform the work of the NAIRR Task Force, which has been directed by Congress to develop a first-of-its-kind AI infrastructure that provides AI researchers and students across scientific disciplines with access to computational resources, high-quality data, educational tools, and user support.

In its comments, the AAMC expressed strong support for Congress's prioritization of AI, which has tremendous potential to advance human health and usher in a new era of biomedicine. The AAMC also commended the aspirations of the OSTP and the NSF to develop an inclusive AI infrastructure that allows all of America's diverse AI researchers to fully participate in exploring innovative ideas for advancing AI, including communities, institutions, and regions that have been traditionally underserved.

The letter outlined strategies on how the NAIRR should reinforce principles of ethical and responsible research and development of AI. In particular, the AAMC underscored the necessity of building a NAIRR that identifies and addresses systemic inequities at the interface of AI and biomedicine, mitigates bias by promoting representative datasets and algorithms, provides users with a data management and sharing plan that promotes community engagement and transparency, and fosters a diverse AI workforce and leadership.

Given the vast amounts of data, industries, and applications that will converge with the NAIRR, the AAMC also noted the importance of a multisector approach for identifying, researching, and mitigating bias, discrimination, health inequities, and social determinants of health, all components that currently preclude the formation of an equitable AI framework that benefits all communities equally.

Finally, the AAMC recommended that the NAIRR partner with diverse communities in the development of this framework, thereby drawing on diverse expertise and fostering community trust. On Aug. 18, the OSTP and the NSF extended the RFI's public comment period by one month to Oct. 1, providing further opportunity for researchers and academic institutions to respond.


Ethical Artificial Intelligence is Focus of New Robotics Program – UT News | The University of Texas at Austin

AUSTIN, Texas – Ethics will be at the forefront of robotics education thanks to a new University of Texas at Austin program that will train tomorrow's technologists to understand the positive and potentially negative implications of their creations.

Today, much robotic technology is developed without considering its potentially harmful effects on society, including how these technologies can infringe on privacy or further economic inequity. The new UT Austin program will fill an important educational gap by prioritizing these issues in its curriculum.

"In the next 10 years, we are going to live more closely alongside robots, and we want to be sure that those robots are fair, inclusive and free from bias," said Junfeng Jiao, associate professor in the School of Architecture and the program lead. "And because the robots we create are reflections of ourselves, it is imperative that technologists receive an excellent ethics education. We want our students to work directly with companies to create practices and technologies that are equitable and fair."

Called CREATE (Convergent, Responsible, and Ethical AI Training Experience for Roboticists), the program will offer graduate coursework and professional development in responsible design and implementation.

CREATE is a collaboration among Texas Robotics, industry partners and the UT grand challenge research initiative Good Systems, which seeks to design AI technologies that benefit society. The program was recently awarded a $3 million grant from the National Science Foundation through its Research Traineeship Program, which will support 32 doctoral students with coursework, mentorship, professional development, internships, and research and public service opportunities.

Students will focus specifically on how to ethically design, develop and deploy service robots, which can make deliveries, work in factories and clean homes. They will consider factors such as how to design delivery service robots so they are more inclusive and can reach all people, and how to ensure home service robots protect occupants' privacy. Several notable robotics companies have also said they will offer students internships, including Sony AI, Bosch, Amazon, SparkCognition and Apptronik.

Researchers involved in the program cross many disciplines at UT, including computer science, architecture, engineering, information, and public affairs. Faculty members from these units will teach courses as part of the curriculum, and two faculty members will mentor each trainee during the five-year program. Additionally, each trainee will receive help with career development, grant writing, and exposure to local startup companies.

More than half of the program's trainees will be chosen from groups underrepresented in STEM education, including women and racial minorities, to help bring much-needed diversity to the field of robotics. The coursework component, which includes five classes in ethical robotics, will be institutionalized as a graduate portfolio program and will be available to all STEM graduate students at UT Austin.

"This program will enable us to educate well-rounded roboticists who are not only grounded in the technical details of designing and building autonomous robots but also are equipped to fully consider the societal implications of their work," said Peter Stone, director of Texas Robotics and a professor of computer science. "That is a missing part in robotics education in the U.S. and the world. We believe this is a game changer for the future of robotics."
