Archive for the ‘Artificial Intelligence’ Category

Artificial intelligence and the classroom of the future | BrandeisNOW – Brandeis University

By Tessa Venell '08 | Nov. 19, 2020

Imagine a classroom of the future where teachers work alongside artificial intelligence partners to ensure no student gets left behind. The AI partner's careful monitoring picks up on a student in the back who has been quiet and still for the whole class, and it prompts the teacher to engage that student. When called on, the student asks a question, the teacher clarifies the material that has been presented, and every student comes away with a better understanding of the lesson.

This is part of a larger vision of future classrooms in which human instruction and AI technology interact to improve educational environments and the learning experience. James Pustejovsky, the TJX Feldberg Professor of Computer Science, is working toward that vision with a team led by the University of Colorado Boulder, as part of the new $20 million National Science Foundation-funded AI Institute for Student-AI Teaming.

The research will play a critical role in ensuring the AI agent is a natural partner in the classroom, with language and vision capabilities that allow it not only to hear what the teacher and each student are saying, but also to notice gestures (pointing, shrugs, shaking a head), eye gaze, and facial expressions (student attitudes and emotions).

Pustejovsky took some time to answer questions from BrandeisNOW about his research.

How does your research help build this classroom of the future?

For the past five years, we have been working to create a multimodal embodied avatar system, called Diana, that interacts with a human to perform various tasks. She can talk, listen, see, and respond to language and gesture from her human partner, and then perform actions in a 3D simulation environment called VoxWorld. This is work we have been conducting with our collaborators at Colorado State University, led by Ross Beveridge in their vision lab. We are working together again (CSU and Brandeis) to help bring this kind of embodied human-computer interaction into the classroom. Nikhil Krishnaswamy, my former Ph.D. student and co-developer of Diana, has joined CSU as part of their team.

How does it work in the context of a classroom setting?

At first, it's disembodied: a virtual presence on an iPad, for example, where it is able to recognize the voices of different students. So imagine a classroom: six to 10 children in grade school. The initial goal in the first year is to have the AI partner passively following the different students, in the way they're talking and interacting, and then eventually the partner will learn to intervene to make sure that everyone is equitably represented and participating in the classroom.

Are there other settings where Diana would be useful besides a classroom?

Let's say I've got a Julia Child app on my iPad and I want her to help me make bread. If I start the program on the iPad, the Julia Child avatar would be able to understand my speech. If I have my camera set up, the program allows me to be completely embedded and embodied in a virtual space with her so that she can help me.
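
The classroom behavior described above, an AI partner passively tracking how students talk and interact and then nudging the teacher, suggests logic along these lines. Below is a minimal Python sketch, entirely hypothetical: the Diana/VoxWorld code is not shown in the article, and every name and threshold here is invented for illustration.

```python
import time
from collections import defaultdict

QUIET_SECONDS = 300  # invented threshold: 5 minutes with no observed activity

# student_id -> timestamp of the last observed activity in any modality
last_activity = defaultdict(time.time)

def observe(student_id: str, modality: str) -> None:
    """Record a speech, gesture, or gaze event for a student.

    A fuller system would weight modalities differently and track
    affect; here any observed event simply counts as activity.
    """
    last_activity[student_id] = time.time()

def students_to_engage() -> list[str]:
    """Students with no observed activity for QUIET_SECONDS or more."""
    now = time.time()
    return [sid for sid, t in last_activity.items() if now - t >= QUIET_SECONDS]
```

Crucially, in the vision described above the AI partner does not act on this list itself; it surfaces it so the teacher can engage the student.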

Screenshot of the embodied avatar system Diana.

How does she help you?

She would look at my table and say, "Okay, do you have everything you need?" And then I'd say, "I think so." So the camera will be on, and if you had all your baking materials laid out on your table, she would scan the table. She'd say, "I see flour, yeast, salt, and water, but I don't see any utensils: you're going to need a cup, you're going to need a teaspoon." After you had everything you needed, she would tell you to put the flour in that bowl over there. And then she'd show you how to mix it.
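
The table-scanning step Pustejovsky describes reduces to comparing a detected set of objects against a required list. A minimal sketch, with the detector left as a stand-in (no real vision API is named in the article):

```python
REQUIRED = {"flour", "yeast", "salt", "water", "cup", "teaspoon", "bowl"}

def detect_objects(frame) -> set[str]:
    """Stand-in for whatever vision model scans the camera frame."""
    raise NotImplementedError("plug a real object detector in here")

def table_report(frame) -> str:
    """Produce the kind of reply the avatar gives in the example above."""
    seen = detect_objects(frame)
    found = sorted(REQUIRED & seen)
    missing = sorted(REQUIRED - seen)
    if not missing:
        return "Okay, you have everything you need."
    return f"I see {', '.join(found)}, but you're going to need: {', '.join(missing)}."
```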

Is that where Diana comes in?

Yes, Diana is basically becoming an embodied presence in the human-computer interaction: she can see what you're doing, and you can see what she's doing. In a classroom interaction, Diana could help guide students through lesson plans, through dialogue and gesture, while also monitoring the students' progress, mood, and levels of satisfaction or frustration.

Does Diana have any uses in virtual learning?

Using an AI partner for virtual learning could be a fairly natural interaction. In fact, with a platform such as Zoom, many of the computational issues are actually easier, since the voice and video tracks of different speakers have already been segmented and identified. Furthermore, in a Hollywood Squares-style display of all the students, a virtual AI partner may not seem as unnatural, and Diana might more easily integrate with the students online.

What stage is the research at now?

Within the context of the CU Boulder-led AI Institute, the research has just started. It's a five-year project, and it's just getting off the ground. This is exciting new research that is starting to answer questions about using our avatar and agent technology with students in the classroom.
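
Pustejovsky's point about Zoom above is worth unpacking: once the platform has already attributed audio to individual speakers, measuring participation becomes simple aggregation rather than a hard diarization problem. A sketch, assuming segments arrive as (speaker, start, end) tuples (this format is an assumption, not a real Zoom API):

```python
from collections import defaultdict

def talk_time_shares(segments):
    """Given (speaker_id, start_sec, end_sec) segments, return each
    speaker's share of the total talk time."""
    totals = defaultdict(float)
    for speaker, start, end in segments:
        totals[speaker] += end - start
    grand_total = sum(totals.values()) or 1.0  # avoid division by zero
    return {speaker: t / grand_total for speaker, t in totals.items()}

# An AI partner could flag students whose share stays persistently low.
print(talk_time_shares([("ana", 0, 30), ("ben", 30, 90), ("ana", 90, 100)]))
# -> {'ana': 0.4, 'ben': 0.6}
```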

The research is funded by the National Science Foundation, and partners with CU Boulder on the research include Brandeis University, Colorado State University, the University of California, Santa Cruz, UC Berkeley, Worcester Polytechnic Institute, Georgia Institute of Technology, University of Illinois at Urbana-Champaign, and University of Wisconsin-Madison.

See the original post:
Artificial intelligence and the classroom of the future | BrandeisNOW - Brandeis University

New York City wants to restrict artificial intelligence in hiring – CBS News

New York City is trying to rein in the algorithms used to screen job applicants. It's one of the first cities in the U.S. to try to regulate an increasingly common and opaque hiring practice.

The city council is considering a bill that would require potential employers to notify job candidates about the use of these tools, referred to as "automated decision systems." Companies would also have to complete an annual audit to make sure the technology doesn't result in bias.
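
The bill does not spell out what such an audit must compute, but a common starting point in U.S. employment practice is the EEOC's "four-fifths rule": a group's selection rate should be at least 80% of the highest group's rate. A minimal sketch of that check (the numbers are invented):

```python
def adverse_impact_ratios(selected: dict, applied: dict) -> dict:
    """Each group's selection rate as a fraction of the highest group's rate."""
    rates = {g: selected[g] / applied[g] for g in applied}
    top_rate = max(rates.values())
    return {g: rate / top_rate for g, rate in rates.items()}

ratios = adverse_impact_ratios(selected={"group_a": 40, "group_b": 18},
                               applied={"group_a": 100, "group_b": 100})
flagged = [g for g, r in ratios.items() if r < 0.8]  # fails the four-fifths rule
print(ratios, flagged)  # {'group_a': 1.0, 'group_b': 0.45} ['group_b']
```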

The move comes as the use of artificial intelligence in hiring skyrockets, increasingly replacing human screeners. Fortune 500 companies including Delta, Dunkin, Ikea and Unilever have turned to AI for help assessing job applicants. These tools run the gamut from a simple text reader that screens applications for particular words and phrases, to a system that evaluates videos of potential applicants to judge their suitability for the job.
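
The simplest tool on that spectrum, a text reader screening for particular words and phrases, amounts to little more than the following sketch (the keyword list is illustrative, not drawn from any real product):

```python
KEYWORDS = {"python", "sql", "project management"}  # illustrative only

def keyword_score(application_text: str) -> int:
    """Count how many screening keywords appear in an application."""
    text = application_text.lower()
    return sum(1 for kw in KEYWORDS if kw in text)
```

Applications scoring below some cutoff may never reach a human reviewer, which is why what goes into the keyword list matters so much.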

"We have all the reasons to believe that every major company uses some algorithmic hiring," Julia Stoyanovich, a founding director of the Center for Responsible AI at New York University, said in a recent webinar.

At a time when New Yorkers are suffering double-digit unemployment, legislators are concerned about the brave new world of digital hiring. Research has shown that AI systems can introduce more problems than they solve. Facial-recognition tools that use AI have shown trouble identifying the faces of Black people and determining people's sex, and have wrongly matched members of Congress to a mugshot database.

In perhaps the most notorious example of AI bias, a hiring tool developed internally at Amazon had to be scrapped because it discriminated against women. The tool was developed using a 10-year history of resumes submitted to the company, whose workforce skews male. As a result, the software effectively "taught" itself that male candidates were preferable and demoted applications that included the word "women," or the names of two all-women's colleges. While the tool was never used, it demonstrates the potential pitfalls of substituting machine intelligence for human judgment.
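
The mechanism is easy to reproduce in miniature. The toy example below (not Amazon's system, which was never released; scikit-learn is assumed installed, and the four "resumes" are invented) trains a classifier on historically skewed outcomes and shows it assigning a negative weight to the token "women":

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

# Invented history: resumes labeled by past hiring outcomes that skew male.
resumes = [
    "chess club captain software engineer",           # hired
    "software engineer open source contributor",      # hired
    "women's chess club captain software engineer",   # rejected
    "women's college graduate software engineer",     # rejected
]
labels = [1, 1, 0, 0]  # 1 = hired historically, 0 = rejected

vec = CountVectorizer()
X = vec.fit_transform(resumes)
model = LogisticRegression().fit(X, labels)

# The learned weight for "women" comes out negative: the model has
# "taught" itself that the word predicts rejection.
print(model.coef_[0][vec.vocabulary_["women"]])
```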

"As legislators in a city home to some of the world's largest corporations, we must intervene and prevent unjust hiring," city council member Laurie Cumbo, the bill's sponsor, said at a hearing for the legislation last week.

Several civil rights groups say New York's proposed bill doesn't go far enough. A dozen groups including the AI Now Institute, New York Civil Liberties Union and New York Communities for Change issued a letter last week pushing for the law to cover more types of automated tools and more steps in the hiring process. They want the measure to include heavier penalties, enabling people to sue if they've been passed over for a job because of biased algorithms. This would be in line with existing employment law, which allows applicants to sue for discrimination because of race or sex.

"If we pass [the bill] as it is worded today, it will be a rubber stamp for some of the worst forms of algorithmic discrimination," Albert Fox Cahn, executive director of the Surveillance Technology Oversight Project, told the city council.

"We need much stronger penalties," he said. "Just as we do with every other form of employment discrimination, we need that private-sector enforcement."

Alicia Mercedes, a spokesperson for Cumbo, the bill's sponsor, said the bill is still in its early stages and is likely to change in response to feedback.

"We're committed to seeing this legislation come out as something that can be effective, so we will of course take any input that we can get from those who are working on these issues every day," Mercedes said.

For hiring professionals, the main appeal of AI is its capacity to save time. But technologists have also touted the potential for automated programs, if used correctly, to eliminate human biases, such as the well-documented tendency for hiring managers to overlook African-American applicants or look favorably on candidates who physically resemble the hiring manager.

"When only a human reviews a resume, unfortunately, humans can't un-see the things that cause unconscious biases if someone went to the same alma mater or grew up in same community," said Athena Karp, CEO of HiredScore, an AI hiring platform.

Karp said she supports the New York bill. "If technologies are used in hiring, the makers of technology, and candidates, can and should know how they're being used," she said at the hearing.

In the U.S., the only place where this is currently the case is Illinois, whose Biometric Privacy Act requires employers to tell candidates if AI is being used to evaluate them and allows candidates to opt out. On the federal level, a bill to study bias in algorithms has been introduced in Congress. In New York, most job candidates have no clue they're being screened by software, even those who are computer scientists themselves.

"I've received my fair share of job and internship rejections in my graduate and undergraduate careers," said Lauren D'Arinzo, a master's degree candidate in data science and AI at New York University. "It is unsettling to me that a future employer might disregard my application based on the output of an algorithm."

She added, "What worries me most is, had I not been recruited into a project explicitly doing research in this space, I would likely not have even known that these types of tools are regularly used by Fortune 500 companies."

Link:
New York City wants to restrict artificial intelligence in hiring - CBS News

Artificial Intelligence and Machine Learning, 5G and IoT will be the Most Important Technologies in 2021, According to new IEEE Study – PRNewswire

PISCATAWAY, N.J., Nov. 19, 2020 /PRNewswire/ -- IEEE, the world's largest technical professional organization dedicated to advancing technology for humanity, today released the results of a survey of Chief Information Officers (CIOs) and Chief Technology Officers (CTOs) in the U.S., U.K., China, India and Brazil regarding the most important technologies for 2021 overall, the impact of the COVID-19 pandemic on the speed of their technology adoption, and the industries expected to be most impacted by technology in the year ahead.

2021 Most Important Technologies and Challenges

Which will be the most important technologies in 2021? Among total respondents, nearly one-third (32%) say AI and machine learning, followed by 5G (20%) and IoT (14%).

Manufacturing (19%), healthcare (18%), financial services (15%) and education (13%) are the industries that surveyed CIOs and CTOs believe will be most impacted by technology in 2021. At the same time, more than half (52%) of CIOs and CTOs see their biggest challenge in 2021 as dealing with aspects of COVID-19 recovery in relation to business operations. These challenges include a permanent hybrid remote and office work structure (22%), office and facilities reopenings and returns (17%), and managing permanent remote working (13%). However, 11% said their biggest challenge will be the agility to stop and start IT initiatives as this unpredictable environment continues. Another 11% cited online security threats, including those related to remote workers, as the biggest challenge they see in 2021.

Technology Adoption, Acceleration and Disaster Preparedness due to COVID-19

CIOs and CTOs surveyed have sped up adopting some technologies due to the pandemic:

The adoption of IoT (42%), augmented and virtual reality (35%), and video conferencing (35%) technologies has accelerated due to the global pandemic.

Compared to a year ago, CIOs and CTOs overwhelmingly (92%) believe their company is better prepared to respond to a potentially catastrophic interruption such as a data breach or natural disaster. What's more, of those who say they are better prepared, 58% strongly agree that COVID-19 accelerated their preparedness.

When asked which technologies will have the greatest impact on global COVID-19 recovery, one in four (25%) of those surveyed said AI and machine learning.

Cybersecurity

The top two concerns for CIOs and CTOs when it comes to the cybersecurity of their organizations are security issues related to the mobile workforce, including employees bringing their own devices to work (37%), and ensuring the Internet of Things (IoT) is secure (35%). This is not surprising, since the number of connected devices such as smartphones, tablets, sensors, robots and drones is increasing dramatically.

Slightly more than one-third (34%) of CIO and CTO respondents said they can track and manage 26-50% of devices connected to their business, while 20% of those surveyed said they could track and manage 51-75% of connected devices.

About the Survey

"The IEEE 2020 Global Survey of CIOs and CTOs" surveyed 350 CIOs or CTOs in the U.S., China, U.K., India and Brazil from September 21 to October 9, 2020.

About IEEE

IEEE is the world's largest technical professional organization dedicated to advancing technology for the benefit of humanity. Through its highly cited publications, conferences, technology standards, and professional and educational activities, IEEE is the trusted voice in a wide variety of areas ranging from aerospace systems, computers, and telecommunications to biomedical engineering, electric power, and consumer electronics.

SOURCE IEEE

https://www.ieee.org

Read this article:
Artificial Intelligence and Machine Learning, 5G and IoT will be the Most Important Technologies in 2021, According to new IEEE Study - PRNewswire

Joint Artificial Intelligence Center Has Substantially Grown To Aid The Warfighter – Department of Defense

It was just two years ago that the Joint Artificial Intelligence Center was created to harness the transformative potential of artificial intelligence technology for the benefit of America's national security, and it has grown substantially from humble beginnings.

Dana Deasy, the Defense Department's chief information officer, and Marine Corps Lt. Gen. Michael Groen, the director of the JAIC, discussed the center's growth and goals virtually from the Pentagon at a FedTalks event during National AI Week.

''One of the things we've wanted to keep in our DNA is this idea that we want to hire a lot of diversity of thought into [JAIC],'' Deasy said, ''but yet do that in a way where that diversity of thought coalesces around a couple of really important themes.''

When the JAIC began, it needed to take on some projects that could show people it was nimble and agile, and that it had the talent to give something meaningful back to the Defense Department, he noted.

So JAIC started in a variety of different places, Deasy said. ''But now as we've matured, we really need to focus on what was the core mission for JAIC. And that was, we have to figure out what the role is that AI plays in enabling the warfighter. And I've always said that JAIC should be central to any and all future discussions in that place,'' the CIO said.

''Transformation is our vision,'' Groen said.

''So, it's a big job. We discovered pretty quickly that seeding the environment with lots of small AI projects was not transformational in and of itself. We knew we had to do more. And so, what we're calling JAIC 2.0 is a focused transition in a couple of ways. [For example], we're going to continue to build AI products, because the talent in the JAIC is just superb,'' the JAIC director said.

Groen noted that the JAIC is thinking about solution spaces for a broad base of customers, which really gets it focused.

''You know, the application and the utilization of AI across the department is very uneven. We have places that are really good, where some of the services are just doing fantastic things. And we have some places, large-scale enterprises with fantastic use cases that really could use AI, but they don't know where to start. So, we're going to shift from a transformational perspective to start looking at that broad base of customers and enable them,'' he said.

The JAIC is going to continue to work with the military services on the cutting edge of AI and its application, especially in the integration space, where it is bringing together warfighting functions such as intelligence and fires, or intelligence and maneuver, Groen said. ''The warfighting functions have superb stovepipes. But now we need to bring those stovepipes together and integrate them through AI,'' he added.


The history books of the future will say the JAIC was about the joint common foundation, Deasy said. ''JAIC could never do all of the AI initiatives within the Department of Defense, nor was it ever created to do that. But what we did say was that for the people who are going to roll up [their] sleeves and seriously start trying to leverage AI to help the warfighter every day, at the core of JAIC's success has got to be this joint common foundation,'' he noted.

Deasy noted that the JAIC was powerful and very real.

Into next year, he added, the JAIC will stand up some basic services, taking a minimum viable product approach: building on a lot of native services from cloud providers, then adding its own services on top.

''And where we hope to grow the technical platform is a place where people can bring their data, places where we can offer data services, data conditioning, maybe data labeling, and we can start curating data,'' Deasy projected. ''One of the things we'd really like to be able to do for the department is start cataloging and storing algorithms and data. So now we'll have an environment so we can share training data, for example, across programs.''
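
A hedged sketch of what "cataloging and storing algorithms and data" might look like in practice (this is not JAIC's actual schema, which the article does not describe): a minimal record that lets training data be discovered, shared, and integrity-checked across programs.

```python
from dataclasses import dataclass, field

@dataclass
class DatasetRecord:
    name: str
    owner_program: str
    storage_uri: str              # where the data lives on the platform
    sha256: str                   # content hash, for integrity checks
    labeling_status: str          # e.g. "raw", "labeled", "curated"
    tags: list = field(default_factory=list)

catalog = {}  # name -> DatasetRecord

def register(record: DatasetRecord) -> None:
    catalog[record.name] = record

def find_by_tag(tag: str) -> list:
    """Let another program discover shareable training data by tag."""
    return [r for r in catalog.values() if tag in r.tags]
```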

The modernized software foundation now gives the JAIC a platform on which it can build AI, Groen said, adding that AI has to be a conscious application layer, applied by leveraging the platform and the things that digital modernization provides.

''But when you think of it that way, holy cow, what a platform to operate from,'' he said.

So now the JAIC will really have a place where the joint force can effectively operate, Groen said, adding that it can start integrating intel and fires, intel and maneuver, command and control, the logistics enterprise, the combat logistics enterprise, and the broader support enterprise.

''You can't do any of that without a platform, and you can't do any of that without those digital modernization tenets,'' the JAIC director said.

If JAIC is going to have the whole force operating at the speed of machines, then it has to start bringing these artificial intelligence applications together into an ecosystem, Groen said, noting that it has to be a trusted ecosystem, meaning "we actually have to know, if we're going to bring data into a capability, we have to know that's good data."

''So how do we build an ecosystem so that we can know the provenance of data, and we can ensure that the algorithms are tested in a satisfactory way, so that we can comfortably and safely integrate data and decision-making across warfighting functions?'' the JAIC director asked. ''That's the kind of stuff that I think is really exciting, because that's the real transformation that we're after.''
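
One concrete piece of the "trusted ecosystem" Groen describes is knowing a dataset's provenance before using it. A minimal sketch of such a gate (an assumption about how it might be implemented, not a JAIC design): verify a file's content hash against its cataloged value before admitting it to a pipeline.

```python
import hashlib
from pathlib import Path

def verify_provenance(path: Path, expected_sha256: str) -> bool:
    """Return True only if the file's hash matches the cataloged value."""
    digest = hashlib.sha256(path.read_bytes()).hexdigest()
    return digest == expected_sha256

# A pipeline would refuse any dataset that fails this check before
# integrating it into decision-making.
```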

See the original post:
Joint Artificial Intelligence Center Has Substantially Grown To Aid The Warfighter - Department of Defense

Artificial intelligence could be used to hack connected cars and drones, warn security experts – ZDNet

Cyber criminals could exploit emerging technologies including artificial intelligence and machine learning to help conduct attacks against autonomous cars, drones and Internet of Things-connected vehicles, according to a report from the United Nations, Europol and cybersecurity company Trend Micro.

While AI and machine learning can bring "enormous benefits" to society, the same technologies can also bring a range of threats that can enhance current forms of crime or even lead to the evolution of new malicious activity.

"As AI applications start to make a major real-world impact, it's becoming clear that this will be a fundamental technology for our future," said Irakli Beridze, head of the Centre for AI and Robotics at the United Nations Interregional Crime and Justice Research Institute. "However, just as the benefits to society of AI are very real, so is the threat of malicious use," he added.


In addition to super-powering phishing, malware and ransomware attacks, the paper warns that by abusing machine learning, cyber criminals could conduct attacks that could have an impact on the physical world.

For example, machine learning is being implemented in autonomous vehicles to allow them to recognise the environment around them and obstacles that must be avoided such as pedestrians.

However, these algorithms are still evolving and it's possible that attackers could exploit them for malicious purposes, to aid crime or just to create chaos. For example, AI systems that manage autonomous vehicles and regular vehicle traffic could be manipulated by attackers if they gain access to the networks that control them.

By causing traffic delays, perhaps even with the aid of stolen credit card details used to swamp a chosen area with hire cars, cyber attackers could give other criminals the extra time needed to carry out a robbery or other crime, while also getting away from the scene.

The report notes that as the number of automated vehicles on the roads increases, the potential attack surface also increases, so it's imperative that vulnerabilities and issues are considered sooner rather than later.

But it isn't just road vehicles that cyber criminals could target by exploiting new technologies and increased connectivity; there's the potential for attackers to abuse machine learning to impact airspace, too.

Here, the paper suggests that autonomous drones could be of particular interest to cyber attackers, both criminal and nation-state-backed, because they have the potential to carry 'interesting' payloads like intellectual property.

Exploiting autonomous drones also provides cyber criminals with a potentially easy route to making money: hijacking delivery drones used by retailers and redirecting them to a new location, then taking the package and selling it on themselves.

Not only this, but there's the potential that a drone with a single-board computer could also be exploited to collect Wi-Fi passwords or breach routers as it goes about its journeys, potentially allowing attackers access to networks and any sensitive data transferred over them.


And the report warns that these are just a handful of the potential issues that can arise from the use of new technology and the ways in which cyber criminals will attempt to exploit them.

"Cybercriminals have always been early adopters of the latest technology and AI is no different. As this report reveals, it is already being used for password guessing, CAPTCHA breaking and voice cloning, and there are many more malicious innovations in the works," said Martin Roesler, head of forward-looking threat research at Trend Micro

One of the reasons the UN, Europol and Trend Micro have released the report is the hope that it will be seen by technology companies and manufacturers, making them aware of the potential dangers they could face so they can work to solve problems before they become a major issue.

Follow this link:
Artificial intelligence could be used to hack connected cars and drones, warn security experts - ZDNet