Archive for the ‘Artificial Intelligence’ Category

Using Artificial Intelligence to Address Criminal Justice …

Intelligent machines have long been the subject of science fiction. However, we now live in an era in which artificial intelligence (AI) is a reality, and it is having very real and deep impacts on our daily lives. From phones to cars to finances and medical care, AI is shifting the way we live.

AI applications can be found in many aspects of our lives, from agriculture to industry, communications, education, finance, government, service, manufacturing, medicine, and transportation. Even public safety and criminal justice are benefiting from AI. For example, traffic safety systems identify violations and enforce the rules of the road, and crime forecasts allow for more efficient allocation of policing resources. AI is also helping to identify the potential for an individual under criminal justice supervision to reoffend.[1]

Research supported by NIJ is helping to lead the way in applying AI to address criminal justice needs, such as identifying individuals and their actions in videos relating to criminal activity or public safety, DNA analysis, gunshot detection, and crime forecasting.

AI is a rapidly advancing field of computer science. In the mid-1950s, John McCarthy, who has been credited as the father of AI, defined it as the science and engineering of making intelligent machines.[2] Conceptually, AI is the ability of a machine to perceive and respond to its environment independently and perform tasks that would typically require human intelligence and decision-making processes, but without direct human intervention.


One facet of human intelligence is the ability to learn from experience. Machine learning is an application of AI that mimics this ability and enables machines and their software to learn from experience.[3] Particularly important from the criminal justice perspective is pattern recognition. Humans are efficient at recognizing patterns and, through experience, we learn to differentiate objects, people, complex human emotions, information, and conditions on a daily basis. AI seeks to replicate this human capability in software algorithms and computer hardware. For example, self-learning algorithms use data sets to understand how to identify people based on their images, complete intricate computational and robotics tasks, understand purchasing habits and patterns online, detect medical conditions from complex radiological scans, and make stock market predictions.
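To make the learn-from-examples idea concrete, here is a minimal sketch using scikit-learn's bundled handwritten-digit images as a stand-in for the richer data (faces, scans, market signals) mentioned above. It is illustrative only and is not any system described in this article.

```python
# A minimal sketch of "learning from experience": the model is never told what
# a "3" looks like; it infers the pattern from labeled training examples.
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.metrics import accuracy_score

digits = load_digits()  # small 8x8 grayscale images with known labels
X_train, X_test, y_train, y_test = train_test_split(
    digits.data, digits.target, test_size=0.25, random_state=0)

model = MLPClassifier(hidden_layer_sizes=(64,), max_iter=500, random_state=0)
model.fit(X_train, y_train)

# Accuracy on images the model has never seen is the "experience" paying off.
print("held-out accuracy:", accuracy_score(y_test, model.predict(X_test)))
```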

AI is being researched as a public safety resource in numerous ways. One particular AI application, facial recognition, can be found everywhere in both the public and private sectors.[4] Intelligence analysts, for example, often rely on facial images to help establish an individual's identity and whereabouts. Examining the huge volume of possibly relevant images and videos in an accurate and timely manner is a time-consuming, painstaking task, with the potential for human error due to fatigue and other factors. Unlike humans, machines do not tire. Through initiatives such as the Intelligence Advanced Research Projects Activity's Janus computer-vision project, analysts are performing trials on the use of algorithms that can learn how to distinguish one person from another using facial features in the same manner as a human analyst.[5]


The U.S. Department of Transportation is also looking to increase public safety through researching, developing, and testing automatic traffic accident detection based on video to help maintain safe and efficient commuter traffic over various locations and weather, lighting, and traffic conditions.[6] AI algorithms are being used in medicine to interpret radiological images, which could have important implications for the criminal justice and medical examiner communities when establishing cause and manner of death.[7] AI algorithms have also been explored in various disciplines in forensic science, including DNA analysis.[8]

AI is also quickly becoming an important technology in fraud detection.[9] Internet companies like PayPal stay ahead of fraud attempts by using volumes of data to continuously train their fraud detection algorithms to predict and recognize anomalous patterns and to learn to recognize new patterns.[10]
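As a rough illustration of the anomaly-spotting idea (not PayPal's actual system), the sketch below flags unusual transaction amounts in synthetic data using an isolation forest; all values are made up.

```python
# Illustrative only: flag anomalous transaction amounts in synthetic data.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
normal = rng.normal(loc=50, scale=15, size=(1000, 1))   # typical purchase amounts
fraud = rng.normal(loc=900, scale=50, size=(5, 1))      # a handful of outliers
transactions = np.vstack([normal, fraud])

detector = IsolationForest(contamination=0.01, random_state=0).fit(transactions)
flags = detector.predict(transactions)                   # -1 marks anomalies

print("flagged amounts:", transactions[flags == -1].ravel().round(2))
```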

The AI research that NIJ supports falls primarily into four areas: public safety video and image analysis, DNA analysis, gunshot detection, and crime forecasting.

Video and image analysis is used in the criminal justice and law enforcement communities to obtain information regarding people, objects, and actions to support criminal investigations. However, the analysis of video and image information is very labor-intensive, requiring a significant investment in personnel with subject matter expertise. Video and image analysis is also prone to human error due to the sheer volume of information, the fast pace of changing technologies such as smartphones and operating systems, and a limited number of specialized personnel with the knowledge to process such information.

AI technologies provide the capacity to overcome such human errors and to function as experts. Traditional software algorithms that assist humans are limited to predetermined features such as eye shape, eye color, and distance between eyes for facial recognition or demographics information for pattern analysis. AI video and image algorithms not only learn complex tasks but also develop and determine their own independent complex facial recognition features/parameters to accomplish these tasks, beyond what humans may consider. These algorithms have the potential to match faces, identify weapons and other objects, and detect complex events such as accidents and crimes in progress or after the fact.

In response to the needs of the criminal justice and law enforcement communities, NIJ has invested in several areas to improve the speed, quality, and specificity of data collection, imaging, and analysis and to improve contextual information.

For instance, to understand the potential benefits of AI in terms of speed, researchers at the University of Texas at Dallas, with funding from NIJ and in partnership with the FBI and the National Institute of Standards and Technology, are assessing facial identification by humans and examining methods for effectively comparing AI algorithms and expert facial examiners. Preliminary results show that when the researchers limit the recognition time to 30 seconds, AI-based facial-recognition algorithms developed in 2017 perform comparably to human facial examiners.[11] The implications of these findings are that AI-based algorithms can potentially be used as a second pair of eyes to increase the accuracy of expert human facial examiners and to triage data to increase productivity.

In addition, in response to the need for higher quality information and the ability to use lower quality images more effectively, Carnegie Mellon University is using NIJ funding to develop AI algorithms to improve detection, recognition, and identification. One particularly important aspect is the university's work on images in which an individual's face is captured at different angles or partially turned to the side, and when the individual is looking away from the camera, obscured by masks or helmets, or blocked by lamp posts or lighting. The researchers are also working with low-quality facial image construction, including images with poor resolution and low ambient light levels, where the image quality makes facial matching difficult. NIJ's test and evaluation center is currently testing and evaluating these algorithms.[12]

Finally, to decipher a license plate (which could help identify a suspect or aid in an investigation) or identify a person in extremely low-quality images or video, researchers at Dartmouth College are using AI algorithms that systematically degrade high-quality images and compare them with low-quality ones to better recognize lower quality images and video. For example, clear images of numbers and letters are slowly degraded to emulate low-quality images. The degraded images are then expressed and catalogued as mathematical representations. These degraded mathematical representations can then be compared with low-quality license plate images to help identify the license plate.[13]
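The sketch below illustrates the general degrade-and-compare idea in simplified form. It is not the Dartmouth team's code: the blur-and-downsample step, the glyph arrays, and the nearest-match rule are stand-ins chosen only for illustration.

```python
# Degrade clean reference glyphs, then match a low-quality probe to the
# nearest degraded reference (a crude stand-in for the approach described above).
import numpy as np
from scipy.ndimage import gaussian_filter, zoom

def degrade(image, blur=1.5, scale=0.25):
    """Emulate a low-quality capture: blur, downsample, then upsample back."""
    small = zoom(gaussian_filter(image, sigma=blur), scale)
    return zoom(small, 1 / scale)

def best_match(probe, references):
    """Return the label of the degraded reference closest to the probe."""
    scores = {label: np.linalg.norm(probe - degrade(ref))
              for label, ref in references.items()}
    return min(scores, key=scores.get)

# Synthetic "glyphs" stand in for clean images of plate characters.
refs = {c: (np.random.default_rng(i).random((32, 32)) > 0.5).astype(float)
        for i, c in enumerate("ABC")}
probe = degrade(refs["B"]) + np.random.default_rng(9).normal(0, 0.02, (32, 32))
print(best_match(probe, refs))   # expected: 'B', the least-distant match
```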

Also being explored is the notion of scene understanding, or the ability to develop text that describes the relationship between objects (people, places, and things) in a series of images to provide context. For example, the text may read, "Pistol being drawn by a person and discharging into a store window." The goal is to detect objects and activities that will help identify crimes in progress for live observation and intervention, as well as to support investigations after the fact.[14] Scene understanding over multiple scenes can indicate potentially important events that law enforcement should view to confirm and follow. One group of researchers at the University of Central Florida, in partnership with the Orlando Police Department, is using NIJ funding to develop algorithms to identify objects in videos, such as people, cars, weapons, and buildings, without human intervention. They are also developing algorithms to identify actions such as traffic accidents and violent crimes.
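A toy sketch of the text-generation step follows. It assumes an upstream detector has already produced object labels and bounding boxes (an assumption, since the project's actual pipeline is not described here) and only turns those detections into a plain-language description.

```python
# Turn hypothetical object detections into simple "X near Y" statements.
from dataclasses import dataclass

@dataclass
class Detection:
    label: str   # e.g., "person", "pistol", "store window"
    box: tuple   # (x_min, y_min, x_max, y_max) in pixels

def describe(detections, max_gap=50):
    """Emit a phrase for each pair of detections whose boxes are close."""
    phrases = []
    for i, a in enumerate(detections):
        for b in detections[i + 1:]:
            gap = max(0, max(a.box[0], b.box[0]) - min(a.box[2], b.box[2]))
            if gap <= max_gap:
                phrases.append(f"{a.label} near {b.label}")
    return "; ".join(phrases) or "no notable interactions"

scene = [Detection("person", (100, 80, 160, 300)),
         Detection("pistol", (150, 150, 180, 180)),
         Detection("store window", (200, 60, 400, 320))]
print(describe(scene))
```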

Another important aspect of AI is the ability to predict behavior. In contrast to the imaging and identification of criminal activity in progress, the University of Houston has used NIJ funding to develop algorithms that provide continuous monitoring to assess activity and predict emergent suspicious and criminal behavior across a network of cameras. This work also concentrates on using clothing, skeletal structure, movement, and direction prediction to identify and re-acquire people of interest across multiple cameras and images.[15]

AI can also benefit the law enforcement community from a scientific and evidence processing standpoint. This is particularly true in forensic DNA testing, which has had an unprecedented impact on the criminal justice system over the past several decades.

Biological material, such as blood, saliva, semen, and skin cells, can be transferred through contact with people and objects during the commission of a crime. As DNA technology has advanced, so has the sensitivity of DNA analysis, allowing forensic scientists to detect and process low-level, degraded, or otherwise unviable DNA evidence that could not have been used previously. For example, decades-old DNA evidence from violent crimes such as sexual assaults and homicide cold cases is now being submitted to laboratories for analysis. As a result of increased sensitivity, smaller amounts of DNA can be detected, which leads to the possibility of detecting DNA from multiple contributors, even at very low levels. These and other developments are presenting new challenges for crime laboratories. For instance, when using highly sensitive methods on items of evidence, it may be possible to detect DNA from multiple perpetrators, or from someone not associated with the crime at all, thus creating the issue of DNA mixture interpretation and the need to separate and identify (or deconvolute) individual profiles to generate critical investigative leads for law enforcement.

AI may have the potential to address this challenge. DNA analysis produces large amounts of complex data in electronic format; these data contain patterns, some of which may be beyond the range of human analysis but may prove useful as systems increase in sensitivity. To explore this area, researchers at Syracuse University partnered with the Onondaga County Center for Forensic Sciences and the New York City Office of Chief Medical Examiner's Department of Forensic Biology to investigate a novel machine learning-based method of mixture deconvolution. With an NIJ research award, the Syracuse University team worked to combine the strengths of approaches involving human analysts with data mining and AI algorithms. The team used this hybrid approach to separate and identify individual DNA profiles to minimize the potential weaknesses inherent in using one approach in isolation. Although ongoing evaluation of the use of AI techniques is needed and there are many factors that can influence the ability to parse out individual DNA donors, research shows that AI technology has the potential to assist in these complicated analyses.[16]
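The sketch below is a deliberately oversimplified illustration of the pattern-separation idea, not the Syracuse method or any validated probabilistic genotyping tool: synthetic allele peaks from a notional two-person mixture are grouped by signal intensity, on the rough assumption that each contributor's alleles show similar peak heights.

```python
# Group synthetic allele peaks by height as a crude proxy for "contributor".
import numpy as np
from sklearn.cluster import KMeans

# (locus, allele, peak height in RFU) -- synthetic values for illustration only
peaks = [("D8S1179", "12", 2100), ("D8S1179", "13", 2050),
         ("D8S1179", "14", 480),  ("D8S1179", "15", 510),
         ("D21S11", "29", 1980),  ("D21S11", "30", 2150),
         ("D21S11", "31", 455),   ("D21S11", "32.2", 470)]

heights = np.array([[h] for _, _, h in peaks], dtype=float)
groups = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(heights)

for (locus, allele, height), g in zip(peaks, groups):
    print(f"{locus:9s} allele {allele:5s} ({height:4d} RFU) -> contributor {g + 1}")
```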

The discovery of pattern signatures in gunshot analysis offers another area in which to use AI algorithms. In one project, NIJ funded Cadre Research Labs, LLC, to analyze gunshot audio files from smartphones and smart devices based on the observation that the content and quality of gunshot recordings are influenced by firearm and ammunition type, the scene geometry, and the recording device used.[17] Using a well-defined mathematical model, the Cadre scientists are working to develop algorithms to detect gunshots, differentiate muzzle blasts from shock waves, determine shot-to-shot timings, determine the number of firearms present, assign specific shots to firearms, and estimate probabilities of class and caliber, all of which could help law enforcement in investigations.[18]
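As a bare-bones illustration of one piece of such a pipeline, the sketch below finds impulsive events in a synthetic recording and measures shot-to-shot timing. Cadre's actual models are far more sophisticated; every value here is made up.

```python
# Detect impulsive spikes in a synthetic waveform and report shot-to-shot timing.
import numpy as np

sample_rate = 16_000
audio = np.random.default_rng(0).normal(0, 0.01, sample_rate * 3)  # 3 s of noise
for t in (0.5, 1.4, 2.1):                                           # three "shots"
    i = int(t * sample_rate)
    audio[i:i + 80] += np.exp(-np.arange(80) / 20.0)                # sharp decaying spike

threshold = 10 * np.std(audio)
above = np.flatnonzero(np.abs(audio) > threshold)
# Collapse runs of consecutive loud samples into single onset events.
onsets = above[np.insert(np.diff(above) > sample_rate // 10, 0, True)]

times = onsets / sample_rate
print("detected shots at (s):", np.round(times, 3))
print("shot-to-shot intervals (s):", np.round(np.diff(times), 3))
```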

Predictive analysis is a complex process that uses large volumes of data to forecast and formulate potential outcomes. In criminal justice, this job rests mainly with police, probation practitioners, and other professionals, who must gain expertise over many years. The work is time-consuming and subject to bias and error.[19]

With AI, volumes of information on law and legal precedent, social information, and media can be used to suggest rulings, identify criminal enterprises, and predict and reveal people at risk from criminal enterprises. NIJ-supported researchers at the University of Pittsburgh are investigating and designing computational approaches to statutory interpretation that could potentially increase the speed and quality of statutory interpretation performed by judges, attorneys, prosecutors, administrative staff, and other professionals. The researchers hypothesize that a computer program can automatically recognize specific types of statements that play the most important roles in statutory interpretation. The goal is to develop a proof-of-concept expert system to support interpretation and perform it automatically for cybercrime.[20]

AI is also capable of analyzing large volumes of criminal justice-related records to predict potential criminal recidivism. Researchers at the Research Triangle Institute, in partnership with the Durham Police Department and the Anne Arundel Sheriff's Department, are working to create an automated warrant service triage tool for the North Carolina Statewide Warrant Repository. The NIJ-supported team is using algorithms to analyze data sets with more than 340,000 warrant records. The algorithms form decision trees and perform survival analysis to determine the time span until the next occurrence of an event of interest and to predict the risk of re-offending for absconding offenders (if a warrant goes unserved). This model will help practitioners triage warrant service when backlogs exist. The resulting tool will also be geographically referenced so that practitioners can pursue concentrations of high-risk absconders, along with others who have active warrants, to optimize resources.[21]
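A toy version of the decision-tree portion of that idea is sketched below. The features and labels are synthetic and purely illustrative; they are not the variables in the North Carolina warrant repository, and the survival-analysis component is omitted.

```python
# Train a small decision tree on synthetic "warrant" records and print its rules.
import numpy as np
from sklearn.tree import DecisionTreeClassifier, export_text

rng = np.random.default_rng(0)
n = 5_000
warrant_age_days = rng.integers(1, 1500, n)
prior_failures_to_appear = rng.integers(0, 6, n)
violent_charge = rng.integers(0, 2, n)

# Synthetic label: risk grows with prior failures to appear and violent charges.
risk = (0.15 * prior_failures_to_appear + 0.4 * violent_charge
        + rng.normal(0, 0.2, n)) > 0.6

X = np.column_stack([warrant_age_days, prior_failures_to_appear, violent_charge])
tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, risk)

print(export_text(tree, feature_names=[
    "warrant_age_days", "prior_failures_to_appear", "violent_charge"]))
```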

AI can also help determine potential elder victims of physical and financial abuse. NIJ-funded researchers at the University of Texas Health Science Center at Houston used AI algorithms to analyze elder victimization. The algorithms can determine the victim, perpetrator, and environmental factors that distinguish between financial exploitation and other forms of elder abuse. They can also differentiate pure financial exploitation (when the victim of financial exploitation experiences no other abuse) from hybrid financial exploitation (when physical abuse or neglect accompanies financial exploitation). The researchers hope that these data algorithms can be transformed into web-based applications so that practitioners can reliably determine the likelihood that financial exploitation is occurring and quickly intervene.[22]

Finally, AI is being used to predict potential victims of violent crime based on associations and behavior. The Chicago Police Department and the Illinois Institute of Technology used algorithms to collect information and form initial groupings that focus on constructing social networks and performing analysis to determine potential high-risk individuals. This NIJ-supported research has since become a part of the Chicago Police Department's Violence Reduction Strategy.[23]

Every day holds the potential for new AI applications in criminal justice, paving the way for future possibilities to assist in the criminal justice system and ultimately improve public safety.

Video analytics for integrated facial recognition, the detection of individuals in multiple locations via closed-circuit television or across multiple cameras, and object and activity detection could prevent crimes through movement and pattern analysis, recognize crimes in progress, and help investigators identify suspects. With technology such as cameras, video, and social media generating massive volumes of data, AI could detect crimes that would otherwise go undetected and help ensure greater public safety by investigating potential criminal activity, thus increasing community confidence in law enforcement and the criminal justice system. AI also has the potential to assist the nation's crime laboratories in areas such as complex DNA mixture analysis.

Pattern analysis of data could be used to disrupt, degrade, and prosecute crimes and criminal enterprises. Algorithms could also help prevent victims and potential offenders from falling into criminal pursuits and assist criminal justice professionals in safeguarding the public in ways never before imagined.

AI technology also has the potential to provide law enforcement with situational awareness and context, thus aiding police well-being through better informed responses to possibly dangerous situations. Technology that includes robotics and drones could also perform public safety surveillance, be integrated into overall public safety systems, and provide a safe alternative to putting police and the public in harm's way. Robotics and drones could also perform recovery, provide valuable intelligence, and augment criminal justice professionals in ways not yet contrived.

By using AI and predictive policing analytics integrated with computer-aided response and live public safety video enterprises, law enforcement will be better able to respond to incidents, prevent threats, stage interventions, divert resources, and investigate and analyze criminal activity. AI has the potential to be a permanent part of our criminal justice ecosystem, providing investigative assistance and allowing criminal justice professionals to better maintain public safety.

On May 3, 2016, the White House announced a series of actions to spur public dialogue on artificial intelligence (AI), identify challenges and opportunities related to this technology, aid in the use of AI for more effective government, and prepare for the potential benefits and risks of AI. As part of these actions, the White House directed the creation of a national strategy for AI research and development. Following is a summary of the plan's areas and intent.[24]

Manufacturing

Logistics

Finance

Transportation

Agriculture

Marketing

Communications

Science and Technology

Education

Medicine

Law

Personal Services

Security and Law Enforcement

Safety and Prediction


Christopher Rigano is a senior computer scientist in NIJ's Office of Science and Technology.

This article was published as part of NIJ Journal issue number 280, December 2018.


See the original post here:
Using Artificial Intelligence to Address Criminal Justice ...

What Is Artificial Intelligence? | Live Science

When most people think of artificial intelligence (AI) they think of HAL 9000 from "2001: A Space Odyssey," Data from "Star Trek," or more recently, the android Ava from "Ex Machina." But to a computer scientist that isn't what AI necessarily is, and the question "what is AI?" can be a complicated one.

One of the standard textbooks in the field, by University of California computer scientist Stuart Russell and Google's director of research, Peter Norvig, puts artificial intelligence into four broad categories: systems that think like humans, systems that think rationally, systems that act like humans, and systems that act rationally.

The differences between them can be subtle, notes Ernest Davis, a professor of computer science at New York University. AlphaGo, the computer program that beat a world champion at Go, acts rationally when it plays the game (it plays to win). But it doesn't necessarily think the way a human being does, though it engages in some of the same pattern-recognition tasks. Similarly, a machine that acts like a human doesn't necessarily bear much resemblance to people in the way it processes information.

Even IBM's Watson, which acted somewhat like a human when playing Jeopardy, wasn't using anything like the rational processes humans use.

Davis says he uses another definition, centered on what one wants a computer to do. "There are a number of cognitive tasks that people do easily, often with no conscious thought at all, but that are extremely hard to program on computers. Archetypal examples are vision and natural language understanding. Artificial intelligence, as I define it, is the study of getting computers to carry out these tasks," he said.

Computer vision has made a lot of strides in the past decade: cameras can now recognize faces in the frame and tell the user where they are. However, computers are still not that good at actually recognizing faces, and the way they do it is different from the way people do. A Google image search, for instance, just looks for images in which the pattern of pixels matches the reference image. More sophisticated face recognition systems look at the dimensions of the face to match them with images that might not be simple face-on photos. Humans process the information rather differently, and exactly how that process works is still something of an open question for neuroscientists and cognitive scientists.
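The contrast between matching raw pixel patterns and comparing facial dimensions can be sketched schematically, as below. The "faces" are random arrays and the landmarks are hand-picked points, chosen only to show why the two measures behave differently.

```python
# Compare two similarity measures: raw pixel differences vs. landmark geometry.
import numpy as np

def pixel_similarity(img_a, img_b):
    """Lower is more similar: sum of squared pixel differences."""
    return float(np.sum((img_a - img_b) ** 2))

def dimension_similarity(landmarks_a, landmarks_b):
    """Compare pairwise distances between landmarks (eyes, nose, mouth, ...)."""
    def dists(pts):
        pts = np.asarray(pts, dtype=float)
        return np.linalg.norm(pts[:, None] - pts[None, :], axis=-1)
    return float(np.abs(dists(landmarks_a) - dists(landmarks_b)).sum())

face_a = np.random.default_rng(1).random((64, 64))
face_b = np.roll(face_a, 5, axis=1)            # the same "face", shifted in the frame

print(pixel_similarity(face_a, face_b))        # large: pixel patterns no longer line up
print(dimension_similarity([(20, 22), (20, 42), (40, 32)],
                           [(25, 22), (25, 42), (45, 32)]))  # ~0: geometry unchanged
```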

Other tasks, though, are proving tougher. For example, Davis and NYU psychology professor Gary Marcus wrote in the Communications of the Association for Computing Machinery of "common sense" tasks that computers find very difficult. A robot serving drinks, for example, can be programmed to recognize a request for one, and even to manipulate a glass and pour one. But if a fly lands in the glass, the computer still has a tough time deciding whether to pour the drink and serve it anyway.

The issue is that much of "common sense" is very hard to model. Computer scientists have taken several approaches to get around that problem. IBM's Watson, for instance, was able to do so well on Jeopardy! because it had a huge database of knowledge to work with and a few rules to string words together to make questions and answers. Watson, though, would have a difficult time with a simple open-ended conversation.

Beyond tasks, though, is the issue of learning. Machines can learn, said Kathleen McKeown, a professor of computer science at Columbia University. "Machine learning is a kind of AI," she said.

Some machine learning works in a way similar to the way people do it, she noted. Google Translate, for example, uses a large corpus of text in a given language to translate to another language, a statistical process that doesn't involve looking for the "meaning" of words. Humans, she said, do something similar, in that we learn languages by seeing lots of examples.
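A toy illustration of that statistics-over-meaning point: in the sketch below, word translations are guessed purely from co-occurrence counts in a three-sentence parallel corpus, with no notion of meaning. Real systems are vastly larger, but the failure mode this shows leads directly into the next point.

```python
# Guess word translations from co-occurrence counts in a tiny parallel corpus.
from collections import Counter, defaultdict

parallel = [("the cat sleeps", "le chat dort"),
            ("a cat eats",     "un chat mange"),
            ("the dog sleeps", "le chien dort")]

cooccur = defaultdict(Counter)
for en, fr in parallel:
    for word in en.split():
        cooccur[word].update(fr.split())

def translate_word(word):
    """Pick the target word most often seen alongside the source word."""
    return cooccur[word].most_common(1)[0][0]

print(translate_word("cat"))     # 'chat': it simply co-occurs most often
print(translate_word("sleeps"))  # may return 'le': pure counts latch onto frequent words
```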

That said, Google Translate doesn't always get it right, precisely because it doesn't seek meaning and can sometimes be fooled by synonyms or differing connotations.

One area that McKeown said is making rapid strides is summarizing texts; systems that do this are sometimes employed by law firms that have to go through a lot of text.

McKeown also thinks personal assistants are an area likely to move forward quickly. "I would look at the movie 'Her,'" she said. In that 2013 movie starring Joaquin Phoenix, a man falls in love with an operating system that has consciousness.

"I initially didn't want to go see it, I said that's totally ridiculous," McKeown said. "But I actually enjoyed it. People are building these conversational assistants, and trying to see how far can we get."

The upshot is that AIs that can handle certain tasks well exist, as do AIs that look almost human because they have a large trove of data to work with. Computer scientists have been less successful coming up with an AI that can think the way we expect a human being to, or that can act like a human in more than very limited situations.

"I don't think we're in a state that AI is so good that it will do things we hadn't imagined it was going to do," McKeown said.


Original post:
What Is Artificial Intelligence? | Live Science

7 business areas ripe for an artificial intelligence boost …

Artificial intelligence has captured everyone's imagination, but what can we expect from the technology? A recent survey of more than 550 executives from IBM finds plenty of support from the top -- everyone wants to plunge full-force into AI to increase the speed and capabilities of their businesses. At the same time, AI is still very much in the early stages. More than half of the executives are still either experimenting or testing on a limited basis around their organizations, and one in seven is only at the planning stage.

Before enterprises begin sinking large sums of funds into AI approaches, it's important to understand where AI can have the greatest impact. In his latest book, The AI Age, Adam Riccoboni, founder of AI consulting firm Critical Future, explores the areas where AI is already making a difference, and where we stand on the AI evolutionary scale.

There are multiple areas of the business that can benefit from AI right now. Riccoboni identifies the key business areas where AI can be applied to enhance or increase capabilities:

Supply Chain Management

(AI's potential in the supply chain is explored more deeply in this recent post.)

Sales

Marketing

Operations

IT

Human Resources

Finance

But AI means much more than simply boosting intelligence within narrow bands of business functions. Ultimately, it means new ways of doing business. The AI revolution will move through four stages, Riccoboni states, and we are just moving into the second stage.

Finally, Riccoboni provides some career guidance for professionals seeking to build careers in the AI age. While many opportunities are arising as a result of the growth of AI (AI systems development and data science, for example), Riccoboni advises professionals to focus on being generalists, not specialists, to prepare for this new world. "Human creativity is an area of comparative advantage over AI because of its generality. Empathy is a narrow skill, so AI can be trained to do it. But when empathy means being able to connect the dots across domains, machines cannot master this... Instead of learning narrow technical skills, we need versatile, cross-domain, generalist skills."

See the original post:
7 business areas ripe for an artificial intelligence boost ...

How Educators Can Use Artificial Intelligence as a Teaching Tool – Education Week


Deb Norton spends her days helping teachers in Wisconsin's Oshkosh Area school district get more comfortable with the technology tools they're using to engage students. A few years ago, she started seeing increasing mentions of artificial intelligence. Around then, the International Society for Technology in Education asked her to lead a course on the uses of artificial intelligence in the K-12 classroom.

She was initially intrigued when she saw students light up at the mention of artificial intelligence. It soon became clear to her that they were already experiencing AI in their daily lives, with tools like Instagram filters or chatbots on websites. "Watching them interact with this content really draws me in," Norton said.

Since then, she's been connecting an increasingly diverse set of educators with the possibilities of AI as a teaching tool. The course includes sections on the definition of artificial intelligence; machine learning; voice experiences and chatbots; and the role of data in AI systems. Attendees include K-12 teachers, administrators, and tech leaders, as well as representatives of technology companies.

Part of her mission has been to communicate that AI isn't new: the term was coined in 1956, and research has been underway for even longer. But now, she said, we're starting to use it in our everyday lives.

"AI is so strong today that it can create a written paper, a song, a poem, a dance," Norton said. "Humans can perceive it as something that was created by a human when, in fact, AI created it on its own."

Here's what she thinks about its potential and the challenges to broader implementation.


I think the most important thing that people have to realize is that artificial intelligence does encompass more than just a computer that can perform a task. Many people think artificial intelligence is just when my little Alexa Dot over there talks to me or when Netflix makes a recommendation for me. They often think it's a task-oriented type of thing. Our goal is often to think of AI beyond just performing tasks to something that is able to make decisions and hold conversations.

Many teachers will put together some type of interactive presentation just to present AI to the class, using real interactive components with the lessons so students are creating some of these cool AI experiments. Tech-coach administrators might present AI to teachers, getting them that knowledge or information through some type of workshop or webinar.

I had a group not long ago that created a lesson about machine learning using AI, and it was all tied to yoga, and how the student could do the yoga pose that could be recognized through machine learning, and then the machine could give them feedback on their yoga poses.

A lot of folks use the idea of how big data drives artificial intelligence. A lot of people go back [after the course] and create chatbots or voice experiences. If you're working with elementary students, it might be a simple coding site like Scratch where you can create an interactive character, or a program for creating an Alexa skill.

AI could become a really big part of virtual learning and at-home learning, but I just don't think we're quite there yet. For many of our educators, they're just dipping their feet into how this would work. Having a virtual tutor is something that is becoming more and more a part of the conversation around AI, but it is not something I see being implemented at this point in time.

I'm seeing little pieces of it globally, though: some seniors who were graduating in Japan could participate in their graduation via an AI robot that represents them. I've seen quite a few articles coming out of other countries on the ability to have a virtual tutor that can not just spew information at you and test your knowledge but rather learn your way of learning. We're not quite there yet.

With at-home learning, that need will be more prevalent. It will most likely grow quicker than if we didnt have at-home learning.

I think it's just both students and teachers knowing how it would work. Some of it is cost. A true chatbot that works on a website costs money. If you want something that will engage and work, that's a funding issue as well.

I think privacy is one of the big barriers. Many districts don't allow schools to open up Alexas and Google Homes because of the privacy of the data that's being collected. One suggestion is to set up a separate network at schools for the use of a smart speaker. Another suggestion is to use the Alexa app on a tablet instead of an actual smart speaker such as an Echo Dot or Google Home speaker. The app can be set up to only listen when you initiate it, unlike a smart speaker that is always listening.

Artificial intelligence can know what would be the best mode of delivering the content, and at what pace and depth. To be able to differentiate for every learner and know every learner's strengths and weaknesses, that would be incredible.

I also see the capabilities, from a teacher-educator point of view, to be able to engage and monitor and track the types of lessons and strategies that can be delivered in the most effective way in the classroom. AI could help with that, even if it's just as simple as an AI-powered search engine for a teacher, in which they are able to search for content in a far deeper way than we currently can.

Even voice experiences: let's say in the future a student had an earbud and a microphone. What if we could ask Alexa something deeper than a fact? What if we can ask Alexa, what would be the best way for me to get information on such and such? What would be the best way for me to demonstrate this information to my peers?

Any time I talk about AI, not just in the course but in a webinar or live in person, it is a gamut of people from all walks of everything. We get elementary, middle, and high school teachers. We get professors. We get people who are leading a tech company and developing a product; they're asking, what can we do with our robots to incorporate AI?

We also get tech directors and a lot of administrators. Sometimes we'll get a superintendent of a school district. Sometimes, it's a person who's not even in education who just wants to learn more about AI.



See more here:
How Educators Can Use Artificial Intelligence as a Teaching Tool - Education Week

Artificial Intelligence in K-12: The Right Mix for Learning or a Bad Idea? – Education Week


Last year, officials at the Montour school district in western Pennsylvania approached band director Cyndi Mancini with an idea: How about using artificial intelligence to teach music?

Mancini was skeptical.

"As soon as I heard AI, I had this panic," she said. "All I thought about were these crazy robots that can think for themselves."

There were no robots. Just a web application that uses AI to build original instrumental tracks from a library of prerecorded samples after a user selects a few parameters.

Equipped with Chromebooks, Mancini's students could program mood and genre, manipulate the tempo or key, mute sections, and switch instrument kits with a couple of clicks. And just like that, an original piece is produced instantly.
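The article does not name the web application, so the following is purely hypothetical: a sketch of the kinds of selections a student might make before such a tool assembles a track from prerecorded samples. None of the keys or values reflect a real product's API.

```python
# Hypothetical parameter set for a generated track; values are invented for illustration.
track_request = {
    "mood": "uplifting",
    "genre": "cinematic",
    "tempo_bpm": 96,
    "key": "D minor",
    "muted_sections": ["bridge"],           # sections a student chose to silence
    "instrument_kit": "orchestral strings"  # swapped with a couple of clicks
}

def summarize(request):
    """Print the selections behind one generated piece."""
    for setting, value in request.items():
        print(f"{setting:16s} -> {value}")

summarize(track_request)
```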

The AI program, designed for use by anyone who needs cheap background tunes for media content, enabled Mancini to teach in ways not possible before: Students in an elective course who do not play instruments or read sheet music were now creating their own compositions. For the musically inclined students, Mancini said the software allowed for an even deeper fusion of computer and human: they'd create a track and play over it, combining AI-generated rhythms with live instrumentation.

"For me, music is an emotional experience. I know what I put into my playing and teaching of music. For that emotion to come out of an algorithm, I couldn't wrap my head around it at first. How can a computer replicate that?" she said. "But it can. I'm a convert."

While Montour is embracing AI technology with a full-blown bear hug, most school districts are not, at least not yet. Some are dabbling with applications. Others aren't using AI at all.

And still other educators can't say if their districts are using AI, oftentimes because they're not familiar enough with the technology to recognize it.

Whether that changes with the nationwide distance learning experiment that happened this spring is still to be seen.

This much, however, is clear: School budgets are going to be devastated by the economic onslaught wrought by the virus, and strapped-for-cash districts could delay tech acquisitions other than the devices and hotspots students need to go online as they prioritize necessities. Still lingering are serious questions about privacy, data bias, and just how effective AI solutions are for education.

The 3,000-student Montour district, in the suburbs of Pittsburgh, is using AI inside and outside the classroom.

The district teaches courses focused on artificial intelligence, ranging from ethics to robotics. It partners with universities and technology companies working on the cutting edge of AI. There's even a 4-foot-tall autonomous robot, a boxy machine that looks like a filing cabinet on wheels, zooming around the hallways of its elementary school delivering packages.

And on the district's back-end IT infrastructure, there are dashboards and programs powered by AI providing educators with real-time data about each student, producing metrics that monitor progress and even forecast future success.

"When we come back to school next year after the coronavirus, we're going to have data on every single kid from their remote learning experience," said Justin Aglio, the director of academic achievement and district innovation at Montour. "Not your traditional A, B, C data, either."

Districts, already inundated with trying to keep up, might also shy away from AI tools in the immediate future while teachers and staff adjust to a new digital ecosystem already pushing the boundaries for many.

"It's not even on our radar right now," said Andrew McDaniel, the principal of Southwood High School in central Indiana, when asked if he's considering incorporating some of the most basic forms of AI, such as Alexa voice devices, into classrooms. "A lot of teachers are looking at what they know works now and sticking to that. They're not going to mess around with much that goes beyond that."

Increasingly, though, voice-activated devices such as Alexa, Siri, and Google Home are being used as teaching assistants in classes. Schools are turning to smart thermostats to save money on energy costs and using AI programs to monitor their computer networks. AI is helping districts identify students who are at risk of dropping out, and math tutors and automated essay-scoring systems that have been used for decades now feature more sophisticated AI software than they did in the past.

Until recently, though, most of those tools have relied on simpler AI algorithms that work on a basis of preset rules and conditions.

But a new age of AI-based ed-tech tools is emerging, using machine-learning techniques to discover patterns and identify relationships that are not part of their original programming. These systems consistently learn from data collected every time they're in use and more truly mirror human intelligence.

Ed-tech vendors are pitching advanced statistical AI tools as a way to provide greater personalized learning, tailoring curriculum to a student's strengths and weaknesses. Researchers say it is unlikely advanced AI will transform K-12 education, but it can have a positive impact in areas like adaptive instruction, automated essay scoring and feedback, language learning, and online curriculum-recommendation engines.

Most of the startups pioneering education solutions with this type of AI aren't yet in a position to offer their products on a mass scale in the United States. That's because highly accurate advanced AI systems require access to massive data sets to populate and train the machine-learning algorithm to make reliable predictions. Those algorithms must also have access to high-quality data to avoid reinforcing racial, gender, and other biases.

Bill Salak, the chief technology officer for Brainly, an AI-based content generator and homework assistant that uses machine learning, said his company has traditionally worked directly with students, not districts. Now, however, Brainly is diving into more advanced statistical models for its AI to allow for even deeper personalization, and it is planning to eventually start creating products that could go into the classroom.

Salak said that all AI-based technology vendors face an uphill climb because school districts are consistently underfunded, and if they're going to spend money on a tech tool, it has to be proven effective and shown to contribute to academic goals.

"The education systems prioritize things that will help them meet their goals, and not many outcomes relate to teaching with new tech," he said. "Even if the teacher may see a huge amount of value in something, at the end of the day, that teacher has to have a certain percentage of their kids meeting certain competency standards."

April DeGennaro, a teacher in the gifted program at Peeples Elementary in Fayetteville, Ga., knows firsthand what it's like for district administrators to buy into the idea of using AI-tech tools but not back up that commitment with funding.

DeGennaro runs a lab where students focus on robotics, and her 4th graders use an AI-based robot called Cozmo. Shaped like a mini bulldozer that can fit in your palm, Cozmo uses facial recognition and a so-called emotion engine, allowing it to react to different situations with a humanlike personality by showing a range of emotions, from happy or sad to bored and grumpy. Because of COVID-19-related school closures, the AI robots currently aren't being used.

But under normal circumstances, up to four students at a time can use one of the robots with an iPad, coding it to carry out different tasks. At $150 each, DeGennaro said, the robots amount to a low-cost investment, but she's had to find her own funding for all seven Cozmo robots in her class.

DeGennaro raised money online, where she got parents to chip in to buy robots. She's also made it clear to those who know her: for Christmas, for an end-of-the-year gift, or whenever you want to buy Mrs. D a present, buy a robot.

"School districts may like things," DeGennaro said, "but that doesn't mean they're going to fund them."

At the Saddle Mountain Unified School District in Arizona, a new policy allowing high school teachers to use Alexa or Google Home went into effect this year after a group of district officials and teachers walked through several STEM schools in the Phoenix area and saw the devices being used in classrooms.

Joel Wisser, the technology integration specialist for the 2,300-student district, said teachers walked away impressed, and several decided to incorporate the devices into their daily classroom activities. The district didn't pay for the devices, however. Instead, teachers had to bring their own, and Wisser said he doesn't expect that to change.

One history teacher uses his Alexa as a mini-assistant: reminding him when to return papers to students, answering student and teacher inquiries, providing a Jeopardy-style quiz game, or even playing music from a time period the class is studying to add ambience to a lesson.

"It's really just a personal assistant, a helper, for him. His eyesight is not great. He has a 46-inch computer monitor and he's not a fast typer," said Wisser. "Being able to talk to a device is much more efficient for him, so he's not spending time at a keyboard typing in the words 'ancient Greek music.'"

Not everyone welcomed the devices at first. The district's technology director, for one, was hesitant because the Alexa was going to be tapped into the district's network and he wasn't going to have complete control over it, Wisser said.

The voice-activated speakers are also at the center of an ongoing privacy debate since they can record conversations. Wisser said there hadn't been any pushback from parents so far, and class conversations were not recorded.

Christina Gardner-McCune, the director of the Engaging Learning Labs at the University of Florida, said parents, students, and teachers have concerns about what kind of data an Alexa device is collecting in the classroom and what it is doing on the district's network while there. Even though the recording function on an Alexa can be turned off, Gardner-McCune said, some districts don't want anything to do with them.

"A lot of districts are not allowing those devices in the classroom even though they could have some educational purposes," said Gardner-McCune, who is also a steering committee co-chair of the AI for K-12 Initiative, a national working group of teachers and AI experts focused on jump-starting discussion on how to incorporate AI learning into school curricula.

It will take more time and use of AI devices and tech tools in classrooms before districts become comfortable with them on a larger scale, she said. And more research is needed showing the benefits of advanced AI systems before districts are willing to pony up for them. For major school districts, said Gardner-McCune, "it's going to come down to how does it affect test scores."

Back in the Montour district, band director and teacher Mancini said her apprehension about the AI music program vanished when she became familiar with the web application and realized "there wasn't going to be a robot in the middle of my room." One of her favorite class exercises using the AI music program involved muting the background music on a movie clip, like the scene where the ship is sinking in Titanic, and letting students rework the general vibe by adding their own music.

"Music education has traditionally been taught one way. We play instruments or sing or learn music theory. This is so far from traditional, and I'm glad I did it because it was so much fun when I got into it," she said. "As teachers, we just need to not be afraid of technology."



Go here to see the original:
Artificial Intelligence in K-12: The Right Mix for Learning or a Bad Idea? - Education Week