Archive for the ‘Artificial Intelligence’ Category

Justice, Equity, And Fairness: Exploring The Tense Relationship Between Artificial Intelligence And The Law With Joilson Melo – Forbes

Law Library

AI is becoming increasingly prevalent in society, and many people are wondering how it will affect the law. This article looks at how artificial intelligence is shaping our laws and what we can expect from future interactions between technology and the legal system.

The conversation surrounding the relationship between AI and law also touches quite clearly on the ability to rely on Artificial Intelligence to deliver fair decisions and to enhance the legal system's delivery of equity and justice.

In this article, I share insights from my conversations on this topic with Joilson Melo, a Brazilian law expert and programmer whose devotion to equity and fairness led to a historic change in the Brazilian legal system in 2019. That change mainly affected the system that handles all digitally processed cases in Brazil, the PJe (Electronic Judicial Process).

As a law student, Melo filed a request for action in the National Council of Justice (CNJ) against the Court of Justice of Mato Grosso, resulting in a decision allowing citizens to file applications in court electronically without a lawyer and within the Special Court, provided the value of the case does not exceed 20 minimum wages. Melo's petition revealed provisions in the law that allowed for this, and his victory enforced those provisions. The results for the underprivileged and those who couldn't afford lawyers have been immense.

On the relationship between AI and the law, Melo remains somewhat on the fence.

"The purpose of the law is justice, equity, and fairness," says Melo.

"Any technology that can enhance that is welcome in the legal arena. Artificial Intelligence has already been shown to be as biased as the data that it is fed. This instantly places a greater burden of care on us to ensure that it is adopted through a careful process in the legal space and society at large."

The use of AI to predict jury verdicts has been around for quite some time now, but it is unclear whether an algorithm can accurately predict human behavior. There have also been studies showing that machine learning algorithms can be used to help judges make sentencing decisions based on factors such as recidivism rates.

In theory, this seems to solve a glaring problem: these algorithmic tools are supposed to predict criminal behavior and help judges make decisions based on data-driven recommendations rather than their gut.

However, as Melo explains, this also presents deep concerns for legal experts: "AI risk assessment tools run on algorithms that are trained on historical crime data. In countries like America and many other nations, law enforcement has already been accused of targeting certain minorities, and this is reflected in the high number of these minorities in prisons. If the same data is fed in, the AI is going to be just as biased."

Melo continues, "Besides, the algorithms turn correlative insights into causal insights. If the data shows that a particular neighborhood is correlated with high recidivism, it doesn't prove that this neighborhood caused recidivism in any given case. These are things that a judge should be able to tell from his observations. Anything less is a far cry from justice, unless we figure out a way to cure the data."
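Melo's point about biased inputs can be illustrated with a toy sketch (all data below is invented for illustration): a model that simply estimates re-arrest rates from skewed historical records reproduces the skew as if it were a fact about the neighborhoods themselves.

```python
# Toy illustration with hypothetical data: a model trained on historically
# biased records reproduces the bias it was fed.
from collections import defaultdict

# Each record: (neighborhood, re_arrested). Neighborhood "A" is over-policed,
# so re-arrests are recorded far more often there -- a data artifact,
# not a causal fact about the neighborhood.
records = [("A", 1)] * 60 + [("A", 0)] * 40 + [("B", 1)] * 20 + [("B", 0)] * 80

def train_rate_model(rows):
    """Estimate P(re-arrest | neighborhood) directly from the records."""
    totals, hits = defaultdict(int), defaultdict(int)
    for hood, outcome in rows:
        totals[hood] += 1
        hits[hood] += outcome
    return {hood: hits[hood] / totals[hood] for hood in totals}

model = train_rate_model(records)
print(model)  # {'A': 0.6, 'B': 0.2} -- the skew goes in, the skew comes out
```

The "risk score" for neighborhood A is three times that of B, yet nothing in the model distinguishes a real behavioral difference from a policing artifact, which is exactly the correlation-versus-causation gap Melo describes.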

As we continue developing smarter technologies, data protection becomes an increasingly important issue. This includes protecting private information from hackers and complying with GDPR standards across all industries that collect personal data about their customers.

Apart from the GDPR, not many countries have passed targeted laws that affect big data. According to the 2018 Technology Survey by the International Legal Technology Association, 100 percent of law firms with 700 or more lawyers use AI tools or are pursuing AI projects.

If this trend continues and courts and judges prove willing to adopt AI, law firms and courts would eventually fall into the category of organizations that must abide by data protection rules. Attorney-client privilege could be put at risk by a hack, and court decisions as well.

"The need for stringent local laws that help regulate how data is received and managed has never been clearer, and this is why it is shocking that many governments have not acted faster."

Joilson Melo

"Many governments have an unholy alliance with tech giants and the companies that deal most with data," says Melo.

"These companies are at the front of national development and are the most attractive national propositions for investments. Leaders do not want to stifle them or be seen as impeding technological advancement. However, if the law must apply equally, governments should take a cue from the GDPR and start now, before we see privacy violations worse than we already have."

As Artificial Intelligence becomes more ingrained in our lives, so do the legal issues that surround it.

One of the most prevalent legal questions is whether machines should be allowed to control self-driving cars and deadly weapons. Self-driving cars are already on the market, but they have a long way to go before they can replace human drivers. The technology has not been perfected yet and will require huge strides before we can say with certainty that these vehicles are safe for society at large.

The larger concerns here touch on how easily these algorithms can be hacked and influenced externally.

AI and Weapons/War Crimes: The possibility of autonomous weapons systems has been touted in many spheres as a powerful way to identify and eliminate threats. It has also met strong pushback, for obvious reasons. Empathy, concession, and a certain big-picture approach have always played crucial roles in war and border security, and these are traits that we still cannot build into an algorithm.

Human Rights Questions: One of the main questions in the area of human rights concerns algorithmic transparency. There have been reports of people losing jobs, being denied loans, and being put on no-fly lists with no explanation other than "it was an algorithmic determination."

If this pattern persists, the risk to human rights is enormous. The questions of cybersecurity vulnerabilities, AI bias, and lack of contestability are also concerns that touch on human rights.

Melo's concern seems more targeted at the law and how it can be preserved as an arbiter of justice and an enforcer of human rights, and he rightly points out the implications of leaving these questions unanswered.

"Deciding not to adopt AI in society and legal systems is deciding not to move forward as a civilization," Melo comments.

"However, deciding to adopt AI blindly would see us move back into a barbaric civilization. I believe that the best approach is a piecemeal approach towards adoption: take a step, spot the problems, eliminate them, and then take another step."

The law and legal practitioners stand to gain a lot from a proper adoption of AI into the legal system. Legal research is one area where AI has already begun to help. AI can streamline the thousands of results an internet or directory search would otherwise provide, offering a smaller, digestible handful of relevant authorities for legal research. This is already proving helpful, and with more targeted machine learning it will only get better.

The possible benefits go on: automated drafting of documents and contracts, document review, and contract analysis are among those considered imminent.

Many have even considered the possibilities of AI in helping with more administrative functions like the appointment of officers and staff, administration of staff, and making the citizens aware of their legal rights.

A future without AI seems bleak and laborious for most industries, including the legal industry, and while we must march on, we must be cautious about our strategies for adoption. This point is better put in the words of Joilson Melo: "The possibilities are endless, but the burden of care is very heavy; we must act and evolve cautiously."



New report assesses progress and risks of artificial intelligence – Brown University

"While many reports have been written about the impact of AI over the past several years, the AI100 reports are unique in that they are both written by AI insiders, experts who create AI algorithms or study their influence on society as their main professional activity, and part of an ongoing, longitudinal, century-long study," said Peter Stone, a professor of computer science at the University of Texas at Austin, executive director of Sony AI America and chair of the AI100 standing committee. "The 2021 report is critical to this longitudinal aspect of AI100 in that it links closely with the 2016 report by commenting on what's changed in the intervening five years. It also provides a wonderful template for future study panels to emulate by answering a set of questions that we expect future study panels to reevaluate at five-year intervals."

Eric Horvitz, chief scientific officer at Microsoft and co-founder of the One Hundred Year Study on AI, praised the work of the study panel.

"I'm impressed with the insights shared by the diverse panel of AI experts on this milestone report," Horvitz said. "The 2021 report does a great job of describing where AI is today and where things are going, including an assessment of the frontiers of our current understandings and guidance on key opportunities and challenges ahead on the influences of AI on people and society."

In terms of AI advances, the panel noted substantial progress across subfields of AI, including speech and language processing, computer vision and other areas. Much of this progress has been driven by advances in machine learning techniques, particularly deep learning systems, which have made the leap in recent years from the academic setting to everyday applications.

In the area of natural language processing, for example, AI-driven systems are now able not only to recognize words, but to understand how they're used grammatically and how meanings can change in different contexts. That has enabled better web search, predictive text apps, chatbots and more. Some of these systems are now capable of producing original text that is difficult to distinguish from human-produced text.

Elsewhere, AI systems are diagnosing cancers and other conditions with accuracy that rivals trained pathologists. Research techniques using AI have produced new insights into the human genome and have sped the discovery of new pharmaceuticals. And while the long-promised self-driving cars are not yet in widespread use, AI-based driver-assist systems like lane-departure warnings and adaptive cruise control are standard equipment on most new cars.

Some recent AI progress may be overlooked by observers outside the field but actually reflects dramatic strides in the underlying AI technologies, says Michael Littman, the Brown University computer science professor who chaired the study panel. One relatable example is the use of background images in video conferences, which became a ubiquitous part of many people's work-from-home lives during the COVID-19 pandemic.

"To put you in front of a background image, the system has to distinguish you from the stuff behind you, which is not easy to do just from an assemblage of pixels," Littman said. "Being able to understand an image well enough to distinguish foreground from background is something that maybe could happen in the lab five years ago, but certainly wasn't something that could happen on everybody's computer, in real time and at high frame rates. It's a pretty striking advance."
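The compositing step Littman describes can be sketched minimally as follows. The hard part in real systems, producing the per-pixel foreground mask, is done by a neural segmentation model running in real time; here the mask is hand-made, and the "frame" and "background" are tiny synthetic images.

```python
# Background replacement, reduced to its final compositing step:
# given a boolean foreground mask, keep the person's pixels from the
# camera frame and take everything else from the virtual backdrop.
import numpy as np

H, W = 4, 4
frame = np.full((H, W, 3), 200, dtype=np.uint8)    # camera frame (uniform gray)
background = np.zeros((H, W, 3), dtype=np.uint8)   # virtual backdrop (black)

mask = np.zeros((H, W), dtype=bool)
mask[1:3, 1:3] = True                              # pretend these are "person" pixels

# Broadcast the (H, W, 1) mask against the (H, W, 3) color images.
composite = np.where(mask[..., None], frame, background)
print(composite[2, 2], composite[0, 0])  # foreground pixel vs. backdrop pixel
```

In a real pipeline the mask comes from a segmentation network evaluated on every frame, which is precisely the capability Littman notes has only recently become cheap enough to run on everyday hardware.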

As for the risks and dangers of AI, the panel does not envision a dystopian scenario in which super-intelligent machines take over the world. The real dangers of AI are a bit more subtle, but are no less concerning.

Some of the dangers cited in the report stem from deliberate misuse of AI: deepfake images and video used to spread misinformation or harm people's reputations, or online bots used to manipulate public discourse and opinion. Other dangers stem from "an aura of neutrality and impartiality associated with AI decision-making in some corners of the public consciousness, resulting in systems being accepted as objective even though they may be the result of biased historical decisions or even blatant discrimination," the panel writes. This is a particular concern in areas like law enforcement, where crime prediction systems have been shown to adversely affect communities of color, or in health care, where embedded racial bias in insurance algorithms can affect people's access to appropriate care.

As the use of AI increases, these kinds of problems are likely to become more widespread. The good news, Littman says, is that the field is taking these dangers seriously and actively seeking input from experts in psychology, public policy and other fields to explore ways of mitigating them. The makeup of the panel that produced the report reflects the widening perspective coming to the field, Littman says.

"The panel consists of almost half social scientists and half computer science people, and I was very pleasantly surprised at how deep the knowledge about AI is among the social scientists," Littman said. "We now have people who do work in a wide variety of different areas who are rightly considered AI experts. That's a positive trend."

Moving forward, the panel concludes that governments, academia and industry will need to play expanded roles in making sure AI evolves to serve the greater good.


A Closer Look at Artificial Intelligence-Inspired Policing Technologies – University of Virginia

Artificial intelligence-inspired policing technology and techniques like facial recognition software and digital surveillance continue to find traction and champions among law enforcement agencies, but at what cost to the public?

Some cities, like Wilmington, North Carolina, have even adopted AI-driven policing, where technology like ShotSpotter identifies gunshots and their locations. The software also recommends a next best action to patrol officers based on their current location, police data on past crime records, time of day, and housing and population density.

Rene Cummings, data activist in residence at the University of Virginia's School of Data Science, warns that the rules of citizenship are changing with the development of AI-inspired policing technologies. She explains, "If the rules are changing, then the public needs to have a voice and has the right to provide input on where we need to go with these technologies, as well as to demand solutions that are accountable, explainable and ethical."

As artificial intelligence is used in the development of technology-based solutions, Cummings' research questions the ethical use of technology to collect and track citizen data, aiming to hold agencies more accountable and to give citizens greater transparency.

"Law enforcement, national security, and defense agencies are spending a lot of money on surveillance tools with little oversight as to their impact on communities and an individual's right to privacy," Cummings said. "We're creating a tool that would give citizens the ability to see how these powerful tools are used and how they impact our lives."

Cummings and a team of data science graduate students are developing an algorithmic tool to evaluate the impact of AI-inspired law enforcement technologies. Their goal is to create an algorithmic force score that would eventually be used in an application that tracks technologies currently used by law enforcement agencies by force and zip code.

Sarah Adams and Claire Setser, both students in the online M.S. in Data Science program, said they chose the project because they wanted to put their data science skills to work for the public good. Cummings praised their effort: "The algorithmic foundation was created with tremendous effort by Sarah and Claire, who went through massive amounts of existing data to create an algorithmic force model."

Adams said she wanted to work on a capstone project that contributed to and supported the ongoing efforts toward increasing police accountability and citizen activism. "Our cohort chose our capstone projects at the beginning of 2021, which was less than one year after the loss of George Floyd, and our country had been in civil unrest for quite some time. I was inspired by Rene Cummings' energy and passion for data ethics and its application in criminology."

Setser agreed. "I was attracted to this capstone project because of the possibility to enact and help push for real change. Citizens have a right to understand the technologies that are used to police them and surveil their lives every day. The problem is that this information is not readily available, so the idea of creating a tool to educate the public and encourage dialogue was of great interest to me."

Students in the M.S. in Data Science program are required to complete a capstone project sponsored by corporate, government and non-profit organizations. Students collaborate closely with sponsors and faculty across disciplines to tackle applied problems and generate data-driven solutions. Capstone projects range in scope and focus, and past projects have explored health disparities, consumer behavior, election forecasting, disease diagnosis, mental health, credit card fraud and climate change.

"The capstone project was a valuable opportunity to combine and implement almost all of the skills and knowledge that we gained throughout the program," Setser said. "It's an opportunity to experience the data pipeline from beginning to end while providing your sponsor a better understanding of the data. This is incredibly rewarding."

The project's next stage is to fine-tune and test, and Cummings and her team hope to collaborate with UVA and the wider Charlottesville community. "What makes this so exciting is that we're creating something brand new and adding new insights into emerging technology," Cummings said. "Sarah and Claire have been amazing, delivering something extraordinary in such a short space of time. It really speaks to their expertise, determination, and commitment toward AI for the public and social good."

Cummings joined the School of Data Science in 2020 as its first data activist in residence. She is a criminologist, criminal psychologist, therapeutic jurisprudence specialist, AI ethicist and AI strategist. Her research places her on the frontline of artificial intelligence for social good, justice-oriented AI design, and social justice in AI policy and governance. She is the founder of Urban AI and a community scholar at Columbia University.


San Diego ranks relatively high in national ranking for artificial intelligence innovation – The San Diego Union-Tribune

Artificial Intelligence is jockeying to become the focal point of U.S. technology innovation in coming years, and San Diego is among the cities well positioned to be a frontrunner in this looming AI race.

A new report from the Metropolitan Policy Program at the Brookings Institution ranked more than 360 cities based on their AI economic prowess.

Bay Area metros San Francisco and San Jose topped the list, according to Brookings, a public policy think tank based in Washington, D.C. They were followed by 13 "early adopter" cities that managed to claw out a toehold in AI, including San Diego.

"Not everywhere should be looking to artificial intelligence for a major change in its economy, but places like San Diego really need to," said Mark Muro, a Brookings fellow and co-author of the report. "I think the costs of being out of position on it are pretty high for San Diego, and the benefits of leveraging it fully are really high."

To rank cities, Brookings combined data on federal research grants, AI academic papers, AI patents, job postings and AI-related companies, among other factors.
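The kind of composite index Brookings describes, combining several indicators into one ranking, can be sketched as follows, assuming a simple min-max normalization of each indicator followed by an unweighted average. The city names, indicator names and values below are invented for illustration; the report's actual methodology is more involved.

```python
# Hypothetical composite AI-economy score: normalize each indicator to
# [0, 1] across cities, then average the normalized values per city.
def composite_scores(cities):
    metrics = list(next(iter(cities.values())))
    lo = {m: min(c[m] for c in cities.values()) for m in metrics}
    hi = {m: max(c[m] for c in cities.values()) for m in metrics}

    def norm(m, v):
        # Min-max scale; a constant indicator contributes 0 for everyone.
        return (v - lo[m]) / (hi[m] - lo[m]) if hi[m] > lo[m] else 0.0

    return {
        name: sum(norm(m, vals[m]) for m in metrics) / len(metrics)
        for name, vals in cities.items()
    }

cities = {
    "Metro X": {"grants": 120, "patents": 300, "job_postings": 5000},
    "Metro Y": {"grants": 40,  "patents": 100, "job_postings": 1500},
    "Metro Z": {"grants": 80,  "patents": 50,  "job_postings": 2500},
}
scores = composite_scores(cities)
print(max(scores, key=scores.get))  # the metro leading on every indicator wins
```

A real index would also weight indicators by importance and control for metro size, which is how smaller cities like Santa Barbara or Boulder can still register significant AI footprints "relative to their size."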

Besides San Diego, Los Angeles, Seattle, Boston, Austin, Washington, D.C., and Raleigh, N.C., are in strong positions. Smaller cities with significant AI footprints relative to their size include Santa Barbara, Santa Cruz, Boulder, Colo., Lincoln, Neb., and Santa Fe, N.M.

An additional 87 cities have the potential to become players but so far have limited AI activities, according to the study.

For most of us, AI is best known through the recommendations that pop up on Amazon or Spotify, when smart speakers answer voice commands, or when navigation apps give turn-by-turn directions.

But AI is much more than that, with the potential to permeate thousands of industries. It could prevent power outages and help heal grids quickly, better route shipping to cut emissions, aid in medical diagnoses, and power navigation for self-driving vehicles.

Muro said Brookings undertook the research after receiving requests from economic development officials.

"They watched the digitization of everything during the pandemic," he said. "They're asking: where do we stand on these advanced digital technologies? How do we engage with this?"

As with other technologies, artificial intelligence tends to be clustered on the coasts. Of the 363 metro areas in the study, 261 had no significant AI footprint.

"This is not everywhere," said Muro. "But we think there can be a happy medium where we retain our coastal innovation centers while also taking steps to help other places make progress and counter some of this massive concentration."

In San Diego, companies such as Qualcomm, Oracle, Intuit, Teradata, Cubic, Viasat, Thermo Fisher and Illumina develop artificial intelligence and machine learning algorithms.

But the key drivers of the region's AI prowess stem from the military and universities.

The Naval Information Warfare Systems Command (NAVWAR) is based locally, creating a magnet for defense contractors and cybersecurity firms working in AI.

"San Diego's affiliation with the military has been extremely important," said Nate Kelley, senior researcher at the San Diego Regional Economic Development Corp. "There are more and more contracts coming, particularly through NAVWAR. Those federal contracts tend to be large, and they're multi-year. So they're less vulnerable to business cycles."

UC San Diego was an early researcher in neural networks, said Rajesh Gupta, director of the Halicioglu Data Sciences Institute. That work helped pave the way for the machine learning engines that banks use to uncover credit card transaction fraud.

Gupta thinks the Brookings report underestimates San Diego's AI capabilities. This summer, a new AI Research Institute at UCSD won a $20 million grant from the National Science Foundation to tackle big, complicated problems.

Among them: tapping artificial intelligence to cut the time and cost of designing semiconductors; finding ways to improve communications networks; and researching how robots interact with humans to make self-driving cars safer.

The San Diego Super Computer Center also performs research related to AI, and the San Diego Association of Governments (SANDAG) has been an early proponent of AI-based smart cities technologies, said Gupta.

"We have a $39 million effort going on today, basically on grid response and making it intelligent," said Gupta. "It's smart buildings, smart parking, smart transportation. These are what will define the metropolitan areas of tomorrow, with AI embedded in them."


New institute aims to unlock the secrets of corn using artificial intelligence – Agri-Pulse

Iowa State University researchers are growing two kinds of corn plants.

If you drive past the many fields near the university's campus in Ames, you can see row after row of the first. But the second exists in a location that hasn't been completely explored yet: cyberspace.

The researchers, part of the AI Institute for Resilient Agriculture, are using photos, sensor data and artificial intelligence to create digital twins of corn plants that, through analysis, can lead to a better understanding of their real-life counterparts. They hope the resulting software and techniques will lead to better management, improved breeding, and ultimately, smarter crops.

"We need to use lots of real-time, high-resolution data to make decisions," Patrick Schnable, an agronomy professor and director of Iowa State's Plant Sciences Institute, told Agri-Pulse. "Just collecting data for data's sake is not something that production ag wants. But data which is then linked to statistical models or other kinds of mathematical models that advise farmers on what to do has a lot of value."

The idea of machine learning systems that can improve or take over typical human tasks has been seeing increased attention over the past couple of years in many industries, including agriculture. In 2019, the National Science Foundation and several partner agencies, including the USDA, began establishing and funding AI institutes to research and advance artificial intelligence in fields like agriculture.

In their call for proposals, the organizations said AI could spur "the next revolution in food and feed production."

"The Green Revolution of the 1960s greatly enhanced food production and resulted in positive impacts on food security, human health, employment, and overall quality of life for many," the solicitation said. "There were also unintended consequences on natural resource use, water and soil quality, and pest population expansion. An AI-based approach to agriculture can go much further by addressing whole food systems, inputs and outputs, internal and external consequences, and issues and challenges at micro, meso, and macro scales that include meeting policy requirements of ecosystem health."

Among the seven inaugural institutes established in 2020 were two focusing on agriculture: the AI Institute for Future Agricultural Resilience, Management and Sustainability at the University of Illinois at Urbana-Champaign, and the AI Institute for Next Generation Food Systems at the University of California, Davis. The 2021 lineup includes the AIIRA and the Institute for Agricultural AI for Transformation Workforce and Decision Support (AgAID) at Washington State University.

Lakshmi Attigala, a senior scientist and lab manager at Iowa State University, prepares a corn plant to be photographed.

The AIIRA, which received $20 million in funding from these governmental organizations, plans to pool the expertise of researchers at Iowa State, Carnegie Mellon University, the University of Arizona, New York University, George Mason University, the Iowa Soybean Association, the University of Nebraska-Lincoln and the University of Missouri to study the intersection of plant science, agronomics and AI.

The institute hopes to develop AI algorithms that can take all of the collectible data from a field, whether gathered by ground robots, drones or satellites, and analyze it to create tools farmers can use to make crops more productive and resilient to the pressures brought about by climate change.

"This is a game-changer," Baskar Ganapathysubramanian, the director of the institute, told Agri-Pulse as he walked toward a nondescript white shed tucked between crop fields on the Iowa State University campus.

"Scouting is based on the visual," he said. "By using multimodal things, you can actually go beyond the visual and do early detection and early mitigation. That's not only sustainable, because you're going to use less of the chemicals needed, but also amazingly profitable."

Ganapathysubramanian opened the door to reveal a flurry of activity. Directly inside, genetics graduate student Yawei Li held a protractor up to a corn plant in various positions, trying to measure the angles of its leaves.

Across the room, Lakshmi Attigala, a senior scientist and lab manager, grabbed a fully headed corn plant from a gray tote and walked it over to the lab's makeshift photography studio, where a sheet of blue cloth hanging from the ceiling served as a backdrop.

She placed the corn plant in a small, rotating green vase ringed by light stands and adjusted its leaves, preparing it for a photo shoot. She gave it a unique number, 21-3N3125-1, which was printed on a piece of paper she attached to the front of it.

As the vase rotated, she used two cameras, one hanging from the ceiling and the other sitting atop a tripod in front of the corn plant, to take shots of the plant.

On the north side of the building, two researchers, senior staff member Zaki Jubery and graduate student Koushik Nagasubramanian, placed eight more corn plants in a ring surrounding a terrestrial laser scanner. The scanner emits laser pulses to capture point clouds, recovering the exact dimensions of the plants from the points the lasers bounce off.
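One simple measurement such a scan enables can be sketched with a few synthetic points standing in for a real point cloud: a plant's overall extents fall out of the per-axis minima and maxima of the returned points. The coordinates below are invented for illustration; real scans contain millions of points and need filtering and registration first.

```python
# Toy point cloud for a single plant: (x, y, z) positions in meters,
# as a terrestrial laser scanner might return them after segmentation.
import numpy as np

points = np.array([
    [0.00,  0.00, 0.00],   # base of the stalk
    [0.05,  0.02, 0.60],   # mid-stalk return
    [-0.30, 0.10, 1.10],   # leaf tip
    [0.25, -0.15, 1.80],   # tassel
])

# Axis-aligned bounding box: width (x), depth (y), height (z).
dims = points.max(axis=0) - points.min(axis=0)
height = dims[2]
print(height)  # plant height in meters
```

Traits like leaf angle, which Yawei Li measures by hand with a protractor, can likewise be computed from vectors between point-cloud points, which is part of what makes the digital-twin approach attractive.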


All three of these activities, though happening separately in different parts of the room, feed data from the 80 corn plants scanned that day into a machine learning program that studies their features to learn what the plants look like. If cameras, lasers and sensors can collect enough data on corn plants, the software, once fully developed, should be able to create near-identical models of them.

"The idea is that we perfect something from here and then we do that on a higher scale in the field," said Nagasubramanian. "That's a more complicated thing if you have plants in the background and you have changing light intensities and clouds."

The institute, which collaborates with the Genomes to Fields Initiative to phenotype corn hybrid varieties across 162 environments in North America, also monitors a corn field lined with cameras mounted on poles. The solar-powered cameras sit above the corn plants and take photos every 15 minutes to watch each one develop over time.

The resulting data can be fed to AI programs to get a better understanding of how these plants grow and what genetic traits they share.

"Certainly it is going to help us understand, for example with the photography, what is the genetic control of leaf angle. And then that would allow us to develop varieties with different leaf angles more readily," Schnable said.

Schnable said it's too soon for the developing technology to be widely deployed in fields or used for breeding purposes, and that for now, the research funding is limited. But he believes private companies will use AI technology to develop their own products.

"These things do have significant impacts out there in the world," he said.

