Archive for the ‘Artificial Intelligence’ Category

Could AI save the Amazon rainforest? – The Guardian


Conservationists in the Brazilian Amazon are using a new tool to predict the next sites of deforestation, and it may prove a gamechanger in the war on logging

Jill Langlois

Sat 29 Apr 2023 11.00 EDT

It took just the month of March this year to fell an area of forest in Triunfo do Xingu equivalent to 700 football pitches. At more than 16,000 sq km, this Environmental Protection Area (APA) in the south-eastern corner of the Brazilian Amazon, in the state of Pará, is one of the largest conservation areas in the world. And according to a new tool that predicts where deforestation will happen next, it's also the APA at highest risk of even more destruction.

The tool, PrevisIA, is an artificial intelligence platform created by researchers at environmental nonprofit Imazon. Instead of trying to repair damage done by deforestation after the fact, they wanted to find a way to prevent it from happening at all.

PrevisIA pinpointed Triunfo do Xingu as the APA at highest risk of deforestation in 2023, with 271.52 sq km of forest in the conservation area expected to be lost by the end of the year. About 5 sq km had already been destroyed in March.

Home to the endangered white-cheeked spider monkey and other vulnerable and near-threatened species, such as the hyacinth macaw and the jaguar, the conservation area is rich in biodiversity often found nowhere else in the world. But its land runs through two municipalities, Altamira and São Félix do Xingu, with some of the highest rates of deforestation in the country. And despite Triunfo do Xingu being protected under Brazilian law, illegal activities such as mining, logging and land-grabbing have ravaged the area, stripping it bare in places.

But with PrevisIA, there is the potential for change. Imazon is now establishing partnerships with authorities across the region, with the aim of stopping deforestation before it starts.

Destruction across the Brazilian Amazon is creeping close to an all-time high. According to SAD, Imazon's Deforestation Alert System, deforestation this March tripled compared with the same month last year, and the first quarter of 2023 saw 867 sq km of rainforest destroyed, the second-largest area felled in the past 16 years.

The idea for PrevisIA emerged in 2016, when the team at Imazon analysed data collected from SAD satellite images. Tired of getting notifications after large swaths of forest had already been cleared, they asked themselves: is it possible to generate short-term deforestation prediction models?

"Existing deforestation prediction models were long-term, looking at what would happen in decades," says Carlos Souza Jr, senior researcher at Imazon and project coordinator of PrevisIA and SAD. "We needed a new tool that could get ahead of the devastation."

Souza and his team (a computer engineer, a consultant in geostatistics and two researchers) began developing a new model capable of generating annual predictions. They published their findings in the journal Spatial Statistics in August 2017.

The model takes a two-pronged approach. First, it focuses on trends present in the region, looking at geostatistics and historical data from Prodes, the annual government monitoring system for deforestation in the Amazon. Understanding what has happened can help make predictions more precise. When already-deforested areas are recent, this indicates gangs are operating in the area, so there's a higher risk that nearby forest will soon be wiped out.

Second, it looks at variables that put the brakes on deforestation, such as land protected by Indigenous and quilombola (descendants of rebel slaves) communities, and areas with bodies of water or other terrain that doesn't lend itself to agricultural expansion, as well as variables that make deforestation more likely, including higher population density, the presence of settlements and rural properties, and a higher density of road infrastructure, both legal and illegal.

"They are the arteries of destruction of the forest," says Souza, referring to unofficial roads that snake through the Amazon to facilitate illegal industrial activities. "These roads create the conditions for new deforestation."

Monitoring the construction of these roads is crucial to predicting and eventually preventing deforestation. According to Imazon, 90% of accumulated deforestation is concentrated within 5.5km of a road. Logging is even closer, with 90% taking place within 3km, and 85% of fires within 5km.
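To make the two-pronged approach concrete, here is a minimal sketch of how such signals might be folded into a single risk score for a patch of forest. The feature names, weights and logistic form are illustrative assumptions, not Imazon's published model.

```python
# Minimal, hypothetical sketch of a per-cell deforestation risk score.
# Feature names and weights are illustrative assumptions, not Imazon's model.
import math
from dataclasses import dataclass

@dataclass
class Cell:
    recent_clearing_share: float   # fraction of the surrounding area cleared recently
    distance_to_road_km: float     # distance to the nearest road, official or unofficial
    population_density: float      # people per sq km
    protected_overlap: float       # fraction covered by protected land, water or unsuitable terrain

def risk_score(cell: Cell) -> float:
    """Return a 0-1 risk score: pressure variables raise it, protective variables lower it."""
    pressure = (
        2.0 * cell.recent_clearing_share                    # recent clearing nearby signals active gangs
        + 1.5 * math.exp(-cell.distance_to_road_km / 5.5)   # risk decays with distance from roads
        + 0.5 * math.log1p(cell.population_density)
    )
    brake = 3.0 * cell.protected_overlap
    return 1.0 / (1.0 + math.exp(-(pressure - brake - 1.0)))  # squash to (0, 1)

if __name__ == "__main__":
    frontier = Cell(recent_clearing_share=0.3, distance_to_road_km=2.0,
                    population_density=15.0, protected_overlap=0.1)
    print(f"risk: {risk_score(frontier):.2f}")
```

Cells scoring above a chosen threshold would then be flagged as likely deforestation sites for the coming year; the real platform combines far richer geostatistical inputs.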

Researchers used to comb through thousands of satellite images to see whether they could spot new roads slicing through the biome. With PrevisIA, the work is handed over to an AI algorithm that automates mapping, allowing for quicker analysis and, in turn, more frequent updates.
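As a rough illustration of that automated mapping step, the sketch below uses a placeholder detector to mark road pixels in a satellite tile and reports how much road appears that was absent from the previous map. The threshold-based detector and grid sizes are stand-in assumptions; PrevisIA's actual model is not described in detail in the article.

```python
# Hypothetical stand-in for automated road mapping: segment roads in the latest
# satellite tile and measure how much road was not present in the previous map.
import numpy as np

def segment_roads(tile: np.ndarray) -> np.ndarray:
    """Placeholder road detector: returns a boolean mask of pixels classified as road."""
    # Real systems use a trained model; this brightness threshold is only for illustration.
    return tile > tile.mean() + 2 * tile.std()

def new_road_km(previous: np.ndarray, current: np.ndarray, pixel_size_km: float) -> float:
    """Approximate newly mapped road length as the count of new road pixels times pixel size."""
    newly_detected = current & ~previous
    return float(newly_detected.sum()) * pixel_size_km

if __name__ == "__main__":
    rng = np.random.default_rng(42)
    old_tile = rng.random((256, 256))      # stand-in for last month's imagery
    new_tile = old_tile.copy()
    new_tile[120, 40:200] += 1.0           # simulate a fresh road cutting across the tile
    km = new_road_km(segment_roads(old_tile), segment_roads(new_tile), pixel_size_km=0.03)
    print(f"newly mapped road: ~{km:.1f} km")
```

Newly mapped road segments would then feed back into the risk model, since most deforestation clusters within a few kilometres of a road.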

But without a robust computational platform and the ability to update road maps more quickly, PrevisIA couldn't be put into action. It wasn't until 2021 that the team at Imazon partnered with Microsoft and Fundo Vale, acquiring the cloud computing power they needed to run the AI algorithm for mapping roads.

"Technology has always been the reason we've been able to control deforestation," says Juliano Assunção, executive director of the Climate Policy Initiative and professor at the Pontifical Catholic University of Rio de Janeiro (PUC-Rio). "PrevisIA is a natural evolution of this incorporation of technology in the fight to protect the Amazon, and one with a lot of potential."

While technology is crucial for PrevisIA to work, who uses it will be what makes the difference. Assunção notes the obvious entities that could benefit from using PrevisIA, namely government agencies at all levels tasked with protecting the rainforest, but he also cites those not directly involved in monitoring the Amazon: banks, investors and those who buy products from the region, who could use the information to make better decisions, both from an economic and an environmental point of view.

So far, Imazon has official partnerships with a handful of state prosecutors' offices in the region. They hope that their use of PrevisIA will lead to less punishment and more prevention.

"We don't want to have to keep coming in after the damage has already been done," says José Godofredo Pires dos Santos, a public prosecutor in Pará and coordinator of the environmental operational support centre. "We're always working to penalise these environmental crimes and irregularities. But from the environmental side, the damage has already been done. We want to reverse that logic. We want to find a way to prevent it from ever happening."

Pires dos Santos's team has been having weekly meetings with Imazon to get up to speed on how they can best use PrevisIA. He expects they'll start putting the system to use in the second half of 2023.

In Acre in western Brazil, the state prosecutors office hopes for the same. The idea, says prosecutor Arthur Cezar Pinheiro Leite, is for PrevisIA to notify monitoring agencies of high-risk areas, so they can keep a closer watch and so that prosecutors can warn property owners or others in the region that they will be held responsible if deforestation occurs.

"We want them to know we're aware of what's going on," Leite says. "And if that deforestation does still manage to happen, they'll be punished and serve as an example for others considering doing the same."

So far, Souza says, PrevisIA's accuracy has been "fantastic". Of all its deforestation alerts, 85% have been within 4km of the predicted location. Just over 49% of alerts have been in areas classified as high or very high risk. He and his team are constantly working to improve their model, but he also hopes that, one day, they get it wrong.

If that happens, he says, it'll mean prevention is working.
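For a sense of how such figures are tallied, here is a small illustrative calculation on fabricated data: it computes the share of observed alerts falling within 4 km of a predicted location and the share landing in areas classed as high or very high risk. Only the two thresholds come from the article; the alert records below are invented.

```python
# Illustrative accuracy tally on made-up alert records.
# Each record: (distance in km from the observed alert to the nearest predicted site,
#               risk class PrevisIA had assigned to that location)
alerts = [
    (1.2, "very high"), (3.8, "high"), (0.5, "medium"), (2.1, "high"),
    (6.3, "low"), (3.3, "very high"), (0.9, "medium"), (2.7, "low"),
    (4.8, "high"), (1.6, "medium"),
]

within_4km = sum(1 for dist, _ in alerts if dist <= 4.0) / len(alerts)
high_risk = sum(1 for _, cls in alerts if cls in ("high", "very high")) / len(alerts)

print(f"alerts within 4 km of a prediction: {within_4km:.0%}")    # 80% on this toy data
print(f"alerts in high or very high risk areas: {high_risk:.0%}")  # 50% on this toy data
```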


View original post here:
Could AI save the Amazon rainforest? - The Guardian

Which Jobs Will Be Most Impacted by ChatGPT? – Visual Capitalist

Jobs Most Impacted by ChatGPT and Similar AI Models

On November 30, 2022, OpenAI heralded a new era of artificial intelligence (AI) by introducing ChatGPT to the world.

The AI chatbot stunned users with its human-like and thorough responses. ChatGPT could comprehend and answer a variety of different questions, make suggestions, research and write essays and briefs, and even tell jokes (amongst other tasks).

Many of these skills are used by workers in their jobs across the world, which raises the question: which jobs will be transformed, or even replaced, by generative AI in the near future?

This infographic from Harrison Schell visualizes the March 2023 findings of OpenAI on the potential labor market impact of large language models (LLMs) and various applications of generative AI, including ChatGPT.

The OpenAI working paper specifically examined the U.S. industries and jobs most exposed to large language models like GPT, which the chatbot ChatGPT operates on.

Key to the paper is the definition of what "exposed" actually means:

"A proxy for potential economic impact without distinguishing between labor-augmenting or labor-displacing effects." (OpenAI)

Thus, the results include both jobs where humans could possibly use AI to optimize their work, along with jobs that could potentially be automated altogether.

OpenAI found that 80% of the American workforce belonged to an occupation where at least 10% of its tasks could be done (or aided) by AI. One-fifth of the workforce belonged to an occupation where at least 50% of work tasks would be affected by artificial intelligence.
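As a toy illustration of how those headline shares are computed, the snippet below tallies employment-weighted exposure across a handful of invented occupations. Only the 10% and 50% task-exposure thresholds come from the article; the exposure fractions and worker counts are fabricated and merely chosen to roughly reproduce the reported shares.

```python
# Hypothetical occupation-level data: (share of tasks exposed to LLMs, number of workers).
# The numbers are invented for illustration; only the 10% and 50% thresholds are from the article.
occupations = [
    (0.05, 20_000_000),
    (0.15, 40_000_000),
    (0.35, 20_000_000),
    (0.55, 15_000_000),
    (0.80, 5_000_000),
]

total_workers = sum(workers for _, workers in occupations)
share_10 = sum(w for exposure, w in occupations if exposure >= 0.10) / total_workers
share_50 = sum(w for exposure, w in occupations if exposure >= 0.50) / total_workers

print(f"workers in occupations with at least 10% of tasks exposed: {share_10:.0%}")  # 80%
print(f"workers in occupations with at least 50% of tasks exposed: {share_50:.0%}")  # 20%
```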

Here is a list of jobs highlighted in the paper as likely to see (or already seeing) AI disruption, where AI can reduce the time to do tasks associated with the occupation by at least 50%.

Analysis was provided both by human assessors and by GPT-4, with results from both shown in the infographic.

Editor's note: The paper only highlights some jobs impacted. One AI model found a list of 84 additional jobs that were fully exposed, but not all were listed. One human model found 15 additional fully exposed jobs that were not listed.

Generally, jobs that require repetitive tasks, some level of data analysis, and routine decision-making were found to face the highest risk of exposure.

Perhaps unsurprisingly, information processing industries that involve writing, calculating, and high-level analysis have a higher exposure to LLM-based artificial intelligence. However, science and critical-thinking jobs within those industries negatively correlate with AI exposure.

On the flipside, not every job is likely to be affected. Here's a list of jobs that are likely least exposed to large language model AI disruption.

Naturally, hands-on industries like manufacturing, mining, and agriculture were more protected, but still include information processing roles at risk.

Likewise, the in-person service industry is expected to see minimal impact from these kinds of AI models. But patterns are beginning to emerge for job-seekers and industries that may soon have to contend with artificial intelligence.

OpenAI analyzed correlations between AI exposure in the labor market and a job's requisite education level, wages, and job training.

The paper found that jobs with higher wages have a higher exposure to LLM-based AI (though there were numerous low-wage jobs with high exposure as well).

Professionals with higher education degrees also appeared to be more greatly exposed to AI impact, compared to those without.

However, occupations with a greater level of on-the-job training had the least amount of work tasks exposed, compared to those jobs with little-to-no training.

The potential impact of ChatGPT and similar AI-driven models on individual job titles depends on several factors, including the nature of the job, the level of automation that is possible, and the exact tasks required.

However, while certain repetitive and predictable tasks can be automated, others that require intangibles like creative input, understanding cultural nuance, reading social cues, or exercising good judgement cannot yet be handed off entirely.

And keep in mind that AI exposure isn't limited to job replacement. Job transformation, with workers using AI to speed up tasks or improve output, is extremely likely in many of these scenarios. Already, there are employment ads for "AI whisperers" who can effectively optimize automated responses from generalist AI.

As the AI arms race moves forward at a pace rarely seen before in the history of technology, it likely won't take long for us to see the full impact of ChatGPT and other LLMs on both jobs and the economy.

This article was published as a part of Visual Capitalist's Creator Program, which features data-driven visuals from some of our favorite Creators around the world.

Visit link:
Which Jobs Will Be Most Impacted by ChatGPT? - Visual Capitalist

David Williamson Shaffer shares expertise on artificial intelligence in … – UW-Madison

April 28, 2023

David Williamson Shaffer recently lent his expertise on artificial intelligence to news reports featured on two Wisconsin television stations.

Shaffer is the Sears Bascom Professor of Learning Analytics and the Vilas Distinguished Achievement Professor of Learning Sciences at the UW-Madison School of Education and a Data Philosopher at the Wisconsin Center for Education Research.

In a story aired on WKOW in Madison, Shaffer argued schools shouldn't ban AI tools like ChatGPT but should instead figure out how to teach students to use the tools appropriately. He said he could envision AI becoming commonplace in educational environments in the future.

"It's wrong to ban ChatGPT," he said. "Because students are going to need to know how to use these technologies correctly, they're going to need to know how to use them without plagiarizing and they're going to need to know how to use them to ask the right questions."

Shaffer recently outlined this argument in an op-ed published in Newsweek.

In a story aired on WAOW in Wausau, Shaffer explained and weighed in on a new AI feature rolled out on the social media platform Snapchat. Some, including law enforcement, have raised concerns about the new feature's ability to spread incorrect or harmful information or violate the privacy rights of minors.

Shaffer said parents can and should play an important role in helping their children navigate the ever-changing social media landscape.

"In the same way that you don't follow your kids around when they go out with their friends in the evening, but you talk with them about what they did and talk about what some of the dangers are, you assess their level of responsibility," he said in the interview.

The full WKOW story is available here.

The full WAOW story is available here.

Read more here:
David Williamson Shaffer shares expertise on artificial intelligence in ... - UW-Madison

Scientists use brain scans and AI to ‘decode’ thoughts – Economic Times

Scientists said Monday they have found a way to use brain scans and artificial intelligence modelling to transcribe "the gist" of what people are thinking, in what was described as a step towards mind reading. While the main goal of the language decoder is to help people who have lost the ability to communicate, the US scientists acknowledged that the technology raised questions about "mental privacy".

Aiming to assuage such fears, they ran tests showing that their decoder could not be used on anyone who had not allowed it to be trained on their brain activity over long hours inside a functional magnetic resonance imaging (fMRI) scanner.

Alexander Huth, a neuroscientist at the University of Texas at Austin and co-author of a new study, said that his team's language decoder "works at a very different level".

It is the first system to be able to reconstruct continuous language without an invasive brain implant, according to the study in the journal Nature Neuroscience.

That training allowed the researchers to map out how words, phrases and meanings prompted responses in the regions of the brain known to process language.

The model was trained to predict how each person's brain would respond to perceived speech, then narrow down the options until it found the closest response.
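A minimal sketch of that "predict, then narrow down" loop might look like the following. The candidate generator and the per-subject encoding model are represented here by stand-in functions; they are not the study's actual components, and the data is random.

```python
# Hypothetical sketch of the decoding loop: propose candidate phrases, predict the brain
# response each would evoke, and keep the one closest to the observed fMRI pattern.
import numpy as np

rng = np.random.default_rng(0)
VOCAB = ["she", "has", "not", "started", "to", "learn", "drive", "yet"]

def propose_continuations(prefix: list[str], n: int = 5) -> list[list[str]]:
    """Stand-in for a language model proposing candidate continuations of the decoded text."""
    return [prefix + list(rng.choice(VOCAB, size=2)) for _ in range(n)]

def predict_brain_response(candidate: list[str]) -> np.ndarray:
    """Stand-in for the per-subject encoding model: words -> predicted fMRI pattern."""
    return np.array([sum(map(ord, word)) % 100 for word in candidate[-3:]], dtype=float)

def decode_step(prefix: list[str], observed: np.ndarray) -> list[str]:
    """Keep the candidate whose predicted response is closest to the observed scan window."""
    candidates = propose_continuations(prefix)
    errors = [np.linalg.norm(predict_brain_response(c) - observed) for c in candidates]
    return candidates[int(np.argmin(errors))]

if __name__ == "__main__":
    decoded = ["she", "has"]
    for _ in range(3):                         # one step per few seconds of fMRI data
        observed = rng.normal(50.0, 20.0, 3)   # stand-in for the actual scan window
        decoded = decode_step(decoded, observed)
    print(" ".join(decoded))
```

Because each fMRI window blends several seconds of activity, the real system scores whole phrases rather than single words, which is why the decoder recovers paraphrases of the gist rather than exact wording.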

The study's first author Jerry Tang said the decoder could "recover the gist of what the user was hearing".

For example, when the participant heard the phrase "I don't have my driver's license yet", the model came back with "she has not even started to learn to drive yet".

The decoder struggled with personal pronouns such as "I" or "she," the researchers admitted.

But even when the participants thought up their own stories -- or viewed silent movies -- the decoder was still able to grasp the "gist," they said.

This showed that "we are decoding something that is deeper than language, then converting it into language," Huth said.

Because fMRI scanning is too slow to capture individual words, it collects a "mishmash, an agglomeration of information over a few seconds," Huth said.

"So we can see how the idea evolves, even though the exact words get lost."

Ethical warning

David Rodriguez-Arias Vailhen, a bioethics professor at Spain's Granada University not involved in the research, said it went beyond what had been achieved by previous brain-computer interfaces.

This brings us closer to a future in which machines are "able to read minds and transcribe thought," he said, warning this could possibly take place against people's will, such as when they are sleeping.

The researchers anticipated such concerns.

They ran tests showing that the decoder did not work on a person if it had not already been trained on their own particular brain activity.

The three participants were also able to easily foil the decoder.

While listening to one of the podcasts, the users were told to count by sevens, name and imagine animals or tell a different story in their mind. All these tactics "sabotaged" the decoder, the researchers said.

Next, the team hopes to speed up the process so that they can decode the brain scans in real time.

They also called for regulations to protect mental privacy.

"Our mind has so far been the guardian of our privacy," said bioethicist Rodriguez-Arias Vailhen.

"This discovery could be a first step towards compromising that freedom in the future."

See original here:
Scientists use brain scans and AI to 'decode' thoughts - Economic Times

Opinion: Artificial intelligence is the future of hiring – The San Diego Union-Tribune

Cooper is a professor of law at California Western School of Law and a research fellow at Singapore University of Social Sciences. He lives in San Diego. Kompella is CEO of industry analyst firm RPA2AI Research and visiting professor for artificial intelligence at the BITS School of Management, Mumbai, and lives in Bangalore, India.

Hiring is the lifeblood of the economy. In 2022, there were 77 million hires in the United States, according to the U.S. Department of Labor. Artificial intelligence is expected to make this hiring process more efficient and more equitable. Despite such lofty goals, there are valid concerns that using AI can lead to discrimination. Meanwhile, the use of AI in the hiring process is widespread and growing by leaps and bounds.

A Society for Human Resource Management survey last year showed that about 80 percent of employers use AI for hiring. And there is good reason for the assist: Hiring is a high-stakes decision for the individual involved and the businesses looking to employ talent. It is no secret, though, that the hiring process can be inefficient and subject to human biases.

AI offers many potential benefits. Consider that human resources teams spend only 7 seconds skimming a resume, a document which is itself a one-dimensional portrait of a candidate. Recruiters instead end up spending more of their time on routine tasks like scheduling interviews. By using AI to automate such routine tasks, human resources teams can spend more quality time on assessing candidates. AI tools can also use a wider range of data points about candidates that can result in a more holistic assessment and lead to a better match. Research shows that the overly masculine language used in job descriptions puts women off applying. AI can be used to create job descriptions and ads that are more inclusive.

But using AI for hiring decisions can also lead to discrimination. A majority of recruiters in the 2022 Society for Human Resource Management survey identified flaws in their AI systems. For example, the systems excluded qualified applicants or lacked transparency about how their algorithms work. There is also disparate impact (also known as unintentional discrimination) to consider. According to University of Southern California research from 2021, job advertisements are often not shown to women even when they are qualified for the roles being advertised. Also, advertisements for high-paying jobs are often hidden from women. Many states have a gender pay gap. When the advertisements themselves are invisible, the pay equity gap is likely not going to solve itself, even with the use of artificial intelligence.

Discrimination, even in light of new technologies, is still discrimination. New York City has fashioned a response by enacting Local Law 144, scheduled to come into effect on July 15. This law requires employers to provide notice to applicants when AI is being used to assess their candidacy. AI systems are subject to annual independent third-party audits, and audit results must be displayed publicly. Independent audits of such high-stakes AI usage are a welcome move by New York City.

California, long considered a technology bellwether, has been off to a slow start. The California Workplace Technology Accountability Act, a bill that focused on employee data privacy, is now dead. On the anvil are updates to Chapter 5 (Discrimination in Employment) of the California Fair Employment and Housing Act. Initiated a year ago by the Fair Employment and Housing Council (now called the Civil Rights Department), these remain a work in progress. These are not new regulations per se but an update of existing anti-discrimination provisions. The proposed draft is open for public comments but there is no implementation timeline yet. The guidance for compliance, the veritable dos and don'ts, including penalties for violations, are all awaited. There is also a recently introduced bill in the California Legislature that seeks to regulate the use of AI in business, including education, health care, housing and utilities, in addition to employment.

The issue is gaining attention globally. Among state laws on AI in hiring is one in Illinois that regulates AI tools used for video interviews. At the federal level, the Equal Employment Opportunity Commission has updated guidance on employer responsibilities. And internationally, the European Union's upcoming Artificial Intelligence Act classifies such AI as high-risk and prescribes stringent usage rules.

Adoption of AI can help counterbalance human biases and reduce discrimination in hiring. But the AI tools used must be transparent, explainable and fair. It is not easy to devise regulations for emerging technologies, particularly for a fast-moving one like artificial intelligence. Regulations need to prevent harm but not stifle innovation. Clear regulation coupled with education, guidance and practical pathways to compliance strikes that balance.

Read the original here:
Opinion: Artificial intelligence is the future of hiring - The San Diego Union-Tribune