Archive for the ‘Ai’ Category

Why AI’s diversity crisis matters, and how to tackle it – Nature.com

Inclusivity groups focus on promoting diverse builders for future artificial-intelligence projects. Credit: Shutterstock

Artificial intelligence (AI) is facing a diversity crisis. If it isn't addressed promptly, flaws in the working culture of AI will perpetuate biases that ooze into the resulting technologies, which will exclude and harm entire groups of people. On top of that, the resulting intelligence will be flawed, lacking varied social-emotional and cultural knowledge.

In a 2019 report from New York University's AI Now Institute, researchers noted that more than 80% of AI professors were men. Furthermore, Black individuals made up just 2.5% of Google employees and 4% of those working at Facebook and Microsoft. In addition, the report's authors noted that the overwhelming focus on "women in tech" when discussing diversity issues in AI is too narrow and likely to privilege white women over others.

Some researchers are fighting for change, but there's also a culture of resistance to their efforts. "Beneath this veneer of 'oh, AI is the future, and we have all these sparkly, nice things', both AI academia and AI industry are fundamentally conservative," says Sabine Weber, a scientific consultant at VDI/VDE Innovation + Technik, a technology consultancy headquartered in Berlin. AI in both sectors is dominated by mostly middle-aged white men from affluent backgrounds. "They are really attached to the status quo," says Weber, who is a core organizer of the advocacy group Queer in AI. Nature spoke to five researchers who are spearheading efforts to change the status quo and make the AI ecosystem more equitable.

Senior data science manager at Shopify in Atlanta, Georgia, and a general chair of the 2023 Deep Learning Indaba conference.

I am originally from Ghana and did my master's in statistics at the University of Akron in Ohio in 2011. My background is in using machine learning to solve business problems in customer-experience management. I apply my analytics skills to build models that drive customer behaviour, such as customer-targeting recommendation systems and aspects of lead scoring (ranking potential customers and prioritizing which ones to contact for different communications), and things of that nature.
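Lead scoring of this kind is often set up as a supervised classification problem: train a model on past leads that did or did not convert, then rank new prospects by predicted conversion probability. A minimal sketch in Python, with entirely hypothetical feature names and synthetic data (nothing here is from Shopify's systems):

```python
# Minimal lead-scoring sketch: rank prospects by predicted conversion
# probability. All feature names and data are hypothetical.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Historical leads: [site_visits, emails_opened, days_since_contact]
X_train = rng.poisson(lam=(5, 3, 14), size=(200, 3)).astype(float)
# 1 = converted to customer, 0 = did not (synthetic labels)
y_train = (X_train[:, 0] + X_train[:, 1] > 8).astype(int)

model = LogisticRegression().fit(X_train, y_train)

# New prospects to prioritize for outreach
prospects = np.array([[12.0, 6.0, 2.0],
                      [1.0, 0.0, 30.0],
                      [7.0, 4.0, 10.0]])
scores = model.predict_proba(prospects)[:, 1]

# Contact the highest-scoring prospects first
for rank, i in enumerate(np.argsort(scores)[::-1], start=1):
    print(f"rank {rank}: prospect {i}, score {scores[i]:.2f}")
```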

This year, I'm also a general chair for the Deep Learning Indaba, a meeting of the African machine-learning and AI community that is held in a different African country every year. Last year, it was held in Tunisia. This year, it is taking place in Ghana in September.

Our organization is built for all of Africa. Last year, 52 countries participated. The goal is to have all 54 African countries represented. Deep Learning Indaba empowers each country to have a network of people driving things locally. We have the flagship event, which is the annual conference, and country-specific IndabaX events (think TED and TEDx talks).

During Ghana's IndabaX conferences, we train people in how to program and how to deal with different kinds of data. We also do workshops on what is happening in the industry outside of Ghana and how Ghana should be involved. IndabaX provides funding and recommends speakers who are established researchers working for companies such as DeepMind, Microsoft and Google.

To strengthen machine learning, AI and inclusion in Ghana, we need to build capacity by training young researchers and students to understand the skill sets and preparation they need to excel in this field. The number one challenge we face is resources. Our economic status is such that the focus of the government and most Ghanaians is on people's daily bread. Most Ghanaians are not even thinking about technological transformation. Many local academics don't have the expertise to teach the students, to really ground them in AI and machine learning.

Most of the algorithms and systems we use today were created by people outside Africa. Africa's perspective is missing and, consequently, biases affect Africa. When we are doing image-related AI, there aren't many African images available. African data points make up no more than 1% of most industry machine-learning data sets.

When it comes to self-driving cars, the US road network is nice and clean, but in Africa, the network is very bumpy, with a lot of holes. There's no way that a self-driving car trained on US or UK roads could actually work in Africa. We also expect that using AI to help diagnose diseases will transform people's lives. But this will not help Africa if people are not going there to collect data, and to understand African health care and related social-support systems, sicknesses and the environment people live in.

Today, African students in AI and machine learning must look for scholarships and leave their countries to study. I want to see this change and I hope to see Africans involved in decision-making, pioneering huge breakthroughs in machine learning and AI research.

Researchers outside Africa can support African AI by mentoring and collaborating with existing African efforts. For example, we have Ghana NLP, an initiative focused on building algorithms to translate English into more than three dozen Ghanaian languages. Global researchers volunteering to contribute their skill set to African-specific research will help with efforts like this. Deep Learning Indaba has a portal in which researchers can sign up to be mentors.

Maria Skoularidou has worked to improve accessibility at a major artificial-intelligence conference. Credit: Maria Skoularidou

PhD candidate in biostatistics at the University of Cambridge, UK, and founder and chair of {Dis}Ability in AI.

I founded {Dis}Ability in AI in 2018, because I realized that disabled people weren't represented at conferences and it didn't feel right. I wanted to start such a movement so that conferences could be inclusive and accessible, and disabled people such as me could attend them.

That year, at NeurIPS (the annual conference on Neural Information Processing Systems) in Montreal, Canada, at least 4,000 people attended and I couldn't identify a single person who could be categorized as visibly disabled. Statistically, it doesn't add up to not have any disabled participants.

I also observed many accessibility issues. For example, I saw posters that were inconsiderate with respect to colour blindness. The place was so crowded that people who use assistive devices such as wheelchairs, white canes or service dogs wouldnt have had room to navigate the poster session. There were elevators, but for somebody with limited mobility, it would not have been easy to access all the session rooms, given the size of the venue. There were also no sign-language interpreters.

Since 2019, {Dis}Ability in AI has helped facilitate better accessibility at NeurIPS. There were interpreters, and closed captioning for people with hearing problems. There were volunteer escorts for people with impaired mobility or vision who requested help. There were hotline counsellors and silent rooms because large conferences can be overwhelming. The idea was: this is what we can provide now, but please reach out in case we are not considerate with respect to something, because we want to be ethical, fair, equal and honest. Disability is part of society, and it needs to be represented and included.

Many disabled researchers have shared their fears and concerns about the barriers they face in AI. Some have said that they wouldn't feel safe sharing details about their chronic illness, because if they did so, they might not get promoted, be treated equally, have the same opportunities as their peers, be given the same salary and so on. Other AI researchers who reached out to me had been bullied and felt that if they spoke up about their condition again, they could even lose their jobs.

People from marginalized groups need to be part of all the steps of the AI process. When disabled people are not included, the algorithms are trained without taking our community into account. If a sighted person closes their eyes, that does not make them understand what a blind person must deal with. We need to be part of these efforts.

Being kind is one way that non-disabled researchers can make the field more inclusive. Non-disabled people could invite disabled people to give talks or be visiting researchers or collaborators. They need to interact with our community at a fair and equal level.

William Agnew is a computer science PhD candidate at the University of Washington in Seattle. Sabine Weber is a scientific consultant at VDI/VDE Innovation + Technik in Erfurt, Germany. They are organizers of the advocacy organization Queer in AI.

Agnew: I helped to organize the first Queer in AI workshop for NeurIPS in 2018. Fundamentally, the AI field doesn't take diversity and inclusion seriously. Every step of the way, efforts in these areas are underfunded and underappreciated. The field often protects harassers.

Most people doing the work in Queer in AI are graduate students, including me. You can ask, "Why isn't it the senior professor? Why isn't it the vice-president of whatever?" The lack of senior members limits our operation and what we have the resources to advocate for.

The things we advocate for are happening from the bottom up. We are asking for gender-neutral toilets; for pronouns on conference registration badges, speaker biographies and surveys; for opportunities to run our queer-AI experiences survey, to collect demographics, experiences of harm and exclusion, and the needs of the queer AI community; and we are opposing extractive data policies. We, as a bunch of queer people who are marginalized by our queerness and who are the most junior people in our field, must advocate from those positions.

In our surveys, queer people consistently name the lack of community, support and peer groups as the biggest issues that might prevent them from continuing a career path in AI. One of our programmes gives scholarships to help people apply to graduate school, covering the fees for applications, standardized admissions tests such as the Graduate Record Examination (GRE), and university transcripts. Some people must fly to a different country to take the GRE. It's a huge barrier, especially for queer people, who are less likely to have financial support from their families and who experience repressive legal environments. For instance, US state legislatures are passing anti-trans and anti-queer laws affecting our membership.

In large part because of my work with Queer in AI, I switched from being a roboticist to being an ethicist. How queer people's data are used, collected and misused is a big concern. Another concern is that machine learning is fundamentally about categorizing items and people and predicting outcomes on the basis of the past. These things are antithetical to the notion of queerness, where identity is fluid and often changes in important and big ways, and frequently throughout life. We push back and try to imagine machine-learning systems that don't repress queerness.

You might say: "These models don't represent queerness. We'll just fix them." But queer people have long been the targets of different forms of surveillance aimed at outing, controlling or suppressing us, and a model that understands queer people well can also surveil them better. We should avoid building technologies that entrench these harms, and work towards technologies that empower queer communities.

Weber: Previously, I worked as an engineer at a technology company. I said to my boss that I was the only person who was not a cisgender dude in the whole team of 60 or so developers. He replied, "You were the only person who applied for your job who had the qualification. It's so hard to find qualified people."

But companies clearly aren't looking very hard. To them it feels like: "We're sitting on high. Everybody comes to us and offers themselves." Instead, companies could recruit people at queer organizations and feminist organizations. Every university has a women in science, technology, engineering and mathematics (STEM) group or a women in computing group that firms could easily go to.

But the thinking, "That's how we have always done it; don't rock the boat," is prevalent. It's frustrating. Actually, I really want to rock the boat, because the boat is stupid. It's such a disappointment to run up against these barriers.

Laura Montoya encourages those who, like herself, came to the field of artificial intelligence through a non-conventional route. Credit: Tim McMacken Jr (tim@accel.ai)

Executive director of the Accel.AI Institute and LatinX in AI in San Francisco, California.

In 2016, I started the Accel.AI Institute as an education company that helps under-represented or underserved people in AI. Now, it's a non-profit organization with the mission of driving AI for social-impact initiatives. I also co-founded the LatinX in AI programme, a professional body for people of Latin American background in the field. I'm first generation in the United States, because my family emigrated from Colombia.

My background is in biology and physical science. I started my career as a software engineer, but conventional software engineering wasn't rewarding for me. That's when I found the world of machine learning, data science and AI. I investigated the best way to learn about AI and machine learning without going to graduate school. I've always been an alternative thinker.

I realized there was a need for alternative educational options for people like me, who don't take the typical route, who identify as women, who identify as people of colour, who want to pursue an alternative path for working with these tools and technologies.

Later on, while attending large AI and machine-learning conferences, I met others like myself, but we made up a small part of the population. I got together with these few friends to brainstorm: "How can we change this?" That's how LatinX in AI was born. Since 2018, we've launched research workshops at major conferences and hosted our own call for papers in conjunction with NeurIPS.

We also have a three-month mentorship programme to address the brain drain resulting from researchers leaving Latin America for North America, Europe and Asia. More senior members of our community and even allies who are not LatinX can serve as mentors.

In 2022, we launched our supercomputer programme, because computational power is severely lacking in much of Latin America. For our pilot programme, to provide research access to high-performance computing resources at the Guadalajara campus of the Monterrey Institute of Technology in Mexico, the technology company NVIDIA, based in Santa Clara, California, donated a DGX A100 system, essentially a large server computer. The government agency for innovation in the Mexican state of Jalisco will host the system. Local researchers and students can share access to this hardware for research in AI and deep learning. We put out a global call for proposals for teams that include at least 50% Latinx members who want to use this hardware, without having to be enrolled at the institute or even be located in the Guadalajara region.

So far, eight teams have been selected to take part in the first cohort, working on projects that include autonomous-driving applications for Latin America and monitoring tools for animal conservation. Each team gets access to one graphics processing unit (GPU), which is designed to handle complex graphics and visual-data processing tasks in parallel, for the period of time they request. This will be an opportunity for cross-collaboration, for researchers to come together to solve big problems and use the technology for good.

See original here:

Why AI's diversity crisis matters, and how to tackle it - Nature.com

A Bug in the Logic: Regulators try to solve the workplace AI problem … – The Federalist Society

Earlier this month, the Biden administration published a request for information (RFI) on artificial intelligence in the workplace. The request asked workers to submit, among other things, anecdotes about how they had been affected by AI. These anecdotes would then be used to develop new policy proposals.

The request failed to say, however, why new policies were needed. The administration had already conceded that AI tools were covered by existing law. And in fact, it had already issued guidance under those laws. So it didn't seem to be covering any legal or policy gap. Instead, it seemed to be making a political statement. It seemed to be targeting AI because AI is poorly understood, and therefore unpopular. But that kind of approach to regulation promises to produce no real solutions. Instead, it promises only talking points and red tape.

The administration is hardly the first to see AI as an easy political target. States and cities have already started planting their flags. First out of the gate was New York City, which passed the nation's first law regulating AI-powered selection tools. The New York law requires employers to disclose their AI-powered selection tools, put the tools through annual bias audits, and give candidates a chance to ask for other selection methods. Likewise, there are at least four AI bills pending in California. The most far-reaching, AB 331, would require employers not only to disclose their AI tools, but also to report AI-related data to a state agency. The bill would also create a private right of action, serving up still more work to the busy Golden State plaintiffs' bar.

In short, lawmakers are clearly interested in AI and its effects on workers. Less clear, however, is what they hope to add to existing law. Just last week, the EEOC published updated guidance explaining how Title VII applies to AI-powered tools. Similarly, the NLRB's General Counsel recently announced that the National Labor Relations Act already forbids AI tools that chill protected concerted activity. And Lina Khan, chair of the FTC, has written that "[e]xisting laws prohibiting discrimination will apply [to AI tools], as well as existing authorities proscribing exploitative collection or use of personal data."

Given this existing coverage, it's unclear what new policies the administration thinks it needs. Nor is it clear what harms the administration is trying to prevent. In the RFI, the administration linked to a handful of articles published on general-interest websites. But some of the articles were more than seven years old, and none of them established any discriminatory effects. One even suggested that companies were using AI tools to keep workers and consumers safe. How any of this called for a new policy response was left unsaid.

One suspects the administration left so much unsaid because it has so little to say. It cited no real evidence that AI is harming workers. But finding real harm didn't seem to be the point. Rather, the point seemed to be scoring an easy political win. The administration is targeting AI because few people understand the technology. It can therefore crack down on AI tools without generating much backlash.

That kind of thinking is short-sighted. Not only has the administration identified no harm; it has failed to consider AI's potential benefits. For example, AI-powered tools might help workers be more productive. The tools might help workers find jobs better suited to their skill sets. The tools might even help workers stay safe. Without more real-world experience, those benefits are impossible to quantify. Yet the administration is rushing ahead anyway, assuming the tools are nefarious without considering their possible upside.

For now, then, the RFI looks like a regulatory misstep in the making. Workplace AI is too new and too unfamiliar to know whether regulation is necessary, much less what a proper regulatory regime would look like. For once, regulators should aim before they fire.

Note from the Editor: The Federalist Society takes no positions on particular legal and public policy matters. Any expressions of opinion are those of the author. To join the debate, please email us at info@fedsoc.org.

Go here to read the rest:

A Bug in the Logic: Regulators try to solve the workplace AI problem ... - The Federalist Society

Here’s What AI Thinks an Illinoisan Looks Like, and Apparently, Real Illinoisans Agree – NBC Chicago

Does this person look like he lives in Illinois? AI thinks so. And a handful of posts, allegedly from real people on social media, agree.

That's the basis of a Reddit post titled "The Most Stereotypical People in the States." The post, shared in a section of Reddit dedicated to discussions on artificial intelligence, shares AI-generated photos of what the average person looks like in each state.

The results, according to commenters, are relatively accurate -- at least for Illinois.

Each of the photos shows the portrait of a person, most often a male, exhibiting some form of creative expression -- be it through clothing, environment, facial expression or otherwise -- that's meant to clearly represent a location.

For example, the AI-generated photo of a stereotypical person in Wisconsin shows a man sitting behind a giant block of cheese.

A stereotypical person in Illinois, according to the post, appears less distinctive, and rather ordinary. In fact, one commenter compares the man from Illinois to Waldo.

"Illinois is Waldo," the comment reads.

"Illinois," another begins. "A person as boring as it sounds to live there."

To other commenters, the photo of the average person who lives in Illinois isn't just dull. It's spot on.

"Hahaha," one commenter says. "Illinois is PRECISELY my brother-in-law."

"Illinois' is oddly accurate," another says.

Accurate or not, in nearly all the AI-generated photos -- Illinois included -- no smiles are captured, with the exception of three states: Connecticut, Hawaii and West Virginia.

You can take a spin through all the photos here. Just make sure you don't skip over Illinois, since, apparently, that one is easy to miss.

Continued here:

Here's What AI Thinks an Illinoisan Looks Like, and Apparently, Real Illinoisans Agree - NBC Chicago

From Amazon to Wendy’s, how 4 companies plan to incorporate AI and how you may interact with it – CNBC

Smith Collection/Gado | Archive Photos | Getty Images

Artificial intelligence is no longer limited to the realm of science-fiction novels; it's increasingly becoming a part of our everyday lives.

AI chatbots, such as OpenAI's ChatGPT, are already being used in a variety of ways, from writing emails to booking trips. In fact, ChatGPT amassed over 100 million users within just months of launching.

But AI goes beyond large language models (LLMs) like ChatGPT. Microsoft defines AI as "the capability of a computer system to mimic human-like cognitive functions such as learning and problem-solving."

For example, self-driving cars use AI to simulate the decision-making processes a human driver would usually make while on the road, such as identifying traffic signals or choosing the best route to reach a given destination, according to Microsoft.

AI's boom in popularity has many companies racing to integrate the technology into their own products. In fact, 94% of business leaders believe that AI development will be critical to the success of their business over the next five years, according to Deloitte's latest survey.

For consumers, this means AI may be coming to a store, restaurant or supermarket nearby. Here are four companies that are already utilizing AI's capabilities and how it may impact you.

Amazon delivery package seen in front of a door.

Sopa Images | Lightrocket | Getty Images

Amazon uses AI in a number of ways, but one strategy aims to get your orders to you faster, Stefano Perego, vice president of customer fulfilment and global ops services for North America and Europe at Amazon, told CNBC on Monday.

The company's "regionalization" plan involves shipping products from warehouses that are closest to customers rather than from a warehouse located in a different part of the country.

To do that, Amazon is utilizing AI to analyze data and patterns to determine where certain products are in demand. This way, those products can be stored in nearby warehouses in order to reduce delivery times.
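Stripped to its simplest form, that placement logic is a demand-aggregation problem: count where each product historically ships, then stock it in the region that orders it most. A toy sketch of the idea, with invented product and region names (my own illustration, not Amazon's actual system):

```python
# Toy regionalization sketch: place inventory in the warehouse region
# where historical demand for each product is highest. Illustrative
# only; products and regions are made up.
from collections import Counter, defaultdict

orders = [  # (product, customer_region) from past order history
    ("blender", "east"), ("blender", "east"), ("blender", "west"),
    ("tent", "west"), ("tent", "west"), ("tent", "east"),
]

demand = defaultdict(Counter)
for product, region in orders:
    demand[product][region] += 1

# Stock each product in its highest-demand region to cut delivery time
placement = {p: counts.most_common(1)[0][0] for p, counts in demand.items()}
print(placement)  # {'blender': 'east', 'tent': 'west'}
```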

Screens displaying the logos of Microsoft and ChatGPT, a conversational artificial-intelligence application developed by OpenAI.

Lionel Bonaventure | Afp | Getty Images

Microsoft is putting its $13 billion investment in OpenAI to work. In March, the tech behemoth announced that a new set of AI features, dubbed Copilot, will be added to its Microsoft 365 software, which includes popular apps such as Excel, PowerPoint and Word.

When using Word, for example, Copilot will be able to produce a "first draft to edit and iterate on, saving hours in writing, sourcing, and editing time," Microsoft says. But Microsoft acknowledges that sometimes this type of AI software can produce inaccurate responses and warns that "sometimes Copilot will be right, other times usefully wrong."

A Brain Corp. autonomous floor scrubber, called an Auto-C, cleans the aisle of a Walmart store. Sam's Club completed the rollout of roughly 600 specialized scrubbers with inventory-scan towers last October in a partnership with Brain Corp.

Source: Walmart

Walmart is using AI to make sure shelves in its nearly 4,700 stores and 600 Sam's Clubs stay stocked with your favorite products. One way it's doing that: automated floor scrubbers.

As the robotic scrubbers clean Sam's Club aisles, they also capture images of every item in the store to monitor inventory levels. The inventory intelligence towers located on the scrubbers take more than 20 million photos of the shelves every day.

The company has trained its algorithms to be able to tell the difference between brands and determine how much of the product is on the shelf with more than 95% accuracy, Anshu Bhardwaj, senior vice president of Walmart's tech strategy and commercialization, told CNBC in March. And when a product gets too low, the stock room is automatically alerted to replenish it, she said.
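The replenishment step described here reduces to a threshold rule applied to the image-derived counts. A hedged sketch of that final step; the capacities, threshold and product names are made up for illustration and are not Walmart's actual system:

```python
# Sketch of shelf-replenishment alerting driven by image-derived counts.
# Capacities, threshold, and product names are hypothetical.
SHELF_CAPACITY = {"cereal": 40, "detergent": 24}
RESTOCK_FRACTION = 0.2  # alert when a shelf falls below 20% full

def check_shelves(counts: dict[str, int]) -> list[str]:
    """Return products whose detected on-shelf count is too low."""
    return [
        product
        for product, count in counts.items()
        if count < SHELF_CAPACITY[product] * RESTOCK_FRACTION
    ]

# Counts as estimated by the computer-vision model on the scrubber
detected = {"cereal": 5, "detergent": 20}
for product in check_shelves(detected):
    print(f"restock alert: {product}")  # alerts for cereal only
```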

A customer waits at a drive-thru outside a Wendy's Co. restaurant in El Sobrante, California, U.S.

Bloomberg | Bloomberg | Getty Images

An AI chatbot may be taking your order when you pull up to a Wendy's drive-thru in the near future.

The fast-food chain partnered with Google to develop an AI chatbot specifically designed for drive-thru ordering, Wendy's CEO Todd Penegor told CNBC last week. The goal of this new feature is to speed up ordering at the speaker box, which is "the slowest point in the order process," the CEO said.

In June, Wendy's plans to test the first pilot of its "Wendy's FreshAI" at a company-operated restaurant in the Columbus, Ohio, area, according to a May press release.

Powered by Google Cloud's generative AI and large language models, it will be able to have conversations with customers, understand made-to-order requests and generate answers to frequently asked questions, according to the company's statement.

Read more:

From Amazon to Wendy's, how 4 companies plan to incorporate AI and how you may interact with it - CNBC

Boston Isn’t Afraid of Generative AI – WIRED

After ChatGPT burst on the scene last November, some government officials raced to prohibit its use. Italy banned the chatbot. New York City, Los Angeles Unified, Seattle and Baltimore school districts either banned or blocked access to generative AI tools, fearing that ChatGPT, Bard and other content-generation sites could tempt students to cheat on assignments, induce rampant plagiarism and impede critical thinking. This week, the US Congress heard testimony from Sam Altman, CEO of OpenAI, and AI researcher Gary Marcus as it weighed whether and how to regulate the technology.

In a rapid about-face, however, a few governments are now embracing a less fearful and more hands-on approach to AI. New York City Schools chancellor David Banks announced yesterday that NYC is reversing its ban because the knee-jerk fear and risk overlooked the potential of generative AI to support students and teachers, as well as the reality that our students are participating in and will work in a world where understanding generative AI is crucial. And yesterday, City of Boston chief information officer Santiago Garces sent guidelines to every city official encouraging them to start using generative AI to understand their potential. The city also turned on use of Google Bard as part of the City of Boston's enterprise-wide use of Google Workspace, so that all public servants have access.

The "responsible experimentation" approach adopted in Boston, the first policy of its kind in the US, could, if used as a blueprint, revolutionize the public sector's use of AI across the country and cause a sea change in how governments at every level approach AI. By promoting greater exploration of how AI can be used to improve government effectiveness and efficiency, and by focusing on how to use AI for governance instead of only how to govern AI, the Boston approach might help to reduce alarmism and focus attention on how to use AI for social good.

Boston's policy outlines several scenarios in which public servants might want to use AI to improve how they work, and even includes specific how-tos for effective prompt writing.

Generative AI, city officials were told in an email that went out from the CIO to all city officials on May 18, is "a great way to get started on memos, letters, and job descriptions," and might help to alleviate the work of overburdened public officials.

The tools can also help public servants translate government-speak and legalese into plain English, which can make important information about public services more accessible to residents. The policy explains that public servants can indicate the reading level or audience in the prompt, allowing the AI model to generate text suitable for elementary school students or specific target audiences.
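In practice, that guidance amounts to stating the audience explicitly in the prompt. A hypothetical template showing the pattern (the wording is mine, not from Boston's policy document):

```python
# Hypothetical prompt template following the "state your audience"
# pattern the policy describes. All wording here is illustrative.
PROMPT_TEMPLATE = (
    "Rewrite the following notice at a {reading_level} reading level "
    "for {audience}, keeping all dates and requirements accurate:\n\n"
    "{notice_text}"
)

prompt = PROMPT_TEMPLATE.format(
    reading_level="6th-grade",
    audience="residents applying for a parking permit",
    notice_text="Applicants must remit payment prior to the 15th...",
)
print(prompt)  # paste into the city's approved generative-AI tool
```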

Generative AI can also help with translation into other languages so that a citys non-English speaking populations can enjoy equal and easier access to information about policies and services affecting them.

City officials were also encouraged to use generative AI to summarize lengthy pieces of text or audio into concise summaries, which could make it easier for government officials to engage in conversations with residents.

View original post here:

Boston Isn't Afraid of Generative AI - WIRED