Archive for the ‘Artificial Intelligence’ Category

How to report better on artificial intelligence – Columbia Journalism Review

In the past few months we have been deluged with headlines about new AI tools and how much they are going to change society.

Some reporters have done amazing work holding the companies developing AI accountable, but many struggle to report on this new technology in a fair and accurate way.

We (an investigative reporter, a data journalist, and a computer scientist) have firsthand experience investigating AI. We've seen the tremendous potential these tools can have, but also their tremendous risks.

As their adoption grows, we believe that, soon enough, many reporters will encounter AI tools on their beat, so we wanted to put together a short guide to what we have learned.

So we'll begin with a simple explanation of what they are.

In the past, computers were fundamentally rule-based systems: if a particular condition A is satisfied, then perform operation B. But machine learning (a subset of AI) is different. Instead of following a set of rules, we can use computers to recognize patterns in data.

For example, given enough labeled photographs (hundreds of thousands or even millions) of cats and dogs, we can teach certain computer systems to distinguish between images of the two species.

This process, known as supervised learning, can be performed in many ways. One of the most common techniques used recently is called neural networks. But while the details vary, supervised learning tools are essentially all just computers learning patterns from labeled data.

Similarly, one of the techniques used to build recent models like ChatGPT is called self-supervised learning, where the labels are generated automatically.
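To make the idea concrete, here is a minimal sketch of supervised learning in Python using scikit-learn. The feature values are synthetic stand-ins for labeled cat and dog photos, invented purely for illustration; real systems learn from the pixels of the images themselves.

```python
# Minimal sketch of supervised learning: the model sees labeled examples and
# learns a pattern that maps features to labels. The "images" here are
# synthetic two-number summaries invented purely for illustration.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)

# Pretend each photo is summarized by two features (e.g., ear shape, snout length).
cats = rng.normal(loc=[0.0, 0.0], scale=0.5, size=(500, 2))
dogs = rng.normal(loc=[2.0, 2.0], scale=0.5, size=(500, 2))
X = np.vstack([cats, dogs])
y = np.array([0] * 500 + [1] * 500)  # 0 = cat, 1 = dog

# Hold out part of the labeled data so the model is scored on examples it never saw.
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

# A small neural network, one common choice among many supervised learners.
model = MLPClassifier(hidden_layer_sizes=(16,), max_iter=1000, random_state=0)
model.fit(X_train, y_train)

print("accuracy on held-out data:", model.score(X_test, y_test))
```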

Be skeptical of PR hype

People in the tech industry often claim they are the only people who can understand and explain AI models and their impact. But reporters should be skeptical of these claims, especially when coming from company officials or spokespeople.

"Reporters tend to just pick whatever the author or the model producer has said," Abeba Birhane, an AI researcher and senior fellow at the Mozilla Foundation, said. "They just end up becoming a PR machine themselves for those tools."

In our analysis of AI news, we found that this was a common issue. Birhane and Emily Bender, a computational linguist at the University of Washington, suggest that reporters talk to domain experts outside the tech industry and not just give a platform to AI vendors hyping their own technology. For instance, Bender recalled that she read a story quoting an AI vendor claiming their tool would revolutionize mental health care. "It's obvious that the people who have the expertise about that are people who know something about how therapy works," she said.

In the Dallas Morning News's series of stories on Social Sentinel, the company repeatedly claimed its model could detect students at risk of harming themselves or others from their posts on popular social media platforms and made outlandish claims about the performance of its model. But when reporters talked to experts, they learned that reliably predicting suicidal ideation from a single post on social media is not feasible.

Many editors could also choose better images and headlines, said Margaret Mitchell, chief ethics scientist of the AI company Hugging Face. Inaccurate headlines about AI often influence lawmakers and regulation, which Mitchell and others then have to try to fix.

"If you just see headline after headline that are these overstated or even incorrect claims, then that's your sense of what's true," Mitchell said. "You are creating the problem that your journalists are trying to report on."

Question the training data

After the model is trained with the labeled data, it is evaluated on an unseen data set, called the test or validation set, and scored using some sort of metric.

The first step when evaluating an AI model is to see how much and what kind of data the model has been trained on. The model can only perform well in the real world if the training data represents the population it is being tested on. For example, if developers trained a model on ten thousand pictures of puppies and fried chicken, and then evaluated it using a photo of a salmon, it likely wouldn't do well. Reporters should be wary when a model trained for one objective is used for a completely different objective.
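The sketch below illustrates this point under synthetic assumptions: a simple classifier is scored both on a held-out slice of its own training population and on a shifted population it never saw, and the second score drops sharply. The data and numbers are invented for illustration only.

```python
# Sketch of why training data must resemble deployment data. A model fit on one
# population is scored on a matching held-out set and then on a shifted one.
# All data here are synthetic and the numbers are purely illustrative.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)

def make_data(shift, n=2000):
    """Two classes separated along one feature; `shift` moves the whole population."""
    X0 = rng.normal(loc=[0.0 + shift, 0.0], scale=1.0, size=(n // 2, 2))
    X1 = rng.normal(loc=[3.0 + shift, 0.0], scale=1.0, size=(n // 2, 2))
    X = np.vstack([X0, X1])
    y = np.array([0] * (n // 2) + [1] * (n // 2))
    return X, y

# Train and validate on the same population (the usual "test set" score).
X, y = make_data(shift=0.0)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=1)
model = LogisticRegression().fit(X_train, y_train)
print("score on matching test data: ", round(model.score(X_test, y_test), 3))

# Evaluate on a population the model never saw during training.
X_shifted, y_shifted = make_data(shift=2.5)
print("score on shifted population: ", round(model.score(X_shifted, y_shifted), 3))
```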

In 2017, Amazon researchers scrapped a machine learning model used to filter through résumés after they discovered it discriminated against women. The culprit? Their training data, which consisted of the résumés of the company's past hires, who were predominantly men.

Data privacy is another concern. In 2019, IBM released a data set with the faces of a million people. The following year a group of plaintiffs sued the company for including their photographs without consent.

Nicholas Diakopoulos, a professor of communication studies and computer science at Northwestern, recommends that journalists ask AI companies about their data collection practices and if subjects gave their consent.

Reporters should also consider the companys labor practices. Earlier this year, Time magazine reported that OpenAI paid Kenyan workers $2 an hour for labeling offensive content used to train ChatGPT. Bender said these harms should not be ignored.

"There's a tendency in all of this discourse to basically believe all of the potential of the upside and dismiss the actual documented downside," she said.

Evaluate the model

The final step in the machine learning process is for the model to output a guess on the testing data and for that output to be scored. Typically, if the model achieves a good enough score, it is deployed.

Companies trying to promote their models frequently quote numbers like "95 percent accuracy." Reporters should dig deeper here and ask whether the high score comes only from a holdout sample of the original data or whether the model was checked with realistic examples. These scores are only valid if the testing data matches the real world. Mitchell suggests that reporters ask specific questions like "How does this generalize in context?" and "Was the model tested in the wild or outside of its domains?"

It's also important for journalists to ask what metric the company is using to evaluate the model, and whether that is the right one to use. A useful question to consider is whether a false positive or false negative is worse. For example, in a cancer screening tool, a false positive may result in people getting an unnecessary test, while a false negative might result in missing a tumor in its early stage, when it is treatable.
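Here is a small sketch of why a single accuracy number can mislead on exactly this kind of imbalanced screening problem. The labels and predictions are fabricated for illustration; the point is only that recall and the confusion matrix expose misses that accuracy hides.

```python
# Sketch of why "95 percent accuracy" can mislead. On an imbalanced screening
# problem, a model that almost never flags anyone still scores high accuracy
# while missing most true cases. Labels and predictions are invented.
import numpy as np
from sklearn.metrics import accuracy_score, confusion_matrix, precision_score, recall_score

rng = np.random.default_rng(2)

# 1,000 people screened, roughly 5% of whom actually have the condition.
y_true = (rng.random(1000) < 0.05).astype(int)

# A lazy model: it flags almost no one (predicts "healthy" about 99% of the time).
y_pred = np.where(rng.random(1000) < 0.01, 1, 0)

tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
print("accuracy:", accuracy_score(y_true, y_pred))               # looks impressive
print("recall (cases caught):", recall_score(y_true, y_pred))    # reveals the misses
print("precision:", precision_score(y_true, y_pred, zero_division=0))
print("false negatives (missed cases):", fn)
```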

The difference in metrics can be crucial in determining questions of fairness in the model. In May 2016, ProPublica published an investigation into an algorithm called COMPAS, which aimed to predict a criminal defendant's risk of committing a crime within two years. The reporters found that, despite having similar accuracy between Black and white defendants, the algorithm had twice as many false positives for Black defendants as for white defendants.

The article ignited a fierce debate in the academic community over competing definitions of fairness. Journalists should specify which version of fairness is used to evaluate a model.
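The per-group calculation behind that kind of finding is straightforward to sketch. The example below fabricates two groups with different error patterns and computes the false positive rate for each; it does not reproduce ProPublica's COMPAS analysis or its data.

```python
# Sketch of the kind of check ProPublica ran: compare false positive rates
# across groups. The data below are fabricated for illustration and do not
# reproduce the COMPAS analysis.
import numpy as np

def false_positive_rate(y_true, y_pred):
    """Share of people who did NOT reoffend but were labeled high risk."""
    negatives = (y_true == 0)
    return (y_pred[negatives] == 1).mean()

rng = np.random.default_rng(3)

for group, wrong_flag_share in [("group A", 0.40), ("group B", 0.20)]:
    y_true = (rng.random(5000) < 0.5).astype(int)          # who actually reoffended
    y_pred = y_true.copy()
    negatives = np.where(y_true == 0)[0]
    flipped = rng.random(negatives.size) < wrong_flag_share  # wrongly flagged share
    y_pred[negatives[flipped]] = 1
    print(group, "false positive rate:", round(false_positive_rate(y_true, y_pred), 2))
```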

Recently, AI developers have claimed their models perform well not only on a single task but in a variety of situations. "One of the things that's going on with AI right now is that the companies producing it are claiming that these are basically everything machines," Bender said. "You can't test that claim."

In the absence of any real-world validation, journalists should not believe the companys claims.

Consider downstream harms

As important as it is to know how these tools work, the most important thing for journalists to consider is what impact the technology is having on people today. Companies like to boast about the positive effects of their tools, so journalists should remember to probe the real-world harms the tool could enable.

AI models not working as advertised is a common problem, one that has led to several tools being abandoned in the past. But by that time, the damage is often done. Epic, one of the largest healthcare technology companies in the US, released an AI tool to predict sepsis in 2016. The tool was used across hundreds of US hospitals without any independent external validation. Finally, in 2021, researchers at the University of Michigan tested the tool and found that it worked much more poorly than advertised. A year later, after a series of follow-up investigations by Stat News, Epic stopped selling its one-size-fits-all tool.

Ethical issues arise even if a tool works well. Face recognition can be used to unlock our phones, but it has already been used by companies and governments to surveil people at scale. It has been used to bar people from entering concert venues, to identify ethnic minorities, and to monitor workers and people living in public housing, often without their knowledge.

In March, reporters at Lighthouse Reports and Wired published an investigation into a welfare fraud detection model utilized by authorities in Rotterdam. The investigation found that the tool frequently discriminated against women and non-Dutch speakers, sometimes leading to highly intrusive raids of innocent people's homes by fraud controllers. Upon examination of the model and the training data, the reporters also found that the model performed little better than random guessing.
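One common way to make a "little better than random guessing" claim concrete is to compare a model's ROC AUC against the 0.5 of a coin flip. The sketch below does this on synthetic data; it is not the Rotterdam model, its features, or its scores.

```python
# Sketch of one common "better than random?" check: a ROC AUC near 0.5 means the
# model's risk scores barely separate the two outcomes. Data here are synthetic;
# this is not the Rotterdam model.
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(4)

y_true = (rng.random(10000) < 0.1).astype(int)      # 10% actual fraud cases, invented

# A weak model: its scores are mostly noise with a faint signal from the label.
weak_scores = rng.random(10000) + 0.05 * y_true
random_scores = rng.random(10000)                    # pure coin-flip baseline

print("weak model AUC:   ", round(roc_auc_score(y_true, weak_scores), 3))
print("random guess AUC: ", round(roc_auc_score(y_true, random_scores), 3))
```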

"It is more work to go find workers who were exploited, or artists whose data has been stolen, or scholars like me who are skeptical," Bender said.

Jonathan Stray, a senior scientist at the Berkeley Center for Human-Compatible AI and former AP editor, said that talking to the humans who are using or are affected by the tools is almost always worth it.

"Find the people who are actually using it or trying to use it to do their work and cover that story, because there are real people trying to get real things done," he said.

"That's where you're going to find out what the reality is."

Link:
How to report better on artificial intelligence - Columbia Journalism Review

Hackers: We won't let artificial intelligence get the better of us – ComputerWeekly.com

Artificial intelligence (AI) doesn't stand a chance of being able to replicate the human creativity needed to become an ethical hacker, but it will disrupt how hackers conduct penetration testing and work on bug bounty programmes, and is already increasing the value of hacking to organisations that are prepared to engage with the hacking community rather than dismiss it outright.

This is according to the hackers who contributed to the latest edition of Inside the mind of a hacker (ITMOAH), an annual report from crowdsourced penetration testing firm Bugcrowd, which sets out to offer an in-depth look at how hackers think and function, and why they do the things they do. This year unsurprisingly leans into AI in a big way.

When it came to the existential questions around whether or not AI could outperform the average hacker or render them irrelevant, 21% of respondents said AI was already outperforming them, and a third said it will be able to do so given another five years or so.

The vast majority, 78%, said AI would disrupt how they work on penetration testing or bug bounty programmes some time between now and 2028, with 40% saying it has already changed the way people hack, and 91% of hackers saying generative AI either has already, or will in future, increase the value of their work.

Outperforming a human doing repetitive, sometimes monotonous, work such as data analysis is one thing, but hacking as a vocation also encourages creativity of thought, and it is here that the community seems to feel humans will continue to have an edge, with 72% saying they did not think AI will ever be able to replicate these qualities.

"I've done a fair amount with AI, and as impressive as it is, I don't think it will be replacing humans for quite some time, if ever," said one respondent, a 20-year cyber security veteran who hacks on the Bugcrowd platform using the handle Nerdwell.

"AI is very good at what it does: pattern recognition and applying well-known solutions to well-known problems," he said. "Humans are biologically designed to seek out novelty and curiosity. Our brains are literally wired to be creative and find novel solutions to novel problems."

Another Bugcrowd hacker, who goes by the handle OrwaGodfather, added: "AI is great, but it will not replace me. There are some bugs and issues, just like any other technology."

"It can have an effect on my place in hacking, though. For example, automation has huge potential to help hackers," said OrwaGodfather, who started hacking in 2020 and, when away from his keyboard, works as a professional chef.

"It can make things easier and save time," he said. "If I find a bug when performing a pen test and I don't want to spend 30 minutes writing a report, I can start by using AI to write descriptions for me. AI makes hacking faster."

Whatever their gut feelings may be, Bugcrowd's hackers are scrambling aboard the AI train, with 85% saying they had played around with generative AI technology and 64% already incorporating it into their security workflows in some way; a further 30% said they planned to do this in the future.

Hackers who have adopted or who plan to adopt generative AI are most inclined to use OpenAI's ChatGPT (a Bugcrowd customer), cited by 98% of respondents, with Google's Bard and Microsoft's Bing Chat AI at 40%.

Those that have taken the plunge are using generative AI technology in a wide variety of ways, with the most commonly used functions being text summarisation or generation, code generation, search enhancement, chatbots, image generation, data design, collection or summarisation, and machine learning.

Within security research workflows specifically, hackers said they found generative AI most useful to automate tasks, analyse data, and identify and validate vulnerabilities. Less widely used applications included conducting reconnaissance, categorising threats, detecting anomalies, prioritising risk and building training models.

Many hackers who are not native English speakers or not fluent in English are also using services such as ChatGPT to translate or write reports and bug submissions, and fuel more collaboration across national borders.

Over the past decade, Bugcrowd's annual report has also served a secondary purpose, that of helping to humanise the hacking community and disrupt negative and unhelpful stereotypes of what a hacker actually is.

This is particularly important given that, in spite of years of pushback and attempts to educate, many people who should know better readily and intentionally conflate the term hacker with the term cyber criminal.

"We've taken on the responsibility of helping the market understand what a hacker actually is," Casey Ellis, Bugcrowd founder, chief technology officer and report co-author, told Computer Weekly at the recent Infosecurity Europe cyber trade fair.

"I think when we started, everyone assumed it was a bad thing," he said. "Some 10 years on, we're now at a point where people understand that hacking is actually a skill set. Like most skill sets, it's dual-use. It's like lockpicking. If you've got that skill, you can become a locksmith, or a burglar. There's nothing wrong with lockpicking; it's how you're actually using it. Hacking is the same."

The 2023 ITMOAH report shows how some fundamental shifts in hacker culture and demographics look set to shake up the cyber security landscape in the coming years.

For the first time, the report reveals, the majority of active hackers, between 55% and 60%, are now members of the Generation Z cohort currently in their teens and early 20s, while between 33% and 36% are Millennials aged from their late 20s to early 40s.

And despite hacking's cultural roots in the 1980s, only 2% are members of Generation X, those born between the mid-1960s and approximately 1980, the youngest of whom are now about 45 years old.

So, are the stereotypes of teenage hackers actually proving accurate, and more pertinently, are the kids all right? "We're seeing a pretty rapid acceleration of participation from people that are under 18," said Ellis. "It's still a very small population, only 6%, but it's up from 3% year-on-year, which is a big shift."

He said this trend will become increasingly relevant because today's teenagers think about technology in a fundamentally different way to those born even a few short years earlier.

"I've got a 15-year-old daughter and the way she interacts with technology is completely different to me," said Ellis. "Her introduction to technology was all about the interface; mine was all about the plumbing. We just think about the internet in a fundamentally different way."

"Now, I know stuff that she'll never know because I grew up with the nuts and bolts, but she'll think about the interface in a way that I probably never will, because I'm so consumed with the nuts and bolts."

"You talk about Millennials as digital natives, but Gen Z and younger are actually digital natives," he said. "They're able to wander through that environment in an intuitive way that we can't really understand. I can try to empathise with that, and I can get most of the way there, but I recognise the fact I'll never fully understand, because it's not my experience."

This generation is also proving adept at challenging the mores and assumptions of their elders that have often been built into technology, and Ellis said this gives them an advantage in figuring out what is coming next, and where future vulnerabilities may lie.

The other part of this trend is that todays teens are more politically and socially motivated, and more diverse, in ways that older people are not. This factor is already changing the cyber landscape and will certainly continue to do so.

Take Lapsus$, the teenage-run cyber extortion collective that attacked the systems of ride-sharing service Uber in 2022 for no particular reason other than that they didn't care for Uber's ethics.

"One of the big things that I've been saying since Lapsus$ is that, as defenders, we're not ready for a chaotic act," said Ellis. "We've been thinking about cyber criminals, nation states, threat actors as having a symmetric motivation."

"A nation state wants to advance the nation; cyber criminals want money. They're predictable. And there is symmetry in what they're doing. Folks that come in with more of an activism bent, you don't really know what they want. And in the case of Lapsus$, it's like, we just want to make a mess because those guys suck. How do you defend against that? We haven't really been thinking in that way since LulzSec, which was probably the last example of a group that did that."

Of course, the teens on Bugcrowd's platform are not attacking organisations in the same sense as Lapsus$ did, but in its story there is a lesson for the hacking community and for defenders; the potential to channel activity that might otherwise be expended on malicious acts into legitimate security work is clearly immense.

The full report, which can be downloaded to read in full from Bugcrowd, contains a wealth of additional insight into hacker demographics (the gender gap is increasing, likely due to the extra pressure the Covid-19 pandemic put on many women), motivations to hack, what hackers think ordinary security teams need to do better, and more besides.

Read the rest here:
Hackers: We wont let artificial intelligence get the better of us - ComputerWeekly.com

The EU wants to regulate Artificial Intelligence. What impact could … – Euronews

By Aoibhinn Mc Bride

In April 2021, the European Commission proposed its first regulatory framework for AI, with the hope that the final legislation will be passed by the end of this year.

At the crux of the legislation is a central theme: AI systems should be overseen by people rather than by automation, to minimise risk and to keep them safe, transparent, traceable and non-discriminatory.

This is a sentiment echoed by Dr Patricia Scanlon, the founder of Soapbox Labs and Ireland's first government-appointed AI ambassador, who recently delivered the opening keynote at the Dublin Tech Summit.

"It is on all of us to be able to regulate and treat it (AI) the same as the climate crisis, or the pandemic or nuclear," Dr Scanlon said.

"And that's a really provocative statement to make. But the idea here is to provoke discussion, to convey urgency, and to ensure that we don't just sit on our laurels and say, well, let's just see what happens."

Much of the conversation surrounding AI has centred on automation's impact on jobs, with recent data predicting that generative AI could impact as many as 300 million full-time jobs globally.

Dr Scanlon says that AI shouldn't be considered a fad or reduced to a productivity tool, as its potential and consequential impact has a far greater reach.

"It's a revolution because the innovations in AI will persist," she explained.

"It will have an impact on society, the economy will be impacted, the global economy will be impacted and every industry will be impacted. And that's really, really important to realise where we are today."

Unsurprisingly, there has been significant pushback to the EU's proposed legislation from business leaders.

In an open letter, over 150 executives from companies including Siemens, Renault and Airbus, as well as Yann LeCun, the chief AI scientist at Meta, stipulated that the new laws would jeopardise Europe's competitiveness without effectively tackling the challenges we are and will be facing.

However, Dr Scanlon emphasised the importance of taking a responsible approach, and proposed that regulation should be the foundation upon which innovation is built.

"There is this mindset, I've heard it a lot and I'm sure you all have, that regulation stifles innovation. If that was the case, I don't think you'd see innovation in fintech, biotech and medtech in healthcare, because they're heavily regulated spaces, but people still manage to operate in them."

She continued: "Misinformation and disinformation is a huge risk. We've seen it already on social media. Imagine doing that at scale. If we allow it to go unregulated, and nobody has to worry about misinformation, disinformation, you know, freedom of speech, whatever you want to call it, we could end up destabilising our own governments, because of the race in politics and people indiscriminately using these tools because they're not regulated."

Another key area that Dr Scanlon highlighted was the importance of mitigating bias, particularly within educational, healthcare or workplace settings.

She referenced the Dutch government's failed attempt to detect welfare fraud, which resulted in 20,000 people unjustly losing their benefits, as the perfect example of how bias can have a detrimental effect.

"No AI is biased. AI is made biased by the state of the data, lazily pulled from the internet or legacy data," Dr Scanlon added.

"For instance, with the last 40 years of employment data, who got the job? We're trying to correct that in society, but if you were to take legacy data and pull that into a model, you're propagating that bias into the future."

"But if we build models right and we actually carefully design the models, in the data, in the deployment, in how that makes decisions, you can actually create objective decision making as opposed to biased decision making."

If you want to be part of the AI revolution and pivot to a career in machine learning, the Euronews Job Board is the perfect place to start your search. It features thousands of jobs in companies that are actively hiring, like the three below.

SumUp is a digital ecosystem dedicated to local entrepreneurs and offers payment, click-and-collect and reservation solutions to merchants. It is seeking a Data Scientist in Berlin to build and optimise state-of-the-art AI models and algorithms to drive the success of its initiatives.

As such, you'll develop and implement AI models to solve complex business problems, contribute to the next generation of chatbots and develop and implement ways to collect, clean and process large amounts of data.

Get the full job description here.

The Machine Learning Fairness team at ByteDance is hiring a Researcher to achieve technical breakthroughs and conduct cutting-edge research in machine learning fairness and related fields. In this London-based role you will implement new technologies to deliver results aligned with products and collaborate with business teams globally to provide technical support.

View more information here.

As Principal Machine Learning Engineer in the Berlin-based customer profile and personalisation team, you will execute bold research experiments, drive Zalando's scientific roadmap and work with engineers and science leaders to spearhead the strategy on building platform capabilities, while collaborating closely with the customer profile and personalisation teams and the central machine learning productivity teams.

Access more details here.

Future proof your career today via Euronews.Jobs

More here:
The EU wants to regulate Artificial Intelligence. What impact could ... - Euronews

AI nursing ethics: Viability of robots and artificial intelligence in … – Science Daily

The recent progress in the field of robotics and artificial intelligence (AI) promises a future where these technologies would play a more prominent role in society. Current developments, such as the introduction of autonomous vehicles, the ability to generate original artwork, and the creation of chatbots capable of engaging in human-like conversations, highlight the immense possibilities held by these technologies. While these advancements offer numerous benefits, they also pose some fundamental questions. Characteristics such as creativity, communication, critical thinking, and learning -- once considered to be unique to humans -- are now being replicated by AI. So, can intelligent machines be considered 'human'?

In a step toward answering this question, Associate Professor Tomohide Ibuki from Tokyo University of Science, in collaboration with medical ethics researcher Dr. Eisuke Nakazawa from The University of Tokyo and nursing researcher Dr. Ai Ibuki from Kyoritsu Women's University, recently explored whether robots and AI can be entrusted with nursing, a highly humane practice. Their work was made available online and published in the journal Nursing Ethics on 12 June 2023.

"This study in applied ethics examines whether robotics, human engineering, and human intelligence technologies can and should replace humans in nursing tasks," says Dr. Ibuki.

Nurses demonstrate empathy and establish meaningful connections with their patients. This human touch is essential in fostering a sense of understanding, trust, and emotional support. The researchers examined whether the current advancements in robotics and AI can implement these human qualities by replicating the ethical concepts attributed to human nurses, including advocacy, accountability, cooperation, and caring.

Advocacy in nursing involves speaking on behalf of patients to ensure that they receive the best possible medical care. This encompasses safeguarding patients from medical errors, providing treatment information, acknowledging the preferences of a patient, and acting as mediators between the hospital and the patient. In this regard, the researchers noted that while AI can inform patients about medical errors and present treatment options, they questioned its ability to truly understand and empathize with patients' values and to effectively navigate human relationships as mediators.

The researchers also expressed concerns about holding robots accountable for their actions. They suggested the development of explainable AI, which would provide insights into the decision-making process of AI systems, improving accountability.

The study further highlights that nurses are required to collaborate effectively with their colleagues and other healthcare professionals to ensure the best possible care for patients. As humans rely on visual cues to build trust and establish relationships, unfamiliarity with robots might lead to suboptimal interactions. Recognizing this issue, the researchers emphasized the importance of conducting further investigations to determine the appropriate appearance of robots for facilitating efficient cooperation with human medical staff.

Lastly, while robots and AI have the potential to understand a patient's emotions and provide appropriate care, the patient must also be willing to accept robots as care providers.

Having considered the above four ethical concepts in nursing, the researchers acknowledge that while robots may not fully replace human nurses anytime soon, they do not dismiss the possibility. While robots and AI can potentially reduce the shortage of nurses and improve treatment outcomes for patients, their deployment requires careful weighing of the ethical implications and impact on nursing practice.

"While the present analysis does not preclude the possibility of implementing the ethical concepts of nursing in robots and AI in the future, it points out that there are several ethical questions. Further research could not only help solve them but also lead to new discoveries in ethics," concludes Dr. Ibuki.

Here's hoping for such novel applications of robotics and AI to emerge soon!

Read the rest here:
AI nursing ethics: Viability of robots and artificial intelligence in ... - Science Daily

Artificial Intelligence and the Evolution of Journalism Research Design – Fagen wasanni

Exploring the Impact of Artificial Intelligence on the Evolution of Journalism Research Design

Artificial Intelligence (AI) has been a game-changer in various sectors, and journalism is no exception. The impact of AI on journalism research design has been profound, opening up new avenues for exploration and transforming the way news is gathered, analyzed, and disseminated.

Traditionally, journalism research design involved manual data collection, analysis, and interpretation. Journalists would spend hours, sometimes days, poring over data, trying to find patterns, trends, and stories. This process was not only time-consuming but also prone to human error. However, with the advent of AI, this landscape has dramatically changed.

AI algorithms can sift through vast amounts of data in a fraction of the time it would take a human. They can identify patterns and trends that might be missed by the human eye, making the process of data analysis more efficient and accurate. This has revolutionized the way journalists approach their research, allowing them to focus more on crafting compelling narratives and less on the tedious task of data analysis.

Moreover, AI has also transformed the way journalists gather information. With AI-powered tools, journalists can now automate the process of information gathering, making it faster and more efficient. For instance, AI can scrape data from various online sources, analyze social media trends, and even monitor real-time events, providing journalists with a wealth of information at their fingertips. This has not only streamlined the research process but also expanded the scope of journalism, enabling journalists to cover a wider range of topics and stories.

In addition to data collection and analysis, AI has also made significant strides in content creation. AI-powered tools can now generate news articles, summaries, and reports, freeing up journalists to focus on more complex tasks. While the quality of AI-generated content may not yet match that of human-written content, the technology is rapidly improving, and it's not hard to envision a future where AI plays a significant role in content creation.

Furthermore, AI has also opened up new possibilities for personalized news delivery. By analyzing user behavior and preferences, AI can curate news content tailored to individual readers, enhancing the user experience and increasing engagement. This level of personalization was previously unattainable and represents a significant shift in the way news is delivered.

However, the integration of AI in journalism is not without its challenges. Concerns about job displacement, ethical considerations around data privacy, and the potential for AI to be used to spread misinformation are all valid issues that need to be addressed. But despite these challenges, the potential benefits of AI in journalism are too significant to ignore.

In conclusion, AI has had a profound impact on the evolution of journalism research design. It has transformed the way journalists gather and analyze data, streamlined the content creation process, and opened up new possibilities for personalized news delivery. While there are challenges to overcome, the integration of AI in journalism represents a significant step forward in the evolution of the industry. As AI technology continues to improve and evolve, its impact on journalism will only continue to grow.

Read the original here:
Artificial Intelligence and the Evolution of Journalism Research Design - Fagen wasanni