Archive for the ‘Artificial Intelligence’ Category

Artificial intelligence must be grounded in human rights, says High … – OHCHR

HIGH LEVEL SIDE EVENT OF THE 53rd SESSION OF THE HUMAN RIGHTS COUNCIL on

What should the limits be? A human rights perspective on what's next for artificial intelligence and new and emerging technologies

Opening Statement by Volker Türk

UN High Commissioner for Human Rights

It is great that we are having a discussion about human rights and AI.

We all know how much our world and the state of human rights is being tested at the moment. The triple planetary crisis is threatening our existence. Old conflicts have been raging for years, with no end in sight. New ones continue to erupt, many with far-reaching global consequences. We are still reeling from the consequences of the COVID-19 pandemic, which exposed and deepened a raft of inequalities the world over.

But the question before us today, what the limits should be on artificial intelligence and emerging technologies, is one of the most pressing faced by society, governments and the private sector.

We have all seen and followed over recent months the remarkable developments in generative AI, with ChatGPT and other programmes now readily accessible to the broader public.

We know that AI has the potential to be enormously beneficial to humanity. It could improve strategic foresight and forecasting, democratize access to knowledge, turbocharge scientific progress, and increase capacity for processing vast amounts of information.

But in order to harness this potential, we need to ensure that the benefits outweigh the risks, and we need limits.

When we speak of limits, what we are really talking about is regulation.

To be effective, to be humane, to put people at the heart of the development of new technologies, any solution, any regulation, must be grounded in respect for human rights.

Two schools of thought are shaping the current development of AI regulation.

The first is risk-based only, focusing largely on self-regulation and self-assessment by AI developers. Instead of relying on detailed rules, risk-based regulation emphasizes identifying and mitigating risks to achieve outcomes.

This approach transfers a lot of responsibility to the private sector. Some would say too much; we hear that from the private sector itself.

It also results in clear gaps in regulation.

The other approach embeds human rights in AI's entire lifecycle. From beginning to end, human rights principles are included in the collection and selection of data, as well as in the design, development, deployment and use of the resulting models, tools and services.

This is not a warning about the future: we are already seeing the harmful impacts of AI today, and not only of generative AI.

AI has the potential to strengthen authoritarian governance.

It can operate lethal autonomous weapons.

It can form the basis for more powerful tools of societal control, surveillance, and censorship.

Facial recognition systems, for example, can turn into mass surveillance of our public spaces, destroying any concept of privacy.

AI systems that are used in the criminal justice system to predict future criminal behaviour have already been shown to reinforce discrimination and to undermine rights, including the presumption of innocence.

Victims and experts, including many of you in this room, have been raising the alarm for quite some time, but policy makers and developers of AI have not acted enough or fast enough on those concerns.

We need urgent action by governments and by companies. And at the international level, the United Nations can play a central role in convening key stakeholders and advising on progress.

There is absolutely no time to waste.

The world waited too long on climate change. We cannot afford to repeat that same mistake.

What could regulation look like?

The starting point should be the harms that people experience and will likely experience.

This requires listening to those who are affected, as well as to those who have already spent many years identifying and responding to harms. Women, minority groups, marginalized people, in particular, are disproportionately affected by bias in AI. We must make serious efforts to bring them to the table for any discussion on governance.

Attention is also needed to the use of AI in public and private services where there is a heightened risk of abuse of power or privacy intrusions: justice, law enforcement, migration, social protection, or financial services.

Second, regulations need to require assessment of the human rights risks and impacts of AI systems before, during, and after their use. Transparency guarantees, independent oversight, and access to effective remedies are needed, particularly when the State itself is using AI technologies.

AI technologies that cannot be operated in compliance with international human rights law must be banned or suspended until adequate safeguards are in place.

Third, existing regulations and safeguards need to be implemented: for example, frameworks on data protection, competition law, and sectoral regulations, including for health, tech or financial markets. A human rights perspective on the development and use of AI will have limited impact if respect for human rights is inadequate in the broader regulatory and institutional landscape.

And fourth, we need to resist the temptation to let the AI industry itself assert that self-regulation is sufficient, or to claim that it should be for them to define the applicable legal framework. I think we have learnt our lesson from social media platforms in that regard. Whilst their input is important, it is essential that the full democratic process, laws shaped by all stakeholders, is brought to bear on an issue that will affect all people, everywhere, far into the future.

At the same time, companies must live up to their responsibilities to respect human rights in line with the Guiding Principles on Business and Human Rights. Companies are responsible for the products they are racing to put on the market. My Office is working with a number of companies, civil society organizations and AI experts to develop guidance on how to tackle generative AI. But a lot more needs to be done along these lines.

Finally, while it would not be a quick fix, it may be valuable to explore the establishment of an international advisory body for particularly high-risk technologies, one that could offer perspectives on how regulatory standards could be aligned with universal human rights and rule of law frameworks. The body could publicly share the outcomes of its deliberations and offer recommendations on AI governance. This is something that the Secretary-General of the United Nations has also proposed as part of the Global Digital Compact for the Summit of the Future next year.

The human rights framework offers an essential foundation, one that can provide guardrails for efforts to harness the enormous potential of AI while preventing and mitigating its enormous risks.

I look forward to discussing these issues with you.

See the original post here:
Artificial intelligence must be grounded in human rights, says High ... - OHCHR

The Future of Artificial Intelligence in Healthcare: Taking a Peek into … – Medium

Artificial Intelligence (AI) has been revolutionizing various industries, and healthcare is no exception. From diagnosing diseases to predicting treatment outcomes, AI is reshaping the landscape of modern medicine.

In this blog post, we'll take a casual stroll through the exciting possibilities AI brings to healthcare, exploring how it is set to transform the way we receive medical care.

Gone are the days when medical diagnosis relied solely on the intuition and expertise of human doctors. With the advent of AI, we're witnessing a new era of precision diagnostics.

Machine learning algorithms are being trained on massive amounts of medical data, enabling them to identify patterns and anomalies that might go unnoticed by human eyes. From radiology to pathology, AI algorithms can analyze medical images and detect abnormalities with astonishing accuracy, potentially reducing diagnostic errors and improving patient outcomes.

One of the most promising aspects of AI in healthcare is its ability to predict and prevent diseases. By analyzing vast amounts of patient data, including medical records, genetic information, and lifestyle factors, AI algorithms can identify individuals at high risk of developing certain conditions.

This allows healthcare providers to intervene early, implementing personalized preventive measures and reducing the burden of disease.
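
To make that idea concrete, here is a minimal sketch, in Python with scikit-learn, of the kind of risk model described above: a classifier trained on tabular patient features that outputs a probability of developing a condition, so high-risk individuals can be flagged for early follow-up. The feature names, thresholds and data are invented for illustration; real clinical models are trained on far richer records and require rigorous validation.

```python
# A hedged sketch of a disease-risk model, assuming invented feature names
# and synthetic data: a classifier is trained on tabular patient features and
# outputs a probability of developing a condition, so high-risk individuals
# can be flagged for early intervention.
import numpy as np
import pandas as pd
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
n = 2000
patients = pd.DataFrame({
    "age": rng.integers(20, 90, n),
    "bmi": rng.normal(27, 5, n),
    "smoker": rng.integers(0, 2, n),
    "family_history": rng.integers(0, 2, n),
})
# Synthetic outcome loosely tied to the features above (illustration only).
signal = (0.03 * patients["age"] + 0.05 * patients["bmi"]
          + patients["smoker"] + patients["family_history"])
developed_condition = (signal + rng.normal(0, 1, n) > signal.median()).astype(int)

X_train, X_test, y_train, y_test = train_test_split(
    patients, developed_condition, test_size=0.2, random_state=0)

model = GradientBoostingClassifier().fit(X_train, y_train)
risk_scores = model.predict_proba(X_test)[:, 1]   # probability of the condition
high_risk = X_test[risk_scores > 0.8]             # patients to flag for follow-up
print(len(high_risk), "patients flagged as high risk")
```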

Imagine a scenario where your smartphone's health app combines data from your smartwatch, medical history, and genetic profile to generate real-time health predictions.

It could alert you to take preventive measures against a potential health issue before it even arises. This proactive approach has the potential to save lives and revolutionize the concept of healthcare.

AI-powered virtual assistants and chatbots are becoming increasingly common in healthcare settings. These intelligent systems can interact with patients, providing them with immediate access to information and personalized guidance.

From answering basic health queries to reminding patients to take their medications, AI chatbots can assist in providing timely and accurate information, improving patient engagement and adherence to treatment plans.

Moreover, AI algorithms can analyze large datasets to identify treatment patterns and recommend the most effective interventions based on an individual's unique characteristics.

This level of personalized medicine has the potential to enhance treatment outcomes and reduce healthcare costs by minimizing trial-and-error approaches.

Developing new drugs is a time-consuming and expensive process. However, AI is streamlining this procedure by analyzing vast amounts of biomedical literature and scientific research.

Machine learning algorithms can identify potential drug targets, predict drug efficacy, and even suggest novel combinations of existing medications. By leveraging AI's capabilities, researchers can expedite the discovery and development of new drugs, bringing innovative treatments to patients faster than ever before.

While AI brings tremendous promise to healthcare, we must address ethical considerations and challenges associated with its implementation.

Ensuring data privacy, maintaining transparency in algorithmic decision-making, and addressing biases in AI models are crucial for building trust and safeguarding patient well-being. Striking the right balance between human judgment and AI assistance is another challenge that needs careful consideration.

The future of artificial intelligence in healthcare is brimming with possibilities. From accurate diagnostics and disease prediction to improving patient care and revolutionizing drug discovery, AI has the potential to transform healthcare as we know it.

While challenges exist, embracing AI technologies responsibly can lead to a future where smart medicine and human expertise work hand in hand to provide the best possible care for all.

So, keep an eye on the horizon and prepare for a future where AI becomes an indispensable tool in the hands of healthcare providers, helping them deliver precision medicine and personalized care to improve the health and well-being of millions of people worldwide.

Follow Techdella Blog to read more about technological innovations.

The rest is here:
The Future of Artificial Intelligence in Healthcare: Taking a Peek into ... - Medium

How artificial intelligence can aid urban development – Open Access Government

Planning and maintaining communities in the modern world is as simple as threading a needle with an elephant. Under the best of circumstances, urban planning requires tremendous amounts of data, foresight and cross-department cooperation.

But when also accounting for the most pressing issues of the day (climate change and diversity, equity and inclusion, among others), a difficult job suddenly becomes a Herculean task.

Modern challenges require modern technology, and no contemporary tool is more powerful or consequential than artificial intelligence.

The inherent need in urban planning to process and interpret numerous disparate streams of data while responding to dramatic changes in the moment is an undertaking layered with complexity.

With the muscular computing capacity and deep-learning capabilities to help optimize an elaborate web of systems and interests (including transportation, infrastructure management, energy efficiency, public safety and citizen engagement), artificial intelligence can be a game-changer in the mission of modernizing urban development.

Transportation infrastructure is what often comes to mind when the subject of urban development is raised, and with good reason. It's a complex and critical challenge that requires a great deal of resources and calls for a variety of (occasionally competing) solutions.

City life features the mingling of automobiles, pedestrians and even pets, and considerations such as public transportation, bicycle traffic and rush hour surges complicate any optimization project.

So, too, do the grids and topography that are unique to every city. But with advanced video analytics software that is designed to leverage existing investments in video to identify, process and index objects and behavior from live surveillance feeds, city systems can account for and better understand factors such as traffic congestion, roadway construction and vehicle-pedestrian interactions.
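
As a rough illustration (and not a description of any vendor's actual product), the sketch below shows how off-the-shelf, pretrained computer-vision tools could tally pedestrians and vehicles in recorded footage: an object detector is run over video frames and detections above a confidence threshold are counted. The file name, confidence threshold and choice of detector are assumptions made for this example.

```python
# A rough sketch, not any vendor's product: run a pretrained object detector
# over frames of recorded footage and tally pedestrians and vehicles.
# The file name "intersection_feed.mp4" and the 0.7 confidence threshold
# are placeholder assumptions for this example.
import cv2
import torch
from torchvision.models.detection import fasterrcnn_resnet50_fpn
from torchvision.transforms.functional import to_tensor

PERSON, BICYCLE, CAR, BUS, TRUCK = 1, 2, 3, 6, 8   # COCO class indices

model = fasterrcnn_resnet50_fpn(weights="DEFAULT").eval()

counts = {"pedestrians": 0, "vehicles": 0}
cap = cv2.VideoCapture("intersection_feed.mp4")     # placeholder video file
while True:
    ok, frame = cap.read()
    if not ok:
        break
    rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)     # detector expects RGB
    with torch.no_grad():
        detections = model([to_tensor(rgb)])[0]
    for label, score in zip(detections["labels"], detections["scores"]):
        if float(score) < 0.7:
            continue
        if int(label) == PERSON:
            counts["pedestrians"] += 1
        elif int(label) in (BICYCLE, CAR, BUS, TRUCK):
            counts["vehicles"] += 1
cap.release()
print(counts)
```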

AI technologies empower urban developers with the ability to glean insights from existing surveillance networks, allowing for best-case city planning that serves the greater public good.

The only constant for urban communities is change. City populations grow and contract. A restaurant opens while a shopping mall shutters its doors. New crime hotspots and pedestrian bottlenecks materialize without warning.

Previous initiatives may go underutilized or fall short of demand. For urban developers, the goalposts are always being moved, which makes city planning both exceptionally knotty and vitally necessary.

Video analytics software can help city planners and decision-makers identify certain trends and even help predict others before they become intractable challenges. Data from CCTV surveillance can be processed using AI, providing urban developers with the information they need to make the most efficient use of city resources while meeting the needs of the public.

Where might a city create green spaces that serve the most citizens? What's the ideal spot to plan a farmers market or build a new skate park? AI-driven software helps city planners make sense of available data (which would otherwise be unmanageable and uninterpretable by human operators) to intelligently inform decisions and maximize infrastructural investments, effectively saving community resources.

Communication and data sharing between departments and systems is a challenge for most cities, especially as populations grow and a communitys needs evolve over time.

Because city-powered CCTV video surveillance cameras have typically been used only for security and investigative purposes, many local government agencies and divisions that could benefit from their useful insights may lack access or simply be unaware of their value.

Smart cities are communities that have made a concerted effort to connect information technologies across department silos for the benefit of the public. Typically, thats achieved through AI-driven technology, such as video analytics software, that taps into a citys existing video surveillance infrastructure.

When information is shared across departments, urban developers have the tools to spot opportunities, inefficiencies or hazards, whether that means filling a pothole in a busy thoroughfare or adding streetlamps to a darkened (and potentially dangerous) corner of a city park.

Artificial intelligence has the processing muscle and dynamic interpretation skills to help cities not only address everyday problems, but also anticipate and address the most modern of challenges, such as pandemic preparedness. With AI-powered solutions, urban planners can help develop their communities while keeping citizens and systems safer, healthier and stronger.

This piece was written and provided by Liam Galin and BriefCam.

Liam Galin joined BriefCam as CEO to take charge of the company's growth strategy and maintain its position as a video analytics market leader and innovator.

The rest is here:
How artificial intelligence can aid urban development - Open Access Government

BlackRock highlights artificial intelligence in its 2023 midyear … – Seeking Alpha

BlackRock outlined in its 2023 midyear outlook report to investors that markets currently offer an abundance of investment opportunities, with one area being artificial intelligence.

"AI-driven productivity gains could boost profit margins, especially of companies with high staffing costs or a large share of tasks that could be automated," the world's largest asset manager stated in its midyear report.

The financial firm outlined that Wall Street is still assessing the potential effects AI brings to applications and how the technology could disrupt entire industries. The firm stated that AI goes beyond sectors and also brings greater cybersecurity risks across the board.

BlackRock went on to add: "We think the importance of data for AI and potential winners is underappreciated. Companies with vast sets of proprietary data have the ability to more quickly and easily leverage a large amount of data to create innovative models. New AI tools could analyze and unlock the value of the data gold mine some companies may be sitting on."

For investors looking to analyze the artificial intelligence space further, see below a grouping of 10 popular AI-focused exchange-traded funds:

Go here to see the original:
BlackRock highlights artificial intelligence in its 2023 midyear ... - Seeking Alpha

How to report better on artificial intelligence – Columbia Journalism Review

In the past few months we have been deluged with headlines about new AI tools and how much they are going to change society.

Some reporters have done amazing work holding the companies developing AI accountable, but many struggle to report on this new technology in a fair and accurate way.

We (an investigative reporter, a data journalist, and a computer scientist) have firsthand experience investigating AI. We've seen the tremendous potential these tools can have, but also their tremendous risks.

As their adoption grows, we believe that, soon enough, many reporters will encounter AI tools on their beat, so we wanted to put together a short guide to what we have learned.

So we'll begin with a simple explanation of what these tools are.

In the past, computers were fundamentally rule-based systems: if a particular condition A is satisfied, then perform operation B. But machine learning (a subset of AI) is different. Instead of following a set of rules, we can use computers to recognize patterns in data.

For example, given enough labeled photographs (hundreds of thousands or even millions) of cats and dogs, we can teach certain computer systems to distinguish between images of the two species.

This process, known as supervised learning, can be performed in many ways. One of the most common techniques used recently is called neural networks. But while the details vary, supervised learning tools are essentially all just computers learning patterns from labeled data.
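
To make the idea tangible, here is a minimal sketch in Python with scikit-learn. The data is synthetic and the model is a simple logistic regression rather than a neural network, but the workflow is the essence of supervised learning: fit on labeled examples, then predict on unseen ones.

```python
# A minimal sketch of supervised learning: the model learns patterns from
# labeled examples instead of following hand-written rules. The "photos"
# here are synthetic feature vectors, and the labels stand in for
# "cat" (0) vs. "dog" (1); real image classifiers typically use neural
# networks, but the workflow is the same.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

X = rng.normal(size=(1000, 20))                 # pretend feature vectors from photos
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)   # pretend labels: 0 = cat, 1 = dog

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

model = LogisticRegression()
model.fit(X_train, y_train)                     # learn patterns from labeled data
print("accuracy on unseen examples:", model.score(X_test, y_test))
```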

Similarly, one of the techniques used to build recent models like ChatGPT is called self-supervised learning, where the labels are generated automatically.
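
The toy snippet below illustrates only where the labels come from in self-supervised learning: they are derived automatically from the raw text itself (here, the next word in a sentence), with no human annotation. Real systems do this at vastly larger scale with neural networks.

```python
# A toy illustration of self-supervised learning: the labels are generated
# automatically from the raw data itself (here, the next word in a sentence),
# so no human annotation is required.
text = "the cat sat on the mat and the dog sat on the rug".split()

# Each training example pairs a context (input) with the word that follows (label).
examples = [(text[:i], text[i]) for i in range(1, len(text))]
for context, label in examples[:3]:
    print(context, "->", label)
```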

Be skeptical of PR hype

People in the tech industry often claim they are the only people who can understand and explain AI models and their impact. But reporters should be skeptical of these claims, especially when coming from company officials or spokespeople.

"Reporters tend to just pick whatever the author or the model producer has said," Abeba Birhane, an AI researcher and senior fellow at the Mozilla Foundation, said. "They just end up becoming a PR machine themselves for those tools."

In our analysis of AI news, we found that this was a common issue. Birhane and Emily Bender, a computational linguist at the University of Washington, suggest that reporters talk to domain experts outside the tech industry and not just give a platform to AI vendors hyping their own technology. For instance, Bender recalled that she read a story quoting an AI vendor claiming their tool would revolutionize mental health care. "It's obvious that the people who have the expertise about that are people who know something about how therapy works," she said.

In the Dallas Morning News's series of stories on Social Sentinel, the company repeatedly claimed its model could detect students at risk of harming themselves or others from their posts on popular social media platforms and made outlandish claims about the performance of the model. But when reporters talked to experts, they learned that reliably predicting suicidal ideation from a single post on social media is not feasible.

Many editors could also choose better images and headlines, said Margaret Mitchell, chief ethics scientist of the AI company Hugging Face. Inaccurate headlines about AI often influence lawmakers and regulation, which Mitchell and others then have to try to fix.

"If you just see headline after headline that are these overstated or even incorrect claims, then that's your sense of what's true," Mitchell said. "You are creating the problem that your journalists are trying to report on."

Question the training data

After the model is trained with the labeled data, it is evaluated on an unseen data set, called the test or validation set, and scored using some sort of metric.

The first step when evaluating an AI model is to see how much and what kind of data the model has been trained on. The model can only perform well in the real world if the training data represents the population it is being tested on. For example, if developers trained a model on ten thousand pictures of puppies and fried chicken, and then evaluated it using a photo of a salmon, it likely wouldn't do well. Reporters should be wary when a model trained for one objective is used for a completely different objective.
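
The sketch below, built on synthetic data, shows one way this failure plays out: a model that leans on a shortcut present only in its training data scores well on a holdout drawn from the same source, then degrades once that shortcut disappears in deployment. The setup is invented purely to illustrate the question reporters should be asking.

```python
# A synthetic illustration of unrepresentative training data: the model
# learns a shortcut (feature 1 tracks the label almost perfectly in the
# training environment), scores well on a holdout from the same source,
# then degrades when the shortcut is absent in deployment.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)

def make_data(n, shortcut=True):
    X = rng.normal(size=(n, 5))
    y = (X[:, 0] > 0).astype(int)                    # the true signal is feature 0
    if shortcut:
        X[:, 1] = y + rng.normal(scale=0.1, size=n)  # spurious shortcut feature
    return X, y

X_train, y_train = make_data(2000)                   # data the model was built on
X_holdout, y_holdout = make_data(500)                # holdout from the same source
X_deploy, y_deploy = make_data(500, shortcut=False)  # "real world" without the shortcut

model = LogisticRegression().fit(X_train, y_train)
print("holdout accuracy:   ", model.score(X_holdout, y_holdout))
print("deployment accuracy:", model.score(X_deploy, y_deploy))
```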

In 2017, Amazon researchers scrapped a machine learning model used to filter through résumés after they discovered it discriminated against women. The culprit? Their training data, which consisted of the résumés of the company's past hires, who were predominantly men.

Data privacy is another concern. In 2019, IBM released a data set with the faces of a million people. The following year a group of plaintiffs sued the company for including their photographs without consent.

Nicholas Diakopoulos, a professor of communication studies and computer science at Northwestern, recommends that journalists ask AI companies about their data collection practices and if subjects gave their consent.

Reporters should also consider the companys labor practices. Earlier this year, Time magazine reported that OpenAI paid Kenyan workers $2 an hour for labeling offensive content used to train ChatGPT. Bender said these harms should not be ignored.

"There's a tendency in all of this discourse to basically believe all of the potential of the upside and dismiss the actual documented downside," she said.

Evaluate the model

The final step in the machine learning process is for the model to output a guess on the testing data and for that output to be scored. Typically, if the model achieves a good enough score, it is deployed.

Companies trying to promote their models frequently quote numbers like "95 percent accuracy." Reporters should dig deeper here and ask if the high score only comes from a holdout sample of the original data or if the model was checked with realistic examples. These scores are only valid if the testing data matches the real world. Mitchell suggests that reporters ask specific questions like "How does this generalize in context? Was the model tested in the wild or outside of its domains?"

It's also important for journalists to ask what metric the company is using to evaluate the model, and whether that is the right one to use. A useful question to consider is whether a false positive or false negative is worse. For example, in a cancer screening tool, a false positive may result in people getting an unnecessary test, while a false negative might result in missing a tumor in its early stage, when it is treatable.
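
Here is a small, invented example of why the headline number can mislead: for a rare condition, a screening model can report accuracy above 95 percent while missing most true cases, which is exactly what the false negative rate exposes.

```python
# Invented numbers showing why accuracy alone can mislead for a rare
# condition: the model below is "over 95 percent accurate" yet misses
# 45 of the 50 people who actually have the disease.
import numpy as np
from sklearn.metrics import accuracy_score, confusion_matrix

y_true = np.array([1] * 50 + [0] * 950)            # 5% of people have the condition
y_pred = np.array([1] * 5 + [0] * 45 + [0] * 950)  # the model catches only 5 of them

tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
print("accuracy:           ", accuracy_score(y_true, y_pred))  # 0.955
print("false negatives:    ", fn)                              # 45 missed cases
print("false negative rate:", fn / (fn + tp))                  # 0.9
```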

The difference in metrics can be crucial to determine questions of fairness in the model. In May 2016, ProPublica published an investigation into an algorithm called COMPAS, which aimed to predict a criminal defendant's risk of committing a crime within two years. The reporters found that, despite having similar accuracy between Black and white defendants, the algorithm had twice as many false positives for Black defendants as for white defendants.

The article ignited a fierce debate in the academic community over competing definitions of fairness. Journalists should specify which version of fairness is used to evaluate a model.
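
The computation behind that kind of finding is easy to reproduce on toy data. The sketch below, with invented labels and predictions, shows how the false positive rate can be computed and compared across two groups; none of the numbers reflect the actual COMPAS data.

```python
# Toy labels and predictions (not the real COMPAS data) showing how the
# false positive rate can be computed and compared across two groups.
import numpy as np
from sklearn.metrics import confusion_matrix

def false_positive_rate(y_true, y_pred):
    tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
    return fp / (fp + tn)

# 1 = reoffended (ground truth) / 1 = flagged as high risk (prediction)
y_true_a = np.array([0, 0, 0, 0, 1, 1, 0, 0, 1, 0])
y_pred_a = np.array([1, 0, 1, 0, 1, 1, 1, 0, 1, 0])
y_true_b = np.array([0, 0, 0, 0, 1, 1, 0, 0, 1, 0])
y_pred_b = np.array([0, 0, 1, 0, 1, 1, 0, 0, 0, 0])

print("false positive rate, group A:", false_positive_rate(y_true_a, y_pred_a))
print("false positive rate, group B:", false_positive_rate(y_true_b, y_pred_b))
```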

Recently, AI developers have claimed their models perform well not only on a single task but in a variety of situations. "One of the things that's going on with AI right now is that the companies producing it are claiming that these are basically everything machines," Bender said. "You can't test that claim."

In the absence of any real-world validation, journalists should not believe the companys claims.

Consider downstream harms

As important as it is to know how these tools work, the most important thing for journalists to consider is what impact the technology is having on people today. Companies like to boast about the positive effects of their tools, so journalists should remember to probe the real-world harms the tool could enable.

AI models not working as advertised is a common problem and has led to several tools being abandoned in the past. But by that time, the damage is often done. Epic, one of the largest healthcare technology companies in the US, released an AI tool to predict sepsis in 2016. The tool was used across hundreds of US hospitals without any independent external validation. Finally, in 2021, researchers at the University of Michigan tested the tool and found that it worked much more poorly than advertised. After a series of follow-up investigations by Stat News, Epic stopped selling its one-size-fits-all tool a year later.

Ethical issues arise even if a tool works well. Face recognition can be used to unlock our phones, but it has already been used by companies and governments to surveil people at scale. It has been used to bar people from entering concert venues, to identify ethnic minorities, and to monitor workers and people living in public housing, often without their knowledge.

In March, reporters at Lighthouse Reports and Wired published an investigation into a welfare fraud detection model utilized by authorities in Rotterdam. The investigation found that the tool frequently discriminated against women and non-Dutch speakers, sometimes leading to highly intrusive raids of innocent people's homes by fraud controllers. Upon examination of the model and the training data, the reporters also found that the model performed little better than random guessing.

"It is more work to go find workers who were exploited or artists whose data has been stolen or scholars like me who are skeptical," Bender said.

Jonathan Stray, a senior scientist at the Berkeley Center for Human-Compatible AI and former AP editor, said that talking to the humans who are using or are affected by the tools is almost always worth it.

"Find the people who are actually using it or trying to use it to do their work and cover that story, because there are real people trying to get real things done," he said.

"That's where you're going to find out what the reality is."

Link:
How to report better on artificial intelligence - Columbia Journalism Review