Archive for the ‘Artificial Intelligence’ Category

One Of The Most Important Uses Of Artificial Intelligence Is Fraud … – Finextra

Online shopping has quickly become one of the primary ways to buy furniture, groceries, and clothes that were once bought offline. Unfortunately, in global business environments featuring high volumes of data, detecting fraudsters can be challenging.

Artificial intelligence has proven effective at combating fraud in banking and insurance. When fraud occurs, some banks reimburse consumers, while others claim the transaction was made unilaterally by the customer. Either way, banks face financial losses or a loss of customer trust.

AI and Fraud Detection

Artificial intelligence fraud detection technology has dramatically assisted businesses in enhancing internal security and streamlining corporate operations. AI's efficiency makes it a formidable force against financial crime: its data analysis capabilities allow it to uncover patterns in transactions that indicate fraudulent behavior and to detect such behavior in real time.

AI models can help detect fraud by rating each transaction's likelihood of being fraudulent, flagging suspicious transactions for further scrutiny, or rejecting them outright. Likelihood ratings let investigators focus on the transactions most likely to be fraudulent, and the models often attach reason codes to each flagged transaction.

Reason codes aid investigators by quickly pinpointing problems and expediting investigations. Investigative teams can also feed their assessments of suspicious transactions back into the AI, deepening its understanding and preventing it from re-flagging patterns that do not result in fraud.
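A minimal sketch of how such risk scoring with reason codes might look (the rules, thresholds, weights, and code labels below are all hypothetical; real systems use learned models rather than hand-written rules):

```python
# Hypothetical risk scorer: each triggered rule adds to the score and
# records a reason code so investigators can see *why* it was flagged.
def score_transaction(txn):
    score, reasons = 0.0, []
    if txn["amount"] > 5000:
        score += 0.4
        reasons.append("R01: amount far above typical range")
    if txn["country"] != txn["home_country"]:
        score += 0.3
        reasons.append("R02: transaction outside home country")
    if txn["hour"] < 6:
        score += 0.2
        reasons.append("R03: unusual time of day")
    return {"score": round(score, 2), "reasons": reasons}

result = score_transaction(
    {"amount": 7200, "country": "BR", "home_country": "US", "hour": 3}
)
print(result["score"])    # 0.9
print(result["reasons"])  # three reason codes for the investigator
```

The reason codes, not the score alone, are what let an investigator triage the case quickly.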

The Role of ML and AI in Fraud Detection

Machine learning refers to analytical approaches that "learn patterns" automatically from data sets without human assistance. Artificial intelligence (AI) is the broader term for analytical techniques applied to tasks ranging from driving cars safely to detecting fraud; machine learning is one method for building the models behind them.

AI refers to technology capable of performing tasks that require intelligence, such as analyzing data or understanding human language. AI algorithms are designed to recognize and predict patterns in real-time. AI often incorporates different ML models.

Machine learning, a subset of AI, uses algorithms that process large datasets so systems can operate autonomously, and their performance improves over time as more data arrives. It is commonly divided into unsupervised machine learning (UML) and supervised machine learning (SML): UML algorithms look for hidden patterns in unlabeled data, while SML algorithms use labeled data to predict future events.

SML models are trained on transactional data labeled as fraudulent or legitimate. UML instead employs anomaly-detection algorithms that use transaction features to find transactions differing significantly from the norm; these models tend to be simpler but less accurate than SML models.
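To illustrate the unsupervised side, here is a toy anomaly detector that flags amounts deviating strongly from an account's history. The z-score rule and the threshold of 3 are simplifying assumptions; production UML systems use far richer features than amount alone:

```python
# Unsupervised sketch: flag a transaction whose amount is more than
# z_threshold standard deviations from the account's historical mean.
from statistics import mean, stdev

def is_anomalous(history, amount, z_threshold=3.0):
    mu, sigma = mean(history), stdev(history)
    z = (amount - mu) / sigma
    return abs(z) > z_threshold

history = [42.0, 38.5, 51.0, 44.2, 40.0, 47.3]  # typical spend for this card
print(is_anomalous(history, 45.0))    # False: in line with the norm
print(is_anomalous(history, 900.0))   # True: far outside the norm
```

A supervised model would instead be fit on transactions carrying fraud/not-fraud labels, which is why its accuracy depends so heavily on label quality.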

Fraud detection and prevention tools like these can be highly efficient because they automatically discover patterns across vast numbers of transactions. Employed effectively, machine learning can differentiate fraudulent activity from legitimate conduct while adapting to previously unknown fraud techniques.

Data management can become quite intricate: recognizing patterns within data and applying data science techniques to distinguish normal from abnormal behavior must often happen within milliseconds, with hundreds of measures executed per decision. Continuously improving classification and differentiation capabilities requires a solid understanding of data patterns and sound data science practice.

Without proper domain data and fraud-specific approaches, machine-learning algorithms can easily be deployed inaccurately, leading to costly miscalculations that are difficult, sometimes impossible, to rectify and expensive in both time and resources. As with humans, an improperly built machine-learning model may exhibit undesirable traits.

Is Fraud Detection Using Artificial Intelligence Possible?

AI can play an invaluable role in managing fraud by detecting suspicious activities and preventing future fraudulent schemes from emerging. Fraud losses average an estimated 6.055% of global gross domestic product annually, cyber breaches cost businesses between 3% and 10%, and global digital fraud losses are projected to exceed $343 billion by 2027.

Given these estimates, every organization should establish an efficient fraud management system to identify, prevent, detect, and respond appropriately to any fraudulent activity within its walls, combining both detection and prevention strategies.

Artificial intelligence plays a pivotal role in managing fraud. AI technology, such as machine learning (ML) algorithms, can analyze large data sets to detect anomalies that suggest possible fraud.

AI fraud management systems have proven highly successful at recognizing and stopping various fraud types, such as payment fraud, identity fraud, and phishing, adapting quickly to emerging patterns of fraudulent behavior and becoming better detectors over time. AI fraud prevention solutions can also integrate seamlessly with additional security measures, such as identity verification and biometric authentication, for enhanced protection against such schemes.

What are the Benefits of AI in Fraud Detection?

AI fraud detection offers a way to enhance customer service without negatively affecting the accuracy and speed of operations. We discuss its key benefits below:

Accuracy: AI software can rapidly sort through large volumes of data, identifying patterns and anomalies that would be difficult for humans to recognize. AI algorithms also learn and improve over time by continuously processing new information alongside previous datasets.

Real-time monitoring: AI algorithms allow real-time tracking, enabling organizations to detect and respond immediately to fraud attempts.

Reduced false positives: Fraud detection often produces false positives, legitimate transactions mistakenly marked as fraudulent. AI algorithms that learn from feedback can reduce false positives significantly.

Increased efficiency: When AI systems automate repetitive duties such as evaluating transactions or confirming identities, far less human intervention is necessary.

Cost reduction: Fraudulent actions can seriously damage an organization's finances and reputation. By curbing fraudulent activity, AI algorithms save organizations money and protect their brand.

AI-based Uses for Fraud Detection and Prevention

Combining AI Models that are Supervised and Unsupervised

As organized crime has proven incredibly adaptive and sophisticated, traditional defense methods will not suffice; each use case should include tailor-made approaches to anomaly detection that best suit its unique circumstances.

Therefore, supervised and unsupervised models must be combined in any comprehensive next-generation fraud strategy. Supervised learning is a form of machine learning in which models are trained on numerous "labeled transactions."

Every transaction is labeled as fraud or not, and models are trained on large volumes of transaction data to identify the patterns that best represent lawful activity; a supervised algorithm's accuracy corresponds directly to the relevance and cleanliness of its training data. Unsupervised models are used to detect unusual behavior when transactional data labels are few or nonexistent; in these instances, the models must self-learn to uncover patterns that traditional analytics cannot.

In Action: Behavioral Analytics

Behavioral analytics uses machine learning techniques to understand and predict behavior across all transactions. The data is used to create profiles highlighting each user's, merchant's, or account's activities and behavior.

Profiles are updated in real time with each transaction, which allows analytic functions to predict future behavior accurately. Profiles detail financial and non-financial events, such as address changes, requests for duplicate cards, and password resets. Financial transaction data can reveal patterns such as an individual's average spending velocity, preferred hours and days for transacting, and the distance between payment locations.

Profiles can provide a virtual snapshot of current activities. This can prevent transactions from being abandoned due to false positives. An effective corporate fraud credit solution consists of analytical models and profiles which offer real-time insights into transaction trends.
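A toy sketch of such a behavioral profile, updated in real time as transactions arrive (the two statistics tracked here, average spend and preferred transaction hour, are illustrative; real profiles carry many more features):

```python
# Hypothetical per-cardholder behavioral profile, updated per transaction.
from collections import Counter

class Profile:
    def __init__(self):
        self.amounts = []        # transaction amounts seen so far
        self.hours = Counter()   # how often each hour-of-day occurs

    def update(self, amount, hour):
        """Fold one new transaction into the profile."""
        self.amounts.append(amount)
        self.hours[hour] += 1

    def avg_spend(self):
        return sum(self.amounts) / len(self.amounts)

    def preferred_hour(self):
        return self.hours.most_common(1)[0][0]

p = Profile()
for amount, hour in [(30.0, 12), (45.0, 12), (60.0, 18)]:
    p.update(amount, hour)
print(p.avg_spend())       # 45.0
print(p.preferred_hour())  # 12
```

A scoring model can then compare an incoming transaction against this snapshot, e.g. flagging a large purchase at an hour the cardholder never transacts.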

Develop Models with Large Datasets

Studies have demonstrated that data volume and variety matter more to a machine-learning model's success than the sophistication of the algorithm itself, providing the computational equivalent of human experience.

As expected, increasing the data set used for creating a machine-learning model's features can improve prediction accuracy. Consider that doctors acquire their diagnostic skill by treating thousands of patients; that accumulated experience allows them to diagnose correctly within their areas of specialization.

Fraud detection models can benefit significantly from processing millions of transactions, both valid and fraudulent, and from studying these instances in depth. To detect fraud well, one must evaluate large volumes of data to assess and calculate risk effectively at the individual level.

Self-Learning AI and Adaptive Analytics

Machine learning can help combat fraudsters who make it challenging for consumers to protect their accounts. Fraud detection experts should look for adaptive AI solutions that sharpen judgments on marginal cases, enhancing performance and ensuring maximum protection of funds.

Accuracy is crucial for transactions that score just above or just below the alert threshold. These marginal cases produce false positives, in which legitimate transactions score highly, and false negatives, in which fraudulent transactions score low.

Adaptive analytics offers businesses a more accurate picture of a company's danger areas. It increases sensitivity to fraud trends by adapting automatically to the dispositions of recent cases, allowing adaptive systems to differentiate fraud more accurately; an analyst informs the adaptive system when a particular transaction is, in fact, legitimate.

Analysts can thus keep models reflecting the evolving fraud landscape, from new fraud tactics to subtle misconduct practices that may have lain dormant for extended periods, with adaptive modeling making the model adjustments automatically.

This adaptive modeling method automatically adjusts the predictor characteristics within fraud models, improving detection rates and forestalling future attacks. It is an indispensable way of strengthening fraud detection against new threats.
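One hypothetical way analyst dispositions could feed back into an adaptive system is by nudging the alert threshold after each reviewed case (the step size and update rule here are assumptions for illustration; real adaptive analytics also re-weight model features, not just the threshold):

```python
# Sketch of analyst-feedback adaptation: a disposition of "legit" means the
# alert was a false positive, so raise the threshold slightly; "fraud" means
# the alert was correct, so become slightly more sensitive.
def adapt_threshold(threshold, disposition, step=0.01):
    if disposition == "legit":   # false positive: raise the bar
        return min(threshold + step, 1.0)
    if disposition == "fraud":   # confirmed fraud: lower the bar
        return max(threshold - step, 0.0)
    return threshold             # unreviewed: leave unchanged

t = 0.80
for disposition in ["legit", "legit", "fraud"]:
    t = adapt_threshold(t, disposition)
print(round(t, 2))  # 0.81
```

The point of the sketch is the feedback loop itself: every analyst decision moves the system, so it tracks recent case dispositions rather than staying static.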

What Dangers Could Arise from the Application of AI in Fraud Detection?

AI technologies can also pose certain risks, though these are manageable in part through AI solutions that explain their decisions. Below, we discuss the potential dangers of AI fraud detection:

Biased algorithms: An AI program may produce incorrect outcomes if its training data contains bias.

False positives or false negatives: Automated systems may err in both directions: false negatives overlook genuinely fraudulent activity, while false positives flag legitimate activity as fraud.

Absence of transparency: AI algorithms can often be challenging to decipher, making it hard for individuals to determine why an individual transaction was marked as fraudulent.

Explainable AI can reduce some of these inherent risks. The term refers to AI systems that communicate their decision-making process clearly enough for humans to understand. Explainable AI has proven particularly helpful in fraud detection because it offers clear explanations of why certain transactions or activities were flagged as potentially illicit.

Bottom Line

As part of an AI fraud detection strategy, organizations can identify automated fraud and complex attempts more rapidly and efficiently by employing both supervised and unsupervised machine learning approaches.

Since card-not-present transactions remain prevalent online, the banking and retail industries face constant fraud threats. Data breaches can result from various crimes, such as email phishing, financial fraud, identity theft, document falsification, and false accounts created by criminals targeting vulnerable users.

Read more here:
One Of The Most Important Uses Of Artificial Intelligence Is Fraud ... - Finextra

Artificial intelligence must be grounded in human rights, says High … – OHCHR

HIGH LEVEL SIDE EVENT OF THE 53rd SESSION OF THE HUMAN RIGHTS COUNCIL on

What should the limits be? A human-rights perspective on what's next for artificial intelligence and new and emerging technologies

Opening Statement by Volker Türk

UN High Commissioner for Human Rights

It is great that we are having a discussion about human rights and AI.

We all know how much our world and the state of human rights are being tested at the moment. The triple planetary crisis is threatening our existence. Old conflicts have been raging for years, with no end in sight. New ones continue to erupt, many with far-reaching global consequences. We are still reeling from the consequences of the COVID-19 pandemic, which exposed and deepened a raft of inequalities the world over.

But the question before us today, what the limits should be on artificial intelligence and emerging technologies, is one of the most pressing faced by society, governments and the private sector.

We have all seen and followed over recent months the remarkable developments in generative AI, with ChatGPT and other programmes now readily accessible to the broader public.

We know that AI has the potential to be enormously beneficial to humanity. It could improve strategic foresight and forecasting, democratize access to knowledge, turbocharge scientific progress, and increase capacity for processing vast amounts of information.

But in order to harness this potential, we need to ensure that the benefits outweigh the risks, and we need limits.

When we speak of limits, what we are really talking about is regulation.

To be effective, to be humane, to put people at the heart of the development of new technologies, any solution any regulation must be grounded in respect for human rights.

Two schools of thought are shaping the current development of AI regulation.

The first is risk-based only, focusing largely on self-regulation and self-assessment by AI developers. Instead of relying on detailed rules, risk-based regulation emphasizes identifying and mitigating risks to achieve outcomes.

This approach transfers a lot of responsibility to the private sector. Some would say too much; we hear that from the private sector itself.

It also results in clear gaps in regulation.

The other approach embeds human rights in AI's entire lifecycle. From beginning to end, human rights principles are included in the collection and selection of data, as well as in the design, development, deployment and use of the resulting models, tools and services.

This is not a warning about the future; we are already seeing the harmful impacts of AI today, and not only generative AI.

AI has the potential to strengthen authoritarian governance.

It can operate lethal autonomous weapons.

It can form the basis for more powerful tools of societal control, surveillance, and censorship.

Facial recognition systems, for example, can turn into mass surveillance of our public spaces, destroying any concept of privacy.

AI systems that are used in the criminal justice system to predict future criminal behaviour have already been shown to reinforce discrimination and to undermine rights, including the presumption of innocence.

Victims and experts, including many of you in this room, have raised the alarm bell for quite some time, but policy makers and developers of AI have not acted enough or fast enough on those concerns.

We need urgent action by governments and by companies. And at the international level, the United Nations can play a central role in convening key stakeholders and advising on progress.

There is absolutely no time to waste.

The world waited too long on climate change. We cannot afford to repeat that same mistake.

What could regulation look like?

The starting point should be the harms that people experience and will likely experience.

This requires listening to those who are affected, as well as to those who have already spent many years identifying and responding to harms. Women, minority groups, marginalized people, in particular, are disproportionately affected by bias in AI. We must make serious efforts to bring them to the table for any discussion on governance.

Attention is also needed to the use of AI in public and private services where there is a heightened risk of abuse of power or privacy intrusions: justice, law enforcement, migration, social protection, or financial services.

Second, regulations need to require assessment of the human rights risks and impacts of AI systems before, during, and after their use. Transparency guarantees, independent oversight, and access to effective remedies are needed, particularly when the State itself is using AI technologies.

AI technologies that cannot be operated in compliance with international human rights law must be banned or suspended until such adequate safeguards are in place.

Third, existing regulations and safeguards need to be implemented, for example frameworks on data protection, competition law, and sectoral regulations, including for health, tech or financial markets. A human rights perspective on the development and use of AI will have limited impact if respect for human rights is inadequate in the broader regulatory and institutional landscape.

And fourth, we need to resist the temptation to let the AI industry itself assert that self-regulation is sufficient, or to claim that it should be for them to define the applicable legal framework. I think we have learnt our lesson from social media platforms in that regard. Whilst their input is important, it is essential that the full democratic process, laws shaped by all stakeholders, is brought to bear on an issue in which all people, everywhere, will be affected far into the future.

At the same time, companies must live up to their responsibilities to respect human rights in line with the Guiding Principles on Business and Human Rights. Companies are responsible for the products they are racing to put on the market. My Office is working with a number of companies, civil society organizations and AI experts to develop guidance on how to tackle generative AI. But a lot more needs to be done along these lines.

Finally, while it would not be a quick fix, it may be valuable to explore the establishment of an international advisory body for particularly high-risk technologies, one that could offer perspectives on how regulatory standards could be aligned with universal human rights and rule of law frameworks. The body could publicly share the outcomes of its deliberations and offer recommendations on AI governance. This is something that the Secretary-General of the United Nations has also proposed as part of the Global Digital Compact for the Summit of the Future next year.

The human rights framework provides an essential foundation that can provide guardrails for efforts to exploit the enormous potential of AI, while preventing and mitigating its enormous risks.

I look forward to discussing these issues with you.

See the original post here:
Artificial intelligence must be grounded in human rights, says High ... - OHCHR

The Future of Artificial Intelligence in Healthcare: Taking a Peek into … – Medium

Artificial Intelligence (AI) has been revolutionizing various industries, and healthcare is no exception. From diagnosing diseases to predicting treatment outcomes, AI is reshaping the landscape of modern medicine.

In this blog post, we'll take a casual stroll through the exciting possibilities AI brings to healthcare, exploring how it is set to transform the way we receive medical care.

Gone are the days when medical diagnosis relied solely on the intuition and expertise of human doctors. With the advent of AI, we're witnessing a new era of precision diagnostics.

Machine learning algorithms are being trained on massive amounts of medical data, enabling them to identify patterns and anomalies that might go unnoticed by human eyes. From radiology to pathology, AI algorithms can analyze medical images and detect abnormalities with astonishing accuracy, potentially reducing diagnostic errors and improving patient outcomes.

One of the most promising aspects of AI in healthcare is its ability to predict and prevent diseases. By analyzing vast amounts of patient data, including medical records, genetic information, and lifestyle factors, AI algorithms can identify individuals at high risk of developing certain conditions.

This allows healthcare providers to intervene early, implementing personalized preventive measures and reducing the burden of disease.

Imagine a scenario where your smartphone's health app combines data from your smartwatch, medical history, and genetic profile to generate real-time health predictions.

It could alert you to take preventive measures against a potential health issue before it even arises. This proactive approach has the potential to save lives and revolutionize the concept of healthcare.

AI-powered virtual assistants and chatbots are becoming increasingly common in healthcare settings. These intelligent systems can interact with patients, providing them with immediate access to information and personalized guidance.

From answering basic health queries to reminding patients to take their medications, AI chatbots can assist in providing timely and accurate information, improving patient engagement and adherence to treatment plans.

Moreover, AI algorithms can analyze large datasets to identify treatment patterns and recommend the most effective interventions based on an individual's unique characteristics.

This level of personalized medicine has the potential to enhance treatment outcomes and reduce healthcare costs by minimizing trial-and-error approaches.

Developing new drugs is a time-consuming and expensive process. However, AI is streamlining this procedure by analyzing vast amounts of biomedical literature and scientific research.

Machine learning algorithms can identify potential drug targets, predict drug efficacy, and even suggest novel combinations of existing medications. By leveraging AI's capabilities, researchers can expedite the discovery and development of new drugs, bringing innovative treatments to patients faster than ever before.

While AI brings tremendous promise to healthcare, we must address ethical considerations and challenges associated with its implementation.

Ensuring data privacy, maintaining transparency in algorithmic decision-making, and addressing biases in AI models are crucial for building trust and safeguarding patient well-being. Striking the right balance between human judgment and AI assistance is another challenge that needs careful consideration.

The future of artificial intelligence in healthcare is brimming with possibilities. From accurate diagnostics and disease prediction to improving patient care and revolutionizing drug discovery, AI has the potential to transform healthcare as we know it.

While challenges exist, embracing AI technologies responsibly can lead to a future where smart medicine and human expertise work hand in hand to provide the best possible care for all.

So, keep an eye on the horizon and prepare for a future where AI becomes an indispensable tool in the hands of healthcare providers, helping them deliver precision medicine and personalized care to improve the health and well-being of millions of people worldwide.

Follow Techdella Blog to read more about technological innovations.

The rest is here:
The Future of Artificial Intelligence in Healthcare: Taking a Peek into ... - Medium

How artificial intelligence can aid urban development – Open Access Government

Planning and maintaining communities in the modern world is as simple as threading a needle with an elephant. Under the best of circumstances, urban planning requires tremendous amounts of data, foresight and cross-department cooperation.

But when also accounting for the most pressing issues of the day (climate change and diversity, equity and inclusion, among others), a difficult job suddenly becomes a Herculean task.

Modern challenges require modern technology, and no contemporary tool is more powerful or consequential than artificial intelligence.

The inherent need in urban planning to process and interpret numerous disparate streams of data while responding to dramatic changes in the moment is an undertaking layered with complexity.

With the muscular computing capacity and deep-learning capabilities to help optimize an elaborate web of systems and interests (including transportation, infrastructure management, energy efficiency, public safety and citizen engagement), artificial intelligence can be a game-changer in the mission of modernizing urban development.

Transportation infrastructure is what often comes to mind when the subject of urban development is raised, and with good reason. It's a complex and critical challenge that requires a great deal of resources and calls for a variety of (occasionally competing) solutions.

City life features the mingling of automobiles, pedestrians and even pets, and considerations such as public transportation, bicycle traffic and rush hour surges complicate any optimization project.

So, too, do the grids and topography that are unique to every city. But with advanced video analytics software that is designed to leverage existing investments in video to identify, process and index objects and behavior from live surveillance feeds, city systems can account for and better understand factors such as traffic congestion, roadway construction and vehicle-pedestrian interactions.


AI technologies empower urban developers with the ability to glean insights from existing surveillance networks, allowing for best-case city planning that serves the greater public good.

The only constant for urban communities is change. City populations grow and contract. A restaurant opens while a shopping mall shutters its doors. New crime hotspots and pedestrian bottlenecks materialize without warning.

Previous initiatives may go underutilized or fall short of demand. For urban developers, the goalposts are always being moved, which makes city planning both exceptionally knotty and vitally necessary.

Video analytics software can help city planners and decision-makers identify certain trends and even help predict others before they become intractable challenges. Data from CCTV surveillance can be processed using AI, providing urban developers with the information they need to make the most efficient use of city resources while meeting the needs of the public.

Where might a city create green spaces that serve the most citizens? What's the ideal spot to plan a farmers market or build a new skate park? AI-driven software helps city planners make sense of available data (which would otherwise be unmanageable and uninterpretable by human operators) to intelligently inform decisions and maximize infrastructural investments, effectively saving community resources.

Communication and data sharing between departments and systems is a challenge for most cities, especially as populations grow and a community's needs evolve over time.

Because city-powered CCTV video surveillance cameras have typically been used only for security and investigative purposes, many local government agencies and divisions that could benefit from their useful insights may lack access or simply be unaware of their value.


Smart cities are communities that have made a concerted effort to connect information technologies across department silos for the benefit of the public. Typically, that's achieved through AI-driven technology, such as video analytics software, that taps into a city's existing video surveillance infrastructure.

When information is shared across departments, urban developers have the tools to spot opportunities, inefficiencies or hazards, whether that means filling a pothole in a busy thoroughfare or adding streetlamps to a darkened (and potentially dangerous) corner of a city park.

Artificial intelligence has the processing muscle and dynamic interpretation skills to help cities not only address everyday problems, but also anticipate and address the most modern of challenges such as pandemic preparation. With AI-powered solutions, urban planners can help develop their communities while keeping citizens and systems safer, healthier and stronger.

This piece was written and provided by Liam Galin and BriefCam.

Liam Galin joined BriefCam as CEO to take charge of the company's growth strategy and maintain its position as a video analytics market leader and innovator.

The rest is here:
How artificial intelligence can aid urban development - Open Access Government

BlackRock highlights artificial intelligence in its 2023 midyear … – Seeking Alpha

Shutthiphong Chandaeng/iStock via Getty Images

BlackRock outlined in their 2023 midyear outlook report to investors that markets currently provide an abundance of investment opportunities with one area being artificial intelligence.

"AI-driven productivity gains could boost profit margins, especially of companies with high staffing costs or a large share of tasks that could be automated," the world's largest asset manager stated in its midyear report.

The financial firm outlined that Wall Street is still assessing the potential effects AI brings to applications and how the technology could disrupt entire industries. The firm stated that AI goes beyond sectors and also brings greater cybersecurity risks across the board.

BlackRock went on to add: "We think the importance of data for AI and potential winners is underappreciated. Companies with vast sets of proprietary data have the ability to more quickly and easily leverage a large amount of data to create innovative models. New AI tools could analyze and unlock the value of the data gold mine some companies may be sitting on."

For investors looking to analyze the artificial intelligence space further, see below a grouping of 10 popular AI-focused exchange-traded funds:

More on Artificial Intelligence:

Go here to see the original:
BlackRock highlights artificial intelligence in its 2023 midyear ... - Seeking Alpha