Archive for the ‘Artificial Intelligence’ Category

The Future of Artificial Intelligence in Healthcare: Taking a Peek into … – Medium

Artificial Intelligence (AI) has been revolutionizing various industries, and healthcare is no exception. From diagnosing diseases to predicting treatment outcomes, AI is reshaping the landscape of modern medicine.

In this blog post, we'll take a casual stroll through the exciting possibilities AI brings to healthcare, exploring how it is set to transform the way we receive medical care.

Gone are the days when medical diagnosis relied solely on the intuition and expertise of human doctors. With the advent of AI, we're witnessing a new era of precision diagnostics.

Machine learning algorithms are being trained on massive amounts of medical data, enabling them to identify patterns and anomalies that might go unnoticed by human eyes. From radiology to pathology, AI algorithms can analyze medical images and detect abnormalities with astonishing accuracy, potentially reducing diagnostic errors and improving patient outcomes.

One of the most promising aspects of AI in healthcare is its ability to predict and prevent diseases. By analyzing vast amounts of patient data, including medical records, genetic information, and lifestyle factors, AI algorithms can identify individuals at high risk of developing certain conditions.

This allows healthcare providers to intervene early, implementing personalized preventive measures and reducing the burden of disease.
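To make the idea concrete, here is a minimal sketch of how such a risk model might look, using scikit-learn's logistic regression on synthetic data. The feature names, coefficients and the 0.7 alert threshold are illustrative assumptions, not details from any real clinical system.

```python
# Minimal sketch of a disease-risk model of the kind described above.
# All data is synthetic and the feature names are hypothetical; a real
# clinical model would need vetted data, validation and regulatory review.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 5000
# Hypothetical patient features: age, BMI, smoker flag, genetic risk score
X = np.column_stack([
    rng.normal(50, 15, n),   # age
    rng.normal(27, 5, n),    # BMI
    rng.integers(0, 2, n),   # smoker (0/1)
    rng.normal(0, 1, n),     # polygenic risk score
])
# Synthetic outcome loosely driven by the features
logits = (0.04 * (X[:, 0] - 50) + 0.1 * (X[:, 1] - 27)
          + 0.8 * X[:, 2] + 0.6 * X[:, 3])
y = (rng.random(n) < 1 / (1 + np.exp(-logits))).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)

# Flag "high-risk" individuals for early intervention
risk = model.predict_proba(X_te)[:, 1]
print(f"Patients above 0.7 risk: {(risk > 0.7).sum()} of {len(risk)}")
```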

Imagine a scenario where your smartphone's health app combines data from your smartwatch, medical history, and genetic profile to generate real-time health predictions.

It could alert you to take preventive measures against a potential health issue before it even arises. This proactive approach has the potential to save lives and revolutionize the concept of healthcare.

AI-powered virtual assistants and chatbots are becoming increasingly common in healthcare settings. These intelligent systems can interact with patients, providing them with immediate access to information and personalized guidance.

From answering basic health queries to reminding patients to take their medications, AI chatbots can assist in providing timely and accurate information, improving patient engagement and adherence to treatment plans.

Moreover, AI algorithms can analyze large datasets to identify treatment patterns and recommend the most effective interventions based on an individual's unique characteristics.

This level of personalized medicine has the potential to enhance treatment outcomes and reduce healthcare costs by minimizing trial-and-error approaches.

Developing new drugs is a time-consuming and expensive process. However, AI is streamlining this procedure by analyzing vast amounts of biomedical literature and scientific research.

Machine learning algorithms can identify potential drug targets, predict drug efficacy, and even suggest novel combinations of existing medications. By leveraging AI's capabilities, researchers can expedite the discovery and development of new drugs, bringing innovative treatments to patients faster than ever before.

While AI brings tremendous promise to healthcare, we must address ethical considerations and challenges associated with its implementation.

Ensuring data privacy, maintaining transparency in algorithmic decision-making, and addressing biases in AI models are crucial for building trust and safeguarding patient well-being. Striking the right balance between human judgment and AI assistance is another challenge that needs careful consideration.

The future of artificial intelligence in healthcare is brimming with possibilities. From accurate diagnostics and disease prediction to improving patient care and revolutionizing drug discovery, AI has the potential to transform healthcare as we know it.

While challenges exist, embracing AI technologies responsibly can lead to a future where smart medicine and human expertise work hand in hand to provide the best possible care for all.

So, keep an eye on the horizon and prepare for a future where AI becomes an indispensable tool in the hands of healthcare providers, helping them deliver precision medicine and personalized care to improve the health and well-being of millions of people worldwide.



How artificial intelligence can aid urban development – Open Access Government

Planning and maintaining communities in the modern world is as simple as threading a needle with an elephant. Under the best of circumstances, urban planning requires tremendous amounts of data, foresight and cross-department cooperation.

But when also accounting for the most pressing issues of the day (climate change and diversity, equity and inclusion, among others), a difficult job suddenly becomes a Herculean task.

Modern challenges require modern technology, and no contemporary tool is more powerful or consequential than artificial intelligence.

The inherent need in urban planning to process and interpret numerous disparate streams of data while responding to dramatic changes in the moment is an undertaking layered with complexity.

With the muscular computing capacity and deep-learning capabilities to help optimize an elaborate web of systems and interests (including transportation, infrastructure management, energy efficiency, public safety and citizen engagement), artificial intelligence can be a game-changer in the mission of modernizing urban development.

Transportation infrastructure is what often comes to mind when the subject of urban development is raised, and with good reason. It's a complex and critical challenge that requires a great deal of resources and calls for a variety of (occasionally competing) solutions.

City life features the mingling of automobiles, pedestrians and even pets, and considerations such as public transportation, bicycle traffic and rush hour surges complicate any optimization project.

So, too, do the grids and topography that are unique to every city. But with advanced video analytics software that is designed to leverage existing investments in video to identify, process and index objects and behavior from live surveillance feeds, city systems can account for and better understand factors such as traffic congestion, roadway construction and vehicle-pedestrian interactions.
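The article does not describe how any particular vendor's software works internally, but the underlying idea of extracting moving objects from a live surveillance feed can be sketched with OpenCV's background subtraction. The video filename and the area threshold below are placeholders.

```python
# Minimal sketch of motion detection on a surveillance feed using OpenCV
# background subtraction. This illustrates the general idea only, not any
# vendor's product; the video path and thresholds are placeholders.
import cv2

cap = cv2.VideoCapture("traffic.mp4")  # hypothetical CCTV clip
subtractor = cv2.createBackgroundSubtractorMOG2(history=500, varThreshold=32)

frame_idx = 0
while True:
    ok, frame = cap.read()
    if not ok:
        break
    mask = subtractor.apply(frame)
    # Keep confident foreground pixels and ignore shadow values
    _, mask = cv2.threshold(mask, 200, 255, cv2.THRESH_BINARY)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    # Count contours large enough to be vehicles or pedestrians
    moving = [c for c in contours if cv2.contourArea(c) > 500]
    print(f"frame {frame_idx}: {len(moving)} moving objects")
    frame_idx += 1

cap.release()
```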


AI technologies empower urban developers with the ability to glean insights from existing surveillance networks, allowing for best-case city planning that serves the greater public good.

The only constant for urban communities is change. City populations grow and contract. A restaurant opens while a shopping mall shutters its doors. New crime hotspots and pedestrian bottlenecks materialize without warning.

Previous initiatives may go underutilized or fall short of demand. For urban developers, the goalposts are always being moved, which makes city planning both exceptionally knotty and vitally necessary.

Video analytics software can help city planners and decision-makers identify certain trends and even help predict others before they become intractable challenges. Data from CCTV surveillance can be processed using AI, providing urban developers with the information they need to make the most efficient use of city resources while meeting the needs of the public.

Where might a city create green spaces that serve the most citizens? What's the ideal spot to plan a farmers market or build a new skate park? AI-driven software helps city planners make sense of available data (which would otherwise be unmanageable and uninterpretable by human operators) to intelligently inform decisions and maximize infrastructural investments, effectively saving community resources.

Communication and data sharing between departments and systems is a challenge for most cities, especially as populations grow and a community's needs evolve over time.

Because city-powered CCTV video surveillance cameras have typically been used only for security and investigative purposes, many local government agencies and divisions that could benefit from their useful insights may lack access or simply be unaware of their value.


Smart cities are communities that have made a concerted effort to connect information technologies across department silos for the benefit of the public. Typically, that's achieved through AI-driven technology, such as video analytics software, that taps into a city's existing video surveillance infrastructure.

When information is shared across departments, urban developers have the tools to spot opportunities, inefficiencies or hazards, whether that be filling a pothole in a busy thoroughfare or adding streetlamps to a darkened (and potentially dangerous) corner of a city park.

Artificial intelligence has the processing muscle and dynamic interpretation skills to help cities not only address everyday problems, but also anticipate and address the most modern of challenges, such as pandemic preparation. With AI-powered solutions, urban planners can help develop their communities while keeping citizens and systems safer, healthier and stronger.

This piece was written and provided by Liam Galin and BriefCam.

Liam Galin joined BriefCam as CEO to take charge of the company's growth strategy and maintain its position as a video analytics market leader and innovator.


BlackRock highlights artificial intelligence in its 2023 midyear … – Seeking Alpha


In its 2023 midyear outlook report, BlackRock told investors that markets currently provide an abundance of investment opportunities, one of them being artificial intelligence.

"AI-driven productivity gains could boost profit margins, especially of companies with high staffing costs or a large share of tasks that could be automated," the world's largest asset manager stated in its midyear report.

The financial firm outlined that Wall Street is still assessing the potential effects AI brings to applications and how the technology could disrupt entire industries. The firm stated that AI goes beyond sectors and also brings greater cybersecurity risks across the board.

BlackRock went on to add: "We think the importance of data for AI and potential winners is underappreciated. Companies with vast sets of proprietary data have the ability to more quickly and easily leverage a large amount of data to create innovative models. New AI tools could analyze and unlock the value of the data gold mine some companies may be sitting on."




How to report better on artificial intelligence – Columbia Journalism Review

In the past few months we have been deluged with headlines about new AI tools and how much they are going to change society.

Some reporters have done amazing work holding the companies developing AI accountable, but many struggle to report on this new technology in a fair and accurate way.

We, an investigative reporter, a data journalist, and a computer scientist, have firsthand experience investigating AI. We've seen the tremendous potential these tools can have, but also their tremendous risks.

As their adoption grows, we believe that, soon enough, many reporters will encounter AI tools on their beat, so we wanted to put together a short guide to what we have learned.

So we'll begin with a simple explanation of what they are.

In the past, computers were fundamentally rule-based systems: if a particular condition A is satisfied, then perform operation B. But machine learning (a subset of AI) is different. Instead of following a set of rules, we can use computers to recognize patterns in data.

For example, given enough labeled photographs (hundreds of thousands or even millions) of cats and dogs, we can teach certain computer systems to distinguish between images of the two species.

This process, known as supervised learning, can be performed in many ways. One of the most common techniques in recent use is the neural network. But while the details vary, supervised learning tools are all essentially computers learning patterns from labeled data.

Similarly, one of the techniques used to build recent models like ChatGPT is called self-supervised learning, where the labels are generated automatically.
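To make the cats-and-dogs example concrete, here is a minimal supervised-learning sketch in Python. Synthetic two-number "image features" stand in for real photographs (which would be high-dimensional pixel arrays), and the classifier is a small neural network from scikit-learn; all names and numbers are illustrative.

```python
# Minimal supervised-learning sketch: learn a cat/dog boundary from
# labeled examples. Two made-up features stand in for real photos.
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
# Pretend features: e.g. ear pointiness and snout length, scaled 0-1
cats = rng.normal([0.8, 0.3], 0.1, size=(500, 2))
dogs = rng.normal([0.3, 0.7], 0.1, size=(500, 2))
X = np.vstack([cats, dogs])
y = np.array([0] * 500 + [1] * 500)  # 0 = cat, 1 = dog

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
# A small neural network, the technique named in the text
clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000).fit(X_tr, y_tr)
print(f"held-out accuracy: {clf.score(X_te, y_te):.2f}")
```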

Be skeptical of PR hype

People in the tech industry often claim they are the only people who can understand and explain AI models and their impact. But reporters should be skeptical of these claims, especially when coming from company officials or spokespeople.

"Reporters tend to just pick whatever the author or the model producer has said," Abeba Birhane, an AI researcher and senior fellow at the Mozilla Foundation, said. "They just end up becoming a PR machine themselves for those tools."

In our analysis of AI news, we found that this was a common issue. Birhane and Emily Bender, a computational linguist at the University of Washington, suggest that reporters talk to domain experts outside the tech industry and not just give a platform to AI vendors hyping their own technology. For instance, Bender recalled that she read a story quoting an AI vendor claiming their tool would revolutionize mental health care. "It's obvious that the people who have the expertise about that are people who know something about how therapy works," she said.

In the Dallas Morning News's series of stories on Social Sentinel, the company repeatedly claimed its model could detect students at risk of harming themselves or others from their posts on popular social media platforms, and made outlandish claims about the performance of its model. But when reporters talked to experts, they learned that reliably predicting suicidal ideation from a single post on social media is not feasible.

Many editors could also choose better images and headlines, said Margaret Mitchell, chief ethics scientist of the AI company Hugging Face. Inaccurate headlines about AI often influence lawmakers and regulation, which Mitchell and others then have to try to fix.

"If you just see headline after headline that are these overstated or even incorrect claims, then that's your sense of what's true," Mitchell said. "You are creating the problem that your journalists are trying to report on."

Question the training data

After the model is trained with the labeled data, it is evaluated on an unseen data set, called the test or validation set, and scored using some sort of metric.

The first step when evaluating an AI model is to see how much and what kind of data the model has been trained on. The model can only perform well in the real world if the training data represents the population it is being tested on. For example, if developers trained a model on ten thousand pictures of puppies and fried chicken, and then evaluated it using a photo of a salmon, it likely wouldn't do well. Reporters should be wary when a model trained for one objective is used for a completely different objective.
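A toy illustration of that warning: the sketch below trains a linear model in one region of feature space, where it scores well on a held-out sample, then evaluates it on data drawn from elsewhere, where the same model fails badly. Everything here is synthetic and chosen only to make the mismatch visible.

```python
# Sketch of the train/test mismatch problem: a model that scores well on
# data like its training set can fail on data drawn from elsewhere.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)

def true_label(X):
    # The real-world rule is nonlinear: inside the unit circle = class 1
    return (X[:, 0] ** 2 + X[:, 1] ** 2 < 1.0).astype(int)

# Training (and held-out test) data come from one region of feature space
X_train = rng.normal([0.9, 0.0], 0.3, size=(3000, 2))
X_test = rng.normal([0.9, 0.0], 0.3, size=(3000, 2))
model = LogisticRegression().fit(X_train, true_label(X_train))
print(f"held-out accuracy, same distribution: "
      f"{model.score(X_test, true_label(X_test)):.2f}")

# Deployment data comes from a different region entirely
X_deploy = rng.normal([-0.9, 0.0], 0.3, size=(3000, 2))
print(f"accuracy on shifted data: "
      f"{model.score(X_deploy, true_label(X_deploy)):.2f}")
```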

In 2017, Amazon researchers scrapped a machine learning model used to filter through résumés after they discovered it discriminated against women. The culprit? Their training data, which consisted of the résumés of the company's past hires, who were predominantly men.

Data privacy is another concern. In 2019, IBM released a data set with the faces of a million people. The following year a group of plaintiffs sued the company for including their photographs without consent.

Nicholas Diakopoulos, a professor of communication studies and computer science at Northwestern, recommends that journalists ask AI companies about their data collection practices and if subjects gave their consent.

Reporters should also consider the companys labor practices. Earlier this year, Time magazine reported that OpenAI paid Kenyan workers $2 an hour for labeling offensive content used to train ChatGPT. Bender said these harms should not be ignored.

"There's a tendency in all of this discourse to basically believe all of the potential of the upside and dismiss the actual documented downside," she said.

Evaluate the model

The final step in the machine learning process is for the model to output a guess on the testing data and for that output to be scored. Typically, if the model achieves a good enough score, it is deployed.

Companies trying to promote their models frequently quote numbers like "95 percent accuracy." Reporters should dig deeper here and ask if the high score only comes from a holdout sample of the original data or if the model was checked with realistic examples. These scores are only valid if the testing data matches the real world. Mitchell suggests that reporters ask specific questions like "How does this generalize in context? Was the model tested in the wild or outside of its domains?"

It's also important for journalists to ask what metric the company is using to evaluate the model, and whether that is the right one to use. A useful question to consider is whether a false positive or false negative is worse. For example, in a cancer screening tool, a false positive may result in people getting an unnecessary test, while a false negative might result in missing a tumor in its early stage, when it is treatable.
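The cancer-screening arithmetic is easy to demonstrate. This sketch, with made-up labels and predictions, shows how a screen can report a respectable overall accuracy while still missing half the tumors.

```python
# Sketch: why a single "accuracy" number hides the difference between
# false positives and false negatives. Labels and predictions are made up.
import numpy as np
from sklearn.metrics import confusion_matrix, precision_score, recall_score

y_true = np.array([0] * 90 + [1] * 10)  # 10% of patients have a tumor
y_pred = np.array([0] * 88 + [1] * 2    # healthy: 2 false alarms
                  + [0] * 5 + [1] * 5)  # sick: half the tumors missed

tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
accuracy = (tp + tn) / len(y_true)
print(f"accuracy: {accuracy:.2f}  (looks fine on its own)")
print(f"false positives: {fp}  -> unnecessary follow-up tests")
print(f"false negatives: {fn}  -> missed tumors, the costlier error here")
print(f"precision: {precision_score(y_true, y_pred):.2f}, "
      f"recall: {recall_score(y_true, y_pred):.2f}")
```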

The difference in metrics can be crucial to determining questions of fairness in the model. In May 2016, ProPublica published an investigation into an algorithm called COMPAS, which aimed to predict a criminal defendant's risk of committing a crime within two years. The reporters found that, despite having similar accuracy between Black and white defendants, the algorithm had twice as many false positives for Black defendants as for white defendants.

The article ignited a fierce debate in the academic community over competing definitions of fairness. Journalists should specify which version of fairness is used to evaluate a model.
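The per-group check underlying analyses like ProPublica's can be sketched in a few lines. The numbers below are synthetic and chosen only to illustrate how equal overall accuracy can hide a doubled false positive rate.

```python
# Sketch of the fairness check described above: equal overall accuracy
# can coexist with very unequal false positive rates across groups.
import numpy as np

def false_positive_rate(y_true, y_pred):
    fp = np.sum((y_pred == 1) & (y_true == 0))
    tn = np.sum((y_pred == 0) & (y_true == 0))
    return fp / (fp + tn)

# Hypothetical outcomes and risk predictions for two groups
y_true_a = np.array([0] * 80 + [1] * 20)
y_pred_a = np.array([0] * 64 + [1] * 16 + [0] * 4 + [1] * 16)   # group A
y_true_b = np.array([0] * 80 + [1] * 20)
y_pred_b = np.array([0] * 72 + [1] * 8 + [0] * 12 + [1] * 8)    # group B

for name, yt, yp in [("A", y_true_a, y_pred_a), ("B", y_true_b, y_pred_b)]:
    acc = np.mean(yt == yp)
    print(f"group {name}: accuracy {acc:.2f}, "
          f"false positive rate {false_positive_rate(yt, yp):.2f}")
# Both groups score 0.80 accuracy, but group A's false positive rate
# (0.20) is double group B's (0.10), mirroring the COMPAS finding.
```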

Recently, AI developers have claimed their models perform well not only on a single task but in a variety of situations. "One of the things that's going on with AI right now is that the companies producing it are claiming that these are basically everything machines," Bender said. "You can't test that claim."

In the absence of any real-world validation, journalists should not believe the company's claims.

Consider downstream harms

As important as it is to know how these tools work, the most important thing for journalists to consider is what impact the technology is having on people today. Companies like to boast about the positive effects of their tools, so journalists should remember to probe the real-world harms the tool could enable.

AI models not working as advertised is a common problem, one that has led to several tools being abandoned in the past. But by that time, the damage is often done. Epic, one of the largest healthcare technology companies in the US, released an AI tool to predict sepsis in 2016. The tool was used across hundreds of US hospitals without any independent external validation. Finally, in 2021, researchers at the University of Michigan tested the tool and found that it worked much more poorly than advertised. A year later, after a series of follow-up investigations by Stat News, Epic stopped selling its one-size-fits-all tool.

Ethical issues arise even if a tool works well. Face recognition can be used to unlock our phones, but it has already been used by companies and governments to surveil people at scale. It has been used to bar people from entering concert venues, to identify ethnic minorities, and to monitor workers and people living in public housing, often without their knowledge.

In March, reporters at Lighthouse Reports and Wired published an investigation into a welfare fraud detection model utilized by authorities in Rotterdam. The investigation found that the tool frequently discriminated against women and non-Dutch speakers, sometimes leading to highly intrusive raids of innocent people's homes by fraud controllers. Upon examination of the model and the training data, the reporters also found that the model performed little better than random guessing.

"It is more work to go find workers who were exploited or artists whose data has been stolen or scholars like me who are skeptical," Bender said.

Jonathan Stray, a senior scientist at the Berkeley Center for Human-Compatible AI and former AP editor, said that talking to the humans who are using or are affected by the tools is almost always worth it.

"Find the people who are actually using it or trying to use it to do their work and cover that story, because there are real people trying to get real things done," he said.

"That's where you're going to find out what the reality is."


Hackers: We won't let artificial intelligence get the better of us – ComputerWeekly.com

Artificial intelligence (AI) doesn't stand a chance of replicating the human creativity needed to become an ethical hacker, but it will disrupt how hackers conduct penetration testing and work on bug bounty programmes. It is already increasing the value of hacking to organisations that are prepared to engage with the hacking community rather than dismiss it outright.

This is according to the hackers who contributed to the latest edition of Inside the mind of a hacker (ITMOAH), an annual report from crowdsourced penetration testing firm Bugcrowd that sets out to offer an in-depth look at how hackers think and function, and why they do the things they do. This year's edition, unsurprisingly, leans into AI in a big way.

When it came to the existential questions around whether or not AI could outperform the average hacker or render them irrelevant, 21% of respondents said AI was already outperforming them, and a third said it will be able to do so given another five years or so.

The vast majority, 78%, said AI would disrupt how they work on penetration testing or bug bounty programmes some time between now and 2028, with 40% saying it has already changed the way people hack, and 91% of hackers saying generative AI either has already, or will in future, increase the value of their work.

Outperforming a human doing repetitive, sometimes monotonous, work such as data analysis is one thing, but hacking as a vocation also encourages creativity of thought, and it is here that the community seems to feel humans will continue to have an edge, with 72% saying they did not think AI will ever be able to replicate these qualities.

"I've done a fair amount with AI, and as impressive as it is, I don't think it will be replacing humans for quite some time, if ever," said one respondent, a 20-year cyber security veteran who hacks on the Bugcrowd platform using the handle Nerdwell.

"AI is very good at what it does: pattern recognition and applying well-known solutions to well-known problems," he said. "Humans are biologically designed to seek out novelty and curiosity. Our brains are literally wired to be creative and find novel solutions to novel problems."

Another Bugcrowd hacker, who goes by the handle OrwaGodfather, added: "AI is great, but it will not replace me. There are some bugs and issues, just like any other technology."

"It can have an effect on my place in hacking, though. For example, automation has huge potential to help hackers," said OrwaGodfather, who started hacking in 2020 and, when away from his keyboard, works as a professional chef.

"It can make things easier and save time," he said. "If I find a bug when performing a pen test and I don't want to spend 30 minutes writing a report, I can start by using AI to write descriptions for me. AI makes hacking faster."
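A minimal sketch of the workflow OrwaGodfather describes might look like the following, using the openai Python package's v1 chat interface. The model name, prompt and finding notes are illustrative assumptions, and any generated draft would still need the tester's own review before submission.

```python
# Minimal sketch of drafting a bug-report description from raw pen-test
# notes with a generative AI API. The model name, prompt and notes are
# illustrative; a human must review anything the model produces.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Hypothetical raw finding notes from a pen test
notes = (
    "Endpoint: POST /api/v1/export. "
    "Issue: IDOR - changing account_id in the request body returns "
    "another user's data. Impact: arbitrary account data disclosure."
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative model choice
    messages=[
        {"role": "system",
         "content": "You write concise, professional vulnerability "
                    "report descriptions from a tester's raw notes."},
        {"role": "user", "content": notes},
    ],
)
print(response.choices[0].message.content)
```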

Whatever their gut feelings may be, Bugcrowd's hackers are scrambling aboard the AI train, with 85% saying they had played around with generative AI technology and 64% already incorporating it into their security workflows in some way; a further 30% said they planned to do this in the future.

Hackers who have adopted or who plan to adopt generative AI are most inclined to use OpenAI's ChatGPT (a Bugcrowd customer), cited by 98% of respondents, with Google's Bard and Microsoft's Bing Chat AI at 40%.

Those that have taken the plunge are using generative AI technology in a wide variety of ways, with the most commonly used functions being text summarisation or generation, code generation, search enhancement, chatbots, image generation, data design, collection or summarisation, and machine learning.

Within security research workflows specifically, hackers said they found generative AI most useful to automate tasks, analyse data, and identify and validate vulnerabilities. Less widely used applications included conducting reconnaissance, categorising threats, detecting anomalies, prioritising risk and building training models.

Many hackers who are not native English speakers or not fluent in English are also using services such as ChatGPT to translate or write reports and bug submissions, and fuel more collaboration across national borders.

Over the past decade, Bugcrowd's annual report has also served a secondary purpose, that of helping to humanise the hacking community and disrupt negative and unhelpful stereotypes of what a hacker actually is.

This is particularly important given that, in spite of years of pushback and attempts to educate, many people who should know better readily and intentionally conflate the term "hacker" with the term "cyber criminal".

"We've taken on the responsibility of helping the market understand what a hacker actually is," Casey Ellis, Bugcrowd founder, chief technology officer and report co-author, told Computer Weekly at the recent Infosecurity Europe cyber trade fair.

"I think when we started, everyone assumed it was a bad thing," he said. "Some 10 years on, we're now at a point where people understand that hacking is actually a skill set. Like most skill sets, it's dual-use. It's like lockpicking. If you've got that skill, you can become a locksmith, or a burglar. There's nothing wrong with lockpicking; it's how you're actually using it. Hacking is the same."

The 2023 ITMOAH report shows how some fundamental shifts in hacker culture and demographics look set to shake up the cyber security landscape in the coming years.

For the first time, the report reveals, the majority of active hackers, between 55% and 60%, are now members of the Generation Z cohort, currently in their teens and early 20s, while between 33% and 36% are Millennials, aged from their late 20s to early 40s.

And despite hacking's cultural roots in the 1980s, only 2% are members of Generation X, those born between the mid-1960s and approximately 1980, the youngest of whom are now about 45 years old.

So, are the stereotypes of teenage hackers actually proving accurate, and more pertinently, are the kids all right? "We're seeing a pretty rapid acceleration of participation from people that are under 18," said Ellis. "It's still a very small population, only 6%, but it's up from 3% year-on-year, which is a big shift."

He said this trend will become increasingly relevant because today's teenagers think about technology in a fundamentally different way to those born even a few short years earlier.

"I've got a 15-year-old daughter and the way she interacts with technology is completely different to me," said Ellis. "Her introduction to technology was all about the interface; mine was all about the plumbing. We just think about the internet in a fundamentally different way."

"Now, I know stuff that she'll never know because I grew up with the nuts and bolts, but she'll think about the interface in a way that I probably never will, because I'm so consumed with the nuts and bolts."

"You talk about Millennials as digital natives, but Gen Z and younger are actually digital natives," he said. "They're able to wander through that environment in an intuitive way that we can't really understand. I can try to empathise with that, and I can get most of the way there, but I recognise the fact I'll never fully understand, because it's not my experience."

This generation is also proving adept at challenging the mores and assumptions of their elders that have often been built into technology, and Ellis said this gives them an advantage in figuring out what is coming next, and where future vulnerabilities may lie.

The other part of this trend is that todays teens are more politically and socially motivated, and more diverse, in ways that older people are not. This factor is already changing the cyber landscape and will certainly continue to do so.

Take Lapsus$, the teenage-run cyber extortion collective that attacked the systems of ride-sharing service Uber in 2022 for no particular reason other than that they didn't care for Uber's ethics.

"One of the big things that I've been saying since Lapsus$ is that as defenders, we're not ready for a chaotic act," said Ellis. "We've been thinking about cyber criminals, nation states, threat actors as having a symmetric motivation."

"A nation state wants to advance the nation; cyber criminals want money. They're predictable. And there is symmetry in what they're doing. Folks that come in with more of an activism bent, you don't really know what they want. And in the case of Lapsus$, it's like: we just want to make a mess because those guys suck. How do you defend against that? We haven't really been thinking in that way since LulzSec, which was probably the last example of a group that did that."

Of course, the teens on Bugcrowd's platform are not attacking organisations in the same sense as Lapsus$ did, but its story holds a lesson for the hacking community and for defenders, and the potential to channel activity that might otherwise be expended on malicious acts into legitimate security work is clearly immense.

The full report, which can be downloaded in full from Bugcrowd, contains a wealth of additional insight into hacker demographics (the gender gap is increasing, likely due to the extra pressure the Covid-19 pandemic put on many women), motivations to hack, what hackers think ordinary security teams need to do better, and more besides.
