Archive for the ‘Artificial Intelligence’ Category

Test Yourself: Which Faces Were Made by A.I.? – The New York Times

Tools powered by artificial intelligence can create lifelike images of people who do not exist.

See if you can identify which of these images are real people and which are A.I.-generated.


Ever since the public release of tools like Dall-E and Midjourney in the past couple of years, the A.I.-generated images they've produced have stoked confusion about breaking news, fashion trends and Taylor Swift.

Distinguishing between a real versus an A.I.-generated face has proved especially confounding.

Research published across multiple studies found that faces of white people created by A.I. systems were perceived as more realistic than genuine photographs of white people, a phenomenon called hyper-realism.

Researchers believe A.I. tools excel at producing hyper-realistic faces because they were trained on tens of thousands of images of real people. Those training datasets contained images of mostly white people, resulting in hyper-realistic white faces. (The over-reliance on images of white people to train A.I. is a known problem in the tech industry.)

The confusion among participants was less apparent among nonwhite faces, researchers found.

Participants were also asked to indicate how sure they were in their selections, and researchers found that higher confidence correlated with a higher chance of being wrong.

"We were very surprised to see the level of over-confidence that was coming through," said Dr. Amy Dawel, an associate professor at Australian National University, who was an author on two of the studies.

"It points to the thinking styles that make us more vulnerable on the internet and more vulnerable to misinformation," she added.

The idea that A.I.-generated faces could be deemed more authentic than actual people startled experts like Dr. Dawel, who fear that digital fakes could help the spread of false and misleading messages online.

A.I. systems had been capable of producing photorealistic faces for years, though there were typically telltale signs that the images were not real. A.I. systems struggled to create ears that looked like mirror images of each other, for example, or eyes that looked in the same direction.

But as the systems have advanced, the tools have become better at creating faces.

The hyper-realistic faces used in the studies tended to be less distinctive, researchers said, and hewed so closely to average proportions that they failed to arouse suspicion among the participants. And when participants looked at real pictures of people, they seemed to fixate on features that drifted from average proportions, such as a misshapen ear or a larger-than-average nose, considering them a sign of A.I. involvement.

The images in the study came from StyleGAN2, an image model trained on a public repository of photographs containing 69 percent white faces.

Study participants said they relied on a few features to make their decisions, including how proportional the faces were, the appearance of skin, wrinkles, and facial features like eyes.

Read the original post:
Test Yourself: Which Faces Were Made by A.I.? - The New York Times

Quantitative gait analysis and prediction using artificial intelligence for patients with gait disorders | Scientific Reports – Nature.com

Data acquisition

This study was carried out in accordance with the tenets of the Declaration of Helsinki and with the approval of the Ethics Committee of the Brest, France, university hospital (CHRU). Patients had also signed an informed consent form. Our work was conducted between 2021 and 2022. Data collected between June 2006 and June 2021 from 734 patients (115 adults and 619 children) who had undergone clinical 3D gait analysis were used. Their identities were preserved by respecting medical secrecy and protecting patient confidentiality. All data were recorded using the same motion analysis system (Vicon MX, Oxford Metrics, UK) and four force platforms (Advanced Mechanical Technology, Inc., Watertown, MA, USA) in the same motion laboratory (CHU Brest) between 2006 and 2022. The data collected by the 15 infrared cameras (sampling rate of 100 or 120 Hz) were synchronized with the ground reaction forces recorded by the force platforms (1000 or 1200 Hz). The 16 markers were placed according to the protocol by Kadaba et al.11. Marker trajectories and ground reaction forces were dual-pass filtered with a low-pass Butterworth filter at a cut-off frequency of 6 Hz. After an initial calibration in the standing position, all patients were asked to walk at a self-selected speed along a 10 m walkway.

Gait kinematics were processed using the Vicon Plug-in Gait model. Kinematics were time-normalized to stride duration, from 0 to 100%, from initial contact (IC) to the next IC of the ipsilateral foot. Nine gait joint angles (kinematic gait variables) were used: anteversion/retroversion of the pelvis, rotation of the pelvis, pelvic tilt, flexion/extension of the hip, abduction/adduction of the hip, internal/external rotation of the hip, flexion/extension of the knee, plantar/dorsiflexion of the ankle, and the foot's angle of progression. As a result, a gait cycle yielded \(101 \times 9\) measurements. Let \(E_{p,d}\) denote the gait session of patient p at datetime d. It can be written as follows:

$$E_{p,d} = \left\{ C_{E_{p,d}}^{1}, C_{E_{p,d}}^{2}, \ldots, C_{E_{p,d}}^{K} \right\}$$

(1)

where \(C_{E_{p,d}}^{k}\) is the k-th gait cycle of a gait session \(E_{p,d}\) and K the total number of gait cycles. Let \(c_{t,n}^{E_{p,d}^{k}}\) denote the value of the gait cycle \(C_{E_{p,d}}^{k}\) at time step t and joint angle n. To keep notations simple, \(c_{t,n}^{E_{p,d}^{k}}\) is referred to as \(c_{t,n}\) in what follows. \(C_{E_{p,d}}^{k}\) can simply be represented as a matrix with 101 rows and 9 columns, as follows:

$$C_{E_{p,d}}^{k} = \begin{bmatrix} c_{1,1} & c_{1,2} & \cdots & c_{1,9} \\ c_{2,1} & c_{2,2} & \cdots & c_{2,9} \\ \vdots & & & \vdots \\ c_{101,1} & c_{101,2} & \cdots & c_{101,9} \end{bmatrix}$$

(2)

The Gait Profile Score (GPS), a walking behavior score, was computed for each gait cycle from the previously described joint angles12,13,14. The GPS is a single index measure that summarizes the overall deviation of kinematic gait data relative to normative data. It can be decomposed to provide Gait Variable Scores (GVS) for nine key component kinematic gait variables, which are presented as a Movement Analysis Profile (MAP). The GVS corresponding to the n-th kinematic variable, \(GVS_n\), is given by15,16,17:

$$GVS_n = \sqrt{\frac{1}{T}\sum_{t=1}^{T}\left(c_{t,n} - c_{t,n}^{ref}\right)^{2}}$$

(3)

where t is a specific point in the gait cycle, T its total number of points (typically equal to 101)18,19, \(c_{t,n}\) the value of the kinematic variable n at point t, and \(c_{t,n}^{ref}\) its mean over the reference population (physiological norm). The GPS is obtained from the GVS scores15,17 as follows:

$$GPS = \sqrt{\frac{1}{N}\sum_{n=1}^{N}GVS_n^{2}}$$

(4)

where N is the total number of kinematic variables (equal to 9 by definition).
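To make Eqs. (3) and (4) concrete, here is a minimal NumPy sketch of how the GVS and GPS could be computed from a single 101 x 9 gait-cycle matrix. The function and variable names are ours, and the reference curves below are placeholder data, not the normative dataset used in the study.

```python
import numpy as np

def gvs_and_gps(cycle: np.ndarray, reference: np.ndarray) -> tuple[np.ndarray, float]:
    """Compute the nine Gait Variable Scores and the Gait Profile Score.

    cycle     : (101, 9) array of joint angles for one gait cycle
    reference : (101, 9) array of mean joint angles from the normative population
    """
    # GVS_n: RMS deviation of kinematic variable n over the 101 time points (Eq. 3)
    gvs = np.sqrt(np.mean((cycle - reference) ** 2, axis=0))   # shape (9,)
    # GPS: RMS of the nine GVS values (Eq. 4)
    gps = float(np.sqrt(np.mean(gvs ** 2)))
    return gvs, gps

# Example with random placeholder data
rng = np.random.default_rng(0)
cycle = rng.normal(size=(101, 9))
reference = np.zeros((101, 9))
gvs, gps = gvs_and_gps(cycle, reference)
print(gvs.shape, round(gps, 3))
```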

We had a total of 1459 gait sessions from 734 patients (115 adults and 619 children). Each patient had an average of 1.988 gait sessions, with a standard deviation of 1.515. A total of 53,693 gait cycles were collected, an average of 18 per gait session with a standard deviation of 6. Neurological conditions, notably cerebral palsy, are the most frequent etiologies, as shown in Fig. 1.

The average patient age at the first gait session is 14 years, with a standard deviation of 16 years. The time delay between the first and last gait sessions (for the 319 patients with more than one gait session) is 3.92 years on average, with a standard deviation of 3.24 years. Directly consecutive gait sessions are, on average, separated by approximately 740 days, with a standard deviation of 577 days. The shortest (resp. longest) time delay was 4 (resp. 4438) days. We had 1384 pairs of directly consecutive gait sessions belonging to 319 patients (the remaining patients were removed since they had only one gait session). The gait conditions involved are varied: without any equipment, with a cane, with a rollator, with an orthosis, or with a prosthesis. Only pairs of gait sessions without equipment were selected, so that both sessions were recorded under the same condition (79% of all available pairs, i.e., 1152). The first gait sessions in these pairs were used for training. Models were fed the gait cycles of these first gait sessions (i.e., 21,167 gait cycles in total).

GPS variation prediction is close enough to a Time Series Classification (TSC) problem that popular architectures proposed for TSC can be adopted. Consecutive gait session pairs \((E_{p,d}, E_{p,d+\Delta d})\) were considered. For each gait cycle \(C_{E_{p,d}}^{k}\) of the current gait session \(E_{p,d}\), a GPS variation \(\Delta GPS\) was computed using:

$$\Delta GPS(C_{E_{p,d}}^{k}) = GPS_{avg}(E_{p,d+\Delta d}) - GPS(C_{E_{p,d}}^{k})$$

(5)

where \(GPS_{avg}(E_{p,d+\Delta d})\) is the average GPS per cycle of \(E_{p,d+\Delta d}\) and \(GPS(C_{E_{p,d}}^{k})\) the GPS of the current gait cycle \(C_{E_{p,d}}^{k}\). The average GPS per cycle \(GPS_{avg}(E_{p,d})\) of a gait session \(E_{p,d}\) is simply:

$$GPS_{avg}(E_{p,d}) = \frac{1}{K}\sum_{k=1}^{K} GPS(C_{E_{p,d}}^{k})$$

(6)

\(\Delta GPS\) was ranked in a binary fashion: either it is negative, in which case the patient's gait improves (class 1), or it is positive, in which case the patient's gait worsens (class 0). The metric used is the Area Under the Curve (AUC).
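As an illustration of Eqs. (5) and (6) and of this binary labeling, a short NumPy sketch follows; the session arrays are invented placeholders rather than the authors' data structures.

```python
import numpy as np

def delta_gps_labels(current_session_gps: np.ndarray, next_session_gps: np.ndarray) -> np.ndarray:
    """Binary labels for every cycle of the current session.

    current_session_gps : GPS of each gait cycle in session E_{p,d}
    next_session_gps    : GPS of each gait cycle in session E_{p,d+Δd}
    """
    gps_avg_next = next_session_gps.mean()        # Eq. (6): average GPS per cycle of the next session
    delta = gps_avg_next - current_session_gps    # Eq. (5): one ΔGPS value per current cycle
    # ΔGPS < 0 -> gait improves (class 1); ΔGPS > 0 -> gait worsens (class 0)
    return (delta < 0).astype(int)

labels = delta_gps_labels(np.array([12.1, 11.8, 12.4]), np.array([10.9, 11.2]))
print(labels)  # [1 1 1]
```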

The distribution of patients between training, validation, and test groups is provided in Table 1. Such a split put 73%, 12%, and 14% of total gait cycles within the training, validation, and test groups, respectively.

To be exhaustive, one MLP, one recurrent neural network (LSTM), one hybrid architecture (Encoder), several CNN architectures (FCN, ResNet, t-LeNet), and a one-dimensional Transformer20 were included. The MLP and LSTM were designed and developed from scratch. Their hyper-parameters were optimized manually. FCN, ResNet, Encoder, and t-LeNet are among the most effective end-to-end discriminative architectures regarding the TSC state-of-the-art10. These methods were also compared to the Transformer, a more recent and popular architecture. The Transformer does not suffer from long-range context dependency issues compared to LSTM21. In addition, it is notable for requiring less training. The Adam optimizer22 and binary cross-entropy loss were employed23.

For the MLP, gait cycles were flattened so that the input length was equal to 909 time steps. The number of neurons was the same across all the fully connected layers. Many values of this number were tested to find the best structure for our task. In the same way, the number of layers was optimized. The corresponding architecture is shown in Fig. 2.

MLP architecture for prediction.
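A minimal Keras sketch of an MLP of this kind is shown below, with the flattened 909-value input, a sigmoid output for the improve/worsen classes, and the Adam optimizer, binary cross-entropy loss, and AUC metric mentioned above. The layer count and width are placeholders, since the tuned hyper-parameters are not reproduced here.

```python
from tensorflow import keras

def build_mlp(num_layers: int = 3, units: int = 128, lr: float = 1e-3) -> keras.Model:
    """MLP over a flattened gait cycle (101 time steps x 9 joint angles = 909 inputs)."""
    inputs = keras.Input(shape=(909,))
    x = inputs
    for _ in range(num_layers):
        # Same number of neurons in every fully connected layer
        x = keras.layers.Dense(units, activation="relu")(x)
    outputs = keras.layers.Dense(1, activation="sigmoid")(x)  # gait improves vs. worsens
    model = keras.Model(inputs, outputs)
    model.compile(optimizer=keras.optimizers.Adam(lr),
                  loss="binary_crossentropy",
                  metrics=[keras.metrics.AUC(name="auc")])
    return model

model = build_mlp()
model.summary()
```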

LSTM layers were stacked, and a dropout layer was added before the last layer to avoid overfitting. The corresponding architecture is shown in Fig. 3.

LSTM architecture for prediction.
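Along the same lines, a hedged Keras sketch of a stacked LSTM over the (101, 9) gait-cycle sequence with dropout before the output layer; the number of layers, units, and dropout rate are illustrative values, not the tuned ones.

```python
from tensorflow import keras

def build_lstm(num_layers: int = 2, units: int = 64, dropout: float = 0.3) -> keras.Model:
    """Stacked LSTM over a gait cycle of 101 time steps x 9 joint angles."""
    inputs = keras.Input(shape=(101, 9))
    x = inputs
    for i in range(num_layers):
        # Intermediate LSTM layers return full sequences; the last one returns a vector
        x = keras.layers.LSTM(units, return_sequences=(i < num_layers - 1))(x)
    x = keras.layers.Dropout(dropout)(x)                      # dropout before the last layer
    outputs = keras.layers.Dense(1, activation="sigmoid")(x)  # gait improves vs. worsens
    model = keras.Model(inputs, outputs)
    model.compile(optimizer="adam", loss="binary_crossentropy",
                  metrics=[keras.metrics.AUC(name="auc")])
    return model

model = build_lstm()
model.summary()
```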

For FCN, ResNet, Encoder, and t-LeNet, the architectures proposed in Ref. 10 were considered. They are shown in Figs. 4, 5, 6 and 7, respectively. We followed an existing implementation24 to set up the Transformer.

FCN architecture for prediction.

ResNet architecture for prediction.

Encoder architecture for prediction.

t-LeNet architecture for prediction.

Different data augmentation techniques were tested as a pre-processing step to avoid overfitting: jittering, scaling, window warping, permutation, and window slicing. Their hyper-parameters were empirically optimized for each model. These are among the techniques most frequently used in the TSC literature, particularly for sensor data10.
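Two of the listed augmentations, jittering and scaling, can be sketched in a few lines of NumPy; the noise levels shown are arbitrary placeholders, not the empirically optimized values.

```python
import numpy as np

def jitter(cycle: np.ndarray, sigma: float = 0.03) -> np.ndarray:
    """Add Gaussian noise independently to every sample of the (101, 9) gait cycle."""
    return cycle + np.random.normal(0.0, sigma, size=cycle.shape)

def scale(cycle: np.ndarray, sigma: float = 0.1) -> np.ndarray:
    """Multiply each of the 9 joint-angle channels by a random factor close to 1."""
    factors = np.random.normal(1.0, sigma, size=(1, cycle.shape[1]))
    return cycle * factors

augmented = scale(jitter(np.zeros((101, 9))))
print(augmented.shape)  # (101, 9)
```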

Image-based time series representation initiated a new branch of deep learning approaches that consider image transformation as an innovative pre-processing step for feature engineering25. In an attempt to reveal features and patterns that are less visible in the one-dimensional sequence of the original time series, many transformation methods have been developed to encode time series as input images.

In our study, sensor modalities are transformed to the visual domain using the 2D FFT in order to utilize a set of pre-trained CNN models for transfer learning on the converted image data. The full workflow of our framework is represented in Fig. 8.

Proposed \(\Delta GPS\) prediction workflow for the image-based approach.

The 2D FFT is used to work in the frequency (Fourier) domain because it efficiently extracts features based on the frequency content of the time series. It can be defined as:

$$F(u,v) = \frac{1}{T \cdot N}\sum_{t=0}^{T}\sum_{n=0}^{N} c_{t,n} \exp\left(-j2\pi\left(\frac{ut}{T} + \frac{vn}{N}\right)\right)$$

(7)

where F(u,v) is the direct Fourier transform of the gait cycle. It is a complex function that shows the phase and magnitude of the signal in the frequency domain; u and v are the frequency space coordinates. The magnitude of the 2D FFT, |F(u,v)|, also known as the spectrum, is a two-dimensional signal that represents frequency information. Because the 2D FFT has translation and rotation properties, the zero-frequency component can be moved to the center of |F(u,v)| without losing any information, making the spectrum image more visible. The centralized FFT spectra were computed and fed to the proposed deep learning models. A centralized FFT spectrum for a given gait cycle is represented in Fig. 9.

2D FFT for a given gait cycle. (a) The gait cycle; (b) FFT spectrum of the gait cycle; (c) Centralized FFT spectrum of the gait cycle.
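A minimal NumPy sketch of the transformation defined in Eq. (7): compute the 2D FFT of a (101, 9) gait-cycle matrix, take its magnitude, and shift the zero-frequency component to the center. The log scaling at the end is our own assumption, added only to make the spectrum easier to visualize.

```python
import numpy as np

def centralized_fft_spectrum(cycle: np.ndarray) -> np.ndarray:
    """Return the centralized 2D FFT magnitude spectrum of a (101, 9) gait cycle."""
    spectrum = np.fft.fft2(cycle)            # complex F(u, v)
    magnitude = np.abs(spectrum)             # |F(u, v)|, the spectrum
    centered = np.fft.fftshift(magnitude)    # move the zero-frequency component to the center
    # Log scaling to make the spectrum image more visible (assumption, not from the paper)
    return np.log1p(centered)

image = centralized_fft_spectrum(np.random.default_rng(0).normal(size=(101, 9)))
print(image.shape)  # (101, 9)
```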

The Timm library's26 pre-trained VGG16, ResNet34, EfficientNet_b0, and the Vision Transformer vit_base_patch16_224 were investigated. They were pre-trained on a large collection of images in a supervised fashion. For the Transformer, the pre-training was at a resolution of \(224 \times 224\) pixels. Its input images were treated as a sequence of fixed-size patches (resolution \(16 \times 16\)), which were linearly embedded.

Converting our grayscale images to RGB images was not necessary because Timm's implementations support any number of input channels. The minimum input size for VGG16 is \(32 \times 32\). The image's width dimension (N) equals 9, which is less than 32. In order to fit the minimum required size, the 2D FFT images were repeated 4 times along this width dimension. Transfer learning with fine-tuning was employed. A final fully connected layer with a single neuron was used. All convolutional blocks were trainable, just like the top layers.
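A hedged PyTorch/Timm sketch of this transfer-learning setup, using ResNet34 as an example backbone with a single input channel, a one-neuron head, and the spectrum repeated four times along its width; the choice of backbone in the snippet and the fine-tuning details are illustrative, not taken verbatim from the paper.

```python
import torch
import timm

# Pre-trained backbone with one input channel and a single output neuron
# (pretrained=True downloads ImageNet weights for transfer learning)
model = timm.create_model("resnet34", pretrained=True, in_chans=1, num_classes=1)

# Leave all layers trainable for fine-tuning (convolutional blocks included)
for p in model.parameters():
    p.requires_grad = True

# A (101, 9) FFT spectrum repeated 4 times along the width dimension: 9 -> 36 >= 32
spectrum = torch.randn(1, 1, 101, 9)
x = spectrum.repeat(1, 1, 1, 4)
logit = model(x)
print(logit.shape)  # torch.Size([1, 1])
```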

The pre-trained Timm models are deep and sophisticated, with many layers. As a result, a CNN model with fewer parameters, designed from scratch, was conceived. The number of two-dimensional convolutional layers was a hyper-parameter optimized over a finite range of values {1, 2, 3, 4, 5}. After the convolutional block, a dropout function was applied. Following that, two-dimensional max-pooling (MaxPooling2D) and batch normalization were used. The flattened output of the batch normalization was then fed to a dense layer whose number of neurons was tuned. In order to predict \(\Delta GPS\), our model had a dense output layer with a single neuron. The corresponding architecture is shown in Fig. 10.

Tailored 2D CNN for prediction.

The following are all of the architecture hyper-parameters to tune: the number of convolutional layers (num_layers), the number of filters for each convolution layer (num_filters), the kernel size of each convolution layer (kernel_size), the dropout rate (dropout), the pooling size of the MaxPooling2D (pool_size), the number of neurons in the dense layer (units), and the learning rate (lr). Five models with a varying number of convolutional layers (from 1 to 5) were tested. For each of them, the rest of the hyper-parameters were tuned using KerasTuner9 to maximize the validation AUC.
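A hedged KerasTuner sketch of the kind of search space described above; the hyper-parameter names follow the list in this paragraph, while the value ranges, the input shape, and the search strategy are assumptions rather than the authors' settings.

```python
import keras_tuner as kt
from tensorflow import keras

def build_cnn(hp: kt.HyperParameters) -> keras.Model:
    """Tailored 2D CNN over centralized FFT spectra (assumed input 101 x 9 x 1)."""
    inputs = keras.Input(shape=(101, 9, 1))
    x = inputs
    for _ in range(hp.Int("num_layers", 1, 5)):
        x = keras.layers.Conv2D(hp.Int("num_filters", 8, 64, step=8),
                                hp.Choice("kernel_size", [3, 5]),
                                padding="same", activation="relu")(x)
    x = keras.layers.Dropout(hp.Float("dropout", 0.0, 0.5, step=0.1))(x)
    x = keras.layers.MaxPooling2D(hp.Choice("pool_size", [2, 3]))(x)
    x = keras.layers.BatchNormalization()(x)
    x = keras.layers.Flatten()(x)
    x = keras.layers.Dense(hp.Int("units", 16, 128, step=16), activation="relu")(x)
    outputs = keras.layers.Dense(1, activation="sigmoid")(x)
    model = keras.Model(inputs, outputs)
    model.compile(optimizer=keras.optimizers.Adam(hp.Float("lr", 1e-4, 1e-2, sampling="log")),
                  loss="binary_crossentropy",
                  metrics=[keras.metrics.AUC(name="auc")])
    return model

# Random search maximizing the validation AUC (placeholder trial budget)
tuner = kt.RandomSearch(build_cnn, objective=kt.Objective("val_auc", "max"), max_trials=10)
```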

Originally posted here:
Quantitative gait analysis and prediction using artificial intelligence for patients with gait disorders | Scientific Reports - Nature.com

AI Revolution: Unleashing the Power of Artificial Intelligence in Our Lives – Medium

Artificial Intelligence (AI) has swiftly emerged as a game-changer, transforming various aspects of our lives. With recent advancements in generative AI tools like ChatGPT, it's evident that we are standing on the brink of an AI revolution that will reshape our world.

AI is not a futuristic concept anymore; it is already ingrained in our day-to-day lives. Although we might not always realize it, AI is all around us, seamlessly integrated into the technology we use.

From online shopping and internet searches to food deliveries and ride-hailing services, AI has become an integral part of our digital experiences. While it might not resemble the AI portrayed in science fiction movies, today's AI possesses the ability to learn and improve, simulating cognitive functions similar to our own.

One common concern associated with AI is the fear of job displacement. It's true that AI has the potential to automate certain tasks, but it is not yet capable of fully replicating the diverse skill sets required for most jobs. While manual labor and routine tasks like cashiering have already seen automation, knowledge-intensive roles and those involving human interaction are less susceptible to immediate replacement.

Moreover, the rise of AI also brings forth new job opportunities, particularly in areas related to technology and AI itself.

Ignoring the emergence of AI and its potential impact on businesses is a grave mistake. Embracing AI and understanding how it can benefit your industry or business is crucial for staying competitive in the rapidly evolving landscape.

Failing to adapt to AI-driven changes may result in being overtaken by competitors who have capitalized on the opportunities presented by this transformative technology. Just as Blockbuster Video and Kodak failed to acknowledge the threats to their core business models, businesses today must start planning for AI integration to ensure their long-term success.

Generative AI tools have opened up new possibilities for enhancing our own work and productivity. With tools like ChatGPT, professionals can leverage AI to generate drafts, outlines, and important points for reports and presentations. Creative fields, such as music and design, can benefit from generative AI tools that assist in creating videos, music, and images. While the output of these tools may not be perfect for finished work, they significantly speed up tasks like ideation and drafting, offering instant answers and advice on a wide range of topics.

As AI becomes increasingly intertwined with our lives, ensuring its ethical use and transparency is of paramount importance. Trust is the bedrock of AI's potential to address pressing global challenges, such as climate change and healthcare.

To establish trust, AI must be explainable, enabling users to understand the basis of its decisions. Moreover, ethical considerations are crucial to prevent biases and discrimination that may arise from biased or incomplete data. Addressing these challenges will pave the way for AIs positive impact on society.


The field of education has been significantly influenced by AI advancements. AI-powered tools can revolutionize the way students learn and interact with educational content. Personalized learning experiences, adaptive assessments, and intelligent tutoring systems hold the potential to enhance student outcomes and engagement.

AI can also streamline administrative tasks, freeing up educators time to focus on individualized instruction and student support. However, it is essential to strike a balance between AI integration and human interaction to ensure a holistic and effective learning environment.

AI tools are poised to revolutionize the recruitment and admissions processes in the education sector. With AI-powered search engines and chatbots, educational institutions can enhance their outreach efforts and provide personalized support to prospective students. Rich search prompts based on student profiles, reduced response times for queries and applications, and personalized communication can significantly improve the recruitment experience. Leveraging AI in these areas enables institutions to better understand student needs, optimize their marketing strategies, and improve conversion rates.

While AI holds immense potential, it is not without limitations and challenges. Large language models like ChatGPT are prone to generating incorrect or nonsensical answers, highlighting the need for cautious interpretation of AI-generated content.

Concerns regarding the automation of propaganda and the spread of disinformation have also arisen. It is crucial to strike a balance between the benefits and potential risks associated with AI, ensuring that its development and deployment prioritize ethical considerations and address societal concerns.

Looking ahead, the future of AI is brimming with possibilities. As AI models continue to evolve and improve, we can expect even more powerful and sophisticated applications. OpenAI's GPT-4, with its potential for hundreds of billions of parameters, represents the ongoing advancements in AI capabilities.

While challenges and disruptions may arise on the path to artificial general intelligence, the potential benefits far outweigh the obstacles. It is through overcoming these challenges that we can unlock the full potential of AI and usher in an era of unprecedented innovation and progress.

The AI revolution is here, and it is transforming the way we live, work, and interact with technology. Rather than fearing AI or underestimating its impact, we must embrace this transformative technology and harness its potential for positive change.

By understanding the nuances of AI, exploring its applications, and prioritizing ethical considerations, we can navigate the AI era with confidence. Let us seize the opportunities presented by AI, shaping a future where human intelligence and AI coexist harmoniously to create a better world.

The AI revolution is not a distant dream; it is unfolding before our eyes. AI's ability to simulate human-like cognitive functions and augment our capabilities holds immense promise.

As AI becomes an integral part of various industries and sectors, understanding its potential, limitations, and ethical implications becomes imperative.

By embracing AI, we can unlock a world of possibilities and pave the way for a future where human ingenuity and AI-driven advancements coexist harmoniously. Let us embark on this journey together, shaping a future that harnesses the true potential of AI to create a better world for all.

See the rest here:
AI Revolution: Unleashing the Power of Artificial Intelligence in Our Lives - Medium

Arguing the Pros and Cons of Artificial Intelligence in Healthcare – HealthITAnalytics.com

December 26, 2023 - In what seems like the blink of an eye, mentions of artificial intelligence (AI) have become ubiquitous in the healthcare industry.

From deep learning algorithms that can read computed tomography (CT) scans faster than humans to natural language processing (NLP) that can comb through unstructured data in electronic health records (EHRs), the applications for AI in healthcare seem endless.

But like any technology at the peak of its hype curve, artificial intelligence faces criticism from its skeptics alongside enthusiasm from die-hard evangelists.

Despite its potential to unlock new insights and streamline the way providers and patients interact with healthcare data, AI may bring considerable threats of privacy problems, ethical concerns, and medical errors.

Balancing the risks and rewards of AI in healthcare will require a collaborative effort from technology developers, regulators, end-users, and consumers.


The first step will be addressing the highly divisive discussion points commonly raised when considering the adoption of some of the most complex technologies the healthcare world has to offer.

AI in healthcare will challenge the status quo as the industry adapts to new technologies. As a result, patient-provider relationships will be forever changed, and the idea that AI will change the role of human workers to some extent is worth considering.

Seventy-one percent of Americans surveyed by Gallup in 2018 believed that AI will eliminate more healthcare jobs than it creates, with just under a quarter indicating that they believe the healthcare industry will be among the first to see widespread handouts of pink slips due to the rise of machine learning tools.

However, more recent data around occupational shifts and projected job growth don't necessarily bear this out.

A report published earlier this year by McKinsey & Co. indicates that AI could automate up to 30 percent of the hours worked by US employees by 2030, but healthcare jobs are projected to remain relatively stable, if not grow.


The report notes that health aides and wellness workers will have anywhere from 4 to 20 percent more of their work automated, and health professionals overall can expect up to 18 percent of their work to be automated by 2030.

But healthcare employment demand is expected to grow 30 percent by then, negating the potential harmful impacts of AI on the healthcare workforce.

Despite these promising projections, fears around AI and the workforce may not be entirely unfounded.

AI tools that consistently exceed human performance thresholds are constantly in the headlines, and the pace of innovation is only accelerating.

Radiologists and pathologists may be especially vulnerable, as many of the most impressive breakthroughs are happening around imaging analytics and diagnostics.


In a 2021 report, Stanford University researchers assessed advancements in AI over the last five years to see how perceptions and technologies have changed. Researchers found evidence of growing AI use in robotics, gaming, and finance.

The technologies supporting these breakthrough capabilities are also finding a home in healthcare, and physicians are starting to be concerned that AI is about to evict them from their offices and clinics. However, providers' perceptions of AI vary, with some cautiously optimistic about its potential.

"Recent years have seen AI-based imaging technologies move from an academic pursuit to commercial projects. Tools now exist for identifying a variety of eye and skin disorders, detecting cancers, and supporting measurements needed for clinical diagnosis," the report stated.

"Some of these systems rival the diagnostic abilities of expert pathologists and radiologists, and can help alleviate tedious tasks (for example, counting the number of cells dividing in cancer tissue). In other domains, however, the use of automated systems raises significant ethical concerns."

At the same time, however, one could argue that there simply aren't enough radiologists and pathologists, or surgeons, or primary care providers, or intensivists to begin with. The US is facing a dangerous physician shortage, especially in rural regions, and the drought is even worse in developing countries around the world.

AI may also help alleviate the stresses of burnout that drive healthcare workers to resign. The epidemic affects the majority of physicians, not to mention nurses and other care providers, who are likely to cut their hours or take early retirement rather than continue powering through paperwork that leaves them unfulfilled.

Automating some of the routine tasks that take up a physician's time, such as EHR documentation, administrative reporting, or even triaging CT scans, can free up humans to focus on the complicated challenges of patients with rare or serious conditions.

Most AI experts believe that this blend of human experience and digital augmentation will be the natural settling point for AI in healthcare. Each type of intelligence will bring something to the table, and both will work together to improve the delivery of care.

Some have raised concerns that clinicians may become over-reliant on these technologies as they become more common in healthcare settings, but experts emphasize that this is unlikely to occur, as automation bias isn't a new topic in healthcare and there are existing strategies to prevent it.

Patients also appear to believe that AI will improve healthcare in the long run, despite some concerns about the technology's use.

A research letter published in JAMA Network Open last year that surveyed just under 1,000 respondents found that over half believed that AI would make healthcare either somewhat or much better. However, two-thirds of respondents indicated that being informed if AI played a big role in their diagnosis or treatment was very important to them.

Concerns about the use of AI in healthcare appear to vary somewhat by age, but research conducted by SurveyMonkey and Outbreaks Near Me, a collaboration between epidemiologists from Boston Children's Hospital and Harvard Medical School, shows that generally, patients prefer that important healthcare tasks, such as prescribing pain medication or diagnosing a rash, be led by a medical professional rather than an AI tool.

But whether patients and providers are comfortable with the technology or not, AI is advancing in healthcare. Many health systems are already deploying the tools across a plethora of use cases.

Michigan Medicine leveraged ambient computing, a type of AI designed to create an environment that is responsive to human behaviors, to further its clinical documentation improvement efforts in the midst of the COVID-19 pandemic.

Researchers from Mayo Clinic are taking a different AI approach: they aim to use the tech to improve organ transplant outcomes. Currently, these efforts are focused on developing AI tools that can prevent the need for a transplant, improve donor matching, increase the number of usable organs, prevent organ rejection, and bolster post-transplant care.

AI and other data analytics tools can also play a key role in population health management. A comprehensive strategy to manage population health requires that health systems utilize a combination of data integration, risk stratification, and predictive analytics tools. Care teams at Parkland Center for Clinical Innovation (PCCI) and Parkland Hospital in Dallas, Texas are leveraging some of these tools as part of their program to address preterm birth disparities.

Despite the potential for AI in healthcare, though, implementing the technology while protecting privacy and security is not easy.

AI in healthcare presents a whole new set of challenges around data privacy and security, challenges that are compounded by the fact that most algorithms need access to massive datasets for training and validation.

Shuffling gigabytes of data between disparate systems is uncharted territory for most healthcare organizations, and stakeholders are no longer underestimating the financial and reputational perils of a high-profile data breach.

Most organizations are advised to keep their data assets closely guarded in highly secure, HIPAA-compliant systems. In light of an epidemic of ransomware and knock-out punches from cyberattacks of all kinds, chief information security officers have every right to be reluctant to lower their drawbridges and allow data to move freely into and out of their organizations.

Storing large datasets in a single location makes that repository a very attractive target for hackers. In addition to AI's position as an enticing target for threat actors, there is a severe need for regulations surrounding AI and how to protect patient data when using these technologies.

Experts caution that ensuring healthcare data privacy will require that existing data privacy laws and regulations be updated to include information used in AI and ML systems, as these technologies can re-identify patients if data is not properly de-identified.

However, AI falls into a regulatory gray area, making it difficult to ensure that every user is bound to protect patient privacy and will face consequences for not doing so.

In addition to more traditional cyberattacks and patient privacy concerns, a 2021 study by University of Pittsburgh researchers found that cyberattacks using falsified medical images could fool AI models.

The study shed light on the concept of adversarial attacks, in which bad actors aim to alter images or other data points to make AI models draw incorrect conclusions. The researchers began by training a deep learning algorithm to identify cancerous and benign cases with more than 80 percent accuracy.

Then, the researchers developed a generative adversarial network (GAN), a computer program that generates false images by misplacing cancerous regions from negative or positive images to confuse the model.

The AI model was fooled by 69.1 percent of the falsified images. Of the 44 positive images made to look negative, the model identified 42 as negative. Of the 319 negative images doctored to look positive, the AI model classified 209 as positive.

These findings show not only how these types of adversarial attacks are possible, but also how they can cause AI models to make a wrong diagnosis, opening up the potential for major patient safety issues.

The researchers emphasized that by understanding how healthcare AI behaves under an adversarial attack, health systems can better understand how to make models safer and more robust.

Patient privacy can also be at risk in health systems that engage in electronic phenotyping via algorithms integrated into EHRs. The process is designed to flag patients with certain clinical characteristics to gain better insights into their health and provide clinical decision support. However, electronic phenotyping can lead to a series of ethical pitfalls around patient privacy, including unintentionally revealing non-disclosed information about a patient.

However, there are ways to protect patient privacy and provide an additional layer of protection to clinical data, like privacy-enhancing technologies (PETs). Algorithmic, architectural, and augmentation PETs can all be leveraged to secure healthcare data.

Security and privacy will always be paramount, but this ongoing shift in perspective as stakeholders get more familiar with the challenges and opportunities of data sharing is vital for allowing AI to flourish in a health IT ecosystem where data is siloed and access to quality information is one of the industry's biggest obstacles.

The thorniest issues in the debate about AI are the philosophical ones. In addition to the theoretical quandaries about who gets the ultimate blame for a life-threatening mistake, there are tangible legal and financial consequences when the word "malpractice" enters the equation.

Artificial intelligence algorithms are complex by their very nature. The more advanced the technology gets, the harder it will be for the average human to dissect the decision-making processes of these tools.

Organizations are already struggling with the issue of trust when it comes to heeding recommendations flashing on a computer screen, and providers are caught in the difficult situation of having access to large volumes of data but not feeling confident in the tools that are available to help them parse through it.

While some may assume that AI is completely free of human biases, these algorithms will learn patterns and generate outputs based on the data they were trained on. If these data are biased, then the model will be, too.

There are currently few reliable mechanisms to flag such biases. Black-box artificial intelligence tools that give little rationale for their decisions only complicate the problem and make it more difficult to assign responsibility to an individual when something goes awry.

When providers are legally responsible for any negative consequences that could have been identified from data they have in their possession, they need to be certain that the algorithms they use are presenting all of the relevant information in a way that enables optimal decision-making.

However, stakeholders are working to establish guidelines to address algorithmic bias.

In a 2021 report, the Cloud Security Alliance (CSA) suggested that the rule of thumb should be to assume that AI algorithms contain bias and work to identify and mitigate those biases.

"The proliferation of modeling and predictive approaches based on data-driven techniques has helped to expose various social biases baked into real-world systems, and there is increasing evidence that the general public has concerns about the societal risks of AI," the report stated.

"Identifying and addressing biases early in the problem formulation process is an important step to improving the process."

The White House Blueprint for an AI Bill of Rights and the Coalition for Health AI (CHAI)'s Blueprint for Trustworthy AI Implementation Guidance and Assurance for Healthcare have also recently provided some guidance for the development and deployment of trustworthy AI, but these can only go so far.

Developers may unknowingly introduce biases to AI algorithms or train the algorithms using incomplete datasets. Regardless of how it happens, users must be aware of the potential biases and work to manage them.

In 2021, the World Health Organization (WHO) released the first global report on the ethics and governance of AI in healthcare. WHO emphasized the potential health disparities that could emerge as a result of AI, particularly because many AI systems are trained on data collected from patients in high-income care settings.

WHO suggested that ethical considerations should be taken into account during the design, development, and deployment of AI technology.

Specifically, WHO recommended that individuals working with AI operate under the following ethical principles:

Bias in AI is a significant drawback, but one that developers, clinicians, and regulators are actively working to address.

Ensuring that AI develops ethically, safely, and meaningfully in healthcare will be the responsibility of all stakeholders: providers, patients, payers, developers, and everyone in between.

There are more questions to answer than anyone can even fathom. But unanswered questions are the reason to keep exploring, not to hang back.

The healthcare ecosystem has to start somewhere, and from scratch is as good a place as any.

Defining the industry's approaches to AI is a significant responsibility and a golden opportunity to avoid some of the past mistakes and chart a better path for the future.

It's an exciting, confusing, frustrating, optimistic time to be in healthcare, and the continuing maturity of artificial intelligence will only add to the mixed emotions of these ongoing debates. There may not be any clear answers to these fundamental challenges at the moment, but humans still have the opportunity to take the reins, make the hard choices, and shape the future of patient care.

See the original post here:
Arguing the Pros and Cons of Artificial Intelligence in Healthcare - HealthITAnalytics.com

Michael Cohen Used Artificial Intelligence to Feed Lawyer Bogus Cases – Yahoo! Voices

NEW YORK - Michael Cohen, the onetime fixer for former President Donald Trump, said in court papers unsealed Friday that he had mistakenly given his lawyer bogus legal citations generated by the artificial intelligence program Google Bard.

The fictitious citations were used by Cohen's lawyer in a motion submitted to a federal judge, Jesse Furman. Cohen, who pleaded guilty in 2018 to campaign finance violations and served time in prison, had asked the judge for an early end to the court's supervision of his case now that he is out of prison and has complied with the conditions of his release.

In a sworn declaration made public Friday, Cohen explained that he had not kept up with "emerging trends (and related risks) in legal technology" and did not realize that Google Bard was "a generative text service that, like ChatGPT, could show citations and descriptions that looked real but actually were not."


He also said he did not realize that the lawyer filing the motion on his behalf, David Schwartz, would drop the cases into his submission wholesale without even confirming that they existed.

The episode, the second this year in which lawyers in Manhattan federal court have cited bogus decisions created by AI, could have implications for a Manhattan criminal case against Trump in which Cohen is expected to be the star witness. The former president's lawyers have long attacked Cohen as a serial fabulist; now they say they have a brand-new example.

Schwartz, in his own declaration, acknowledged using the three citations in question and said he had not independently reviewed the cases because Cohen indicated that another lawyer, E. Danya Perry, was providing suggestions for the motion.

"I sincerely apologize to the court for not checking these cases personally before submitting them to the court," Schwartz wrote.

Barry Kamins, a lawyer for Schwartz, declined to comment Friday.

Perry has said she began representing Cohen only after Schwartz filed the motion. She wrote to Furman on Dec. 8 that after reading the already-filed document, she could not verify the case law being cited. In a statement at the time, she said that "consistent with my ethical obligation of candor to the court," she advised Judge Furman of the issue.

She said in a letter made public Friday that Cohen, a former lawyer who has been disbarred, did not know that the cases he identified were not real and, unlike his attorney, had no obligation to confirm as much.

"It must be emphasized that Mr. Cohen did not engage in any misconduct," Perry wrote. She said Friday that Cohen had no comment and that he had consented to the unsealing of the court papers after the judge raised the question of whether they contained information protected by the attorney-client privilege.

The imbroglio began when Furman said in an order Dec. 12 that he could not find any of the three decisions. He ordered Schwartz to provide copies or "a thorough explanation of how the motion came to cite cases that do not exist and what role, if any, Mr. Cohen played."

The matter could have significant implications, given Cohen's pivotal role in a case brought by the Manhattan district attorney that is scheduled for trial March 25.

The district attorney, Alvin Bragg, charged Trump with orchestrating a hush-money scheme that centered on a payment Cohen made during the 2016 election to an adult film actress, Stormy Daniels. Trump has pleaded not guilty to 34 felony charges.

Seeking to rebut Trump's lawyers' claims that Cohen is untrustworthy, his defenders have said that Cohen lied on Trump's behalf but has told the truth since splitting with the former president in 2018 and pleading guilty to the federal charges.

Trump's lawyers immediately seized on the Google Bard revelation Friday. Susan Necheles, a lawyer representing Trump in the coming Manhattan trial, said it was "typical Michael Cohen."

"The D.A.'s office should not be basing a case on him," Necheles said. "He's an admitted perjurer and has pled guilty to multiple felonies, and this is just an additional indication of his lack of character and ongoing criminality."

Perry, the lawyer now representing Cohen on the motion, rejected that assertion.

"These filings and the fact that he was willing to unseal them show that Mr. Cohen did absolutely nothing wrong," she said. "He relied on his lawyer, as he had every right to do. Unfortunately, his lawyer appears to have made an honest mistake in not verifying the citations in the brief he drafted and filed."

A spokesperson for Bragg declined to comment Friday.

Prosecutors may argue that Cohens actions were not intended to defraud the court, but rather, by his own admission, a woeful misunderstanding of new technology.

The nonexistent cases cited in Schwartz's motion (United States v. Figueroa-Flores, United States v. Ortiz and United States v. Amato) came with corresponding summaries and notations that they had been affirmed by the 2nd U.S. Circuit Court of Appeals. It has become clear that they were hallucinations created by the chatbot, taking bits and pieces of actual cases and combining them with robotic imagination.

Furman noted in his Dec. 12 order that the Figueroa-Flores citation in fact referred to a page from a decision that has nothing to do with supervised release.

The Amato case named in the motion, the judge said, actually concerned a decision of the Board of Veterans Appeals, an administrative tribunal.

And the citation to the Ortiz case, Furman wrote, appeared to correspond to nothing at all.

William K. Rashbaum contributed reporting.

c.2023 The New York Times Company

Go here to see the original:
Michael Cohen Used Artificial Intelligence to Feed Lawyer Bogus Cases - Yahoo! Voices