Archive for the ‘Artificial Intelligence’ Category

Autonomous artificial intelligence increases real-world specialist … – Nature.com

Theoretical foundation of unbiased estimation of healthcare productivity

To test our central hypothesis, that autonomous AI improves healthcare system productivity, in an unbiased manner, we developed a healthcare productivity model based on rational queueing theory30, as widely used in the healthcare operations management literature31. A healthcare provider system, whether a hospital, an individual physician providing a service, an autonomous AI providing a service at a performance level at least as high as a human expert, a combination thereof, or a national healthcare system, is modeled as an overloaded queue facing a potential demand that is greater than its capacity; that is, Λ > μ, where Λ denotes the total demand on the system (patients seeking care) and μ denotes the maximum number of patients the system can serve per unit of time. We define system productivity as

$$\lambda = \frac{n_q}{t},$$

(1)

where nq is the number of patients who completed a care encounter with a quality of care that was non-inferior to q, and t is the length of time over which nq was measured, allowing for systems that include autonomous AI in some fashion. While standard definitions of healthcare labor productivity, such as in Camasso et al.7, ignore quality of care, q here is the quality of care when it is provided by a human expert, such as a retina specialist, to address potential concerns about the safety of healthcare AI8: our definition of λ, as represented by Eq. (1), guarantees that quality of care is either maintained or improved.
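As a concrete illustration of Eq. (1), the sketch below computes λ from a count of completed encounters and an observation window; the numbers are hypothetical and not taken from the trial.

```python
# Minimal sketch of Eq. (1): lambda = n_q / t, completed care encounters of
# non-inferior quality per unit time. The example numbers are hypothetical.

def productivity(n_q: int, t_hours: float) -> float:
    """Return lambda, completed encounters per hour."""
    if t_hours <= 0:
        raise ValueError("observation time must be positive")
    return n_q / t_hours

# Example: 12 completed encounters over an 8-hour clinic day -> 1.5 per hour.
print(productivity(n_q=12, t_hours=8.0))
```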

ρ denotes the proportion of patients who receive and complete the care encounter in a steady state, where the average number of patients who successfully complete the care encounter is equal to the average number of patients who gain access to care per unit of time; in other words, λ = ρΛ. See Fig. 3. Remember that in the overloaded queue model, there are many patients (a proportion 1 − ρ) who do not gain access. The model is agnostic about the specific manner in which access is determined: it may take the form of a hospital administrator who establishes a maximum number of patients admitted to the system, or of barriers to care (such as an inability to pay, travel long distances, or take time off work, or other sources of health inequities) limiting a patient's access to the system. As mentioned, the model is also agnostic on whether the care encounter is performed and completed by an autonomous AI, human providers, or a combination thereof; from the patient perspective, we measure the number of patients who complete the appropriate level of care per unit time at a performance level at least as high as a human physician. Not every patient will be eligible to start their encounter with autonomous AI, for example because they do not fit the inclusion criteria for the autonomous AI, and we denote by α, 0 < α < 1, the proportion of eligible patients; not every patient will be able to complete their care encounter with autonomous AI, because the autonomous AI may diagnose them with disease requiring a human specialist, and we denote by β, 0 < β < 1, the proportion of patients who started their care encounter with AI and still required a human provider to complete their encounter. The remaining proportion (1 − β) are diagnosed as disease absent and start and complete their encounter with autonomous AI only, without needing to see a human provider. For all permutations, productivity is measured as the number of patients who complete a provided care encounter per unit of time, with λC the productivity associated with the control group, where the screening result of the AI system is not used to determine the rest of the care process, and λAI the productivity associated with the intervention group, where the screening result of the AI system is used to determine the rest of the care process, and where the AI performance is at least as high as the human provider's.

a Mathematical model of an overloaded-queue healthcare system used to estimate productivity λ = ρΛ without observer bias. b Model of an overloaded-queue healthcare system where autonomous AI is added to the workflow.
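To make the two counting rules concrete, here is a minimal sketch of how completed encounters accumulate in each arm of the workflow sketched in Fig. 3, using the eligibility proportion α and specialist-referral proportion β from the model above; the function names and all numeric values are illustrative assumptions, not trial data.

```python
# Illustrative sketch of the two-arm counting rule; NOT trial data.
# alpha: proportion of presenting patients eligible to start with autonomous AI.
# beta:  proportion of AI-started patients the AI refers on to a human specialist.

def completed_control_arm(n_presenting: int, specialist_capacity: int) -> int:
    """Control arm: every completed encounter requires the specialist (capacity binds)."""
    return min(n_presenting, specialist_capacity)

def completed_ai_arm(n_presenting: int, alpha: float, beta: float,
                     specialist_capacity: int) -> int:
    """Intervention arm: AI-only completions (disease absent) plus specialist visits."""
    eligible = int(n_presenting * alpha)
    referred = int(eligible * beta)              # AI finds disease, still need specialist
    ai_only = eligible - referred                # complete with autonomous AI alone
    specialist_seen = min(referred + (n_presenting - eligible), specialist_capacity)
    return ai_only + specialist_seen

# Hypothetical day: 100 presenting patients, capacity for 40 specialist visits.
print(completed_control_arm(100, 40))                      # 40
print(completed_ai_arm(100, alpha=0.8, beta=0.3,
                       specialist_capacity=40))            # 96
```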

Because an autonomous AI that completes the care process for patients without disease (typically less complex patients), as in the present study, will result in relatively more complex patients being seen by the human specialist, we calculate complexity-adjusted specialist productivity as

$$\lambda_{ca} = \frac{\bar{c}\, n_q}{t},$$

(2)

with \(\bar{c}\) the average complexity, as determined with an appropriate method, over all nq patients who complete the care encounter with that specialist. This definition of λca, as represented by Eq. (2), corrects for a potentially underestimated productivity, because the human specialist sees more clinically complex patients requiring more time than they would without the AI changing the patient mix.
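A small sketch of Eq. (2) follows; the per-patient complexity scores below are hypothetical, and the point is simply that the mean complexity \(\bar{c}\) scales the encounter rate.

```python
# Minimal sketch of Eq. (2): lambda_ca = c_bar * n_q / t.
# Complexity scores are hypothetical placeholders.

def complexity_adjusted_productivity(complexities: list[float], t_hours: float) -> float:
    n_q = len(complexities)
    c_bar = sum(complexities) / n_q          # average complexity of patients seen
    return c_bar * n_q / t_hours

# Four patients with scores 2, 3, 1, 2 seen over two hours -> (2.0 * 4) / 2 = 4.0
print(complexity_adjusted_productivity([2, 3, 1, 2], t_hours=2.0))
```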

We focus on the implication Λ > μ; in other words, that system capacity is limited relative to potential demand, as that is the only way in which λC and λAI can be measured without recruitment bias, i.e., in a context where patients arrive throughout the day without appointment or other filter, as is the case in Emergency Departments in the US and in almost all clinics in low- and middle-income countries (LMICs). This is not the case, however, in contexts where most patient visits are scheduled, and thus cannot be changed dynamically, and measuring λ in such a context would lead to bias. Thus, we selected a clinic with a very large demand (Λ ≫ μ), Deep Eye Care Foundation (DECF) in Bangladesh, as the trial setting in order to avoid recruitment bias.

The B-PRODUCTIVE (Bangladesh-PRODUCTIVity in Eyecare) study was a preregistered, prospective, double-masked, cluster-randomized clinical trial performed in retina specialist clinics at DECF, a not-for-profit, non-governmental hospital in Rangpur, Bangladesh, between March 20 and July 31, 2022. The clusters were specialist clinic days, and all clinic days were eligible during the study period. Patients are not scheduled; there are no pre-scheduled patient visit times or time slots. Instead, access to a specialist clinic visit is determined by clinic staff on the basis of observed congestion, as explained in the previous section.

The study protocol was approved by the ethics committees at the Asian Institute of Disability and Development (Dhaka, Bangladesh; # Southasia-hrec-2021-4-02), the Bangladesh Medical Research Council (Dhaka, Bangladesh; # 475 27 02 2022) and Queen's University Belfast (Belfast, UK; # MHLS 21_46). The tenets of the Declaration of Helsinki were adhered to throughout, and the trial was preregistered with ClinicalTrials.gov, #NCT05182580, before the first participant was enrolled. The present study included local researchers throughout the research process, including design, local ethics review, implementation, data ownership and authorship, to ensure it was collaborative and locally relevant.

The autonomous AI system (LumineticsCore (formerly IDx-DR), Digital Diagnostics, Coralville, Iowa, USA) was designed, developed, previously validated and implemented under an ethical framework to ensure compliance with the principles of patient benefit, justice and autonomy, and to avoid Ethics Dumping13. It diagnoses specific levels of diabetic retinopathy and diabetic macular edema (Early Treatment Diabetic Retinopathy Study (ETDRS) level 35 and higher), clinically significant macular edema, and/or center-involved macular edema32, referred to as referable Diabetic Eye Disease (DED)33, that require management or treatment by an ophthalmologist or retina specialist for care to be appropriate. If the ETDRS level is 20 or lower and no macular edema is present, appropriate care is to retest in 12 months34. The AI system is autonomous in that the medical diagnosis is made solely by the system, without human oversight. Its safety, efficacy, and lack of racial, ethnic and sex bias were validated in a pivotal trial in a representative sample of adults with diabetes at risk for DED, using a workflow and minimally trained operators comparable to the current study13. This led to US FDA De Novo authorization (FDA approval) in 2018 and national reimbursement in 2021 (refs. 13, 15).

The autonomous AI system was installed by DECF hospital information technology staff on March 2, 2022, with remote assistance from the manufacturer. Autonomous AI operators completed a self-paced online training module on basic fundus image capture and camera operations (Topcon NW400, Tokyo, Japan), followed by remote hands-on training on operation by representatives of the manufacturer. Deployment was performed locally, without the physical presence of the manufacturer, and all training and support were provided remotely.

Typically, pharmacologic pupillary dilation is provided only as needed during use of the autonomous AI system. For the current study, all patient participants received pharmacologic dilation with a single drop each of tropicamide 0.8% and phenylephrine 5%, repeated after 15 min if a pupil size of 4 mm was not achieved. This was done to facilitate indirect ophthalmoscopy by the specialist participants as required. The autonomous AI system guided the operator to acquire two color fundus images determined to be of adequate quality using an image quality assessment algorithm, one each centered on the fovea and the optic nerve, and directed the operator to retake any images of insufficient quality. This process took approximately 10 min, after which the autonomous AI system reported one of the following within 60 s: "DED present, refer to specialist", "DED not present, test again in 12 months", or "insufficient image quality". The latter response occurred when the operator was unable to obtain images of adequate quality after three attempts.
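The operator-facing outcome logic described above can be summarized with the sketch below; this is not the vendor's implementation, and the diagnostic call itself is represented by a stand-in function.

```python
# Sketch of the exam-outcome logic as described in the text; the actual
# diagnostic algorithm is proprietary and is represented here by a stand-in.

MAX_ATTEMPTS = 3

def ai_exam_outcome(attempts: list[dict], dx_is_referable) -> str:
    """attempts: one dict per imaging attempt with quality flags for the
    fovea- and disc-centered images. dx_is_referable: stand-in callable that
    returns True when referable DED is detected on a gradable attempt."""
    for attempt in attempts[:MAX_ATTEMPTS]:
        if attempt["fovea_quality_ok"] and attempt["disc_quality_ok"]:
            if dx_is_referable(attempt):
                return "DED present, refer to specialist"
            return "DED not present, test again in 12 months"
    return "insufficient image quality"
```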

This study included both physician participants and patient participants. Physician participants were retina specialists who gave written informed consent prior to enrollment. For specialist participants, the inclusion criteria were:

Completed vitreoretinal fellowship training;

Examined at least 20 patients per week with diabetes and no known DED over the prior three months;

Performed laser retinal treatments or intravitreal injections on at least three DED patients per month over the same time period.

Exclusion criteria were:

AI-eligible patients are clinic patients meeting the following criteria:

Presenting to DECF for eye care;

Age 22 years or older. While preregistration stated participants could be aged 18 years or older, the US FDA De Novo clearance for the autonomous AI limits eligibility to those 22 years of age or older;

Diagnosis of type 1 or type 2 diabetes prior to or on the day of recruitment;

Best corrected visual acuity of 6/18 or better in the better-seeing eye;

No prior diagnosis of DED;

No history of any laser or incisional surgery of the retina or injections into either eye;

No medical contraindication to fundus imaging with dilation of the pupil12.

Exclusion criteria were:

Inability to provide informed consent or understand the study;

Persistent vision loss, blurred vision or floaters;

Previously diagnosed with diabetic retinopathy or diabetic macular edema;

History of laser treatment of the retina or injections into either eye or any history of retinal surgery;

Contraindicated for imaging by fundus imaging systems.

Patient participants were AI-eligible patients who gave written informed consent prior to enrollment. All eligibility criteria remained unchanged over the duration of the trial.

B-PRODUCTIVE was a concealed cluster-randomized trial in which a block randomization scheme by clinic date was generated by the study statistician (JP) on a monthly basis, taking into account holidays and scheduled clinic closures. The random allocation of each cluster (clinic day) was concealed until clinic staff received an email with this information just before the start of that day's clinic, and they had no contact with the specialists during trial operations. Medical staff who determined access, specialists and patient participants remained masked to the random assignment of clinic days as control or intervention.
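A minimal sketch of block randomization of clinic days is shown below; the block size, seed handling, and calendar are assumptions for illustration, and the trial statistician's actual scheme also accounted for holidays and clinic closures.

```python
# Minimal sketch of monthly block randomization of clinic days (clusters).
# Block size 4 and the fixed seed are illustrative assumptions.
import random
from datetime import date, timedelta

def randomize_clinic_days(clinic_days: list[date], block_size: int = 4,
                          seed: int = 2022) -> dict[date, str]:
    rng = random.Random(seed)
    allocation: dict[date, str] = {}
    for i in range(0, len(clinic_days), block_size):
        block = clinic_days[i:i + block_size]
        arms = (["AI", "Control"] * block_size)[:len(block)]  # balanced within block
        rng.shuffle(arms)
        allocation.update(zip(block, arms))
    return allocation

# Hypothetical 12-day month of clinic days.
days = [date(2022, 4, 1) + timedelta(d) for d in range(12)]
print(randomize_clinic_days(days))
```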

After giving informed consent, patient participants provided demographic, income, educational and clinical data to study staff using an orally administered survey in Bangla, the local language. Patients who were eligible but did not consent underwent the same clinical process without completing an autonomous AI diagnosis or survey. All patient participants, both intervention and control, completed the autonomous AI diagnostic process as described in the Autonomous AI implementation and workflow section above; the difference between the intervention and control groups was that in the intervention group, the diagnostic AI output determined what happened to the patient next. In the control group, patient participants always went on to complete a specialist clinic visit after autonomous AI, irrespective of its output. In the intervention group, patient participants with an autonomous AI diagnostic report of "DED absent, return in 12 months" completed their care encounters without seeing a specialist and were recommended to make an appointment for a general eye exam in three months as a precautionary measure for the trial, minimizing the potential for disease progression (standard recall would be 12 months).

In the intervention group, patient participants with a diagnostic report of "DED present" or "image quality insufficient" completed their care encounters by seeing the specialist for further management. For non-consented patients, control-group participants, and intervention-group participants with "DED present" or "insufficient image quality" outputs, seeing the specialist involved tonometry, anterior and posterior segment biomicroscopy, indirect ophthalmoscopy, and any further examinations and ancillary testing deemed appropriate by the specialist. After the patient participant completed the autonomous AI process, a survey with a 4-point Likert scale (very satisfied, satisfied, dissatisfied, very dissatisfied) was administered concerning the participant's satisfaction with interactions with the healthcare team, time to receive examination results, and receiving their diagnosis from the autonomous AI system.

The primary outcome was clinic productivity for diabetes patients (λd), measured as the number of completed care encounters per hour per specialist for control / non-AI (λd,C) and intervention / AI (λd,AI) days. λd,C used the number of completed specialist encounters; λd,AI used the number of eligible patients in the intervention group who completed an autonomous AI care encounter with a diagnostic output of "DED absent", plus the number of encounters that involved the specialist exam. For the purposes of calculating the primary outcome, all diabetes patients who presented to the specialty clinic on study days were counted, including those who were not patient participants or did not receive the autonomous AI examination.

One of the secondary outcomes from this study was productivity for all patients (patients both with and without diabetes), measured as the number of completed care encounters per hour per specialist by counting all patients presenting to the DECF specialty clinic on study days, including those without diabetes, for control (λC) and intervention (λAI) days. Complexity-adjusted specialist productivity λca was calculated for the intervention and control arms by multiplying λd,C and λd,AI by the average patient complexity \(\bar{c}\).
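The counting rules for λd,C and λd,AI can be expressed as a short sketch; the record fields below are hypothetical, but the logic follows the definitions above.

```python
# Sketch of the primary-outcome counting rules described above.
# Each record is one diabetes patient presenting on a study day; fields are hypothetical.

def clinic_day_productivity(records: list[dict], specialist_hours: float, arm: str) -> float:
    """Completed care encounters per specialist-hour for one clinic day."""
    if arm == "control":
        completed = sum(1 for r in records if r["specialist_exam_completed"])
    else:  # intervention / AI day
        completed = sum(
            1 for r in records
            if r["specialist_exam_completed"]
            or r.get("ai_output") == "DED not present, test again in 12 months"
        )
    return completed / specialist_hours
```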

During each clinic day, the study personnel recorded the day of the week and the number of hours that each specialist participant spent in the clinic, starting with the first consultation in the morning and ending when the examination of the last patient of the day was completed, including any time spent ordering and reviewing diagnostic tests and scheduling future treatments. Any work breaks, time spent on performing procedures, and other duties performed outside of the clinic were excluded. Study personnel obtained the number of completed clinic visits from the DECF patient information system after each clinic day.

At baseline, specialist participants provided information on demographic characteristics, years in specialty practice and patient volume. They also completed a questionnaire at the end of the study, indicating their agreement (5-point Likert scale, strongly agree to strongly disagree) with the following statements regarding autonomous AI: (1) saves time in clinics, (2) allows time to be focused on patients requiring specialist care, (3) increases the number of procedures and surgeries, and (4) improves DED screening.

Other secondary outcomes were (1) patient satisfaction; (2) number of DED treatments scheduled per day; and (3) complexity of patient participants. Patient and provider willingness to pay for AI was a preregistered outcome, but upon further review by the Bangladesh Medical Research Council, these data were removed based on its recommendation. The complexity score for each patient was calculated by a masked United Kingdom National Health Service grader using the International Grading system (a level 4 reference standard24), adapted from the Wilkinson et al. International Clinical Diabetic Retinopathy and Diabetic Macular Edema Severity Scales31 (no DED = 0 points, mild non-proliferative DED = 0 points, moderate or severe non-proliferative DED = 1 point, proliferative DED = 3 points, and diabetic macular edema = 2 points). The complexity score was summed across both eyes.
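The complexity scoring rule lends itself to a short sketch; grade labels are written out as in the text, and the example patient is hypothetical.

```python
# Complexity scoring as described above: points per eye, summed across both eyes.
EYE_POINTS = {
    "no DED": 0,
    "mild non-proliferative DED": 0,
    "moderate non-proliferative DED": 1,
    "severe non-proliferative DED": 1,
    "proliferative DED": 3,
}

def eye_score(grade: str, has_macular_edema: bool) -> int:
    return EYE_POINTS[grade] + (2 if has_macular_edema else 0)

def patient_complexity(right_eye: tuple[str, bool], left_eye: tuple[str, bool]) -> int:
    return eye_score(*right_eye) + eye_score(*left_eye)

# Hypothetical patient: moderate non-proliferative DED with macular edema in one eye,
# mild non-proliferative DED without edema in the other -> 3 points.
print(patient_complexity(("moderate non-proliferative DED", True),
                         ("mild non-proliferative DED", False)))
```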

The null hypothesis was that the primary outcome parameter λd would not differ significantly between the study groups. The intra-cluster correlation coefficient (ICC) between patients within a particular cluster (clinic day) was estimated at 0.15, based on pilot data from the clinic. At 80% power, a two-sided alpha of 5%, a cluster size of eight patients per clinic day, and a control group estimated mean of 1.34 specialist clinic visits per hour (based on clinic data from January to March 2021), a sample size of 924 patients with completed clinically-appropriate retina care encounters (462 in each of the two study groups) was sufficient to detect a between-group difference of 0.34 completed care encounters per hour per specialist (equivalent to a 25% increase in productivity, λd,AI) with autonomous AI.
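The clustering adjustment behind this calculation can be sketched as follows; the ICC, cluster size, power, alpha, and target difference come from the text, but the outcome standard deviation is an assumed placeholder, so the printed sample size is illustrative rather than a reproduction of the trial's figure.

```python
# Sketch of the cluster-design inflation (design effect) used in sample-size planning.
# The outcome SD below is an ASSUMED placeholder; it is not reported in the text.
from scipy.stats import norm

icc, cluster_size = 0.15, 8
design_effect = 1 + (cluster_size - 1) * icc          # 1 + 7*0.15 = 2.05

alpha, power = 0.05, 0.80
delta = 0.34          # target difference, encounters per hour per specialist
sd = 1.0              # assumed common SD of the outcome (placeholder)

z = norm.ppf(1 - alpha / 2) + norm.ppf(power)
n_unclustered = 2 * (z * sd / delta) ** 2              # per group, ignoring clustering
n_clustered = n_unclustered * design_effect            # per group, cluster-adjusted
print(round(design_effect, 2), round(n_clustered))
```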

Study data were entered into Microsoft Excel 365 (Redmond, WA, USA) by the operators and the research coordinator in DECF. Data entry errors were corrected by the Orbis program manager in the US (NW), who remained masked to study group assignment.

Frequencies and percentages were used to describe patient participant characteristics for the two study groups. Age as a continuous variable was summarized with the mean and standard deviation. The number of treatments and the complexity score were compared with the Wilcoxon rank sum test since they were not normally distributed. The primary outcome was normally distributed and compared between study groups using a two-sided Student's t-test, and 95% confidence intervals around these estimates were calculated.

The robustness of the primary outcome was tested by utilizing linear regression modeling with generalized estimating equations that included clustering effects of clinic days. The adjustment for clustering of days since the beginning of the trial utilized an autoregressive first-order covariance structure, since days closer together were expected to be more highly correlated. Residuals were assessed to confirm that a linear model fit the rate outcome. Associations between the outcome and potential confounders of patient age, sex, education, income, complexity score, clinic day of the week, and autonomous AI output were assessed. A sensitivity analysis with multivariable modeling included patient age and sex, and variables with p-values < 0.10 in the univariate analysis. All statistical analyses were performed using SAS version 9.4 (SAS Institute, Cary, North Carolina).
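For readers who want to mirror this analysis outside SAS, a rough sketch using standard Python statistics tooling is shown below; the dataframe layout and column names are assumptions, and the GEE specification (grouping and time variable) is one plausible encoding of clustering by clinic day with an AR(1) structure, not the authors' exact model.

```python
# Rough sketch of the comparisons described above using Python tooling instead of SAS.
# Column names and grouping choices are assumptions, not the authors' exact model.
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf
from scipy import stats

def analyze(df: pd.DataFrame):
    """df: one row per specialist per clinic day with columns
    'productivity', 'arm' ("AI"/"Control"), 'specialist', 'day_index'."""
    ai = df.loc[df["arm"] == "AI", "productivity"]
    ctrl = df.loc[df["arm"] == "Control", "productivity"]

    t_res = stats.ttest_ind(ai, ctrl)          # primary outcome (normally distributed)
    w_res = stats.mannwhitneyu(ai, ctrl)       # for non-normal secondary outcomes

    # GEE with an autoregressive working correlation over days since trial start.
    gee = smf.gee("productivity ~ arm", groups="specialist", data=df,
                  time=df["day_index"],
                  cov_struct=sm.cov_struct.Autoregressive(),
                  family=sm.families.Gaussian()).fit()
    return t_res, w_res, gee
```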

Read more from the original source:
Autonomous artificial intelligence increases real-world specialist ... - Nature.com

Artificial intelligence in veterinary medicine: What are the ethical and … – American Veterinary Medical Association

Artificial intelligence (AI) and machine learning, a type of AI that includes deep learning, which learns representations of data with multiple levels of abstraction, are emerging technologies that have the potential to change how veterinary medicine is practiced. They have been developed to help improve predictive analytics and diagnostic performance, thus supporting decision-making when practitioners analyze medical images. But unlike in human medicine, no premarket screening of AI tools is required for veterinary medicine.

This raises important ethical and legal considerations, particularly when it comes to conditions with a poor prognosis where such interpretations may lead to a decision to euthanize, and makes it even more vital for the veterinary profession to develop best practices to protect care teams, patients, and clients.

That's according to Dr. Eli Cohen, a clinical professor of diagnostic imaging at the North Carolina State College of Veterinary Medicine. He presented the webinar, "Do No Harm: Ethical and Legal Implications of A.I.," which debuted in late August on AVMA Axon, AVMA's digital education platform.

During the presentation, he explored the potential of AI to increase efficiency and accuracy throughout radiology, but also acknowledged its biases and risks.

The use of AI in clinical diagnostic imaging practice will continue to grow, largely because much of the data (radiographs, ultrasound, CT, MRI, and nuclear medicine) and their corresponding reports are in digital form, according to a Currents in One Health paper published in JAVMA in May 2022.

Dr. Ryan Appleby, assistant professor at the University of Guelph Ontario Veterinary College, who authored the paper, said artificial intelligence can be a great help in expediting tasks.

For example, AI can be used to automatically rotate or position digital radiographs, produce hanging protocols (which are instructions for how to arrange images for optimal viewing), or call up report templates based on the body parts included in the study.

More generally, AI can triage workflows by taking a first pass at various imaging studies and prioritizing more critical patients to the top of the queue, said Dr. Appleby, who is chair of the American College of Veterinary Radiology's (ACVR) Artificial Intelligence Committee.

That said, when it comes to interpreting radiographs, AI must not only identify common presentations of a disease but also flag borderline cases in order to be useful and to ensure patients are treated appropriately.

"As a specialist, I'm there for the subset of times when there is something unusual," Dr. Cohen said, who is co-owner of Dragonfly Imaging, a teleradiology company, where he serves as a radiologist. "While AI will get better, it's not perfect. We need to be able to troubleshoot it when it doesn't perform appropriately."

Medical device developers must gain Food and Drug Administration (FDA) approval for their devices and permission to sell their products in the U.S. Artificial intelligence- and machine learning-enabled tools used in human medicine are classified by the FDA as medical devices.

However, companies developing medical devices for animals are not required to undergo a premarket screening, unlike those developing devices for people. The ACVR has expressed concern about the lack of oversight for software used to read radiographs.

"It is logical that if the FDA provides guidelines and oversight of medical devices used on people, that similar measures should be in place for veterinary medical devices to help protect our pets," said Dr. Tod Drost, executive director of the American College of Veterinary Radiology. "The goal is not to stifle innovation, but rather have a neutral third party to provide checks and balances to the development of these new technologies."

Massive amounts of data are needed to train machine-learning algorithms, and training images must be annotated manually. Because of the lack of regulation for AI developers and companies, companies are not required to provide information about how their products were trained or validated. Many of these algorithms are therefore referred to as operating in a "black box."

"That raises pretty relevant ethical considerations if we're using these to make diagnoses and perform treatments," Dr. Cohen said.

Because AI doesn't have a conscience, he said, those who are developing and using AI need to have a conscience and can't afford to be indifferent. "AI might be smart, but that doesn't mean it's ethical," he said.

In the case of black-box medicine, "there exists no expert who can provide practitioners with useful causal or mechanistic explanations of the systems' internal decision procedures," according to a study published July 14, 2022, in Frontiers.

Dr. Cohen says, "As we adopt AI and bring it into veterinary medicine in a prudent and intentional way, the new best practice ideally would be leveraging human expertise and AI together as opposed to replacing humans with AI."

He suggested having a domain expert involved in all stages of AI, from product development, validation, and testing to clinical use, error assessment, and oversight of these products.

The consensus of multiple leading radiology societies, including the American College of Radiology and Society for Imaging Informatics in Medicine, is that ethical use of AI in radiology should promote well-being and minimize harm.

"It is important that veterinary professionals take an active role in making medicine safer as use of artificial intelligence becomes more common. Veterinarians will hopefully learn the strengths and weaknesses of this new diagnostic tool by reviewing current literature and attending continuing education presentations," Dr. Appleby said.

Dr. Cohen recommends veterinarians obtain owner consent before using AI in decision making, particularly if the case involves a consult or referral. And during the decision-making process, practitioners should be vigilant about AI providing a diagnosis that exacerbates human and cognitive biases.

"We need to be very sure that when we choose to make that decision, that it is as validated and indicated as possible," Dr. Cohen said.

According to a 2022 Veterinary Radiology & Ultrasound article written by Dr. Cohen, if not carefully overseen, AI has the potential to cause harm. For example, an AI product could produce a false-positive diagnosis, leading to unnecessary tests or interventions, or produce false-negative results, possibly delaying diagnosis and care. It could also be applied to inappropriate datasets or populations, such as applying an algorithm trained on small animal cases to an ultrasound of a horse.

He added that veterinary professionals need to consider if it is ethical to shift responsibility to general practitioners, emergency veterinarians, or non-imaging specialists who use a product whose accuracy is not published or otherwise known.

"How do we make sure there is appropriate oversight to protect our colleagues, our patients, and our clients, and make sure we're not asleep at the wheel as we usher in this new tech and adopt it responsibly?" Dr. Cohen asked.

See the article here:
Artificial intelligence in veterinary medicine: What are the ethical and ... - American Veterinary Medical Association

Artificial Intelligence: The third-party candidate – The Miami Hurricane

Creativity, confusion and controversy have defined the introductory stages of artificial intelligence integration into our society. When it comes to political campaigns and the upcoming 2024 election, this combination is changing the way politicians sway public opinion.

In June 2023, presidential candidate and Florida governor Ron DeSantis' campaign used AI to generate images of his opponent, former president Donald Trump, with Anthony Fauci, a premier target of the Republican party base for his response to the COVID-19 pandemic.

The video, posted on X, displayed a collection of images of Trump and Fauci together. Some are real photographs, but three are AI-generated photos of the two embracing.

Lawmakers fear the use of deceiving AI images could potentially cause some voters to steer away from candidates in 2024.

"There are two ways politicians are using it," said Dr. Yelena Yesha, UM professor and Knight Foundation Endowed Chair of Data Science and AI. "One is biasness, trying to skew information and change the sentiments of populations, and the other is the opposite effect, using blockchain technology that will control misinformation."

Conversations about regulating the dangers of AI have already begun circulating on Capitol Hill, starting with the U.S. Senate hearing on May 16, 2023. The hearing included Sam Altman, CEO of OpenAI, who expressed concern about potential manipulation of his company's technology to target voters.

The most notable OpenAI technology is ChatGPT, which has seen the most rapid user consumption rate in internet history, surpassing the success of applications like TikTok and Instagram in its first two months.

The platform initially banned political campaigns from using the chatbot, but its enforcement of the ban has since been limited.

An analysis by The Washington Post found that ChatGPT can bypass its campaign restriction ban when prompted to create a persuasive message that targets a specific voter demographic.

"AI will certainly be used to generate campaign content," said UM professor of political science Casey Klofstad. "Some will use it to create deepfakes to support false narratives. Whether this misinformation will influence voters is an open question."

Deepfakes, an enhanced form of AI that alters photos and video, have reached the political mainstream. Following President Biden's re-election announcement last April, the Republican National Committee (RNC) released a fully AI-generated ad depicting a fictional and dystopian society if Biden is re-elected in 2024.

Congress has furthered its efforts to establish boundaries for AI, with Senate Majority Leader Chuck Schumer (D-NY) recently leading a closed-door meeting on Sept. 13 with high-profile tech leaders, including Elon Musk and Mark Zuckerberg.

The goal of this meeting was to gather information on how prominent big tech platforms could enforce oversight over the use of AI. Senate sessions on the matter will continue throughout the fall, with Schumer hopeful for bipartisan support and legislation across Congress.

"I would be reluctant to see the government take a heavy hand in regulating AI, but policy could be tailored more narrowly to incentivize AI developers to inform consumers about the source and validity of AI-generated content," Klofstad said.

The extent to which the federal government can have major influence over regulating AI is unclear as artificial intelligence continues to develop.

"It should be regulated, but not to the point where progress can be slowed down by regulatory processes," Yesha said. "If you have too much regulation, it may at a certain point decelerate science and the adoption of innovation."

A significant reason for AI regulation efforts stems from the anticipation of foreign influence in our elections. Russian-led misinformation campaigns played a part in the 2016 election, and elected officials foresee advancement of foreign meddling in tandem with AI's improvement.

"At a certain point, as AI becomes more developed, if it falls into the wrong hands of totalitarian regimes or autocratic governments, it can have a negative effect on our homeland," Yesha said.

However, AI's applications do provide numerous benefits for political campaigns.

A prominent benefit of AI in the political arena is its messaging capabilities. With a chatbot's ability to instantly regurgitate personalized messages when fed consumer data, essentially taking over the work of lower-level campaign staff, the ability to garner donor support is vastly expanded.

"Campaigns have always adapted to new modes of communication, from the printing press, to electronic mailing lists, to websites, text messaging and social media," Klofstad said. "I expect AI will not be different in this regard."

Go here to see the original:
Artificial Intelligence: The third-party candidate - The Miami Hurricane

Artificial intelligences future value in environmental remediation – The Miami Hurricane

Artificial intelligence is enabling us to rethink how we integrate information, analyze data and use the resulting insights to improve decision-making. The power of AI is revolutionizing various industries, and environmental science is no exception.

With increasing threats of environmental stressors, AI is emerging as a powerful tool in detecting, mapping and mitigating these effects for the future.

As AI increasingly drives innovation and becomes a facet of everyday life, fears about its capabilities are growing.

It doesn't help that the media and pundits are stoking those fears, suggesting that AI could take over the world, lead to losses of control and privacy and devalue the importance of humans in the workforce.

According to Business News Daily, 69% of people worry that AI could take over their jobs entirely, while 74% predict that AI will eliminate all forms of human labor. However, its potential to remedy environmental problems can be a beneficial use of the technology.

From monitoring air and water quality to predicting the spread of pollutants, AI is already playing a crucial role in safeguarding our environment and public health.

As 2030, the agreed deadline for hitting climate targets, quickly approaches, the world is on track to achieve only 12 percent of the Sustainable Development Goals (SDGs), with progress plateauing or regressing on over half of the set goals.

"How can we use artificial intelligence, the technology that is revolutionizing the production of knowledge, to actually improve lives; to make the world a little bit safer, a little bit healthier, a little bit more prosperous; to help eliminate poverty and hunger; to promote health and access to quality education; to advance gender equity; to save our planet?" said U.S. Secretary of State Antony Blinken at the 78th Session of the United Nations General Assembly.

The most prominent applications of AI are currently in detecting, mapping and mitigating environmental toxins and pressures, which can help engineers and scientists gather more accurate data, but its uses are constantly growing and developing.

AI can help automate the process of taking and analyzing samples and recognizing the presence of specific toxins in water, soil or air, so it can report real-time status. In delicate ecosystems, such as coral reefs and wetlands, including those around Florida, monitoring environmental parameters can provide alerts to harmful conditions and prompt action.

AI models can also create analytical maps based on historical or statistical data to understand trends and trajectories regarding toxin levels, weather patterns, human activities and other relevant factors. Those models can also evaluate satellite imagery to identify areas where specific conditions may be present and be trained to recognize patterns or changes, which can be extremely important in forecasting future dangerous weather events, enhancing agricultural productivity to combat hunger, responding to disease outbreaks, and addressing other imminent climate change threats to Earth.

These technologies can also be used to identify the sources and pathways of toxins and to optimize mitigation strategies, which is crucial for effective intervention, while monitoring the success of mitigation efforts.

If these practices for AI are deployed effectively and responsibly, they can drive inclusive and sustainable growth for all, which can reduce poverty and inequality, advance environmental sustainability and improve lives around the world.

However, real concerns exist that the developing world is being left behind as AI advances rapidly. If not distributed equitably, the technology has the potential to exacerbate inequality.

Countries must work together to promote access to AI around the world, with a particular focus on developing countries. Industrialized nations should share knowledge that can advance progress toward achieving SDGs, as AI has the potential to advance progress on nearly 80 percent of them.

To succeed in directing AI toward achieving the SDGs, complete support and participation from the multistakeholder community of system developers, governments and organizations, and communities is required.

Meanwhile, the need for AI governance is imperative, and support from federal and state governments as well as corporations is crucial to this transition. As AI's footprint grows and nations work to manage risks, we must maximize its use for the greater good and deepen cooperation across governments to foster beneficial uses for AI.

The United States is committed to supporting and accelerating efforts on AI development, hoping to foster an environment where AI innovation can continue to flourish. At the UNGA, Secretary Blinken mentioned the U.S.'s creation of a blueprint for an AI Bill of Rights and a Risk Management Framework, which would guide the future use, design and safeguards of these systems.

The U.S. has announced a $15 million commitment designated to help more governments leverage the power of AI to drive global good, focused specifically on the SDGs. Commitments and contributions have also been made by other countries and large corporations, such as Google, IBM and Microsoft.

We are at an inflection point, and the decisions we make today will affect the world for decades to come, especially when it comes to AI and climate change. AI has the potential to accelerate progress, but that carries an immense responsibility for governments, the private sector, civil society and individuals, who must consider the social, economic and environmental aspects of sustainability.

Lia Mussie is a senior majoring in ecosystem science and policy and political science, with minors in sustainable business and public health.

Read more:
Artificial intelligences future value in environmental remediation - The Miami Hurricane

Researchers develop a way to hear photos using artificial intelligence – KXLH News Helena

Researchers at Northeastern University have developed a way to extract audio from both still photos and muted videos using artificial intelligence.

The research project is called Side Eye.

"Most of the cameras today have what's called image stabilization hardware," said Kevin Fu, a professor of electrical and computer engineering at Northeastern University. "It turns out that when you speak near a camera lens that has some of these functions, a camera lens will move ever so slightly, what's called modulating your voice, onto the image and it changes the pixels."

Basically, these small movements can be interpreted as rudimentary audio that the Side Eye artificial intelligence can then translate into individual words with high accuracy, according to the research team.

"You're able to get thousands of samples per second. What does this mean? It means you basically get a very rudimentary microphone," Fu said.
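As a rough illustration of the underlying idea only (this is not the Side Eye system), one can estimate tiny frame-to-frame shifts in a muted video and treat the resulting motion trace as a very crude audio signal; the function and its inputs are assumptions, and the actual recovery reportedly exploits rolling-shutter rows to reach thousands of samples per second rather than one sample per frame.

```python
# Rough sketch of the basic idea only (NOT the researchers' Side Eye system):
# estimate sub-pixel frame-to-frame shifts in a muted video and treat the shift
# time series as a crude, low-sample-rate "audio" trace.
import numpy as np
from skimage.registration import phase_cross_correlation

def motion_trace(frames: list[np.ndarray]) -> np.ndarray:
    """frames: grayscale 2-D arrays from a fixed camera. Returns the vertical
    sub-pixel shift of each frame relative to the first, with the mean removed."""
    ref = frames[0]
    shifts = []
    for frame in frames[1:]:
        (dy, _dx), _, _ = phase_cross_correlation(ref, frame, upsample_factor=100)
        shifts.append(dy)
    trace = np.asarray(shifts, dtype=float)
    return trace - trace.mean()
```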


Even though the recovered audio sounds muffled, some pieces of information can be extracted.

"Things like understanding what is the gender of the speaker, not on camera but in the room while the photograph or video is being taken, that's nearly 100% accurate," he said.

So what can technology like this be used for?

"For instance, in legal cases or in investigations of either proving or disproving somebody's presence, it gives you evidence that can be backed up by science of whether somebody was likely in the room speaking or not," Fu said.

"This is one more tool we can use to bring authenticity to evidence, potentially to investigations, but also trying to solve criminal applications," he said.


Read more here:
Researchers develop a way to hear photos using artificial intelligence - KXLH News Helena