Archive for the ‘Artificial Intelligence’ Category

Welcome to IJCAI | IJCAI

The International Joint Conferences on Artificial Intelligence (IJCAI) is a non-profit corporation founded in California in 1969 for scientific and educational purposes, including the dissemination of information on artificial intelligence at conferences where cutting-edge scientific results are presented, and through the distribution of materials from these meetings in the form of proceedings, books, video recordings, and other educational materials. IJCAI consists of two divisions: the Conference Division and the AI Journal Division. IJCAI conferences are the premier international gatherings of AI researchers and practitioners and have been held biennially in odd-numbered years since 1969.

Starting in 2016, IJCAI conferences have been held annually. IJCAI-ECAI-22 will be held in Vienna, Austria, from July 23rd until July 29th; IJCAI-23 in Cape Town, South Africa; and IJCAI-PRICAI-24 in Shanghai, P.R. China.

IJCAI is governed by the Board of Trustees, with the IJCAI Secretariat in charge of its operations.

IJCAI-21 was held from August 19th until August 26th, 2021 in a virtual Montreal-themed reality. The Conference Committee thanks you all for participating.

Call for Proposals to Host IJCAI-ECAI-2026
Call for IJCAI-22 Awards Nominations
AI Hub launched
Funding Opportunities for Promoting AI Research
Free Access to the AI journal

IJCAI Anti-Discrimination Policy (pdf)
IJCAI Privacy Policy (pdf)

Can artificial intelligence overcome the challenges of the health care system? – MIT News

Even as rapid improvements in artificial intelligence have led to speculation over significant changes in the health care landscape, the adoption of AI in health care has been minimal. A 2020 survey by Brookings, for example, found that less than 1 percent of job postings in health care required AI-related skills.

The Abdul Latif Jameel Clinic for Machine Learning in Health (Jameel Clinic), a research center within the MIT Schwarzman College of Computing, recently hosted the MITxMGB AI Cures Conference in an effort to accelerate the adoption of clinical AI tools by creating new opportunities for collaboration between researchers and physicians focused on improving care for diverse patient populations.

Previously held virtually, the AI Cures Conference returned to in-person attendance at MIT's Samberg Conference Center on the morning of April 25, welcoming over 300 attendees, primarily researchers and physicians from MIT and Mass General Brigham (MGB).

MIT President L. Rafael Reif began the event by welcoming attendees and speaking to the transformative capacity of artificial intelligence and its ability to detect, "in a dark river of swirling data, the brilliant patterns of meaning that we could never see otherwise." MGB's president and CEO Anne Klibanski followed up by lauding the joint partnership between the two institutions and noting that the collaboration could have a real impact on patients' lives and help to eliminate some of the barriers to information-sharing.

Domestically, about $20 million in subcontract work currently takes place between MIT and MGB. MGB's chief academic officer and AI Cures co-chair Ravi Thadhani thinks that five times that amount would be necessary in order to do more transformative work. "We could certainly be doing more," Thadhani said. The conference just scratched the surface of a relationship between a leading university and a leading health-care system.

MIT Professor and AI Cures Co-Chair Regina Barzilay echoed similar sentiments during the conference. "If we're going to take 30 years to take all the algorithms and translate them into patient care, we'll be losing patient lives," she said. "I hope the main impact of this conference is finding a way to translate it into a clinical setting to benefit patients."

This years event featured 25 speakers and two panels, with many of the speakers addressing the obstacles facing the mainstream deployment of AI in clinical settings, from fairness and clinical validation to regulatory hurdles and translation issues using AI tools.

Of particular note on the speaker list was Amir Khan, a senior fellow from the U.S. Food and Drug Administration (FDA), who fielded a number of questions from curious researchers and clinicians on the FDA's ongoing efforts and challenges in regulating AI in health care.

The conference also covered many of the impressive advancements AI has made in the past several years: Lecia Sequist, a lung cancer oncologist from MGB, spoke about her collaborative work with MGB radiologist Florian Fintelmann and Barzilay to develop an AI algorithm that could detect lung cancer up to six years in advance. MIT Professor Dina Katabi presented with MGB doctors Ipsit Vahia and Aleksandar Videnovic on an AI device that could detect the presence of Parkinson's disease simply by monitoring a person's breathing patterns while asleep. "It is an honor to collaborate with Professor Katabi," Videnovic said during the presentation.

MIT Assistant Professor Marzyeh Ghassemi, whose presentation concerned designing machine learning processes for more equitable health systems, found the longer-range perspectives shared by the speakers during the first panel on AI changing clinical science compelling.

"What I really liked about that panel was the emphasis on how relevant technology and AI has become in clinical science," Ghassemi says. "You heard some panel members [Eliezer Van Allen, Najat Khan, Isaac Kohane, Peter Szolovits] say that they used to be the only person at a conference from their university that was focused on AI and ML [machine learning], and now we're in a space where we have a miniature conference with posters just with people from MIT."

The 88 posters accepted to AI Cures were on display for attendees to peruse during the lunch break. The presented research spanned different areas of focus from clinical AI and AI for biology to AI-powered systems and others.

"I was really impressed with the breadth of work going on in this space," Collin Stultz, a professor at MIT, says. Stultz also spoke at AI Cures, focusing primarily on the risks of interpretability and explainability when using AI tools in a clinical setting, using cardiovascular care as an example of how algorithms could potentially mislead clinicians, with grave consequences for patients.

"There are a growing number of failures in this space where companies or algorithms strive to be the most accurate, but do not take into consideration how the clinician views the algorithm and their likelihood of using it," Stultz said. "This is about what the patient deserves and how the clinician is able to explain and justify their decision-making to the patient."

Phil Sharp, MIT Institute Professor and chair of the advisory board for Jameel Clinic, found the conference energizing and thought that the in-person interactions provided insight and motivation unmatched by the many conferences still being hosted virtually.

"The broad participation by students and leaders and members of the community indicate that there's an awareness that this is a tremendous opportunity and a tremendous need," Sharp says. He pointed out that AI and machine learning are being used to make predictions about almost everything, from protein structures to drug efficacy. "It says to young people, watch out, there might be a machine revolution coming."

Why Artificial Intelligence Creates an Unprecedented Era of Opportunity in the Near Future – Inc.

The age of artificial intelligence (A.I.) is finally upon us. Consumer applications of A.I., in particular, have come a long way, leading to more accurate search results for online shoppers, allowing apps and websites to make more personalized recommendations, and enabling voice-activated digital assistants to better understand us.

As impressive as these uses of A.I. are, they only hint at how this game-changing technology will be applied in business. The goal of business A.I., after all, is to help the companies that drive our global economy learn from their data to become vastly more resilient, adaptive, and innovative.

We all know there is tremendous potential value in data, which continues to grow exponentially. In fact, the world is creating 2.5 quintillion bytes of data every day (that's 2.5 followed by 18 zeros). To harness that potential, companies need A.I. to make sense of the data, and hybrid cloud computing platforms that can distribute it across organizations.
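
To put that figure in perspective, here is a quick back-of-the-envelope conversion of 2.5 quintillion bytes per day into more familiar units (an illustrative calculation, not a figure from the article):

```python
# Back-of-the-envelope conversion of the quoted 2.5 quintillion bytes/day figure.
bytes_per_day = 2.5e18  # 2.5 followed by 18 zeros

exabytes_per_day = bytes_per_day / 1e18                # 1 exabyte = 10^18 bytes
terabytes_per_second = bytes_per_day / 1e12 / 86_400   # 86,400 seconds per day

print(f"{exabytes_per_day:.1f} EB per day")            # 2.5 EB per day
print(f"{terabytes_per_second:.0f} TB per second")     # roughly 29 TB every second
```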

The economic opportunity behind these technologies is enormous, given that business is only about 10 percent of the way to realizing A.I.'s full potential. Fortunately, we are making steady progress, with the number of organizations poised to integrate A.I. into their business processes and workflows growing rapidly. A recent IBM study showed that more than a third of the companies surveyed were using some form of A.I. to save time and streamline operations.

Take the challenge of demographic shifts. A.I., in conjunction with hybrid cloud, is helping many companies automate certain routine business activities, and move people to higher-value work. In manufacturing, a factory floor operator can now rely on A.I. to detect defects that are invisible to the human eye. In health care, A.I.-enabled virtual agents can handle millions of calls at once. In the energy sector, autonomous robots can use cloud and A.I. to analyze data at the edge to improve equipment uptime and prevent power outages. Another example: IBM is helping McDonald's launch an automated order-taking drive-thru experience that benefits both customers and restaurant crews.

Then there is the massive challenge of cybersecurity. The inherent business value of data makes it a prime target for hackers. But with about a half-million unfilled cybersecurity jobs in the U.S. alone, security teams are stretched dangerously thin. Most data breaches today take an average of 287 days to detect and contain. That is clearly unacceptable. With A.I.'s ability to analyze threat information at scale, we can help reduce that timeline to a few days or even hours.
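
As a rough illustration of what "analyzing threat information at scale" can look like in practice (a generic sketch, not any vendor's actual product), even a simple statistical baseline over log volumes can surface suspicious activity far faster than manual review:

```python
# Minimal sketch: flag hosts whose latest daily event count deviates sharply
# from their historical baseline. A real security product would use far richer
# features; this only illustrates automated triage over large volumes of logs.
from statistics import mean, stdev

def anomalous_hosts(daily_counts: dict[str, list[int]], threshold: float = 3.0) -> list[str]:
    """Return hosts whose latest count is more than `threshold` standard
    deviations above their historical mean."""
    flagged = []
    for host, counts in daily_counts.items():
        history, latest = counts[:-1], counts[-1]
        if len(history) < 2:
            continue  # not enough history to establish a baseline
        mu, sigma = mean(history), stdev(history)
        if sigma > 0 and (latest - mu) / sigma > threshold:
            flagged.append(host)
    return flagged

logs = {"web-01": [120, 130, 125, 118, 2400],   # sudden spike -> flagged
        "db-01":  [300, 310, 295, 305, 298]}    # normal -> ignored
print(anomalous_hosts(logs))  # ['web-01']
```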

A.I. is not only making businesses smarter, stronger, and safer; it is also accelerating scientific discovery. A.I. can speed the ingestion of scientific papers and the extraction of knowledge by 1,000x compared with human experts. At the height of the global pandemic, IBM adapted our cloud-based A.I. platform to comb through thousands of scientific papers about the coronavirus. We then shared relevant data with fellow members of the Covid-19 High Performance Computing Consortium to speed up drug design.

As these use cases show, for business A.I. to be effective it must also be trustworthy and explainable. It is one thing to rely on an A.I. application to order dinner for us. It is quite another to have it drive a car or make potentially life-or-death recommendations about a course of medical treatment.

For this reason, technology companies must be clear about who trains their A.I. systems, what data is used in that training, and, most important, what went into their algorithm's recommendations. Developing responsible, ethical A.I. requires that we remove any potential for human bias to influence this process.

We must also recognize that the purpose of A.I. systems is to augment--not replace--human intelligence. Throughout history, the introduction of new technologies has led to sea changes in the way businesses create value while eliminating burdensome and repetitive tasks for humans. These include everything from windmills to the printing press to the steam engine and factory robotics. This is how progress happens. Artificial intelligence will create even greater progress, but only if it is deployed responsibly.

Businesses have the potential to usher in a new and unprecedented era of greater productivity, faster insights, better decision-making, and enhanced employee and customer experiences through the combination of A.I. and hybrid cloud. Given the enthusiasm of our clients for these transformative technologies, the business A.I. spring can't come soon enough.

Elementary Named to the 2022 CB Insights AI 100 List of Most Innovative Artificial Intelligence Startups – PR Newswire

Elementary recognized for achievements in machine vision and industrial quality inspections

NEW YORK, May 19, 2022 /PRNewswire/ -- CB Insights today named Elementary to its annual AI 100 ranking, showcasing the 100 most promising private artificial intelligence companies in the world.

"This is the sixth year that CB Insights has recognized the most promising private artificial intelligence companies with the AI 100. This year's cohort spans 13 industries, working on everything from recycling plastic waste to improving hearing aids," said Brian Lee, Senior Vice President of CB Insights' Intelligence Unit."Last year's AI 100 companies had a remarkable run, raising more than $6 billion, including 20 mega-rounds worth more than $100 million each. We're excited to watch the companies on this year's list continue to grow and create products and services that meaningfully impact the world around them."

"Manufacturing and supply chain are being forced through the largest transformation we've seen in decades. The global supply chain shock, coupled with increased demand and a difficult labor market, make it imperative that manufacturers find autonomous solutions to automate processes, improve digital intelligence, and increase yield and volume," said Arye Barnehama, Chief Executive Officer and founder of Elementary. "At Elementary, we champion closed-loop quality. Our platform uses edge machine learning to inspect goods and protect production lines from defects. Using cloud technology, inspection data is analyzed for defects and root causes. These AI-driven, real-time insights are then pushed to the factory floor, closing the loop and avoiding defects through operational improvements."

Utilizing the CB Insights platform, the research team picked 100 private market vendors from a pool of over 7,000 companies, including applicants and nominees. They were chosen based on factors including R&D activity, proprietary Mosaic scores, market potential, business relationships, investor profile, news sentiment analysis, competitive landscape, team strength, and tech novelty. The research team also reviewed thousands of Analyst Briefings submitted by applicants.

About Elementary
Elementary delivers an easily scalable, flexible, securely connected machine vision platform that leverages the power of machine learning to open new use cases, provide insights, and close the loop on the manufacturing process. With Elementary Quality as a Service (QaaS), we deploy the inspection hardware, train the machine learning models, integrate with your automation equipment, and provide data analytics. From cameras, lighting, and mounting to software and support, we are the single-source product experts, providing everything you need to increase detections, reduce defects, and improve productivity. For more information, please visit: https://www.elementaryml.com/.

SOURCE Elementary

Is Artificial Intelligence Made in Humanity’s Image? Lessons for an AI Military Education – War on the Rocks

Artificial intelligence is not like us. For all of AI's diverse applications, human intelligence is not at risk of losing its most distinctive characteristics to its artificial creations.

Yet, when AI applications are brought to bear on matters of national security, they are often subjected to an anthropomorphizing tendency that inappropriately associates human intellectual abilities with AI-enabled machines. A rigorous AI military education should recognize that this anthropomorphizing is irrational and problematic, reflecting a poor understanding of both human and artificial intelligence. The most effective way to mitigate this anthropomorphic bias is through engagement with the study of human cognition: cognitive science.

This article explores the benefits of using cognitive science as part of an AI education in Western military organizations. Tasked with educating and training personnel on AI, military organizations should convey not only that anthropomorphic bias exists, but also that it can be overcome to allow better understanding and development of AI-enabled systems. This improved understanding would aid both the perceived trustworthiness of AI systems by human operators and the research and development of artificially intelligent military technology.

For military personnel, having a basic understanding of human intelligence allows them to properly frame and interpret the results of AI demonstrations, grasp the current natures of AI systems and their possible trajectories, and interact with AI systems in ways that are grounded in a deep appreciation for human and artificial capabilities.

Artificial Intelligence in Military Affairs

AI's importance for military affairs is the subject of increasing focus by national security experts. Harbingers of "A New Revolution in Military Affairs" are out in force, detailing the myriad ways in which AI systems will change the conduct of wars and how militaries are structured. From microservices such as unmanned vehicles conducting reconnaissance patrols, to swarms of lethal autonomous drones, and even spying machines, AI is presented as a comprehensive, game-changing technology.

As the importance of AI for national security becomes increasingly apparent, so too does the need for rigorous education and training for the military personnel who will interact with this technology. Recent years have seen an uptick in commentary on this subject, including in War on the Rocks. Mick Ryan's "Intellectual Preparation for War," Joe Chapa's "Trust and Tech," and Connor McLemore and Charles Clark's "The Devil You Know," to name a few, each emphasize the importance of education and trust in AI in military organizations.

Because war and other military activities are fundamentally human endeavors, requiring the execution of any number of tasks on and off the battlefield, the uses of AI in military affairs will be expected to fill these roles at least as well as humans could. So long as AI applications are designed to fill characteristically human military roles, ranging from arguably simpler tasks like target recognition to more sophisticated tasks like determining the intentions of actors, the dominant standard used to evaluate their successes or failures will be the ways in which humans execute these tasks.

But this sets up a challenge for military education: how exactly should AIs be designed, evaluated, and perceived during operation if they are meant to replace, or even accompany, humans? Addressing this challenge means identifying anthropomorphic bias in AI.

Anthropomorphizing AI

Identifying the tendency to anthropomorphize AI in military affairs is not a novel observation. U.S. Navy Commander Edgar Jatho and Naval Postgraduate School researcher Joshua A. Kroll argue that AI is often "too fragile to fight." Using the example of an automated target recognition system, they write that to describe such a system as engaging in "recognition" effectively "anthropomorphizes algorithmic systems that simply interpret and repeat known patterns."

But the act of human recognition involves distinct cognitive steps occurring in coordination with one another, including visual processing and memory. A person can even choose to reason about the contents of an image in a way that has no direct relationship to the image itself yet makes sense for the purpose of target recognition. The result is a reliable judgment of what is seen even in novel scenarios.

An AI target recognition system, in contrast, depends heavily on its existing data or programming, which may be inadequate for recognizing targets in novel scenarios. This system does not process images and recognize targets within them the way humans do. Anthropomorphizing this system means oversimplifying the complex act of recognition and overestimating the capabilities of AI target recognition systems.
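
To make the contrast concrete, consider a toy sketch (entirely hypothetical data and a generic scikit-learn classifier, not any real target-recognition system): a model trained on one narrow distribution of inputs will still issue confident predictions on inputs unlike anything it has seen, because it only repeats the patterns it was fitted to.

```python
# Toy illustration of brittleness: a classifier trained on a narrow distribution
# still produces confident outputs on novel, out-of-distribution inputs.
# Hypothetical features; nothing here resembles a real target-recognition system.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
# Training data: two tight clusters standing in for "target" vs. "non-target" features.
targets     = rng.normal(loc=[2.0, 2.0], scale=0.3, size=(100, 2))
non_targets = rng.normal(loc=[-2.0, -2.0], scale=0.3, size=(100, 2))
X = np.vstack([targets, non_targets])
y = np.array([1] * 100 + [0] * 100)

model = LogisticRegression().fit(X, y)

novel_input = np.array([[40.0, -35.0]])   # unlike anything seen in training
proba = model.predict_proba(novel_input)[0]
# Typically a confident score despite the input being far outside the training distribution.
print(f"P(target) on a novel input: {proba[1]:.3f}")
```

The point is not the specific model but the behavior: a confident score does not imply the kind of judgment a human brings to a genuinely novel scene.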

By framing and defining AI as a counterpart to human intelligence, as a technology designed to do what humans have typically done themselves, concrete examples of AI are measured by "[their] ability to replicate human mental skills," as De Spiegeleire, Maas, and Sweijs put it.

Commercial examples abound. AI applications like IBM's Watson, Apple's Siri, and Microsoft's Cortana each excel in natural language processing and voice responsiveness, capabilities which we measure against human language processing and communication.

Even in military modernization discourse, the Go-playing AI AlphaGo caught the attention of high-level People's Liberation Army officials when it defeated professional Go player Lee Sedol in 2016. AlphaGo's victories were viewed by some Chinese officials as a turning point that demonstrated the potential of AI to engage in complex analyses and strategizing comparable to that required to wage war, as Elsa Kania notes in a report on AI and Chinese military power.

But, like the attributes projected on to the AI target recognition system, some Chinese officials imposed an oversimplified version of wartime strategies and tactics (and the human cognition they arise from) on to AlphaGo's performance. One strategist in fact noted that Go and warfare are "quite similar."

Just as concerningly, the fact that AlphaGo was anthropomorphized by commentators in both China and America means that the tendency to oversimplify human cognition and overestimate AI is cross-cultural.

The ease with which human abilities are projected on to AI systems like AlphaGo is described succinctly by AI researcher Eliezer Yudkowsky: "Anthropomorphic bias can be classed as insidious: it takes place with no deliberate intent, without conscious realization, and in the face of apparent knowledge." Without realizing it, individuals in and out of military affairs ascribe human-like significance to demonstrations of AI systems. Western militaries should take note.

For military personnel who are in training for the operation or development of AI-enabled military technology, recognizing this anthropomorphic bias and overcoming it is critical. This is best done through an engagement with cognitive science.

The Relevance of Cognitive Science

The anthropomorphizing of AI in military affairs does not mean that AI is always given high marks. It is now cliché for some commentators to contrast human creativity with the fundamental brittleness of machine learning approaches to AI, with an often frank recognition of the narrowness of machine intelligence. This cautious commentary on AI may lead one to think that the overestimation of AI in military affairs is not a pervasive problem. But so long as the dominant standard by which we measure AI is human abilities, merely acknowledging that humans are creative is not enough to mitigate unhealthy anthropomorphizing of AI.

Even commentary on AI-enabled military technology that acknowledges AIs shortcomings fails to identify the need for an AI education to be grounded in cognitive science.

For example, Emma Salisbury writes in War on the Rocks that existing AI systems rely heavily on brute force processing power, yet fail to interpret data and determine whether they are actually meaningful. Such AI systems are prone to serious errors, particularly when they are moved outside their narrowly defined domain of operation.

Such shortcomings reveal, as Joe Chapa writes on AI education in the military, that "an important element in a person's ability to trust technology is learning to recognize a fault or a failure." So, human operators ought to be able to identify when AIs are working as intended, and when they are not, in the interest of trust.

Some high-profile voices in AI research echo these lines of thought and suggest that the cognitive science of human beings should be consulted to carve out a path for improvement in AI. Gary Marcus is one such voice, pointing out that just as humans can think, learn, and create because of their innate biological components, so too do AIs like AlphaGo excel in narrow domains because of their innate components, richly specific to tasks like playing Go.

Moving from narrow to general AI (the distinction between an AI capable of only target recognition and an AI capable of reasoning about targets within scenarios) requires a deep look into human cognition.

The results of AI demonstrations, like the performance of an AI-enabled target recognition system, are data. Just like the results of human demonstrations, these data must be interpreted. The core problem with anthropomorphizing AI is that even cautious commentary on AI-enabled military technology hides the need for a theory of intelligence. To interpret AI demonstrations, theories that borrow heavily from the best example of intelligence available, human intelligence, are needed.

The relevance of cognitive science for an AI military education goes well beyond revealing contrasts between AI systems and human cognition. Understanding the fundamental structure of the human mind provides a baseline account from which artificially intelligent military technology may be designed and evaluated. It has implications for the distinction between narrow and general AI, the limited utility of human-machine confrontations, and the developmental trajectories of existing AI systems.

The key for military personnel is being able to frame and interpret AI demonstrations in ways that can be trusted for both operation and research and development. Cognitive science provides the framework for doing just that.

Lessons for an AI Military Education

It is important that an AI military education not be pre-planned in such detail as to stifle innovative thought. Some lessons for such an education, however, are readily apparent using cognitive science.

First, we need to reconsider narrow and general AI. The distinction between narrow and general AI is a distraction: far from dispelling the unhealthy anthropomorphizing of AI within military affairs, it merely tempers expectations without engendering a deeper understanding of the technology.

The anthropomorphizing of AI stems from a poor understanding of the human mind. This poor understanding is often the implicit framework through which the person interprets AI. Part of this poor understanding is taking a reasonable line of thought (that the human mind should be studied by dividing it up into separate capabilities, like language processing) and transferring it to the study and use of AI.

The problem, however, is that these separate capabilities of the human mind do not represent the fullest understanding of human intelligence. Human cognition is more than these capabilities acting in isolation.

Much of AI development thus proceeds under the banner of engineering, as an endeavor not to re-create the human mind in artificial ways but to perform specialized tasks, like recognizing targets. A military strategist may point out that AI systems do not need to be human-like in the general sense, but rather that Western militaries need specialized systems which can be narrow yet reliable during operation.

This is a serious mistake for the long-term development of AI-enabled military technology. Not only is the narrow and general distinction a poor way of interpreting existing AI systems, but it clouds their trajectories as well. The fragility of existing AIs, especially deep-learning systems, may persist so long as a fuller understanding of human cognition is absent from their development. For this reason (among others), Gary Marcus points out that deep learning is "hitting a wall."

An AI military education would not avoid this distinction but incorporate a cognitive science perspective on it that allows personnel in training to re-think inaccurate assumptions about AI.

Human-Machine Confrontations Are Poor Indicators of Intelligence

Second, pitting AIs against exceptional humans in domains like chess and Go is considered an indicator of AI's progress in commercial domains. The U.S. Defense Advanced Research Projects Agency participated in this trend by pitting Heron Systems' F-16 AI against a skilled Air Force F-16 pilot in simulated dogfighting trials. The goals were to demonstrate AI's ability to learn fighter maneuvers while earning the respect of a human pilot.

These confrontations do reveal something: some AIs really do excel in certain, narrow domains. But anthropomorphizing's insidious influence lurks just beneath the surface: there are sharp limits to the utility of human-machine confrontations if the goals are to gauge the progress of AIs or gain insight into the nature of wartime tactics and strategies.

The idea of training an AI to confront a veteran-level human in a clear-cut scenario is like training humans to communicate like bees by learning the waggle dance. It can be done, and some humans may dance like bees quite well with practice, but what is the actual utility of this training? It does not tell humans anything about the mental life of bees, nor does it provide insight into the nature of communication. At best, any lessons learned from the experience will be tangential to the actual dance and better advanced through other means.

The lesson here is not that human-machine confrontations are worthless. However, whereas private firms may benefit from commercializing AI by pitting AlphaGo against Lee Sedol or Deep Blue against Garry Kasparov, the benefits for militaries may be less substantial. Cognitive science keeps the individual grounded in an appreciation for the limited utility of these confrontations without losing sight of their benefits.

Human-Machine Teaming Is an Imperfect Solution

Human-machine teaming may be considered one solution to the problems of anthropomorphizing AI. To be clear, it is worth pursuing as a means of offloading some human responsibility to AIs.

But the problem of trust, perceived and actual, surfaces once again. Machines designed to take on responsibilities previously underpinned by the human intellect will need to overcome the hurdles already discussed to become reliable and trustworthy for human operators: understanding the human element still matters.

Be Ambitious but Stay Humble

Understanding AI is not a straightforward matter. Perhaps it should not come as a surprise that a technology with the name artificial intelligence conjures up comparisons to its natural counterpart. For military affairs, where the stakes in effectively implementing AI are far higher than for commercial applications, ambition grounded in an appreciation for human cognition is critical for AI education and training. Part of a baseline literacy in AI within militaries needs to include some level of engagement with cognitive science.

Even granting that existing AI approaches are not intended to be like human cognition, both anthropomorphizing and the misunderstandings about human intelligence it carries are prevalent enough across diverse audiences to merit explicit attention for an AI military education. Certain lessons from cognitive science are poised to be the tools with which this is done.

Vincent J. Carchidi holds a Master of Political Science from Villanova University, specializing in the intersection of technology and international affairs, with an interdisciplinary background in cognitive science. Some of his work has been published in AI & Society and the Human Rights Review.
