Archive for the ‘Artificial Intelligence’ Category

Artificial Intelligence, Foresight, and the Offense-Defense Balance – War on the Rocks

There is a growing perception that AI will be a transformative technology for international security. The current U.S. National Security Strategy names artificial intelligence as one of a small number of technologies that will be critical to the country's future. Senior defense officials have commented that the United States is at an inflection point in the power of artificial intelligence, and even that AI might be the first technology to change the fundamental nature of war.

However, there is still little clarity regarding just how artificial intelligence will transform the security landscape. One of the most important open questions is whether applications of AI, such as drone swarms and software vulnerability discovery tools, will tend to be more useful for conducting offensive or defensive military operations. If AI favors the offense, then a significant body of international relations theory suggests that this could have destabilizing effects. States could find themselves increasingly able to use force and increasingly frightened of having force used against them, making arms-racing and war more likely. If AI favors the defense, on the other hand, then it may act as a stabilizing force.

Anticipating the impact of AI on the so-called offense-defense balance across different military domains could be extremely valuable. It could help us to foresee new threats to stability before they arise and act to mitigate them, for instance by pursuing specific arms agreements or prioritizing the development of applications with potential stabilizing effects.

Unfortunately, the historical record suggests that attempts to forecast changes in the offense-defense balance are often unsuccessful. It can even be difficult to detect the changes that newly adopted technologies have already caused. In the lead-up to the First World War, for instance, most analysts failed to recognize that the introduction of machine guns and barbed wire had tilted the offense-defense balance far toward defense. The years of intractable trench warfare that followed came as a surprise to the states involved.

While there are clearly limits on the ability to anticipate shifts in the offense-defense balance, some forms of technological change have more predictable effects than others. In particular, as we argue in a recent paper, changes that essentially scale up existing capabilities are likely to be much easier to analyze than changes that introduce fundamentally new capabilities. Substantial insight into the impacts of AI can be achieved by focusing on this kind of quantitative change.

Two Kinds of Technological Change

In a classic analysis of arms races, Samuel Huntington draws a distinction between qualitative and quantitative changes in military capabilities. A qualitative change involves the introduction of what might be considered a new form of force. A quantitative change involves the expansion of an existing form of force.

Although this is a somewhat abstract distinction, it is easy to illustrate with concrete examples. The introduction of dreadnoughts in naval surface warfare in the early twentieth century is most naturally understood as a qualitative change in naval technology. In contrast, the subsequent naval arms race, which saw England and Germany competing to manufacture ever larger numbers of dreadnoughts, represented a quantitative change.

Attempts to understand changes in the offense-defense balance tend to focus almost exclusively on the effects of qualitative changes. Unfortunately, the effects of such qualitative changes are likely to be especially difficult to anticipate. One particular reason is that the introduction of a new form of force, from the tank to the torpedo to the phishing attack, will often warrant the introduction of substantially new tactics. Since these tactics emerge at least in part through a process of trial and error, as both attackers and defenders learn from the experience of conflict, there is a limit to how much can ultimately be foreseen.

Although quantitative technological changes are given less attention, they can also in principle have very large effects on the offense-defense balance. Furthermore, these effects may exhibit certain regularities that make them easier to anticipate than the effects of qualitative change. Focusing on quantitative change may then be a promising way forward to gain insight into the potential impact of artificial intelligence.

How Numbers Matter

To understand how quantitative changes can matter, and how they can be predictable, it is useful to consider the case of a ground invasion. If the sizes of two armies double in the lead-up to an invasion, for example, then it is not safe to assume that the effect will simply cancel out and leave the balance of forces the same as it was prior to the doubling. Rather, research on combat dynamics suggests that increasing the total number of soldiers will tend to benefit the attacker when force levels are sufficiently low and benefit the defender when force levels are sufficiently high. The reason is that the initial growth in numbers primarily improves the attacker's ability to send soldiers through poorly protected sections of the defender's border. Eventually, however, the border becomes increasingly saturated with ground forces, eliminating the attacker's ability to exploit poorly defended sections.

Figure 1: A simple model illustrating the importance of force levels. The ability of the attacker (in red) to send forces through poorly defended sections of the border rises and then falls as total force levels increase.
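
To make this dynamic concrete, here is a minimal numerical sketch (our own illustration, not the model from the underlying paper; the border length, holding density, and gap throughput are assumed parameters). The defender can hold only as much border as its force level allows, and the attacker pushes troops through whatever remains uncovered:

```python
def attacker_breakthrough(total_force, border_km=100.0,
                          hold_density=10.0, gap_capacity=5.0):
    """Toy model: both sides field `total_force` troops along a border.

    The defender needs `hold_density` troops per km to hold ground, so
    low force levels leave gaps. The attacker can push at most
    `gap_capacity` troops per km through those poorly protected
    sections, and never more troops than it actually has.
    """
    covered_km = min(border_km, total_force / hold_density)
    gap_km = border_km - covered_km
    return min(total_force, gap_km * gap_capacity)

for force in [50, 200, 400, 700, 1000, 1200]:
    print(force, attacker_breakthrough(force))
# Breakthrough rises at first (50, 200, 300) and then falls (150)
# before collapsing to zero once the border saturates at 1,000 troops.
```

In this toy model the attacker's breakthrough grows with total force while uncovered border remains, then collapses once the border saturates, mirroring the rise and fall sketched in Figure 1.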

This phenomenon is also likely to arise in many other domains where there are multiple vulnerable points that a defender hopes to protect. For example, in the cyber domain, increasing the number of software vulnerabilities that an attacker and defender can each discover will benefit the attacker at first. The primary effect will initially be to increase the attacker's ability to discover vulnerabilities that the defender has failed to discover and patch. In the long run, however, the defender will eventually discover every vulnerability that can be discovered, leaving nothing behind for the attacker to exploit.
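
The same rise-and-fall can be simulated directly. In the sketch below (again our own hedged illustration; the pool size and trial count are assumed parameters), attacker and defender each independently discover k of a fixed pool of discoverable vulnerabilities, and the attacker can exploit only the bugs the defender has not also found and patched:

```python
import random

def expected_exploitable(k, pool_size=1000, trials=2000):
    """Expected number of bugs the attacker finds that the defender
    has not independently found (and patched), when each side
    discovers k of pool_size discoverable vulnerabilities."""
    total = 0
    for _ in range(trials):
        attacker = set(random.sample(range(pool_size), k))
        defender = set(random.sample(range(pool_size), k))
        total += len(attacker - defender)
    return total / trials

for k in [0, 100, 250, 500, 750, 1000]:
    print(k, round(expected_exploitable(k), 1))
# The count climbs toward its peak at k = pool_size / 2 (the expected
# value is k * (1 - k / pool_size)) and falls to zero once the
# defender can discover, and patch, everything.
```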

In general, growth in numbers will often benefit the attacker when numbers are sufficiently low and benefit the defender when they are sufficiently high. We refer to this regularity as offensive-then-defensive scaling and suggest that it can be helpful for predicting shifts in the offense-defense balance in a wide range of domains.

Artificial Intelligence and Quantitative Change

Applications of artificial intelligence will undoubtedly be responsible for an enormous range of qualitative changes to the character of war. It is easy to imagine states such as the United States and China competing to deploy ever more novel systems in a cat-and-mouse game that has little to do with quantity. An emphasis on qualitative advantage over quantitative advantage has been a fairly explicit feature of American military strategy since at least the so-called Second Offset strategy that emerged in the middle of the Cold War.

However, some emerging applications of artificial intelligence do seem to lend themselves most naturally to competition on the basis of rapidly increasing quantity. Armed drone swarms are one example. Paul Scharre has argued that the military utility of these swarms may lie in the fact that they offer an opportunity to substitute quantity for quality. A large swarm of individually expendable drones may be able to overwhelm the defenses of individual weapon platforms, such as aircraft carriers, by attacking from more directions or in more waves than the platforms' defenses are capable of managing. If this method of attack is in fact viable, one could see a race to build larger and larger swarms, ultimately resulting in swarms containing billions of drones. The phenomenon of offensive-then-defensive scaling suggests that growing swarm sizes could initially benefit attackers, who can concentrate ever more intensely on less well-defended targets and parts of targets, before potentially allowing defensive swarms to win out if numbers grow far enough.

Automated vulnerability discovery tools, which have the potential to vastly increase the number of software vulnerabilities that both attackers and defenders can discover, are another relevant example. The DARPA Cyber Grand Challenge recently showcased machine systems autonomously discovering, patching, and exploiting software vulnerabilities. Recent work on novel techniques such as deep reinforcement fuzzing also suggests significant promise. The computer security expert Bruce Schneier has suggested that continued progress will ultimately make it feasible to discover and patch every single vulnerability in a given piece of software, shifting the cyber offense-defense balance significantly toward defense. Before this point, however, there is reason for concern that these new tools could initially benefit attackers most of all.

Forecasting the Impact of Technology

The impact of AI on the offense-defense balance remains highly uncertain. The greatest impact might come from an as-yet-unforeseen qualitative change. Our contribution here is to point out one particularly precise way in which AI could affect the offense-defense balance: through quantitative increases in capabilities in domains that exhibit offensive-then-defensive scaling. Even if this idea turns out to be mistaken, it is our hope that understanding it will make researchers more likely to see other impacts. By foreseeing and understanding these potential impacts, policymakers could be better prepared to mitigate the most dangerous consequences, whether by prioritizing the development of applications that favor defense, investigating countermeasures, or constructing stabilizing norms and institutions.

Work to understand and forecast the impacts of technology is hard and should not be expected to produce confident answers. The importance of the challenge, however, means that researchers should still try, while doing so in a scientific, humble way.

This publication was made possible (in part) by a grant to the Center for a New American Security from Carnegie Corporation of New York. The statements made and views expressed are solely the responsibility of the author(s).

Ben Garfinkel is a DPhil scholar in International Relations, University of Oxford, and research fellow at the Centre for the Governance of AI, Future of Humanity Institute.

Allan Dafoe is associate professor in the International Politics of AI, University of Oxford, and director of the Centre for the Governance of AI, Future of Humanity Institute. For more information, see http://www.governance.ai and http://www.allandafoe.com.

Image: U.S. Air Force (Photo by Tech. Sgt. R.J. Biermann)


7 tips to get your resume past the robots reading it – CNBC

There are about 7.3 million open jobs in the U.S., according to the most recent Job Openings and Labor Turnover Survey from the Bureau of Labor Statistics. And for many job seekers vying for these openings, the likelihood they'll submit their application to an artificial intelligence-powered hiring system is growing.

A 2017 Deloitte report found 33% of employers already use some form of AI in the hiring process to save time and reduce human bias. These algorithms scan applications for specific words and phrases around work history, responsibilities, skills and accomplishments to identify candidates who match well with the job description.

These assessments may also aim to predict a candidate's future success by matching their abilities and accomplishments to those held by a company's top performers.

But it remains unclear how effective these programs are.

As Sue Shellenbarger reports for The Wall Street Journal, many vendors of these systems don't tell employers how their algorithms work. And employers aren't required to inform job candidates when their resumes will be reviewed by these systems.

That said, "it's sometimes possible to tell whether an employer is using an AI-driven tool by looking for a vendor's logo on the employer's career site," Shellenbarger writes. "In other cases, hovering your cursor over the 'submit' button will reveal the URL where your application is being sent."

CNBC Make It spoke with career experts about how to make sure your next application makes it past the initial robot test.

AI-powered hiring platforms are designed to identify the candidates whose resumes most closely match an open job description. These machines are nuanced, but their use still means that very specific wording, repetition and the prioritization of certain phrases matter.

Job seekers can make sure to highlight the right skills to get past initial screens by using tools, such as an online word cloud generator, to understand what the AI system will prioritize most. Candidates can drop in the text of a job description and see which words appear most often, based on how large they appear within the word cloud.
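
For those who prefer a script to a web tool, a few lines of Python approximate what a word cloud surfaces (a rough sketch; the tokenizer and stopword list are simplified placeholder choices, not a real screening algorithm):

```python
import re
from collections import Counter

STOPWORDS = {"the", "and", "to", "of", "a", "in", "for", "with",
             "on", "is", "you", "will", "we", "our", "as", "or"}

def top_terms(job_description, n=15):
    """Most frequent meaningful words in a posting: a rough proxy
    for what appears largest in a word cloud."""
    words = re.findall(r"[a-z][a-z+#.-]*", job_description.lower())
    counts = Counter(w for w in words if w not in STOPWORDS)
    return counts.most_common(n)

# Paste the real posting text in place of this placeholder string.
print(top_terms("Seeking a data analyst with SQL and Python skills; "
                "the analyst will build SQL dashboards and reports."))
```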

CareerBuilder also created an AI resume builder to help candidates include skills on an application they may not have identified on their own.

Including transferable skills mentioned in the job description can also increase your resume's odds. After all, executives cited in a recent IBM report say soft skills such as flexibility, time management, teamwork and communication are some of the most important skills in the workforce today.

"Job seekers should be cognizant of how they are positioning their professional background to put their best foot forward," Michelle Armer, chief people officer at talent acquisition company CareerBuilder, tells CNBC Make It. "Since a candidate's skill set will help set them apart from other applicants, putting these front and center on a resume will help make sure you're giving skills the attention they deserve."

It's also worth noting that AI enables employers to source candidates from the entire application system more easily, rather than limiting consideration just to people who applied to a specific role. "As a result," says TopResume career expert Amanda Augustine, "you could be contacted for a role the company believes is a good fit even if you never specifically applied for that opportunity."

When it comes to actually writing your resume, here are seven ways to make sure it looks best for the robots who will be reading it.

Use a text-based format like a Microsoft Word document rather than PDF, HTML, Open Office, or Apple Pages, so buzzwords can be accurately scanned by AI programs. Augustine suggests job seekers skip images, graphics and logos, which might not be readable. Test how well bots will comprehend your resume by copying it into a plain text file, then making sure nothing gets out of order and no strange symbols pop up.
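
That plain-text test can be scripted. Here is a minimal sketch using the python-docx library, assuming a Word resume saved at the hypothetical path resume.docx:

```python
from docx import Document  # pip install python-docx

# Extract the text that a simple parser would see in the resume.
doc = Document("resume.docx")  # hypothetical file name
plain_text = "\n".join(p.text for p in doc.paragraphs)

# Note: text placed in tables, text boxes, headers, or footers does
# not appear in doc.paragraphs, which is itself a hint that an
# automated screener may miss it.
with open("resume_plain.txt", "w", encoding="utf-8") as f:
    f.write(plain_text)

print(plain_text[:500])  # eyeball the ordering and the characters
```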

Mirror the job description in your work history. Job titles should be listed in reverse-chronological order, Augustine says, because machines favor documents with a clear hierarchy to their information. For each role, prioritize the most relevant information that matches the critical responsibilities and requirements of the job you're applying for. "The bullets that directly match one of the job requirements should be listed first," Augustine adds, "and other notable contributions or accomplishments should be listed lower in a set of bullets."

Include keywords from the job description, such as the role's day-to-day responsibilities, desired previous experience and overall purpose within the organization. Consider having a separate skills section, Augustine says, where you list any certifications, technical skills and soft skills mentioned in the job description.
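
A quick way to audit this is to compare the vocabulary of the posting against your resume. The sketch below is illustrative only; the file paths are hypothetical and the token pattern is a crude stand-in for real keyword extraction:

```python
import re

def tokens(text):
    """Lowercase word tokens, keeping tech terms like c++ or node.js."""
    return set(re.findall(r"[a-z][a-z+#.-]*", text.lower()))

def missing_keywords(resume_text, job_description):
    """Job-description words that never appear in the resume."""
    return sorted(tokens(job_description) - tokens(resume_text))

resume = open("resume_plain.txt", encoding="utf-8").read()        # hypothetical paths
posting = open("job_description.txt", encoding="utf-8").read()
print(missing_keywords(resume, posting))
```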

Quantify performance results, Shellenbarger writes. Highlight ones that involve meeting company goals, driving revenue, leading a certain number of people or projects, being efficient with costs and so on.

Tailor each application to the description of each role you're applying for. These AI systems are generally built to weed out disqualifying resumes that don't match enough of the job description. The more closely you mirror the job description in your application, the better, Augustine says.

Don't place information in the document header or footer, even though resumes traditionally list contact information here. According to Augustine, many application systems can't read the information in this section, so crucial details may be omitted.

Network within the company to build contacts and get your resume to the hiring manager's inbox directly. "While AI helps employers narrow down the number of applicants they will move forward with for interviews," Armer says, "networking is also important."

AI hiring programs show promise at filling roles with greater efficiency, but they can also perpetuate bias when they reward candidates whose backgrounds and experiences resemble those of existing employees. Armer stresses that hiring algorithms need to be built by teams of diverse individuals across race, ethnicity, gender, experience and other background factors in order to minimize bias.

This is also where getting your resume in front of a human can pay off the most.

"When you have someone on the inside advocating for you, you are often able to bypass the algorithm and have your application delivered directly to the recruiter or hiring manager, rather than getting caught up in the screening process," Augustine says.

Augustine recommends job seekers take stock of their existing network and identify those who may know someone at the companies they're interested in working at. "Look for professional organizations and events that are tied to your industry; 10times.com is a great place to find events around the world for every imaginable field," she adds.

Finally, Armer recommends those starting their job hunt review and polish their social media profiles.



Finland offers crash course in artificial intelligence to EU – The Associated Press

HELSINKI (AP) - Finland is offering a techy Christmas gift to all European Union citizens: a free-of-charge online course in artificial intelligence in their own language, officials said Tuesday.

The tech-savvy Nordic nation, led by the 34-year-old Prime Minister Sanna Marin, is marking the end of its rotating presidency of the EU at the end of the year with a highly ambitious goal.

Instead of handing out the usual ties and scarves to EU officials and journalists, the Finnish government has opted to give a practical understanding of AI to 1% of EU citizens, or about 5 million people, through a basic online course by the end of 2021.

It is teaming up with the University of Helsinki, Finland's largest and oldest academic institution, and the Finland-based tech consultancy Reaktor.

Teemu Roos, a University of Helsinki associate professor in the department of computer science, described the nearly $2 million project as a civics course in AI to help EU citizens cope with society's ever-increasing digitalization and the possibilities AI offers in the jobs market.

"The course covers elementary AI concepts in a practical way and doesn't go into deeper concepts like coding," he said.

"We have enormous potential in Europe but what we lack is investments into AI," Roos said, adding that the continent faces fierce AI competition from digital giants in China and the United States.

The initiative is paid for by the Finnish ministry for economic affairs and employment, and officials said the course is meant for all EU citizens whatever their age, education or profession.

Since its launch in Finland in 2018, The Elements of AI has been phenomenally successful. It is the most popular course ever offered by the University of Helsinki, which traces its roots back to 1640, and more than 220,000 students from over 110 countries have taken it online so far, Roos said.

A quarter of those enrolled so far are aged 45 and over, and some 40% are women. The share of women is nearly 60% among Finnish participants - a remarkable figure in the male-dominated technology domain.

Consisting of several modules, the online course is meant to be completed in about six weeks full time - or up to six months on a lighter schedule - and is currently available in Finnish, English, Swedish and Estonian.

Together with Reaktor and local EU partners, the university is set to translate it into the EU's remaining 20 official languages over the next two years.

Megan Schaible, COO of Reaktor Education, said during the project's presentation in Brussels last week that the company decided to join forces with the Finnish university to prove that AI should not be left in the hands of a few elite coders.

An official University of Helsinki diploma will be provided to those passing and Roos said many EU universities would likely give credits for taking the course, allowing students to include it in their curriculum.

For technology aficionados, the University of Helsinki's computer science department is known as the alma mater of Linus Torvalds, the Finnish software engineer who developed the Linux operating system during his studies there in the early 1990s.

In September, Google set up its free-of-charge Digital Garage training hub in the Finnish capital with the intention of helping job-seekers, entrepreneurs and children to brush up their digital skills including AI.


The Machines Are Learning, and So Are the Students – The New York Times

Riiid claims students can increase their scores by 20 percent or more with just 20 hours of study. It has already incorporated machine-learning algorithms into its program to prepare students for English-language proficiency tests and has introduced test prep programs for the SAT. It expects to enter the United States in 2020.

Still more transformational applications are being developed that could revolutionize education altogether. Acuitus, a Silicon Valley start-up, has drawn on lessons learned over the past 50 years in education (cognitive psychology, social psychology, computer science, linguistics and artificial intelligence) to create a digital tutor that it claims can train experts in months rather than years.

Acuitus's system was originally funded by the Defense Department's Defense Advanced Research Projects Agency for training Navy information technology specialists. John Newkirk, the company's co-founder and chief executive, said Acuitus focused on teaching concepts and understanding.

The company has taught nearly 1,000 students with its course on information technology and is in the prototype stage for a system that will teach algebra. Dr. Newkirk said the underlying A.I. technology was content-agnostic and could be used to teach the full range of STEM subjects.

Dr. Newkirk likens A.I.-powered education today to the Wright brothers' early exhibition flights: proof that it can be done, but far from what it will be a decade or two from now.

The world will still need schools, classrooms and teachers to motivate students and to teach social skills, teamwork and soft subjects like art, music and sports. The challenge for A.I.-aided learning, some people say, is not the technology, but bureaucratic barriers that protect the status quo.

"There are gatekeepers at every step," said Dr. Sejnowski, who together with Barbara Oakley, a computer-science engineer at Michigan's Oakland University, created a massive open online course, or MOOC, called "Learning How to Learn."

He said that by using machine-learning systems and the internet, new education technology would bypass the gatekeepers and go directly to students in their homes. "Parents are figuring out that they can get much better educational lessons for their kids through the internet than they're getting at school," he said.

Craig S. Smith is a former correspondent for The Times and hosts the podcast Eye on A.I.


How Artificial Intelligence Is Humanizing the Healthcare Industry – HealthITAnalytics.com

December 17, 2019 - Seventy-nine percent of healthcare professionals indicate that artificial intelligence tools have helped mitigate clinician burnout, suggesting that the technology enables providers to deliver more engaging, patient-centered care, according to a survey conducted by MIT Technology Review and GE Healthcare.

As artificial intelligence tools have slowly made their way into the healthcare industry, many have voiced concerns that the technology will remove the human aspect of patient care, leaving individuals in the care of robots and machines.

"Healthcare institutions have been anticipating the impact that artificial intelligence (AI) will have on the performance and efficiency of their operations and their workforces, and the quality of patient care," the report stated.

Contrary to common, yet unproven, fears that machines will replace human workers, AI technologies in health care may actually be re-humanizing healthcare, just as the system itself shifts to value-based care models that may favor the outcome patients receive instead of the number of patients seen.

Through interviews with over 900 healthcare professionals, researchers found that providers are already using AI to improve data analysis, enable better treatment and diagnosis, and reduce administrative burdens, all of which free up clinicians' time to perform other tasks.


"Numerous technologies are in play today to allow healthcare professionals to deliver the best care, increasingly customized to patients, and at lower costs," the report said.

"Our survey has found medical professionals are already using AI tools to improve both patient care and back-end business processes, from increasing the accuracy of oncological diagnosis to increasing the efficiency of managing schedules and workflow."

The survey found that medical staff with pilot AI projects spend one-third less time writing reports, while those with extensive AI programs spend two-thirds less time writing reports. Additionally, 45 percent of participants said that AI has helped increase consultation time, as well as time to perform surgery and other procedures.

Among those with the most extensive AI rollouts, 70 percent expect to spend more time performing procedures than doing administrative or other work.

"AI is being used to assume many of a physician's more mundane administrative responsibilities, such as taking notes or updating electronic health records," researchers said. "The more AI is deployed, the less time doctors spend at their computers."


Respondents also indicated that AI is helping them gain an edge in the healthcare market. Eighty percent of business and administrative healthcare professionals said that AI is helping them improve revenue opportunities, while 81 percent said they think AI will make them more competitive providers.

The report also showed that AI-related projects will continue to receive an increasing portion of healthcare spending now and in the future. Seventy-nine percent of respondents said they will be spending more to develop AI applications.

Respondents also indicated that AI has increased the operational efficiency of healthcare organizations. Seventy-eight percent of healthcare professionals said that their AI deployments have already created workflow improvements in areas including schedule management.

Using AI to optimize schedule management and other administrative tasks creates opportunities to leverage AI for more patient-facing applications, allowing clinicians to work with patients more closely.

"AI's core value proposition is in both improving diagnosing abilities and reducing regulatory and data complexities by automating and streamlining workflow. This allows healthcare professionals to harness the wealth of insight the industry is generating, without drowning in it," the report said.


AI has also helped healthcare professionals reduce clinical errors. Medical staff who don't use AI cited fighting clinical error as a key challenge two-thirds of the time, more than double the rate among medical staff who have AI deployments.

Additionally, advanced tools are helping users identify and treat clinical issues. Seventy-five percent of respondents agree that AI has enabled better predictions in the treatment of disease.

"AI-enabled decision-support algorithms allow medical teams to make more accurate diagnoses," researchers noted.

This means doing something big by doing something really small: noticing minute irregularities in patient information. That could be the difference between acting on a life-threatening issue and missing it.

While AI has shown a lot of promise in the industry, the technology still comes with challenges. Fifty-seven percent of respondents said that integrating AI applications into existing systems is challenging, and more than half of professionals planning to deploy AI raise concerns about medical professional adoption, support from top management, and technical support.

To overcome these challenges, researchers recommended that clinical staff collaborate to implement and deploy AI tools.

"AI needs to work for healthcare professionals as part of a robust, integrated ecosystem. It needs to be more than deploying technology; in fact, the more humanized the application of AI is, the more it will be adopted and improve results and return on investment. After all, in healthcare, the priority is the patient," researchers concluded.
