Archive for the ‘Artificial Intelligence’ Category

Artificial Intelligence Systems Will Need to Have Certification, CISA Official Says – Nextgov

Vendors of artificial intelligence technology should not be shielded by intellectual property claims and will have to disclose elements of their designs and be able to explain how their offering works in order to establish accountability, according to a leading official from the Cybersecurity and Infrastructure Security Agency.

“I don’t know how you can have a black-box algorithm that’s proprietary and then be able to deploy it and be able to go off and explain what’s going on,” said Martin Stanley, a senior technical advisor who leads the development of CISA’s artificial intelligence strategy. “I think those things are going to have to be made available through some kind of scrutiny and certification around them so that those integrating them into other systems are going to be able to account for what’s happening.”

Stanley was among the speakers on a recent Nextgov and Defense One panel where government officials, including a member of the National Security Commission on Artificial Intelligence, shared some of the ways they are trying to balance reaping the benefits of artificial intelligence with risks the technology poses.

Experts often discuss the rewards of programming machines to do tasks humans would otherwise have to labor on, for both offensive and defensive cybersecurity maneuvers, but the algorithms behind such systems, and the data used to train them to take such actions, are also vulnerable to attack. And the question of accountability applies to users and developers of the technology alike.

Artificial intelligence systems are code that humans write, but they exercise their abilities and become stronger and more efficient using data that is fed to them. If the data is manipulated, or “poisoned,” the outcomes can be disastrous.
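To make the poisoning risk concrete, here is a minimal, hypothetical sketch: the same model is trained twice, once on clean data and once after an attacker flips a fraction of the training labels. The synthetic dataset, the logistic regression model and the 20% flip rate are illustrative assumptions, not details drawn from CISA.

```python
# Minimal sketch of training-data "poisoning" via label flipping.
# All specifics (dataset, model, flip rate) are illustrative assumptions.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Model trained on clean data.
clean_model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# An attacker with access to the training pipeline flips 20% of the labels.
rng = np.random.default_rng(0)
flipped = rng.random(len(y_train)) < 0.20
y_poisoned = np.where(flipped, 1 - y_train, y_train)
poisoned_model = LogisticRegression(max_iter=1000).fit(X_train, y_poisoned)

print("accuracy, clean training data:   ", clean_model.score(X_test, y_test))
print("accuracy, poisoned training data:", poisoned_model.score(X_test, y_test))
```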

Changes to the data could be things that humans wouldn’t necessarily recognize, but that computers do.

“We’ve seen ... trivial alterations that can throw off some of those results, just by changing a few pixels in an image in a way that a person might not even be able to tell,” said Josephine Wolff, a Tufts University cybersecurity professor who was also on the panel.
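Wolff’s few-pixels point can be made concrete with a small sketch. For a linear classifier, the loss gradient with respect to the input is proportional to the model’s weights, so nudging only the handful of most influential pixels toward the other class can change the prediction while the image stays nearly identical to a human eye. The dataset, model and pixel budget below are assumptions for illustration; real attacks apply analogous gradient-based methods to deep networks.

```python
# Minimal sketch of a few-pixel adversarial perturbation against a
# linear image classifier. Dataset, model, and pixel budget are
# illustrative assumptions; the nudge does not always flip the label.
import numpy as np
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression

digits = load_digits(n_class=2)              # 8x8 images of 0s and 1s
model = LogisticRegression(max_iter=1000).fit(digits.data, digits.target)

x = digits.data[0].copy()                    # a correctly classified "0"
w = model.coef_[0]                           # for a linear model, the loss
                                             # gradient w.r.t. the input is
                                             # proportional to these weights
k = 6                                        # budget: perturb only the 6
idx = np.argsort(np.abs(w))[-k:]             # most influential pixels
x_adv = x.copy()
x_adv[idx] = np.where(w[idx] > 0, 16.0, 0.0) # push them toward class "1",
                                             # staying in the valid 0-16 range

print("prediction on original: ", model.predict([x])[0])
print("prediction on perturbed:", model.predict([x_adv])[0])
```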

And while it’s true that behind every AI algorithm is a human coder, the designs are becoming so complex that “you’re looking at automated decision-making where the people who have designed the system are not actually fully in control of what the decisions will be,” Wolff said.

This makes for a threat vector where vulnerabilities are harder to detect until it’s too late.

“With AI, there’s much more potential for vulnerabilities to stay covert than with other threat vectors,” Wolff said. “As models become increasingly complex, it can take longer to realize that something is wrong before there’s a dramatic outcome.”

For this reason, Stanley said, an overarching factor CISA uses to help determine which use cases AI gets applied to within the agency is the extent to which they offer high benefits and low regrets.

“We pick ones that are understandable and have low complexity,” he said.

Among the other things federal personnel need to be mindful of is who has access to the training data.

“You can imagine you get an award done, and everyone knows how hard that is from the beginning, and then the first thing that the vendor says is, ‘OK, send us all your data so we can train the algorithm.’ How’s that going to work?” he said. “Those are the kinds of concerns that we have to be able to address.”

“We’re going to have to continuously demonstrate that we are using the data for the purpose that it was intended,” he said, adding, “There’s some basic science that speaks to how you interact with algorithms and what kind of access you can have to the training data. Those kinds of things really need to be understood by the people who are deploying them.”

A crucial but very difficult element to establish is liability. Wolff said that, ideally, liability would be connected to a potential certification program in which an entity audits artificial intelligence systems for factors like transparency and explainability.

That’s important, she said, for answering the question of “how can we incentivize companies developing these algorithms to feel really heavily the weight of getting them right and be sure to do their own due diligence knowing that there are serious penalties for failing to secure them effectively.”

But this is hard, even in the world of software development more broadly.

Making the connection is still very unresolved. “We’re still in the very early stages of determining what would a certification process look like, who would be in charge of issuing it, what kind of legal protection or immunity might you get if you went through it,” she said. “Software developers and companies have been working for a very long time, especially in the U.S., under the assumption that they can’t be held legally liable for vulnerabilities in their code, and when we start talking about liability in the machine learning and AI context, we have to recognize that that’s part of what we’re grappling with, an industry that for a very long time has had very strong protections from any liability.”

View from the Commission

Responding to this, Katharina McFarland, a member of the National Security Commission on Artificial Intelligence, referenced the Pentagon’s Cybersecurity Maturity Model Certification program.

The point of the CMMC is to establish liability for Defense contractors, Defense Acquisitions Chief Information Security Officer Katie Arrington has said. But McFarland highlighted difficulties facing CMMC that program officials themselves have acknowledged.

“I’m sure you’ve heard of the [CMMC]; there’s a lot of thought going on, the question is the policing of it,” she said. “When you consider the proliferation of the code that’s out there, and the global nature of it, you really will have a challenge trying to take a full thread and to pull it through a knothole to try to figure out where that responsibility is. Our borders are very porous, and machines that we buy from another nation may not be built with the same biases that we have.”

McFarland, a former head of Defense acquisitions, stressed that AI is more often than not viewed with fear and said she wanted to see more of a balance in procurement considerations for the technology.

“I found that we had a perverse incentive built into our system, and that was that we took, sometimes, I think, extraordinary measures to try to creep into the one percent area for failure,” she said. “In other words, we would want to 110% test a system, and in doing so, we might miss the venue where its applicability in a theater to protect soldiers, sailors, airmen and Marines is needed.”

She highlighted up front a need for testing and verification but said it shouldn’t be done at the expense of adoption. To that end, she asked that industry help by sharing the testing tools it uses.

“I would encourage industry to think about this from the standpoint of what tools we would need, because they’re using them, in the department, in the federal space, in the community, to give us transparency and verification,” she said, “so that we have a high confidence in the utility, in the data that we’re using and the AI algorithms that we’re building.”

More here:
Artificial Intelligence Systems Will Need to Have Certification, CISA Official Says - Nextgov

Israel obtains the observer status to the Ad hoc Committee on Artificial Intelligence (CAHAI) – Council of Europe

On 1 July 2020, the Committee of Ministers decided, in line with paragraph 8 of Resolution CM/Res(2011)24, to grant Israel observer status to the Ad hoc Committee on Artificial Intelligence (CAHAI).

Israel will, from now on, fully contribute to the work of the CAHAI. Its participation expands the reach of the CAHAI, which already includes Canada, the Holy See, Japan, Mexico and the United States of America among its observers.

The CAHAI is currently examining the feasibility of a legal framework for the development, design and application of artificial intelligence, based on the Council of Europe standards on human rights, democracy and the rule of law.

The CAHAI's work will be the result of unique and close co-operation among numerous stakeholders from various sectors, ranging from member and non-member states to representatives of civil society, research and academia, and the private sector.

Read more from the original source:
Israel obtains the observer status to the Ad hoc Committee on Artificial Intelligence (CAHAI) - Council of Europe

Increasing Transparency at the National Security Commission on Artificial Intelligence – Lawfare

In 2018, Congress established the National Security Commission on Artificial Intelligence (NSCAI), a temporary, independent body tasked with reviewing the national security implications of artificial intelligence (AI). But two years later, the commission’s activities remain little known to the public. Critics have charged that the commission has conducted activities of interest to the public outside of the public eye, only acknowledging that meetings occurred after the fact and offering few details on evolving commission decision-making. As one commentator remarked, “Companies or members of the public interested in learning how the Commission is studying AI are left only with the knowledge that appointed people met to discuss these very topics, did so, and are not yet releasing any information about their recommendations.”

That perceived lack of transparency may soon change. In June, the U.S. District Court for the District of Columbia handed down its decision in Electronic Privacy Information Center v. National Security Commission on Artificial Intelligence, holding that Congress compelled the NSCAI to comply with the Federal Advisory Committee Act (FACA). Under FACA, the commission must hold open meetings and proactively provide records and other materials to the public. This decision follows a ruling from December 2019, holding that the NSCAI must also provide historical documents upon request under the Freedom of Information Act (FOIA). As a result of these decisions, the public is likely to gain increased access to and insight into the once-opaque operations of the commission.

Lawmakers established the NSCAI in § 1051 of the John S. McCain National Defense Authorization Act (NDAA) for fiscal 2019, which tasked the commission with “consider[ing] the methods and means necessary to advance the development of artificial intelligence, machine learning, and associated technologies to comprehensively address the national security and defense needs of the United States.” The commission’s purview includes an array of issues related to the implications and uses of artificial intelligence and machine learning for national security and defense, including U.S. competitiveness and leadership, research and development, ethics, and data standards.

The NSCAI is currently chaired by Eric Schmidt, the former executive chairman of Google’s parent company, Alphabet. The commission’s 15 members, appointed by a combination of Congress, the secretary of defense and the secretary of commerce, receive classified and unclassified briefings, meet in working groups and engage with industry. They report their findings and recommendations to the president and Congress, including in an annual report.

The Electronic Privacy Information Center (EPIC), a research center focused on privacy and civil liberties issues in the digital age, submitted a request to the NSCAI in September 2019, seeking access to upcoming meetings and records prepared by the commission under FACA and FOIA. In the six-month period prior to the request, the NSCAI held more than a dozen meetings and received more than 100 briefings, according to EPIC. At the time it filed the lawsuit, EPIC noted that the commission’s first major report was also one month overdue for release. When the commission did not comply with the requests under FOIA and FACA, EPIC brought suit under the two laws.

EPIC’s complaint alleged that the NSCAI had conducted its operations opaquely in its short lifespan: “Since its establishment, the commission has operated almost entirely in secret with meetings behind closed doors[,] and has failed to publish or disclose any notices, agendas, minutes, or materials.” If Congress had intended the NSCAI to comply with FOIA and FACA, such activity would not satisfy the statutes’ requirements. Given the potential implications of federal artificial intelligence decisions for privacy, cybersecurity, human rights, and algorithmic bias, EPIC argued that “[p]ublic access to the records and meetings of the AI Commission is vital to ensure government transparency and democratic accountability.” The complaint also noted the potential ramifications of commission activities for the government, private sector, and public, as well as the importance of artificial intelligence safeguards in the national security context due to limited public oversight. According to EPIC, increasing public participation would permit greater input into the development of national AI policy by those whose privacy and data security could potentially be affected.

The U.S. District Court for the District of Columbia addressed EPIC’s FOIA claim in a December 2019 decision. FOIA requires agencies to disclose their records to a party upon request, barring exemptions (including for information classified to protect national security). EPIC alleged that the NSCAI failed to uphold its obligations under FOIA to process FOIA requests in a timely fashion; to process EPIC’s FOIA requests in an expedited manner, in accordance with EPIC’s claims of urgency; and to make available for public inspection and copying its “records, reports, transcripts, minutes, appendixes, working papers, drafts, studies, agenda, or other documents.” The commission, which at the time did not have a FOIA processing mechanism in place or other pending FOIA requests, argued that it was not an agency subject to FOIA.

The court’s inquiry centered on whether the NSCAI is an “agency” under FOIA. Comparing the language establishing the NSCAI with FOIA’s definition of “agency,” the court held that the NSCAI is subject to FOIA. In his decision, District Judge Trevor McFadden noted that “Congress could have hardly been clearer.” As a result, since that time, the commission has had to produce historical documents in response to FOIA requests.

FACA, by contrast, applies forward-looking requirements specifically to federal advisory committees. These mandates include requiring committees to open meetings to the public and announce them in the Federal Register, and to make reports, transcripts and other commission materials publicly available. The measures aim to inform the public about and invite public engagement with the committees that provide expertise to the executive branch. EPIC alleged that the NSCAI violated FACA by failing to hold open meetings and provide notice of them, and by failing to make records available to the public. EPIC sought mandamus relief pursuant to the alleged FACA violations.

In its June decision, the district court ruled that FACA applies to the NSCAI. The commission had filed a motion to dismiss the FACA claims, arguing that it could not be subject to both FOIA and FACA. Since the court had previously held the NSCAI to be an agency for purposes of FOIA, the commission reasoned that it could not simultaneously be an advisory committee under FACA. McFadden disagreed. Invoking the Roman god Janus’s two faces, one forward-looking and the other backward-facing, he wrote, “[L]ike Janus, the Commission does indeed have two faces, and ... Congress obligated it to comply with FACA as well as FOIA.” The court could not identify a conflict between the requirements of the two statutes, despite differences in their obligations and exceptions. Rather, it noted that if such conflicts arise, “it will be incumbent on the parties and the Court to resolve any difficulties.” The court dismissed additional claims under the Administrative Procedure Act (APA) for lack of subject matter jurisdiction, as it determined that the commission is not an agency under the APA definition.

The court’s decision turned on whether the NSCAI is an advisory committee subject to FACA. The court determined that the statutory text of the 2019 NDAA establishing the NSCAI “fit[s] the [FACA] definition of advisory committee like a glove.” Furthermore, turning to the full text of the 2019 NDAA, the court noted that the law contains at least two instances in which it explicitly exempts a government body from FACA. The court read the 2019 NDAA as silent when FACA applies and explicit when FACA does not apply. Given Congress’s silence on the applicability of FACA to the NSCAI in the 2019 NDAA, and again in the 2020 NDAA, the court reasoned that Congress intended the NSCAI to be subject to FACA.

In determining the NSCAI to be subject to FACA, in addition to FOIA, the court has compelled the commission to adopt a more transparent operating posture going forward. Since the December 2019 decision on FOIA, the NSCAI has produced a number of historical records in response to FOIA requests. The recent ruling on FACA grounds requires the NSCAI to hold open meetings, post notice of meetings in advance and make documents publicly available. As a result, the commission’s process of compiling findings and developing recommendations for government action related to artificial intelligence and machine learning will likely become more accessible to the public.

The two court decisions come in time to have a noticeable impact on the remaining term of the temporary commission. While the NSCAI was previously due to disband later in 2020, § 1735 of the NDAA for fiscal 2020 extended the commission’s lifespan by one year, to October 1, 2021. Citing federal budgetary timelines and the pace of AI development, the commission released its first set of recommendations in March 2020 and expressed its intent to publish additional recommendations on a quarterly basis thereafter. The commission is due to submit its final report to Congress by March 1, 2021. As the NSCAI prepares to enter its final year of operations and develop its closing recommendations, the public will have a clearer window into the commission’s work.

Read more:
Increasing Transparency at the National Security Commission on Artificial Intelligence - Lawfare

Artificial intelligence levels show AI is not created equal. Do you know what the vendor is selling? – Spend Matters

Just as there are eight levels of analytics, as mentioned in a recent Spend Matters PRO brief, artificial intelligence (AI) exists in various stages today, even though there is no such thing as true AI by any standard worth its technical weight.

But just because we don’t yet have true AI doesn’t mean today’s AI can’t help procurement improve its performance. We just need enough computational intelligence to allow software to do the tactical and non-value-added tasks that software should be able to perform with all of the modern computational power available to us. As long as the software can do the tasks as well as an average human expert the vast majority of the time (and kicks up a request for help when it doesn’t have enough information, or when it is less likely than a human expert to perform the task well), that’s more than good enough.
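A minimal sketch of that escalate-when-unsure pattern might look like the following; the classify_invoice stand-in, its labels, and the 0.9 confidence cutoff are all hypothetical names and numbers chosen for illustration, not anything the article specifies.

```python
# Hypothetical sketch: automate the tactical task when the model is
# confident, kick up a request for human help when it is not.
from dataclasses import dataclass

CONFIDENCE_THRESHOLD = 0.9  # assumed cutoff below which a human is likelier
                            # to outperform the software

@dataclass
class Decision:
    label: str
    confidence: float
    handled_by: str

def classify_invoice(text: str) -> tuple[str, float]:
    """Stand-in for a real model; returns (label, confidence)."""
    if "toner" in text.lower():
        return "office-supplies", 0.97
    return "unknown", 0.40

def route(text: str) -> Decision:
    label, confidence = classify_invoice(text)
    if confidence >= CONFIDENCE_THRESHOLD:
        return Decision(label, confidence, handled_by="software")
    # Not enough information: escalate to a human expert.
    return Decision(label, confidence, handled_by="human-review-queue")

print(route("Invoice #221: toner cartridges x12"))
print(route("Invoice #222: misc. services, see attachment"))
```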

The reality is, for some basic tactical tasks, there are plenty of software options today (e.g., intelligent invoice processing). And even for some highly specialized tasks that we thought could never be done by a computer, we have software that can do it better, like early cancerous growth detection in MRIs and X-rays.

That being said, we also have a lot of software on the market that claims to be artificial intelligence but is not even remotely close to what AI is today, let alone what useful AI software should be. For software to be classified as AI today, it must be capable of learning, evolving its models or code, and improving over time.

So, in this PRO article, we are going to define the levels of AI that do exist today, and that may exist tomorrow. This will allow you to identify what truth there is to the claims that a vendor is making and whether the software will actually be capable of doing what you expect it to.

Not counting true AI, there are five levels of AI that are available today or will likely be available tomorrow:

Let’s take a look at each group.

See the original post:
Artificial intelligence levels show AI is not created equal. Do you know what the vendor is selling? - Spend Matters

How Coronavirus and Protests Broke Artificial Intelligence, And Why It’s A Good Thing – Observer

Until February 2020, Amazon thought that the algorithms that controlled everything from their shelf space to their promoted products were practically unbreakable. For years they had used simple and effective artificial intelligence (AI) to predict buying patterns, and planned their stock levels, marketing, and much more based on a simple question: who usually buys what?

Yet as COVID-19 swept the globe, they found that the technology they relied on was much more shakable than they had thought. As sales of hand sanitizer, face masks, and toilet paper soared, sites such as Amazon found that their automated systems were rendered almost useless as AI models were thrown into utter disarray.

Elsewhere, the use of AI in everything from journalism to policing has been called into question. As long-overdue action on racial inequalities in the US has been demanded in recent weeks, companies have been challenged for using technology that regularly displays sometimes catastrophic ethnic biases.

Microsoft was recently held to account after the AI algorithms that it used on its MSN news website confused mixed-race members of girlband Little Mix, and many companies have now suspended the sale of facial recognition technologies to law enforcement agencies after it was revealed that they are significantly less effective at identifying images of minority individuals, leading to potentially inaccurate leads being pursued by police.

“The past month has brought many issues of racial and economic injustice into sharp relief,” says Rediet Abebe, an incoming assistant professor of computer science at the University of California, Berkeley. “AI researchers are grappling with what our role should be in dismantling systemic racism, economic oppression, and other forms of injustice and discrimination. This has been an opportunity to reflect more deeply on our research practices, on whose problems we deem to be important, whom we aim to serve, whom we center, and how we conduct our research.”


From the COVID-19 pandemic to the Black Lives Matter protests, 2020 has been a year characterized by global unpredictability and social upheaval. Technology has been a crucial medium of effecting change and keeping people safe, from test and track apps to the widespread use of social media to spread the word about protests and petitions. But amidst this, machine learning AI has sometimes failed to meet its remit, lagging behind rapid changes in social behavior and falling short on the very thing that it is supposed to do best: gauging the data fed into it and making smart choices.

The problem often lies not with the technology itself, but in a lack of data used to build algorithms, meaning that they fail to reflect the breadth of our society and the unpredictable nature of events and human behavior.

“Most of the challenges to AI that have been identified by the pandemic relate to the substantial changes in behavior of people, and therefore in the accuracy of AI models of human behavior,” says Douglas Fisher, an associate professor of computer science at Vanderbilt University. “Right now, AI and machine learning systems are stovepiped, so that although a current machine learning system can make accurate predictions about behaviors under the conditions under which it learned them, the system has no broader knowledge.”
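One common engineering response to the gap Fisher describes is to monitor live inputs for distribution drift and suspend or retrain a model when the data stops resembling what it learned from. The sketch below simulates that check with a two-sample Kolmogorov-Smirnov test; the “pre-pandemic” and “panic-buying” demand numbers and the 0.05 significance level are assumptions for illustration.

```python
# Illustrative sketch: flag distribution drift between the data a demand
# model was trained on and the data it now sees. The simulated demand
# figures and the 0.05 threshold are assumptions, not real retailer data.
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(42)
training_demand = rng.normal(loc=100, scale=10, size=5000)  # learned regime
live_demand = rng.normal(loc=240, scale=60, size=500)       # panic buying

result = ks_2samp(training_demand, live_demand)
if result.pvalue < 0.05:
    print(f"Drift detected (KS statistic {result.statistic:.2f}): "
          "suspend automated decisions, retrain, or fall back to humans.")
else:
    print("Live data is consistent with the training distribution.")
```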

The last few months have highlighted the need for greater nuance in AI; in short, we need technology that can be more human. But in a society increasingly experimenting with using AI to carry out such crucial roles as identifying criminal suspects or managing food supply chains, how can we ensure that machine learning models are sufficiently knowledgeable?

“Most challenges related to machine learning over the past months result from change in the data being fed into algorithms,” explains Kasia Borowska, managing director of AI consultancy Brainpool.ai. “What we see a lot of these days is companies building algorithms that just about do the job. They are not robust, not scalable, and prone to bias. This has often been due to negligence or trying to cut costs: businesses have clear objectives, and these are often to do with saving money or simply automating manual processes, and often the ethical side, removing biases or being prepared for change, isn’t seen as the primary objective.”

Borowska believes that both biases in AI algorithms and an inability to adapt to change and crisis stem from the same problem, and that they present an opportunity to build better technology in the future. She argues that by investing in building better algorithms, issues such as bias and an inability to predict user behavior in times of crisis can be eliminated.

Although companies might previously have been loath to invest time and money into building datasets that did much more than the minimum that they needed to operate, she hopes that the combination of COVID and an increased awareness of machine learning biases might be the push that they need.

“I think that a lot of businesses that have seen their machine learning struggle will now think twice before they try and deploy a solution that isn’t robust or hasn’t been tested enough,” she says. “Hopefully the failure of some AI systems will motivate data scientists as well as corporations to invest time and resources in the background work ahead of jumping into the development of AI solutions. We will see more effort being put into ensuring that AI products are robust and bias-free.”

The failures of AI have been undeniably problematic, but perhaps they present an opportunity to build a smarter future. After all, in recent months we have also seen the potential of AI, with new outbreak risk software and deep learning models that help the medical community to predict drugs and treatments and develop prototype vaccines. These strides in progress demonstrate the power of combining smart technology with human intervention, and show that with the right data AI has the power to enact massive positive change.

This year has revealed the full scope of AI, laying bare the challenges that developers face alongside the potential for tremendous benefits. Building datasets that encompass the broadest scope of human experience may be challenging, but it will also make machine learning more equitable, more useful, and much more powerful. It’s an opportunity that those in the field should be keen to corner.

Read the original post:
How Coronavirus and Protests Broke Artificial Intelligence, And Why It’s A Good Thing - Observer