Archive for the ‘Artificial Intelligence’ Category

Israel obtains the observer status to the Ad hoc Committee on Artificial Intelligence (CAHAI) – Council of Europe

On 1 July 2020, the Committee of Ministers decided, in line with paragraph 8 of Resolution CM/Res(2011)24, to grant Israel observer status to the Ad hoc Committee on Artificial Intelligence (CAHAI).

Israel will, from now on, contribute fully to the work of the CAHAI. Its participation expands the reach of the CAHAI, which already includes Canada, the Holy See, Japan, Mexico and the United States of America among its observers.

The CAHAI is currently examining the feasibility of a legal framework for the development, design and application of artificial intelligence, based on the Council of Europe standards on human rights, democracy and the rule of law.

The CAHAI's work will be the result of unique and close co-operation between numerous stakeholders from various sectors, including member and non-member States as well as representatives of civil society, research and academia, and the private sector.

Read more from the original source:
Israel obtains the observer status to the Ad hoc Committee on Artificial Intelligence (CAHAI) - Council of Europe

Increasing Transparency at the National Security Commission on Artificial Intelligence – Lawfare

In 2018, Congress established the National Security Commission on Artificial Intelligence (NSCAI), a temporary, independent body tasked with reviewing the national security implications of artificial intelligence (AI). But two years later, the commission's activities remain little known to the public. Critics have charged that the commission has conducted activities of interest to the public outside of the public eye, only acknowledging that meetings occurred after the fact and offering few details on evolving commission decision-making. As one commentator remarked, "Companies or members of the public interested in learning how the Commission is studying AI are left only with the knowledge that appointed people met to discuss these very topics, did so, and are not yet releasing any information about their recommendations."

That perceived lack of transparency may soon change. In June, the U.S. District Court for the District of Columbia handed down its decision in Electronic Privacy Information Center v. National Security Commission on Artificial Intelligence, holding that Congress compelled the NSCAI to comply with the Federal Advisory Committee Act (FACA). Under FACA, the commission must hold open meetings and proactively provide records and other materials to the public. This decision follows a ruling from December 2019, holding that the NSCAI must also provide historical documents upon request under the Freedom of Information Act (FOIA). As a result of these decisions, the public is likely to gain increased access to and insight into the once-opaque operations of the commission.

Lawmakers established the NSCAI in § 1051 of the John S. McCain National Defense Authorization Act (NDAA) for fiscal 2019, which tasked the commission with "consider[ing] the methods and means necessary to advance the development of artificial intelligence, machine learning, and associated technologies to comprehensively address the national security and defense needs of the United States." The commission's purview includes an array of issues related to the implications and uses of artificial intelligence and machine learning for national security and defense, including U.S. competitiveness and leadership, research and development, ethics, and data standards.

The NSCAI is currently chaired by Eric Schmidt, the former executive chairman of Google's parent company, Alphabet. The commission's 15 members, appointed by a combination of Congress, the secretary of defense and the secretary of commerce, receive classified and unclassified briefings, meet in working groups and engage with industry. They report their findings and recommendations to the president and Congress, including in an annual report.

The Electronic Privacy Information Center (EPIC), a research center focused on privacy and civil liberties issues in the digital age, submitted a request to the NSCAI in September 2019, seeking access to upcoming meetings and records prepared by the commission under FACA and FOIA. In the six-month period prior to the request, the NSCAI held more than a dozen meetings and received more than 100 briefings, according to EPIC. At the time it filed the lawsuit, EPIC noted that the commission's first major report was also one month overdue for release. When the commission did not comply with the requests under FOIA and FACA, EPIC brought suit under the two laws.

EPIC's complaint alleged that the NSCAI had conducted its operations opaquely in its short lifespan: since its establishment, the commission "has operated almost entirely in secret with meetings behind closed doors[,]" and "has failed to publish or disclose any notices, agendas, minutes, or materials." If Congress had intended the NSCAI to comply with FOIA and FACA, such activity would not satisfy the statutes' requirements. Given the potential implications of federal artificial intelligence decisions for privacy, cybersecurity, human rights, and algorithmic bias, EPIC argued that "[p]ublic access to the records and meetings of the AI Commission is vital to ensure government transparency and democratic accountability." The complaint also noted the potential ramifications of commission activities for the government, private sector, and public, as well as the importance of artificial intelligence safeguards in the national security context due to limited public oversight. According to EPIC, increasing public participation would permit greater input into the development of national AI policy by those whose privacy and data security could potentially be affected.

The U.S. District Court for the District of Columbia addressed EPIC's FOIA claim in a December 2019 decision. FOIA requires agencies to disclose their records to a party upon request, barring exemptions (including for information classified to protect national security). EPIC alleged that the NSCAI failed to uphold its obligations under FOIA to process FOIA requests in a timely fashion; to process EPIC's FOIA requests in an expedited manner, in accordance with EPIC's claims of urgency; and to make available for public inspection and copying its records, reports, transcripts, minutes, appendixes, working papers, drafts, studies, agenda, or other documents. The commission, which at the time did not have a FOIA processing mechanism in place or other pending FOIA requests, argued that it was not an agency subject to FOIA.

The court's inquiry centered on whether the NSCAI is an "agency" under FOIA. Comparing the language establishing the NSCAI with FOIA's definition of "agency," the court held that the NSCAI is subject to FOIA. In his decision, District Judge Trevor McFadden noted that "Congress could have hardly been clearer." As a result, since that time, the commission has had to produce historical documents in response to FOIA requests.

FACA, by contrast, applies forward-looking requirements specifically to federal advisory committees. These mandates include requiring committees to open meetings to the public and announce them in the Federal Register, and to make reports, transcripts and other commission materials publicly available. The measures aim to inform the public about and invite public engagement with the committees that provide expertise to the executive branch. EPIC alleged that the NSCAI violated FACA by failing to hold open meetings and provide notice of them, and by failing to make records available to the public. EPIC sought mandamus relief pursuant to the alleged FACA violations.

In its June decision, the district court ruled that FACA applies to the NSCAI. The commission had filed a motion to dismiss the FACA claims, arguing that it could not be subject to both FOIA and FACA. Since the court had previously held the NSCAI to be an agency for purposes of FOIA, the commission reasoned that it could not simultaneously be an advisory committee under FACA. McFadden disagreed. Invoking the Roman god Janus's two faces, one forward-looking and the other backward-facing, he wrote that "[l]ike Janus, the Commission does indeed have two faces, and ... Congress obligated it to comply with FACA as well as FOIA." The court could not identify a conflict between the requirements of the two statutes, despite differences in their obligations and exceptions. Rather, it noted that if such conflicts arise, "it will be incumbent on the parties and the Court to resolve any difficulties." The court dismissed additional claims under the Administrative Procedure Act (APA) for lack of subject matter jurisdiction, as it determined that the commission is not an agency under the APA definition.

The court's decision turned on whether the NSCAI is an advisory committee subject to FACA. The court determined that the statutory text of the 2019 NDAA establishing the NSCAI "fit[s] the [FACA] definition of advisory committee like a glove." Furthermore, turning to the full text of the 2019 NDAA, the court noted that the law contains at least two instances in which it explicitly exempts a government body from FACA. The court read the 2019 NDAA as silent when FACA applies and explicit when FACA does not apply. Given Congress's silence on the applicability of FACA to the NSCAI in the 2019 NDAA, and again in the 2020 NDAA, the court reasoned that Congress intended the NSCAI to be subject to FACA.

In determining the NSCAI to be subject to FACA, in addition to FOIA, the court has compelled the commission to adopt a more transparent operating posture going forward. Since the December 2019 decision on FOIA, the NSCAI has produced a number of historical records in response to FOIA requests. The recent ruling on FACA grounds requires the NSCAI to hold open meetings, post notice of meetings in advance and make documents publicly available. As a result, the commission's process of compiling findings and developing recommendations for government action related to artificial intelligence and machine learning will likely become more accessible to the public.

The two court decisions come in time to have a noticeable impact on the remaining term of the temporary commission. While the NSCAI was previously due to disband later in 2020, § 1735 of the NDAA for fiscal 2020 extended the commission's lifespan by one year, to October 1, 2021. Citing federal budgetary timelines and the pace of AI development, the commission released its first set of recommendations in March 2020 and expressed its intent to publish additional recommendations on a quarterly basis thereafter. The commission is due to submit its final report to Congress by March 1, 2021. As the NSCAI prepares to enter its final year of operations and develop its closing recommendations, the public will have a clearer window into the commission's work.

Read more:
Increasing Transparency at the National Security Commission on Artificial Intelligence - Lawfare

Artificial intelligence levels show AI is not created equal. Do you know what the vendor is selling? – Spend Matters

Just as there are eight levels of analytics, as mentioned in a recent Spend Matters PRO brief, artificial intelligence (AI) comes in various stages today, even though there is no such thing as true AI by any standard worth its technical weight.

But just because we don't yet have true AI doesn't mean today's AI can't help procurement improve its performance. We just need enough computational intelligence to allow software to do the tactical and non-value-added tasks that software should be able to perform with all of the modern computational power available to us. As long as the software can do a task as well as an average human expert the vast majority of the time, and can kick up a request for help when it doesn't have enough information or when the probability that it will outperform a human expert is too low, that's more than good enough.
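To make that escalation rule concrete, here is a minimal sketch in Python of the "automate when confident, otherwise ask a human" pattern described above. The model interface, labels and the 0.9 confidence threshold are hypothetical illustrations, not any particular vendor's implementation.

```python
# Minimal sketch of confidence-gated automation with a human-in-the-loop fallback.
# The Prediction type and the 0.9 threshold are assumptions for illustration only.
from dataclasses import dataclass


@dataclass
class Prediction:
    label: str         # e.g. a spend category or approval decision for an invoice
    confidence: float   # the model's estimated probability that the label is correct


CONFIDENCE_THRESHOLD = 0.9  # assumed point at which the software matches an average expert


def route_task(prediction: Prediction) -> str:
    """Automate the task when the model is confident enough; otherwise escalate."""
    if prediction.confidence >= CONFIDENCE_THRESHOLD:
        return f"auto-processed as '{prediction.label}'"
    return "escalated to a human expert for review"


if __name__ == "__main__":
    print(route_task(Prediction("office-supplies", 0.97)))    # handled automatically
    print(route_task(Prediction("capital-equipment", 0.55)))  # kicked up for help
```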

The reality is, for some basic tactical tasks, there are plenty of software options today (e.g., intelligent invoice processing). And even for some highly specialized tasks that we thought could never be done by a computer, we have software that can do them better, like detecting early cancerous growths in MRIs and X-rays.

That being said, there is also a lot of software on the market that claims to be artificial intelligence but is not even remotely close to what AI is today, let alone what useful AI software should be. For software to be classified as AI today, it must be capable of learning, evolving its models or code, and improving over time.
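As a rough illustration of what "evolving its models over time" can mean in practice, the sketch below uses incremental (online) learning, where the model keeps updating as new labelled examples arrive instead of staying frozen at deployment. The feature layout, labels and use of scikit-learn's SGDClassifier are assumptions for the example, not a description of any vendor's product.

```python
# Rough sketch of software that keeps learning after deployment, using
# incremental (online) learning via scikit-learn's SGDClassifier.partial_fit.
# The synthetic features and labels below are made up for illustration.
import numpy as np
from sklearn.linear_model import SGDClassifier

rng = np.random.default_rng(42)
classes = np.array([0, 1])  # e.g. 0 = "route to a clerk", 1 = "auto-approve"
model = SGDClassifier()

# Initial batch of historical, labelled examples.
X_hist = rng.random((200, 4))
y_hist = (X_hist[:, 0] + X_hist[:, 1] > 1.0).astype(int)
model.partial_fit(X_hist, y_hist, classes=classes)

# Later, as corrected examples arrive (say, a human fixes a bad prediction),
# the model updates its weights rather than remaining static.
X_new = rng.random((20, 4))
y_new = (X_new[:, 0] + X_new[:, 1] > 1.0).astype(int)
model.partial_fit(X_new, y_new)

print("accuracy on the newest batch:", model.score(X_new, y_new))
```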

So, in this PRO article, we are going to define the levels of AI that do exist today, and that may exist tomorrow. This will allow you to identify what truth there is to the claims that a vendor is making and whether the software will actually be capable of doing what you expect it to.

Not counting true AI, there are five levels of AI that are available today or will likely be available tomorrow:

Let's take a look at each group.

See the original post:
Artificial intelligence levels show AI is not created equal. Do you know what the vendor is selling? - Spend Matters

How Coronavirus and Protests Broke Artificial Intelligence, and Why It's a Good Thing – Observer

Until February 2020, Amazon thought that the algorithms that controlled everything from their shelf space to their promoted products were practically unbreakable. For years they had used simple and effective artificial intelligence (AI) to predict buying patterns, and planned their stock levels, marketing, and much more based on a simple question: who usually buys what?

Yet as COVID-19 swept the globe they found that the technology that they relied on was much more shakable than they had thought. As sales of hand sanitizer, face masks, and toilet paper soared, sites such as Amazon found that their automated systems were rendered almost useless as AI models were thrown into utter disarray.

Elsewhere, the use of AI in everything from journalism to policing has been called into question. As long-overdue action on racial inequalities in the US has been demanded in recent weeks, companies have been challenged for using technology that regularly displays sometimes catastrophic ethnic biases.

Microsoft was recently held to account after the AI algorithms that it used on its MSN news website confused mixed-race members of girlband Little Mix, and many companies have now suspended the sale of facial recognition technologies to law enforcement agencies after it was revealed that they are significantly less effective at identifying images of minority individuals, leading to potentially inaccurate leads being pursued by police.

"The past month has brought many issues of racial and economic injustice into sharp relief," says Rediet Abebe, an incoming assistant professor of computer science at the University of California, Berkeley. "AI researchers are grappling with what our role should be in dismantling systemic racism, economic oppression, and other forms of injustice and discrimination. This has been an opportunity to reflect more deeply on our research practices, on whose problems we deem to be important, whom we aim to serve, whom we center, and how we conduct our research."

From the COVID-19 pandemic to the Black Lives Matter protests, 2020 has been a year characterized by global unpredictability and social upheaval. Technology has been a crucial medium of effecting change and keeping people safe, from test and track apps to the widespread use of social media to spread the word about protests and petitions. But amidst this, machine learning AI has sometimes failed to meet its remit, lagging behind rapid changes in social behavior and falling short on the very thing that it is supposed to do best: gauging the data fed into it and making smart choices.

The problem often lies not with the technology itself, but in a lack of data used to build algorithms, meaning that they fail to reflect the breadth of our society and the unpredictable nature of events and human behavior.

"Most of the challenges to AI that have been identified by the pandemic relate to the substantial changes in behavior of people, and therefore in the accuracy of AI models of human behavior," says Douglas Fisher, an associate professor of computer science at Vanderbilt University. "Right now, AI and machine learning systems are stovepiped, so that although a current machine learning system can make accurate predictions about behaviors under the conditions under which it learned them, the system has no broader knowledge."
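One hedged illustration of the problem Fisher describes, and of a common mitigation, is to monitor live data for distribution shift so that a model trained on pre-pandemic behaviour is not trusted blindly once conditions change. The synthetic demand numbers and the 0.05 significance level below are assumptions for the example, not a description of Amazon's actual systems.

```python
# Sketch of a simple data-drift check: compare a feature's distribution in recent
# (pandemic-era) data against its training-era distribution with a two-sample KS test.
# The synthetic data and the 0.05 threshold are assumptions for illustration only.
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(0)

# Weekly demand for a product as seen when the model was trained.
training_demand = rng.normal(loc=100, scale=10, size=5000)

# The same feature during the pandemic: sharply higher and far more volatile.
live_demand = rng.normal(loc=400, scale=80, size=500)

statistic, p_value = ks_2samp(training_demand, live_demand)
if p_value < 0.05:
    print("Distribution shift detected: retrain the model or fall back to human planners.")
else:
    print("Live data still resembles the training data; the model can keep running.")
```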

The last few months have highlighted the need for greater nuance in AI; in short, we need technology that can be more human. But in a society increasingly experimenting with using AI to carry out such crucial roles as identifying criminal suspects or managing food supply chains, how can we ensure that machine learning models are sufficiently knowledgeable?

"Most challenges related to machine learning over the past months result from change in the data being fed into algorithms," explains Kasia Borowska, Managing Director of AI consultancy Brainpool.ai. "What we see a lot of these days is companies building algorithms that just about do the job. They are not robust, not scalable, and prone to bias. This has often been due to negligence or trying to cut costs; businesses have clear objectives, and these are often to do with saving money or simply automating manual processes, and often the ethical side (removing biases or being prepared for change) isn't seen as the primary objective."

Kasia believes that both biases in AI algorithms and an inability to adapt to change and crisis stem from the same problem and present an opportunity to build better technology in the future. She argues that by investing in building better algorithms, issues such as bias and an inability to predict user behavior in times of crisis can be eliminated.

Although companies might previously have been loath to invest time and money into building datasets that did much more than the minimum that they needed to operate, she hopes that the combination of COVID and an increased awareness of machine learning biases might be the push that they need.

"I think that a lot of businesses that have seen their machine learning struggle will now think twice before they try and deploy a solution that isn't robust or hasn't been tested enough," she says. "Hopefully the failure of some AI systems will motivate data scientists as well as corporations to invest time and resources in the background work ahead of jumping into the development of AI solutions. We will see more effort being put into ensuring that AI products are robust and bias-free."

The failures of AI have been undeniably problematic, but perhaps they present an opportunity to build a smarter future. After all, in recent months we have also seen the potential of AI, with new outbreak risk software and deep learning models that help the medical community to predict drugs and treatments and develop prototype vaccines. These strides in progress demonstrate the power of combining smart technology with human intervention, and show that with the right data AI has the power to enact massive positive change.

This year has revealed the full scope of AI, laying bare the challenges that developers face alongside the potential for tremendous benefits. Building datasets that encompass the broadest scope of human experience may be challenging, but it will also make machine learning more equitable, more useful, and much more powerful. It's an opportunity that those in the field should be keen to corner.

Read the original post:
How Coronavirus and Protests Broke Artificial Intelligence, and Why It's a Good Thing - Observer

Artificial intelligence is on the rise – Independent Australia

New developments and opportunities are opening up in artificial intelligence, says Paul Budde.

I RECENTLY followed a "lunch box lecture" organised by the University of Sydney. In the talk, Professor Zdenka Kuncic explored the very topical issue of artificial intelligence.

The world is infatuated with artificial intelligence (AI), and understandably so, given its super-human ability to find patterns in big data, as we all notice when using Google, Facebook, Amazon, eBay and so on. But the so-called general intelligence that humans possess remains elusive for AI.

Interestingly, Professor Kuncic approached this topic from a physics perspective, viewing the brain's neural network as a physical hardware system rather than as algorithm-based software of the kind used, for example, in AI research for social media.

Her approach reveals clues that suggest the underlying nature of intelligence is physical.

Basically, what this means is that a software-based system will require ongoing input from software specialists to make updates based on new developments. Her approach, however, is to look at a physical system based on nanotechnology and use these networks as self-learning systems, where human intervention is no longer required.

Imagine the implications of the communications technologies that are on the horizon, where basically billions of sensors and devices will be connected to networks.

The data from these devices need to be processed in real-time and dynamic decisions will have to be made without human intervention. The driverless car is, of course, a classic example of such an application.

The technology needed to make such a system work will have to be based on edge technology in the device out there in the field. It is not going to work in any scaled-up situation if the data from these devices first have to be sent to the cloud for processing.

Nano networks are a possible solution for such situations. A nanonetwork or nanoscale network is a set of interconnected nanomachines (devices a few hundred nanometers or a few micrometres at most in size), which at the moment can perform only very simple tasks such as computing, data storing, sensing and actuation.

However, Professor Kuncic expects that new developments will expand the capabilities of single nanomachines, both in complexity and in range of operation, by allowing them to coordinate, share and fuse information.

In her work, Professor Kuncic concentrates on electromagnetics for communication at the nanoscale.

This is commonly defined as the 'transmission and reception of electromagnetic radiation from components based on novel nanomaterials'.

Professor Kuncic mentioned that this technology is still in its infancy, but she was very upbeat about the future, based on the results of recent research and international collaboration. Advancements in carbon and molecular electronics have opened the door to a new generation of electronic nanoscale components such as nanobatteries, nanoscale energy-harvesting systems, nano-memories, logical circuitry at the nanoscale and even nano-antennas.

From a communication perspective, the unique properties observed in nanomaterials will determine the specific bandwidths for the emission of electromagnetic radiation, the time lag of the emission, and the magnitude of the power emitted for a given input energy.

The researchers are looking at the output of these nanonetworks rather than the input. The process is analogue rather than digital. In other words, the potential output provides a range of possible choices, rather than one (digital) outcome.

The trick is to understand what choices are made in a nanonetwork and why.

There are two main alternatives for communication at the nanoscale: the electromagnetic approach pursued by Professor Kuncic, and an approach based on molecular communication.

Nanotechnology could have an enormous impact on, for example, the future of 5G. If nanotechnology can be included in the various Internet of Things (IoT) sensors and devices, then this will open up an enormous number of new applications.

First, it has been experimentally demonstrated that it is possible to receive and demodulate an electromagnetic wave by means of a nano radio.

Second, graphene-based nano-antennas have been analysed as potential electromagnetic radiators in the terahertz band.

Once these technologies are further developed and commercialised, we could see a revolution in edge computing.

Paul Budde is an Independent Australia columnist and managing director of Paul Budde Consulting, an independent telecommunications research and consultancy organisation. You can follow Paul on Twitter @PaulBudde.

See the article here:
Artificial intelligence is on the rise - Independent Australia