Archive for the ‘Artificial Intelligence’ Category

The Regulation of Artificial Intelligence in Canada and Abroad: Comparing the Proposed AIDA and EU AI Act – Fasken

Laws governing technology have historically focused on the regulation of information privacy and digital communications. However, governments and regulators around the globe have increasingly turned their attention to artificial intelligence (AI) systems. As the use of AI becomes more widespread and changes how business is done across industries, there are signs that existing declarations of principles and ethical frameworks for AI may soon be followed by binding legal frameworks. [1]

On June 16, 2022, the Canadian government tabled Bill C-27, the Digital Charter Implementation Act, 2022. Bill C-27 proposes to enact, among other things, the Artificial Intelligence and Data Act (AIDA). Although there have been previous efforts to regulate automated decision-making as part of federal privacy reform efforts, AIDA is Canada's first effort to regulate AI systems outside of privacy legislation. [2]

If passed, AIDA would regulate the design, development, and use of AI systems in the private sector in connection with interprovincial and international trade, with a focus on mitigating the risks of harm and bias in the use of high-impact AI systems. AIDA sets out positive requirements for AI systems, as well as monetary penalties and new criminal offences for certain unlawful or fraudulent conduct in respect of AI systems.

Prior to AIDA, in April 2021, the European Commission presented a draft legal framework for regulating AI, the Artificial Intelligence Act (EU AI Act), which was one of the first attempts to comprehensively regulate AI. The EU AI Act sets out harmonized rules for the development, marketing, and use of AI and imposes risk-based requirements for AI systems and their operators, as well as prohibitions on certain harmful AI practices.

Broadly speaking, AIDA and the EU AI Act are both focused on mitigating the risks of bias and harm caused by AI, in a manner that seeks to balance those risks against the need to allow technological innovation. In an effort to be future-proof and keep pace with advances in AI, both AIDA and the EU AI Act define artificial intelligence in a technology-neutral manner. However, AIDA relies on a more principles-based approach, while the EU AI Act is more prescriptive in classifying high-risk AI systems and harmful AI practices and controlling their development and deployment. Further, much of the substance and detail of AIDA is left to be elaborated in future regulations, including the key definition of high-impact AI systems, to which most of AIDA's obligations attach.

The table below sets out some of the key similarities and differences between the current drafts of AIDA and the EU AI Act.

High-risk system means:

The EU AI Act does not apply to:

AIDA does not stipulate an outright ban on AI systems presenting an unacceptable level of risk.

It does, however, make it an offence to:

The EU AI Act prohibits certain AI practices and certain types of AI systems, including:

Persons who process anonymized data for use in AI systems must establish measures (in accordance with future regulations) with respect to:

High-risk systems that use data sets for training, validation and testing must be subject to appropriate data governance and management practices that address:

Data sets must:

Transparency. Persons responsible for high-impact systems must publish on a public website a plain-language description of the AI system which explains:

Transparency. AI systems which interact with individuals and pose transparency risks, such as those that incorporate emotion recognition systems or risks of impersonation or deception, are subject to additional transparency obligations.

Regardless of whether or not the system qualifies as high-risk, individuals must be notified that they are:

Persons responsible for AI systems must keep records (in accordance with future regulations) describing:

High-risk AI systems must:

Providers of high-risk AI systems must:

The Minister of Industry may designate an official to be the Artificial Intelligence and Data Commissioner, whose role is to assist in the administration and enforcement of AIDA. The Minister may delegate any of their powers or duties under AIDA to the Commissioner.

The Minister of Industry has the following powers:

The European Artificial Intelligence Board will assist the European Commission in providing guidance and overseeing the application of the EU AI Act. Each Member State will designate or establish a national supervisory authority.

The Commission has the authority to:

Persons who commit a violation of AIDA or its regulations may be subject to administrative monetary penalties, the details of which will be established by future regulations. Administrative monetary penalties are intended to promote compliance with AIDA.

Contraventions of AIDA's governance and transparency requirements can result in fines:

Persons who commit more serious criminal offences (e.g., contravening the prohibitions noted above or obstructing or providing false or misleading information during an audit or investigation) may be liable to:

While both acts define AI systems relatively broadly, the definition provided in AIDA is narrower. AIDA only captures technologies that process data autonomously or partly autonomously, whereas the EU AI Act does not stipulate any degree of autonomy. This distinction in AIDA is arguably a welcome divergence from the EU AI Act, which as currently drafted would appear to include even relatively innocuous technology, such as the use of a statistical formula to produce an output. That said, there are indications that the EU AI Act's current definition may be modified before its final version is published, and that it will likely be accompanied by regulatory guidance for further clarity. [4]

Both acts are focused on avoiding harm, a concept they define similarly. The EU AI Act is, however, slightly broader in scope as it considers serious disruptions to critical infrastructure a harm, whereas AIDA is solely concerned with harm suffered by individuals.

Under AIDA, high-impact systems will be defined in future regulations, so it is not yet possible to compare AIDA's definition of high-impact systems to the EU AI Act's definition of high-risk systems. The EU AI Act identifies two categories of high-risk systems. The first category is AI systems intended to be used as safety components of products, or as products themselves. The second category is AI systems listed in an annex to the act and which present a risk to the health, safety, or fundamental rights of individuals. It remains to be seen how Canada would define high-impact systems, but the EU AI Act provides an indication of the direction the federal government could take.

Similarly, AIDA also defers to future regulations with respect to risk assessments, while the proposed EU AI Act sets out a graduated approach to risk in the body of the act. Under the EU AI Act, systems presenting an unacceptable level of risk are banned outright. In particular, the EU AI Act explicitly bans manipulative or exploitive systems that can cause harm, real-time biometric identification systems used in public spaces by law enforcement, and all forms of social scoring. AI systems presenting low or minimal risk are largely exempt from regulations, except for transparency requirements.

AIDA only imposes transparency requirements on high-impact AI systems, and does not stipulate an outright ban on AI systems presenting an unacceptable level of risk. It does, however, empower the Minister of Industry to order that a high-impact system presenting a serious risk of imminent harm cease being used.

AIDA's application is limited by the constraints of the federal government's jurisdiction. AIDA broadly applies to actors throughout the AI supply chain, from design to delivery, but only as their activities relate to international or interprovincial trade and commerce. AIDA does not expressly apply to intra-provincial development and use of AI systems. Government institutions (as defined under the Privacy Act) are excluded from AIDA's scope, as are products, services, and activities that are under the direction or control of specified federal security agencies.

The EU AI Act specifically applies to providers (although this may be interpreted broadly) and users of AI systems, including government institutions but excluding where AI systems are exclusively developed for military purposes. The EU AI Act also expressly applies to providers and users of AI systems insofar as the output produced by those systems is used in the EU.

AIDA is largely silent on requirements with respect to data governance. In its current form, it only imposes requirements on the use of anonymized data in AI systems, most of which will be elaborated in future regulations. AIDA's data governance requirements will apply to anonymized data used in the design, development, or use of any AI system, whereas the EU AI Act's data governance requirements will apply only to high-risk systems.

The EU AI Act sets the bar very high for data governance. It requires that training, validation, and testing datasets be free of errors and complete. In response to criticisms of this standard for being too strict, the European Parliament has introduced an amendment to the act that proposes to make error-free and complete datasets an overall objective to the extent possible, rather than a precise requirement.

While AIDA and the EU AI Act both set out requirements with respect to assessment, monitoring, transparency, and data governance, the EU AI Act imposes a much heavier burden on those responsible for high-risk AI systems. For instance, under AIDA, persons responsible for such systems will be required to implement mitigation, monitoring, and transparency measures. The EU AI Act goes a step further by putting high-risk AI systems through a certification scheme, which requires that the responsible entity conduct a conformity assessment and draw up a declaration of conformity before the system is put into use.

Both acts impose record-keeping requirements. Again, the EU AI Act is more prescriptive, but its requirements will apply only to high-risk systems, whereas AIDA's record-keeping requirements would apply to all AI systems.

Finally, both acts contain notification requirements that are limited to high-impact (AIDA) and high-risk (EU AI Act) systems. AIDA imposes a slightly heavier burden, requiring notification for all uses that are likely to result in material harm. The EU AI Act only requires notification if a serious incident or malfunction has occurred.

Both AIDA and the EU AI Act provide for the creation of a new monitoring authority to assist with administration and enforcement. The powers attributed to these entities under both acts are similar.

Both acts contemplate significant penalties for violations of their provisions. AIDA's penalties for more serious offences (up to $25 million CAD or 5% of the offender's gross global revenues from the preceding financial year) are significantly greater than those found in Quebec's newly revised privacy law and the EU's General Data Protection Regulation (GDPR). The EU AI Act's most severe penalty is higher than both the GDPR's and AIDA's: up to €30 million or 6% of gross global revenues from the preceding financial year for non-compliance with prohibited AI practices or the quality requirements set out for high-risk AI systems.

In contrast to the EU AI Act, AIDA also introduces new criminal offences for the most serious offences committed under the act.

Finally, the EU AI Act would also grant discretionary power to Member States to determine additional penalties for infringements of the act.

While both AIDA and the EU AI Act have broad similarities, it is impossible to predict with certainty how similar they could eventually be, given that so much of AIDA would be elaborated in future regulations. Further, at the time of writing, Bill C-27 has only completed first reading, and is likely to be subject to amendments as it makes its way through Parliament.

It is still unclear how much influence the EU AI Act will have on AI regulations globally, including in Canada. Regulators in both Canada and the EU may aim for a certain degree of consistency. Indeed, many have likened the EU AI Act to the GDPR, in that it may set global standards for AI regulation just as the GDPR did for privacy law.

Regardless of the fates of AIDA and the EU AI Act, organizations should start considering how they plan to address a future wave of AI regulation.

For more information on the potential implications of the new Bill C-27, the Digital Charter Implementation Act, 2022, please see our bulletin on this topic, "The Canadian Government Undertakes a Second Effort at Comprehensive Reform to Federal Privacy Law".

[1] There have been a number of recent developments in AI regulation, including the United Kingdom's Algorithmic Transparency Standard, China's draft regulations on algorithmic recommendation systems in online services, the United States' Algorithmic Accountability Act of 2022, and the collaborative effort between Health Canada, the FDA, and the United Kingdom's Medicines and Healthcare products Regulatory Agency to publish Guiding Principles on Good Machine Learning Practice for Medical Device Development.

[2] In the public sphere, the Directive on Automated Decision-Making guides the federal government's use of automated decision systems.

[3] This prohibition is subject to three exhaustively listed and narrowly defined exceptions where the use of such AI systems is strictly necessary to achieve a substantial public interest, the importance of which outweighs the risks: (1) the search for potential victims of crime, including missing children; (2) certain threats to the life or physical safety of individuals or a terrorist attack; and (3) the detection, localization, identification or prosecution of perpetrators or suspects of certain particularly reprehensible criminal offences.

[4] As an indication of potential changes, the Slovenian Presidency of the Council of the European Union tabled a proposed amendment to the act in November 2021 that would effectively narrow the scope of the regulation to machine learning.


Carestream Scientist to Discuss Role of Artificial Intelligence in Radiology at RSNA 2022 – Imaging Technology News

October 17, 2022 – Carestream Health will demonstrate the value and impact of artificial intelligence (AI) in radiology at the upcoming Radiological Society of North America (RSNA) 2022 conference.

"Often, imaging is the first step in making an informed diagnosis," said Luca Bogoni, Ph.D., Head of Advanced Research and Innovation at Carestream. "Carestream has been a leader in applying AI across our solutions, from image capture to processing and workflow efficiency. As we innovate, we work to support diagnostic confidence by using the power of intelligent tools."

Dr. Bogoni will give a presentation entitled "Artificial Intelligence: Fueling Innovation Across the Full Arc of Carestream Solutions" on Sunday, Nov. 27, at 1:00 p.m. in the RSNA Innovation Theater.

To enable a confident diagnosis, Carestream solutions utilize a variety of AI algorithms to improve workflow both in rooms and at the bedside. For example, Eclipse Imaging Intelligence capabilities deliver superb image quality and unrivaled diagnostic confidence with AI, proprietary algorithms and advanced image-processing capabilities. The company's Imaging and Workflow Intelligence solutions help improve image clarity, optimize dose and increase workflow efficiency. AI-based Smart DR Workflow helps capture anatomy precisely, saving time and reducing the number of X-ray retakes. By using AI to positively impact each step of a patient's clinical journey, from image acquisition to diagnosis, Carestream empowers partners with powerful clinical solutions for effective patient management.

"Artificial intelligence is much more than technological advances. It allows radiographers to spend more time on patient care," Dr. Bogoni said. "These tools create more time and space for patient interaction."

For more information: www.carestream.com


ACS Receives BOA Award Providing Data Readiness for Artificial Intelligence Development (DRAID) for DoD Joint Artificial Intelligence Center (JAIC)…

RESTON, Va.--(BUSINESS WIRE)--Assured Consulting Solutions (ACS) is proud to announce receiving an award under the basic ordering agreement (BOA) providing Data Readiness for Artificial Intelligence Development (DRAID) for the DoD Joint Artificial Intelligence Center (JAIC) and the Chief Digital and Artificial Intelligence Office. This BOA is a decentralized vehicle that streamlines rapid procurement and agile delivery of AI data readiness capabilities for Defense AI initiatives. The streamlined methodologies implemented will benefit both industry and government partners by increasing competition and flexibility for each task order.

The successful use of AI depends critically on the availability of quality data that can be used to build reliable AI-enabled systems. The DRAID vehicle will address the entire data lifecycle, from collection through pre-processing, up to the point of AI system creation. It will also support AI-specific requirements, including unique challenges in operationalizing data for AI. DRAID is also customizable: it enables the selection of a custom subset of AI data readiness services to meet individual needs. ACS will leverage our DeepGovernance practice and D2SAM approach to help DoD customers prepare for rapid and agile AI technologies.
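
To make the "data readiness" idea concrete, here is a minimal, hypothetical sketch of a pre-training dataset check. It is not DRAID or D2SAM code; the function, fields, and toy data are invented purely for illustration of the kind of gaps such work surfaces before model development.

import pandas as pd

def readiness_report(df: pd.DataFrame, label_col: str) -> dict:
    """Summarize gaps that would block training a reliable model."""
    return {
        "rows": len(df),
        "missing_labels": int(df[label_col].isna().sum()),
        "missing_cells_pct": round(float(df.isna().mean().mean() * 100), 1),
        "duplicate_rows": int(df.duplicated().sum()),
        "label_balance": df[label_col].value_counts(normalize=True).round(2).to_dict(),
    }

# Toy data standing in for a collected-but-unprepared dataset.
df = pd.DataFrame({"feature": [1.0, 2.0, None, 4.0],
                   "label": ["a", "b", "b", None]})
print(readiness_report(df, label_col="label"))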

ABOUT ASSURED CONSULTING SOLUTIONS

Founded in 2011 and headquartered in Reston, Va., Assured Consulting Solutions is a well-respected and trusted partner, domain expert, and provider of expert-level support. ACS is a certified Woman-Owned Small Business (WOSB) that delivers advanced technology solutions and strategic support services in support of critical national security missions for Intelligence, Defense, and Federal Civilian customers. Learn more at http://www.assured-consulting.com.

ABOUT D2SAM

ACS's Data-Driven Secure Agile Methodology (D2SAM) is a framework of engineering and non-technical tools, processes, and techniques supported by an underlying model-based infrastructure, data environment, and process library. The D2SAM Framework is organized into four cyclical quadrants that reflect the continuous delivery of services and systems to customers. ACS envisions our customers being on a continual journey through strategy, design, transition, and operations (SDTO) cycles leading towards their future goals and operational outcomes. Learn more at https://www.assured-consulting.com/blog/2021/12/17/acs-announces-trademark-registration-of-d2sam


How Can Artificial Intelligence Help With Suicidal Ideation? – Theravive

A new study published in the Journal of Psychiatric Research looked at the performance of machine learning models in predicting suicidal ideation, attempts, and deaths.

"My study sought to quantify the ability of existing machine learning models to predict future suicide-related events," study author Karen Kusuma told us. "While there are other research studies examining a similar question, my study is the first to use clinically relevant and statistically appropriate performance measures for the machine learning studies."

The utility of artificial intelligence has been a controversial topic in psychiatry, and medicine overall. Some studies have demonstrated better performance with machine learning methods, while others have not. Kusuma began the study expecting that machine learning models would perform well.

"Suicide is a leading cause of years of life lost across most of Europe, central Asia, southern Latin America, and Australia (Naghavi, 2019; Australian Bureau of Statistics, 2020)," Kusuma told us. "Standard clinical practice dictates that people seeking help for suicide-related issues need to first be administered a suicide risk assessment. However, research has found that suicide risk predictions tend to be inaccurate."

Only five per cent of people ordinarily classified as high risk died by suicide, while around half of those who died by suicide would normally be categorised as low risk (Large, Ryan, Carter, & Kapur, 2017). Unfortunately, there has been no improvement in suicide prediction research in the last fifty years (Franklin et al., 2017).

"Some researchers have claimed that machine learning will become an efficient and effective alternative to current suicide risk assessments (e.g. Fonseka et al., 2019)," Kusuma told us, "so I wanted to examine the potential of machine learning quantitatively, while evaluating the methodology currently used in the literature."

The researchers searched four research databases and identified 56 relevant studies. Of these, 54 models from 35 studies had sufficient data and were included in the quantitative analyses.

"We found that machine learning models achieved a very good overall performance according to clinical diagnostic standards," Kusuma told us. "The models correctly predicted 66% of the people who would experience a suicide-related event (i.e. ideation, attempt, or death), and correctly predicted 87% of the people who would not experience a suicide-related event."
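
Those two figures correspond to sensitivity (the share of actual events correctly flagged) and specificity (the share of non-events correctly cleared). A minimal Python sketch, using made-up confusion-matrix counts rather than the study's data, shows how such pooled figures are computed:

def sensitivity_specificity(tp, fn, tn, fp):
    """Compute the two headline metrics from confusion-matrix counts."""
    sensitivity = tp / (tp + fn)  # share of actual events correctly flagged
    specificity = tn / (tn + fp)  # share of non-events correctly cleared
    return sensitivity, specificity

# Hypothetical counts chosen only to mirror the reported 66% / 87% figures.
sens, spec = sensitivity_specificity(tp=66, fn=34, tn=870, fp=130)
print(f"sensitivity={sens:.0%}, specificity={spec:.0%}")  # sensitivity=66%, specificity=87%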

However, there was a high prevalence of risk of bias in the research, with many studies processing or analysing the data inappropriately. This isn't a finding specific to machine learning research, but a systemic issue caused largely by a publish-or-perish culture in academia.

"I did expect machine learning models to do well, so I think this review establishes a good benchmark for future research," Kusuma told us. "I do believe that this review shows the potential of machine learning to transform the future of suicide risk prediction. Automated suicide risk screening would be quicker and more consistent than current methods."

This could potentially identify many people at risk of suicide without them having to reach out proactively. However, researchers need to be careful to minimise data leakage, which would skew performance measures. Furthermore, many iterations of development and validation need to take place to ensure that the machine learning models can predict suicide risk in previously unseen populations.
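
As an illustration of the data-leakage caution, the following scikit-learn sketch uses placeholder data (not clinical data, and not the study's pipeline) to contrast a leaky setup, where preprocessing is fitted on all rows before splitting, with the safer split-first approach:

import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 10))    # placeholder features, not clinical data
y = rng.integers(0, 2, size=500)  # placeholder binary outcome

# Leaky version (avoid): StandardScaler().fit_transform(X) before splitting
# lets the scaler "see" the test rows, so held-out scores look optimistic.

# Safer version: split first, then fit preprocessing on the training fold only.
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)
scaler = StandardScaler().fit(X_train)
model = LogisticRegression().fit(scaler.transform(X_train), y_train)
print("held-out accuracy:", model.score(scaler.transform(X_test), y_test))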

"Prior to deployment, researchers also need to ascertain if artificial intelligence would work in an equitable manner across people from different backgrounds," Kusuma told us. "For example, a study has found their machine learning models performed better in predicting deaths by suicide in White patients, as opposed to Black and American Indian/Alaskan Native patients (Coley et al., 2022)."

"That isn't to say that artificial intelligence is inherently discriminatory," Kusuma explained, "but there is less data available for minorities, which often means lower performance in those populations. It's possible that models need to be developed and validated separately for people of different demographic characteristics."

"Machine learning is an exciting innovation in suicide research," Kusuma told us. "An improvement in suicide prediction abilities would mean that resources could be allocated to those who need them the most."


Patricia Tomasi is a mom, maternal mental health advocate, journalist, and speaker. She writes regularly for the Huffington Post Canada, focusing primarily on maternal mental health after suffering from severe postpartum anxiety twice. You can find her Huffington Post biography here. Patricia is also a Patient Expert Advisor for the North American-based Maternal Mental Health Research Collective and is the founder of the online peer support group, the Facebook Postpartum Depression & Anxiety Support Group, with over 1,500 members worldwide. Blog: www.patriciatomasiblog.wordpress.com Email: tomasi.patricia@gmail.com


Neurodiversity Emerges as a Skill in Artificial Intelligence Work – BNN Bloomberg

(Bloomberg) -- Staring closely at the screen, Jordan Wright deftly picks out a barely distinguishable shape with his mouse, bringing to life a stark blue outline from a blur of overexposed features.

It's a process similar to the automated tests that teach computers to distinguish humans from machines by asking someone to identify traffic lights or stop signs in a picture, known as a Captcha.

Only in Wright's case, the shape turns out to be of a Tupolev Tu-160, a supersonic strategic heavy bomber, parked on a Russian base. The outline, one of hundreds a day he picks out from satellite images, is training an algorithm so a US intelligence agency can locate and identify Moscow's firepower in an automated flash.

It's become a run-of-the-mill task for the 25-year-old, who describes himself as on the autism spectrum. Starting in the spring, Wright began working at Enabled Intelligence, a Virginia-based startup that works largely for US intelligence and other federal agencies. Founded in 2020, it specializes in labeling, training and testing the sensitive digital data on which artificial intelligence depends.

Peter Kant, chief executive officer of Enabled Intelligence, said he was inspired to start the company after reading about an Israeli program to recruit people with autism for cyber-intelligence work. The repetitive, detailed work of training artificial intelligence algorithms relies on pattern recognition, puzzle-solving and deep focus that is sometimes a particular strength of autistic workers, he said.

Enabled Intelligence's main type of work, known as data annotation, is usually farmed out to technically skilled but far cheaper labor forces in countries including China, Kenya and Malaysia. That's not an option for US government agencies whose data is sensitive or classified, Kant said, adding that more than half his workforce of 25 are neurodiverse.
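
As a rough illustration of what data annotation output can look like, here is a hypothetical, COCO-style record for a single labeled aircraft outline. The field names, file name, and coordinates are invented and are not Enabled Intelligence's actual format:

# One labeled object from a satellite image, in a COCO-like layout.
annotation = {
    "image": "satellite_tile_0421.png",        # invented file name
    "category": "Tu-160",                      # label chosen by the annotator
    "segmentation": [[1032, 544, 1101, 560,    # x, y vertices of the outline,
                      1098, 612, 1029, 598]],  # in pixel coordinates
    "bbox": [1029, 544, 72, 68],               # x, y, width, height
    "annotator": "human",                      # retained for quality auditing
}
print(annotation["category"], annotation["bbox"])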

"I can easily say this is the best opportunity I've got in my life," said Wright, who grew up with an infatuation for military aviation, dropped out of college and has since experienced long stints of unemployment in between poorly paid work. Most recently, he bagged frozen groceries.

For decades, workers with developmental disabilities, especially autism, have faced discrimination and disproportionately high unemployment levels. A large shortfall in cybersecurity jobs, along with a new push for workplace acceptance and flexibility, in part spurred by the Covid-19 pandemic, has started to focus attention on the abilities of people who think and work differently.

Enabled Intelligence has adjusted its work rules to accommodate its employees, ditching resumes and interviews for online assessments and staggering work hours for those who find it hard to get in early. It has built three new areas for classified material and hopes to secure government clearances for much of its neurodiverse workforce, something the US intelligence community has sometimes struggled to accommodate in the past. Pay starts at $20 an hour, in line with industry standards, and the company provides health insurance, paid leave and a path for promotion. Enabled Intelligence expects to make revenues of $2 million this year and double that next year, along with doubling its workforce.

The US intelligence community has been slow to catch on to the opportunity, critics say. It falls short of the 12% federal target for workforce representation of persons with disabilities, according to the latest statistics out this month. Until this year, it has also regularly fallen short of the 2% federal target for persons with targeted disabilities, which include those with autism.

"In other countries it's old hat," said Teresa Thomas, program lead for neurodiverse talent enablement at MITRE, which operates federally funded research and development centers. She cites well-established programs in Denmark, Israel, the UK and Australia, where one state recently appointed a minister for autism.

Thomas has recently spearheaded a new neurodiverse federal workforce pilot to establish a template for the US government to hire and support autistic workers, but so far only one of the country's 18 intelligence agencies, the National Geospatial-Intelligence Agency, known as NGA, has participated. Now the federal government's cyberdefense agency, the Cybersecurity and Infrastructure Security Agency, intends to undertake a similar pilot.

Stephanie La Rue, chief of diversity, equity and inclusion for the Office of the Director of National Intelligence, told Bloomberg the US intelligence community needs to acknowledge that it's "not where we need to be" when it comes to employing people with disabilities.

"It's like turning the Titanic," said La Rue, adding that NGA's four-person pilot would be reviewed and shared with the wider intelligence community as a promising practice. "Change is going to be incremental."

Research indicated that neurodiverse intelligence officers on the autism spectrum exhibit the ability to parse large data sets and identify patterns and trends at rates that far exceed folks who are not autistic, and were less prone to cognitive bias, La Rue said. Yet securing a clearance to access classified information can still present an additional challenge, according to some observers.

If an office wall board at Enabled Intelligence is any indication, experiences vary. There, 18 anonymous handwritten notes answer the question: What does neurodiversity mean to you?

"Difficult. Trying. It's held me back a lot," says one in an uncertain script. "Strength," answers a second in careful cursive. A third, in capital letters, declares: "SUPERPOWERS."

©2022 Bloomberg L.P.
