Archive for the ‘Artificial Intelligence’ Category

Over 80% of Health Execs Have Artificial Intelligence Plans in Place – HealthITAnalytics.com

November 02, 2020 - Eighty-three percent of healthcare organizations have implemented an artificial intelligence strategy, while another 15 percent are planning to develop one, according to a recent survey conducted by Optum.

Fifty-nine percent of leaders said they believe AI will deliver significant cost savings within three years, a 90 percent increase since 2018.

The results of the survey show that the healthcare industry's increase in AI adoption is driven by executives seeing more tangible benefits from the technology, including improved business performance and patient outcomes.

"These insights demonstrate that as those in late-stage AI implementation grow more familiar with AI, as well as the benefits it yields, they in turn become more comfortable and confident, generating momentum in which AI grows more beneficial more quickly," researchers stated.

"With AI, the more quickly organizations in early or middle stages of AI deployment move forward, the sooner they will overcome uncertainty and unlock the rewards of this powerful business tool."

The survey also revealed that the current healthcare crisis has catalyzed the use of AI in medical settings. More than half (56 percent) said that their response to COVID-19 has caused them to accelerate or expand their AI implementation strategies.

Additionally, of those who reported being in the late stages of AI development, 51 percent believe they'll achieve a return on AI investments faster because of their pandemic response.

"The need to have a strategy in place may have come into sharp focus during the COVID-19 pandemic, when organizations scrambled to use every tool at their disposal to overcome the unprecedented strain being placed on the industry," the team said.

"AI's ability to automate workflows and help simplify the communication and analysis of complex data can help alleviate that burden."

Researchers also noted that organizations that take too long to plan or deploy AI strategies risk falling behind their tech-savvy counterparts: Fifty-five percent of companies with $1 billion or more in revenue have an AI strategy in place, compared to just 37 percent of their lower-revenue peers.

In addition to cost savings, healthcare executives are looking to AI to improve patient health. Fifty-five percent of leaders rank improving health outcomes as the greatest impact of AI investments, while an equal share rank improving patient experiences as the top impact.

"Executives' emphasis on these consumer-focused benefits serves as a reminder that health care is first and foremost an industry focused on the well-being of those it serves, and that AI has implications for real people most in need," researchers said.

To realize these benefits, healthcare leaders are planning to apply AI to a range of tasks. Forty percent plan to monitor data from Internet of Things (IoT) devices such as wearable technologies, while 37 percent want to accelerate research for new therapeutic or clinical discoveries.

Another 37 percent want to use AI tools to assign codes for accurate diagnoses, facilities, and procedures.

These applications are all well-suited to advanced analytics technologies, the team noted.

"Internet-connected remote patient monitoring devices enable more complete virtual health offerings, and AI can identify signals and trends within those data streams; AI can help prioritize potential investigative targets for treatments or vaccines; and automating business processes can enable organizations to achieve more even when resources are under duress," researchers said.
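
As a concrete illustration of the first point, here is a minimal sketch of how an anomaly detector might flag unusual readings in a wearable's data stream. The data is simulated and the detector choice (scikit-learn's IsolationForest) is an assumption for illustration, not anything the survey describes.

```python
# Hypothetical example: flagging anomalous heart-rate readings from a wearable.
# The data stream and thresholds are simulated; real remote-patient-monitoring
# pipelines would be considerably more involved.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
heart_rate = rng.normal(72, 5, size=1000)        # a day of baseline readings
heart_rate[-5:] = rng.normal(140, 5, size=5)     # a sudden anomalous spike

detector = IsolationForest(contamination=0.01, random_state=0)
flags = detector.fit_predict(heart_rate.reshape(-1, 1))  # -1 marks anomalies

print("flagged readings at indices:", np.where(flags == -1)[0])
```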

While the industry appears to be increasingly adopting and deploying AI technology, several barriers to use and implementation still exist. Seventy-three percent of respondents said they had concerns about AI because of a lack of transparency in how the data is used or how the technology makes decisions. Just under 70 percent said the role of humans in the decision-making process was a top concern.

These findings highlight ongoing concerns that AI will take over the jobs of human clinicians, or come to conclusions that may not be based in evidence or provider expertise.

"As executives prepare to infuse AI into their operations, they should ask system designers to include an explainable interface whenever possible to help recipients of AI-driven predictions better understand what's influencing those recommendations," researchers said.
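
For a sense of what such an explainable interface might surface, here is a minimal sketch using permutation importance, one common model-agnostic technique, to rank which inputs most influence a model's predictions. The dataset and model are generic placeholders, not anything from the Optum survey.

```python
# Illustrative only: rank input features by how much shuffling each one
# degrades the model's score. Higher scores = more influence on predictions.
from sklearn.datasets import load_diabetes
from sklearn.ensemble import RandomForestRegressor
from sklearn.inspection import permutation_importance

data = load_diabetes()
model = RandomForestRegressor(random_state=0).fit(data.data, data.target)

result = permutation_importance(model, data.data, data.target,
                                n_repeats=10, random_state=0)
for name, score in sorted(zip(data.feature_names, result.importances_mean),
                          key=lambda t: -t[1]):
    print(f"{name}: {score:.3f}")
```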

The researchers also advised that while routine processes can be targeted for automation, complex decisions should always include a human perspective within the workflow. That means the AI is augmenting human capabilities and helping individuals work at the top of their license. People's judgment remains the deciding factor, and the human touch of health care is maintained.

Many leaders have also recognized the importance of integrating social determinants of health data into AI algorithms. Fifty-nine percent have already incorporated non-clinical information into their AI plans to improve predictions about future health needs, while another 36 percent plan to do so.

The results of the survey demonstrate the steadily increasing prevalence of AI in healthcare, as well as the significant benefits the technology can bring.

"As AI grows more and more popular across all industries, healthcare executives will see increasing opportunities to capitalize on the insights it offers, setting the stage to radically alter their industry from the bottom line all the way to patient experience," the researchers concluded.

The results of this survey capture not only how quickly AI is becoming the norm, but also how its benefits, as well as the ways in which the industry can overcome pitfalls, will become more widespread as familiarity with AI grows.

Read more:
Over 80% of Health Execs Have Artificial Intelligence Plans in Place - HealthITAnalytics.com

Patenting Artificial Intelligence in Canada, the UK and Europe: A Primer – Lexology

Artificial intelligence (AI) has been the subject of human fascination and awe (and Hollywood movies) for many years. Who can forget the iconic scene in 2001: A Space Odyssey when the intelligent computer HAL 9000 says "I'm sorry, Dave. I'm afraid I can't do that," and refuses to let Dave back in through the pod bay doors because HAL knows that Dave is planning to disconnect it (him?). AI is busy learning, growing, computing, and in some cases inventing, all while we sleep.

It is therefore no surprise that the patentability of AI is a current focus of many innovators and Patent Offices around the world. Recent decisions and practice in Canada, the UK and Europe, bring into sharp focus the unique challenges of protecting these innovations. Below we consider some of the foibles of the AI-related patent practices of these jurisdictions.

Canada

The Backstory

Amazon's One-Click Patent

The patentability of computer-implemented inventions in Canada has been evolving since the early 2000s, when Amazon's patent application for its "one-click" internet shopping solution, which covered the automation of several steps ordinarily involved in placing an online order on its website, was rejected by the Commissioner of Patents as lacking patentable subject matter. The Federal Court allowed Amazon's appeal, and the Federal Court of Appeal agreed, concluding that the Commissioner of Patents was required to purposively construe the claims to identify the essential elements and thereby the alleged invention (which it had not done), and to consider whether the invention (i) had a method of practical application; (ii) was a new and inventive method of applying skill and knowledge; and (iii) had a commercially useful result. The matter was referred back to the Commissioner, who ultimately granted Amazon's one-click patent.

The Patent Office's Problem-Solution Approach

Shortly thereafter, the Canadian Patent Office issued notices to the profession establishing that claim construction was to be focused on identifying the "problem-solution" the invention addressed. As a result, Patent Office examiners could exclude claim elements from construction if those elements were not essential to accomplishing the solution or merely provided context to the claim (e.g., a computer). In so doing, the Patent Office could (and did) designate AI-related claims as being directed to ineligible subject matter. Earlier this year, several patent applications for computer-related inventions were found to lack patentable subject matter because the examiners concluded the computers were not essential elements or simply provided working environments for data analysis.

The Federal Court rejects the Problem-Solution Test

However, a recent Federal Court decision, Choueifaty v. Canada (Attorney General), criticized the Patent Office's problem-solution claim construction approach, holding that it failed to apply the purposive construction test established by the Supreme Court of Canada. The invention at issue covered a computer implementation of a new method for selecting and weighing investment portfolio assets that minimizes risk without impacting returns. Despite the claim language explicitly including a computer, the Patent Office found that the invention lacked patentable subject matter because the Office deemed the computer non-essential based on the problem-solution test.

The Federal Court reviewed and discarded the problem-solution test as incorrect, and reiterated that purposive construction is required to assess the essential elements of a claim and thereby identify the claimed invention. It also observed that the Commissioner had failed to explain why she had excluded computer processing as part of the solution. The Court then sent the application back to the Patent Office for reassessment. The Patent Office did not appeal the Court's decision.

CIPO Updates its Patentable Subject Matter Guidance

Instead, the Canadian Intellectual Property Office released new guidance on patentable subject matter in response to the Choueifaty decision. The guidance specifies that if "[a] computer merely processes [an] algorithm in a well-known manner and the processing of the algorithm on the computer does not solve any problem in the functioning of the computer, the computer and the algorithm do not form part of a single actual invention that solves a problem related to the manual or productive arts," and the claim is therefore unpatentable.

The Takeaway for AI Inventions

Accordingly, it seems that (at least for now) AI-related inventions employing a computer programmed to execute an algorithm, where the result has no physical existence or does not manifest a discernible physical effect or change (e.g., the mere generation or display of data), will likely remain difficult to protect. However, if AI-related inventions are employed in technical fields where the end result is physical and tangible (e.g., the control of an external process, or improving the functioning of the computer itself), they will likely be patentable.

United Kingdom and Europe

The Backstory

Across the Atlantic, the situation is a little more settled. It is well established in UK and European case law that AI inventions can be patented if they provide a technical contribution. The law in the UK and Europe specifies that computer programs and mathematical methods "as such" are not patentable. However, over a number of years, the case law has evolved to define the meaning of "as such," and it is now generally recognised in both European and UK law that an invention that essentially lies in novel mathematics but that is configured to control a technical process, for example an anti-lock braking system, would be considered to have technical character, and so would not be considered a computer program "as such." In contrast, a computer program for providing a non-technical process, for example a personalised shopping itinerary, would likely not be considered to have technical character.

The EPO and UK courts, however, disagree on how to correctly assess patentability for software-based inventions, and there is divergence in the practices of the EPO and UKIPO in this area. While the UK courts have indicated that the outcome of the two approaches is the same, many practitioners in the UK will say it is easier to obtain a patent for a software-based invention at the EPO than it is at the UKIPO.

Different Strokes

The EPO Approach

The EPO considers AI-related inventions generally to relate to computational models that have an abstract mathematical nature. Therefore, in order to patent an AI-related invention in Europe (e.g., a software-based mathematical method), the invention must derive technical character from outside the computational model itself. The EPO has published guidance for assessing whether such an invention has technical character: a mathematical method may contribute to the technical character of an invention if the method is either (i) applied to a field of technology, or (ii) adapted to a specific technical implementation. Thus, novel and inventive AI innovations that are applied to well-defined technical fields under option (i), such as image/speech processing, data encoding/encryption, optimising load distribution in a computer network, or controlling a physical process (e.g., a robotic arm or a self-driving car), would likely be patentable. Similarly, AI inventions that are adapted to a specific technical implementation under option (ii), such as the adaptation of a polynomial reduction algorithm to exploit word-size shifts matched to the word size of the computer hardware, would also likely be patentable.

If, however, an AI-related invention satisfies neither option (i) nor option (ii), the EPO will likely consider it to lack technical character. It is worth noting that the mere fact that an AI invention can be executed on physical hardware is not enough to demonstrate technical character. Rather, the method itself must provide some technical contribution to be patentable.

The UKIPO Approach

Like the EPO, the UKIPO considers AI-related inventions to relate to software-based mathematical models. Again, like the EPO, if the UKIPO deems an AI-related invention to provide a technical contribution, the invention should not be excluded from patentability. To determine whether a technical contribution is made, the UKIPO considers five signposts (known as the AT&T signposts) that may hint at a technical contribution, which can be broadly summarised as whether the invention (i) provides a technical effect outside the computer, (ii) causes the computer to operate in a different way, and (iii) overcomes a perceived problem rather than merely circumventing it. Generally, AI inventions related to image processing or the control of an external process (e.g., a robotic arm) are not excluded from patentability. Similarly, AI-related inventions that operate at the level of the computer's architecture, or that make a computer operate in a new way, are also generally not excluded. However, if none of the AT&T signposts is satisfied, the invention will generally not be patentable; as at the EPO, the mere fact that an AI invention can be executed on physical hardware is not enough to demonstrate technical character.

The Takeaway

Would Mr. Choueifaty's invention be patentable in the UK or Europe?

The subject of Mr. Choueifaty's invention, described above, generally relates to a method involving tradeable financial assets. In the UK and Europe, the tradeable financial assets themselves will generally be considered non-technical, and the method would therefore only be patentable if it were judged to have technical character lying outside that data. For example, if the invention lay in a novel encryption method for sending trading data between servers securely, this could provide the required technical character. However, if the effect of the invention were judged to be solely the solution of a business-related problem, for example improving how much money is made through the trading, such a method would likely not be considered by the EPO and UKIPO to have technical character.

Read the original here:
Patenting Artificial Intelligence in Canada, the UK and Europe: A Primer - Lexology

Artificial Intelligence: The Next Front of the Fight Against Institutional Racism – IoT For All

It's been three months since the world was shaken by the brutal murder of George Floyd. The image of a white police officer kneeling on a Black citizen for 8 minutes and 46 seconds is still fresh in America's collective memory.

This wasn't the first case of racially-charged police brutality in the US. And unfortunately, it won't be the last one either.

Racism in this country has deep roots. It is a festering wound that's either ignored or treated with ineffective medicine. There's no end in sight to institutional racism in the country, and to make matters worse, this disease is finding new ways to spread.

Even Artificial Intelligence, which is said to be one of the biggest technological breakthroughs in modern history, has inherited some of the prejudices that sadly prevail in our society.

A few years ago, it would've been ridiculous to suggest that computer programs could be biased. After all, why would any software care about someone's race, gender, or color? But that was before machine learning and big data empowered computers to make their own decisions.

Algorithms are now enhancing customer support, reshaping contemporary fashion, and paving the way for a future where everything from law & order to city management can be automated.

"There's an extremely realistic chance we are headed towards an AI-enabled dystopia," explains Michael Reynolds of Namobot, a website that generates blog names with the help of big data and algorithms. "Erroneous datasets that contain human interpretations and cognitive assessments can make machine-learning models transfer human biases into algorithms."

This isn't something far in the future; it's already happening.

Risk assessment tools are often used in the criminal justice system to predict the likelihood of a felon committing a crime again. In theory, this Minority Report-style technology is meant to deter future crimes. However, critics believe these programs harm minorities.

ProPublica put this to the test in 2016 when it examined the risk scores of over 7,000 people. The non-profit organization analyzed data on prisoners arrested over two years in Broward County, Florida, to see who was charged with new crimes over the following two years.

The results showed what many had already feared. According to the algorithm, Black defendants were twice as likely to commit future crimes as white ones. But as it turned out, only 20% of those who were predicted to engage in criminal activity actually did so.
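
The disparity ProPublica measured can be stated precisely: among defendants who did not reoffend, how often did the algorithm label each group high risk? Here is a toy sketch of that calculation; the column names and values are fabricated for illustration, not ProPublica's dataset.

```python
# False positive rate by group: P(labeled high risk | did not reoffend).
# All data here is made up for illustration.
import pandas as pd

df = pd.DataFrame({
    "group":      ["black"] * 4 + ["white"] * 4,
    "high_risk":  [1, 1, 1, 0, 1, 0, 0, 0],   # the algorithm's prediction
    "reoffended": [1, 0, 0, 0, 0, 0, 1, 0],   # the observed outcome
})

for group, g in df.groupby("group"):
    did_not_reoffend = g[g["reoffended"] == 0]
    fpr = (did_not_reoffend["high_risk"] == 1).mean()
    print(f"{group}: false positive rate = {fpr:.2f}")
```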

Similarly, facial recognition software used by police could end up disproportionately affecting African Americans. As per a study co-authored by the FBI, face recognition used in cities such as Seattle may be less accurate on Black people, leading to misidentification and false arrests.

Algorithmic bias isn't just limited to the justice system. Black Americans are routinely denied access to programs that are designed to improve care for patients with complex medical conditions. Again, these programs are less likely to refer Black patients than white patients with the same ailments.

To put it simply, tech companies are feeding their own biases into the systems, the very systems that are designed to make fair, data-based decisions.

So what's being done to fix this situation?

Algorithmic bias is a complex issue, mostly because it's hard to observe. Programmers are often baffled to find that their algorithm discriminates against people on the basis of gender or color. Last year, Steve Wozniak revealed that Apple gave him a credit limit ten times higher than his wife's, even though she had a better credit score.

It is rare for consumers to spot such disparities. Studies that examine discrimination by AI also take considerable time and resources. That's why advocates demand more transparency around how the entire system operates.

The problem merits an industry-wide solution, but there are hurdles along the way. Even when algorithms are revealed to be biased, companies do not allow others to analyze the data and aren't thorough with their investigations. Apple said it would look into the Wozniak issue, but so far, nothing has come of it.

Bringing transparency would require companies to reveal their training data to observers or open themselves to third-party audits. There's also the option for programmers to take the initiative and run tests to determine how their system fares when applied to individuals from different backgrounds.

To ensure a certain level of transparency, the data used to train the AI and the data used to evaluate it should be made public. Getting this done should be easier in government matters. However, the corporate world would resist such ideas.
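
One simple test a third-party auditor, or the programmers themselves, could run on published evaluation data is a disparate impact check: compare favorable-outcome rates across groups and flag ratios below the commonly cited four-fifths threshold. A hedged sketch on fabricated decision data:

```python
# Disparate impact ratio: rate of favorable outcomes for one group divided
# by the rate for the most-favored group. Values below 0.8 (the
# "four-fifths rule") are a common red flag. All data here is fabricated.
import pandas as pd

decisions = pd.DataFrame({
    "group":    ["a"] * 10 + ["b"] * 10,
    "approved": [1] * 8 + [0] * 2 + [1] * 5 + [0] * 5,   # hypothetical outcomes
})

rates = decisions.groupby("group")["approved"].mean()
ratio = rates.min() / rates.max()
print(rates.to_dict())                           # {'a': 0.8, 'b': 0.5}
print(f"disparate impact ratio: {ratio:.2f}")    # 0.62 -> below 0.8 threshold
```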

According to a paper published by a New York University research center, the lack of diversity in AI has reached "a moment of reckoning." The research indicates that the AI field is overwhelmingly white and male, and as a result risks reasserting power imbalances and historical biases.

"The industry has to acknowledge the gravity of the situation and admit that its existing methods have failed to address these problems," explained Kate Crawford, an author of the report.

With Black employees making up just 4% of the workforce at both Facebook and Microsoft, it's quite clear that minorities are not being fairly represented in the AI field. Researchers and programmers are a homogeneous population who come from a certain level of privilege.

If the pool were diversified, the data would be much more representative of the world we inhabit. Algorithms would gain perspectives that are currently being ignored, and AI programs would be much less biased.

Is it possible to create an algorithm that's completely free of bias? Probably not.

Artificial Intelligence is designed by humans, and people are never truly unbiased. However, programs created by individuals from dominant groups will only help perpetuate injustices against minorities. To make sure that algorithms don't become a tool of oppression against Black and Hispanic communities, public and private institutions should be pushed to maintain a level of transparency.

It's also imperative that big tech embraces diversity and elevates programmers belonging to ethnic minorities. Moves like these can save our society from becoming an AI dystopia.

Excerpt from:
Artificial Intelligence: The Next Front of the Fight Against Institutional Racism - IoT For All

Artificial Intelligence (AI): 9 things IT pros wish the CIO knew – The Enterprisers Project

Artificial intelligence (AI) capabilities, from machine learning and deep learning to natural language processing (NLP) and computer vision, are rapidly advancing. "Technology has never moved at such a pace, meaning it is harder than ever for the CIO to stay current and up to date with technology overall, so understanding the vast array of AI capabilities is a stretch for most CIOs right now," says Wayne Butterfield, director of cognitive automation and innovation technology research at advisory firm ISG.

Naturally, IT leaders are increasingly exploring AI applications in the enterprise. However, AI-enabled initiatives do not necessarily lend themselves to traditional IT approaches.

"It is imperative for CIOs to know AI in reasonable depth to understand its realistic and pragmatic adoption," explains Yugal Joshi, vice president of digital, cloud, and application services research for Everest Group. "They need to understand what is doable as of today versus 3-5 years from now. Otherwise, there is a risk that they will either overestimate or underestimate AI's impact on business as well as IT."

[ Do you understand the main types of AI? Read also: 5 artificial intelligence (AI) types, defined. ]

In addition, the business appetite for AI-driven transformation is at an all-time high, even as AI-washing by technology vendors continues to be a very real phenomenon. "It's more important than ever that CIOs be able to differentiate between what is real versus what is vendor-driven AI marketing to make the best decisions for their business," Joshi says.

CIOs are increasingly hiring AI-savvy IT pros to further their digital transformation efforts. But those team members are depending on their IT leaders to understand enough about AI to best support and sustain their efforts. To that end, here are nine things CIOs should understand about AI.

AI is not one technology. "In actual fact, it's a group of technologies used to solve specific problems," says Butterfield. "The catch-all term of Artificial Intelligence is so generic that it is almost meaningless. In the most simplistic terms, AI is usually geared around providing a data-based answer or providing a data-fueled prediction. Then things begin to diverge."

NLP may be used to automate incoming emails, machine vision to gauge quality on the product line, or advanced analytics to predict a failure of your network. (For more on the various flavors of AI, read 5 AI types, defined.) CIOs need to at least understand the strands of AI that are relevant to their business, and ensure that they have a basic understanding of the problems AI can solve for their business, and those it will not, Butterfield says.

"There is certainly a wide variety of people's expectations of AI, from realistic to off-the-wall."

"There is certainly a wide variety of people's expectations of AI, from realistic to off-the-wall," says Timothy Havens, the William and Gloria Jackson Associate Professor of Computer Systems in the College of Computing at Michigan Technological University and director of the Institute of Computing and Cybersystems. "CIOs should have at least a decent understanding of the limitations of AI such that they can predicate their expectations and properly evaluate AI solutions they are considering."

Machine learning, for example, can produce implicit models of very complex processes from representative data or experience. So an ML algorithm can learn to recognize cats by looking at millions of pictures of cats and not-cats, but it will not learn that cats meow or eat kibble.

The ROI on AI requires more patience than your average IT initiative. In an Everest Group survey of more than 200 global IT leaders, 84 percent cited the long wait for returns as a challenge. "CIOs need to realize the reasons behind these long waits rather than getting flustered and disappointed with these," Joshi says.

"CIOs need to understand the amount of data crunching needed to create an intelligent system," says Joshi. "Therefore, CIOs need to decide whether the business has the data and capability to build or use an AI system."

Havens advises CIOs to always ask where the training data will come from and how an algorithm is evaluated. "That gets at whether this algorithm has been proven on real-world data that it hasn't seen before," Havens says.
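
Havens' question has a concrete form: score the model on data it was never trained on, not on its own training set. A generic scikit-learn sketch, with the dataset and model as placeholders:

```python
# Evaluate on a held-out split the model has never seen. The gap between
# training and test scores is one quick signal of overfitting.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)
print(f"training accuracy: {model.score(X_train, y_train):.3f}")  # optimistic
print(f"held-out accuracy: {model.score(X_test, y_test):.3f}")    # what matters
```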

In some cases, there may not be sufficient data governance in place. Although most organizations claim data is important, few invest as if that were the case. "Their other enterprise functions such as HR and finance have much larger teams than their data practice," says Joshi. "CIOs need to understand what skills they need to invest in given their spend appetite, as some data skills may not be affordable for enterprises."

There is often a debate over where data science or AI centers of excellence belong, says Dan Simion, vice president of AI & Analytics with Capgemini North America. Some CIOs believe data scientists should sit within IT, while others may suggest data scientists be embedded within the business. CIOs must ensure that they are not downplaying the role of data scientists, says Simion, noting that when used properly they can do more than descriptive data visualizations; they can also solve business problems by leveraging AI and machine learning technologies.

"CIOs who want to unlock the full potential of their AI programs should recognize the knowledge and skills of their data scientists and give them opportunities to maximize the value they can drive," Simion says.

The operations team is also critical to the success or failure of intelligent capabilities. In fact, 61 percent of enterprises said their operations teams are leading the charge on AI adoption in their organization, according to Everest Group research.

"Though [an increasing number of] enterprises are leveraging cloud-based AI offerings from cloud and SaaS vendors, the operations team is critical to scale such initiatives and create the needed guardrails," Joshi says.

One of the IT leader's most important roles is understanding the technology requirements necessary to support and sustain the company's AI transformations. In order for a company to be successful along its AI journey, Simion says, the CIO needs to make sure the AI technology stack is working and in sync with the overall enterprise technology.

Unlike many historical IT projects, AI initiatives require collaboration across data analytics, infrastructure, applications, data management, and the business. "CIOs need to have the vision to create such pod-based, cross-functional teams that are jointly held accountable for the outcome and not for their individual pieces," Joshi says.

Although we throw around the term "intelligent," AI is not inherently adaptive. "AI algorithms are only good at what they are designed for, and will often fail miserably and in strange ways when applied to problems that may seem similar to humans, but are not similar from an AI perspective," Havens says. An algorithm that is trained to drive a car in an urban environment may, and probably will, fail at rural driving, for example.
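
A small synthetic demonstration of that brittleness: a classifier fit on one data distribution (call it "urban") and then scored on a shifted one ("rural"). The distributions below are made-up stand-ins, but the accuracy drop is the general phenomenon Havens describes.

```python
# Train on one distribution, evaluate on a shifted one: accuracy collapses
# even though the task "looks the same" to a human. Synthetic data only.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_data(center, n=500):
    """Two Gaussian classes offset from `center`."""
    X = np.vstack([rng.normal(center, 1.0, size=(n, 2)),
                   rng.normal(center + 2.0, 1.0, size=(n, 2))])
    y = np.array([0] * n + [1] * n)
    return X, y

X_urban, y_urban = make_data(center=0.0)   # training conditions
X_rural, y_rural = make_data(center=5.0)   # shifted deployment conditions

model = LogisticRegression().fit(X_urban, y_urban)
print(f"urban accuracy: {model.score(X_urban, y_urban):.3f}")   # high
print(f"rural accuracy: {model.score(X_rural, y_rural):.3f}")   # near chance
```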

Is your organization looking to increase efficiency? Improve effectiveness? Transform the customer or user experience? Create entirely new business models? The CIO must understand what value the business wants to derive from AI adoption. Everest Group notes four common business imperatives: Efficiency, Effectiveness, Experience, and Evolution. CIOs may also need to manage the business's inflated expectations around AI adoption and its impact on the organization.

[ How can automation free up staff time for innovation? Get the free eBook: Managing IT with Automation. ]

Go here to see the original:
Artificial Intelligence (AI): 9 things IT pros wish the CIO knew - The Enterprisers Project

This Harvard Professor And His Students Have Raised $14 Million To Make AI Too Smart To Be Fooled By Hackers – Forbes

By adding a few pixels (highlighted in red) to a legitimate check, fraudsters can trick artificial intelligence models into mistaking a $401 check for one worth $701. Undetected, the exploit could lead to large-scale financial fraud.

Yaron Singer climbed the tenure track ladder to a full professorship at Harvard in seven years, fueled by his work on adversarial machine learning, a way to fool artificial intelligence models using misleading data. Now, Singer's startup, Robust Intelligence, which he formed with a former Ph.D. advisee and two former students, is emerging from stealth to take his research to market.

This year, artificial intelligence is set to account for $50 billion in corporate spending, though companies are still figuring out how to integrate the technology into their business processes. Companies are still figuring out, too, how to protect their good AI from bad AI, like an algorithmically generated voice deepfake that can spoof voice authentication systems.

"In the early days of the internet, it was designed like everybody's a good actor. Then people started to build firewalls because they discovered that not everybody was," says Bill Coughran, former senior vice president of engineering at Google. "We're seeing signs of the same thing happening with these machine learning systems. Where there's money, bad actors tend to come in."

Enter Robust Intelligence, a new startup led by CEO Singer with a platform that the company says is trained to detect more than 100 types of adversarial attacks. Though its founders and most of the team hold a Cambridge pedigree, the startup has established headquarters in San Francisco and announced Wednesday that it had raised $14 million in a seed and Series A round led by Sequoia. Coughran, now a partner at the venture firm, is the lead investor on the fundraise, which also comes with participation from Engineering Capital and Harpoon Ventures.

Robust Intelligence CEO Yaron Singer is taking a leave from Harvard, where he is a professor of computer science and applied mathematics.

Singer followed his Ph.D. in computer science from the University of California at Berkeley by joining Google as a postdoctoral researcher in 2011. He spent two years working on algorithms and machine-learning models to make the tech giant's products run faster, and saw how easily AI could go off the rails with bad data.

"Once you start seeing these vulnerabilities, it gets really, really scary, especially if we think about how much we want to use artificial intelligence to automate our decisions," he says.

Fraudsters and other bad actors can exploit the relative inflexibility of artificial intelligence models in processing unfamiliar data. For example, Singer says, a check for $401 can be manipulated by adding a few pixels that are imperceptible to the human eye yet cause the AI model to read the check erroneously as $701. "If fraudsters get their hands on checks, they can hack into these apps and start doing this at scale," Singer says. Similar modifications to data inputs can lead to fraudulent financial transactions, as well as spoofed voice or facial recognition.
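
The kind of attack Singer describes can be sketched in a few lines. Below is a minimal, self-contained illustration of a gradient-based perturbation (in the spirit of the "fast gradient sign method") against a toy logistic-regression pixel classifier; the model, weights, and "check image" are all fabricated for illustration, and this is not Robust Intelligence's code.

```python
# Nudge each pixel slightly in the direction that raises the model's score.
# The per-pixel change is tiny, yet the model's output shifts noticeably.
import numpy as np

rng = np.random.default_rng(0)
w = 0.05 * rng.normal(size=784)          # weights of a toy trained classifier
b = 0.1
x = rng.uniform(0, 1, size=784)          # a "legitimate check image," flattened

def score(x):
    """Toy model's confidence that a digit reads '7' rather than '4'."""
    return 1.0 / (1.0 + np.exp(-(w @ x + b)))

# Gradient of the sigmoid output with respect to the input pixels
p = score(x)
grad = p * (1 - p) * w

epsilon = 0.02                           # small enough to be near-invisible
x_adv = np.clip(x + epsilon * np.sign(grad), 0.0, 1.0)

print(f"clean score: {score(x):.3f}  adversarial score: {score(x_adv):.3f}")
```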

In 2013, upon taking an assistant professor position at Harvard, Singer decided to focus his research on devising mechanisms to secure AI models. Robust Intelligence comes from nearly a decade in the lab for Singer, during which time he worked with three Harvard pupils who would become his cofounders: Eric Balkanski, a Ph.D. student advised by Singer; Alexander Rilee, a graduate student; and undergraduate Kojin Oshiba, who coauthored academic papers with the professor. Across 25 papers, the team broke ground on designing algorithms to detect misleading or fraudulent data, and helped bring the issue to government attention, even receiving an early DARPA grant to conduct its research. Rilee and Oshiba remain involved in the day-to-day activities at Robust, the former on government and go-to-market, and the latter on security, technology, and product development.

Robust Intelligence is launching with two products: an AI firewall and a "red team" offering, in which Robust functions like an adversarial attacker. The firewall works by wrapping around an organization's existing AI model and scanning for contaminated data via Robust's algorithms. The other product, called Rime (for Robust Intelligence Machine Engine), performs a stress test on a customer's AI model by inputting basic mistakes and deliberately launching adversarial attacks on the model to see how it holds up.
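
The article doesn't detail how the firewall works internally, but the general pattern, a wrapper that screens inputs before they reach the model, can be sketched. The out-of-distribution check below is an illustrative assumption, not Robust Intelligence's actual algorithm.

```python
# A toy "AI firewall": wrap an existing model and reject inputs that sit far
# outside the training distribution before the model ever sees them.
import numpy as np

class FirewalledModel:
    def __init__(self, model, train_data, z_threshold=6.0):
        self.model = model
        self.mean = train_data.mean(axis=0)
        self.std = train_data.std(axis=0) + 1e-9
        self.z_threshold = z_threshold

    def predict(self, x):
        x = np.asarray(x, dtype=float)
        # Screen: any feature wildly outside the training range is suspicious.
        z = np.abs((x - self.mean) / self.std)
        if z.max() > self.z_threshold:
            raise ValueError("input rejected: possible adversarial or corrupted data")
        return self.model.predict(x.reshape(1, -1))

# Usage sketch (names are hypothetical):
#   firewalled = FirewalledModel(trained_model, X_train)
#   firewalled.predict(suspicious_input)   # raises instead of mispredicting
```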

The startup is currently working with about ten customers, says Singer, including a major financial institution and a leading payment processor, though Robust will not name names due to confidentiality. Launching out of stealth, Singer hopes to gain more customers as well as double the size of the team, which currently stands at 15 employees. Singer, who is on leave from Harvard, is sheepish about his future in academia, but says he is focused on his CEO role in San Francisco for the moment.

"For me, I've climbed the mountain of tenure at Harvard, but now I think we've found an even higher mountain, and that mountain is securing artificial intelligence," he says.

Continued here:
This Harvard Professor And His Students Have Raised $14 Million To Make AI Too Smart To Be Fooled By Hackers - Forbes