Archive for the ‘Artificial Intelligence’ Category

Industry Voices: AI doesn't have to replace doctors to produce better health outcomes – FierceHealthcare

Americans encounter some form of artificial intelligence and machine learning technologies in nearly every aspect of daily life: We accept Netflix's recommendations on what movie we should stream next, enjoy Spotify's curated playlists and take a detour when Waze tells us we can shave eight minutes off of our commute.

And it turns out that we're fairly comfortable with this new normal: A survey released last year by Innovative Technology Solutions found that, on a scale of 1 to 10, Americans give their GPS systems an 8.1 trust and satisfaction score, followed closely by a 7.5 for TV and movie streaming services.

But when it comes to higher stakes, we're not so trusting. When asked whether they would trust an AI doctor to diagnose or treat a medical issue, respondents scored it just a 5.4.


Overall skepticism about medical AI and ML is nothing new. In 2012, we were told that IBM's AI-powered Watson was being trained to recommend treatments for cancer patients. There were claims that the advanced technology could make medicine personalized and tailored to millions of people living with cancer. But in 2018, reports surfaced indicating that the research and technology had fallen short of expectations, leaving users to question the accuracy of Watson's predictive analytics.

RELATED: Investors poured $4B into healthcare AI startups in 2019

Patients have been reluctant to trust medical AI and ML out of fear that the technology would not offer a unique or personalized recommendation based on individual needs. A 2019 piece in Harvard Business Review referenced a survey in which 200 business students were asked to take a free health assessment to receive a diagnosis: 40% of students signed up for the assessment when told their doctor would perform the diagnosis, while only 26% signed up when told a computer would perform it.

These concerns are not without basis. Many of the AI and ML approaches being used in healthcare today, due to their simplicity and ease of implementation, strive for performance at the population level by fitting to the characteristics most common among patients. They aim to do well in the general case, failing to serve large groups of patients and individuals with unique health needs. However, this limitation of how AI and ML are being applied is not a limitation of the technology.

If anything, what makes AI and ML exceptional, if done right, is its ability to process huge sets of data comprising a diversity of patients, providers, diseases and outcomes and model the fine-grained trends that could potentially have a lasting impact on a patient's diagnosis or treatment options. This ability to use data in the large for representative populations and to obtain inferences in the small for individual-level decision support is the promise of AI and ML. The whole process might sound impersonal or cookie-cutter, but the reality is that the advancements in precision medicine and delivery will make care decisions more data-driven and thus more exact.

Consider a patient choosing a specialist. It's anything but data-driven: they'll search for a provider in-network, or maybe one that is conveniently located, without understanding the potential health outcomes of that choice. The issue is that patients lack the data and information they need to make these choices in an informed way.

RELATED: The unexpected ways AI is impacting the delivery of care, including for COVID-19

That's where machine intelligence comes into play: an AI/ML model that can accurately predict the right treatment, at the right time, by the right provider for a patient could drastically help reduce the rate of hospitalizations and emergency room visits.
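To make that idea concrete, here is a minimal, hypothetical sketch of how such a matching model might be structured: a classifier trained on historical claims estimates each candidate provider's risk of a bad outcome for a given patient, and the lowest-risk provider is recommended. The features, synthetic data and model choice are illustrative assumptions, not the approach the authors describe.

```python
# Illustrative sketch only: a toy patient-provider matching model.
# Feature names, synthetic data and the gradient-boosting choice are
# assumptions for illustration, not the authors' actual method.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(0)
n = 5000

# Synthetic historical claims: patient + provider features, labeled with
# whether an adverse event (e.g., a hospitalization) followed.
X = np.column_stack([
    rng.integers(65, 95, n),   # patient age
    rng.integers(0, 5, n),     # number of chronic conditions
    rng.random(n),             # provider quality index (0-1)
    rng.integers(0, 2, n),     # provider offers a respiratory care program
])
# Toy outcome: risk rises with age/comorbidity, falls with provider quality.
logit = 0.03 * (X[:, 0] - 75) + 0.4 * X[:, 1] - 2.0 * X[:, 2] - 0.5 * X[:, 3]
y = rng.random(n) < 1 / (1 + np.exp(-logit))

model = GradientBoostingClassifier().fit(X, y)

# For one patient, score every candidate provider and recommend the one
# with the lowest predicted risk of an adverse outcome.
patient = [82, 3]                              # age, chronic conditions
providers = [(0.9, 1), (0.6, 0), (0.3, 1)]     # (quality index, program flag)
risks = [model.predict_proba([patient + list(p)])[0, 1] for p in providers]
best = int(np.argmin(risks))
print(f"Recommended provider #{best} with predicted risk {risks[best]:.2f}")
```

In practice a system of this kind would be trained on real claims histories and far richer features; the point here is only the shape of the decision: predict an individual-level outcome for each patient-provider pairing, then navigate the patient to the best match.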

As an example, research published last month in AJMC looked at claims data from 2 million Medicare beneficiaries between 2017 and 2019 to evaluate the utility of ML in the management of severe respiratory infections in community and post-acute settings. The researchers found that machine intelligence for precision navigation could be used to mitigate infection rates in the post-acute care setting.

Specifically, at-risk individuals who received care at skilled nursing facilities (SNFs) that the technology predicted would be the best choice for them had a relative reduction of 37% for emergent care and 36% for inpatient hospitalizations due to respiratory infections compared to those who received care at non-recommended SNFs.
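One thing worth keeping in mind when reading those figures is that the reductions are relative rather than absolute. A quick hypothetical calculation (the 10% baseline below is an assumed number for arithmetic only, not a figure from the AJMC study) shows how to translate one into the other:

```python
# Hypothetical arithmetic only: converting a relative reduction into an
# absolute rate. The 10% baseline is an assumption, not from the study.
baseline_rate = 0.10          # assumed emergent-care rate at non-recommended SNFs
relative_reduction = 0.37     # the reported 37% relative reduction
rate_at_recommended = baseline_rate * (1 - relative_reduction)
print(f"{rate_at_recommended:.1%}")   # 6.3% instead of 10.0%
```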

This advanced technology has the ability to comb through and analyze an individual's treatment needs and medical history so that the most accurate recommendations can be made based on that individual's personalized needs and the doctors or facilities available to them. In turn, matching a patient to the optimal provider can drastically improve health outcomes while also lowering the cost of care.

We now have the technology to use machine intelligence to optimize some of the most important decisions in healthcare. The data show results we can trust.

Zeeshan Syed is the CEO and Zahoor Elahi is the COO of Health at Scale.

Continue reading here:
Industry Voices: AI doesn't have to replace doctors to produce better health outcomes - FierceHealthcare

Artificial Intelligence in Aviation Market 2020 | What Is The Estimated Market Size In The Upcoming Years? – The Daily Chronicle

Market Scenario of the Artificial Intelligence in Aviation Market:

The most recent Artificial Intelligence in Aviation market research study assesses the current size of the worldwide Artificial Intelligence in Aviation market. It presents a point-by-point analysis based on exhaustive research of market elements such as market size, growth scenario, potential opportunities, operational landscape, and trend analysis. The report centers on the business status of Artificial Intelligence in Aviation, presenting volume and value, key markets, product types, consumers, regions, and key players.

Sample Copy of This Report @ https://www.quincemarketinsights.com/request-sample-68803?utm_source=TDC/komal

The prominent players covered in this report: Boeing, Micron, NVIDIA, Amazon, Airbus, General Electric, Lockheed Martin, Thales, Garmin, Xilinx, and Intel.

The market is segmented by Offering (Hardware, Software, Service), by Technology (Machine Learning, Context Awareness, NLP, Computer Vision), and by Application (Virtual Assistants, Smart Maintenance).

Geographical segments are North America, Europe, Asia Pacific, Middle East & Africa, and South America.

A 360-degree outline of the competitive scenario of the Global Artificial Intelligence in Aviation Market is presented by Quince Market Insights, drawing on extensive data on recent product and technological developments in the market.

It includes a wide-ranging analysis of the impact of these advancements on the market's future growth. The research report studies the market in a detailed manner, explaining the key facets of the market that are expected to have a measurable influence on its development over the forecast period.

Get ToC for the overview of the premium report @ https://www.quincemarketinsights.com/request-toc-68803?utm_source=TDC/komal

These developments are anticipated to drive the Global Artificial Intelligence in Aviation Market over the forecast period. This research report covers the market landscape and its growth prospects in the near future. After studying key companies, the report focuses on the new entrants contributing to the growth of the market. Most companies in the Global Artificial Intelligence in Aviation Market are currently adopting new technological trends.

Finally, the researchers throw light on different ways to discover the strengths, weaknesses, opportunities, and threats affecting the growth of the Global Artificial Intelligence in Aviation Market. The feasibility of the new report is also measured in this research report.


Make an Enquiry for purchasing this Report @ https://www.quincemarketinsights.com/enquiry-before-buying/enquiry-before-buying-68803?utm_source=TDC/komal

About Us:

QMI has the most comprehensive collection of market research products and services available on the web. We deliver reports from virtually all major publications and refresh our list regularly to provide you with immediate online access to the world's most extensive and up-to-date archive of professional insights into global markets, companies, goods, and patterns.

Contact Us:

Quince Market Insights

Ajay D. (Knowledge Partner)

Office No- A109

Pune, Maharashtra 411028

Phone: APAC +91 706 672 4848 / US +1 208 405 2835 / UK +44 1444 39 0986

Email: [emailprotected]

Web: https://www.quincemarketinsights.com

More here:
Artificial Intelligence in Aviation Market 2020 | What Is The Estimated Market Size In The Upcoming Years? - The Daily Chronicle

Tackling the artificial intelligence IP conundrum – TechHQ

Artificial intelligence has become a general-purpose technology. Not confined to futuristic applications such as self-driving vehicles, it powers the apps we use daily, from navigation with Google Maps to check deposits from our mobile banking app. It even manages the spam filters in our inbox.

These are all powerful, albeit functional, roles. What's perhaps more exciting is AI's growing potential in sourcing and producing new creations and ideas, from writing news articles to discovering new drugs, in some cases far quicker than teams of human scientists.

With every new iteration in software design, computing power, and the ability to leverage large data sets, AI's potential as an initiator of ideas and concepts grows, and this raises questions around its rights to Intellectual Property (IP).

Dr. Ryan Abbott, Professor of Law and Health Sciences at the University of Surrey, focuses his work on the meeting of law and technology, in particular the regulation of AI. While Abbott doesn't believe AI should be entitled to its own IP, he believes the time is right to discuss the ability of people to own IP generated autonomously by AI, or risk losing out on the technology's full potential.

"Right now, we have a system where AI and human activity are treated very differently," Abbott told TechHQ.

Drug discovery is a tangible example of how AI contributes to society. The technology is making the discovery of new drugs faster, cheaper, and more successful. It's been used this way for decades, helping to identify new drug targets or validate drug candidates, and to help design trials in ways that can potentially shorten drug development timeframes, bringing treatments to market faster. But the critical nature of patent protection in life sciences, and drug development in particular, is holding back these advances.

That's because, when it comes to AI-generated content and ideas, AI tends to be seen by experts and lawmakers as a tool, and not the source of the creation or discovery. In the same way that a paintbrush doesn't get the credit for an oil painting and CAD software isn't credited for the designs of an architect, AI is perceived as a vehicle to an end product. The trouble is, current laws are not consistent and clear-cut. In the UK, where a work lacks a traditional human author, the producer of the work is deemed the author. In the US, the inventor is the person who conceives the idea. In either case, neither human may know what the AI system will produce or discover.

While patent rules in life sciences highlight the legal constraints on AI in research and development, these same challenges affect everything from the development of components for cars to spacecraft. The problem will become increasingly apparent as AI continues to improve, and people do not.

The consensus among legal experts is that it's not clear whether AI could carry out the understood rights and obligations of an IP owner. IP rights are restricted to natural persons and legal entities such as businesses. The European Union reportedly abandoned plans to introduce a third type of entity, an "electronic personality", in 2018 after pressure from 150 experts on AI, robotics, IP, and ethics.

Speaking to Raconteur previously, Julie Barrett-Major, consulting attorney at AA Thornton and member of the Chartered Institute of Patent Attorneys' International Liaison Committee, explained: "With patent ownership come certain obligations and responsibilities, or at least opportunities to exercise these. For example, to enforce the rights awarded, the owner can sue for infringement, or at least indicate a willingness to do so to maintain exclusivity."

"[…] the patent must be renewed at regular intervals, and there are other actions that need to be taken to ensure the rewards are not diluted, such as updating the government registers of patents with details of changes in ownership, informing of licensees, and so forth."

Abbott argues that, ultimately, the limitations of current IP frameworks may force organizations to continue to use people, where a machine might be more efficient.

Last year, Siemens was unable to file for multiple patents on inventions it believed to be patentable because it could not identify a human inventor. The engineers involved stated that the machine did the inventive work. Abbott himself is carrying out a legal test case, filing patents for two inventions made autonomously by AI. Both have been rejected by the US, UK, German, and European patent offices on the basis that they failed to disclose a human inventor. The rejections are under appeal, but the idea is to help raise dialogue on the issue.

"Most of the time today AI is just working as a tool and helping augment people, but machines are getting increasingly autonomous and sophisticated, and increasingly doing the sorts of things that used to make someone an inventor," Abbott said.

The current status quo means that the law can get in the way of AI development in certain areas, but not others. That means AI's benefits are not evenly spread across industries. While patents are important to drug development, for example, they are less important when it comes to making software. This imbalance could lead to the emergence of shady IP practices in certain sectors when it comes to using AI. The workaround, says Abbott, is people simply not disclosing AI's role in creating something valuable, whether that's an article, video, or song. "Someone can just list themselves as the author and no one is going to question that."

The issue of patents and intellectual property in the field of academic research, for many of us, might not seem worth our consideration. But the broader legal concept Abbott looks to highlight, that we should question current standards of AI accountability and ownership, affects how AI is being used around us.

"Across all areas of the law, we are seeing the phenomenon of artificial intelligence stepping into the shoes of people and doing the sorts of things that people used to do," said Abbott.

Ultimately, for AI to be used to its full potential, there must be open discussion, public consultation, and debate on the current litigation surrounding AI. That's now happening. The issue has received recent attention from the World Intellectual Property Organization (WIPO), while the UK Intellectual Property Office has just announced a public request for comments on whether the IP system is fit for purpose in light of AI. The US has just completed a similar consultation.

"These efforts are a solid start to getting a diverse range of input from stakeholders," said Abbott. "In time, legislators should get involved."

Originally posted here:
Tackling the artificial intelligence IP conundrum - TechHQ

Patent application strategies in the field of artificial intelligence based on examination standards – Lexology

I. Introduction

Artificial Intelligence (AI) refers to technology that exhibits intelligence similar to that of humans, implemented by means of ordinary computer programs. With the rapid development of artificial intelligence technology and the continuous demonstration of its commercial value, patent applications related to artificial intelligence have become a hot field, with the number of applications continuously rising and the scope of application fields expanding.

This article attempts to provide some patent application strategies in the field of artificial intelligence based on the latest examination standards in China, and to summarize similarities and differences between the examination standards for artificial intelligence in China, Japan, Korea, the US and Europe, for reference by patent applicants, patent attorneys, etc.

II. Main laws involved and coping strategies

In China, since a patent application in the field of artificial intelligence involves a computer program, the primary examination focus is whether the application is an eligible object protected by a patent; another examination focus is inventiveness as provided in Article 22, Paragraph 3 of the Chinese Patent Law.

Figure 1

Figure 1 shows the general examination process of a patent application in the field of artificial intelligence in China.

A patent application in the field of artificial intelligence may be drafted as a product claim or a method claim, and the product claim may be drafted as an eligible subject such as a system, a device, or a storage medium.

Table 1 Forms of drafting of claims

The following description mainly focuses on analysis of the latest examination standards in China and coping strategies regarding whether a patent application in the field of artificial intelligence is an eligible object protected by a patent and whether it conforms to the inventiveness requirements.

1. Examination standards and coping strategies regarding an eligible object protected by a patent

1.1 The latest examination standards on eligible object issues

It is provided in Article 25, Paragraph 1, Item (2) of the Chinese Patent Law that no patent right shall be granted for rules and methods for mental activities.

It is provided in the newly-amended Guidelines for Examination that if a claim contains a technical feature in addition to an algorithm feature or a commercial rule and method feature, the claim as a whole is not a rule or method for mental activities, and the possibility of its being granted a patent right shall not be excluded under Article 25, Paragraph 1, Item (2) of the Patent Law.

Moreover, it is provided in Rule 22, Paragraph 2 of the Implementing Regulations of the Chinese Patent Law that Invention as mentioned in the Patent Law means any new technical solution relating to a product, a process or an improvement thereof.

Correspondingly, it is provided in the newly-amended Guidelines for Examination that if steps involved in an algorithm in a claim reflect that they are closely related to the technical problem to be solved, for example, data processed by the algorithm are data having definite technical meanings in the technical field, execution of the algorithm is able to directly reflect a process of solving a technical problem by using natural laws, and produces a technical effect, then in general, the solution defined in this claim belongs to the technical solution provided in Article 2, Paragraph 2 of the Patent Law.

1.2 Application strategy for eligible object issues

Patent applications in the field of artificial intelligence may basically be divided into two types according to their application scope: basic-type patent applications and applied-type patent applications. A basic-type patent application is one in which the algorithm involved may be widely used in multiple particular fields, while an applied-type patent application is one in which the algorithm is mainly combined with a particular field and constitutes an application in that field.

Taking two aspects into account, i.e. patent protection scope and conformity to examination requirements, ways of drafting the two types of patent applications are proposed for reference.

Table 2 Ways of drafting two types of patent applications

In addition, due to the development of Internet technology and big data technology, the artificial intelligence technology is also increasingly used in commercial and financial fields. In making an application for this type of patent, attention should be paid to combining a business rule, an algorithm feature and a technical feature in description.

Moreover, based on the stage of technological improvement, a patent application in the field of artificial intelligence may be directed to one of two stages: a training (learning) stage or an application stage. The corresponding ways of drafting are as follows.

Table 3 Eligible subjects in the two stages

2. Examination standard and coping strategy regarding inventiveness

2.1 Latest examination standards regarding inventiveness

It is provided in the newly-amended Guidelines for Examination that when an application for a patent for invention containing a technical feature and an algorithm feature, or a business rule and method feature, is examined for inventiveness, the algorithm feature or the business rule and method feature shall be taken into account together with the technical feature as a whole, provided that they functionally support the technical feature and interact with it.

2.2 Application strategy for examination on inventiveness

Based on the above examination standards, when an application for a patent in the field of artificial intelligence is drafted, attention should be paid to combining the algorithm feature and the technical feature in describing the technical solution. Moreover, in describing the technical problem and the technical effect, emphasis should be placed on how the algorithm feature and the technical feature are specifically combined to jointly solve the technical problem and produce a corresponding technical effect.

Furthermore, some artificial intelligence patent applications do not involve improvement of a basic algorithm; their improvement over existing technologies may mainly lie in the application of an algorithm, such as a neural network, to a specific field, while the neural network itself is not changed much. For this type of patent application, inventiveness may be considered mainly based on two aspects: first, whether the technical fields are similar; and second, the difficulty of applying the neural network to the technical field of the present application and whether a technical effect different from that in the original technical field is produced.

III. Comparison of examination standards of China, Japan, Korea, US and Europe

1. Comparison of examination standards of an eligible object protected by a patent

A comparison of the examination standards for an eligible object protected by a patent in China, Japan, Korea, the US and Europe is as follows.

Table 4 Examination of an eligible object protected by a patent in China, Japan, Korea, US and Europe

2. Comparison of examination standards of inventiveness

A comparison of the examination standards for inventiveness in China, Japan, Korea, the US and Europe is as follows.

Table 5 Examination of inventiveness in China, Japan, Korea, US and Europe

IV. Summary

Patent applications in the field of artificial intelligence belong to patent applications involving computer programs, which need to meet the universal requirements on patent applications involving computer programs. Due to the specialty of artificial intelligence technology, the China National Intellectual Property Administration (CNIPA) has formulated new, special examination regulations for patent applications in this field. Drafting patent applications and responding to examination opinions based on the latest examination standards is beneficial to applicants in obtaining patent rights for the relevant technologies in China.

In addition, understanding the examination standards for patent applications in the field of artificial intelligence in the world's major patent jurisdictions, namely China, Japan, Korea, the US and Europe, helps applicants formulate global application strategies and a reasonable patent layout.

Read this article:
Patent application strategies in the field of artificial intelligence based on examination standards - Lexology

How do we govern artificial intelligence and act ethically? – Open Access Government

The world has evolved rapidly in the last few years and artificial intelligence (AI) has often been leading the change. The technology has been adopted by almost every industry with companies wanting to explore how AI can automate processes, increase efficiency, and improve business operations.

AI has certainly proved how it can be beneficial to us all, but a common misconception is that it is always objective and avoids bias, opinion, and ideology. Based on this understanding, there has been a rise in recent years in companies utilising AI-based recruiting platforms in a bid to make the hiring process more efficient and devoid of human bias.

Yet, a Financial Times article quoted an employment barrister who doubted the progressive nature of AI tools and said that there is "overwhelming evidence available that the machines are very often getting it wrong". A high-profile example of this was when Amazon had to abandon its AI recruiting tool in 2018 after the company realised it was favouring men for technical jobs.

However, AI has continued to advance at a rapid pace and its adoption by businesses has been further accelerated following COVID-19's arrival. With debates about whether AI can be relied upon to behave impartially still ongoing, how can the technology be governed so that organisations continue to act ethically?

During a press conference in Brussels earlier this year, the European Commission said it was preparing to draft regulation for AI that will help prevent its misuse, but the governing body has set itself quite the challenge. The technology is developing constantly, so after only a few weeks any regulation that is introduced may not go far enough. After a few months, it could become completely irrelevant.

Within the risk community, however, there is no doubt that policies are needed: a study found that 80% of risk professionals are not confident in the AI governance currently in place. At the same time, there are also concerns from technology leaders who believe tighter regulations will stifle AI innovation and obstruct the potentially enormous advantages it can have on the world.

A certain level of creative and scientific freedom is required for companies to create innovative new technologies and although AI can be used for good, the increasing speed with which it is being developed and adopted across industries is a major consideration for governance. The ethical concerns need to be addressed.

Given the current and ongoing complexities that the global pandemic brings, as well as the looming Brexit deadline, we will likely have to wait for the EU's regulation to be finalised and put in place. In the meantime, businesses should begin to get their own houses in order, if they haven't already, with their use of AI and their governance, risk and compliance (GRC) processes, to ensure they are not caught out when legislation does arrive.

By setting up a forward-looking risk management program around implementing and managing the use of AI, organisations can improve their ability to handle both existing and emerging risks by analysing past trends, predicting future scenarios, and proactively preparing for further risk. A governance framework should also be implemented around AI, both within and outside the organisation, to better overcome any unforeseen exposure to risk from evolving AI technologies and an ever-changing business landscape.

Unlike the financial services sector, where internal controls and regulators require businesses to regularly validate and manage their own models, AI model controls are only now starting to be put in place, despite the abundant usage of AI within enterprises. It won't be long before regulators begin to demand proof that the right controls are in place, so organisations need to monitor where AI is being used for business decisions and ensure the technology operates with accuracy and is free of inherent biases and incomplete underlying datasets.

When an organisation is operating with such governance and a forward-looking risk management program towards its use of AI, it will certainly be better positioned once new regulation is eventually enforced.

Too often, businesses are operating with multiple information siloes created by different business units and teams in various geographic locations. This can lead to information blind spots, and a recent Gartner study found that poor data quality is responsible for an average loss of $15 million per year.

Now more than ever, businesses need to be conscious of avoiding unnecessary fines as the figures can be crippling. Hence, it is important that these restrictive siloes are removed in favour of a centralised information hub that everyone across the business can access. This way, senior management and risk professionals are always aware of their risks, including any introduced by AI, and can be confident that they have a clear vision of the bigger picture to be able to efficiently respond to threats.

Another reason for moving towards centralisation and complete visibility throughout the business is the often-cited point that AI fails to act impartially because AI systems learn to make decisions from training data that humans provide. If this data is incomplete, contains conscious or unconscious bias, or reflects historical and social inequalities, so will the AI technology.
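As a minimal sketch of how that propagation happens (the synthetic "hiring" data, features, and logistic-regression model below are assumptions for demonstration only, not any of the systems mentioned in this article), a model trained on historically skewed decisions simply reproduces the skew for new, equally qualified candidates:

```python
# Illustrative sketch only: bias in training data propagating to predictions.
# The synthetic hiring data and logistic-regression choice are assumptions.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(42)
n = 4000

qualification = rng.random(n)        # candidate qualification score (0-1)
group = rng.integers(0, 2, n)        # protected-group membership flag

# Historical labels encode bias: equally qualified candidates from group 1
# were hired less often than candidates from group 0.
p_hire = 1 / (1 + np.exp(-(4 * qualification - 2 - 1.5 * group)))
hired = rng.random(n) < p_hire

model = LogisticRegression().fit(np.column_stack([qualification, group]), hired)

# Two equally qualified candidates who differ only in group membership:
candidates = np.array([[0.8, 0], [0.8, 1]])
print(model.predict_proba(candidates)[:, 1])
# The group-1 candidate gets a noticeably lower predicted hiring probability,
# because the model has learned the historical bias, not just qualification.
```

Centralised oversight of the data feeding such models is what makes this kind of skew visible before it reaches a hiring or lending decision.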

While an organisation may not always be responsible for creating AI bias in the first place, having good oversight and complete, centralised information to hand at any time makes it a lot easier to see where there are blind spots that could damage a company's reputation.

Ultimately, it is down to organisations themselves to manage their GRC processes, have a clear oversight of the entire risk landscape and strongly protect their reputation. One of the outcomes of the pandemic is the increased laser focus on ethics and integrity, so it is critical that organisations hold these values at the core of their business model to prevent scrutiny from regulators, stakeholders and consumers. Until adequate regulation is introduced by the EU, companies essentially need to take AI governance into their own hands to mitigate any risk and to always perform with integrity.


Visit link:
How do we govern artificial intelligence and act ethically? - Open Access Government