Archive for the ‘Artificial Intelligence’ Category

DoD's AI center striving to be connective tissue across all projects – Federal News Network


It's unclear if anyone really knows just how many pilot projects in the Defense Department are using artificial intelligence, machine learning or intelligent automation.

Some say it's around 300, while others say it's closer to 600, and then there are those who believe the number could be more than 1,000.

But unlike with so many technology innovations that came before it, the Pentagon, through its Joint Artificial Intelligence Center (JAIC), is taking aggressive action to stop, or at least limit, AI sprawl.

"There's a lot of efforts that are out there that are not very well tied together, and there's a whole bunch of them that are dealing with exactly the same thing. So one of them is talent. Do they have talent? Or do they have to grow their talent, or do they have to acquire the talent? The other big one, of course, is data, and it's almost invariably when anybody in the Department of Defense talks about doing work, they get to the data saying, 'Okay, my data hasn't been cleansed, so is it usable?'" said Anthony Robbins, the vice president of the North American public sector business for NVIDIA, in an interview with Federal News Network. "They try to assess use cases, and then they're trying to figure out how to get started. The JAIC wants to help them figure this out."

DoD launched the JAIC in June 2018 with a much different vision than where it stands today. Whereas the Pentagon saw JAIC nearly three years ago as pushing AI to the military services and defense agencies through pathfinder projects, it's now focused on providing services and setting the foundational elements for mission areas to take advantage of the technologies.

In November, DoD announced JAIC 2.0, detailing its new vision and mission. As part of that new approach, the JAIC awarded a $106 million contract in September to build the Joint Common Foundation (JCF) artificial intelligence development platform, and plans to create three new other transaction agreement (OTA) vehicles in the coming year under the Tradewinds moniker to further build out its services catalog.

Jacqueline Tame, the acting deputy director of JAIC, said the move to 2.0 is a recognition that the services and defense agencies need a different kind of help to ensure AI tools improve and measure mission readiness.

The JAIC doesn't need to be a doer, but a trainer, educator and supporter, because the adoption of AI and AI-like capabilities (think robotic process automation (RPA) and predictive analytics) is spreading across the department like wildfire.

"What we have been able to do over the last two-and-a-half years is really test what the department actually needs, what the department is actually ready for and what the foundational building blocks of AI-readiness actually are. JAIC 2.0 is a recognition of the learnings that we've undertaken: that there are some key building blocks we have to put in place departmentwide to be AI-ready," Tame said during the AFCEA NOVA IC IT Day. "Where we are today, having developed a lot of capabilities, deployed a lot of prototypes and implemented a lot of solutions across the department, is that we've learned that what the department actually needs is enabling services."

Tame said that while some organizations, like Army Futures Command, Special Operations Command and parts of the Air Force, have matured their AI capabilities, the efforts too often are rolling out in silos.

"What is still not happening, and this is the underpinning of JAIC 2.0, is the connective tissue between all of those capabilities that are being researched or deployed. What is still lacking, in our assessment, is the aggregate of the components of AI-readiness," she said. "That includes removing some of the barriers to entry that present themselves in terms of both education and awareness about what AI is and what AI is not, and what things actually lend themselves to AI and AI-enabled applications. Really understanding what the data needs to look like and the status of AI readiness in order to leverage it and test it appropriately, and an understanding of the ethical underpinnings in terms of what that needs to look like as we consider some of the more advanced capabilities that we are trying to deploy across the force. Having a really foundational understanding of the types of infrastructure and architectures that need to be able to be interoperable in order to achieve the goals we are trying to achieve here. And really trying to understand the culture barriers to entry that still exist."

Like with any new technology, the cultural barriers to AI aren't unusual. But Tame, Robbins and other experts say trust, confidence and usability are at the heart of AI-readiness.

"This is a technology that is affecting and will affect every person, every country and every industry around the world," Robbins said. "It is a technology that can go into every industry, from transportation to healthcare to defense. Technology transformation is as much about leading change as it is about the technology. The technology is ready."

Robbins said a predictive and preventive maintenance program, as well as the use of AI to help with humanitarian assistance, are two examples of how DoD already is using the technology.

One example is the Army's Aviation and Missile Command G-3's work with the JAIC since 2019 on predictive and preventive maintenance for the UH-60 Black Hawk helicopter.

"When it comes to logistics and maintenance, there is an overwhelming amount of data available, anything from aircraft sensor data to maintenance forms and part records," Chris Shumeyko, JAIC product manager, said in an Army release. "Ordinarily, subject matter experts play a huge role in understanding this data and identifying trends that may affect the readiness of the Army's vehicle fleet. However, as the amount of data grows, you either need more experts to comb through that data or possible warning signs of problems may get missed. By injecting AI/ML, we're not replacing these experts, but rather providing them with tools that can find hard-to-spot trends, anomalies or warning signs in a fraction of the time. Our goal is to increase the efficiency of the experts."
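To make the anomaly-spotting idea concrete, here is a minimal sketch of that kind of screening using scikit-learn's IsolationForest on made-up sensor readings; the feature names, values and thresholds are illustrative assumptions, not details of the Army's or the JAIC's actual tooling.

    # Minimal sketch: flag unusual aircraft sensor readings for expert review.
    # The feature names and data are synthetic placeholders, not real fleet data.
    import numpy as np
    from sklearn.ensemble import IsolationForest

    rng = np.random.default_rng(0)

    # Pretend history: engine temperature (C), vibration (g), oil pressure (psi)
    normal_flights = rng.normal(loc=[650.0, 0.5, 70.0],
                                scale=[15.0, 0.05, 3.0],
                                size=(500, 3))

    # A few new readings, including one a busy expert might miss in raw logs
    new_flights = np.array([
        [720.0, 0.9, 55.0],   # hot engine, high vibration, low oil pressure
        [655.0, 0.52, 69.0],  # looks routine
    ])

    model = IsolationForest(contamination=0.01, random_state=0)
    model.fit(normal_flights)

    # predict() returns -1 for anomalous readings and 1 for normal ones
    labels = model.predict(new_flights)
    for reading, label in zip(new_flights, labels):
        status = "flag for review" if label == -1 else "looks routine"
        print(reading, "->", status)

In this framing the flagged records go to a maintainer for review rather than triggering automatic action, which matches the "tools for experts" point in the quote above.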

It's this type of service that the JAIC is providing under its latest iteration.

Tame said the new services include or will include:

Robbins said these services and other recent actions by the JAIC are part of how DoD is moving AI out of the testing phase and into the operations phase.

Tame added that part of the way to address that operational need is not to develop, test and deploy in the silos of yesterday, but through a common framework that creates a starting point for all AI technology.

"These critical building blocks that will enable us to get to the point of implementation of AI across the force in a really cohesive way are not there yet," she said. "The JAIC's role really needs to be driving that advocacy and education of our senior executive leadership all the way down to line analysts and intelligence agencies about institutionalizing the ethical underpinnings that need to be talked about every time we are thinking about AI, about ensuring there is a departmentwide test and evaluation framework that is specific to AI, which is different than everything else the test and evaluation community has been saying before, and ensuring we have a really foundational understanding across the board of those data standards, many of which do not exist yet or haven't been agreed upon, and the level of infrastructure interoperability that we need to both put in place in terms of new systems and reimagine in terms of our legacy systems."

The end goal of JAIC 2.0 isn't just about offering new services or changing its mission focus, but about addressing the AI sprawl that seems to be happening quickly, by giving the military services and defense agencies a common baseline to build on top of and ensuring the necessary trust, confidence, security and ethical foundations are in place. This is something that was missing with cloud, mobile devices and many other technologies, and that absence led to unabated sprawl.

Read the original here:
DoD's AI center striving to be connective tissue across all projects - Federal News Network

IBM-Red Hat deal with Palantir is big boost for its artificial intelligence, cloud strategy – WRAL Tech Wire

Editor's note: Nicole Catchpole is a Senior Analyst with Technology Business Research.

HAMPTON, N.H. – Since Arvind Krishna took the helm as CEO in April, IBM has engaged in a series of acquisitions and partnerships to support its transformative shift to fully embrace an open hybrid cloud strategy. The company is further solidifying the strategy with the announcement that IBM and Palantir are coming together in a partnership that combines AI, hybrid cloud, operational intelligence and data processing into an enterprise offering.

The partnership will leverage Palantir Foundry, a data integration and analysis platform that enables users to easily manage and visualize complex data sets, to create a new solution called Palantir for IBM Cloud Pak for Data. The new offering, which will be available in March, will leverage AI capabilities to help enterprises further automate data analysis across a wide variety of industries and reduce inherent silos in the process.

A core benefit that customers will derive from the collaboration between IBM (NYSE: IBM) and Palantir (NYSE: PLTR) is the easing of the pain points associated with adopting a hybrid cloud model, including integration across multiple data sources and the lack of visibility into the complexities of cloud-native development. By partnering with Palantir, IBM will be able to make its AI software more user-friendly, especially for those customers who are not technical by nature or trade. Palantir's software requires minimal, if any, coding and enhances the accessibility of IBM's cloud and AI business.


According to Rob Thomas, IBM's senior vice president of software, cloud and data, the new offering will help to boost the percentage of IBM's customers using AI from 20% to 80% and will be sold in 180 countries to thousands of customers, which is "a pretty fundamental change for us." Palantir for IBM Cloud Pak for Data will extend the capabilities of IBM Cloud Pak for Data and IBM Cloud Pak for Automation, and according to a recent IBM press release, the new solution is expected to simplify how businesses build and deploy AI-infused applications with IBM Watson and help users access, analyze and take action on the vast amounts of data that is scattered across hybrid cloud environments, without the need for deep technical skills.

By drawing on the no-code and low-code capabilities of Palantir's software, as well as the automated data governance capabilities embedded in the latest update of IBM Cloud Pak for Data, IBM is looking to drive AI adoption across its businesses, which, if successful, can serve as a ramp to access more hybrid cloud workloads. IBM perhaps summed it up best during its 2020 Think conference with the comment: "AI is only as good as the ecosystem that supports it." While many software companies are looking to democratize AI, Red Hat's open hybrid cloud approach, underpinned by Linux and Kubernetes, positions IBM to bring AI to "chapter 2" of the cloud.


For historical context, it is important to remember that the acquisition of Red Hat marked the beginning of IBM's dramatic transformation into a company that places the values of flexibility, openness, automation and choice at the core of its strategic agenda. IBM Cloud Paks, which are modular AI-powered solutions that enable customers to efficiently and securely move workloads to the cloud, have been a central component of IBM's evolving identity.

After more than a year of messaging to the market the critical role Red Hat OpenShift plays in IBM's hybrid cloud strategy, Big Blue is now tasked with delivering on top of that foundational layer with the AI capabilities it has been tied to since the inception of Watson. By leveraging the openness and flexibility of OpenShift, IBM continues to emphasize its Cloud Pak portfolio, which serves as the middleware layer, allowing clients to run IBM software as close to or as far away from the data as they desire. This architectural approach supports IBM's cognitive applications, such as Watson AIOps and Watson Analytics, while new integrations, such as those with Palantir Foundry, will support the data integration process for customers' SaaS offerings.

The partnership with IBM is a landmark relationship for Palantir that provides access to a broad network of internal sales and research teams as well as IBM's expansive global customer base. To start, Palantir will now have access to the reach and influence of IBM's Cloud Paks sales force, which is a notable expansion from its current team of 30. The company already primarily sells to companies that have over $500 million in revenue, and many of them already have relationships with IBM. By partnering with IBM, Palantir will not only be able to deepen its reach into its existing customer base but also have access to a much broader customer base across multiple industries. The partnership additionally provides Palantir with access to the IBM Data Science and AI Elite Team, which helps organizations across industries address data science use cases as well as the challenges inherent in AI adoption.

As a rebrand of its partner program, IBM unveiled the Public Cloud Ecosystem program nearly one year ago, onboarding key global systems integrators, such as inaugural partner Infosys, to push out IBM Cloud Paks solutions to customers on a global scale. As IBM increasingly looks up the technology stack, where enterprise value is ultimately generated, the company is emphasizing the IBM Cloud Pak for Data, evidenced by the November launch of version 3.5 of the solution, which offers support for new services.

In addition, IBM refreshed the IBM Cloud Pak for Automation while integrating robotic process automation technology from the acquisition of WDG Automation. Alongside the product update, IBM announced there are over 50 ISV partners that offer services integrated with IBM Cloud Pak for Data, which is also now available on the Red Hat Marketplace. IBM's ability to leverage technology and services partners to draw awareness to its Red Hat portfolio has become critical and has helped accelerate the vendor's efforts in industry cloud following the launch of the financial services-ready public cloud and the more recent telecommunications cloud. New Cloud Pak updates such as these highlight IBM's commitment to OpenShift as well as its growing ecosystem of partners focused on AI-driven solutions.

Palantir's software, which serves over 100 clients in 150 countries, is diversified across various industries, and the new partner solution will support IBM's industry cloud strategy by targeting AI use cases. Palantir for IBM Cloud Pak for Data was created to mitigate the challenges faced by multiple industries, including retail, financial services, healthcare and telecommunications, "in other words, some of the most complex, fast-changing industries in the world," according to Thomas. For instance, many financial services organizations have been involved in extensive M&A activity, which results in a fragmented and dispersed environment involving multiple pools of data.

Palantir for IBM Cloud Pak for Data will remediate associated challenges with rapid data integration, cleansing and organization. According to IBM's press release, Guy Chiarello, chief administrative officer and head of technology at Fiserv (Nasdaq: FISV), an enterprise focused on supporting financial services institutions, reacted positively to the announcement, stating, "This partnership between two of the world's technology leaders will help companies in the financial services industry provide business-ready data and scale AI with confidence."

(C) TBR

Follow this link:
IBM-Red Hat deal with Palantir is big boost for its artificial intelligence, cloud strategy - WRAL Tech Wire

Top Ten Legal Considerations for Use and/or Development of Artificial Intelligence in Health Care – JD Supra

The purpose of this article is to provide an overview of the top ten legal issues that health care providers and health care companies should consider when using and/or developing artificial intelligence (AI). In particular, this article summarizes, in no particular order:

That's a long list. However, we will attempt to break down these considerations and briefly summarize them as described below.

1. Statutory, Regulatory and Common Law Requirements

Regardless of whether you encounter AI as a health care provider or a developer (or both), there are statutory, regulatory and common law requirements that may be implicated when considering AI in the health care space. Depending on the functionality that the AI is discharging, there could be state and federal laws that require a health care provider or an AI developer to seek licensure, permits and/or other registrations (for example, AI may be employed in a way that requires FDA approval if it provides diagnosis without a health care professional's review). Additionally, as AI functionality expands (and potentially replaces physicians in the provision of physician services), the question may be raised as to how these services are regulated, and whether the provision of such services would be considered the unlicensed practice of medicine or in violation of corporate practice of medicine prohibitions.

2. Ethical Considerations

Where health care decisions have been almost exclusively human in the past, the use of AI in the provision of health care raises ethical questions relating to accountability, transparency and consent. In the instance where complex, deep-learning algorithm AI is used in the diagnosis of patients, a physician may not be able to fully understand or, even more importantly, explain to his or her patient the basis of their diagnosis. As a result, a patient may be left not understanding the status of their diagnosis or being unsatisfied with the delivery of their diagnosis. Further, it may be difficult to establish accountability when errors occur in diagnosis as a result of the use of AI. Additionally, AI is not immune from algorithmic biases, which could lead to diagnosis based on gender or race or other factors that do not have a causal link to the diagnosis.
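As a rough illustration of how such bias might be surfaced, the sketch below compares a hypothetical model's sensitivity across two demographic groups on synthetic data; the group labels, error rates and predictions are invented for the example and do not describe any real diagnostic system.

    # Minimal sketch: check whether a diagnostic model's error rate differs by group.
    # All data here is synthetic; a real audit would use held-out clinical data.
    import numpy as np

    rng = np.random.default_rng(1)
    groups = np.array(["A", "B"] * 500)            # hypothetical demographic label
    truth = rng.integers(0, 2, size=1000)          # 1 = condition present
    # Pretend predictions that happen to be less accurate for group "B"
    flip_rate = np.where(groups == "A", 0.05, 0.20)
    flip = rng.random(1000) < flip_rate
    preds = np.where(flip, 1 - truth, truth)

    for g in ["A", "B"]:
        mask = (groups == g) & (truth == 1)
        sensitivity = (preds[mask] == 1).mean()    # true positive rate within group
        print(f"group {g}: sensitivity = {sensitivity:.2f}")
    # A large gap between groups is a signal to investigate training data and features.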

3. Reimbursement Issues

The use of AI in both patient care and administrative functions raises questions relating to reimbursement by payors for health care services. How will payors reimburse for health care services provided by AI (will they even reimburse for such services)? Will federal and state health care programs (e.g., Medicare and Medicaid) recognize services provided by AI, and will AI impact provider enrollment? AI has the potential to affect every aspect of revenue cycle management. In particular, there are concerns that errors could occur when requesting reimbursement through AI. For example, if AI is assisting providers with billing and coding, could the provider be at risk of a False Claims Act violation as a result of an AI error? In the inevitable event that an error occurs, it may be ambiguous as to who is ultimately responsible for such errors unless clearly defined contractually.

4. Contractual Exposure

5. Torts and Private Causes of Action

If AI is involved in the provision of health care (or other) services, both the developer and provider of the services may have liability under a variety of tort law principles. Under theories of strict liability, a developer may be held liable for defects in their AI that are unreasonably dangerous to users. In the case of design defects, a developer may be held liable if the AI is inadequately planned or unreasonably hazardous to consumers. At least for the near term, the AI itself probably will not be liable for its acts or omissions (but recognize that as AI evolves, tort theories could also evolve to hold the AI itself liable). As a result, those involved in the process (the developer and provider) will likely have exposure to liability associated with the AI. Whether the liability is professional liability or product liability will likely depend on the functions the AI is performing. Further, depending on how the AI is used, a provider may be required to disclose the use of AI to their patients as a part of the informed consent process.

6. Antitrust Issues

The Antitrust Division of the Department of Justice (the DOJ) has made remarks regarding algorithmic collusion that may impact the use of AI in the health care space. While acknowledging the fact that algorithmic pricing can be highly competitive, the DOJ has acknowledged that concerted action to fix prices may occur when competitors have a common understanding to use the same software to achieve the same results. As a result, the efficiencies gained by using AI with pricing information, and other competitive data, may be offset by the antitrust risks.

7. Employment and Labor Considerations

The use of AI in the workforce will likely impact the structure of employment arrangements as well as employment policies, training and liability. AI may change the structure of the workforce by increasing the efficiencies in job performance and competition for those jobs (i.e., fewer workforce members are necessary when tasks are performed more quickly and efficiently by AI). However, integration of AI into the workforce also may create new bases for litigation and causes of action based on discrimination in hiring practices. If AI is used in making hiring decisions, how can you ensure decisions based on any discriminatory characteristics are removed from the analysis? AI also may affect the terms of the employment and independent contractor agreements with workforce members, particularly with respect to ownership of intellectual property, restrictive covenants and confidentiality.

8. Privacy and Security Risks

Speaking of confidentiality, the use and development of AI in health care poses unique challenges to companies that have ongoing obligations to safeguard protected health information, personally identifiable information and other sensitive information. AI's processes often require enormous amounts of data. As a result, it is inevitable that using AI may implicate the Health Insurance Portability and Accountability Act (HIPAA) and state-level privacy and security laws and regulations with respect to such data, which may need to be de-identified. Alternatively, an authorization from the patient may be required prior to disclosure of the data via AI or to the AI. Further, AI poses unique challenges and risks with respect to privacy breaches and cybersecurity threats, which have an obvious negative impact on patients and providers.
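For illustration only, the sketch below strips direct identifiers from a record before it is handed to an analytics pipeline. The field list is an assumed simplification; it is far shorter than HIPAA's Safe Harbor identifier categories and does not by itself make data de-identified.

    # Minimal sketch: drop direct identifiers before sharing records with an AI pipeline.
    # The field names are hypothetical and the list is illustrative, not a compliance tool.
    DIRECT_IDENTIFIERS = {"name", "ssn", "street_address", "phone", "email", "mrn"}

    def deidentify(record: dict) -> dict:
        """Return a copy of the record with direct identifiers removed."""
        return {k: v for k, v in record.items() if k not in DIRECT_IDENTIFIERS}

    patient = {
        "name": "Jane Doe",
        "mrn": "12345",
        "age": 57,
        "diagnosis_code": "E11.9",
        "lab_glucose_mg_dl": 162,
    }
    print(deidentify(patient))  # {'age': 57, 'diagnosis_code': 'E11.9', 'lab_glucose_mg_dl': 162}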

9. Intellectual Property Considerations

It is of particular importance for AI developers to preserve and protect the intellectual property rights that they may be able to assert over their developments (patent rights, trademark rights, etc.) and for users of AI to understand the rights they have to use the AI they have licensed. It also is important to consider carefully who owns the data that the AI uses to learn and the liability associated with such ownership.

10. Compliance Program Implications

As technology evolves, so should a provider's compliance program. When new technology such as AI is introduced, compliance program policies and procedures should be updated based on the new technology. In addition, it is important that the workforce implementing and using the AI technology is trained appropriately. As in a traditional compliance plan, continual monitoring and evaluation should take place and programs and policies should be updated pursuant to such monitoring and changes in AI.

We predict that as the use and development of AI grows in health care, so will this list of legal considerations.

See more here:
Top Ten Legal Considerations for Use and/or Development of Artificial Intelligence in Health Care - JD Supra

Artificial Intelligence And The End Of Work – Forbes

Dating back to the Industrial Revolution, people have speculated that machines would render human work obsolete. Unlike in earlier eras, artificial intelligence will prove this prophecy true.

"When looms weave by themselves, man's slavery will end." – Aristotle, 4th century BC

Stanford is hosting an event next month named "Intelligence Augmentation: AI Empowering People to Solve Global Challenges." This title is telling and typical.

The notion that, at its best, AI will augment rather than replace humans has become a pervasive and influential narrative in the field of artificial intelligence today.

It is a reassuring narrative. Unfortunately, it is also deeply misguided. If we are to effectively prepare ourselves for the impact that AI will have on society in the coming years, it is important for us to be more clear-eyed on this issue.

It is not hard to understand why people are receptive to a vision of the future in which AI's primary impact is to augment human activity. At an elemental level, this vision leaves us humans in control, unchallenged at the top of the cognitive food chain. It requires no deep, uncomfortable reconceptualizations from us about our place in the world. AI is, according to this line of thinking, just one more tool we have cleverly created to make our lives easier, like the wheel or the internal combustion engine.

But AI is not just one more tool, and uncomfortable reconceptualizations are on the horizon for us.

Chess provides an illustrative example to start with. Machine first surpassed man in chess in 1997, when IBM's Deep Blue computer program defeated world chess champion Garry Kasparov in a widely publicized match. In response, in the years that followed, the concept of "centaur chess" emerged to become a popular intellectual touchstone in discussions about AI.

The idea behind centaur chess was simple: while the best AI could now defeat the best human at chess, an AI and human working together (a centaur) would be the most powerful player of all, because man and machine would bring complementary skills to bear. It was an early version of the myth of augmentation.

And indeed, for a time, mixed AI/human teams were able to outperform AI programs at chess. Centaur chess was hailed as evidence of the irreplaceability of human creativity. As one centaur chess advocate reasoned: "Human grandmasters are good at long-term chess strategy, but poor at seeing ahead for millions of possible moves, while the reverse is true for chess-playing AIs. And because humans and AIs are strong on different dimensions, together, as a centaur, they can beat out solo humans and computers alike."

But as the years have gone by, machine intelligence has continued on its inexorable exponential upward trajectory, leaving human chess players far behind.

Today, no one talks about centaur chess. AI is now so far superior to humanity in this domain that a human player would simply have nothing to add. No serious commentator today would argue that a human working together with DeepMind's AlphaZero chess program would have an advantage over AlphaZero by itself. In the world of chess, the myth of augmentation has been proven untenable.

Chess is just a board game. What about real-world settings?

The myth of augmentation has spread far and wide in real-world contexts, too. One powerful reason why: job loss from automation is a frightening prospect and a political hot potato.

Let's unpack that. Entrepreneurs, technologists, politicians and others have much to gain by believing, and by persuading others to believe, that AI will not replace but rather will supplement humans in the workforce. Employment is one of the most basic social and political necessities in every society in the world today. To be openly job-destroying is therefore a losing proposition for any technology or business.

"AI is going to bring humans and machines closer together," business leader Robin Bordoli said recently, echoing a narrative that has been on the lips of countless Fortune 500 CEOs in recent years. "It's not about machines replacing humans, but machines augmenting humans. Humans and machines have different relative strengths and weaknesses, and it's about the combination of these two that will allow human intents and business process to scale 10x, 100x, and beyond that in the coming years."

Former IBM CEO Ginni Rometty summed it up even more succinctly in a 2018 Wall Street Journal op-ed: "AI, better understood as augmented intelligence, complements, rather than replaces, human cognition."

Yet a moments honest reflection makes clear that many AI systems being built today will displace, not augment, vast swaths of human workers across the economy.

AI's core promise, the reason we are pursuing it to begin with, is that it will be able to do things more accurately, more cheaply and more quickly than humans can do them today. Once AI can deliver on this promise, there will be no practical or economic justification for humans to continue to be involved in many fields.

For instance, once an AI system can provably drive a truck better and more safely in all conditions than a human can (the technology is not there today, but it is getting closer), it simply will not make sense for humans to continue driving trucks. In fact, it would be affirmatively harmful and wasteful to have a human in the loop: aside from saved labor costs, AI systems never speed, never get distracted, never drive drunk, and can stay on the road 24 hours a day without getting drowsy.

The startups and truck manufacturers developing self-driving truck technology today may not acknowledge it publicly, but the end game of their R&D efforts is not to augment human laborers (although that narrative always finds a receptive audience). It is to replace them. That is where the real value lies.

Radiology provides another instructive example. Radiologists' primary responsibility is to examine medical images for the presence or absence of particular features, like tumors. Pattern recognition and object detection in images is exactly what deep learning excels at.
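For readers unfamiliar with what that looks like in code, here is a toy sketch of a small convolutional classifier of the kind used for image triage, run on random tensors in place of real scans; the architecture, input size and labels are illustrative assumptions, not those of any deployed radiology product.

    # Toy sketch: a tiny convolutional classifier of the sort used for image triage.
    # Random tensors stand in for medical images; nothing here is a clinical model.
    import torch
    import torch.nn as nn

    class TinyClassifier(nn.Module):
        def __init__(self, num_classes: int = 2):
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv2d(1, 8, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
                nn.Conv2d(8, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            )
            self.head = nn.Linear(16 * 16 * 16, num_classes)  # assumes 64x64 inputs

        def forward(self, x):
            x = self.features(x)
            return self.head(x.flatten(start_dim=1))

    model = TinyClassifier()
    fake_scans = torch.randn(4, 1, 64, 64)    # batch of 4 grayscale "images"
    logits = model(fake_scans)
    probs = torch.softmax(logits, dim=1)      # e.g. P(finding present) per image
    print(probs)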

A common refrain in the field of radiology these days goes like this: "AI will not replace radiologists, but radiologists who use AI will replace radiologists who do not." This is a quintessential articulation of the myth of augmentation.

And in the near term, it will be true. AI systems will not replace humans overnight, in radiology or in any other field. Workflows, organizational systems, infrastructure and user preferences take time to change. The technology will not be perfect at first. So to start, AI will indeed be used to augment human radiologists: to provide a second opinion, for instance, or to sift through troves of images to prioritize those that merit human review. In fact, this is already happening. Consider it the centaur chess phase of radiology.

But fast forward five or ten years. Once it is established beyond dispute that neural networks are superior to human radiologists at classifying medical images (across patient populations, care settings, disease states), will it really make sense to continue employing human radiologists? Consider that AI systems will be able to review images instantly, at zero marginal cost, for patients anywhere in the world, and that these systems will never stop improving.

In time, the refrain quoted above will prove less on-the-mark than the controversial but prescient words of AI legend Geoff Hinton: "We should stop training radiologists now. If you work as a radiologist, you are like Wile E. Coyote in the cartoon; you're already over the edge of the cliff, but you haven't looked down."

What does all of this mean for us, for humanity?

A vision of the future in which AI replaces rather than augments human activity has a cascade of profound implications. We will briefly surface a few here, acknowledging that entire books can and have been written on these topics.

To begin, there will be considerable human pain and dislocation from job loss. It will occur across social strata, geographies and industries. From security guards to accountants, from taxi drivers to lawyers, from cashiers to stock brokers, from court reporters to pathologists, human workers across the economy will find their skills out of demand and their roles obsolete as increasingly sophisticated AI systems come to perform these activities better, cheaper and faster than humans can. It is not Luddite to acknowledge this inevitability.

Society needs to be nimble and imaginative in its public policy response in order to mitigate the effects of this job displacement. Meaningful investment in retraining and reskilling by both governments and private employers will be important in order to postpone the obsolescence of human workers in an increasingly AI-driven economy.

More fundamentally, a paradigm shift in how society conceives of resource allocation will be necessary in a world in which material goods and services are increasingly cheaply available thanks to automation, while demand for compensated human labor is increasingly scarce.

The idea of a universal basic income, until recently little more than a pet thought experiment among academics, has begun to be taken seriously by mainstream policymakers. Last year Spain's national government launched the largest UBI program in history. One of the leading candidates in the 2020 U.S. presidential election made UBI the centerpiece of his campaign. Expect universal basic income to become a normalized and increasingly important policy tool in the era of AI.

An important dimension of AI-driven job loss is that some roles will resist automation for far longer than others. The jobs in which humans will continue to outperform machines for the foreseeable future will not necessarily be those that are the most cognitively complex. Rather, they will be those in which our humanity itself plays an essential part.

Chief among these are roles that involve empathy, camaraderie, social interaction, the human touch. Human babysitters, nurses, therapists, schoolteachers, and social workers, for instance, will continue to find work for many years to come.

Likewise, humans will not be replaced any time soon in roles that require true originality and unconventional thinking. A cliché but insightful adage about the relationship between man and AI goes as follows: as AI gets better at knowing the right answers, humans' most important role will be to know which questions to ask. Roles that demand this sort of imaginativeness include, for instance, academic researchers, entrepreneurs, technologists, artists, and novelists.

In the jobs that do remain as the years go by, then, people will spend less of their energy on tedious, repeatable, soulless tasks and more of it developing human relationships, managing interpersonal dynamics, thinking creatively.

But make no mistake: a larger, more profound transition is in store for humanity as AI assumes more and more of the responsibilities that people bear today. To put it simply, we will eventually enter a post-work world.

There will not be nearly enough meaningful jobs to employ every working-age person. More radically, we will not need people to work in order to generate the material wealth necessary for everyones healthy subsistence. AI will usher in an era of bounty. It will automate (and dramatically improve upon) the value-creating activities that humans today perform; it will, for instance, enable us to synthetically generate food, shelter, and medicine at scale and at low cost.

This is a startling, almost incomprehensible vision of the future. It will require us to reconceptualize what we value and what the meaning of our lives is.

Today, adult life is largely defined by what resources we have and by how we go about accumulating those resources; in other words, by work and money. If we relax these constraints, what will fill our lives?

No one knows what this future will look like, but here are some possible answers. More leisure time. More time to invest in family and to develop meaningful human relationships. More time for hobbies that give us joy, whether reading or fly fishing or photography. More mental space to be creative and productive for its own sake: in art, writing, music, filmmaking, journalism. More time to pursue our inborn curiosity about the world and to deepen our understanding of life's great mysteries, from the atom to the universe. More capacity for the basic human impulse to explore: the earth, the seas, the stars.

The AI-driven transition to a post-work world will take many decades. It will be disruptive and painful. It will require us to completely reinvent our society and ourselves. But ultimately, it can and should be the greatest thing that has ever happened to humanity.

See the rest here:
Artificial Intelligence And The End Of Work - Forbes

OnSeen Announces Addition of New Artificial Intelligence Service to its LiveClaims Solution to Streamline and Accelerate the Claims Management Process…

COLUMBUS, Ohio, Feb. 17, 2021 /PRNewswire/ -- OnSeen, Inc. announced today the development of a new Artificial Intelligence service for its LiveClaims Solution aimed at accelerating and personalizing the claims adjustment and settlement process for P&C insurance policyholders. Using the LiveClaims AI service to intelligently automate adjuster assignment and scheduling, while simultaneously replacing manual claims adjustment activities with virtual processes, makes field adjusting faster and more efficient. In addition, automated, more accurate claims estimates can be generated in a fraction of the time compared to traditional desk adjusting by using the LiveClaims Optimizer's AI-based learning models. These models simultaneously analyze claims data, inspection reports, replacement material, and labor cost tables. The result is a significant acceleration of claims settlement times and increased efficiency across the entire claims adjustment workflow, which is critical during catastrophes in which hundreds of claims may need to be processed within a compressed timeline.

"We are applying an AI, machine learning model, A2C, as the framework for the development of our new AI service," said Ryan Memmelaar, OnSeen CTO. "During the developmental phase, we are doing model research, testing, and training using historical claims data. After production launch of the new AI service, we will perform model training and improvement through a continuous feedback loop using real claims data captured in production."

The LiveClaims Solution, supported by OnSeen's new AI service, seamlessly connects all parties involved in the claim adjustment workflow in real time through an affordable, easy-to-use, mobile-web platform. LiveClaims is comprised of a set of integrated components. The Admin Console is used by Claims Managers to monitor, manage and oversee the Claims Adjustment process. The Adjuster App is used by field adjusters to connect with policyholders and claims managers, and capture and upload collected data required to write the claim estimate through its intelligent, dynamic property inspection forms. The Policyholder Portal is used by Policyholders to submit their claim details and photos, receive notifications and monitor their claim status in real time. And the new, AI-enabled version of the LiveClaims Optimizer applies intelligence and continual learning to the claims management process, resulting in more accurate, accelerated claims adjusting, writing and settlement.

"LiveClaims is the core technology component of our new, revolutionary Pace-EVOscoper/writer claims management service," said Bill Brassfield, CEO of Pacesetter Claims Services. "We are excited to continue partnering with OnSeen on the development of their new AI service that will make our Pace-EVO claims adjustment process faster, more accurate, and policyholder-friendly."

About OnSeen:

Founded in Columbus, Ohio, in 2017, OnSeen, Inc. provides mobile workforce management software for the insurance, healthcare and government markets. The OnSeen family of services, including LiveClaims, LiveCare and LiveGov, is focused on helping organizations manage their remote people, places, and things. OnSeen is a veteran-friendly company.

SOURCE OnSeen, Inc.


Read more here:
OnSeen Announces Addition of New Artificial Intelligence Service to its LiveClaims Solution to Streamline and Accelerate the Claims Management Process...