Archive for the ‘Artificial Intelligence’ Category

BLUE CROSS BLUE SHIELD OF MASSACHUSETTS USES ARTIFICIAL INTELLIGENCE TO SPEED REVIEW TIME, AUTOMATE AUTHORIZATIONS & ELIMINATE ADMINISTRATIVE…

Review time shortened from an average of nine days to less than one day

BOSTON, Oct. 12, 2022 /PRNewswire/ -- Blue Cross Blue Shield of Massachusetts ("Blue Cross") today announced the completion of a proof-of-concept pilot called "FastPass," an end-to-end automated prior authorization process that eliminates the need for faxes, phone calls and manual processes for payers and providers. The initiative, piloted at New England Baptist Hospital (NEBH), focused on addressing the major problem areas: reducing the time from submission to decision, alleviating administrative burden, decreasing clinical review time, and increasing clinician satisfaction.

The Problem
Prior Authorization (also known as "Pre-Certification") is a process through which a clinician seeks advance approval from a health plan to ensure that a service or treatment is covered, medically necessary, and not duplicated. Prior authorizations exist to manage excess health care costs and mitigate patient risk while also helping ensure consumers receive high-quality care. However, prior authorization can be cumbersome for clinicians.

"We realize that the prior authorization process is widely recognized as the single biggest administrative pain point for hospital staff," said Kathy Gardner, RN, vice president of clinical operations at Blue Cross. "We wanted to figure out a way to retain the value of prior authorizations ensuring our members receive treatments that are medically necessary and clinically effective while eliminating the administrative burden on our clinical partners and allowing members to get the care they need sooner."

How it works
Blue Cross engaged Olive, a leading automation and intelligence company bridging the divide in health care, to help streamline both clinician and payer processes and prior authorization decision-making using artificial intelligence (AI).

The technology automated the process of cross-checking Blue Cross' prior authorization requirements in real time to identify whether a prior authorization was required. If one was not required, the provider received instant notification that they could proceed with scheduling the procedure. When a prior authorization was required, FastPass used AI to cross-check the clinical history in the electronic medical record against Blue Cross' medical necessity criteria and automatically generate a recommendation in real time, again giving the clinician the ability to proceed with scheduling the procedure. For the remaining submissions that required more complex clinical review, FastPass automatically packaged and made available all the clinical documentation and notes to the clinical review team, significantly streamlining and accelerating the reviews.
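The release describes a three-way triage: no authorization needed, approved automatically against medical-necessity criteria, or routed to human review. The Python sketch below is a minimal illustration of that flow under stated assumptions; the names, the rule table, and the predict_necessity scoring call are hypothetical stand-ins, not Olive's or Blue Cross' actual implementation.

```python
from dataclasses import dataclass
from enum import Enum


class Decision(Enum):
    NO_AUTH_REQUIRED = "no prior authorization required"
    AUTO_APPROVED = "approved automatically"
    MANUAL_REVIEW = "routed to clinical review"


@dataclass
class AuthRequest:
    procedure_code: str
    clinical_history: dict  # structured facts pulled from the EMR


def triage(request: AuthRequest, auth_rules: dict, criteria_model) -> Decision:
    """Three-way triage loosely modeled on the FastPass description."""
    # Step 1: check whether this procedure needs prior authorization at all.
    if not auth_rules.get(request.procedure_code, False):
        return Decision.NO_AUTH_REQUIRED

    # Step 2: score the clinical history against medical-necessity criteria.
    # `criteria_model` stands in for the AI component; a request scoring
    # above the (assumed) confidence threshold is approved in real time.
    if criteria_model.predict_necessity(request.clinical_history) >= 0.9:
        return Decision.AUTO_APPROVED

    # Step 3: everything else is packaged for the human clinical review team.
    return Decision.MANUAL_REVIEW
```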

The Results
The pilot at NEBH focused on hip and knee procedures for 32 orthopedic providers over a four-month period. 88% of prior authorization submissions were processed automatically in real time, and the average prior authorization approval time fell from nine days to less than one day. The associated impact on administrative burden and cost has been significant for Blue Cross.

"The FastPass proof-of-concept is just one step in our journey toward automating prior authorizations across BCBSMA to continue to make the process frictionless for our clinical partners and ultimately our members," said Deb Vona, senior director of business operations at Blue Cross.

About Blue Cross Blue Shield of Massachusetts
Blue Cross Blue Shield of Massachusetts (http://www.bluecrossma.org) is a community-focused, tax-paying, not-for-profit health plan headquartered in Boston. We are committed to the relentless pursuit of quality, affordable and equitable health care with an unparalleled consumer experience. Consistent with our promise to always put our members first, we are rated among the nation's best health plans for member satisfaction and quality. Connect with us on Facebook, Twitter, YouTube, and LinkedIn.

About Olive
Olive delivers automation and intelligence to bridge the divide in healthcare. By addressing the most burdensome operational issues, Olive is reducing costs and increasing capacity for hospitals, health systems and payers, so the focus can remain on delivering the best, most effective care to patients. To learn more about Olive, visit oliveai.com.

SOURCE Blue Cross Blue Shield of Massachusetts

Read more:
BLUE CROSS BLUE SHIELD OF MASSACHUSETTS USES ARTIFICIAL INTELLIGENCE TO SPEED REVIEW TIME, AUTOMATE AUTHORIZATIONS & ELIMINATE ADMINISTRATIVE...

Lantheus Presentations at the European Association of Nuclear Medicine Annual Meeting Showcased Artificial Intelligence Data – Yahoo Finance

Lantheus Holdings, Inc.

NORTH BILLERICA, Mass., Oct. 17, 2022 (GLOBE NEWSWIRE) -- Lantheus Holdings, Inc. ("the Company") (NASDAQ: LNTH), a company committed to improving patient outcomes through diagnostics, radiotherapeutics and artificial intelligence solutions that enable clinicians to Find, Fight and Follow disease, showcased artificial intelligence (AI) data at the 2022 European Association of Nuclear Medicine (EANM) Annual Meeting in Barcelona, Spain.

"PYLARIFY AI has the potential to contribute meaningful insights to inform treatment selection and monitoring in prostate cancer. Our presentations at EANM highlight new data on the clinical utility of our artificial intelligence solution to assess response to prostate cancer therapy," said Etienne Montagut, Chief Business Officer, Lantheus. "Lantheus continues to be a leader in harnessing the power of AI and machine learning to inform clinical decisions, and support our mission to Find, Fight and Follow disease to deliver better patient outcomes."

A summary of the data presented is included below.

In an oral presentation, the Company reviewed the results from a retrospective analysis using aPROMISE to evaluate PSMA PET/CT scans, pre- and post-androgen deprivation therapy, of men with treatment-naïve castration-sensitive prostate cancer. The results demonstrated that a change in automated PSMA scores in bone and lymph nodes is strongly associated with PSA response. The analysis also indicated that a quantitative automated PSMA-score may assess treatment response in bone, which is not feasible with conventional imaging.1 This presentation was chosen as a top-rated oral presentation within the scientific program at EANM.

In a poster presentation, the Company shared the results from an evaluation of the volumetric expression of PSMA in prostate tumors on PET/CT against MRI PIRAD-Index 4 and 5 in patients who underwent radical prostatectomy. The volumetric expression of PSMA was quantified into an automated PSMA score, which was calculated using the PROMISE criteria. The automated PSMA score was observed to be significantly lower in patients with PIRAD score-4 (Median=21.40; 95% CI 10.76 - 40.65) compared to that observed in PIRAD score-5 (Median=37.00; 95% CI 24.68 - 56.05), p=0.014.2
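The release does not name the statistical test behind the p-value above. Purely for illustration, the sketch below uses SciPy's Mann-Whitney U test, a common nonparametric choice for comparing skewed scores between two groups; the patient scores are made up, since the study's individual-level data are not included in this release.

```python
import numpy as np
from scipy import stats

# Hypothetical automated PSMA scores, for illustration only.
pirads_4_scores = np.array([12.1, 18.5, 21.4, 25.0, 30.2, 40.6])
pirads_5_scores = np.array([24.7, 31.0, 37.0, 44.8, 52.3, 56.1])

# Nonparametric two-sample comparison of the two groups' scores.
statistic, p_value = stats.mannwhitneyu(
    pirads_4_scores, pirads_5_scores, alternative="two-sided"
)
print(f"median score, PIRAD 4: {np.median(pirads_4_scores):.2f}")
print(f"median score, PIRAD 5: {np.median(pirads_5_scores):.2f}")
print(f"Mann-Whitney U p-value: {p_value:.3f}")
```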

In a second poster presentation, the Company highlighted the results from a study evaluating a novel methodology for adaptive lesion segmentation in PSMA PET/CT that employs a threshold based on a decreasing percentage of maximum Standard Uptake Value (SUVmax), with the percentage dependent on SUVmax and blood-pool uptake of PSMA PET/CT imaging. The study concluded that the adaptive threshold can be applied to improve reproducibility and robustness when quantifying tumor burden in PSMA PET/CT images. The proposed adaptive thresholding for automatic lesion segmentation demonstrated significantly more accurate segmentations than the conventional method, achieving improved precision for all lesion types and similar recall.3
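The abstract describes the method only at a high level: the segmentation cutoff is a percentage of SUVmax that decreases as SUVmax rises relative to blood-pool uptake. The sketch below shows one assumed form such a rule could take; the exact function, constants, and thresholds are illustrative guesses, not the authors' published method.

```python
import numpy as np


def adaptive_threshold(suv_max: float, blood_pool_suv: float) -> float:
    """Absolute SUV cutoff for segmenting one candidate lesion.

    Illustrative only: the fraction of SUVmax eases from ~60% for lesions
    barely above blood pool toward ~40% for very avid lesions. The actual
    functional form used in the EANM poster is not given in this release.
    """
    contrast = suv_max / max(blood_pool_suv, 1e-6)
    fraction = float(np.clip(0.60 - 0.05 * np.log2(contrast), 0.40, 0.60))
    return fraction * suv_max


def segment_lesion(pet_volume: np.ndarray, suv_max: float,
                   blood_pool_suv: float) -> np.ndarray:
    """Boolean mask of voxels at or above the adaptive cutoff."""
    return pet_volume >= adaptive_threshold(suv_max, blood_pool_suv)
```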


About PYLARIFY AI
PYLARIFY AI employs deep learning algorithms that allow healthcare professionals and researchers to perform standardized quantitative assessment of PSMA PET/CT images in prostate cancer. Through rigorous analytical and clinical studies, PYLARIFY AI has demonstrated improved consistency, accuracy and efficiency in quantitative assessment of PSMA PET/CT. An FDA-cleared medical device software, PYLARIFY AI is commercially available in the United States.

PYLARIFY AI Indications for Use
PYLARIFY AI is intended to be used by healthcare professionals and researchers for acceptance, transfer, storage, image display, manipulation, quantification and reporting of digital medical images. The system is intended to be used with images acquired using nuclear medicine imaging using PSMA PET/CT. The device provides general Picture Archiving and Communication System (PACS) tools as well as a clinical application for oncology, including marking of regions of interest and quantitative analysis.

PYLARIFY AI Warnings and Precautions
The user must ensure that the patient's name, ID, and study date displayed in the patient section correspond to the patient case. The user must ensure the review of the image quality and quantification analysis results before signing the report. The user must review the images and quantification results in the report to ensure that the information saved and exported is correct. The quantification analysis results provided by PYLARIFY AI are intended to be used as complementary information together with other patient information. The user shall not rely solely on the information provided by PYLARIFY AI for diagnostic or treatment decisions. Quantitative indexes (aPSMA Score) are only appropriate for PSMA PET/CT images. The user should not select hotspots for studies with images that do not fulfill the Quality Control requirements. In such cases, the user can create and sign a report indicating that the review cannot be done due to image quality deficiencies.

About Lantheus
With more than 60 years of experience in delivering life-changing science, Lantheus is committed to improving patient outcomes through diagnostics, radiotherapeutics and artificial intelligence solutions that enable clinicians to Find, Fight and Follow disease. Lantheus is headquartered in Massachusetts and has offices in New Jersey, Canada and Sweden. For more information, visit http://www.lantheus.com.

Safe Harbor for Forward-Looking and Cautionary Statements
This press release contains forward-looking statements within the meaning of the Private Securities Litigation Reform Act of 1995, as amended, that are subject to risks and uncertainties and are made pursuant to the safe harbor provisions of Section 27A of the Securities Act of 1933, as amended, and Section 21E of the Securities Exchange Act of 1934, as amended. Forward-looking statements may be identified by their use of terms such as "can," "continue," "may," "potential" and other similar terms. Such forward-looking statements are based upon current plans, estimates and expectations that are subject to risks and uncertainties that could cause actual results to materially differ from those described in the forward-looking statements. The inclusion of forward-looking statements should not be regarded as a representation that such plans, estimates and expectations will be achieved. Readers are cautioned not to place undue reliance on the forward-looking statements contained herein, which speak only as of the date hereof. The Company undertakes no obligation to publicly update any forward-looking statement, whether as a result of new information, future developments or otherwise, except as may be required by law. Risks and uncertainties that could cause our actual results to materially differ from those described in the forward-looking statements include (i) our ability to successfully launch PYLARIFY AI as a commercial product; (ii) the market receptivity to PYLARIFY AI as a new digital application for quantitative assessment of PSMA PET/CT images in prostate cancer; (iii) the intellectual property protection of PYLARIFY AI; (iv) interruptions or performance problems associated with our digital application, including a service outage; (v) a network or data security incident that allows unauthorized access to our network or data or our customers' data; and (vi) the risks and uncertainties discussed in our filings with the Securities and Exchange Commission (including those described in the "Risk Factors" section in our Annual Reports on Form 10-K and our Quarterly Reports on Form 10-Q).

1 Anand A, et al. PROMISE-criteria inspired quantitative response in PSMA PET to androgen deprivation in patients with treatment-naïve castration sensitive prostate cancer. 35th Annual Congress of the European Association of Nuclear Medicine Scientific Program, OP-060, p. 34 (Eur J Nucl Med Mol Imaging (2022) 49 (Suppl 1): S495).
2 Wang W, et al. Evaluation of PROMISE criteria inspired intraprostatic PSMA-score against PIRAD-Index in patients undergoing radical prostatectomy. Eur J Nucl Med Mol Imaging (2022) 49 (Suppl 1): S489.
3 Brynolfsson J, et al. A Novel Adaptive Approach to Automatic Segmentation of PSMA-positive Lesions in Positron Emission Tomography (PET) of Prostate Cancer. Eur J Nucl Med Mol Imaging (2022) 49 (Suppl 1): S596.

Contacts:
Mark Kinarney
Vice President, Investor Relations
978-671-8842
ir@lantheus.com

Melissa Downs
Senior Director, Corporate Communications
646-975-2533
media@lantheus.com

Original post:
Lantheus Presentations at the European Association of Nuclear Medicine Annual Meeting Showcased Artificial Intelligence Data - Yahoo Finance

Economists' View Of Artificial Intelligence: Beyond Cheaper Prediction Power – Forbes

AI takes on predictive power; humans retain judgement.

There has been no shortage of attention given to the potential of artificial intelligence, along with related concerns about bias, data viability, costs, and employee resistance. But we may be missing the most important point when it comes to AI's ultimate impact, a leading AI proponent argues. That is, we're starting to outsource a large share of human decision-making to machines, which may have unforeseen implications beyond simply making cheaper predictions.

"It's time to start looking at AI not from a technologist's perspective, but from an economist's perspective," states Ajay Agrawal, professor at the University of Toronto and co-author of Power and Prediction: The Disruptive Economics of Artificial Intelligence. Agrawal recently shared his views on the coming AI wave in a talk hosted at the University of British Columbia's Green College. "AI is moving into its next phase, moving up the decision-making food chain. This is where AI is moving from the sidelines to a more central role in the economy," he says.

Overall, there has been disappointment with AI, as it does not appear to be delivering the miracles initially promised, he adds, noting that "many things seem much less impacted than what we thought. Productivity growth even still continues to decline." At the same time, Agrawal continues, AI is still a work in progress, and we're just beginning to see it unfold.

Agrawal maintains that it's time to take an economist's view of AI. "A computer scientist or an engineer will talk about AI in terms of advances in neural networks. But if you ask an economist what's going on with AI, they will characterize it as a drop in the cost of prediction. As AI gets better and better, it effectively makes prediction cheaper and cheaper. This is significant because we use prediction everywhere. Prediction is embedded in all kinds of things where you might not think of prediction, for example, autonomous driving."

Decision-making, which is the source of financial and political power in the economy, has two components: prediction and judgment, Agrawal says. These two functions are being decoupled in AI systems: humans are retaining judgment but turning prediction over to AI. "We are constantly making some form of a probability assessment and a judgment assessment, whether we realize it or not," he says. The rise of AI is shifting one of those ingredients, prediction, from humans to machines. "We're outsourcing the prediction part to the machine."

To date, AI has focused on point solutions: transcribing text, detecting errors in production lines, and so forth. "We've picked all the low-hanging fruit of all the point solutions, where you just get a prediction and the prediction leads to a simple action," Agrawal says. "Like a tool linked to a camera that predicts if a tooth on a digger in a mining operation is broken. That's a point solution, a prediction that leads to a specific action. It doesn't impact anything else in the operation."

AI begins to realize greater value "when you start building a fully autonomous system, where one prediction, one decision, impacts many other decisions," Agrawal points out. "From an economics perspective, we're into the realm of game theory, where if we change a decision, how does that impact all other decisions?"

Moving the predictive aspect of decisions to machines can be an eye-opening experience as it rolls out. "AI opens the door to a flourishing of new decisions," Agrawal says. Many of these decisions are new because we previously hid them via rules, insurance and over-engineering, he says. "We did such a good job hiding them that we've long since forgotten they were ever there. AI is unearthing these long-hidden, latent decisions."

This is more than an exercise in creativity; it means power. "Decision-making confers power; changes in decision-making can lead to changes in power," he says. "Centralizing or decentralizing decision-making will consolidate or distribute power."

This means transformation throughout the economy, Agrawal states. "AI is arguably the first tool in human history that learns as you use it. The more you use it, the smarter it gets, because it learns from every use."

Continued here:
Economists' View Of Artificial Intelligence: Beyond Cheaper Prediction Power - Forbes

Artificial intelligence in government is about people, not programming – Federal Times

U.S. government investment in artificial intelligence has grown significantly in the last few years, as evidenced by the additional funding for AI research in President Biden's 2023 fiscal budget.

With more than $2 billion allocated to the National Institute of Standards and Technology and the Department of Energy for AI research and development, it is clear there is a growing enthusiasm for the technology in government.

Driving the push to implement AI is the urgent need to address federal employee burnout. A recent study found that almost two-thirds of government employees are experiencing burnout, a much higher rate than seen in the private sector. Furthermore, almost half of respondents are considering leaving their government jobs within the next year due to increased burnout and stress.

One immediate solution to help address this potential crisis is the responsible implementation of artificial intelligence. AI can mitigate the impact of burnout by taking over repetitive, time-consuming tasks and streamlining processes, reducing the overall burden of government work. However, effective AI investments demand more than just funding and technology.

Agencies must balance efforts to scale investment in AI while responding to the unique needs and challenges of the many diverse teams that make up the federal workforce.

Currently, there is a lack of cohesive guidance leading government efforts around AI. While organizations such as NIST have released a basic AI Risk Management Framework (RMF), organizations without any AI experience may struggle to build the necessary foundation for a mature, agile AI posture. To lay the groundwork for a long-term AI strategy while generating short-term gains to support the federal workforce quickly, agencies must consider three guiding components of an AI strategy.

Dedicated funding for AI is only one component of an effective AI strategy. Before implementing new technologies, agencies must start with their existing processes, beginning with their level of data maturity.

If an agency does not have enough historical data to analyze, or the data they do have is not organized, implementing AI can create extra work on the front end for federal workers who could find themselves sorting through inaccurate or incomplete data processed by AI. For example, in response to this challenge, the Department of Defense stood up the Chief Digital and Artificial Intelligence Office (CDAO) to lead the deployment of AI across DoD, including the Department's strategy and policy on data.

Once agencies reach a baseline of data maturity, they can pilot basic AI applications, such as automating routine tasks. These pilots empower agencies to gather high-quality data and provide analysis and insight around that data, supplying the information needed to create a scalable AI roadmap that can integrate with other IT modernization technologies.

But having a roadmap alone is not enough to ensure that AI-driven technologies are useful for the federal workforce.

To implement AI that truly supports federal workers, agencies need to understand the main pain points and challenges facing federal employees. For most private enterprises looking to implement new technology, user experience surveys would be a core part of the pilot program to ensure an analytics-driven understanding of the technology's successes and gaps.

However, although employee input is a crucial part of the AI planning process, government surveys are often expensive and can take months or years to consolidate into actionable data.

One way to combat this difficulty is utilizing existing AI to inform AI investments. For example, instead of sending out a survey whose results may take months to receive, an AI dashboard can provide a real-time view of overall workload, showing which areas need more support, or automate a simple response survey through which employees can provide input.

Using relatively basic AI to evaluate implementation allows agencies to gain insight into the needs of the workforce more sustainably and effectively than surveys, showing IT leaders where to implement AI for the most impact.

Once agencies have an AI baseline and understand worker needs, the last step to implementing employee-focused AI is creating a robust AI-empowered employee experience program.

There are many ways that AI can help agencies with experience management, from automating timesheets to streamlining business decisions. When AI is designed with these improvements in mind, AI's tangible benefits support both broader organizational goals and the humans working to achieve them.

Scaling AI beyond pilot programs remains a challenge. One of the primary responsibilities of the CDAO is to develop processes for AI-enabled capabilities to be developed and fielded at scale across the defense space.

CDAO addressed this issue by selectively scaling only proven AI solutions for enterprise and joint use cases. Prioritizing proven solutions ensures that the AI they are implementing runs smoothly, is easy to use and, most importantly, is familiar to the workforce. As AI solutions become more sophisticated, agencies can continue to expand until they have a fully scalable AI network designed by and for humans.

Contrary to much of the conversation around AI, people are the most essential component of successful artificial intelligence programs. For implementation to be successful, agencies need a human-centered AI mindset. Following these three guidelines to create human-centered AI creates space for the federal workforce to be exponentially more creative, productive and ultimately more effective in furthering agency missions, equipping leaders to elevate the full potential of their teams.

Dr. Allen Badeau is the chief technology officer for Empower AI, as well as the director of the Empower AI Center for Rapid Engagement and Agile Technology Exchange (CREATE) Lab.

This article is an Op-Ed and the opinions expressed are those of the author. If you would like to respond, or have an editorial of your own you would like to submit, please email Federal Times Senior Managing Editor Cary O'Reilly.

Here is the original post:
Artificial intelligence in government is about people, not programming - Federal Times

Of God and Machines – The Atlantic


Miracles can be perplexing at first, and artificial intelligence is a very new miracle. "We're creating God," the former Google Chief Business Officer Mo Gawdat recently told an interviewer. "We're summoning the demon," Elon Musk said a few years ago, in a talk at MIT. In Silicon Valley, good and evil can look much alike, but on the matter of artificial intelligence, the distinction hardly matters. Either way, an encounter with the superhuman is at hand.

Early artificial intelligence was simple: Computers that played checkers or chess, or that could figure out how to shop for groceries. But over the past few years, machine learning, the practice of teaching computers to adapt without explicit instructions, has made staggering advances in the subfield of Natural Language Processing, roughly once every year or so. Even so, the full brunt of the technology has not arrived yet. You might hear about chatbots whose speech is indistinguishable from humans, or about documentary makers re-creating the voice of Anthony Bourdain, or about robots that can compose op-eds. But you probably don't use NLP in your everyday life.

Or rather: If you are using NLP in your everyday life, you might not always know. Unlike search or social media, whose arrivals the general public encountered and discussed and had opinions about, artificial intelligence remains esoteric, every bit as important and transformative as the other great tech disruptions, but more obscure, tucked largely out of view.

Science fiction, and our own imagination, add to the confusion. We just can't help thinking of AI in terms of the technologies depicted in Ex Machina, Her, or Blade Runner: people-machines that remain pure fantasy. Then there's the distortion of Silicon Valley hype, the general fake-it-til-you-make-it atmosphere that gave the world WeWork and Theranos: People who want to sound cutting-edge end up calling any automated process artificial intelligence. And at the bottom of all of this bewilderment sits the mystery inherent to the technology itself, its direct thrust at the unfathomable. The most advanced NLP programs operate at a level that not even the engineers constructing them fully understand.

But the confusion surrounding the miracles of AI doesn't mean that the miracles aren't happening. It just means that they won't look how anybody has imagined them. Arthur C. Clarke famously said that technology sufficiently advanced is indistinguishable from magic. Magic is coming, and it's coming for all of us.

All technology is, in a sense, sorcery. A stone-chiseled ax is superhuman. No arithmetical genius can compete with a pocket calculator. Even the biggest music fan you know probably can't beat Shazam.

But the sorcery of artificial intelligence is different. When you develop a drug, or a new material, you may not understand exactly how it works, but you can isolate what substances you are dealing with, and you can test their effects. Nobody knows the cause-and-effect structure of NLP. That's not a fault of the technology or the engineers. It's inherent to the abyss of deep learning.

I recently started fooling around with Sudowrite, a tool that uses the GPT-3 deep-learning language model to compose predictive text, but at a much more advanced scale than what you might find on your phone or laptop. Quickly, I figured out that I could copy-paste a passage by any writer into the program's input window and the program would continue writing, sensibly and lyrically. I tried Kafka. I tried Shakespeare. I tried some Romantic poets. The machine could write like any of them. In many cases, I could not distinguish between a computer-generated text and an authorial one.
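The paste-in, continue-out workflow described above is easy to reproduce with openly available models. The sketch below uses the Hugging Face transformers library with GPT-2, a small, freely downloadable cousin of the GPT-3 model behind Sudowrite; it illustrates the same mechanism, though not the same output quality, and has nothing to do with Sudowrite's actual implementation.

```python
from transformers import pipeline, set_seed

# GPT-2 stands in here for the far larger GPT-3; the workflow is identical.
generator = pipeline("text-generation", model="gpt2")
set_seed(42)  # make the continuation reproducible

passage = (
    "I wandered lonely as a cloud that floats on high o'er vales and hills, "
    "when all at once I saw a crowd,"
)

# Ask the model to continue the pasted passage.
result = generator(passage, max_new_tokens=60, num_return_sequences=1)
print(result[0]["generated_text"])
```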

A quotation from this story, as interpreted and summarized by Google's OpenAI software.

I was delighted at first, and then I was deflated. I was once a professor of Shakespeare; I had dedicated quite a chunk of my life to studying literary history. My knowledge of style and my ability to mimic it had been hard-earned. Now a computer could do all that, instantly and much better.

A few weeks later, I woke up in the middle of the night with a realization: I had never seen the program use anachronistic words. I left my wife in bed and went to check some of the texts I'd generated against a few cursory etymologies. My bleary-minded hunch was true: If you asked GPT-3 to continue, say, a Wordsworth poem, the computer's vocabulary would never be one moment before or after appropriate usage for the poem's era. This is a skill that no scholar alive has mastered. This computer program was, somehow, expert in hermeneutics: interpretation through grammatical construction and historical context, the struggle to elucidate the nexus of meaning in time.

The details of how this could be are utterly opaque. NLP programs operate based on what technologists call parameters: pieces of information that are derived from enormous data sets of written and spoken speech, and then processed by supercomputers that are worth more than most companies. GPT-3 uses 175 billion parameters. Its interpretive power is far beyond human understanding, far beyond what our little animal brains can comprehend. Machine learning has capacities that are real, but which transcend human understanding: the definition of magic.

This unfathomability poses a spiritual conundrum. But it also poses a philosophical and legal one. In an attempt to regulate AI, the European Union has proposed transparency requirements for all machine-learning algorithms. Eric Schmidt, the ex-CEO of Google, noted that such requirements would effectively end the development of the technology. "The EU's plan requires that the system would be able to explain itself. But machine-learning systems cannot fully explain how they make their decisions," he said at a 2021 summit. You use this technology to think through what you can't; that's the whole point. Inscrutability is an industrial by-product of the process.


My little avenue of literary exploration is my own, and neither particularly central nor relevant to the unfolding power of artificial intelligence (although I can see, off the top of my head, that the tech I used will utterly transform education, journalism, film, advertising, and publishing). NLP has made its first strides into visual arts, too: Dall-E 2 has now created a limitless digital museum of AI-generated images drawn from nothing more than prompts.

Others have headed into deeper waters. Schmidt recently proposed a possible version of our AI future in a conversation with this magazine's executive editor, Adrienne LaFrance: "If you imagine a child born today, you give the child a baby toy or a bear, and that bear is AI-enabled," he said. "And every year the child gets a better toy. Every year the bear gets smarter, and in a decade, the child and the bear who are best friends are watching television and the bear says, 'I don't really like this television show.' And the kid says, 'Yeah, I agree with you.'"

Schmidt's vision does not yet exist. But in late 2020, Microsoft received a patent for chatbots that bring back the dead, using inputs from images, voice data, social media posts, electronic messages, written letters, etc. to create or modify a special index in the theme of the specific person's personality. Soon after, a company called Project December released a version of just such a personality matrix. It created bots such as William, which speaks like Shakespeare, and Samantha, a rather bland female companion. But it also allowed mourners to re-create dead loved ones. An article in the San Francisco Chronicle told the story of Joshua Barbeau, who created a bot of his deceased fiancée, Jessica Pereira. Their conversation started like this:

Joshua: Technically, I'm not really talking to you

Jessica: Huh?

Joshua: Youre a ghost.

Jessica: *looks at herself* How?

Joshua: Magic.

Jessica: I don't like magic. Where am I?

Joshua: I'm not sure. The internet? Where does it feel like you are?

Jessica: *smiles* Everywhere and nowhere. *cups her head with her hands* I am trying to work that out. How are you, honey?

Barbeau's conversation with Jessica continued for several months. His experience of Project December was far from perfect (there were glitches, there was nonsense, the bot's architecture decayed), but Barbeau really felt like he was encountering some kind of emanation of his dead fiancée. The technology, in other words, came to occupy a place formerly reserved for mediums, priests, and con artists. "It may not be the first intelligent machine," Jason Rohrer, the designer of Project December, has said, "but it kind of feels like it's the first machine with a soul."


What we are doing is teaching computers to play every language game that we can identify. We can teach them to talk like Shakespeare, or like the dead. We can teach them to grow up alongside our children. We can certainly teach them to sell products better than we can now. Eventually, we may teach them how to be friends to the friendless, or doctors to those without care.

PaLM, Google's latest foray into NLP, has 540 billion parameters. According to the engineers who built it, it can summarize text, reason through math problems, use logic in a way that's not dissimilar from the way you and I do. These engineers also have no idea why it can do these things. Meanwhile, Google has also developed a system called Player of Games, which can be used with any game at all: games like Go, exercises in pure logic that computers have long been good at, but also games like poker, where each party has different information. This next generation of AI can toggle back and forth between brute computation and human qualities such as coordination, competition, and motivation. It is becoming an idealized solver of all manner of real-world problems previously considered far too complicated for machines: congestion planning, customer service, anything involving people in systems. These are the extremely early green shoots of an entire future tech ecosystem: The technology that contemporary NLP derives from was only published in 2017.

And if AI harnesses the power promised by quantum computing, everything I'm describing here would be the first dulcet breezes of a hurricane. Ersatz humans are going to be one of the least interesting aspects of the new technology. This is not an inhuman intelligence but an inhuman capacity for digital intelligence. An artificial general intelligence will probably look more like a whole series of exponentially improving tools than a single thing. It will be a whole series of increasingly powerful and semi-invisible assistants, a whole series of increasingly powerful and semi-invisible surveillance states, a whole series of increasingly powerful and semi-invisible weapons systems. The world would change; we shouldn't expect it to change in any kind of way that you would recognize.

Our AI future will be weird and sublime and perhaps we won't even notice it happening to us. The paragraph above was composed by GPT-3. I wrote up to "And if AI harnesses the power promised by quantum computing"; machines did the rest.

Technology is moving into realms that were considered, for millennia, divine mysteries. AI is transforming writing and art, the divine mystery of creativity. It is bringing back the dead, the divine mystery of resurrection. It is moving closer to imitations of consciousness, the divine mystery of reason. It is piercing the heart of how language works between people, the divine mystery of ethical relation.

All this is happening at a raw moment in spiritual life. The decline of religion in America is a sociological fact: Religious identification has been in precipitous decline for decades. Silicon Valley has offered two replacements: the theory of the simulation, which postulates that we are all living inside a giant computational matrix, and the theory of the singularity, in which the imminent arrival of a computational consciousness will reconfigure the essence of our humanity.

Like all new faiths, the tech religions cannibalize their predecessors. The simulation is little more than digital Calvinism, with an omnipotent divinity that preordains the future. The singularity is digital messianism, as found in various strains of Judeo-Christian eschatology: a pretty basic onscreen Revelation. Both visions are fundamentally apocalyptic. Stephen Hawking once said that the development of full artificial intelligence "could spell the end of the human race." Experts in AI, even the men and women building it, commonly describe the technology as an existential threat.

But we are shockingly bad at predicting the long-term effects of technology. (Remember when everybody believed that the internet was going to improve the quality of information in the world?) So perhaps, in the case of artificial intelligence, fear is as misplaced as that earlier optimism was.

AI is not the beginning of the world, nor the end. It's a continuation. The imagination tends to be utopian or dystopian, but the future is human, an extension of what we already are. My own experience of using AI has been like standing in a river with two currents running in opposite directions at the same time: Alongside a vertiginous sense of power is a sense of humiliating disillusionment. This is some of the most advanced technology any human being has ever used. But of 415 published AI tools developed to combat COVID with globally shared information and the best resources available, not one was fit for clinical use, a recent study found; basic errors in the training data rendered them useless. In 2015, the image-recognition algorithm used by Google Photos, outside of the intention of its engineers, identified Black people as gorillas. The training sets were monstrously flawed, biased as AI very often is. Artificial intelligence doesn't do what you want it to do. It does what you tell it to do. It doesn't see who you think you are. It sees what you do. The gods of AI demand pure offerings. "Bad data in, bad data out," as they say, and our species contains a great deal of bad data.

Artificial intelligence is returning us, through the most advanced technology, to somewhere primitive, original: an encounter with the permanent incompleteness of consciousness. Religions all have their approaches to magic: transubstantiation for Catholics, the lost temple for the Jews. Even in the most scientific cultures, there is always the beyond. The acropolis in Athens was a fortress of wisdom, a redoubt of knowledge and the power it brings, through agriculture, through military victory, through the control of nature. But if you wanted the inchoate truth, you had to travel the road to Delphi.

A fragment of humanity is about to leap forward massively, and to transform itself massively as it leaps. Another fragment will remain, and look much the same as it always has: thinking meat in an inconceivable universe, hungry for meaning, gripped by fascination. The machines will leap, and the humans will look. They will answer, and we will question. The glory of what they can do will push us closer and closer to the divine. They will do things we never thought possible, and sooner than we think. They will give answers that we ourselves could never have provided. But they will also reveal that our understanding, no matter how great, is always and forever negligible. Our role is not to answer but to question, and to let our questioning run headlong, reckless, into the inarticulate.

View original post here:
Of God and Machines - The Atlantic