
Glorikian’s New Book Sheds Light on Artificial Intelligence Advances in the Healthcare Field – The Armenian Mirror-Spectator

After describing various ways in which AI and big data are already involved in our daily lives, from the food we eat and the cars we drive to the things we buy, he concludes that they are leading to the Fourth Industrial Revolution, a phrase coined by Klaus Schwab, the head of the World Economic Forum. All aspects of life will be transformed in a way analogous to the prior industrial revolutions (first, the use of steam and water power; second, the expansion of electricity and telegraph cables; and third, the digital revolution at the end of the 20th century).

At the heart of the book are the chapters in which he explains what data and AI have already accomplished for our health and what they can do in the future. The ever-expanding amount of available personal data, combined with advances in AI, allows for increasingly accurate diagnoses and treatments, as well as better sensors and software. Glorikian notes that today there are over 350,000 different healthcare apps, and the mobile health market is expected to approach $290 billion in revenue by 2025.

Glorikian employs a light, informal style of writing, with references to pop culture such as Star Trek. He asks the reader questions and intersperses each chapter with what he calls sidebars: short illustrative stories or sets of examples. For example, “AI Saved My Life: The Watch That Called 911 for a Fallen Cyclist” (p. 68) starts with a man who lost consciousness after falling off his bike, and then lists other ways current phones can save lives. Other sidebars explain basic concepts, like the meaning of genes and DNA, or gene editing with CRISPR.

Present and Future Advances

Before getting into more complex issues, Glorikian describes what may be most familiar to readers: AI-enabled smartphone apps that guide individuals towards optimal diets and exercise, and that allow for group activities through remote communication and virtual reality. There are already countless AI-enabled smartphone apps and sensors allowing us to track our movements and exercise, as well as our diets, sleep and even stress levels. In the future, their approach will become more tailored to individual needs and data, including genomics, environment, lifestyle and molecular biology, with specific recommendations.

He speculates as to what innovations the near future may bring, remarking: “What isn’t clear is just how long it will take us to move from this point of collecting and finding patterns in the data, to one where we (and our healthcare providers) are actively using those patterns to make accurate predictions about our health.” He gives the example of having an app to track migraine headaches, which can find and analyze patterns in the data (do they occur on nights when you have eaten a particular kind of food or traveled on a plane, for example?). Eventually, at a more advanced stage, it might suggest you take an earlier flight or eat in a different restaurant that does not use ingredients that might be migraine triggers for you.
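As a rough illustration of the kind of pattern-finding such an app could do, one might compare how often migraines occur on days with and without each candidate trigger. Everything below (the logs, the trigger names) is hypothetical, not from the book:

```python
# Hypothetical daily logs: candidate triggers plus whether a migraine
# occurred that night. A real app would gather this automatically.
logs = [
    {"triggers": {"red wine", "flight"}, "migraine": True},
    {"triggers": {"red wine"}, "migraine": True},
    {"triggers": {"flight"}, "migraine": False},
    {"triggers": set(), "migraine": False},
    {"triggers": {"red wine"}, "migraine": True},
    {"triggers": set(), "migraine": False},
]

def trigger_rates(logs):
    """For each trigger, compare the migraine rate on days with vs. without it."""
    triggers = set().union(*(d["triggers"] for d in logs))
    rates = {}
    for t in triggers:
        with_t = [d for d in logs if t in d["triggers"]]
        without_t = [d for d in logs if t not in d["triggers"]]
        rate_with = sum(d["migraine"] for d in with_t) / len(with_t)
        rate_without = sum(d["migraine"] for d in without_t) / len(without_t)
        rates[t] = (rate_with, rate_without)
    return rates

print(trigger_rates(logs))
```

A large gap between the two rates for one trigger (here, migraines on every red-wine day but none without) is the kind of pattern the app could flag for its user.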

Healthcare will become more decentralized, Glorikian predicts, with people no longer forced to wait hours in hospital emergency rooms. Instead, some issues can be determined through phone apps and remote specialists, and others can be handled at rapid care facilities or pharmacies. Hospitals themselves will become more efficient with command centers monitoring the usage of various resources and using AI to monitor various aspects of patient health. Telerobotics will allow access to specialized surgeons located in major urban centers even if there are none in the local hospital.

In the chapter on genetics, Glorikian presents three ways in which unlocking the secrets of an individual’s genome can have practical health consequences right now. The first is the prevention of bad drug reactions through pharmacogenomics, or learning how genes affect response to drugs. Second is enhanced screening and preventative treatment for hereditary cancer syndromes. One major advancement just starting to see wider use, notes Glorikian, is the liquid biopsy, in which a blood sample allows identification of tumor cells, as opposed to a standard physical biopsy. It is less invasive and sometimes more accurate for detecting cancers prior to the appearance of symptoms. The third way is DNA sequencing at birth to screen for many disorders which are treatable when caught early. The future may see corrections of various mutations through gene editing.

He points out the various benefits of collecting large sets of data in the health field. For example, it allows the use of AI or machine learning to better read mammogram results and to better predict which patients would benefit from procedures like cardiac resynchronization therapy, or which are at greater risk for cardiovascular disease. There is hope that this approach can help detect the onset and progression of diseases like Alzheimer’s or diabetic retinopathy. Ultimately it may even be able to predict fairly reliably when individuals will die.

At present, AI with access to sufficient data is helping identify new drugs, saving time and money by using statistical models to predict whether the new drugs will work even before trials. AI can also determine which variables or dimensions to remove when building complex models, in order to speed up computation. This is important when there are large numbers of variables and vast amounts of data.
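Removing variables or dimensions to speed up computation is commonly done with techniques such as principal component analysis, which keeps only the directions in the data that carry most of the variance. A small sketch on synthetic data (the dataset and thresholds are invented for illustration, not from the book):

```python
import numpy as np

rng = np.random.default_rng(0)
# Synthetic dataset: 1000 samples, 50 variables, but almost all of the
# variance comes from only 3 underlying dimensions.
latent = rng.normal(size=(1000, 3))
mixing = rng.normal(size=(3, 50))
X = latent @ mixing + 0.01 * rng.normal(size=(1000, 50))

# Principal component analysis via SVD: keep the smallest number of
# components that explain 99% of the variance and discard the rest.
Xc = X - X.mean(axis=0)
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
explained = (s ** 2) / (s ** 2).sum()
k = int(np.searchsorted(np.cumsum(explained), 0.99)) + 1
X_reduced = Xc @ Vt[:k].T

print(k, X_reduced.shape)  # far fewer columns than the original 50
```

Later computations then run on `X_reduced`, whose handful of columns stand in for the original 50 variables at a fraction of the cost.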

Glorikian does not miss the opportunity to use the current Covid-19 crisis as a teaching moment. In a chapter called “Solving the Pandemic Problem,” Glorikian discusses the role AI, machine learning and big data played in the fight against the coronavirus pandemic: spotting it early on, predicting where it might travel next, sequencing its genome in days, and developing diagnostic tests, vaccines and treatments. Vaccine development, like drug development, is much faster today than even 20 years ago, thanks to computational modeling and virtual clinical trials and studies.

Potential Problems

Glorikian does not shy away from raising some of the potential problems associated with the wide use of AI in medicine, such as the threat to patient privacy and ethical questions about what machines should be allowed to do. Should genetic editing be allowed in humans for looks, intelligence or various types of talents? Should AI predictions of lifespan and dates of death be used? What types of decisions should machines be allowed to make in healthcare? And what sort of triage should be allowed in case of limited medical resources (if AI predicts, for example, that one patient is ten times more likely than another to die despite medical intervention)? There are grave dangers if hackers access databanks or medical machines.

There are also potential operational problems with using data as a basis for AI, such as outdated information, biased data, missing data (and how it is handled), misanalyzed or differently analyzed data.

Despite all these issues, Glorikian is optimistic about the value of AI. He concludes, “But despite the risk, for the most part, the benefits outweigh the potential downsides… The data we willingly give up makes our lives better.”

Armenian Connection

When asked at the end of June 2022 how Armenia compares with the US and other parts of the world in the use of AI in healthcare, he made the distinction between the Armenian healthcare system and Armenian technology that is directed at the world healthcare system.

On the one hand, he said, “I don’t know of a lot that is being incorporated into the healthcare system, although we do have a national electronic medical record system that they have really been improving on a consistent basis.” Having such a health record system throughout the country will provide data for the next step in the use of AI, and that, he said, is very exciting.

On the other hand, for technology companies involved in healthcare and biotechnology in Armenia, he said, “I would always like to see more, but there are some really interesting companies that have sprouted up over the last five years.” Also, with the tech giant NVIDIA opening a research center in Armenia, Glorikian said he hoped there will be interesting synergies, since this company does invest in the healthcare area.

Harry Glorikian, second from left, next to Acting Prime Minister Nikol Pashinyan, at a December 19, 2018 Yerevan meeting

At the end of 2018, Glorikian met with then Acting Prime Minister Nikol Pashinyan to discuss launching the Armenian Genome project to expand the scope of genetic studies in the field of healthcare. He said that this undertaking was halted for reasons beyond his understanding. He said, “My lesson learned was you can move a lot faster and have significant impact by focusing on the private sector.”

Indeed, this is what he does, as an individual investor, although he finds investing as a general partner of a fund more impactful. He is also a member of the Angel Investor Club of Armenia. While the group looks at a broad range of companies, mainly technology driven, he and a few other members focus on those involved in healthcare. In fact, he is going to California at the very end of June to learn more about a robot companion for children called Moxie, made by Embodied, Inc., a company founded by veteran roboticist Paolo Pirjanian. Pirjanian, who was a guest on Glorikian’s podcast several weeks ago, lives in California, but Glorikian said that the back end of his company’s work is done in Armenia.

Glorikian added that he is always finding out about or running into Armenians in the diaspora doing work with AI.

Changes

When asked what has changed since the publication of the book last year, he replied, “Things are getting better!” While hardware does not change overnight, he said that there have been incremental improvements to software during the period of time it took to write the book and then have it published. He said, “For someone reading the book now, you are probably saying, ‘I had no idea that this was even available.’ For someone like me, you already feel a little behind.”

Readers of the book have already begun to contact Glorikian with anecdotes about what it led them to find out and do. He hopes the book will continue to reach more people. He said, “The biggest thing I get out of it is when someone says, ‘I learned this and I did something about it.’” When individuals have access to more quantifiable data, not only can they manage their own health better, but they also provide their doctors with more longitudinal data that helps the doctors be more effective. Glorikian said this should have the corollary effect of deflating healthcare costs in the long run.

One minor criticism of the book, at least of the paperback version that fell into the hands of this reviewer, is the poor quality of some of the images used. The text that is part of those illustrations is very hard to read. Otherwise, this is a very accessible read for an audience of varying backgrounds seeking basic information on the ongoing transformations in healthcare through AI.


Building explainability into the components of machine-learning models – MIT News

Explanation methods that help users understand and trust machine-learning models often describe how much certain features used in the model contribute to its prediction. For example, if a model predicts a patient’s risk of developing cardiac disease, a physician might want to know how strongly the patient’s heart rate data influences that prediction.
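One common way such contribution scores are computed is permutation-style importance: shuffle one feature’s values and measure how much the predictions move. The toy model and data below are invented stand-ins for illustration, not the researchers’ method:

```python
import random

def model(heart_rate, age):
    # Toy risk score in which heart rate dominates the prediction.
    return 0.8 * heart_rate + 0.2 * age

# Hypothetical patient records: (heart_rate, age).
data = [(60 + i % 40, 30 + i % 50) for i in range(100)]

def importance(idx):
    """Mean absolute change in prediction when feature `idx` is shuffled."""
    rng = random.Random(0)
    col = [row[idx] for row in data]
    rng.shuffle(col)
    diffs = []
    for v, row in zip(col, data):
        perturbed = (v, row[1]) if idx == 0 else (row[0], v)
        diffs.append(abs(model(*perturbed) - model(*row)))
    return sum(diffs) / len(diffs)

print(importance(0), importance(1))  # heart rate should score higher
```

The larger score for the heart-rate column is exactly the kind of “this feature influences the prediction strongly” statement a physician would be shown.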

But if those features are so complex or convoluted that the user can’t understand them, does the explanation method do any good?

MIT researchers are striving to improve the interpretability of features so decision makers will be more comfortable using the outputs of machine-learning models. Drawing on years of field work, they developed a taxonomy to help developers craft features that will be easier for their target audience to understand.

“We found that out in the real world, even though we were using state-of-the-art ways of explaining machine-learning models, there is still a lot of confusion stemming from the features, not from the model itself,” says Alexandra Zytek, an electrical engineering and computer science PhD student and lead author of a paper introducing the taxonomy.

To build the taxonomy, the researchers defined properties that make features interpretable for five types of users, from artificial intelligence experts to the people affected by a machine-learning model’s prediction. They also offer instructions for how model creators can transform features into formats that will be easier for a layperson to comprehend.

They hope their work will inspire model builders to consider using interpretable features from the beginning of the development process, rather than trying to work backward and focus on explainability after the fact.

MIT co-authors include Dongyu Liu, a postdoc; visiting professor Laure Berti-Équille, research director at IRD; and senior author Kalyan Veeramachaneni, principal research scientist in the Laboratory for Information and Decision Systems (LIDS) and leader of the Data to AI group. They are joined by Ignacio Arnaldo, a principal data scientist at Corelight. The research is published in the June edition of the Association for Computing Machinery Special Interest Group on Knowledge Discovery and Data Mining’s peer-reviewed Explorations Newsletter.

Real-world lessons

Features are input variables that are fed to machine-learning models; they are usually drawn from the columns in a dataset. Data scientists typically select and handcraft features for the model, and they mainly focus on ensuring features are developed to improve model accuracy, not on whether a decision-maker can understand them, Veeramachaneni explains.

For several years, he and his team have worked with decision makers to identify machine-learning usability challenges. These domain experts, most of whom lack machine-learning knowledge, often don’t trust models because they don’t understand the features that influence predictions.

For one project, they partnered with clinicians in a hospital ICU who used machine learning to predict the risk that a patient will face complications after cardiac surgery. Some features were presented as aggregated values, like the trend of a patient’s heart rate over time. While features coded this way were “model ready” (the model could process the data), clinicians didn’t understand how they were computed. “They would rather see how these aggregated features relate to original values, so they could identify anomalies in a patient’s heart rate,” Liu says.

By contrast, a group of learning scientists preferred features that were aggregated. Instead of having a feature like “number of posts a student made on discussion forums,” they would rather have related features grouped together and labeled with terms they understood, like “participation.”

“With interpretability, one size doesn’t fit all. When you go from area to area, there are different needs. And interpretability itself has many levels,” Veeramachaneni says.

The idea that one size doesn’t fit all is key to the researchers’ taxonomy. They define properties that can make features more or less interpretable for different decision makers and outline which properties are likely most important to specific users.

For instance, machine-learning developers might focus on having features that are compatible with the model and predictive, meaning they are expected to improve the model’s performance.

On the other hand, decision makers with no machine-learning experience might be better served by features that are “human-worded,” meaning they are described in a way that is natural for users, and “understandable,” meaning they refer to real-world metrics users can reason about.

“The taxonomy says, if you are making interpretable features, to what level are they interpretable? You may not need all levels, depending on the type of domain experts you are working with,” Zytek says.

Putting interpretability first

The researchers also outline feature engineering techniques a developer can employ to make features more interpretable for a specific audience.

Feature engineering is a process in which data scientists transform data into a format machine-learning models can process, using techniques like aggregating data or normalizing values. Most models also can’t process categorical data unless it is converted to a numerical code. These transformations are often nearly impossible for laypeople to unpack.

Creating interpretable features might involve “undoing some of that encoding,” Zytek says. For instance, a common feature engineering technique organizes spans of data so they all contain the same number of years. To make these features more interpretable, one could group age ranges using human terms, like “infant,” “toddler,” “child,” and “teen.” Or rather than using a transformed feature like average pulse rate, an interpretable feature might simply be the actual pulse rate data, Liu adds.
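That age-binning idea can be sketched directly. The cutoffs below are illustrative assumptions, not from the paper:

```python
def age_to_label(age):
    """Map a numeric age onto human-readable bins (hypothetical cutoffs)."""
    if age < 1:
        return "infant"
    elif age < 3:
        return "toddler"
    elif age < 13:
        return "child"
    elif age < 20:
        return "teen"
    return "adult"

# Model-ready form: raw numeric ages a model can consume.
# Interpretable form: the same ages expressed in everyday terms.
ages = [0.5, 2, 8, 16, 35]
print([age_to_label(a) for a in ages])
# → ['infant', 'toddler', 'child', 'teen', 'adult']
```

A decision maker reading “teen” can reason about the feature immediately, where an equal-width numeric bin like “13.0–19.9” would demand extra mental translation.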

“In a lot of domains, the tradeoff between interpretable features and model accuracy is actually very small. When we were working with child welfare screeners, for example, we retrained the model using only features that met our definitions for interpretability, and the performance decrease was almost negligible,” Zytek says.

Building off this work, the researchers are developing a system that enables a model developer to handle complicated feature transformations in a more efficient manner, to create human-centered explanations for machine-learning models. This new system will also convert algorithms designed to explain model-ready datasets into formats that can be understood by decision makers.


Arm Cortex microprocessor for artificial intelligence (AI), imaging, and audio introduced by Microchip – Military & Aerospace Electronics

CHANDLER, Ariz. - Microchip Technology Inc. in Chandler, Ariz., is introducing the SAMA7G54, an Arm Cortex-A7-based microprocessor that runs as fast as 1 GHz for low-power stereo vision applications with accurate depth perception.

The SAMA7G54 includes a MIPI CSI-2 camera interface and a traditional parallel camera interface for high-performing yet low-power artificial intelligence (AI) solutions that can be deployed at the edge, where power consumption is at a premium.

AI solutions often require advanced imaging and audio capabilities which typically are found only on multi-core microprocessors that also consume much more power.

When coupled with Microchip's MCP16502 Power Management IC (PMIC), this microprocessor enables embedded designers to fine-tune their applications for best power consumption vs. performance, while also optimizing for low overall system cost.


The MCP16502 is supported by Microchip's mainline Linux distribution for the SAMA7G54, allowing for easy entry and exit from available low-power modes, as well as support for dynamic voltage and frequency scaling.

For audio applications, the device has audio features such as four I2S digital audio ports, an eight-microphone array interface, an S/PDIF transmitter and receiver, as well as a stereo four-channel audio sample rate converter. It has several microphone inputs for source localization for smart speaker or video conferencing systems.

The SAMA7G54 also integrates Arm TrustZone technology with secure boot, and secure key storage and cryptography with acceleration. The SAMA7G54-EK Evaluation Kit (CPN: EV21H18A) features connectors and expansion headers for easy customization and quick access to embedded features.

For more information contact Microchip online at http://www.microchipdirect.com.


What’s Your Future of Work Path With Artificial Intelligence? – CMSWire

What does the future of artificial intelligence in the workplace look like for employee experience?

Over the last few years, artificial intelligence (AI) has become a very significant part of business operations across all industries. It’s already making an impact on our daily lives, from appliances, voice assistants, search, surveillance, marketing, autonomous vehicles, video games, and TVs to large sporting events.

AI is the result of applying cognitive science techniques to emulate human intellect: artificially creating something that performs tasks once thought to require humans, like reasoning, natural communication and problem-solving. It does this by leveraging machine learning techniques, reading and analyzing large data sets to identify patterns, detect anomalies and make decisions with no human intervention.

In this ever-evolving market, AI has become crucial for businesses looking to upgrade workplace infrastructure and improve employee experience. According to Precedence Research, the AI market is projected to surpass $1,597.1 billion by 2030, expanding at a CAGR of 38.1% from 2022 to 2030.
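For reference, CAGR figures like this follow the compound-growth formula value_end = value_start × (1 + r)^years. The quick check below infers the 2022 base implied by the article’s numbers; that base value is our own inference, not stated in the article:

```python
# Compound annual growth rate: value_end = value_start * (1 + r) ** years.
end_value = 1597.1          # $ billions, projected for 2030
r = 0.381                   # 38.1% CAGR
years = 2030 - 2022

# Implied 2022 market size (an inference from the quoted figures).
implied_2022_base = end_value / (1 + r) ** years
print(round(implied_2022_base, 1))
```

A roughly $120-billion 2022 base growing 38.1% per year for eight years compounds to the quoted $1,597.1 billion, which is how such decade-out projections are typically constructed.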

Currently, AI is being used in the workplace to automate jobs that are repetitive or require a high degree of precision, like data entry or analysis. AI can also be used to make predictions about customer behavior or market trends.

In the future, AI is expected to increasingly be used to augment human workers, providing them with recommendations or suggestions based on the data that it has been programmed to analyze.

Today’s websites are capable of using AI to detect potential customer intent in real time, based on an online visitor’s interactions, and to show more engaging, personalized content that improves the odds of converting customers. As AI continues to develop, its capabilities in the workplace are expected to grow, making it an essential tool for businesses looking to stay ahead of the competition.

Kai-Fu Lee, a well-known computer scientist, businessman and writer, said in a 2019 interview with CBS News that he believes 40% of the world’s jobs will be replaced by robots capable of automating tasks.

AI has the potential to replace many types of jobs that involve mechanical or structured tasks that are repetitive in nature. Some applications we are seeing now include robotic vehicles, drones, surgical devices, logistics, call centers, and administrative tasks like housekeeping, data entry and proofreading. Even armies of robots for security and defense are being discussed.

That said, AI is going to be a huge disruption worldwide over the next decade or so. Most innovations come from disruptions; take the COVID-19 pandemic as an example: it dramatically changed how we work now.

While AI takes some jobs, it also creates many opportunities. When it comes to strategic thinking, creativity, emotions and empathy, humans will always win over machines. This is a call to adapt to the change and to strengthen the human factors in the workplace in all possible dimensions. Nokia and BlackBerry mobile phones and Kodak cameras are living examples of failing by not acknowledging digital disruption. Timely market research, using the right technology and enabling the workforce to adapt to change can bring success to businesses through digital transformation.


There will be changes in the traditional means of doing things, and more jobs will be generated. AI has the potential to revolutionize the workplace, transforming how we do everything from customer service to driving cars in some of the busiest places, like downtown San Francisco. However, several challenges still need to be overcome before AI can be widely implemented in the workplace.

One of the biggest challenges is developing algorithms that can reliably replicate human tasks. This is difficult because human tasks often involve common sense and reasoning, which are hard for computers to master. We must also ensure that AI systems are fair and unbiased. This is important because AI systems are often used to make decisions about things like hiring and promotions, and if they are biased, this can lead to discrimination. We live in a world of diversity, equity and inclusion (DEI), and mistakes with AI can be costly for businesses. It may take a very long time to develop a customer-centric model that is completely dependent on AI, one that is reliable and trustworthy.

The future of AI is hard to predict, but there are a few key trends that are likely to shape its development. The increasing availability of data will allow AI systems to become more accurate and efficient, and as businesses and individuals rely on AI more and more, a need for new types of AI applications means more work and jobs. As these trends continue, AI is likely to have a significant impact on the workforce. It can very well lead to the automation of many cognitive tasks, including those that are currently performed by human workers.

This could result in a reduction in the overall demand for labor, as well as an increase in the need for workers with skills that complement AI systems. AI is the future of work; there’s no doubt about that, but how it will shape the future of the human workforce remains to be seen.

Many are worried that AI will remove many jobs, while others see it as an opportunity to increase efficiency and accuracy in the workforce. No matter which side you're on, it's important to understand how AI is changing the way we work and what that means for the future.


Let's look at a few real-world examples that are already changing the way we work:

All of the above implementations look great. However, it is important to note that AI should be used as a supplement to human intelligence, not a replacement for it. When used properly, AI can help businesses thrive. The role of AI in the workplace is ever evolving, and it will be interesting to see how businesses adopt these technologies and improve the overall work environment to provide the best employee experience.

An October 2020 Gallup poll found that 51% of workers are not engaged: they are psychologically unattached to their work and company.

Here are some employee experience aspects that AI could improve:

Employees need to know and trust that you have their best interests in mind. The value of AI in human resources is going to be critical to deliver employee experiences along with human connection and values.


Does Artificial Intelligence Really Have the Potential to Create Transformative Art? – Literary Hub

I. The Situation

In 1896, the Lumière brothers released a 50-second-long film, The Arrival of a Train at La Ciotat, and a myth was born. Audiences, it was reported, were so entranced by the new illusion that they jumped out of the way as the flickering image steamed towards them.

The urban legend of film-induced mass panic, established well before 1900, illustrated a valid contention even if the story was, in fact, untrue: the technology had produced a new emotional reaction. That reaction was hugely powerful but inchoate and inarticulate. Nobody knew what it was doing or where it would go. Nobody had any idea that it would turn into what we call film. Today, the world is in a similar state of bountiful confusion over the creative use of artificial intelligence.

Already the power of the new technology is evident to everyone who has managed to use it. Artificial intelligence can recreate the speaking voice of dead persons. It can produce images from instructions. It can fill in the missing passages from damaged texts. It can imitate any and all literary styles. It can convert any given authorial corpus into logarithmic probability. It can create characters that speak in unpredictable but convincing ways. It can write basic newspaper articles. It can compose adequate melodies. But what any of this means, or to what uses these new abilities will ultimately be turned, are as yet unclear.

There is some fascinating creative work emerging from this primordial ooze of nascent natural language processing (NLP). Vauhini Vara’s GPT-based requiem for her sister and the poetry of Sasha Stiles are experiments in the avant-garde tradition. (My own NLP work falls into this category as well, including the short story this essay accompanies.)

Then there are attempts to use AI in more popular media. AI Dungeon, an infinitely generated text adventure driven by deep learning, explores the gaming possibilities. Perhaps the most exciting format for NLP is bot-character generation. Project December allows its users to recreate dead people, to have conversations with them. But there’s no need for these generated voices to be based on actual human beings. Lucas Rizzotto concocted a childhood imaginary friend, Magnetron, which existed inside his family’s microwave, out of OpenAI and a hundred-page backstory.

These early attempts to find spheres of expression for the new technology are dynamic and exciting, but they remain marginal. This work has not yet resonated with the public, nor has it solidified into coherent practice.

The scattered few of us who use this technology feel its eerie power. The encounter with deep learning is simultaneously ultramodern and ancient, manufacturing an unsettling impression of being recognized by a machine, or of having access, through machines, to a vast human pattern, even a collective unconscious or noosphere. But that sensation has not yet been communicated to audiences. They don’t participate in it. They see only the results, the words on the page, which are little more than aftereffects.

The literary world tends to engage creative technology with either petulant resistance or slavish adulation. Neither are particularly useful. A novel about social media is still considered surprisingly innovative, and even the smartphone rarely makes an appearance in literary fiction.

Recent novels about artificial intelligence, such as Klara and the Sun by Kazuo Ishiguro or Machines Like Me by Ian McEwan, have absolutely nothing to do with actual artificial intelligence as it currently exists or will exist in the foreseeable future. They are, frankly, embarrassingly lazy on the subject.

Meanwhile, the hacker aesthetic has had its basic fraud exposed: it fantasized technologists as rebel outsiders, poised to make the world a better place, as a cover for monopolists who need excuses to justify their hunger for total impunity.

Both the resistance and the adulation are stupid, and so we find ourselves toxically ill-prepared for the moment we are facing: the intrusion of technology into the creative process. The machines are no longer lurking on the periphery; they are entering the temple, piercing the creative act itself.

The Lumière brothers produced roughly 1,400 minute-length films, or “views” as they were called at the time, but nobody could see what these views would blossom into: A Trip to the Moon, and The Birth of a Nation, and Citizen Kane, and Vertigo, and Apocalypse Now. Creative AI is not a new technique. It is an entirely new artistic medium. It needs to be developed as such. The question facing the small band of creators using artificial intelligence today is how we get from The Arrival of a Train at La Ciotat to Citizen Kane.

II. The Direction

One thing is certain: Nobody needs machines to make shitty poetry. Humans make quite enough of that already. The blossoming of AI art into its unique and particular reality will demand a unique and particular practice, one that sheds traditional categories of art as they currently exist and engages audiences in ways they have never been engaged before.

One potential danger, at least in the short term, is that the technology is advancing so quickly it is unclear whether any artistic practice that emerges from it will have time to mature before it becomes obsolete.

Every example of creative AI I have listed above uses GPT-3 (Generative Pre-trained Transformer 3). But Google recently released its own Transformer-based large language model, PaLM, which promises low-level reasoning functions. What does that mean? What can be built from that new function? Art requires technical mastery, and also conscious transcendence of technical mastery. Even keeping up with the latest AI developments, never mind getting access to the tech, is a full-time job. And art that does nothing more than show off the power of a machine isn't doing its job.

Then there is the question of whether anyone wants computer-generated art. One of the somewhat confounding aspects of the internet generally is that it is hugely creative but fundamentally resistant to art, or at least to anything that identifies itself as art. TikTok has turned into a venue of explosive creativity, but there is no Martin Scorsese of TikTok, nor could there ever be. Internet-specific genres, like Vine, are inherently ephemeral and impersonal. They aren't art forms so much as widespread crafting activities, like Victorian-era collages, or Japanese chigiri-e, or Ukrainian pysanky.

When people want to read consciously made, individually controlled language, they tend to pick up physically printed books, as ridiculous as that sounds. Creators follow the audiences. The top ten novels published this year are not fundamentally different, in their modes of composition, dissemination and consumption, from the novels of the 1950s.

But the resistance creative AI faces, both from artists and from audiences, is a sign of the power and potential of the new medium. The most exciting promise of creative AI is that it runs in complete opposition to the overarching value that defines contemporary art: Identity. The practice itself removes identity from the equation.

Since so few people have used this technology, I'm afraid I'll have to use the short story that accompanies this essay as an example, although, to be clear, many people are using this tech in completely different ways and my own approach is representative of nothing but my own fascinations and capacities.

A few months ago, I received access to the product of a Canadian AI company called Cohere, which allows for sophisticated, nimble manipulations of Natural Language Processing. Through Cohere, I was able to create algorithms derived from various styles. These included Thomas Browne, Eileen Chang, Dickens, Shakespeare, Chekhov, Hemingway and others, including anthologies of love stories and Chinese nature poetry.

I then took those algorithms and had them write sentences and paragraphs for me on selected themes: a marketplace, love at first sight, a life played out after falling in love. The ones I liked I kept. The ones I didn't I threw out. Then I took the passages those algorithms had provided and fed them into Sudowrite, the stochastic writing tool. Sudowrite generated texts on the basis of the prompts the other algorithms had generated.
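For readers who think in code, the practice described above can be sketched as a simple two-stage pipeline: style-conditioned generation, hand curation, then stochastic expansion. The Python below is only a minimal illustration; the functions `style_generate`, `expand`, and `curate` are hypothetical stand-ins, not the actual Cohere or Sudowrite APIs, and the keep/discard predicate here is arbitrary where mine was a matter of taste.

```python
def style_generate(style: str, theme: str) -> str:
    """Stand-in for a style-conditioned language model call (e.g. via Cohere)."""
    return f"[{style}] a passage about {theme}"

def expand(prompt: str) -> str:
    """Stand-in for a stochastic expansion tool such as Sudowrite."""
    return prompt + " ...expanded into a longer passage"

def curate(candidates, keep):
    """The human-in-the-loop step: keep only the passages the author likes."""
    return [c for c in candidates if keep(c)]

styles = ["Chekhov", "Eileen Chang", "Hemingway"]
themes = ["a marketplace", "love at first sight"]

# Stage 1: generate candidate passages in each style, on each theme.
candidates = [style_generate(s, t) for s in styles for t in themes]

# Curate by hand (here: an arbitrary predicate, for illustration only),
# then feed the survivors to the expansion tool as prompts.
kept = curate(candidates, keep=lambda c: "marketplace" in c)

# Stage 2: expand the curated prompts into longer drafts.
drafts = [expand(c) for c in kept]

for d in drafts:
    print(d)
```

The essential point the sketch captures is that the machine never runs unattended: curation sits between every generation step, so the result is shaped by taste even though no sentence is written by hand.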

To generate "Autotuned Love Story" I had to develop a separate artistic practice around the technology. I'm not proposing my practice as a model; in fact, now that I've done it, I don't see why anyone else would do what I've done. My point is that what I created here, and how I created it, is distinct from traditional artistic creation.

The love story below is my attempt to develop an idealized love story out of all the love stories that I have admired. It exists on the line between art and criticism. "Autotuned Love Story" certainly isn't mine. I built it, but it's not my love story. It's the love story of the machines interacting with all the love stories I have loved. I confess that I find it eerie; there is something true and moving in it that I recognize but which I also can't place.

Creative AI is not an expression of a self. Rather it is the permutation and recombination and reframing of other identities. It is not, nor will it be, nor can it be, a representation of a generation or a race or a time. It is not a voice. Whatever voice is, it is the opposite. The process of using creative AI is literally derivative. The power of creative AI is its strange mixture of human and other. The revelation of the medium will be the exploitation of that fact.

Because creative AI is not self-expression, its development will be different from other media. On that basis, two propositions:

Artists should not use artificial intelligence to make art that people could make otherwise.

The display of technology cannot be the purpose of the art.

Creative AI should, above all, be itself and not something else. And second, it should allow users to forget that it's artificial intelligence altogether. Otherwise it will be little more than advertising for the tech, or an alibi for the artist.

Fortunately, there is a predecessor that can serve as a model, and which follows the two directions above: Hip hop. Hip hop was an art form determined, from its inception, by technological innovation. Kool Herc invented the two-turntable setup that allowed the isolation of the break, and Grandmaster Flash developed backspin, punch phrasing, and scratching. These developments required enormous technical facility but also a concentration on effects. The artists shaped the tech in response to audience reactions.

Hip hop also demanded an entirely new musicality to maximize the effects of the innovation. Building beats and sampling required a comprehensive musical knowledge. The best DJs had the widest access to music of all kinds, and were each, in a sense, archivists. They engaged in raids on the past, using history for their own purposes.

Just as hip hop artists developed a consummate familiarity with earlier forms of popular music, the artists of artificial intelligence who use large language models will need to understand the history of the sentence and the development of literary style in all forms and across all genres. Linguistic AI will demand the skills of close reading and a historical breadth as the basic terms of creation.

And when we look at the bad AI art available now, the failings of the art are almost never technical. It's usually a failure to possess deep knowledge, or sometimes any knowledge, of narrative technique or poesis.

In its early years, hip hop had a defiance and a focus on effect that AI art should aspire to. Its artists showed a willingness and capacity to create and abandon values. They did not worship their instruments. They concentrated on the results, and that spirit largely survives. A good question to ask as a rough guide to the creative direction of AI art: What would Ye do? WWYD?

III. The Stakes

Creative AI promises more powerful illusions and more all-consuming worlds. Eric Schmidt, at The Atlantic, recently offered an example of the future awaiting us:

If you imagine a child born today, you give the child a baby toy or a bear, and that bear is AI-enabled. And every year the child gets a better toy. Every year the bear gets smarter, and in a decade, the child and the bear who are best friends are watching television and the bear says, "I don't really like this television show." And the kid says, "Yeah, I agree with you."

Despite this terrifying promise, AI art will probably remain small and marginal in the short term, just as film was for several decades after its birth.

The development of creative AI is much, much more important than how cool the new short stories or interactive games can be. For one thing, artistic practice may serve as a desperately needed bridge between artificial intelligence and the humanities. As it stands, those who understand literature and history dont understand the technology that is about to transform the framework of language, and those who are building the technology that is revolutionizing language dont understand literature or history.

Also, the political uses of artificial intelligence will follow creative practices. That's certainly what happened with film. A few decades after The Arrival of a Train at La Ciotat, Lenin was using film as the primary propaganda method of the Soviet Union, and the proto-fascist Gabriele D'Annunzio filmed his triumphal entrance into the city of Fiume. Whatever forms creative AI takes will, almost immediately, be used to manipulate and control mass audiences.

Creative AI is a confrontation with the fact that an unknown number of aspects of art, so vital to our sense of human freedom, can be reduced to algorithms, to a series of external instructions. Moravec's paradox, that the more complex and high-level a task, the easier it is to compute, is fully at play. Capacities requiring a lifetime of dedication to master, like a personal literary style, can simply be programmed. The basic things remain mysteries. What makes an image powerful? What makes a story compelling? The computers have no answers to these questions.

There is a line thrusting through the world and ourselves dividing what is computable from what is not. It is a line driving straight into the heart of the mystery of humanity. AI art will ride on this line.

_________________________________________________

[This story was generated by means of natural language processing, using Cohere AI and Sudowrite accessing GPT-3.]

The rain in the market smelled like rusting metal and wet stones. The stallholders had no real need to sell nor did they care much for their customers. There was a cookery demonstration. There was a magician. There was a video games stall. There was a beauty parlour. The rain was like a mist at first, fine and barely noticeable, but not long after the streets were flowing with a torrent of mud and water.

Among huddles of people, they met in a stall that sold umbrellas. The eyes of one were large and green, soft and milky. The other's eyes were like iced coffee.

Shyness came upon them at once. Shyness and fear. A butcher's boy, with a beautiful nose, stood beside a post, making grimaces at a plan that was chalked out on the top of it. A ragged little boy, barefooted, and with his face smeared with blood, from having just grazed his nose against the corner of a post, began playing at marbles with other boys of his own size. Their smiles were interminable, wavering and forgetful, and it seemed as though they could not control their lips, that they smiled against their will while they thought of something else.

"Alone?"

"Yes."

The rain became like a dirty great mop being wrung out above their heads. The market became more uneasy, and gave place to a sea of noises that on both sides added to the general clamour. The crowd began to press in on them, to snatch at their coats, to groan, to criticize and to complain of cold and hunger, of want of clean clothes, of lack of decent shelter. The rain was unremitting, just like the flow of people, the flow of traffic, the flow of tired animals. The crowd erupted and all at once it seemed that there were too many people.

When the crowd closed up again, the two were separated from one another. The rain died down and the market was now very different. They looked for each other like lost children in a train station. It was a different kind of a market, darker, older, dingier, more chaotic. The pavement was covered with mud and mire and straw and dung.

They met by accident, which is only a way of saying that we have not looked for something before it comes forward, that they were both in the world and the world is small.

*

They never met again, or maybe they did.

Maybe, at first, they had the same delight in touching, in meeting, in forming, in blurring, in drawing out. They had secrets, and they shared those secrets. As one's hands rolled over the other, they lay as still as fish. It seemed to both of them that they could not live in the old way; they could not go on living as though there were nothing new in their lives. They had to settle down together somewhere, to live for themselves, alone, to have their own home, where they would be their own masters. They went abroad, changed their lives. One was a manager of a railway branch line. The other became a teacher in a school. And the large study in which they spent their evenings was so full of pictures and flowers that it was difficult to move about without upsetting something. Pictures of all sorts, landscapes in water-colour, engravings after the old masters, and the albums filled with the photographs of relatives, friends, and children, were scattered everywhere about the bookcases, on the tables, on the chairs. Love is like money: the kind you have and do not want to lose, the kind you lose and treasure. The thought of death, which had moved them so profoundly, no longer caused in either the former fear and remorse, a sound that lost its echo in the endless, sad retreat, a phantom of caresses down hallways empty and forsaken.

Maybe they lived that life. Maybe they didn't. But in the market, among the detritus, the splintered edges, they had once found each other, and found each other and lost each other again. They had said only that, yes, they were alone.

The rain had smelled like sodden horses and rusting metal and wet stones.

Read more:
Does Artificial Intelligence Really Have the Potential to Create Transformative Art? - Literary Hub