Archive for the ‘Alphago’ Category

Taming AI to the benefit of humans – Opinion – Chinadaily.com.cn – China Daily

For decades, artificial intelligence (AI) has captivated humanity as an enigmatic and elusive entity, often depicted in sci-fi films. Will it emerge as a benevolent angel, devotedly serving mankind, or a malevolent demon, poised to seize control and annihilate humanity?

Sci-fi movies featuring AI, such as The Terminator, The Matrix and Blade Runner, have often portrayed evil-minded machines bent on destroying humanity. Experts, including the late British theoretical physicist Stephen Hawking and Tesla CEO Elon Musk, have expressed concern about the potential risks of AI, with Hawking warning that it could lead to the end of the human race. These tech gurus understand the limitations of human intelligence compared with rapidly evolving technologies such as supercomputers, Big Data and cloud computing, and fear that AI will soon become too powerful to control.

In March 2016, AlphaGo, a computer program developed by Google DeepMind, decisively beat Lee Sedol, a 9-dan South Korean professional Go player, 4-1. The match marked the first time a machine had defeated a top professional at Go, widely considered one of the most complex and challenging games in the world, and the victory shattered skepticism about AI's capabilities while instilling a sense of awe and fear in many. In May 2017, AlphaGo went on to beat Ke Jie, China's then top-ranked player, 3-0. The sentiment was further reinforced when "Master," an updated version of AlphaGo, achieved an unprecedented 60-game winning streak, beating dozens of top-notch players from China, South Korea and Japan and driving human players to despair.

These victories sparked widespread interest and debate about the potential of AI and its impact on society. Some saw them as a triumph of human ingenuity and technological progress, while others expressed concern about the implications for employment, privacy and ethics. Overall, AlphaGo's dominance in Go signaled a turning point in the history of AI and became a reminder of the power and potential of this rapidly evolving field.

If AlphaGo was an AI prodigy that impressed humans with its exceptional abilities, then ChatGPT, which made its debut late last year, along with its more powerful successor GPT-4, has left humans both awestruck with admiration and fearful of its potential negative impact.

GPT, or Generative Pre-trained Transformer, is a language-model AI with the ability to generate human-like responses to text prompts, making it seem as though you are having a conversation with a human. GPT-3, a recent version of the model, has 175 billion parameters, making it one of the largest language models of its time. Some have claimed that it has passed the Turing test.
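
To make the mechanics less mysterious, here is a minimal sketch of prompting a GPT-style model in Python. It uses the open-source Hugging Face transformers library and the small, public GPT-2 checkpoint, an earlier and far smaller relative of GPT-3; the prompt and settings are illustrative assumptions, not OpenAI's production system.

```python
# A minimal sketch: prompt a small GPT-style model and let it
# continue the text. GPT-2 here stands in for its much larger
# successors; the prompt is an arbitrary example.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

# The model produces one token at a time, each sampled from a
# probability distribution conditioned on everything so far.
result = generator("Artificial intelligence will", max_new_tokens=30)
print(result[0]["generated_text"])
```

Scaled up by several orders of magnitude in parameters and training data, this same next-token loop is what produces the conversational fluency described above.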

Indisputably, AI has the potential to revolutionize many industries, from healthcare and education to finance, manufacturing and transportation, by providing more accurate diagnoses, reducing accidents and analyzing large amounts of data. It is anticipated that AI's rapid development will bring immeasurable benefits to humans.

Yet, history has shown us that major technological advancements can be a double-edged sword, capable of bringing both benefits and drawbacks. For instance, the discovery of nuclear energy has led to the creation of nuclear weapons, which have caused immense destruction and loss of life. Similarly, the widespread use of social media has revolutionized communication, but it has also led to the spread of misinformation and cyberbullying.

Despite their impressive performance, the latest versions of GPT and its Chinese counterparts, such as Baidu's Wenxin Yiyan, are not entirely reliable or trustworthy, owing to a fatal flaw. When I asked for specific metrical poems by famous ancient Chinese poets, these seemingly omniscient chatbots displayed fake works cobbled together from their databases instead of the authentic ones. Even when I corrected them, they continued to provide incorrect answers without acknowledging their ignorance. Until this flaw is resolved, these chatbots cannot be considered reliable tools.

Furthermore, AI has advanced in image and sound generation through deep learning and neural networks, including the use of GANs for realistic images and videos and text-to-speech algorithms for human-like speech. However, without strict monitoring, these advancements could be abused for criminal purposes, such as deepfake technology for creating convincing videos of people saying or doing things they never did, leading to the spread of false information or defamation.

AI has already been used for criminal purposes. On April 25, the internet security police in Pingliang City, Gansu Province, uncovered an online article claiming that nine people had died in a train collision that morning. Further investigation revealed that the news was entirely false. The perpetrator, a man surnamed Hong, had used ChatGPT and other AI products to generate a large volume of fake news and profit illegally. The AI tools allowed him to quickly search for and edit previously popular news stories, making them appear authentic and facilitating the spread of false information. In this case, AI played a significant role in the commission of the crime.

Due to the potential risks that AI poses to human society, many institutions worldwide have imposed bans or restrictions on GPT usage, citing security risks and plagiarism concerns. Some countries have also requested that GPT meet specific requirements, such as the European Union's proposed regulations that mandate AI systems to be transparent, explainable and subject to human oversight.

China has always prioritized ensuring the safety, reliability and controllability of AI to better empower global sustainable development. In its January 2023 Position Paper on Strengthening Ethical Governance of Artificial Intelligence, China actively advocates for the concepts of "people-oriented" and "AI for good".

In conclusion, while AI is undoubtedly critical to technological and social advancement, it must be tamed to serve humankind as a law-abiding and people-oriented assistant, rather than a deceitful and rebellious troublemaker. Ethics must take precedence, and legislation should establish regulations and accountability mechanisms for AI. An international consensus and concerted action are necessary to prevent AI from endangering human society.

The author is a Shenzhen-based English tutor.

The opinions expressed here are those of the writer and do not necessarily represent the views of China Daily and China Daily website.

Read more from the original source:
Taming AI to the benefit of humans - Opinion - Chinadaily.com.cn - China Daily

To understand AI’s problems look at the shortcuts taken to create it – EastMojo

"A machine can only do whatever we know how to order it to perform," wrote the 19th-century computing pioneer Ada Lovelace. This reassuring statement was made in relation to Charles Babbage's description of the first mechanical computer.

Lady Lovelace could not have known that in 2016, a program called AlphaGo, designed to play and improve at the board game Go, would not only be able to defeat all of its creators, but would do it in ways that they could not explain.

In 2023, the AI chatbot ChatGPT is taking this to another level, holding conversations in multiple languages, solving riddles and even passing legal and medical exams. Our machines are now able to do things that we, their makers, do not know how to order them to do.

This has provoked both excitement and concern about the potential of this technology. Our anxiety comes from not knowing what to expect from these new machines, both in terms of their immediate behaviour and of their future evolution.

We can make some sense of them, and the risks, if we consider that all their successes, and most of their problems, come directly from the particular recipe we are following to create them.

The reason why machines are now able to do things that we, their makers, do not fully understand is because they have become capable of learning from experience. AlphaGo became so good by playing more games of Go than a human could fit into a lifetime. Likewise, no human could read as many books as ChatGPT has absorbed.

It's important to understand that machines have become intelligent without thinking in a human way. This realisation alone can greatly reduce confusion, and therefore anxiety.

Intelligence is not exclusively a human ability, as any biologist will tell you, and our specific brand of it is neither its pinnacle nor its destination. It may be difficult to accept for some, but intelligence has more to do with chickens crossing the road safely than with writing poetry.

In other words, we should not necessarily expect machine intelligence to evolve towards some form of consciousness. Intelligence is the ability to do the right thing in unfamiliar situations, and this can be found in machines, for example those that recommend a new book to a user.

If we want to understand how to handle AI, we can return to a crisis that hit the industry from the late 1980s, when many researchers were still trying to mimic what we thought humans do. For example, they were trying to understand the rules of language or human reasoning, to program them into machines.

That didn't work, so they ended up taking some shortcuts. This move might well turn out to be one of the most consequential decisions in our history.

The first shortcut was to rely on making decisions based on statistical patterns found in data. This removed the need to actually understand the complex phenomena that we wanted the machines to emulate, such as language. The auto-complete feature in your messaging app can guess the next word without understanding your goals.
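
As a concrete illustration of that first shortcut, the toy bigram auto-complete below predicts the next word purely from co-occurrence counts in a tiny corpus. The corpus and words are invented for the example; the point is that nothing in the code represents meaning or goals.

```python
# A toy statistical language model: count which word follows which,
# then "auto-complete" with the most frequent follower.
from collections import Counter, defaultdict

corpus = ("the cat sat on the mat . the cat ate . "
          "the dog sat on the rug .").split()

# Tally how often each word follows each other word (bigram counts).
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def autocomplete(word):
    """Suggest the statistically most likely next word, if any."""
    counts = following[word]
    return counts.most_common(1)[0][0] if counts else None

print(autocomplete("the"))  # 'cat' -- the most frequent follower
print(autocomplete("sat"))  # 'on'
```

Modern language models replace the counts with learned neural-network weights and condition on far more context, but the underlying bet is the same: patterns in data, not hand-coded rules of language.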

While others had similar ideas before, the first to make this method really work, and stick, was probably Frederick Jelinek at IBM, who invented statistical language models, the ancestors of all GPTs, while working on machine translation.

In the early 1990s, he summed up that first shortcut by quipping: "Whenever I fire a linguist, our system's performance goes up." Though the comment may have been said jokingly, it reflected a real-world shift in the focus of AI away from attempts to emulate the rules of language.

This approach rapidly spread to other domains, introducing a new problem: sourcing the data necessary to train statistical algorithms.

Creating the data specifically for training tasks would have been expensive. A second shortcut became necessary: data could be harvested from the web instead.

As for knowing the intent of users, such as in content recommendation systems, a third shortcut was found: to constantly observe users behaviour and infer from it what they might click on.
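
A toy version of that third shortcut might look like the sketch below: rank items purely by how often they were clicked together in (hypothetical) session logs, with no model of what the user actually intends.

```python
# A toy behavioural recommender: infer likely clicks from co-click
# counts alone. The session logs are invented for the example.
from collections import Counter

sessions = [
    ["boots", "socks", "tent"],
    ["boots", "socks"],
    ["tent", "lantern"],
    ["boots", "tent", "lantern"],
]

def recommend(clicked, k=2):
    """Rank items by how often they co-occur with a clicked item."""
    scores = Counter()
    for session in sessions:
        if clicked in session:
            scores.update(item for item in session if item != clicked)
    return [item for item, _ in scores.most_common(k)]

print(recommend("boots"))  # ['socks', 'tent'], from co-click counts
```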

By the end of this process, AI was transformed and a new recipe was born. Today, this method is found in all online translation, recommendations and question-answering tools.

For all its success, this recipe also creates problems. How can we be sure that important decisions are made fairly, when we cannot inspect the machine's inner workings?

How can we stop machines from amassing our personal data, when this is the very fuel that makes them operate? How can a machine be expected to stop harmful content from reaching users, when it is designed to learn what makes people click?

It doesn't help that we have deployed all this in a very influential position at the very centre of our digital infrastructure, and have delegated many important decisions to AI.

For instance, algorithms, rather than human decision makers, dictate what we're shown on social media in real time. In 2022, the coroner who ruled on the tragic death of 14-year-old Molly Russell partly blamed an algorithm for showing harmful material to the child without being asked to.

As these concerns derive from the same shortcuts that made the technology possible, it will be challenging to find good solutions. This is also why the initial decision of the Italian privacy authority to block ChatGPT created alarm.

Initially, the authority raised the issues of personal data being gathered from the web without a legal basis, and of the information provided by the chatbot containing errors. This could have represented a serious challenge to the entire approach, and the fact that it was solved by adding legal disclaimers, or changing the terms and conditions, might be a preview of future regulatory struggles.

We need good laws, not doomsaying. The paradigm of AI shifted long ago, but it was not followed by a corresponding shift in our legislation and culture. That time has now come.

An important conversation has started about what we should want from AI, and this will require the involvement of different types of scholars. Hopefully, it will be based on the technical reality of what we have built, and why, rather than on sci-fi fantasies or doomsday scenarios.

Nello Cristianini, Professor of Artificial Intelligence, University of Bath

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Read the original:
To understand AI's problems look at the shortcuts taken to create it - EastMojo

Terence Tao Leads White House’s Generative AI Working Group … – Pandaily

On May 13, Terence Tao, an award-winning Australia-born Chinese mathematician, announced that he and physicist Laura Greene will co-chair a working group of the President's Council of Advisors on Science and Technology (PCAST) studying the impacts of generative artificial intelligence technology. The group will hold a public meeting during the PCAST conference on May 19, where Demis Hassabis, founder of DeepMind and creator of AlphaGo, and Stanford University professor Fei-Fei Li, among others, will give speeches.

According to Terence Tao's blog, the group will mainly research the impact of generative AI technology on scientific and social fields, covering text-based large language models such as ChatGPT, image generators like DALL-E 2 and Midjourney, and scientific application models for protein design or weather forecasting. It is worth mentioning that Lisa Su, CEO of AMD, and Phil Venables, Chief Information Security Officer of Google Cloud, are also members of this working group.

According to an article posted on the official website of the White House, PCAST develops evidence-based recommendations for the President on matters involving science, technology, and innovation policy, as well as on matters involving scientific and technological information that is needed to inform policy affecting the economy, worker empowerment, education, energy, the environment, public health, national and homeland security, racial equity, and other topics.

SEE ALSO: Mathematician Terence Tao Comments on ChatGPT

After the emergence of ChatGPT, top mathematicians like Terence Tao paid great attention to it and began exploring how artificial intelligence could help them complete their work. In a Nature article titled "How will AI change mathematics? Rise of chatbots highlights discussion", Andrew Granville, a number theorist at the University of Montreal in Canada, said: "We are studying a very specific question: will machines change mathematics?" Mathematician Kevin Buzzard agreed, saying that even Fields Medal winners and other very famous mathematicians are now interested in the field, which shows that it has become popular in an unprecedented way.

Previously, Terence Tao wrote on the decentralized social network Mastodon: "Today was the first day that I could definitively say that #GPT4 has saved me a significant amount of tedious work." In his experimentation, Tao discovered many useful features of the tool, such as searching for formulas, parsing documents with code formatting, rewriting sentences in academic papers and sometimes even semantically searching incomplete math problems to generate hints.

See the original post here:
Terence Tao Leads White House's Generative AI Working Group ... - Pandaily

Why we should be concerned about advanced AI – Epigram

By Gaurav Yadav, Second year, Law

In 1955, four scientists coined the term artificial intelligence (AI) and embarked on a summer research project aimed at developing machines capable of using language, forming abstractions and solving problems typically reserved for humans. Their ultimate goal was to create machines rivalling human intelligence. The past decade has witnessed a remarkable transformation in AI capabilities, but this rapid progress should prompt more caution than enthusiasm.

The foundation of AI lies in machine learning, a process by which machines learn from data without explicit programming. Using vast datasets and statistical methods, algorithms identify patterns and relationships in the data, later using these patterns to make predictions or decisions on previously unseen data. The current paradigm in machine learning involves developing artificial neural networks that mimic the human brain's structure.
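
As a minimal, self-contained illustration of that description, the NumPy sketch below trains a tiny one-hidden-layer neural network on four labelled examples of the XOR pattern, then predicts on noisy inputs it never saw during training. The architecture, learning rate and iteration count are arbitrary choices for the demo.

```python
# Machine learning in miniature: fit a pattern from data, then
# predict on unseen inputs. NumPy only; no ML library required.
import numpy as np

rng = np.random.default_rng(0)

# Training data: the label is 1 exactly when the two inputs differ
# (XOR), a rule we never state explicitly anywhere in the code.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1, b1 = rng.normal(size=(2, 8)), np.zeros(8)   # hidden layer
W2, b2 = rng.normal(size=(8, 1)), np.zeros(1)   # output layer
sigmoid = lambda z: 1 / (1 + np.exp(-z))

for _ in range(5000):                       # plain gradient descent
    h = np.tanh(X @ W1 + b1)                # hidden activations
    p = sigmoid(h @ W2 + b2)                # predicted probabilities
    grad_out = p - y                        # cross-entropy gradient
    grad_h = (grad_out @ W2.T) * (1 - h**2)
    W2 -= 0.1 * h.T @ grad_out;  b2 -= 0.1 * grad_out.sum(axis=0)
    W1 -= 0.1 * X.T @ grad_h;    b1 -= 0.1 * grad_h.sum(axis=0)

# Inputs the network never saw: noisy versions of the pattern.
unseen = np.array([[0.95, 0.05], [0.02, 0.03]])
print(sigmoid(np.tanh(unseen @ W1 + b1) @ W2 + b2).round(2))
# Expected to be close to [[1.], [0.]]
```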

AI systems can be divided into two categories: 'narrow' and 'general'. A 'narrow' AI system excels in specific tasks, such as image recognition or strategic games like chess or Go, whereas artificial general intelligence (AGI) refers to a system proficient across a wide range of tasks, comparable to a human being.

A growing number of people worry that the emergence of advanced AI could lead to an existential crisis. Advanced AI broadly means AI systems capable of performing all the cognitive tasks typically completed by humans; envision, say, an AI system managing a company's operations as its CEO.

Daniel Eth, a former research scholar at the Future of Humanity Institute, University of Oxford, describes the potential outcome for advanced AI as one that could involve a single AGI surpassing human experts in most fields and disciplines. Another possibility entails an ecosystem of specialised AI systems, collectively capable of virtually all cognitive tasks. While researchers may disagree on the necessity of AGI or whether current AI models are approaching advanced capabilities, a general consensus exists that advanced or transformative AI systems are theoretically feasible.

Though certain aspects of this discussion might evoke a science fiction feel, recent AI breakthroughs seem to have blurred the line between fantasy and reality. Notable examples include large language models like GPT-4 and AlphaGo's landmark victory over Lee Sedol. These advancements underscore the potential for transformative AI systems in the future. AI systems can now recognise images, produce videos, excel at StarCraft, and produce text that is indistinguishable from human writing. The state of the art in AI is now a moving target, with AI capabilities advancing year after year.

Why should we be concerned about advanced AI?

If advanced AI is unaligned with human goals, it could pose significant risks for humanity. The 'alignment problem', the problem of aligning the goals of an AI with human objectives, is difficult because of the black-box nature of neural networks: it is incredibly hard to know what is going on inside an AI when it's coming up with outputs. AI systems might develop their own goals that diverge from ours, and such divergences are challenging to detect and counteract.

For instance, a reinforcement learning model (another form of machine learning) controlling a boat in a racing game maximised its score by circling and collecting power-ups rather than finishing the race. Given that its aim was to achieve the highest score possible, it found ways to do so even when they broke our understanding of how the game should be played.
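
The arithmetic behind that behaviour is easy to reproduce. The sketch below uses invented reward numbers, not the actual game's, to show why a score-maximising agent prefers circling for power-ups: under standard discounting, the return from looping simply exceeds the return from finishing.

```python
# A hypothetical illustration of reward misspecification: the
# designer wants the boat to finish, but only the score is rewarded.
GAMMA = 0.95  # per-step discount factor

def discounted_return(rewards, gamma=GAMMA):
    """Sum of gamma**t * r_t over a reward sequence."""
    return sum(gamma**t * r for t, r in enumerate(rewards))

# Policy A: head straight for the finish line (+10 after 3 steps,
# episode over).
finish_race = [0, 0, 0, 10]

# Policy B: circle forever, grabbing a respawning +3 power-up every
# other step (truncated at 200 steps, plenty for the comparison).
circle_powerups = [3 if t % 2 == 0 else 0 for t in range(200)]

print("finish the race: ", discounted_return(finish_race))      # ~8.6
print("circle power-ups:", discounted_return(circle_powerups))  # ~30.8
```

Nothing in the objective says "finish the race", so the degenerate looping policy is exactly the optimal one under this reward.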

It may seem far-fetched to argue that advanced AI systems could pose an existential risk to humanity based on this humorous example. However, if we entertain the idea that AI systems can develop goals misaligned with our intentions, it becomes easier to envision a scenario where an advanced AI system could lead to disastrous consequences for mankind.

Imagine a world where advanced AI systems gain prominence within our economic and political systems, taking control of or being granted authority over companies and institutions. As these systems accumulate power, they may eventually surpass human control, leaving us vulnerable to permanent disempowerment.

What do we do about this?

A growing number of professionals are working in the field of AI safety, focused on solving the alignment problem and ensuring that advanced AI systems do not spiral out of control.

Presently, their efforts encompass various approaches, such as interpretability work, which aims to decipher the inner workings of otherwise opaque AI systems. Another approach involves ensuring that AI systems are truthful with us. A specific branch of this work, known as eliciting latent knowledge, explores the extraction of "knowledge" from AI systems, effectively compelling them to be honest.

At the same time, significant work is being carried out in the realm of AI governance. This includes efforts to minimise the risks associated with advanced AI systems by focusing on policy development and fostering institutional change. Organisations such as the Centre for Governance of AI are actively engaged in projects addressing various aspects of AI governance. By promoting responsible AI research and implementation, these initiatives seek to ensure that advanced AI systems are developed and deployed in ways that align with human values and societal interests.

The field of AI safety remains alarmingly underfunded and understaffed, despite the potential risks of advanced AI systems. Benjamin Hilton estimates that merely 400 people globally are actively working to reduce the likelihood of AI-related existential catastrophes. This figure is strikingly low compared to the vast number of individuals working to advance AI capabilities, which Hilton suggests is approximately 1,000 times greater.

If this has piqued your interest or concern, you might want to consider pursuing a career in AI safety. To explore further, you could read the advice provided by 80,000 Hours, a website that helps students and graduates switch into careers that tackle the world's most pressing problems, or deepen your understanding of the field by enrolling in the AGI Safety Fundamentals course.

Featured image: Generated using DALL-E by OpenAI

Go here to see the original:
Why we should be concerned about advanced AI - Epigram

Purdue President Chiang to grads: Let Boilermakers lead in … – Purdue University

Purdue President Mung Chiang made these remarks during the university's Spring Commencement ceremonies May 12-14.

Opening

Today is not just any graduation but the commencement at a special place called Purdue, with a history that is rich and distinct and an accelerating momentum of excellence at scale. There is nothing more exciting than to see thousands of Boilermakers celebrate a milestone in your lives with those who have supported you. And this commencement has a special meaning to me as my first in the new role serving our university.

President Emeritus Mitch Daniels gave 10 commencement speeches, each an original treatise, throughout the Daniels Decade. I was tempted to simply ask generative AI engines to write this one for me. But I thought it'd be more fun to say a few thematic words by a human for fellow humans before that becomes unfashionable.

AI at Purdue

Sometime back in the mid-20th century, AI was a hot topic for a while. Now it is again; so hot that no computation is too basic to self-anoint as AI and no challenge seems too grand to be out of its reach. But the more you know how tools such as machine learning work, the less mysterious they become.

For the moment, let's assume that AI will finally be transformational to every industry and to everyone: changing how we live, shaping what we believe in, displacing jobs. And disrupting education.

Well, after IBM's Deep Blue beat the world champion, we still play chess. After calculators, children are still taught how to add numbers. Human beings learn and do things not just as survival skills, but also for fun, or as a training of our mind.

That doesnt mean we dont adapt. Once calculators became prevalent, elementary schools pivoted to translating real-world problems into math formulations rather than training for the speed of adding numbers. Once online search became widely available, colleges taught students how to properly cite online sources.

Some have explored banning AI in education. That would be hard to enforce; it's also unhealthy, as students need to function in an AI-infused workplace upon graduation. We would rather Purdue evolve: teaching AI and teaching with AI.

That's why Purdue offers multiple major and minor degrees, fellowships and scholarships in AI and in its applications. Some will be offered as affordable online credentials, so please consider coming back to get another Purdue degree and enjoy more final exams!

And that's why Purdue will explore the best way to use AI in serving our students: to streamline processes and enhance efficiency so that individualized experiences can be offered at scale in West Lafayette. Machines free up human time so that we can do less and watch Netflix on a couch, or we can do more and create more with the time saved.

Pausing AI research is even less practical, not the least because AI is not a well-defined, clearly demarcated area in isolation. All universities and companies around the world would have to stop any research that involves math. My Ph.D. co-advisor, Professor Tom Cover, did groundbreaking work in the 1960s on neural networks and statistics, not realizing those would later become useful in what others call AI. We would rather Purdue advance AI research with nuanced appreciation of the pitfalls, limitations and unintended consequences in its deployment.

That's why Purdue just launched the university-wide Institute of Physical AI. Our faculty are the leaders at the intersection of virtual and physical, where the bytes of AI meet the atoms of what we grow, make and move, from agriculture tech to personalized health care. Some of Purdue's experts develop AI to check and contain AI through privacy-preserving cybersecurity and fake video detection.

Limitations and Limits

As it stands today, AI is good at following rules, not breaking rules; reinforcing patterns, not creating patterns; mimicking what's given, not imagining beyond their combinations. Even individualization algorithms, ironically, work by first grouping many individuals into a small number of similarity classes.

At least for now, the more we advance artificial intelligence, the more we marvel at human intelligence. Deep Blue vs. Kasparov, or AlphaGo vs. Lee, were not fair comparisons: the machines used four orders of magnitude more energy per second! Both the biological mechanisms that generate energy from food and the amount of work we do per joule must be astounding to machines' envy. Can AI be as energy efficient as it is fast? Can it take in energy sources other than electricity? When someday it does, and when combined with sensors and robotics that touch the physical world, you'd have to wonder about the fundamental differences between humans and machines.

Can AI, one day, make AI? And stop AI?

Can AI laugh, cry and dream? Can it contain multitudes and contradictions like Walt Whitman?

Will AI be aware of itself, and will it have a soul, however awareness and souls are defined? Will it also be T.S. Eliot's "infinitely suffering things"?

Where does an AI life start and stop anyway? What constitutes the identity of one AI, and how can it live without having to die? Indeed, if the memory and logic chips sustain and merge, is AI all collectively one life? And if AI duplicates a human's mind and memory, is that human life going to stay on forever, too?

These questions will stay hypothetical until there are breakthroughs more architectural than just compounding silicon chips' speed and feeding exploding data to black-box algorithms.

However, given sufficient time, some of these questions are bound to eventually become real. What then is uniquely human? What would still be artificial about artificial intelligence? Some of that eventuality might, with bumps and twists, show up faster than we had thought. Perhaps in your generation!

Freedoms and Rights

If Boilermakers must face these questions, perhaps it does less harm to consider off switches controlled by individual citizens than a ban by some bureaucracy. May the medicine be no worse than the disease, and regulations by government agencies not be granular or static, for governments don't have a track record of understanding fast-changing technologies, let alone micromanaging them. Some might even argue that government access to data and arbitration of algorithms counts among the most worrisome uses of AI.

What we need are basic guardrails of accountability, in data usage compensation, intellectual property rights and legal liability.

We need skepticism in scrutinizing the dependence of AI engines' output on their input. Data tends to feed on itself, and machines often give humans what we want to see.

We need to preserve dissent even when it's inconvenient, and avoid philosopher kings dressed in AI even when the alternative appears inefficient.

We need entrepreneurs in free markets to invent competing AI systems and independently maximize choices outside the big tech oligopoly. Some of them will invent ways to break big data.

Where, when and how is data collected, stored and used? Like many technologies, AI is born neutral but suffers the natural tendency of being abused, especially in the name of the collective good. Today's most urgent and gravest nightmare of AI is its abuse by authoritarian regimes to irreversibly lock in the Orwellian 1984: the surveillance state oppressing rights, aided and abetted by AI three-quarters of a century after that bleak prophecy.

We need verifiable principles of individual rights, reflecting the Constitution of our country, in the age of data and machines around the globe. For example, MOTA:

My worst fear about AI is that it shrinks individual freedom. Our best hope for AI is that it advances individual freedom. That it presents more options, not more homogeneity. That the freedom to choose and free will still prevail.

Let us preserve the rights that survived other alarming headlines in centuries past.

Let our students sharpen the ability to doubt, debate and dissent.

Let a university, like Purdue, present the vista of intellectual conflicts and the toil of critical thinking.

Closing

Now, about asking AI engines to write this speech. We did ask one to write a commencement speech for the president of Purdue University on the topic of AI, after I finished drafting my own.

I'm probably not intelligent enough, or didn't trust the circular clichés on the web, but what I wrote had almost no overlap with what AI did. I might be biased, but the AI version reads like a B- high school essay: a grammatically correct synthesis with little specificity, originality or humor. It's so toxically generic that even adding a human in the loop to build on it proved futile. It's so boring that you would have fallen asleep even faster than you just did. By the way, you can wake up now: I'm wrapping up at last.

Maybe most commencement speeches and strategic plans sound about the same: Universities have made it too easy for language models! Maybe AI can remind us to try and be a little less boring in what we say and how we think. Maybe bots can murmur, "Don't you ChatGPT me," whenever we're just echoing in an ever smaller and louder echo chamber, down to the templated syntax and tired words. Smarter AI might lead to more interesting humans.

Well, there were a few words of overlap between my draft and AI's. So, here's from both some bytes living in a chip and a human Boilermaker to you all on this 2023 Purdue Spring Commencement: Congratulations, and Boiler Up!

Read this article:
Purdue President Chiang to grads: Let Boilermakers lead in ... - Purdue University