Archive for the ‘Artificial Intelligence’ Category

If artificial intelligence is intelligent, why is it artificial? – Arab News

When William Shakespeare titled his play Twelfth Night, he also offered up the alternative title of What You Will. Perhaps the initial title appeared too opaque or confusing? Humanity's latest play has been given the title artificial intelligence, but I suggest that we clear up some of the confusion and call it What We Will.

With the appearance of this new technology and its rapidly expanding powers, we are rushing to try to understand what it is we are unleashing. We are well aware it could have a tremendous impact on our society, but this latest discovery does not come from a Michelangelo, a Beethoven or an Einstein, the last of whom could summarize his new understanding of the universe in one short, simple phrase. Instead, this discovery has emerged from a collection of young, perhaps brilliant, minds, none of whom fully understand what it is we are dealing with or where we are being led.

I cannot help but dwell on this new phenomenon being labeled both intelligent and artificial, with each of these adjectives depending on the other in a somewhat confusing way. An intelligence that is artificial is essentially reliant on humans to spread it and, eventually, to hide or disguise its artificiality. To compete with our own minds and intelligence, it has to be fully released into the world but also adopted by humans, knowingly or unknowingly. It is a novel entry to an already complex world.

Having been derived from what we call deep machine learning, artificial intelligence is able to digest immense quantities of information and make connections we have not yet made. As such, it could offer us interesting new concepts, identify patterns we may have missed, and make a fine assistant for some of our tasks and decision-making. Artificial intelligence could exponentially accelerate research into cures for cancer and other such pioneering applications. It will also help us to automate tasks, while reducing human error, particularly when it comes to repetitive tasks.


However, artificial intelligence has already started a new technological arms race between world powers, all scrambling to develop its most advanced potential military applications. It is not at all implausible that artificial intelligence could, in the relatively near future, direct wars by identifying targets and dispatching drones, and even develop strategy and rapid countermoves, just as computers today can beat the world's greatest chess grandmasters with ease.

This arms race is not unlike the nuclear arms race, and its consequences could be equally damaging, leading to a new cold war of wits. What is most confusing about artificial intelligence today is that it is still a guessing game. We know that a massive wave is heading our way, but we do not yet know where, when or how big it will be. As Yuval Noah Harari, author of Sapiens, wrote in The Economist last week, we urgently need to regulate AI and new technologies. "We need an equivalent of the Food and Drug Administration for new technology, and we need it yesterday," he wrote. Harari also contrasted new technologies such as AI with older technologies that revolutionized our world and our geopolitical realities by reminding us that "nukes cannot invent more powerful nukes, (but) AI can make exponentially more powerful AI."

We are entering a new field of technological wizardry that is creating a whole new set of challenges for human society, but we cannot let this allow us to forget the many tremendous challenges we are already facing. Beyond the confusion artificial intelligence has already created, it is also one of those shiny new things that we cannot take our eyes, or our minds, off. As is our habit, we are again ignoring the more pressing challenges of environmental degradation, poverty, war and hatred that every day reduce our chances of handing over a livable world to our children.


The environment is certainly not artificial; we are destroying it with every passing day, yet we know we cannot survive without it. It is not artificial to realize that we are killing our once-fertile agricultural lands, just as we are killing our oceans, but we know we cannot live only off polluted air and water. Ecosystems around the world are breaking down, as floods, wildfires and hurricanes retaliate to destroy our living spaces. I doubt that artificial intelligence will come to us with a sudden fix before it is too late.

It is warranted for us to wonder how we can regulate as diffuse and confusing a threat as artificial intelligence and other new technologies. Over the past century, we have tried to regulate warfare, we have tried to regulate weapons of mass destruction, but look at us today, embroiled in a new European war that every day threatens to turn into a nuclear-armed confrontation between world powers. Unfortunately, we do not have much to show for decades of effort to tame our own worst instincts and intelligence.

Human discoveries are key to our history and, of course, they have brought great advances and opportunities for us human beings. But very often they have also come with heavy price tags, as we have discovered with climate change and the destruction of our environment. Earth has unfortunately been exhausted by our greed, hatred and disregard. That is why we must make sure that we shape artificial intelligence as What We Will, because it is our responsibility to ensure that it provides us with real intelligence and not with artifice and even greater confusion.


UT expert breaks down the pros and cons of artificial intelligence – WATE 6 On Your Side

KNOXVILLE, Tenn. (WATE) With artificial intelligence systems increasing in popularity, a big question that still lingers is how these technological advances will impact the education system and society.

Artificial intelligence, or AI, is described as software that can perform tasks that traditionally have been thought to require human intelligence.

Lynne Parker, the Associate Vice Chancellor and Director of the AI Tennessee Initiative at the University of Tennessee Knoxville, explained AI is popping up more in everyday headlines because there's more access to information than ever before and these systems can be more widely used by the public.

Parker said that a system like ChatGPT, for example, can go out into the cyber universe and produce text that can answer almost any question or prompt.

"ChatGPT is a massive AI software that has been trained by people using data that's available all across the world, so internet data, data that's from books, papers," Parker said of ChatGPT, which was created by OpenAI.

She spoke about how AI is challenging the entire education system, including the professors at UT Knoxville, to reexamine how they look at assignments moving forward while still trying to reach their learning objectives.

"We're having to rethink how we assess learning, how we achieve the kind of learning objectives that we want so that it cannot cause students to want to go use these tools and present that as their own materials," said Parker. "Instead, perhaps they can use these tools as a starting point or critique text that has been generated."

When asked if there is any software that could detect whether a student's work was produced using AI, Parker noted there is one created by the same company behind ChatGPT. However, she said it is not 100% accurate.

Parker also shared she feels there will need to be a new level of transparency across society, stating anyone using an AI tool needs to disclose it.

"Anyone who uses a tool like this for anything, it could be to generate a work of art, or a poem, or an essay, or a paper, they should declare that."

Parker took an art competition as an example and said there may need to be new categories: a purely human-created category and an AI-assisted category. She also shared that publishers in the research world are allowing people to use AI, not to write an entire paper, but maybe parts of a paper, as long as researchers disclose that information accordingly.

Parker described herself as an AI optimist, saying while changes may be ahead, early studies are showing AI can help productivity.

"Early evidence is showing that people who are really good at generating ideas but who struggle to get those ideas down on paper are helped a lot by these kinds of tools because they can help get you started," Parker shared.


Lithography-free photonic chip offers speed and accuracy for … – Science Daily

Photonic chips have revolutionized data-heavy technologies. On their own or in concert with traditional electronic circuits, these laser-powered devices send and process information at the speed of light, making them a promising solution for artificial intelligence's data-hungry applications.

In addition to their incomparable speed, photonic circuits use significantly less energy than electronic ones. Electrons move relatively slowly through hardware, colliding with other particles and generating heat, while photons flow without losing energy, generating no heat at all. Unburdened by the energy loss inherent in electronics, integrated photonics are poised to play a leading role in sustainable computing.

Photonics and electronics draw on separate areas of science and use distinct architectural structures. Both, however, rely on lithography to define their circuit elements and connect them sequentially. While photonic chips don't make use of the transistors that populate electronic chips' ever-shrinking and increasingly layered grooves, their complex lithographic patterning guides laser beams through a coherent circuit to form a photonic network that can perform computational algorithms.

But now, for the first time, researchers at the University of Pennsylvania School of Engineering and Applied Science have created a photonic device that provides programmable on-chip information processing without lithography, offering the speed of photonics augmented by superior accuracy and flexibility for AI applications.

Achieving unparalleled control of light, this device consists of spatially distributed optical gain and loss. Lasers cast light directly on a semiconductor wafer, without the need for defined lithographic pathways.

Liang Feng, Professor in the Departments of Materials Science and Engineering (MSE) and Electrical Systems and Engineering (ESE), along with Ph.D. student Tianwei Wu (MSE) and postdoctoral fellows Zihe Gao and Marco Menarini (ESE), introduced the microchip in a recent study published in Nature Photonics.

Silicon-based electronic systems have transformed the computational landscape. But they have clear limitations: they are slow in processing signals, they work through data serially rather than in parallel, and they can only be miniaturized to a certain extent. Photonics is one of the most promising alternatives because it can overcome all these shortcomings.

"But photonic chips intended for machine learning applications face the obstacles of an intricate fabrication process where lithographic patterning is fixed, limited in reprogrammability, subject to error or damage and expensive," says Feng. "By removing the need for lithography, we are creating a new paradigm. Our chip overcomes those obstacles and offers improved accuracy and ultimate reconfigurability given the elimination of all kinds of constraints from predefined features."

Without lithography, these chips become adaptable data-processing powerhouses. Because patterns are not pre-defined and etched in, the device is intrinsically free of defects. Perhaps more impressively, the lack of lithography renders the microchip impressively reprogrammable, able to tailor its laser-cast patterns for optimal performance, be the task simple (few inputs, small datasets) or complex (many inputs, large datasets).

In other words, the intricacy or minimalism of the device is a sort of living thing, adaptable in ways no etched microchip could be.

"What we have here is something incredibly simple," says Wu. "We can build and use it very quickly. We can integrate it easily with classical electronics. And we can reprogram it, changing the laser patterns on the fly to achieve real-time reconfigurable computing for on-chip training of an AI network."

An unassuming slab of semiconductor, the device couldn't be simpler. It's the manipulation of this slab's material properties that is the key to the research team's breakthrough: projecting lasers into dynamically programmable patterns to reconfigure the computing functions of the photonic information processor.

This ultimate reconfigurability is critical for real-time machine learning and AI.

"The interesting part," says Menarini, "is how we are controlling the light. Conventional photonic chips are technologies based on passive material, meaning its material scatters light, bouncing it back and forth. Our material is active. The beam of pumping light modifies the material such that when the signal beam arrives, it can release energy and increase the amplitude of signals."

"This active nature is the key to this science, and the solution required to achieve our lithography-free technology," adds Gao. "We can use it to reroute optical signals and program optical information processing on-chip."

Feng compares the technology to an artistic tool, a pen for drawing pictures on a blank page.

"What we have achieved is exactly the same: pumping light is our pen to draw the photonic computational network (the picture) on a piece of unpatterned semiconductor wafer (the blank page)."

But unlike indelible lines of ink, these beams of light can be drawn and redrawn, their patterns tracing innumerable paths to the future.


System Overl04d: The Takeover of Artificial Intelligence … – The Southern Digest

The takeover of AI, or artificial intelligence, is a topic of concern for many people. With the rapid advancements in technology, there is a fear that AI could eventually become more intelligent than humans and take over many aspects of our lives.

One of the primary concerns with the takeover of AI is that it could lead to massive job loss. As AI becomes more advanced, it can replace many of the tasks that are currently performed by humans. For example, self-driving cars and trucks could replace the need for human drivers, and automated factories could replace human workers. This could lead to widespread unemployment and economic instability.

Another concern with the takeover of AI is that it could lead to a loss of control. As AI becomes more advanced, it may become more difficult for humans to understand or predict its behavior. This could lead to unintended consequences, such as the creation of autonomous weapons that could cause harm without human intervention. Additionally, there is a fear that AI could become so advanced that it could make decisions on its own, without human oversight.

There is also a concern that the takeover of AI could lead to a loss of privacy. As AI becomes more integrated into our lives, it will have access to a vast amount of data about us. This data could be used to create personalized advertising or to make decisions about our lives without our consent.

Despite these concerns, there are also potential benefits to the takeover of AI. For example, AI could be used to solve many of the world's most pressing problems, such as climate change or disease.

In conclusion, the takeover of AI is a complex and multifaceted issue that requires careful consideration. While there are certainly risks associated with the development of AI, there are also many potential benefits. It is important for us to continue to explore the possibilities of AI while also taking steps to mitigate the risks. By doing so, we can ensure that AI is used in a way that benefits humanity and does not pose a threat to our way of life.


Why artificial intelligence can’t bring the dead back to life – Fox News

This year is shaping up to be the year of artificial intelligence. ChatGPT has stolen most of the headlines, but it is only the most infamous in a wide assortment of AI platforms. One of the most recent to arrive on the scene is HereAfter AI, an app that can "preserve memories with an app that interviews you about your life." The goal: to "let loved ones hear meaningful stories by chatting with the virtual you." Heaven, not in the clouds, but the cloud. Nirvana on your iPhone. Reincarnation through silicon.

The problem is, it won't work. Can't work, in fact.

At this point, no one doubts we can use AI to simulate a generic person, or even a particular person. But this could only ever be a simulation, not the real deal. The reason doesn't have to do with the technical limitations of AI. It rather has to do with the fact that humans are not disembodied souls or pure spirits that could be uploaded to a computer in the first place. Our bodies are not only biological realities; they are a crucial part of who we are.

A couple of examples bring the point home: if you are a dancer or an athlete or a musician, you know that when you dance a tango or go in for a layup or run an arpeggio, you think with your body. If you try to think with your head ("first step there, just like so"), you'll trip up. That's why I can't dance: I overthink it. Eliminate the body by putting me on an app, and you've eliminated what made me me in the first place.


Even if it were possible to upload loved ones to a computer, it isn't clear that this would be something we would want. When we lose a loved one, we would do anything to have that person back with us. That's a natural human desire. But think through what it would mean never to lose anyone, to always have our loved ones in an app, ready for consultation. Not only our parents and grandparents would be part of our lives, but multiple generations of great-grandparents as well. That may be good, or it may be, well, strange. But there's no question it would be different from anything we've ever experienced. Imagine the conversations around the Thanksgiving table. Interesting? Absolutely. Something we deeply desire? Not as clear.

There are also problems for HereAfter AI that come directly from how AI is created. To create an AI, one of the first steps is "training": feeding the model massive amounts of data. The model then looks for patterns in these data to transform them into something new. The more training data, the better the model. That's why Facebook and Twitter and the others are data-hungry: the more data they gather, the better their models become. And it is why ChatGPT is such a powerful form of AI: it was trained on massive amounts of data. As in: all of Wikipedia, millions of ebooks, and snapshots of the entire internet.
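The training step described above (feed the model data, let it find patterns, generate something new from those patterns) can be sketched in miniature with a toy bigram model. Everything here, the function names, the tiny corpus, is invented for illustration; real systems like ChatGPT are vastly more complex, but the dependence on training data is the same: the model can only reproduce patterns it has seen.

```python
import random
from collections import defaultdict

def train_bigram_model(corpus):
    """'Training' here just means counting which word follows which.
    The learned 'patterns' are nothing more than word-pair frequencies."""
    model = defaultdict(list)
    words = corpus.split()
    for prev, nxt in zip(words, words[1:]):
        model[prev].append(nxt)
    return model

def generate(model, start, length=8, seed=0):
    """Produce new text by repeatedly following the learned patterns."""
    rng = random.Random(seed)
    out = [start]
    for _ in range(length):
        followers = model.get(out[-1])
        if not followers:  # the model has never seen this word lead anywhere
            break
        out.append(rng.choice(followers))
    return " ".join(out)

# A toy corpus: real models train on billions of words, which is why
# more data yields a better model.
corpus = "the model learns patterns the model sees in the data it reads"
model = train_bigram_model(corpus)
print(generate(model, "the"))
```

The point the article makes falls directly out of this sketch: if a word (or a side of someone's personality) never appears in the corpus, the model simply cannot generate it.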

Here's the issue: in creating an AI to mimic those we've lost, we'll need to train the model. How do we do that? HereAfter AI has the answer: feed the model text threads, personal letters, emails, home videos; the list goes on. As with all models, more data means a better model. The more you feed it, the closer you come to bringing back someone you love.


How many of us, though, in attempting to bring back a loved one, would feed HereAfter AI all the snarky things a loved one said? The times grandma didn't give us the benefit of the doubt? The times a spouse spouted conspiracy theories or garbled words or just plain got things wrong? The times a child lied? Not much of that, I'm guessing. Train it on the happy times instead.


But a model, of course, is only as good as its training data. Any "person" we've created using only happy data will be but a shiny veneer of a genuine human being. All of us have bumps and warts, failings and shortcomings, biases and blind spots. That's part of being human. Sometimes our shortcomings are our most endearing parts: my family loves me because of, and not despite, my quirks and limitations. Remove the bumps and warts, and you haven't created a human at all. You've instead created a saccharine caricature, dressed in a skin that resembles someone you used to know.

In the Harry Potter series, Albus Dumbledore reflects on Lord Voldemort's quest for immortality: "humans do have a knack for choosing precisely those things that are worst for them." HereAfter AI is no Lord Voldemort, but they've made the same mistake. Life on an app, for either you or your loved ones, is not heaven. It's not something we even want. What is it? Impossible.
