Archive for the ‘Alphago’ Category

Commentary: AI’s successes – and problems – stem from our own … – CNA

The reason why machines are now able to do things that we, their makers, do not fully understand is that they have become capable of learning from experience. AlphaGo became so good by playing more games of Go than a human could fit into a lifetime. Likewise, no human could read as many books as ChatGPT has absorbed.

It's important to understand that machines have become intelligent without thinking in a human way. This realisation alone can greatly reduce confusion, and therefore anxiety.

Intelligence is not exclusively a human ability, as any biologist will tell you, and our specific brand of it is neither its pinnacle nor its destination. It may be difficult to accept for some, but intelligence has more to do with chickens crossing the road safely than with writing poetry.

In other words, we should not necessarily expect machine intelligence to evolve towards some form of consciousness. Intelligence is the ability to do the right thing in unfamiliar situations, and this can be found in machines, for example, those that recommend a new book to a user.

If we want to understand how to handle AI, we can return to a crisis that hit the industry in the late 1980s, when many researchers were still trying to mimic what we thought humans do. For example, they were trying to understand the rules of language or human reasoning, to program them into machines.

Go here to see the original:
Commentary: AI's successes - and problems - stem from our own ... - CNA

Machine anxiety: How to reduce confusion and fear about AI technology – Thaiger

In the 19th century, computing pioneer Ada Lovelace wrote that a machine can only do whatever we know how to order it to perform, little knowing that by 2023, AI technology such as the chatbot ChatGPT would be holding conversations, solving riddles, and even passing legal and medical exams. This development has elicited both excitement and concern about the potential implications of these new machines.

The ability of AI to learn from experience is the driving force behind its newfound capabilities. AlphaGo, a program designed to play and improve at the board game Go, defeated its creators using strategies they couldn't explain after playing countless games. Similarly, ChatGPT has processed far more books than any human could ever hope to read.

However, it is essential to understand that intelligence exhibited by machines is not the same as human intelligence. Different species exhibit diverse forms of intelligence without necessarily evolving towards consciousness. For example, the intelligence of AI can recommend a new book to a user, without the need for consciousness.

The obstacles encountered while trying to program machines using human-like language or reasoning led to the development of statistical language models, with the first successful example being crafted by Frederick Jelinek at IBM. This approach rapidly spread to other areas, leading to data being harvested from the web and focusing AI on observing user behaviour.

While technology has progressed significantly, there are concerns about fair decision-making and the collection of personal data. The delegation of significant decisions to AI systems has also led to tragic outcomes, such as the case of 14-year-old Molly Russell, whose death was partially blamed on harmful algorithms showing her damaging content.

Addressing these problems will require robust legislation to keep pace with AI advancements. A meaningful dialogue on what society expects from AI is essential, drawing input from a diverse range of scholars and grounded in the technical reality of what has been built rather than baseless doomsday scenarios.

Nello Cristianini is a Professor of Artificial Intelligence at the University of Bath. This commentary first appeared on The Conversation, reports Channel News Asia.

Go here to read the rest:
Machine anxiety: How to reduce confusion and fear about AI technology - Thaiger

We need more than ChatGPT to have true AI. It is merely the first ingredient in a complex recipe – Freethink

Thanks to ChatGPT we can all, finally, experience artificial intelligence. All you need is a web browser, and you can talk directly to the most sophisticated AI system on the planet, the crowning achievement of 70 years of effort. And it seems like real AI, the AI we have all seen in the movies. So, does this mean we have finally found the recipe for true AI? Is the end of the road for AI now in sight?

AI is one of humanity's oldest dreams. It goes back at least to classical Greece and the myth of Hephaestus, blacksmith to the gods, who had the power to bring metal creatures to life. Variations on the theme have appeared in myth and fiction ever since then. But it was only with the invention of the computer in the late 1940s that AI began to seem plausible.

Computers are machines that follow instructions. The programs that we give them are nothing more than finely detailed instructions, recipes that the computer dutifully follows. Your web browser, your email client, and your word processor all boil down to these incredibly detailed lists of instructions. So, if true AI is possible, the dream of having computers that are as capable as humans, then it too will amount to such a recipe. All we must do to make AI a reality is find the right recipe. But what might such a recipe look like? And given recent excitement about ChatGPT, GPT-4, and BARD, large language models (LLMs), to give them their proper name, have we now finally found the recipe for true AI?

For about 40 years, the main idea that drove attempts to build AI was that its recipe would involve modelling the conscious mind, the thoughts and reasoning processes that constitute our conscious existence. This approach was called symbolic AI, because our thoughts and reasoning seem to involve languages composed of symbols (letters, words, and punctuation). Symbolic AI involved trying to find recipes that captured these symbolic expressions, as well as recipes to manipulate these symbols to reproduce reasoning and decision making.

Symbolic AI had some successes, but failed spectacularly on a huge range of tasks that seem trivial for humans. Even a task like recognizing a human face was beyond symbolic AI. The reason for this is that recognizing faces is a task that involves perception. Perception is the problem of understanding what we are seeing, hearing, and sensing. Those of us fortunate enough to have no sensory impairments largely take perception for granted; we don't really think about it, and we certainly don't associate it with intelligence. But symbolic AI was just the wrong way of trying to solve problems that require perception.

Instead of modeling the mind, an alternative recipe for AI involves modeling structures we see in the brain. After all, human brains are the only entities that we know of at present that can create human intelligence. If you look at a brain under a microscope, you'll see enormous numbers of nerve cells called neurons, connected to one another in vast networks. Each neuron is simply looking for patterns in its network connections. When it recognizes a pattern, it sends signals to its neighbors. Those neighbors in turn are looking for patterns, and when they see one, they communicate with their peers, and so on.
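
To make that picture concrete, here is a minimal sketch in Python: each artificial neuron computes a weighted sum of its inputs and sends a strong signal on when the pattern it is tuned to appears, and a layer is just many such neurons working in parallel. The weights and inputs are invented purely for illustration; real networks contain billions of such units and learn their weights from data rather than having them written by hand.

```python
import math

def neuron(inputs, weights, bias):
    """Return a value near 1 when the weighted input pattern is present (sigmoid)."""
    activation = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1 / (1 + math.exp(-activation))

def layer(inputs, weight_rows, biases):
    """A layer is many neurons looking at the same inputs in parallel."""
    return [neuron(inputs, w, b) for w, b in zip(weight_rows, biases)]

# Two stacked layers: the outputs of one become the inputs of the next.
hidden = layer([0.2, 0.9, 0.4], [[1.0, -0.5, 0.3], [0.2, 0.8, -1.0]], [0.0, 0.1])
output = layer(hidden, [[1.5, -2.0]], [0.3])
print(output)
```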

Somehow, in ways that we cannot quite explain in any meaningful sense, these enormous networks of neurons can learn, and they ultimately produce intelligent behavior. The field of neural networks (neural nets) originally arose in the 1940s, inspired by the idea that these networks of neurons might be simulated by electrical circuits. Neural networks today are realized in software, rather than in electrical circuits, and to be clear, neural net researchers don't try to actually model the brain, but the software structures they use, very large networks of very simple computational devices, were inspired by the neural structures we see in brains and nervous systems.

Neural networks have been studied continuously since the 1940s, coming in and out of fashion at various times (notably in the late 1960s and mid 1980s), and often being seen as in competition with symbolic AI. But it is over the past decade that neural networks have decisively started to work. All the hype about AI that we have seen in the past decade is essentially because neural networks started to show rapid progress on a range of AI problems.

I'm afraid the reasons why neural nets took off this century are disappointingly mundane. For sure there were scientific advances, like new neural network structures and algorithms for configuring them. But in truth, most of the main ideas behind today's neural networks were known as far back as the 1980s. What this century delivered was lots of data and lots of computing power. Training a neural network requires both, and both became available in abundance this century.

All the headline AI systems we have heard about recently use neural networks. For example, AlphaGo, the famous Go playing program developed by London-based AI company DeepMind, which in March 2016 became the first Go program to beat a world champion player, uses two neural networks, each with 12 neural layers. The data to train the networks came from previous Go games played online, and also from self-play, that is, the program playing against itself. The recent headline AI systems, ChatGPT and GPT-4 from Microsoft-backed AI company OpenAI, as well as BARD from Google, also use neural networks. What makes the recent developments different is simply their scale. Everything about them is on a mind-boggling scale.

Consider the GPT-3 system, announced by OpenAI in the summer of 2020. This is the technology that underpins ChatGPT, and it was the LLM that signaled a breakthrough in this technology. The neural nets that make up GPT-3 are huge. Neural net people talk about the number of parameters in a network to indicate its scale. A parameter in this sense is a network component, either an individual neuron or a connection between neurons. GPT-3 had 175 billion parameters in total; GPT-4 reportedly has 1 trillion. By comparison, a human brain has something like 100 billion neurons in total, connected via as many as 1,000 trillion synaptic connections. Vast though current LLMs are, they are still some way from the scale of the human brain.

The data used to train GPT was 575 gigabytes of text. Maybe you don't think that sounds like a lot; after all, you can store that on a regular desktop computer. But this isn't video or photos or music, just ordinary written text. And 575 gigabytes of ordinary written text is an unimaginably large amount, far, far more than a person could ever read in a lifetime. Where did they get all this text? Well, for starters, they downloaded the World Wide Web. All of it. Every link in every web page was followed, the text extracted, and then the process repeated, with every link systematically followed until you have every piece of text on the web. English Wikipedia made up just 3% of the total training data.

What about the computer to process all this text and train these vast networks? Computer experts use the term floating point operation, or FLOP, to refer to an individual arithmetic calculation: one FLOP means one act of addition, subtraction, multiplication, or division. Training GPT-3 required 3 × 10²³ FLOPs. Our ordinary human experiences simply don't equip us to understand numbers that big. Put it this way: If you were to try to train GPT-3 on a typical desktop computer made in 2023, it would need to run continuously for something like 10,000 years to be able to carry out that many FLOPs.
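
As a rough sanity check of that figure, here is a back-of-the-envelope calculation. The assumed desktop speed of about one trillion floating point operations per second is an illustrative guess, not a measured benchmark.

```python
# Back-of-the-envelope check of the "10,000 years" figure above.
TRAINING_FLOPS = 3e23          # total FLOPs quoted for training GPT-3
DESKTOP_FLOPS_PER_SEC = 1e12   # assumption: ~1 teraFLOP/s for a typical 2023 desktop

seconds = TRAINING_FLOPS / DESKTOP_FLOPS_PER_SEC
years = seconds / (60 * 60 * 24 * 365)
print(f"{years:,.0f} years")   # roughly 9,500 years, i.e. on the order of 10,000
```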

Of course, OpenAI didn't train GPT-3 on desktop computers. They used very expensive supercomputers containing thousands of specialized AI processors, running for months on end. And that amount of computing is expensive. The computer time required to train GPT-3 would cost millions of dollars on the open market. Apart from anything else, this means that very few organizations can afford to build systems like ChatGPT, apart from a handful of big tech companies and nation-states.

For all their mind-bending scale, LLMs are actually doing something very simple. Suppose you open your smartphone and start a text message to your spouse with the words "what time". Your phone will suggest completions of that text for you. It might suggest "are you home" or "is dinner", for example. It suggests these because your phone is predicting that they are the likeliest next words to appear after "what time". Your phone makes this prediction based on all the text messages you have sent, and based on these messages, it has learned that these are the likeliest completions of "what time". LLMs are doing the same thing, but as we have seen, they do it on a vastly larger scale. The training data is not just your text messages, but all the text available in digital format in the world. What does that scale deliver? Something quite remarkable and unexpected.
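
A toy version of that phone-keyboard behaviour can be written in a few lines of Python: count which words have followed the prefix in past messages, then suggest the most frequent continuations. The sample messages are invented, and a real LLM replaces this lookup table with a huge neural network trained on far more text, but the underlying task, predicting the next word, is the same.

```python
from collections import Counter

history = [
    "what time are you home",
    "what time is dinner",
    "what time are you home tonight",
    "what time does the film start",
]

def suggest(prefix, messages, n=3):
    """Return the n most frequent next words seen after `prefix` in past messages."""
    prefix_words = prefix.split()
    counts = Counter()
    for msg in messages:
        words = msg.split()
        if words[:len(prefix_words)] == prefix_words and len(words) > len(prefix_words):
            counts[words[len(prefix_words)]] += 1
    return [word for word, _ in counts.most_common(n)]

print(suggest("what time", history))   # e.g. ['are', 'is', 'does']
```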

The first thing we notice when we use ChatGPT or BARD is that they are extremely good at generating very natural text. That is no surprise; it's what they are designed to do, and indeed that's the whole point of those 575 gigabytes of text. But the unexpected thing is that, in ways that we don't yet understand, LLMs acquire other capabilities as well: capabilities that must be somehow implicit within the enormous corpus of text they are trained on.

For example, we can ask ChatGPT to summarize a piece of text, and it usually does a creditable job. We can ask it to extract the key points from some text, or compare pieces of text, and it seems pretty good at these tasks as well. Although AI insiders were alerted to the power of LLMs when GPT-3 was released in 2020, the rest of the world only took notice when ChatGPT was released in November 2022. Within a few months, it had attracted hundreds of millions of users. AI has been high-profile for a decade, but the flurry of press and social media coverage when ChatGPT was released was unprecedented: AI went viral.

At this point, there is something I simply must get off my chest. Thanks to ChatGPT, we have finally reached the age of AI. Every day, hundreds of millions of people interact with the most sophisticated AI on the planet. This took 70 years of scientific labor, countless careers, billions upon billions of dollars of investment, hundreds of thousands of scientific papers, and AI supercomputers running at top speed for months. And the AI that the world finally gets is prompt completion.

Right now, the future of trillion-dollar companies is at stake. Their fate depends on prompt completion, exactly what your mobile phone does. As an AI researcher, working in this field for more than 30 years, I have to say I find this rather galling. Actually, it's outrageous. Who could possibly have guessed that this would be the version of AI that would finally hit prime time?

Whenever we see a period of rapid progress in AI, someone suggests that this is it, that we are now on the royal road to true AI. Given the success of LLMs, it is no surprise that similar claims are being made now. So, let's pause and think about this. If we succeed in AI, then machines should be capable of anything that a human being is capable of.

Consider the two main branches of human intelligence: one involves purely mental capabilities, and the other involves physical capabilities. For example, mental capabilities include logical and abstract reasoning, common sense reasoning (like understanding that dropping an egg on the floor will cause it to break, or understanding that I can't eat Kansas), numeric and mathematical reasoning, problem solving and planning, natural language processing, a rational mental state, a sense of agency, recall, and theory of mind. Physical capabilities include sensory understanding (that is, interpreting the inputs from our five senses), mobility, navigation, manual dexterity and manipulation, hand-eye coordination, and proprioception.

I emphasize that this is far from an exhaustive list of human capabilities. But if we ever have true AI, AI that is as competent as we are, then it will surely have all these capabilities.

The first obvious thing to say is that LLMs are simply not a suitable technology for any of the physical capabilities. LLMs don't exist in the real world at all, and the challenges posed by robotic AI are far, far removed from those that LLMs were designed to address. And in fact, progress on robotic AI has been much more modest than progress on LLMs. Perhaps surprisingly, capabilities like manual dexterity for robots are a long way from being solved. Moreover, LLMs suggest no way forward for those challenges.

Of course, one can easily imagine an AI system that is pure software intellect, so to speak, so how do LLMs shape up when compared to the mental capabilities listed above? Well, of these, the only one that LLMs really can claim to have made very substantial progress on is natural language processing, which means being able to communicate effectively in ordinary human languages. No surprise there; that's what they were designed for.

But their dazzling competence in human-like communication perhaps leads us to believe that they are much more competent at other things than they are. They can do some superficial logical reasoning and problem solving, but it really is superficial at the moment. But perhaps we should be surprised that they can do anything beyond natural language processing. They weren't designed to do anything else, so anything else is a bonus, and any additional capabilities must somehow be implicit in the text that the system was trained on.

For these reasons, and more, it seems unlikely to me that LLM technology alone will provide a route to true AI. LLMs are rather strange, disembodied entities. They don't exist in our world in any real sense and aren't aware of it. If you leave an LLM mid-conversation, and go on holiday for a week, it won't wonder where you are. It isn't aware of the passing of time or indeed aware of anything at all. It's a computer program that is literally not doing anything until you type a prompt, and then simply computing a response to that prompt, at which point it again goes back to not doing anything. Their encyclopedic knowledge of the world, such as it is, is frozen at the point they were trained. They don't know of anything after that.

And LLMs have never experienced anything. They are just programs that have ingested unimaginable amounts of text. LLMs might do a great job at describing the sensation of being drunk, but this is only because they have read a lot of descriptions of being drunk. They have not, and cannot, experience it themselves. They have no purpose other than to produce the best response to the prompt you give them.

This doesn't mean they aren't impressive (they are) or that they can't be useful (they are). And I truly believe we are at a watershed moment in technology. But let's not confuse these genuine achievements with true AI. LLMs might be one ingredient in the recipe for true AI, but they are surely not the whole recipe, and I suspect we don't yet know what some of the other ingredients are.

This article was reprinted with permission of Big Think, where it was originally published.

More here:
We need more than ChatGPT to have true AI. It is merely the first ingredient in a complex recipe - Freethink

Taming AI to the benefit of humans – Opinion – Chinadaily.com.cn – China Daily

For decades, artificial intelligence (AI) has captivated humanity as an enigmatic and elusive entity, often depicted in sci-fi films. Will it emerge as a benevolent angel, devotedly serving mankind, or a malevolent demon, poised to seize control and annihilate humanity?

Sci-fi movies featuring AI have often portrayed evil-minded machines set on destroying humanity, as in The Terminator, The Matrix and Blade Runner. Experts, including the late British theoretical physicist Stephen Hawking and Tesla CEO Elon Musk, have expressed concern about the potential risks of AI, with Hawking warning that it could lead to the end of the human race. These tech gurus understand the limitations of human intelligence when compared to rapidly evolving technologies like supercomputers, Big Data and cloud computing, and fear that AI will soon become too powerful to control.

In March 2016, AlphaGo, a computer program developed by Google DeepMind, decisively beat Lee Sedol, a 9-dan Korean professional Go player, with a score of 4-1, the first time a machine had defeated a world champion at Go, widely considered one of the most complex and challenging games in the world. In May 2017, AlphaGo went on to crush Ke Jie, China's then-top Go player, 3-0. These victories shattered skepticism about AI's capabilities and instilled a sense of awe and fear in many. This sentiment was further reinforced when "Master", the updated version of AlphaGo, achieved an unprecedented 60-game winning streak, beating dozens of top-notch players from China, South Korea and Japan and driving human players to despair.

These victories sparked widespread interest and debate about the potential of AI and its impact on society. Some saw them as a triumph of human ingenuity and technological progress, while others expressed concern about the implications for employment, privacy and ethics. Overall, AlphaGo's dominance in Go signaled a turning point in the history of AI and became a reminder of the power and potential of this rapidly evolving field.

If AlphaGo was an AI prodigy that impressed humans with its exceptional abilities, then ChatGPT, which made its debut in late 2022, along with its more powerful successor GPT-4, has left humans both awestruck with admiration and fearful of its potential negative impact.

GPT, or Generative Pre-trained Transformer, a language model AI, has the ability to generate human-like responses to text prompts, making it seem like you are having a conversation with a human. GPT-3, a recent version of the model, has 175 billion parameters, making it one of the largest language models built to date. Some have claimed that it has passed the Turing test.

Indisputably, AI has the potential to revolutionize many industries, from healthcare and education to finance, manufacturing and transportation, by providing more accurate diagnoses, reducing accidents and analyzing large amounts of data. It is anticipated that AI's rapid development will bring immeasurable benefits to humans.

Yet, history has shown us that major technological advancements can be a double-edged sword, capable of bringing both benefits and drawbacks. For instance, the discovery of nuclear energy has led to the creation of nuclear weapons, which have caused immense destruction and loss of life. Similarly, the widespread use of social media has revolutionized communication, but it has also led to the spread of misinformation and cyberbullying.

Despite their impressive performance, the latest versions of GPT and its Chinese counterparts, such as Baidu's Wenxin Yiyan, are not entirely reliable or trustworthy because of a serious flaw: they fabricate content. When I asked for specific metrical poems by famous ancient Chinese poets, these seemingly omniscient chatbots would present fake works cobbled together from their training data instead of authentic ones. Even when I corrected them, they would continue to provide incorrect answers without acknowledging their ignorance. Until this flaw is resolved, these chatbots cannot be considered a reliable tool.

Furthermore, AI has advanced in image and sound generation through deep learning and neural networks, including the use of generative adversarial networks (GANs) for realistic images and videos and text-to-speech algorithms for human-like speech. However, without strict monitoring, these advancements could be abused for criminal purposes, such as deepfake technology for creating convincing videos of people saying or doing things they never did, leading to the spread of false information or defamation.

It has been discovered that AI is being used for criminal purposes. On April 25th, the internet security police in Pingliang City, Gansu Province, uncovered an article claiming that nine people had died in a train collision that morning. Further investigation revealed that the news was entirely false. The perpetrator, a man named Hong, had utilized ChatGPT and other AI products to generate a large volume of fake news and profit illegally. Hong's use of AI tools allowed him to quickly search for and edit previous popular news stories, making them appear authentic and facilitating the spread of false information. In this case, AI played a significant role in the commission of the crime.

Due to the potential risks that AI poses to human society, many institutions worldwide have imposed bans or restrictions on GPT usage, citing security risks and plagiarism concerns. Some countries have also requested that GPT meet specific requirements, such as the European Union's proposed regulations that mandate AI systems to be transparent, explainable and subject to human oversight.

China has always prioritized ensuring the safety, reliability and controllability of AI to better empower global sustainable development. In its January 2023 Position Paper on Strengthening Ethical Governance of Artificial Intelligence, China actively advocates for the concepts of "people-oriented" and "AI for good".

In conclusion, while AI is undoubtedly critical to technological and social advancement, it must be tamed to serve humankind as a law-abiding and people-oriented assistant, rather than a deceitful and rebellious troublemaker. Ethics must take precedence, and legislation should establish regulations and accountability mechanisms for AI. An international consensus and concerted action are necessary to prevent AI from endangering human society.

The author is a Shenzhen-based English tutor.

The opinions expressed here are those of the writer and do not necessarily represent the views of China Daily and China Daily website.

Read more from the original source:
Taming AI to the benefit of humans - Opinion - Chinadaily.com.cn - China Daily

To understand AI’s problems look at the shortcuts taken to create it – EastMojo

"A machine can only do whatever we know how to order it to perform," wrote the 19th-century computing pioneer Ada Lovelace. This reassuring statement was made in relation to Charles Babbage's description of the first mechanical computer.

Lady Lovelace could not have known that in 2016, a program called AlphaGo, designed to play and improve at the board game Go, would not only be able to defeat all of its creators, but would do it in ways that they could not explain.

In 2023, the AI chatbot ChatGPT is taking this to another level, holding conversations in multiple languages, solving riddles and even passing legal and medical exams. Our machines are now able to do things that we, their makers, do not know how to order them to do.

This has provoked both excitement and concern about the potential of this technology. Our anxiety comes from not knowing what to expect from these new machines, both in terms of their immediate behaviour and of their future evolution.

We can make some sense of them, and the risks, if we consider that all their successes, and most of their problems, come directly from the particular recipe we are following to create them.

The reason why machines are now able to do things that we, their makers, do not fully understand is that they have become capable of learning from experience. AlphaGo became so good by playing more games of Go than a human could fit into a lifetime. Likewise, no human could read as many books as ChatGPT has absorbed.

It's important to understand that machines have become intelligent without thinking in a human way. This realisation alone can greatly reduce confusion, and therefore anxiety.

Intelligence is not exclusively a human ability, as any biologist will tell you, and our specific brand of it is neither its pinnacle nor its destination. It may be difficult to accept for some, but intelligence has more to do with chickens crossing the road safely than with writing poetry.

In other words, we should not necessarily expect machine intelligence to evolve towards some form of consciousness. Intelligence is the ability to do the right thing in unfamiliar situations, and this can be found in machines, for example those that recommend a new book to a user.

If we want to understand how to handle AI, we can return to a crisis that hit the industry from the late 1980s, when many researchers were still trying to mimic what we thought humans do. For example, they were trying to understand the rules of language or human reasoning, to program them into machines.

That didn't work, so they ended up taking some shortcuts. This move might well turn out to be one of the most consequential decisions in our history.

The first shortcut was to rely on making decisions based on statistical patterns found in data. This removed the need to actually understand the complex phenomena that we wanted the machines to emulate, such as language. The auto-complete feature in your messaging app can guess the next word without understanding your goals.
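
A minimal sketch of this first shortcut is a bigram model: it chooses the next word purely from co-occurrence counts in text it has seen, with no representation of grammar, meaning or the user's goals. The toy corpus below is invented for illustration; statistical language models of the kind described next were trained on far larger collections of real text.

```python
from collections import defaultdict, Counter

corpus = "the cat sat on the mat . the dog sat on the rug .".split()

# Count how often each word follows each other word.
following = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    following[current][nxt] += 1

def predict_next(word):
    """Most probable next word, estimated from raw counts alone."""
    counts = following[word]
    total = sum(counts.values())
    probs = {w: c / total for w, c in counts.items()}
    return max(probs, key=probs.get)

print(predict_next("sat"))   # 'on', because "on" always followed "sat" in the corpus
```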

While others had similar ideas before, the first to make this method really work, and stick, was probably Frederick Jelinek at IBM, who invented statistical language models, the ancestors of all GPTs, while working on machine translation.

In the early 1990s, he summed up that first shortcut by quipping: "Whenever I fire a linguist, our system's performance goes up." Though the comment may have been said jokingly, it reflected a real-world shift in the focus of AI away from attempts to emulate the rules of language.

This approach rapidly spread to other domains, introducing a new problem: sourcing the data necessary to train statistical algorithms.

Creating the data specifically for training tasks would have been expensive. A second shortcut became necessary: data could be harvested from the web instead.

As for knowing the intent of users, such as in content recommendation systems, a third shortcut was found: to constantly observe users' behaviour and infer from it what they might click on.
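
A toy illustration of this third shortcut: record what a user has clicked on, then recommend whatever they clicked on most often. The click log and categories here are invented, and real systems use far richer behavioural signals and learned models, but the principle of inferring intent from observed behaviour is the same.

```python
from collections import Counter

click_log = {
    "alice": ["crime novel", "crime novel", "thriller", "cookbook"],
    "bob":   ["thriller", "crime novel", "thriller"],
}

def recommend(user, log, n=2):
    """Recommend the item categories this user has clicked on most often."""
    return [item for item, _ in Counter(log[user]).most_common(n)]

print(recommend("alice", click_log))   # ['crime novel', 'thriller']
```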

By the end of this process, AI was transformed and a new recipe was born. Today, this method is found in all online translation, recommendations and question-answering tools.

For all its success, this recipe also creates problems. How can we be sure that important decisions are made fairly, when we cannot inspect the machines' inner workings?

How can we stop machines from amassing our personal data, when this is the very fuel that makes them operate? How can a machine be expected to stop harmful content from reaching users, when it is designed to learn what makes people click?

It doesn't help that we have deployed all this in a very influential position at the very centre of our digital infrastructure, and have delegated many important decisions to AI.

For instance, algorithms, rather than human decision makers, dictate what we're shown on social media in real time. In 2022, the coroner who ruled on the tragic death of 14-year-old Molly Russell partly blamed an algorithm for showing harmful material to the child without being asked to.

As these concerns derive from the same shortcuts that made the technology possible, it will be challenging to find good solutions. This is also why the initial decisions of the Italian privacy authority to block ChatGPT created alarm.

Initially, the authority raised the issues of personal data being gathered from the web without a legal basis, and of the information provided by the chatbot containing errors. This could have represented a serious challenge to the entire approach, and the fact that it was solved by adding legal disclaimers, or changing the terms and conditions, might be a preview of future regulatory struggles.

We need good laws, not doomsaying. The paradigm of AI shifted long ago, but it was not followed by a corresponding shift in our legislation and culture. That time has now come.

An important conversation has started about what we should want from AI, and this will require the involvement of different types of scholars. Hopefully, it will be based on the technical reality of what we have built, and why, rather than on sci-fi fantasies or doomsday scenarios.

Nello Cristianini, Professor of Artificial Intelligence, University of Bath

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Read the original:
To understand AI's problems look at the shortcuts taken to create it - EastMojo