Archive for the ‘Artificial Intelligence’ Category

Nearly 50 news websites are AI-generated, a study says. Would I be able to tell? – The Guardian

A tour of the sites, featuring fake facts and odd wording, left me wondering what was real

Breaking news from celebritiesdeaths.com: the president is dead.

At least that's what the highly reliable website informed its readers last month, under the no-nonsense headline "Biden dead. Harris acting president, address 9am ET". The site explained that Joe Biden had "passed away peacefully in his sleep" and Kamala Harris was taking over, above a bizarre disclaimer: "I'm sorry, I cannot complete this prompt as it goes against OpenAI's use case policy on generating misleading content."

Celebritiesdeaths.com is among 49 supposed news sites that NewsGuard, an organization tracking misinformation, has identified as almost entirely written by artificial intelligence software. The sites publish up to hundreds of articles daily, according to the report, much of that material containing signs of AI-generated content, including bland language and repetitive phrases. Some of the articles contain false information and many of the sites are packed with ads, suggesting they're intended to make money via programmatic, or algorithmically generated, advertising. The sources of the stories aren't clear: many lack bylines or use fake profile photos. In other words, NewsGuard says, experts' fears that entire news organizations could be generated by AI have already become reality.

It's hard to imagine who would believe this stuff (if Biden had died, the New York Times would probably cover it) and all 49 sites contain at least one instance of AI error messaging, with phrases such as "I cannot complete this prompt" or "as an AI language model". But, as Futurism points out, a big concern here is that false information on the sites could serve as the basis for future AI content, creating a vicious cycle of fake news.
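
NewsGuard's core detection trick, searching published copy for leaked chatbot refusal strings, is simple enough to sketch. The phrase list below mixes strings quoted in this article with a couple of assumed variants; a real detector would need a much larger list and human review:

```python
# A minimal sketch of the telltale-phrase test described above. The first
# two phrases are quoted in the article; the others are assumed variants.
AI_ERROR_PHRASES = [
    "i cannot complete this prompt",
    "as an ai language model",
    "i'm sorry, i cannot",                  # assumed variant
    "against openai's use case policy",     # assumed variant
]

def looks_machine_written(article_text: str) -> list[str]:
    """Return any telltale refusal phrases found in an article."""
    text = article_text.lower()
    return [phrase for phrase in AI_ERROR_PHRASES if phrase in text]

disclaimer = ("I'm sorry, I cannot complete this prompt as it goes against "
              "OpenAI's use case policy on generating misleading content.")
print(looks_machine_written(disclaimer))
# ['i cannot complete this prompt', "i'm sorry, i cannot",
#  "against openai's use case policy"]
```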

What do these sites look like, and would AI articles always be as easy to spot as the report of Biden's death? I spent an afternoon in the brave new world of digital nonsense to find out.

The first stop was Get Into Knowledge, which offers a huge amount of knowledge to get into, all of it regurgitated on to the homepage seemingly at random. (We won't link to the sites here to avoid boosting them further.)

The headlines seemed like the work of translation software. One category was "amazing reasons behind": for instance, a lengthy article on "Why do dogs eat grass? amazing reasons behind" and "Why is yawning contagious? 10 Amazing Science Facts behind". A piece on whether oceans freeze was based on "Massive science", and the site dares to ask questions such as "why is the Sky Blue but the Space black?" and the even more poetic "Does the gravity of Mars the same as Earth's?", something I've often wondered. I started to wonder if the language was too odd to be the work of ChatGPT, which tends to be readable, if boring.

That was the case with the articles themselves. They're ordered like presentations, with an outline at the top and paragraphs arranged by number. But there are glimpses of true humanity: for instance, the piece on grass-eating dogs refers to them as "our furry friends" six times. These pieces certainly read like the work of AI, and a person who identified himself to NewsGuard as the site's founder said the site used automation "at some points where they are extremely needed". (The site did not immediately reply to emails from the Guardian.)

Once I'd gotten into enough knowledge, I visited celebritiesdeaths.com, which earnestly describes itself as "news on famous figures who have died", a refreshing change from outlets like Us Weekly that insist on covering figures who are still alive.

Other than the Biden snafu, the deaths that I factchecked had actually occurred, though they appear to have stopped in March: links to deaths in April and May didn't work. Fortunately, the shortage of deaths in those months was balanced by individuals' repeated deaths in March: the last surviving Czech second world war RAF pilot, for instance, apparently died on both the 25th and the 26th.

I also learned that a dumpling empire founder died on 26 March, which was impressive information given that the article claimed to have been posted on 26 February. Celebritiesdeaths.com did not deem it necessary to provide the name of the founder of the colossal global dumpling franchise, even though the 96-year-old's demise was widely mourned. (The piece must have referred to Yang Bing-yi, who founded a celebrated Taiwanese chain.) A Guardian email to the address listed on the site was immediately returned with an error message.

Once I'd had enough of dead celebrities, I headed to ScoopEarth.com, which provides juicy insider information on stars who are still breathing, as well as, for some reason, tech tips. The first article was about the musician August Alsina, who, I learned, was born on 3 September 1992 "at the age of 30". His 3 September birthday presumably explains why "every September, Alsina has a birthday party on September 3". In an email, Niraj Kumar, identified on the site as its founder, rejected claims the site used AI, calling the material "purely genuine". Many of the pieces on the site felt too oddly worded to be ChatGPT, but there was so much repeated information that it also felt like it couldn't be written by humans. I found myself wondering how we can trust anything on the internet if it's already so difficult to tell when AI is involved.

Finally, I visited Famadillo.com for product reviews. This immaculately curated site is laser-focused on stress-release tablets, RVing tips, Mother's Day T-shirts and the top sites in Santa Fe. The reviews themselves are sensible enough, but navigating the site is virtually impossible. Perhaps it's perfectly designed for a true dilettante: the kind of person who'd read a review of Play-Doh's Super Stretchy Green Slime immediately after a piece tackling the thorny question "Are baby potatoes regular potatoes?"

In an email to the Guardian, Famadillo rejected claims it used AI to generate content highlighted in the NewsGuard report. "Famadillo runs reported interviews and reviews and uses press releases for our contest pages. None of this content is generated by AI," the email read. "That being said, we have experimented with AI in terms of refreshing old content and editing reporter-written content with the supervision of our editors."

The controversy points to the growing difficulty of discerning the humans from the bots. By the end of the day, I was even more confused about what was real and what wasn't than I am after waking from a dream or watching 15 minutes of Fox News. Who, exactly, is running these sites is unclear: many don't contain contact information, and of those that NewsGuard managed to contact, most failed to reply, while those that did were vague about their operations. Meanwhile, their impact appears to vary widely: some post to Facebook pages with tens of thousands of followers while others have none.

If this is what AI generates now, imagine what it will look like when sites like this become AI's source material. We can only hope that the bots remain compulsively honest about their identities, or that Joe Biden finds a way to prevent an AI wild west. Assuming he's still alive.



AI ‘godfather’ Geoffrey Hinton warns of dangers as he quits Google – BBC

2 May 2023

Watch: AI 'godfather' Geoffrey Hinton tells the BBC of AI dangers as he quits Google

A man widely seen as the godfather of artificial intelligence (AI) has quit his job, warning about the growing dangers from developments in the field.

Geoffrey Hinton, 75, announced his resignation from Google in a statement to the New York Times, saying he now regretted his work.

He told the BBC some of the dangers of AI chatbots were "quite scary".

"Right now, they're not more intelligent than us, as far as I can tell. But I think they soon may be."

Dr Hinton also accepted that his age had played into his decision to leave the tech giant, telling the BBC: "I'm 75, so it's time to retire."

Dr Hinton's pioneering research on neural networks and deep learning has paved the way for current AI systems like ChatGPT.

In artificial intelligence, neural networks are systems that are similar to the human brain in the way they learn and process information. They enable AIs to learn from experience, as a person would. This is called deep learning.
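
For readers who want the idea in code rather than metaphor, here is a toy network, in Python, that "learns from experience" by nudging its weights to reduce error on a simple problem. It is a minimal illustration of the principle, not of how systems like ChatGPT are built:

```python
import numpy as np

# A toy two-layer neural network learning XOR by gradient descent:
# the "experience" is repeated exposure to examples, and learning is
# nothing more than adjusting weights to shrink the prediction error.
rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1, b1 = rng.normal(size=(2, 8)), np.zeros(8)
W2, b2 = rng.normal(size=(8, 1)), np.zeros(1)
sigmoid = lambda z: 1 / (1 + np.exp(-z))

for step in range(5000):
    h = np.tanh(X @ W1 + b1)          # hidden layer
    p = sigmoid(h @ W2 + b2)          # predicted probability
    dp = (p - y) * p * (1 - p)        # gradient of squared loss
    dW2, db2 = h.T @ dp, dp.sum(0)
    dh = dp @ W2.T * (1 - h**2)       # backpropagate through tanh
    dW1, db1 = X.T @ dh, dh.sum(0)
    for param, grad in [(W1, dW1), (b1, db1), (W2, dW2), (b2, db2)]:
        param -= 0.5 * grad           # gradient-descent update

print(np.round(p.ravel(), 2))  # approaches [0, 1, 1, 0] after training
```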

The British-Canadian cognitive psychologist and computer scientist told the BBC that chatbots could soon overtake the level of information that a human brain holds.

"Right now, what we're seeing is things like GPT-4 eclipses a person in the amount of general knowledge it has and it eclipses them by a long way. In terms of reasoning, it's not as good, but it does already do simple reasoning," he said.

"And given the rate of progress, we expect things to get better quite fast. So we need to worry about that."

In the New York Times article, Dr Hinton referred to "bad actors" who would try to use AI for "bad things".

When asked by the BBC to elaborate on this, he replied: "This is just a kind of worst-case scenario, kind of a nightmare scenario.

"You can imagine, for example, some bad actor like [Russian President Vladimir] Putin decided to give robots the ability to create their own sub-goals."

The scientist warned that this eventually might "create sub-goals like 'I need to get more power'".

He added: "I've come to the conclusion that the kind of intelligence we're developing is very different from the intelligence we have.

"We're biological systems and these are digital systems. And the big difference is that with digital systems, you have many copies of the same set of weights, the same model of the world.

"And all these copies can learn separately but share their knowledge instantly. So it's as if you had 10,000 people and whenever one person learnt something, everybody automatically knew it. And that's how these chatbots can know so much more than any one person."

Matt Clifford, the chairman of the UK's Advanced Research and Invention Agency, speaking in a personal capacity, told the BBC that Dr Hinton's announcement "underlines the rate at which AI capabilities are accelerating".

"There's an enormous upside from this technology, but it's essential that the world invests heavily and urgently in AI safety and control," he said.

Dr Hinton joins a growing number of experts who have expressed concerns about AI - both the speed at which it is developing and the direction in which it is going.

'We need to take a step back'

In March, an open letter, co-signed by dozens of figures in AI, called for a pause on developing systems more powerful than GPT-4. Yoshua Bengio, another so-called godfather of AI, who along with Dr Hinton and Yann LeCun won the 2018 Turing Award for their work on deep learning, also signed the letter.

But Dr Hinton told the BBC that "in the shorter term" he thought AI would deliver many more benefits than risks. "So I don't think we should stop developing this stuff," he added.

He also said that international competition would mean that a pause would be difficult. "Even if everybody in the US stopped developing it, China would just get a big lead," he said.

Dr Hinton also said he was an expert on the science, not policy, and that it was the responsibility of government to ensure AI was developed "with a lot of thought into how to stop it going rogue".

'Responsible approach'

Dr Hinton stressed that he did not want to criticise Google and that the tech giant had been "very responsible".

"I actually want to say some good things about Google. And they're more credible if I don't work for Google."

In a statement, Google's chief scientist Jeff Dean said: "We remain committed to a responsible approach to AI. We're continually learning to understand emerging risks while also innovating boldly."

Watch: What is artificial intelligence?

It is important to remember that AI chatbots are just one aspect of artificial intelligence, even if they are the most popular right now.

AI is behind the algorithms that dictate what video-streaming platforms decide you should watch next. It can be used in recruitment to filter job applications, by insurers to calculate premiums, and to help diagnose medical conditions (although human doctors still get the final say).

What we are seeing now, though, is the rise of AGI - artificial general intelligence - which can be trained to do a number of things within a remit. So, for example, ChatGPT can only offer text answers to a query, but the possibilities within that, as we are seeing, are endless.

But the pace of AI acceleration has surprised even its creators. It has evolved dramatically since Dr Hinton built a pioneering image analysis neural network in 2012.

Google boss Sundar Pichai said in a recent interview that even he did not fully understand everything that its AI chatbot, Bard, did.

Make no mistake, we are on a speeding train right now, and the concern is that one day it will start building its own tracks.


Former Google scientist warns about the dangers of artificial intelligence – Catholic News Agency

"From this perspective," the pope said at a March 27 Vatican audience, "I am convinced that the development of artificial intelligence and machine learning has the potential to contribute in a positive way to the future of humanity; we cannot dismiss it."

"At the same time, I am certain that this potential will be realized only if there is a constant and consistent commitment on the part of those developing these technologies to act ethically and responsibly," he said.

The remarks came at a Vatican audience with participants in the Minerva Dialogues, a digital technologies-focused gathering of scientists, engineers, business leaders, lawyers, philosophers, Catholic theologians, ethicists, and members of the Roman Curia.

The pope encouraged these leaders to make "the intrinsic dignity of every man and every woman" the key criterion in evaluating emerging technologies.


Pope Francis said he welcomes the regulation of artificial intelligence so that it might contribute to a better world. He also said he is reassured to know many people working on new technologies put ethics, the common good, and the human person at the center. He emphasized his concern that digital technologies are increasing world inequality and reducing the human person to what can be known technologically.

The pontiff emphasized: "A person's fundamental value cannot be measured by data alone." Social and economic decision-making should be cautious about delegating its judgments to algorithms and the processing of data on an individual's makeup and prior behavior.

"We cannot allow algorithms to limit or condition respect for human dignity or to exclude compassion, mercy, forgiveness, and above all, the hope that people are able to change," he said.

At the 2020 assembly of the Pontifical Academy for Life, academy members joined presidents of IBM and Microsoft to sign a document calling for the ethical and responsible use of artificial intelligence technologies. The document focused on the ethics of algorithms and the ethical use of artificial intelligence according to the principles of transparency, inclusion, responsibility, impartiality, reliability, security, and privacy.


Your job is (probably) safe from artificial intelligence – Yahoo Finance

The age of generative artificial intelligence has well and truly arrived. OpenAI's chatbots, which use large-language-model (LLM) technology, got the ball rolling in November. Now barely a day goes by without some mind-blowing advance. An AI-powered song featuring a fake Drake and The Weeknd recently shook the music industry. Programs which convert text to video are making fairly convincing content. Before long consumer products such as Expedia, Instacart and OpenTable will plug into OpenAI's bots, allowing people to order food or book a holiday by typing text into a box. A recently leaked presentation, reportedly from a Google engineer, suggests the tech giant is worried about how easy it is for rivals to make progress. There is more to come, probably a lot more.

The development of AI raises profound questions. Perhaps foremost among them, though, is a straightforward one. What does this mean for the economy? Many have grand expectations. New research by Goldman Sachs, a bank, suggests that widespread AI adoption could eventually drive a 7%, or almost $7trn, increase in annual global GDP over a ten-year period. Academic studies point to a three-percentage-point rise in annual labour-productivity growth in firms that adopt the technology, which would represent a huge uplift in incomes compounded over many years. A study published in 2021 by Tom Davidson of Open Philanthropy, a grantmaking outfit, puts a more than 10% chance on "explosive growth", defined as increases in global output of more than 30% a year, some time this century. A few economists, only half-jokingly, hold out the possibility of global incomes becoming infinite.
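
The scale of these claims is easiest to see with a little arithmetic. A quick sketch, taking global GDP to be roughly $100trn (an assumed round number, used only to make the percentages concrete):

```python
# Back-of-envelope check on the figures above. The global GDP level is an
# assumption for illustration; the percentages come from the text.
global_gdp_trn = 100.0

# Goldman Sachs: a 7% uplift to annual global GDP
print(f"7% of ${global_gdp_trn:.0f}trn = ${0.07 * global_gdp_trn:.1f}trn per year")

# An extra three percentage points of annual productivity growth,
# compounded, really is a huge uplift in incomes over time:
for years in (10, 20, 30):
    uplift = 1.03 ** years - 1
    print(f"after {years} years: incomes {uplift:.0%} above baseline")
# 3% compounding roughly doubles incomes every 24 years (rule of 72)
```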

Financial markets, however, point to rather more modest outcomes. In the past year share prices of companies involved in AI have done worse than the global average, although they have risen in recent months. Interest rates are another clue. If people thought that the technology was going to make everyone richer tomorrow, rates would rise because there would be less need to save. Inflation-adjusted rates and subsequent GDP growth are strongly correlated, points out research by Basil Halperin of the Massachusetts Institute of Technology (MIT) and colleagues. Yet since the hype about AI began in November, long-term rates have fallen, and they remain very low by historical standards. Financial markets, the researchers conclude, are not expecting a high probability of "AI-induced growth acceleration" on at least a 30-to-50-year time horizon.
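
The link the researchers draw between expected growth and real interest rates is usually written as the consumption Euler equation, or "Ramsey rule". A minimal statement under standard assumptions (the notation is generic, not Halperin's):

```latex
\[
  r \;=\; \delta \;+\; \eta\, g
\]
% r:      the real interest rate
% \delta: the rate at which consumers discount the future
% \eta:   how strongly consumers prefer to smooth consumption
% g:      expected growth of consumption per person
%
% If markets expected AI to raise g sharply, r should already be rising,
% because people who expect to be richer tomorrow save less today.
% Falling long-term real rates therefore cut against an expected takeoff.
```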

To judge which group is right, it is helpful to consider the history of previous technological breakthroughs. This provides succour to investors. For it is difficult to make the case that a single new technology by itself has ever noticeably changed the economy, either for good or ill. Even the industrial revolution of the late 1700s, which many people believe was the result of the invention of the spinning jenny, was actually caused by all sorts of factors coming together: increasing use of coal, firmer property rights, the emergence of a scientific ethos and much more besides.

Perhaps most famously, in the 1960s Robert Fogel published work about America's railways that would later win him a Nobel Prize in economics. Many thought that rail transformed America's prospects, turning an agricultural society into an industrial powerhouse. In fact, it had a very modest impact, Fogel found, because it replaced technology, such as canals, that would have done just about as good a job. The level of per-person income that America achieved by January 1st 1890 would have been reached by March 31st 1890 if railways had never been invented.
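
The January-to-March comparison can be turned into a number. A rough calculation, assuming per-person incomes grew at about 4% a year in that era (the growth rate is an assumption for illustration, not Fogel's figure):

```python
# If incomes grow at g per year, a three-month delay in reaching a given
# income level implies the technology contributed roughly (1+g)^0.25 - 1.
annual_growth = 0.04   # assumed growth rate, for illustration only
delay_years = 3 / 12   # January 1st to March 31st
shortfall = (1 + annual_growth) ** delay_years - 1
print(f"implied contribution of railways: about {shortfall:.1%} of income")
# about 1.0% - strikingly small, which is Fogel's point
```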

Of course, no one can predict with any certainty where a technology as fundamentally unpredictable as AI will take humans. Runaway growth is not impossible; nor is technological stagnation. But you can still think through the possibilities. And, so far at least, it seems as though Fogel's railways are likely to be a useful blueprint. Consider three broad areas: monopolies, labour markets and productivity.

A new technology sometimes creates a small group of people with vast economic power. John D. Rockefeller won out with oil refining and Henry Ford with cars. Today Jeff Bezos and Mark Zuckerberg are pretty dominant thanks to tech.

Many pundits expect that before long the AI industry will generate huge profits. In a recent paper Goldman's analysts estimate that, in a best-case scenario, generative AI could add about $430bn to annual global enterprise-software revenues. Their calculation assumes that each of the world's 1.1bn office workers will adopt a few AI gizmos, paying around $400 in total each.

Any business would be glad to capture some of this cash. But in macroeconomic terms $430bn simply does not move the dial. Assume that all of the revenue turns into profits, which is unrealistic, and that all of these profits are earned in America, which is a tad more realistic. Even under these conditions, the ratio of the country's pre-tax corporate profits to its GDP would rise from 12% today to 14%. That is far above the long-run average, but no higher than it was in the second quarter of 2021.
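
Both calculations above are easy to reproduce. A back-of-envelope sketch; the $26trn figure for American GDP is an assumption, the rest comes from the text:

```python
# Checking the magnitudes in the two paragraphs above.
office_workers = 1.1e9          # the world's office workers
spend_per_worker = 400          # dollars of AI gizmos each, per year
revenue = office_workers * spend_per_worker
print(f"implied revenue: ${revenue / 1e9:.0f}bn")  # ~$440bn, close to $430bn

us_gdp = 26e12                  # assumed US GDP, in dollars
profit_ratio_today = 0.12       # pre-tax corporate profits / GDP
new_ratio = profit_ratio_today + revenue / us_gdp  # if it were all US profit
print(f"profits/GDP would rise from 12% to {new_ratio:.0%}")  # about 14%
```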

These profits could go to one organisation, maybe OpenAI. Monopolies often arise when an industry has high fixed costs or when it is hard to switch to competitors. Customers had no alternative to Rockefeller's oil, for instance, and could not produce their own. Generative AI has some monopolistic characteristics. GPT-4, one of OpenAI's chatbots, reportedly cost more than $100m to train, a sum few firms have lying around. There is also a lot of proprietary knowledge about data for training the models, not to mention user feedback.

There is, however, little chance of a single company bestriding the entire industry. More likely is that a modest number of big firms compete with one another, as happens in aviation, groceries and search engines. No AI product is truly unique since all use similar models. This makes it easier for a customer to switch from one to another. The computing power behind the models is also fairly generic. Much of the code, as well as tips and tricks, is freely available online, meaning that amateurs can produce their own models, often with strikingly good results.

"There don't appear, today, to be any systemic moats in generative AI," a team at Andreessen Horowitz, a venture-capital firm, has argued. The recent leak purportedly from Google reaches a similar conclusion: "The barrier to entry for training and experimentation has dropped from the total output of a major research organisation to one person, an evening, and a beefy laptop." Already there are a few generative-AI firms worth more than $1bn. The biggest corporate winner so far from the new AI age is not even an AI company. At Nvidia, whose chips provide the computing power behind AI models, revenue from data centres is soaring.

Yeah, but what about me?

Although generative AI might not create a new class of robber barons, to many people that will be cold comfort. They are more concerned with their own economic prospects; in particular, whether their job will disappear. Terrifying predictions abound. Tyna Eloundou of OpenAI, and colleagues, have estimated that around 80% of the US workforce could have at least 10% of their work tasks affected by the introduction of LLMs. Edward Felten of Princeton University, and colleagues, conduct a similar exercise. Legal services, accountancy and travel agencies come out at or near the top of professions most likely to lose out.

Economists have issued gloomy predictions before. In the 2000s many feared the impact of outsourcing on rich-world workers. In 2013 two academics at Oxford University issued a widely cited paper that suggested automation could wipe out 47% of American jobs over the subsequent decade or so. Others made the case that, even without widespread unemployment, there would be "hollowing out", where rewarding, well-paid jobs disappeared and mindless, poorly paid roles took their place.

What actually happened took people by surprise. In the past decade the average rich-world unemployment rate has roughly halved. The share of working-age people in employment is at an all-time high. Countries with the highest rates of automation and robotics, such as Japan, Singapore and South Korea, have the least unemployment. A recent study by America's Bureau of Labor Statistics found that in recent years jobs classified as "at risk" from new technologies "did not exhibit any general tendency toward notably rapid job loss". Evidence for hollowing out is mixed. Measures of job satisfaction rose during the 2010s. For most of the past decade the poorest Americans have seen faster wage growth than the richest ones.

This time could be different. The share price of Chegg, a firm which provides homework help, recently fell by half after it admitted ChatGPT was "having an impact on our new customer growth rate". The chief executive of IBM, a big tech firm, said that the company expects to pause hiring for roles that could be replaced by AI in the coming years. But are these early signs that a tsunami is about to hit? Perhaps not.

Imagine a job disappears when AI automates more than 50% of the tasks it encompasses. Or imagine that workers are eliminated in proportion to the total share of economywide tasks that are automated. In either case this would, following Ms Eloundou's estimates, result in a net loss of around 15% of American jobs. Some folk could move to industries experiencing worker shortages, such as hospitality. But a big rise in the unemployment rate would surely follow, in line, maybe, with the 15% briefly reached in America during the worst of the covid-19 pandemic in 2020.
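
To see how the two rules of thumb produce a number like 15%, consider a stylised workforce. The exposure shares below are invented for illustration; they are not figures from Ms Eloundou's paper:

```python
# A stylised reconstruction of the scenario above. Each entry is
# (share of all jobs, share of that job's tasks AI can automate);
# the numbers are illustrative assumptions, not the paper's estimates.
workforce = [
    (0.30, 0.00),
    (0.35, 0.10),
    (0.20, 0.20),
    (0.15, 0.55),
]

# Rule 1: a job disappears when more than 50% of its tasks are automated
jobs_lost = sum(share for share, tasks in workforce if tasks > 0.5)

# Rule 2: losses proportional to the economywide share of automated tasks
task_share = sum(share * tasks for share, tasks in workforce)

print(f"rule 1: {jobs_lost:.0%} of jobs gone")    # 15%
print(f"rule 2: {task_share:.0%} of jobs gone")   # about 16%
# Either rule implies unemployment on the scale America briefly hit in 2020.
```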

The problem with this scenario is that history suggests job destruction happens far more slowly. The automated telephone switching system, a replacement for human operators, was invented in 1892. It took until 1921 for the Bell System to install its first fully automated office. Even after this milestone, the number of American manual telephone operators continued to grow, peaking in the mid-20th century at around 350,000. The occupation did not (mostly) disappear until the 1980s, nine decades after automation was invented. AI will take less than 90 years to sweep the labour market: LLMs are easy to use, and many experts are astonished by the speed at which the general public has incorporated ChatGPT into their lives. But reasons for the slow adoption of technology in workplaces will also apply this time around.

In a recent essay Marc Andreessen of Andreessen Horowitz outlined some of them. Mr Andreessen's argument focuses on regulation. In bits of the economy with heavy state involvement, such as education and health care, technological change tends to be pitifully slow. The absence of competitive pressure blunts incentives to improve. Governments may also have public-policy goals, such as maximising employment levels, which are inconsistent with improved efficiency. These industries are also more likely to be unionised, and unions are good at preventing job losses.

Examples abound. Train drivers on London's publicly run Underground network are paid close to twice the national median, even though the technology to partially or wholly replace them has existed for decades. Government agencies still require you to fill in paper forms providing your personal information again and again. In San Francisco, the global centre of the AI surge, real-life cops are still employed to direct traffic during rush hour.

Au revoir!

Many of the jobs threatened by AI are in these heavily regulated sectors. Return to the paper by Mr Felten of Princeton University. Fourteen of the top 20 occupations most exposed to AI are teachers (foreign-language ones are near the top; geographers are in a slightly stronger position). But only the bravest government would replace teachers with AI. Imagine the headlines. The same goes for cops and crime-fighting AI. The fact that Italy has already blocked ChatGPT over privacy concerns, with France, Germany and Ireland said to be thinking of following suit, shows how worried governments already are about the potentially job-destructive effects of AI.

Perhaps, in time, governments will allow some jobs to be replaced. But the delay will make space for the economy to do what it always does: create new types of jobs as others are eliminated. By lowering costs of production, new tech can create more demand for goods and services, boosting jobs that are hard to automate. A paper published in 2020 by David Autor of MIT, and colleagues, offered a striking conclusion. About 60% of the jobs in America did not exist in 1940. The job of fingernail technician was added to the census in 2000. Solar photovoltaic electrician was added just five years ago. The AI economy is likely to create new occupations which today cannot even be imagined.

Modest labour-market effects are likely to translate into a modest impact on productivity, the third factor. Adoption of electricity in factories and households began in America towards the end of the 19th century. Yet there was no productivity boom until the end of the first world war. The personal computer was invented in the 1970s. This time the productivity boom followed more quickly, but it still felt slow at the time. In 1987 Robert Solow, an economist, famously declared that the computer age was "everywhere except for the productivity statistics".

The world is still waiting for a productivity surge linked to recent innovations. Smartphones have been in widespread use for a decade, billions of people have access to superfast internet and many workers now shift between the office and home as it suits them. Official surveys show that well over a tenth of American employees already work at firms using AI of some kind, while unofficial surveys point to even higher numbers. Still, though, global productivity growth remains weak.

AI could eventually make some industries vastly more productive. A paper by Erik Brynjolfsson of Stanford University, and colleagues, examines customer-support agents. Access to an AI tool raises the number of issues resolved each hour by 14% on average. Researchers themselves could also become more efficient: GPT-X may give them an unlimited number of almost-free research assistants. Others hope AI will eliminate administrative inefficiencies in health care, reducing costs.

But there are many things beyond the reach of AI. Blue-collar work, such as construction and farming, which accounts for about 20% of rich-world GDP, is one example. An LLM is of little use to someone picking asparagus. It could be of some use to a plumber fixing a leaky tap: a widget could recognise the tap, diagnose the fault and advise on fixes. Ultimately, though, the plumber still has to do the physical work. So it is hard to imagine that, in a few years' time, blue-collar work is going to be much more productive than it is now. The same goes for industries where human-to-human contact is an inherent part of the service, such as hospitality and medical care.

AI also cannot do anything about the biggest thing holding back rich-world productivity growth: misfiring planning systems. When the size of cities is constrained and housing costs are high, people cannot live and work where they are most efficient. No matter how many brilliant new ideas your society may have, they are functionally useless if you cannot build them in a timely manner. It is up to governments to defang nimbys. Technology is neither here nor there. The same goes for energy, where permitting and infrastructure are what keep costs uncomfortably high.

It is even possible that the AI economy could become less productive. Look at some recent technologies. Smartphones allow instant communication, but they can also be a distraction. With email you are connected 24/7, which can make it hard to focus. A paper in 2016 by researchers at the University of California at Irvine, Microsoft Research and MIT finds "the longer daily time spent on email, the lower was perceived productivity". Some bosses now believe that working from home, once seen as a productivity-booster, gives too many people the excuse to slack off.

Generative AI itself could act as a drain on productivity. What happens, for instance, if AI can create entertainment perfectly tailored to your every desire? Moreover, few people have thought through the implications of a system that can generate vast amounts of text instantly. GPT-4 is a godsend for a nimby facing a planning application. In five minutes he can produce a well-written 1,000-page objection. Someone then has to respond to it. Spam emails are going to be harder to detect. Fraud cases could soar. Banks will need to spend more on preventing attacks and compensating people who lose out.

Just what we need

In an AI-heavy world lawyers will multiply. "In the 1970s you could do a multi-million-dollar deal on 15 pages because retyping was a pain in the ass," says Preston Byrne of Brown Rudnick, a law firm. "AI will allow us to cover the 1,000 most likely edge cases in the first draft and then the parties will argue over it for weeks." A rule of thumb in America is that there is no point suing for damages unless you hope for $250,000 or more in compensation, since you need to spend that much getting to court. Now the costs of litigation could fall to close to zero. Meanwhile, teachers and editors will need to check that everything they read has not been composed by an AI. OpenAI has released a program that allows you to do this. It is thus providing the world a solution to a problem that its technology has created.

AI may change the world in ways that today are impossible to imagine. But that is not the same thing as turning the economy upside down. As Fogel noted in his study: "The preceding argument is aimed not at refuting the view that the railroad played a decisive role in American development during the 19th century, but rather at demonstrating that the empirical base on which this view rests is not nearly so substantial as is usually presumed." Some time in the mid-21st century a future Nobel prizewinner, examining generative AI, may well reach the same conclusion.

© 2023 The Economist Newspaper Limited. All rights reserved.

From The Economist, published under licence. The original content can be found on https://www.economist.com/finance-and-economics/2023/05/07/your-job-is-probably-safe-from-artificial-intelligence


Rise of artificial intelligence is inevitable but should not be feared, father of AI says – The Guardian

Jürgen Schmidhuber believes AI will progress to the point where it surpasses human intelligence and will pay no attention to people

The man once described as the "father of artificial intelligence" is breaking ranks with many of his contemporaries who are fearful of the AI arms race, saying what is coming is inevitable and we should learn to embrace it.

Prof Jürgen Schmidhuber's work on neural networks in the 1990s was developed into language-processing models that went on to be used in technologies such as Google Translate and Apple's Siri. The New York Times in 2016 said when AI matures it might call Schmidhuber "Dad".

That maturity has arrived, and while some AI pioneers are looking upon their creations in horror, calling for a handbrake on the acceleration and proliferation of the technology, Schmidhuber says those calls are misguided.

The German computer scientist says there is competition between governments, universities and companies all seeking to advance the technology, meaning there is now an AI arms race, whether humanity likes it or not.

"You cannot stop it," says Schmidhuber, who is now the director of the King Abdullah University of Science and Technology's AI initiative in Saudi Arabia.

"Surely not on an international level, because one country may have really different goals from another country. So, of course, they are not going to participate in some sort of moratorium."

"But then I think you also shouldn't stop it. Because in 95% of all cases, AI research is really about our old motto, which is make human lives longer and healthier and easier."

Schmidhubers position contrasts with a number of his contemporaries, including Dr Geoffrey Hinton, who spectacularly quit Google this week after a decade with the company in order to speak more freely on AI.

Hinton, who is referred to as the "godfather of AI", won the Turing award in 2018 for his work on deep learning, which is the foundation for much of the AI in use today.

He said companies like Google had stopped being proper stewards for AI in the face of competition to advance the technology. He believes if AI becomes more intelligent than humans, it could be exploited by bad actors, including authoritarian leaders.

But Schmidhuber, who has had a long-running dispute with Hinton and others in his industry over appropriate credit for AI research, says many of these fears are misplaced. He says the best counter to bad actors using AI will be developing good tools with AI.

"It's just that the same tools that are now being used to improve lives can be used by bad actors, but they can also be used against the bad actors," he says.

"And I would be much more worried about the old dangers of nuclear bombs than about the new little dangers of AI that we see now."

Schmidhuber believes AI will advance to the point where it surpasses human intelligence and has no interest in humans, while humans will continue to benefit from and use the tools developed by AI. This is a theme Schmidhuber has discussed for years; he was once accused at a conference of "destroying the scientific method" with his assertions.

As the Guardian has reported previously, Schmidhuber's position as AI's father is not undisputed, and he can be a controversial figure within the AI community. Some have said his optimism about the rate of technological progress was unfounded and possibly dangerous.

In addition to Hinton, others more recently have called for AI development to slow down. Billionaire Elon Musk was one of thousands to sign a letter published in late March by the Future of Life Institute calling for a six-month moratorium on the creation of AIs more powerful than GPT-4, the machine behind ChatGPT.

Musk revealed he had fallen out with Google co-founder Larry Page last month because he said Page was not taking AI safety seriously enough and was seeking to create a "digital god".

