Archive for the ‘Singularity’ Category

The Singularity by 2045, Plus 6 Other Ray Kurzweil Predictions – Electronics | HowStuffWorks

Here's some fun news for your day: By 2045, human beings will become second-banana to machines that have surpassed the intelligence of mankind. So, in less than 30 years, artificial intelligence will become smarter than human intelligence, and robots will rule us all. (Or something like that.) And we know it's true, because Ray Kurzweil says so.

Kurzweil isn't some cult leader. He's a director of engineering at Google. But he is in the business of predictions, as a futurist. His latest declaration, that the singularity (artificial intelligence surpassing human intelligence) will happen by 2045, is only the most recent of many predictions, for which he claims an 86 percent accuracy rate as of 2010.

Now seems like a good time to review a few more of Kurzweil's predictions:

1. In 1990, Kurzweil predicted that computers would beat chess players by 2000. Deep Blue slayed Garry Kasparov in 1997.

2. He predicted in 1999 that personal computers would be embedded in jewelry, watches and all sorts of other sizes and shapes. Uh, yeah.

3. In 1999, Kurzweil said that by 2009 we'd mostly be using speech recognition programs for the text we write. Not really happening, because it turns out that speech recognition software is super hard to perfect.

4. By 2029, Kurzweil says that advanced artificial intelligence will lead to a political and social movement for robots, lobbying for recognition and certain civil rights.

5. By the 2030s, most of our communication will not be between humans, but instead human to machine.

6. By 2099, the human brain will be entirely understood. Period. Done.

Watch Neil deGrasse Tyson, who calls himself Kurzweil's biggest skeptic in this video, talk to the inventor and futurist in this "Cosmology Today" 2016 episode:

See the original post:

The Singularity by 2045, Plus 6 Other Ray Kurzweil Predictions - Electronics | HowStuffWorks

AI-Powered Weather and Climate Models Are Set to Change Forecasting – Singularity Hub

A new system for forecasting weather and predicting future climate uses artificial intelligence to achieve results comparable with the best existing models while using much less computer power, according to its creators.

In a paper published in Nature yesterday, a team of researchers from Google, MIT, Harvard, and the European Centre for Medium-Range Weather Forecasts say their model offers enormous computational savings and can enhance the large-scale physical simulations that are essential for understanding and predicting the Earth system.

The NeuralGCM model is the latest in a steady stream of research models that use advances in machine learning to make weather and climate predictions faster and cheaper.

The NeuralGCM model aims to combine the best features of traditional models with a machine-learning approach.

At its core, NeuralGCM is what's called a general circulation model. It contains a mathematical description of the physical state of Earth's atmosphere and solves complicated equations to predict what will happen in the future.

However, NeuralGCM also uses machine learning, a process of searching out patterns and regularities in vast troves of data, for some less well-understood physical processes, such as cloud formation. The hybrid approach makes sure the output of the machine learning modules stays consistent with the laws of physics.
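That division of labor can be sketched in a few lines. This is a toy illustration of the hybrid idea, not NeuralGCM itself (the real model is written in JAX and solves full atmospheric dynamics): a physics step applies a known equation, and a stand-in "learned" term nudges the state the way a trained network would for processes the physics omits.

```python
import numpy as np

def physics_step(state, dt=1.0):
    """Toy 'dynamical core': relax temperature toward a known equilibrium.
    Stands in for solving the equations of atmospheric motion."""
    equilibrium = 288.0  # rough global-mean surface temperature, kelvin
    return state + dt * 0.1 * (equilibrium - state)

def learned_correction(state, weight):
    """Stand-in for a neural module trained on data to capture
    less well-understood processes (e.g. cloud formation)."""
    return weight * np.sin(state / 50.0)  # small, state-dependent nudge

def hybrid_forecast(state, weight, steps):
    """Alternate the physics solve and the learned correction each step,
    which is the basic shape of a hybrid physics/ML model."""
    for _ in range(steps):
        state = physics_step(state) + learned_correction(state, weight)
    return state

start = np.array([280.0, 295.0, 310.0])  # three grid cells, kelvin
forecast = hybrid_forecast(start, weight=0.01, steps=100)
```

Because the physics step dominates, the learned term can only perturb a physically plausible trajectory, which is the sense in which the hybrid stays "consistent with the laws of physics."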

The resulting model can then be used for making forecasts of weather days and weeks in advance, as well as looking months and years ahead for climate predictions.

The researchers compared NeuralGCM against other models using a standardized set of forecasting tests called WeatherBench 2. For three- and five-day forecasts, NeuralGCM did about as well as other machine-learning weather models such as Pangu and GraphCast. For longer-range forecasts, over 10 and 15 days, NeuralGCM was about as accurate as the best existing traditional models.

NeuralGCM was also quite successful in forecasting less-common weather phenomena, such as tropical cyclones and atmospheric rivers.

Machine learning models are based on algorithms that learn patterns in the data fed to them and then use this learning to make predictions. Because climate and weather systems are highly complex, machine learning models require vast amounts of historical observations and satellite data for training.

The training process is very expensive and requires a lot of computer power. However, after a model is trained, using it to make predictions is fast and cheap. This is a large part of their appeal for weather forecasting.

The high cost of training and low cost of use is similar to other kinds of machine learning models. GPT-4, for example, reportedly took several months to train at a cost of more than $100 million, but can respond to a query in moments.

A weakness of machine learning models is that they often struggle in unfamiliar situations, or in this case, extreme or unprecedented weather conditions. To improve at this, a model needs to generalize, or extrapolate beyond the data it was trained on.

NeuralGCM appears to be better at this than other machine learning models because its physics-based core provides some grounding in reality. As Earth's climate changes, unprecedented weather conditions will become more common, and we don't know how well machine learning models will keep up.

Nobody is actually using machine learning-based weather models for day-to-day forecasting yet. However, it is a very active area of researchand one way or another, we can be confident that the forecasts of the future will involve machine learning.

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Image Credit: Kochov et al. / Nature

Continued here:

AI-Powered Weather and Climate Models Are Set to Change Forecasting - Singularity Hub

This Is What Could Happen if AI Content Is Allowed to Take Over the Internet – Singularity Hub

Generative AI is a data hog.

The algorithms behind chatbots like ChatGPT learn to create human-like content by scraping terabytes of online articles, Reddit posts, TikTok captions, or YouTube comments. They find intricate patterns in the text, then spit out search summaries, articles, images, and other content.

For the models to become more sophisticated, they need to capture new content. But as more people use them to generate text and then post the results online, it's inevitable that the algorithms will start to learn from their own output, now littered across the internet. That's a problem.

A study in Nature this week found that a text-based generative AI algorithm, when heavily trained on AI-generated content, produces utter nonsense after just a few cycles of training.

"The proliferation of AI-generated content online could be devastating to the models themselves," wrote Dr. Emily Wenger at Duke University, who was not involved in the study.

Although the study focused on text, the results could also impact multimodal AI models. These models also rely on training data scraped online to produce text, images, or videos.

As the usage of generative AI spreads, the problem will only get worse.

The eventual end could be "model collapse," where AI, increasingly fed data generated by AI, is overwhelmed by noise and produces only incoherent baloney.

It's no secret generative AI often hallucinates. Given a prompt, it can spout inaccurate facts or dream up categorically untrue answers. Hallucinations could have serious consequences, such as a healthcare AI incorrectly, but authoritatively, identifying a scab as cancer.

Model collapse is a separate phenomenon, where AI trained on its own self-generated data degrades over generations. It's a bit like genetic inbreeding, where offspring have a greater chance of inheriting diseases. While computer scientists have long been aware of the problem, how and why it happens for large AI models has been a mystery.

In the new study, researchers built a custom large language model and trained it on Wikipedia entries. They then fine-tuned the model nine times using datasets generated from its own output and measured the quality of the AI's output with a so-called perplexity score. True to its name, the higher the score, the more bewildering the generated text.
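That experimental loop is easy to sketch in miniature. The toy below swaps the paper's full language model for a character-bigram model (an assumption for brevity, not the study's setup), but the procedure is the same: train on human text, generate synthetic text, retrain on that output, and track perplexity across generations.

```python
import math
import random

def train_bigram(text):
    """Count character-bigram frequencies -- our stand-in 'language model'."""
    counts = {}
    for a, b in zip(text, text[1:]):
        counts.setdefault(a, {}).setdefault(b, 0)
        counts[a][b] += 1
    return counts

def generate(model, length, seed="t", rng=None):
    """Sample text from the bigram model (the AI's own output)."""
    rng = rng or random.Random(0)
    out = seed
    for _ in range(length):
        nxt = model.get(out[-1])
        if not nxt:  # dead end: restart from a random known character
            out += rng.choice(list(model))
            continue
        chars, weights = zip(*nxt.items())
        out += rng.choices(chars, weights=weights)[0]
    return out

def perplexity(model, text):
    """exp(average negative log-likelihood): higher = more bewildering."""
    total, n = 0.0, 0
    for a, b in zip(text, text[1:]):
        nxt = model.get(a, {})
        p = (nxt.get(b, 0) + 1) / (sum(nxt.values()) + 27)  # add-one smoothing
        total += -math.log(p)
        n += 1
    return math.exp(total / max(n, 1))

human_text = "the cat sat on the mat and the dog sat on the log " * 20
held_out = "the dog and the cat sat on the mat "

model = train_bigram(human_text)
scores = []
for generation in range(5):  # retrain on the model's own output each round
    synthetic = generate(model, len(human_text), rng=random.Random(generation))
    model = train_bigram(synthetic)
    scores.append(perplexity(model, held_out))
```

A bigram model is far too crude to reproduce the study's dramatic degradation, but the scaffolding (generate, retrain, score) is the part the paper's nine fine-tuning rounds share.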

Within just a few cycles, the AI notably deteriorated.

In one example, the team gave it a long prompt about the history of building churches, one that would make most humans' eyes glaze over. After the first two iterations, the AI spewed out a relatively coherent response discussing revival architecture, with an occasional @ slipped in. By the fifth generation, however, the text completely shifted away from the original topic to a discussion of language translations.

The output of the ninth and final generation was laughably bizarre:

architecture. In addition to being home to some of the worlds largest populations of black @-@ tailed jackrabbits, white @-@ tailed jackrabbits, blue @-@ tailed jackrabbits, red @-@ tailed jackrabbits, yellow @-.

Interestingly, AI trained on self-generated data often ends up producing repetitive phrases, explained the team. Trying to push the AI away from repetition made its performance even worse. The results held up in multiple tests using different prompts, suggesting it's a problem inherent to the training procedure, rather than the language of the prompt.

The AI eventually broke down, in part because it gradually forgot bits of its training data from generation to generation.

This happens to us too. Our brains eventually wipe away memories. But we experience the world and gather new inputs. Forgetting is highly problematic for AI, which can only learn from the internet.

Say an AI sees golden retrievers, French bulldogs, and petit basset griffon Vendéens (a far more exotic dog breed) in its original training data. When asked to make a portrait of a dog, the AI would likely skew towards one that looks like a golden retriever because of an abundance of photos online. And if subsequent models are trained on this AI-generated dataset with an overrepresentation of golden retrievers, they eventually forget the less popular dog breeds.
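The dog-breed story reduces to a statistical toy: each training generation is a finite sample drawn from the previous one, so rare categories drift toward extinction while the majority persists. The breeds and proportions below are illustrative, not the study's data.

```python
import random

def resample(population, size, rng):
    """Train-on-own-output, reduced to its statistical core:
    each generation is a finite random sample of the previous one."""
    return rng.choices(population, k=size)

rng = random.Random(42)
# 90% golden retrievers, 9% French bulldogs, 1% petit basset griffon Vendeens
population = ["golden"] * 900 + ["bulldog"] * 90 + ["pbgv"] * 10

for generation in range(20):
    population = resample(population, 1000, rng)

final_share = {breed: population.count(breed) / len(population)
               for breed in ("golden", "bulldog", "pbgv")}
```

Run long enough, sampling drift like this eventually wipes out the rarest category entirely, which is the "forgetting" the article describes.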

"Although a world overpopulated with golden retrievers doesn't sound too bad, consider how this problem generalizes to the text-generation models," wrote Wenger.

AI-generated text already swerves towards well-known concepts, phrases, and tones over less common ideas and styles of writing. Newer algorithms trained on this data would exacerbate the bias, potentially leading to model collapse.

The problem is also a challenge for AI fairness across the globe. Because AI trained on self-generated data overlooks the uncommon, it also fails to gauge the complexity and nuances of our world. The thoughts and beliefs of minority populations could be less represented, especially for those speaking underrepresented languages.

"Ensuring that LLMs [large language models] can model them is essential to obtaining fair predictions, which will become more important as generative AI models become more prevalent in everyday life," wrote Wenger.

How to fix this? One way is to use watermarks (digital signatures embedded in AI-generated data) to help people detect and potentially remove the data from training datasets. Google, Meta, and OpenAI have all proposed the idea, though it remains to be seen if they can agree on a single protocol. But watermarking is not a panacea: Other companies or people may choose not to watermark AI-generated outputs or, more likely, can't be bothered.
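Since no shared protocol exists yet, the sketch below fakes a watermark with a literal marker character purely to show the filtering workflow: tag AI output at generation time, then drop tagged documents before training. Real proposals embed statistical signals in the token choices themselves, which are much harder to strip.

```python
AI_MARK = "\u200b"  # zero-width space; a toy stand-in for a real watermark

def watermark(text):
    """Tag AI-generated text so downstream pipelines can recognize it."""
    return AI_MARK + text

def is_ai_generated(text):
    """Detection is trivial here; real watermark detectors are statistical."""
    return AI_MARK in text

# A mixed scrape of the web: human posts plus one watermarked AI summary.
corpus = [
    "A human-written forum post about dogs.",
    watermark("An AI-generated summary of that post."),
    "Another human comment.",
]

# Filter the training set down to (apparently) human-authored documents.
clean_corpus = [doc for doc in corpus if not is_ai_generated(doc)]
```

The scheme only works if generators cooperate, which is exactly the coordination problem the article flags.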

Another potential solution is to tweak how we train AI models. The team found that adding more human-generated data over generations of training produced a more coherent AI.

All this is not to say model collapse is imminent. The study only looked at a text-generating AI trained on its own output. Whether it would also collapse when trained on data generated by other AI models remains to be seen. And with AI increasingly tapping into images, sounds, and videos, it's still unclear if the same phenomenon appears in those models too.

But the results suggest there's a first-mover advantage in AI. Companies that scraped the internet earlier, before it was polluted by AI-generated content, have the upper hand.

There's no denying generative AI is changing the world. But the study suggests models can't be sustained or grow over time without original output from human minds, even if it's memes or grammatically challenged comments. Model collapse is about more than a single company or country.

"What's needed now is community-wide coordination to mark AI-created data, and openly share the information," wrote the team. "Otherwise, it may become increasingly difficult to train newer versions of LLMs [large language models] without access to data that were crawled from the internet before the mass adoption of the technology or direct access to data generated by humans at scale."

Image Credit: Kadumago / Wikimedia Commons

Read the original here:

This Is What Could Happen if AI Content Is Allowed to Take Over the Internet - Singularity Hub

Scientists Say They Extended Mice's Lifespans 25% With an Antibody Drug – Singularity Hub

Age catches up with us all. Eyes struggle to focus. Muscles wither away. Memory dwindles. The risk of high blood pressure, diabetes, and other age-related diseases skyrockets.

A myriad of anti-aging therapies are in the works, and a new one just joined the fray. In mice, blocking a protein that promotes inflammation in middle age increased metabolism, lowered muscle wasting and frailty, and reduced the chances of cancer.

Unlike most previous longevity studies that tracked the health of aging male mice, the study involved both sexes, and the therapy worked across the board.

Lovingly called "supermodel grannies" by the team, the elderly lady mice looked and behaved far younger than their age, with shiny coats of fur, less fatty tissue, and muscles rivaling those of much younger mice.

The treatment didn't just boost healthy longevity, also known as healthspan (the number of years lived without disease); it also increased the mice's lifespan by up to 25 percent. The average life expectancy of people in the US is roughly 77.5 years. If the results translate from mice to people (and that's a very big if), it could mean a bump to almost 97 years.
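The arithmetic behind that "almost 97" figure is just the article's two numbers multiplied together:

```python
us_life_expectancy = 77.5    # years, as cited in the article
lifespan_gain = 0.25         # up to 25 percent, from the mouse study

# Naive extrapolation from mice to people (the article's own caveat applies).
projected = us_life_expectancy * (1 + lifespan_gain)  # 96.875 years
```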

The protein, dubbed IL-11, has been in scientists' crosshairs for decades. It promotes inflammation and causes lung and kidney scarring. It's also been associated with various types of cancers and with senescence. The likelihood of all these conditions increases as we age.

Among a slew of pro-aging proteins already discovered, IL-11 stands out as it could make a beeline for testing in humans. Blockers for IL-11 are already in the works for treating cancer and tissue scarring. Although clinical trials are still ongoing, early results show the drugs are relatively safe in humans.

"Previously proposed life-extending drugs and treatments have either had poor side-effect profiles, or don't work in both sexes, or could extend life, but not healthy life; however, this does not appear to be the case for IL-11," said study author Dr. Stuart Cook in a press release. "These findings are very exciting."

In 2017, Cook zeroed in on IL-11 as a treatment target for heart and kidney scarring, not longevity. Injecting IL-11 triggered the conditions, eventually leading to organ failure. Genetically deleting the protein protected against the diseases.

It's easy to call IL-11 a villain. But the protein is an essential part of the immune system. Produced by the bone marrow, it's necessary for embryo implantation. It also helps certain types of blood cells grow and mature, notably those that stop bleeding after a scrape.

With age, however, the protein tends to go rogue. It sparks inflammation across the body, damaging cells and tissues and contributing to cancer, autoimmune disorders, and tissue scarring. A hallmark of aging, inflammation has long been targeted as a way to reduce age-related diseases. Although IL-11 is a known trigger for inflammation, it hasn't been directly linked to aging.

Until now. The story is one of chance.

"This project started back in 2017 when a collaborator of ours sent us some tissue samples for another project," said study author Anissa Widjaja in the press release. She was testing a method to accurately detect IL-11. Several samples of an old rat's proteins were in the mix, and she realized that IL-11 levels were far higher in those samples than in samples from younger animals.

"From the readings, we could clearly see that the levels of IL-11 increased with age, and that's when we got really excited," she said.

The results spurred the team to shift their research focus to longevity. A series of tests confirmed IL-11 levels consistently rose in a variety of tissues (muscle, fat, and liver) in both male and female mice as they aged.

To see how IL-11 influences the body, the team next deleted the gene coding for IL-11 and compared mice without the protein to their normal peers. At two years old, considered elderly for mice, tissues in normal individuals were littered with genetic signatures suggesting senescence, when cells lose their function but are still alive. Often called "zombie cells," they spew out a toxic mix of inflammatory molecules and harm their neighbors. Elderly mice without IL-11, however, had senescence genetic profiles similar to those of much younger mice.

Deleting IL-11 had other perks. Weight gain is common with age, but without IL-11, the mice maintained their slim shape and had lower levels of fat, greater lean muscle mass, and shiny, full coats of fur. It's not just about looks. Cholesterol levels and markers for liver damage were far lower than in normal peers. Aged mice without IL-11 were also spared shaking tremors, otherwise common in elderly mice, and could flexibly adjust their metabolism depending on the quantity of food they ate.

The benefits also showed up in their genetic material. DNA is protected by telomeres, a sort of end cap on chromosomes, which dwindle in length with age. Ridding cells of IL-11 prevented telomeres from eroding away in the livers and muscles of the elderly mice.

Genetically deleting IL-11 is a stretch for clinical use in humans. The team next turned to a more feasible alternative: an antibody shot. Antibodies can grab onto a target, in this case IL-11, and prevent it from functioning.

Beginning at 75 weeks, roughly the equivalent of 55 human years, the mice received an antibody shot every month for 25 weeks, over half a year. Similar antibodies are already being tested in clinical trials.

The health benefits in these mice matched those in mice without IL-11. Their weight and fat decreased, and they could better handle sugar. They also fought off signs of frailty as they aged, experiencing minimal tremors and gait problems while maintaining higher metabolisms. Rather than wasting away, their muscles were even stronger than at the beginning of the study.

The treatment didnt just increase healthspan. Monthly injections of the IL-11 antibody until natural death also increased lifespan in both male and female mice by up to 25 percent.

"These findings are very exciting. The treated mice had fewer cancers and were free from the usual signs of aging and frailty. In other words, the old mice receiving anti-IL-11 were healthier," said Cook.

Although IL-11 antibody drugs are already in clinical trials, translating these results to humans could face hurdles. Mice have a relatively short lifespan; a longevity trial in humans would be long and very expensive. The treated mice were also contained in a lab setting, whereas in the real world we roam around and have differing lifestyles (diet, exercise, drinking, smoking) that could confound results. Even if it works in humans, a shot every month beginning in middle age would likely rack up a hefty bill, providing health and life extension only to those who could afford it.

To Cook, rather than focusing on extending longevity per se, tackling a specific age-related problem, such as tissue scarring or muscle loss, is a better alternative for now.

"While these findings are only in mice, it raises the tantalizing possibility that the drugs could have a similar effect in elderly humans. Anti-IL-11 treatments are currently in human clinical trials for other conditions, potentially providing exciting opportunities to study its effects in aging humans in the future," he said.

Image Credit: MRC LMS, Duke-NUS Medical School

See the rest here:

Scientists Say They Extended Mice's Lifespans 25% With an Antibody Drug - Singularity Hub

Ray Kurzweil Still Says He Will Merge With A.I. – The New York Times

Sitting near a window inside Boston's Four Seasons Hotel, overlooking a duck pond in the city's Public Garden, Ray Kurzweil held up a sheet of paper showing the steady growth in the amount of raw computer power that a dollar could buy over the last 85 years.

A neon-green line rose steadily across the page, climbing like fireworks in the night sky.

That diagonal line, he said, showed why humanity was just 20 years away from the Singularity, a long-hypothesized moment when people will merge with artificial intelligence and augment themselves with millions of times more computational power than their biological brains now provide.
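The "millions of times" figure follows from simple compounding. Assuming price-performance doubles roughly once a year (the doubling time here is an illustrative assumption; published estimates vary), twenty years of doublings multiply compute per dollar by about a million:

```python
doubling_time_years = 1.0   # assumed doubling time; estimates vary widely
years_until_2045 = 20       # the horizon Kurzweil cites

# Exponential growth: each doubling time multiplies compute per dollar by 2.
growth_factor = 2 ** (years_until_2045 / doubling_time_years)
# 2**20 = 1,048,576 -- on the order of "millions of times"
```

This is also why the line looks straight on Kurzweil's chart: exponential growth plotted on a logarithmic axis is a diagonal.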

"If you create something that is thousands of times or millions of times more powerful than the brain, we can't anticipate what it is going to do," he said, wearing multicolored suspenders and a Mickey Mouse watch he bought at Disney World in the early 1980s.

Mr. Kurzweil, a renowned inventor and futurist who built a career on predictions that defy conventional wisdom, made the same claim in his 2005 book, "The Singularity Is Near." After the arrival of A.I. technologies like ChatGPT and recent efforts to implant computer chips inside people's heads, he believes the time is right to restate his claim. Last week, he published a sequel: "The Singularity Is Nearer."

Now that Mr. Kurzweil is 76 years old and is moving a lot slower than he used to, his predictions carry an added edge. He has long said he plans to experience the Singularity, merge with A.I. and, in this way, live indefinitely. But if the Singularity arrives in 2045, as he claims it will, there is no guarantee he will be alive to see it.


Follow this link:

Ray Kurzweil Still Says He Will Merge With A.I. - The New York Times