Archive for the ‘Artificial General Intelligence’ Category

What Is AI? How Artificial Intelligence Works (2024) – Shopify

Your favorite streaming service, your email spam filter, and your smart thermostat have one thing in common: They're all powered by artificial intelligence (AI). AI was once the stuff of science fiction, but it's now part of our daily lives. AI technology can simulate human intelligence, letting machines conquer tasks that were once the sole province of the human brain.

AI systems aren't just for consumer use. If you own a business, you can probably use AI tools to simplify your workflow, tackle gnawing problems, and perform tasks you'd rather not do yourself. Here's an overview of artificial intelligence.

The term artificial intelligence, or AI, refers to the simulation of human intelligence by machines, mainly computer systems. It includes areas of computer science research such as machine learning (ML), natural language processing (NLP), computer vision, and robotics. Through algorithms and data, an AI system can analyze vast amounts of information and derive insights or make predictions. Advanced AI systems even learn from their mistakes and reprogram themselves, much as a human might do.

Sophisticated AI systems function as artificial neural networks loosely modeled on the human brain. Deep neural networks operate without human intervention, meaning that an AI program teaches itself to perform specific tasks, much as a human can.

Artificial intelligence encompasses the various sub-disciplines of computer science that focus on enabling machines to mimic human intelligence and perform tasks typically requiring human cognition. Much of today's AI capability revolves around four key concepts: machine learning, deep learning, reinforcement learning, and natural language processing (NLP). Here's a breakdown of each of these AI techniques:

Machine learning (ML) hinges on AI algorithms: complex mathematical formulas that let systems learn from and make predictions or decisions based on data. These machine learning algorithms let computers identify patterns in large datasets without being explicitly programmed to do so.

An array of AI training processes makes machine learning possible. These include supervised learning (where AI models learn from labeled data) and unsupervised learning (where AI models discover patterns in unlabeled training data).
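To make the two regimes concrete, here is a minimal Python sketch using scikit-learn (the library choice and the toy data are assumptions; the article names no specific tooling). A supervised classifier fits labeled examples, while an unsupervised clustering algorithm finds structure in the same data with no labels at all.

```python
# A minimal sketch of supervised vs. unsupervised learning.
# Library choice (scikit-learn) and the toy data are illustrative assumptions.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.cluster import KMeans

# Supervised learning: the model sees features X together with labels y.
X = np.array([[1.0, 2.0], [2.0, 1.0], [8.0, 9.0], [9.0, 8.0]])
y = np.array([0, 0, 1, 1])                      # human-provided labels
classifier = LogisticRegression().fit(X, y)
print(classifier.predict([[1.5, 1.5]]))         # -> [0]

# Unsupervised learning: the model sees only X and discovers groupings itself.
clusterer = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)
print(clusterer.labels_)                        # two clusters, no labels given
```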

Deep learning is a subset of machine learning inspired by the structure and function of the human brain's neural networks. Deep learning models are built with more than three layers of artificial neural networks (ANNs).

A neural network can perform different functions depending on its architecture. Convolutional neural networks (CNNs) are particularly effective at recognizing images, while recurrent neural networks (RNNs) excel at processing sequential data, such as language translation and speech recognition. Deep learning algorithms have been instrumental in the development of AI capabilities like speech recognition, image recognition, computer vision, and autonomous driving, to name just a few examples.
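As a rough illustration of how such layered networks are put together, here is a minimal convolutional network sketch written with PyTorch (an assumed framework; the article does not name one, and the layer sizes are arbitrary). Convolutional layers extract local visual features, and a final linear layer turns them into class scores.

```python
# A minimal CNN sketch for image classification; sizes are illustrative only.
import torch
import torch.nn as nn

class TinyCNN(nn.Module):
    def __init__(self, num_classes: int = 10):
        super().__init__()
        # Two convolution + pooling stages extract increasingly abstract features.
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        # A fully connected layer maps the pooled features to class scores.
        self.classifier = nn.Linear(32 * 7 * 7, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.classifier(self.features(x).flatten(1))

# A batch of eight 28x28 grayscale images -> one score per class per image.
scores = TinyCNN()(torch.randn(8, 1, 28, 28))
print(scores.shape)  # torch.Size([8, 10])
```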

Reinforcement learning is an area of machine intelligence in which computer systems are trained to make sequential decisions. These systems learn through interaction with the environment, receiving feedback based on their actions. Computer scientists combine mathematical optimization with neural networks to build deep reinforcement learning techniques, which play a major role in AI projects such as robotics, game playing, recommendation systems, and self-driving cars.
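That interact-and-receive-feedback loop can be sketched with tabular Q-learning, one of the simplest reinforcement learning algorithms. The five-state corridor environment, reward scheme, and hyperparameters below are invented purely for illustration.

```python
# A minimal Q-learning sketch: the agent learns, from reward feedback alone,
# that moving right along a small corridor reaches the goal state.
import random

n_states, n_actions = 5, 2              # states 0..4; actions: 0 = left, 1 = right
q = [[0.0] * n_actions for _ in range(n_states)]
alpha, gamma, epsilon = 0.1, 0.9, 0.2   # learning rate, discount, exploration rate

for episode in range(500):
    state = 0
    while state != n_states - 1:        # reaching the last state ends the episode
        # Explore occasionally; otherwise act greedily on current value estimates.
        if random.random() < epsilon:
            action = random.randrange(n_actions)
        else:
            action = max(range(n_actions), key=lambda a: q[state][a])
        next_state = min(state + 1, n_states - 1) if action == 1 else max(state - 1, 0)
        reward = 1.0 if next_state == n_states - 1 else 0.0
        # Update the estimate toward the reward plus discounted future value.
        q[state][action] += alpha * (reward + gamma * max(q[next_state]) - q[state][action])
        state = next_state

print(q)  # moving right (action 1) should score higher in every state
```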

Natural language processing (NLP) is a branch of AI concerned with enabling computers to understand, interpret, and generate human language. NLP techniques include text analysis, sentiment analysis, entity recognition, and machine translation. NLP algorithms use statistical methods, rule-based approaches, machine learning, and deep learning techniques to process and analyze text.

All of this helps generative AI tools build and use large language models (LLMs) that communicate with human beings. Data scientists have used NLP to build virtual assistants like Siri, chatbots, language translation services, and text summarization tools.
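As a toy example of the rule-based end of that spectrum, here is a minimal lexicon-based sentiment sketch in plain Python; the word lists are made up for illustration and are nothing like a production sentiment lexicon.

```python
# A minimal rule-based sentiment sketch; the tiny lexicons are illustrative only.
POSITIVE = {"great", "love", "fast", "helpful"}
NEGATIVE = {"slow", "broken", "refund", "disappointed"}

def sentiment(text: str) -> str:
    tokens = text.lower().split()  # naive whitespace tokenization
    score = sum(t in POSITIVE for t in tokens) - sum(t in NEGATIVE for t in tokens)
    return "positive" if score > 0 else "negative" if score < 0 else "neutral"

print(sentiment("Love how fast the shipping was"))              # positive
print(sentiment("The item arrived broken so I want a refund"))  # negative
```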

AI systems are categorized based on their capabilities and functionalities. Here are four core types of AI, with real-life artificial intelligence examples for each:


Strong AI and weak AI are terms used to differentiate artificial intelligence based on its capabilities and similarities to human intelligence. Here's a breakdown of each:

Weak AI, also known as narrow AI, refers to artificial intelligence systems that operate based on predefined rules, algorithms, or machine learning models trained on specific datasets. These can feature both structured and unstructured data: in other words, data that is labeled and organized by programmers and random data that requires more deductive reasoning.

Examples of weak AI include virtual assistants like Siri and Alexa, product recommendation systems, image recognition algorithms, and language translation services. Although these systems can appear intelligent within their limited domains, they do not possess consciousness, self-awareness, or the ability to apply their knowledge to new situations.

Strong AI, also known as artificial general intelligence (AGI) or human-level AI, refers to artificial intelligence systems with the ability to understand, learn, and apply knowledge across a wide range of tasks and domains at a level comparable to human intelligence. Although strong AI is still largely theoretical, it aims to replicate the full spectrum of human cognitive abilities, including reasoning, problem-solving, creativity, and emotional intelligence.

Strong AI systems would possess consciousness, self-awareness, and the capacity to adapt to novel situations, learn from experiences, and absorb knowledge beyond their initial training data. This could theoretically make it quite difficult to distinguish between the output of a generative AI model and a human.

Artificial intelligence offers a multitude of benefits. Here are three of the most significant:

A significant advantage of AI is its ability to automate repetitive tasks, leading to increased efficiency and productivity. AI-powered systems can perform tasks faster and more accurately than humans, reducing errors and freeing up valuable time for employees to focus on higher-value activities.

Machine learning algorithms can identify patterns, trends, and correlations within data, helping businesses make more informed decisions. From personalized recommendations in ecommerce to predictive maintenance in manufacturing, AI-powered analytics enhance decision-making processes, leading to better outcomes and competitive advantages.

Advanced AI technologies such as natural language processing, computer vision, and autonomous systems drive groundbreaking innovations in various fields such as health care, finance, and transportation. This potential will help make artificial intelligence important to the global economy in the years and decades to come.

To be sure, there are some potential downsides to AI, including:

AI programs can take on an increasing number of tasks once performed by humans. Downstream, this could result in unemployment or underemployment in certain industries, such as accounting and software development, potentially leading to socio-economic upheaval. Additionally, the unequal distribution of the benefits of AI technology could exacerbate income inequality, widening the gap between skilled and unskilled workers.

AI raises ethical and social concerns related to privacy, bias, transparency, and accountability. For instance, AI algorithms may perpetuate or amplify biases present in training data, leading to unfair or discriminatory outcomes. AI used for surveillance and facial recognition could raise questions about privacy and civil liberties.

Excessive reliance on AI systems can pose significant business risks, including the potential for misusing the vast amounts of sensitive data they contain, such as medical records or personal financial information. Moreover, the complexity of AI systems makes them challenging to understand and control fully, increasing the potential for unintended consequences and data breaches.

Applications of AI include automation, data analysis, decision-making support, personalization, natural language processing, image recognition, robotics, and health care diagnostics, among others.

The main purpose of AI is to develop systems and technologies that can mimic human intelligence to perform tasks, make decisions, and solve problems efficiently.

AI is a tool that's neither inherently good nor bad. Its impact depends on how it's developed, deployed, and regulated.


Vitalik Buterin says OpenAI’s GPT-4 has passed the Turing test – Cointelegraph

OpenAI's GPT-4, a generative artificial intelligence (AI) model, has passed the Turing test, according to Ethereum co-founder Vitalik Buterin.

The Turing test is a nebulous benchmark for AI systems purported to determine how human-like a conversational model is. The test is named after famed mathematician Alan Turing, who proposed it in 1950.

According to Turing, at the time, an AI system capable of generating text that fools humans into thinking they're having a conversation with another human would demonstrate the capacity for thought.

Nearly 75 years later, the person largely credited with conceiving the world's second most popular cryptocurrency has interpreted recent preprint research out of the University of California San Diego as indicating that a production model has finally passed the Turing test.

Researchers at the University of California San Diego recently published a preprint paper titled "People cannot distinguish GPT-4 from a human in a Turing test." In it, they had approximately 500 human test subjects interact with humans and AI models in a blind test to determine whether they could figure out which was which.

According to the research, humans mistakenly determined that GPT-4 was a human 56% of the time. This means that a machine fooled humans into thinking it was one of them more often than not.

According to Buterin, an AI system capable of fooling more than half of the humans it interacts with qualifies as passing the Turing test.


Buterin qualified his statement by saying, "Ok not quite, because humans get guessed as humans 66% of the time vs 54% for bots, but a 12% difference is tiny; in any real-world setting that basically counts as passing."

He also later added, in response to commentary on his original cast, that "the Turing test is by far the single most famous socially accepted milestone for 'AI is serious shit now.' So it's good to remind ourselves that the milestone has now been crossed."

Artificial general intelligence (AGI) and the Turing test are not necessarily related, despite the two terms often being conflated. Turing formulated his test based on his mathematical acumen and predicted a scenario where AI could fool humans into thinking it was one of them through conversation.

It bears mentioning that the Turing test is a loosely defined construct with no true benchmark or technical basis. There is no scientific consensus as to whether machines are capable of thought as living organisms are, or as to how such a feat would be measured. Simply put, AGI, or an AI's ability to think, isn't currently measurable or defined by the scientific or engineering communities.

Turing made his conceptual predictions long before the advent of token-based artificial intelligence systems and the onset of generative adversarial networks, the precursor to today's generative AI systems.

Complicating matters further is the idea of AGI, which is often associated with the Turing test. In scientific parlance, a general intelligence is one that should be capable of any intelligence-based feat. This precludes humans, as no person has shown general capabilities across the spectrum of human intellectual endeavor. Thus, it follows that an artificial general intelligence would feature thought capabilities far beyond those of any known human.

That being said, it's clear that GPT-4 doesn't fit the bill of true general intelligence in the strictly scientific sense. However, that hasn't stopped denizens of the AI community from using the term AGI to indicate any AI system capable of fooling a significant number of humans.

In the current culture, it's typical to see terms and phrases such as "AGI," "humanlike," and "passes the Turing test" to refer to any AI system that outputs content comparable to the content produced by humans.

Related: "We're just scratching the surface of crypto and AI," Microsoft exec


"I lost trust": Why the OpenAI team in charge of safeguarding humanity imploded – Vox.com

Editor's note, May 17, 2024, 11:45 pm ET: This story has been updated to include a post-publication statement that another Vox reporter received from OpenAI.

For months, OpenAI has been losing employees who care deeply about making sure AI is safe. Now, the company is positively hemorrhaging them.

Ilya Sutskever and Jan Leike announced their departures from OpenAI, the maker of ChatGPT, on Tuesday. They were the leaders of the company's superalignment team, the team tasked with ensuring that AI stays aligned with the goals of its makers rather than acting unpredictably and harming humanity.

They're not the only ones who've left. Since last November, when OpenAI's board tried to fire CEO Sam Altman only to see him quickly claw his way back to power, at least five more of the company's most safety-conscious employees have either quit or been pushed out.

What's going on here?

If you've been following the saga on social media, you might think OpenAI secretly made a huge technological breakthrough. The meme "What did Ilya see?" speculates that Sutskever, the former chief scientist, left because he saw something horrifying, like an AI system that could destroy humanity.

But the real answer may have less to do with pessimism about technology and more to do with pessimism about humans, and one human in particular: Altman. According to sources familiar with the company, safety-minded employees have lost faith in him.

"It's a process of trust collapsing bit by bit, like dominoes falling one by one," a person with inside knowledge of the company told me, speaking on condition of anonymity.

Not many employees are willing to speak about this publicly. That's partly because OpenAI is known for getting its workers to sign offboarding agreements with non-disparagement provisions upon leaving. If you refuse to sign one, you give up your equity in the company, which means you potentially lose out on millions of dollars.


(OpenAI did not respond to a request for comment in time for publication. After publication of my colleague Kelsey Piper's piece on OpenAI's post-employment agreements, OpenAI sent her a statement noting, "We have never canceled any current or former employee's vested equity nor will we if people do not sign a release or nondisparagement agreement when they exit." When Piper asked if this represented a change in policy, as sources close to the company had indicated to her, OpenAI replied: "This statement reflects reality.")

One former employee, however, refused to sign the offboarding agreement so that he would be free to criticize the company. Daniel Kokotajlo, who joined OpenAI in 2022 with hopes of steering it toward safe deployment of AI, worked on the governance team until he quit last month.

"OpenAI is training ever-more-powerful AI systems with the goal of eventually surpassing human intelligence across the board. This could be the best thing that has ever happened to humanity, but it could also be the worst if we don't proceed with care," Kokotajlo told me this week.

OpenAI says it wants to build artificial general intelligence (AGI), a hypothetical system that can perform at human or superhuman levels across many domains.

"I joined with substantial hope that OpenAI would rise to the occasion and behave more responsibly as they got closer to achieving AGI. It slowly became clear to many of us that this would not happen," Kokotajlo told me. "I gradually lost trust in OpenAI leadership and their ability to responsibly handle AGI, so I quit."

And Leike, explaining in a thread on X why he quit as co-leader of the superalignment team, painted a very similar picture Friday. "I have been disagreeing with OpenAI leadership about the company's core priorities for quite some time, until we finally reached a breaking point," he wrote.


To get a handle on what happened, we need to rewind to last November. That's when Sutskever, working together with the OpenAI board, tried to fire Altman. The board said Altman was "not consistently candid in his communications." Translation: We don't trust him.

The ouster failed spectacularly. Altman and his ally, company president Greg Brockman, threatened to take OpenAI's top talent to Microsoft, effectively destroying OpenAI, unless Altman was reinstated. Faced with that threat, the board gave in. Altman came back more powerful than ever, with new, more supportive board members and a freer hand to run the company.

When you shoot at the king and miss, things tend to get awkward.

Publicly, Sutskever and Altman gave the appearance of a continuing friendship. And when Sutskever announced his departure this week, he said he was heading off to pursue "a project that is very personally meaningful to me." Altman posted on X two minutes later, saying that "this is very sad to me; Ilya is a dear friend."

Yet Sutskever has not been seen at the OpenAI office in about six months, ever since the attempted coup. He has been remotely co-leading the superalignment team, tasked with making sure a future AGI would be aligned with the goals of humanity rather than going rogue. It's a nice enough ambition, but one that's divorced from the daily operations of the company, which has been racing to commercialize products under Altman's leadership. And then there was the tweet Sutskever posted shortly after Altman's reinstatement and quickly deleted.

So, despite the public-facing camaraderie, there's reason to be skeptical that Sutskever and Altman were friends after the former attempted to oust the latter.

And Altman's reaction to being fired had revealed something about his character: His threat to hollow out OpenAI unless the board rehired him, and his insistence on stacking the board with new members skewed in his favor, showed a determination to hold onto power and avoid future checks on it. Former colleagues and employees came forward to describe him as a manipulator who speaks out of both sides of his mouth, someone who claims, for instance, that he wants to prioritize safety, but contradicts that in his behaviors.

For example, Altman was fundraising with autocratic regimes like Saudi Arabia so he could spin up a new AI chip-making company, which would give him a huge supply of the coveted resources needed to build cutting-edge AI. That was alarming to safety-minded employees. If Altman truly cared about building and deploying AI in the safest way possible, why did he seem to be in a mad dash to accumulate as many chips as possible, which would only accelerate the technology? For that matter, why was he taking the safety risk of working with regimes that might use AI to supercharge digital surveillance or human rights abuses?

For employees, all this led to "a gradual loss of belief that when OpenAI says it's going to do something or says that it values something, that that is actually true," a source with inside knowledge of the company told me.

That gradual process crescendoed this week.

The superalignment team's co-leader, Jan Leike, did not bother to play nice. "I resigned," he posted on X, mere hours after Sutskever announced his departure. No warm goodbyes. No vote of confidence in the company's leadership.

Other safety-minded former employees quote-tweeted Leike's blunt resignation, appending heart emojis. One of them was Leopold Aschenbrenner, a Sutskever ally and superalignment team member who was fired from OpenAI last month. Media reports noted that he and Pavel Izmailov, another researcher on the same team, were allegedly fired for leaking information. But OpenAI has offered no evidence of a leak. And given the strict confidentiality agreement everyone signs when they first join OpenAI, it would be easy for Altman, a deeply networked Silicon Valley veteran who is an expert at working the press, to portray sharing even the most innocuous of information as "leaking," if he was keen to get rid of Sutskever's allies.

The same month that Aschenbrenner and Izmailov were forced out, another safety researcher, Cullen O'Keefe, also departed the company.

And two weeks ago, yet another safety researcher, William Saunders, wrote a cryptic post on the EA Forum, an online gathering place for members of the effective altruism movement, who have been heavily involved in the cause of AI safety. Saunders summarized the work he's done at OpenAI as part of the superalignment team. Then he wrote: "I resigned from OpenAI on February 15, 2024." A commenter asked the obvious question: Why was Saunders posting this?

"No comment," Saunders replied. Commenters concluded that he is probably bound by a non-disparagement agreement.

Putting all of this together with my conversations with company insiders, what we get is a picture of at least seven people who tried to push OpenAI to greater safety from within, but ultimately lost so much faith in its charismatic leader that their position became untenable.

"I think a lot of people in the company who take safety and social impact seriously think of it as an open question: Is working for a company like OpenAI a good thing to do?" said the person with inside knowledge of the company. "And the answer is only yes to the extent that OpenAI is really going to be thoughtful and responsible about what it's doing."

With Leike no longer there to run the superalignment team, OpenAI has replaced him with company co-founder John Schulman.

But the team has been hollowed out. And Schulman already has his hands full with his preexisting full-time job ensuring the safety of OpenAI's current products. How much serious, forward-looking safety work can we hope for at OpenAI going forward?

Probably not much.

"The whole point of setting up the superalignment team was that there's actually different kinds of safety issues that arise if the company is successful in building AGI," the person with inside knowledge told me. "So, this was a dedicated investment in that future."

Even when the team was functioning at full capacity, that dedicated investment was home to a tiny fraction of OpenAI's researchers and was promised only 20 percent of its computing power, perhaps the most important resource at an AI company. Now, that computing power may be siphoned off to other OpenAI teams, and it's unclear if there'll be much focus on avoiding catastrophic risk from future AI models.

To be clear, this does not mean the products OpenAI is releasing now (like the new version of ChatGPT, dubbed GPT-4o, which can have a natural-sounding dialogue with users) are going to destroy humanity. But what's coming down the pike?

"It's important to distinguish between 'Are they currently building and deploying AI systems that are unsafe?' versus 'Are they on track to build and deploy AGI or superintelligence safely?'" the source with inside knowledge said. "I think the answer to the second question is no."

Leike expressed that same concern in his Friday thread on X. He noted that his team had been struggling to get enough computing power to do its work and generally "sailing against the wind."

Most strikingly, Leike said, "I believe much more of our bandwidth should be spent getting ready for the next generations of models, on security, monitoring, preparedness, safety, adversarial robustness, (super)alignment, confidentiality, societal impact, and related topics. These problems are quite hard to get right, and I am concerned we aren't on a trajectory to get there."

When one of the world's leading minds in AI safety says the world's leading AI company isn't on the right trajectory, we all have reason to be concerned.



63% of surveyed Americans want government legislation to prevent super intelligent AI from ever being achieved – PC Gamer

Generative AI may well be in vogue right now, but when it comes to artificial intelligence systems that are far more capable than humans, the jury is definitely unanimous in its view. A survey of American voters showed that 63% of respondents believe government regulations should be put in place to actively prevent superintelligence from ever being achieved, not merely restrict it in some way.

The survey, carried out by YouGov for the Artificial Intelligence Policy Institute (via Vox), took place last September. While it only sampled a small number of voters in the US, just 1,118 in total, the demographics covered were broad enough to be fairly representative of the wider voting population.

One of the specific questions asked in the survey focused on "whether regulation should have the goal of delaying super intelligence." Specifically, it's talking about artificial general intelligence (AGI), something that the likes of OpenAI and Google are actively working on trying to achieve. In the case of the former, its mission expressly states this, with the goal of "ensur[ing] that artificial general intelligence benefits all of humanity" and it's a view shared by those working in the field. Even if that is one of the co-founders of OpenAI on his way out of the door...

Regardless of how honourable OpenAI's intentions are, or maybe were, it's a message that's currently lost on US voters. Of those surveyed, 63% agreed with the statement that regulation should aim to actively prevent AI superintelligence, 21% said they didn't know, and 16% disagreed altogether.

The survey's overall findings suggest that voters are significantly more worried about keeping "dangerous [AI] models out of the hands of bad actors" rather than it being of benefit to us all. Research into new, more powerful AI models should be regulated, according to 67% of the surveyed voters, and they should be restricted in what they're capable of. Almost 70% of respondents felt that AI should be regulated like a "dangerous powerful technology."

That's not to say those people were against learning about AI. When asked about a proposal in Congress that expands access to AI education, research, and training, 55% agreed with the idea, whereas 24% opposed it. The rest chose the "Don't know" response.

I suspect that part of the negative view of AGI is that the average person will undoubtedly think 'Skynet' when questioned about artificial intelligence that is better than humans. Even with systems far more basic than that, concerns over deepfakes and job losses won't help people see any of the positives that AI can potentially bring.


The survey's results will no doubt be pleasing to the Artificial Intelligence Policy Institute, as it "believe[s] that proactive government regulation can significantly reduce the destabilizing effects from AI." I'm not suggesting that it's influenced the results in any way, as my own, very unscientific, survey of immediate friends and family produced a similar outcome, i.e., AGI is dangerous and should be heavily controlled.

Regardless of whether this is true or not, OpenAI, Google, and others clearly have lots of work ahead of them, in convincing voters that AGI really is beneficial to humanity. Because at the moment, it would seem that the majority view of AI becoming more powerful is an entirely negative one, despite arguments to the contrary.


Top OpenAI researcher resigns, saying company prioritized ‘shiny products’ over AI safety – Fortune

Jan Leike, OpenAI's head of alignment, whose team focused on AI safety, has resigned from the company, saying that over the past years, "safety culture and processes have taken a backseat to shiny products."

In a post on X, the former Twitter, Leike added that he had been "disagreeing with OpenAI leadership about the company's core priorities for quite some time, until we reached a breaking point."

"OpenAI is shouldering an enormous responsibility on behalf of humanity," he continued. "We are long overdue in getting incredibly serious about the implications of AGI [artificial general intelligence]."

Leike's resignation comes just a couple of days after his co-lead on OpenAI's Superalignment team, chief scientist Ilya Sutskever, announced he was leaving the company. In his post announcing his departure, Sutskever wrote that he was "confident that OpenAI will build AGI that is both safe and beneficial."

Both Leike's and Sutskever's departures come after months of speculation about what happened in November 2023, when OpenAI's nonprofit board fired CEO Sam Altman and removed president Greg Brockman as chairman. Even after Altman was reinstated to his role as CEO and to a position on the board, it was clear that issues around the safety of the AI OpenAI is building were a point of contention among members of the board and others focused on AI safety within the company. After Altman was reinstated, Sutskever seemed to disappear, with many wondering whether he had been ousted.

Today, Bloomberg reported that OpenAI has dissolved Leike and Sutskever's Superalignment team, which will be folded into broader research efforts at the company.

At the end of his post thread on X, Leike spoke directly to OpenAI employees: "To all OpenAI employees, I want to say: Learn to feel the AGI. Act with the gravitas appropriate for what you're doing. I believe you can ship the cultural change that's needed. I am counting on you."
