Archive for the ‘Artificial Intelligence’ Category

The Scariest Part About Artificial Intelligence – The New Republic

This problem is finally getting a small piece of the attention it deserves, thanks to recent coverage by the Financial Times, Nature, and The Atlantic. But the tech industry's fossil fuel-like tactics of greenwashing, gaslighting, and refusing to comment are going to make thorough reporting on this difficult. The closest we've gotten to candor came when OpenAI founder Sam Altman admitted at Davos that A.I. will consume much more energy than expected, straining our grids. He admitted that the situation could become untenable: "There's no way to get there without a breakthrough."

Researching this issue gives one all the feelings of a dystopian twentieth-century sci-fi movie about parasitical robots stealing our human essence and ultimately killing us off. At every point, we want to yell at the screen, "Don't let the robots in there!" We wonder: Can't they be stopped?

If there were an A.I. abolition movement, I'd join it today, ideally advocating exuberantly cruel penalties for the tech moguls who have ensnared us in this destructive and frivolous gambit. But being of a more constructive bent, Green New Deal co-author and Massachusetts Senator Ed Markey last month introduced the Artificial Intelligence Environmental Impacts Act of 2024. It's unfortunately mild, calling upon government agencies to do what the industry isn't doing: measure and investigate A.I.'s environmental footprint. It's perhaps a politically feasible first step, especially given bipartisan social and cultural concerns about A.I.

Here is the original post:
The Scariest Part About Artificial Intelligence - The New Republic

Artificial intelligence vs machine learning: what’s the difference? – ReadWrite

There are so many buzzwords in the tech world these days that keeping up with the latest trends can be challenging. Artificial intelligence (AI) has been dominating the news, so much so that AI was named the most notable word of 2023 by Collins Dictionary. However, specific terms like machine learning have often been used instead of AI.

Introduced by American computer scientist Arthur Samuel in 1959, the term machine learning describes a computer's ability to learn without being explicitly programmed.

For one, machine learning (ML) is a subset of artificial intelligence (AI). While they are often used interchangeably, especially when discussing big data, these popular technologies have several distinctions, including differences in their scope, applications, and beyond.

Most people are now aware of this concept. Still, artificial intelligence actually refers to a collection of technologies integrated into a system, allowing it to think, learn, and solve complex problems. It has the capacity to mimic cognitive abilities similar to those of human beings, enabling it to see, understand, and react to spoken or written language, analyze data, offer suggestions, and more.

Meanwhile, machine learning is just one area of AI that enables a machine or system to automatically learn and improve from experience. Rather than relying on explicit programming, it uses algorithms to sift through vast datasets, extract learning from the data, and then utilize this to make well-informed decisions. The "learning" part is that it improves over time through training and exposure to more data.

Machine learning models are the results or knowledge the program acquires by running an algorithm on training data. The more data used, the better the model's performance.
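
To make that distinction concrete, here is a minimal sketch in Python, assuming scikit-learn is installed; the tiny dataset is invented purely for illustration. The algorithm is the learning procedure, while the model is the fitted object it produces from the training data:

```python
# The algorithm (here, linear regression) is the learning procedure;
# the model is the fitted object produced by running it on training data.
from sklearn.linear_model import LinearRegression

X = [[1], [2], [3], [4]]  # training inputs
y = [2, 4, 6, 8]          # correct outputs for those inputs

algorithm = LinearRegression()  # the learning procedure, untrained
model = algorithm.fit(X, y)     # the model: knowledge extracted from the data
print(model.predict([[5]]))     # approximately [10.]
```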

Machine learning is an aspect of AI that enables machines to take knowledge from data and learn from it. In contrast, AI represents the overarching principle of allowing machines or systems to understand, reason, act, or adapt like humans.

Hence, think of AI as the entire ocean, encompassing various forms of marine life. Machine learning is like a specific species of fish in that ocean. Just as this species lives within the broader environment of the ocean, machine learning exists within the realm of AI, representing just one of many elements or aspects. However, it is still a significant and dynamic part of the entire ecosystem.

Machine learning does not impersonate human intelligence; that is not its aim. Instead, it focuses on building systems that can independently learn from and adapt to new data by identifying patterns. AI's goal, on the other hand, is to create machines that can operate intelligently and independently, simulating human intelligence to perform a wide range of tasks, from simple to highly complex ones.

For example, when you receive emails, your email service uses machine learning algorithms to filter out spam. The ML system has been trained on vast datasets of emails, learning to distinguish between spam and non-spam by recognizing patterns in the text, sender information, and other attributes. Over time, it adapts to new types of spam and to your personal preferences (like which emails you mark as spam or not), continually improving its accuracy.
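
As a rough illustration of that idea (not a description of any real provider's system), here is a toy spam classifier in Python, assuming scikit-learn is installed; the four example emails and the choice of a Naive Bayes model are invented for demonstration:

```python
# Toy spam filter: learn spam vs. non-spam ("ham") from labeled emails.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

# Invented training data: email text paired with spam/ham labels.
emails = [
    "Win a free prize now, click here",
    "Limited offer, claim your reward today",
    "Meeting moved to 3pm, agenda attached",
    "Lunch tomorrow? Let me know",
]
labels = ["spam", "spam", "ham", "ham"]

# Turn text into features, then fit a Naive Bayes classifier.
model = make_pipeline(TfidfVectorizer(), MultinomialNB())
model.fit(emails, labels)

# Classify a new message; a real filter would keep retraining as users
# mark messages, which is the "adapts over time" behavior described above.
print(model.predict(["Claim your free prize"]))  # ['spam']
```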

In this scenario, your email provider may use AI to offer smart replies, sort emails into categories (like social, promotions, primary), and even prioritize essential emails. This AI system understands the context of your emails, categorizes them, and suggests short responses based on the content it analyzes. It mimics a high level of understanding and response generation that usually requires human intelligence.

There are three main types of machine learning, along with some specialized forms: supervised, unsupervised, semi-supervised, and reinforcement learning.

In supervised learning, the machine is taught by an operator. The user supplies the machine learning algorithm with a known dataset containing specific inputs paired with their correct outputs, and the algorithm has to figure out how to produce these outputs from the given inputs. Although the user already knows the correct solutions, the algorithm must identify patterns, learn from them, and make predictions. If the predictions contain errors, the user corrects them, and this cycle repeats until the algorithm reaches a high degree of accuracy or performance.
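
A compact sketch of that supervised loop, assuming scikit-learn and its bundled iris dataset are available; the choice of logistic regression is illustrative, not prescriptive:

```python
# Supervised learning sketch: known inputs paired with correct outputs,
# a model fitted to them, and an accuracy check on held-out data.
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = load_iris(return_X_y=True)  # inputs and their known correct outputs
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = LogisticRegression(max_iter=1000)
model.fit(X_train, y_train)  # learn the input-to-output mapping

# The "repeat until accuracy is acceptable" step, in miniature:
print(f"accuracy: {model.score(X_test, y_test):.2f}")
```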

Semi-supervised learning falls between supervised and unsupervised learning and uses a mix of labeled and unlabeled data. Labeled data consists of information tagged with meaningful labels, allowing the algorithm to understand the data, whereas unlabeled data lacks these informative tags. Using this mix, machine learning algorithms can be trained to assign labels to the unlabeled data.
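
A minimal sketch of this idea, again assuming scikit-learn; the six points below are invented, with -1 marking the unlabeled examples the algorithm must label itself:

```python
# Semi-supervised sketch: a few labeled points plus unlabeled ones
# (marked -1); the algorithm propagates labels across the mix.
import numpy as np
from sklearn.semi_supervised import LabelSpreading

X = np.array([[1.0], [1.2], [0.9], [5.0], [5.2], [4.8]])
y = np.array([0, -1, -1, 1, -1, -1])  # -1 means "no label provided"

model = LabelSpreading()
model.fit(X, y)
print(model.transduction_)  # labels inferred for every point: [0 0 0 1 1 1]
```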

Unsupervised learning involves training the algorithm on a dataset without explicit labels or correct answers. The goal is for the model to identify patterns and relationships in the data by itself. It tries to learn the underlying structure of the data to categorize it into clusters or spread it along dimensions.
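
A minimal clustering sketch, once more assuming scikit-learn; the six invented points form two obvious groups that k-means discovers without any labels:

```python
# Unsupervised sketch: no labels at all; k-means groups the points
# into clusters from their structure alone.
import numpy as np
from sklearn.cluster import KMeans

X = np.array([[1.0, 1.0], [1.1, 0.9], [0.9, 1.2],
              [8.0, 8.0], [8.1, 7.9], [7.9, 8.2]])

model = KMeans(n_clusters=2, n_init=10, random_state=0)
labels = model.fit_predict(X)
print(labels)  # two groups found without any correct answers, e.g. [1 1 1 0 0 0]
```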

Finally, reinforcement learning takes a more structured approach: a machine learning algorithm is given a set of actions, parameters, and goals. The algorithm then has to navigate through various scenarios by experimenting with different strategies and assessing each outcome to identify the most effective approach. It employs trial and error, drawing on previous experiences to refine its strategy and adjusting its actions to the given situation, all to achieve the best possible result.
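
A toy sketch of that trial-and-error loop, in plain Python with no external libraries; the five-state "corridor" environment, reward scheme, and hyperparameters are all invented for illustration:

```python
# Reinforcement learning sketch: tabular Q-learning on a tiny invented
# corridor (states 0..4, reward only at state 4). Trial and error plus
# a value table stand in for the experiment-assess-refine loop above.
import random

n_states = 5
actions = [-1, +1]  # step left or step right
Q = {(s, a): 0.0 for s in range(n_states) for a in actions}
alpha, gamma, epsilon = 0.5, 0.9, 0.2  # learning rate, discount, exploration

for episode in range(200):
    s = 0
    while s != n_states - 1:
        # Explore occasionally; otherwise exploit the best known action.
        if random.random() < epsilon:
            a = random.choice(actions)
        else:
            a = max(actions, key=lambda act: Q[(s, act)])
        s_next = min(max(s + a, 0), n_states - 1)
        reward = 1.0 if s_next == n_states - 1 else 0.0
        # Q-learning update: nudge the estimate toward reward + future value.
        best_next = max(Q[(s_next, b)] for b in actions)
        Q[(s, a)] += alpha * (reward + gamma * best_next - Q[(s, a)])
        s = s_next

# After training, the learned policy steps right (+1) from every state.
print([max(actions, key=lambda act: Q[(s, act)]) for s in range(n_states - 1)])
```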

In financial contexts, AI and machine learning serve as essential tools for tasks like identifying fraudulent activities, forecasting risks, and offering enhanced proactive financial guidance. Apparently, AI-driven platforms can now offer personalized educational content based on an individual's financial behavior and needs. By delivering bite-sized, relevant information, these platforms ensure users are well-equipped to make informed financial decisions, leading to better credit scores over time. Nvidia AI posted on X that generative AI was being incorporated into curricula.

During the Covid-19 pandemic, machine learning also gave insights into the most urgent events. These technologies are also powerful weapons for cybersecurity, helping organizations protect themselves and their customers by detecting anomalies. Mobile app developers have actively integrated numerous algorithms and explicit programming to help keep financial institutions' apps free of fraud.


Read the original post:
Artificial intelligence vs machine learning: what's the difference? - ReadWrite

AI singularity may come in 2027 with artificial ‘super intelligence’ sooner than we think, says top scientist – Livescience.com

Humanity could create an artificial intelligence (AI) agent that is just as smart as humans in as soon as the next three years, a leading scientist has claimed.

Ben Goertzel, a computer scientist and CEO of SingularityNET, made the claim during the closing remarks at the Beneficial AGI Summit 2024 on March 1 in Panama City, Panama. He is known as the "father of AGI" after helping to popularize the term artificial general intelligence (AGI) in the early 2000s.

The best AI systems in deployment today are considered "narrow AI" because they may be more capable than humans in one area, based on training data, but can't outperform humans more generally. These narrow AI systems, which range from machine learning algorithms to large language models (LLMs) like ChatGPT, struggle to reason like humans and understand context.

However, Goertzel noted AI research is entering a period of exponential growth, and the evidence suggests that artificial general intelligence (AGI), where AI becomes just as capable as humans across several areas independent of the original training data, is within reach. This hypothetical point in AI development is known as the "singularity."

Goertzel suggested 2029 or 2030 could be the likeliest years when humanity will build the first AGI agent, but that it could happen as early as 2027.

Related: Artificial general intelligence (when AI becomes more capable than humans) is just moments away, Meta's Mark Zuckerberg declares

If such an agent is designed to have access to and rewrite its own code, it could then very quickly evolve into an artificial super intelligence (ASI), which Goertzel loosely defined as an AI that has the cognitive and computing power of all of human civilization combined.

"No one has created human-level artificial general intelligence yet; nobody has a solid knowledge of when we're going to get there. I mean, there are known unknowns and probably unknown unknowns. On the other hand, to me it seems quite plausible we could get to human-level AGI within, let's say, the next three to eight years," Goertzel said.

He pointed to "three lines of converging evidence" to support his thesis. The first is modeling by computer scientist Ray Kurzweil in the book "The Singularity is Near" (Viking USA, 2005), which has been refined in his forthcoming book "The Singularity is Nearer" (Bodley Head, June 2024). In his book, Kurzweil built predictive models that suggest AGI will be achievable in 2029, largely centering on the exponential nature of technological growth in other fields.

Goertzel also pointed to improvements made to LLMs within the past few years, which have "woken up so much of the world to the potential of AI." He clarified that LLMs in themselves will not lead to AGI, because the way they show knowledge doesn't represent genuine understanding, but that LLMs may be one component in a broad set of interconnected architectures.

The third piece of evidence, Goertzel said, lay in his work building such an infrastructure, which he has called "OpenCog Hyperon," as well as associated software systems and a forthcoming AGI programming language, dubbed "MeTTa," to support it.

OpenCog Hyperon is a form of AI infrastructure that involves stitching together existing and new AI paradigms, including LLMs as one component. The hypothetical endpoint is a large-scale distributed network of AI systems based on different architectures that each help to represent different elements of human cognition, from content generation to reasoning.

Such an approach is a model other AI researchers have backed, including Databricks CTO Matei Zaharia in a blog post he co-authored on Feb. 18 on the Berkeley Artificial Intelligence Research (BAIR) website.

Goertzel admitted, however, that he "could be wrong" and that we may need a "quantum computer with a million qubits or something."

"My own view is once you get to human-level AGI, within a few years you could get a radically superhuman AGI unless the AGI threatens to throttle its own development out of its own conservatism," Goertzel added. "I think once an AGI can introspect its own mind, then it can do engineering and science at a human or superhuman level. It should be able to make a smarter AGI, then an even smarter AGI, then an intelligence explosion. That may lead to an increase in the exponential rate beyond even what Ray [Kurzweil] thought."

Go here to read the rest:
AI singularity may come in 2027 with artificial 'super intelligence' sooner than we think, says top scientist - Livescience.com

Artificial Intelligence, Real Consequences: The use of Artificial Intelligence platforms in higher-education – The Justice

Before I began to write this article, one of my professors suggested that I use ChatGPT to create a title for this piece. I did not do that, and will be very offended if you think I did. However, I did decide to give ChatGPT a chance and typed, "Can you please create a title for a school newspaper article which features three interviews with professors at Brandeis University discussing the potential benefits and drawbacks of ChatGPT in their respective fields of study and the classrooms in which they teach?" In response, I got:

Exploring the Impact of ChatGPT: Perspectives from Brandeis University Professors

Aside from the temptation of title-making and of lightening one's never-ending workload, AI usage has been a rising concern in the education sector, where it can both serve as a resource and threaten the purpose of education in the first place. I was able to speak with three professors from the Brandeis community, all teaching different subjects and with different experiences regarding the use of ChatGPT and other forms of artificial intelligence in their classrooms.

On Feb. 13, I spoke with Prof. Elizabeth Bradfield (ENG). As a poet, Bradfield believes that AI should have no role in the creative process of writing poetry and other creative writing pieces. ChatGPT could be useful for things like getting lists of poems or finding useful information for a poem, but, Bradfield said, "I still have to do the reading and the thinking." She said using artificial intelligence would be the opposite of creating art.

When talking about the joy and emotions that accompany writing and the writing process, Bradfield added, "Why would I give that away to AI?" As an educator, Bradfield would not encourage her students to use AI to create a poem. If someone handed her a poem created by AI, Bradfield stated, it would be "a huge betrayal of trust. And why would I want to waste my time writing feedback for an AI poem?"

After speaking with Bradfield, I also got the opportunity to have a conversation with Prof. Dylan Cashman (COSI) on Feb. 29. Cashman teaches two Computer Science elective courses as well as a few introductory courses. When first discussing the invention of AI and its rising popularity, Cashman stated that it has "changed a lot of people's lives," given the ethical and professional questions that have arisen out of its increased usage. When asked what measures he would take if a student handed in a coding assignment written using AI, Cashman replied, "I think we are still learning what to do in that case."

On the use of artificial intelligence in elective computer science courses versus introductory ones, Cashman said his greatest concern with the usage of AI in computer science classrooms would be, "Do you care about the product that they are producing, or the process that they undergo while doing it? And I think it's a case-by-case basis by class."

Cashman also mentioned concerns about fairness when grading an assignment completed with AI against one completed without it, as many AI-detection tools are not very accurate, according to Cashman. An increasing concern for Cashman has been maintaining the essence of the learning process: "In a formative assessment, I want them to hit a wall and I want them to get over that wall. That is truly the value of education. If someone uses AI, I worry about that a lot."

However, Cashman believes that in some cases, like editing, writing, and advanced electives more concerned with short-term research, using artificial intelligence can have a positive outcome. As a final remark, Cashman stated, "I think people are trying to decide what policies and cultural norms about AI should be based on how AI is being used right now. And people should be aware of how it will get better."

Finally, on March 1, I was able to speak briefly about AI in the field of legal studies with Prof. Douglas Smith (LGLS), who began working at Brandeis as a Guberman Teaching Fellow. Smith works as the director of Legal and Education Programs with The Right to Immigration Institute. When asked about the use of AI in his professional career, Smith replied, "I used it at a conference we just had, a law and society conference in Puerto Rico. I think it's great. I don't think I would rely on it, but it's great to talk to."

As an educator, Smith is not opposed to the use of ChatGPT by his students when used properly. "I love ChatGPT. I encourage students to use it as a tool, as a research tool, and as a research tool they should cite it," said Smith.

From the various insights of these three educators, the common consensus seems to be that we are still figuring it out. ChatGPT and other artificial intelligence platforms and applications can be useful as a guide or an aiding resource, but they also present bigger problems, like corrupting academic integrity, and carry larger implications for professional fields such as medicine and law.

Editor's Note: Justice Arts & Culture Editor Nemma Kalra '26 is associated with The Right to Immigration Institute and was not consulted on, did not contribute to, and did not edit any part of this article.

Read the original post:
Artificial Intelligence, Real Consequences: The use of Artificial Intelligence platforms in higher-education - The Justice

A.I. Is Learning What It Means to Be Alive – The New York Times

In 1889, a French doctor named François-Gilbert Viault climbed down from a mountain in the Andes, drew blood from his arm and inspected it under a microscope. Dr. Viault's red blood cells, which ferry oxygen, had surged 42 percent. He had discovered a mysterious power of the human body: When it needs more of these crucial cells, it can make them on demand.

In the early 1900s, scientists theorized that a hormone was the cause. They called the theoretical hormone erythropoietin, or "red maker" in Greek. Seven decades later, researchers found actual erythropoietin after filtering 670 gallons of urine.

And about 50 years after that, biologists in Israel announced they had found a rare kidney cell that makes the hormone when oxygen drops too low. It's called the Norn cell, named after the Norse deities who were believed to control human fate.

It took humans 134 years to discover Norn cells. Last summer, computers in California discovered them on their own in just six weeks.

The discovery came about when researchers at Stanford programmed the computers to teach themselves biology. The computers ran an artificial intelligence program similar to ChatGPT, the popular bot that became fluent with language after training on billions of pieces of text from the internet. But the Stanford researchers trained their computers on raw data about millions of real cells and their chemical and genetic makeup.

The researchers did not tell the computers what these measurements meant. They did not explain that different kinds of cells have different biochemical profiles. They did not define which cells catch light in our eyes, for example, or which ones make antibodies.


See more here:
A.I. Is Learning What It Means to Be Alive - The New York Times