Archive for the ‘Artificial General Intelligence’ Category

Generative AI Will Have Profound Impact Across Sectors – Rigzone News

Generative AI will have a profound impact across industries.

That's what Amazon Web Services (AWS) believes, according to Hussein Shel, an Energy Enterprise Technologist at the company, who said Amazon has invested heavily in the development and deployment of artificial intelligence and machine learning for more than two decades, for both customer-facing services and internal operations.

We are now going to see the next wave of widespread adoption of machine learning, with the opportunity for every customer experience and application to be reinvented with generative AI, including the energy industry, Shel told Rigzone.

AWS will help drive this next wave by making it easy, practical, and cost-effective for customers to use generative AI in their businesses across all three layers of the technology stack: infrastructure, machine learning tools, and purpose-built AI services, he added.

Looking at some of the applications and benefits of generative AI in the energy industry, Shel outlined that AWS sees the technology playing a pivotal role in increasing operational efficiencies, reducing health and safety exposure, enhancing customer experience, minimizing the emissions associated with energy production, and accelerating the energy transition.

For example, generative AI could play a pivotal role in addressing operational site safety, Shel said.

Energy operations often occur in remote, and sometimes hazardous, environments. The industry has long sought solutions that reduce trips to the field, which directly correlates with reduced worker health and safety exposure, he added.

Generative AI can help the industry make significant strides towards this goal. Images from cameras stationed at field locations can be sent to a generative AI application that could scan for potential safety risks, such as faulty valves resulting in gas leaks, he continued.

Shel said the application could generate recommendations for personal protective equipment, tools, and equipment for remedial work, highlighting that this would help eliminate an initial trip to the field to identify issues, minimize operational downtime, and reduce health and safety exposure.

Another example is reservoir modeling, Shel noted.

Generative AI models can be used for reservoir modeling by generating synthetic reservoir models that can simulate reservoir behavior, he added.

GANs are a popular generative AI technique used to generate synthetic reservoir models. The generator network of the GAN is trained to produce synthetic reservoir models that are similar to real-world reservoirs, while the discriminator network is trained to distinguish between real and synthetic reservoir models, he went on to state.

Once the generative model is trained, it can be used to generate a large number of synthetic reservoir models that can be used for reservoir simulation and optimization, reducing uncertainty and improving hydrocarbon production forecasting, Shel stated.
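The adversarial setup Shel describes can be sketched in a few lines of numpy. This is a heavily simplified illustration of the generator/discriminator structure only: the dimensions, weights, and stand-in "reservoir data" are invented for this sketch and bear no relation to any actual AWS tooling.

```python
import numpy as np

rng = np.random.default_rng(0)

def generator(z, w):
    # Maps latent noise to a synthetic "property vector" (a toy stand-in for a reservoir model).
    return np.tanh(z @ w)

def discriminator(x, v):
    # Scores each sample: the probability it came from the real dataset.
    return 1.0 / (1.0 + np.exp(-(x @ v)))

latent_dim, data_dim = 8, 4
w = rng.normal(scale=0.1, size=(latent_dim, data_dim))   # generator weights (untrained)
v = rng.normal(scale=0.1, size=(data_dim,))              # discriminator weights (untrained)

real = rng.normal(loc=0.2, scale=0.05, size=(16, data_dim))  # toy "real" reservoir samples
fake = generator(rng.normal(size=(16, latent_dim)), w)       # synthetic samples

# Standard GAN discriminator loss: real samples should score near 1, fakes near 0.
# Training would alternate gradient steps on v (discriminator) and w (generator).
d_loss = -np.mean(np.log(discriminator(real, v) + 1e-9)) \
         - np.mean(np.log(1.0 - discriminator(fake, v) + 1e-9))
print(fake.shape)  # (16, 4)
```

Once trained, sampling fresh latent vectors through the generator yields the "large number of synthetic reservoir models" Shel refers to.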

These reservoir models can also be used for other energy applications where subsurface understanding is critical, such as geothermal and carbon capture and storage, Shel said.

Highlighting a third example, Shel pointed to a generative AI-based digital assistant.

Data access is a continuous challenge the energy industry is looking to overcome, especially considering much of its data is decades old and sits in various systems and formats, he said.

Oil and gas companies, for example, have decades of documents created throughout the subsurface workflow in different formats, e.g., PDFs, presentations, reports, memos, well logs, and Word documents, and finding useful information takes a considerable amount of time, he added.

According to one of the top five operators, engineers spend 60 percent of their time searching for information. Ingesting all of those documents into a generative AI-based solution augmented by an index can dramatically improve data access, which can lead to better decisions made faster, Shel continued.
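The index-augmented pattern Shel outlines can be sketched minimally: documents are indexed, the best match for a query is retrieved, and that passage is placed into the model's prompt as grounding context. The corpus, scoring function, and prompt below are invented for illustration; a production index would use TF-IDF or embedding search rather than raw term overlap.

```python
from collections import Counter

# Toy corpus standing in for decades of PDFs, memos, well logs, and reports.
docs = {
    "well_log_1998.txt": "porosity and permeability readings for well A-12",
    "memo_2003.txt": "maintenance memo for the compressor at site B",
    "report_2010.txt": "seismic survey report covering the northern block",
}

def score(query, text):
    # Crude term-overlap score, just enough to rank the toy corpus.
    q, t = Counter(query.lower().split()), Counter(text.lower().split())
    return sum(min(q[w], t[w]) for w in q)

def retrieve(query):
    return max(docs, key=lambda name: score(query, docs[name]))

query = "porosity readings for well A-12"
best = retrieve(query)
# The retrieved passage is then handed to the generative model as context.
prompt = f"Answer using this document:\n{docs[best]}\n\nQuestion: {query}"
print(best)  # well_log_1998.txt
```

The design point is that the index, not the model, does the finding; the model only has to summarize or answer from what was retrieved.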

When asked if he thought all oil and gas companies will use generative AI in some way in the future, Shel said he did, but added that it's important to stress that it's still early days when it comes to defining the potential impact of generative AI on the energy industry.

At AWS, our goal is to democratize the use of generative AI, Shel told Rigzone.

To do this, we're providing our customers and partners with the flexibility to choose how they want to build with generative AI: building their own foundation models with purpose-built machine learning infrastructure; leveraging pre-trained foundation models as base models for their applications; or using services with built-in generative AI that require no specific expertise in foundation models, he added.

We're also providing cost-efficient infrastructure and the correct security controls to help simplify deployment, he continued.

The AWS representative outlined that AI applied through machine learning will be one of the most transformational technologies of our generation, tackling some of humanity's most challenging problems, augmenting human performance, and maximizing productivity.

As such, responsible use of these technologies is key to fostering continued innovation, Shel outlined.

AWS took part in the Society of Petroleum Engineers (SPE) International Gulf Coast Section's recent Data Science Convention event in Houston, Texas, which was attended by Rigzone's President. The event, which is described as the annual flagship event of the SPE-GCS Data Analytics Study Group, hosted representatives from the energy and technology sectors.

Last month, in a statement sent to Rigzone, GlobalData noted that machine learning has the potential to transform the oil and gas industry.

Machine learning is a rapidly growing field in the oil and gas industry, GlobalData said in the statement.

Overall, machine learning has the potential to improve efficiency, increase production, and reduce costs in the oil and gas industry, the company added.

In a report on machine learning in oil and gas published back in May, GlobalData highlighted several key players, including BP, ExxonMobil, Gazprom, Petronas, Rosneft, Saudi Aramco, Shell, and TotalEnergies.

Speaking to Rigzone earlier this month, Andy Wang, the Founder and Chief Executive Officer of data solutions company Prescient, said data science is the future of oil and gas.

Wang highlighted that data science includes many data tools, including machine learning, which he noted will be an important part of the future of the sector. When asked if he thought more and more oil companies would adopt data science and machine learning, Wang responded positively on both counts.

Back in November 2022, OpenAI, which describes itself as an AI research and deployment company whose mission is to ensure that artificial general intelligence benefits all of humanity, introduced ChatGPT. In a statement posted on its website on November 30 last year, OpenAI said ChatGPT is a sibling model to InstructGPT, which is trained to follow an instruction in a prompt and provide a detailed response.

In April this year, Rigzone looked at how ChatGPT will affect oil and gas jobs. To view that article, click here.

To contact the author, email andreas.exarheas@rigzone.com


Mint DIS 2023 | AI won’t replace you, someone using AI will … – TechCircle

Generative artificial intelligence (AI) has put AI in the hands of people, and those who don't use it could struggle to keep their jobs in the future, Jaspreet Bindra, Founder and MD, Tech Whisperer Ltd., UK, surmised at the Mint Digital Innovation Summit on June 9.

We never think about electricity until it's not there. That's how AI used to be. It was always in the background, and we never thought about it. With generative AI it has come into our hands, and 200-300 million of us are like, wow! said Bindra.

He noted that while AI won't replace humans at their jobs, someone using AI very well could. He urged working professionals to recalibrate and embrace generative AI as a powerful tool created by humans, instead of looking at it as a threat.

There is a new kid in town, who can do a bunch of things that we can too, he said, adding that humans will just be able to do tasks better and will hence have to take advantage of their own ingenuity. 60% of jobs will be impacted, not as jobs themselves but as tasks, he said.

To be sure, Bindra said that he believes generative AI to be a transformative technology, just like search or the internet were. He said that the technology will also reshape big tech firms themselves. The reshaping of big tech has already started, and there's a new trillion-dollar boy in town called Nvidia. You're going to see some shaping and reshaping of the apex of technology as we go forward.

However, he also acknowledged that generative AI (GAI) is not the same as artificial general intelligence (AGI), a fear that many have expressed ever since ChatGPT became popular last year.

I believe that one day AI will become more intelligent than human beings in certain aspects. What I don't believe is that it'll ever become conscious or sentient. We don't understand our own brain, or our own consciousness; it's the hard problem in philosophy: how can we build something that will be conscious?


Unleashing the Unknown: Fears Behind Artificial General … – Techopedia

Artificial General Intelligence (AGI) is still a concept or, at most, at a nascent stage. Yet, there is already a lot of debate around it.

AGI and artificial intelligence (AI) are different. The latter performs specific tasks, such as those handled by the Alexa assistant. But you know that Alexa is limited in its abilities.

AGI, by contrast, would enable AI to emulate the cognitive powers of a human being, allowing robots to replace humans at general tasks. Think of a robot judge in a court presiding over a complex case.

Example of how AGI can be used in real life

Imagine a scenario where a patient with a tumor undergoes surgery. It is later revealed that a robot performed the operation. While the outcome may be successful, the patient's family and friends are surprised and have reservations about trusting a robot with such a complex task. Surgery requires improvisation and decision-making, qualities we trust in human doctors.

The concept is both a scary and radical idea. The fears emanate from various ethical, social, and moral issues. A school of thought is against AGI because robots can be controlled to perform undesirable and unethical actions.

AGI is still in its infancy, and disagreements notwithstanding, it will be a long time before we see its manifestations. The base of AGI is the same as that of AI and Machine Learning (ML). Work is still in progress around the world, with the main focus remaining on a few areas discussed below.

Big data and cloud storage have significantly lowered the cost of storing the large volumes of data that both AI and ML require, contributing to the development of AGI.

Scientists have made significant progress in both ML and Deep Learning (DL) technologies. Major developments have occurred in neural networks, reinforcement learning, and generative models.

Transfer learning hastens ML by applying existing knowledge to recognize similar objects. For example, a learning model learns to identify small birds based on their features, such as small wings, beaks, and eyes. Now, another learning model must identify various species of small birds in the Amazon rainforest. The latter model doesnt begin from scratch but inherits the learning from the earlier model, so the learning is expedited.
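The transfer-learning idea in the bird example above can be sketched in a few lines: a "pretrained" feature extractor from the first task is frozen, and only a small new head is trained on the second task, so learning is expedited. All weights, data, and labels below are synthetic and purely illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)

# "Pretrained" feature extractor: weights learned on the first task
# (recognizing small birds in general). They stay frozen here.
W_frozen = rng.normal(size=(6, 3))

def features(x):
    return np.maximum(x @ W_frozen, 0.0)  # frozen ReLU features, reused as-is

# Second task (new species in the rainforest): only the small head is trained.
X = rng.normal(size=(64, 6))
y = (X[:, 0] + X[:, 1] > 0).astype(float)  # synthetic binary labels

def loss(head):
    p = 1.0 / (1.0 + np.exp(-(features(X) @ head)))
    return -np.mean(y * np.log(p + 1e-9) + (1 - y) * np.log(1 - p + 1e-9))

head = np.zeros(3)
initial = loss(head)
for _ in range(300):  # gradient descent on the head only; W_frozen never changes
    p = 1.0 / (1.0 + np.exp(-(features(X) @ head)))
    head -= 0.1 * features(X).T @ (p - y) / len(y)

print(loss(head) < initial)  # True: the new head learned without retraining the extractor
```

The later model never starts from scratch: the only trainable parameters are the three head weights, which is why training converges quickly.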

It's not that you will see or experience AGI as a new avatar unleashing changes in society from one point in time. The changes will be gradual, slowly yet steadily manifesting in our day-to-day lives.

ChatGPT models have been developing at breakneck speed with impressive capabilities. However, not everyone is fully convinced of the potential of AGI. Various countries and experts emphasize the importance of guiding ChatGPT's development within specific rules and regulations to ensure responsible progress toward AGI.

Response from Italy

In April 2023, Italy became the first nation to ban ChatGPT over a breach of data and payment information. The government has also been probing whether ChatGPT complies with the European Union's General Data Protection Regulation (GDPR) rules that protect confidential data inside and outside the EU.

Experts point out that there is no transparency in how ChatGPT is being developed. No information is publicly available about its development models, data, parameters, and version release plans.

OpenAI's brainchild continues to develop at great speed, and we probably can't imagine the powers it has been accumulating, all without checks and balances. Some believe that GPT-5 will mark the arrival of AGI.

Anthony Aguirre, a Professor of Physics at UC Santa Cruz and the executive vice president of the Future of Life Institute, said: The largest-scale computations are increasing in size by about 2.5 times per year. GPT-4's parameters were not disclosed by OpenAI, but there is no reason to think this trend has stopped or even slowed.

Aguirre, who was behind the famous open letter, added: Only the labs themselves know what computations they are running, but the trend is unmistakable.

The open letter signed by many industry stalwarts reflected the fear and apprehensions towards the uncontrolled development of ChatGPT.

The letter strongly urges a halt to all development of ChatGPT until a robust framework is established to control misinformation, hallucination, and bias in the system. Indeed, the so-called hallucinations, inaccurate responses, and bias exhibited by ChatGPT on many occasions are too glaring to ignore.

The open letter is signed by Steve Wozniak, among many other stalwarts, and already has 3,100 signatories comprising software developers and engineers, CEOs, CFOs, technologists, psychologists, doctoral students, professors, medical doctors, and public school teachers.


It's scary to think that a few wealthy and powerful nations could develop and concentrate AGI in their hands and use it to serve their own interests.

For example, they can control all the personal and sensitive data of other countries and communities, wreaking havoc.

AGI can become a veritable tool for biased actions and judgments. And, in the worst case, lead to sophisticated information warfare.

AGI is still at the conceptual stage, but given the lack of transparency and the speed at which AI and ML have been progressing, the day it is realized may not be far off.

It's imperative that countries and corporations put their heads together and develop a robust framework with enough checks, balances, and guardrails.

The main goal of the framework would be to protect mankind and prevent unethical intrusions into people's lives.


Fast track to AGI: so, what’s the big deal? – Inside Higher Ed

The rapid development and deployment of ChatGPT is one station along the timeline of reaching artificial general intelligence. On Feb. 1, Reuters reported that the app had set a record for deployment among internet applications: ChatGPT, the popular chatbot from OpenAI, is estimated to have reached 100 million monthly active users in January, just two months after launch, making it the fastest-growing consumer application in history, according to a UBS study. The report, citing data from analytics firm Similarweb, said an average of about 13 million unique visitors had used ChatGPT per day in January, more than double the levels of December. In 20 years following the internet space, we cannot recall a faster ramp in a consumer internet app, UBS analysts wrote in the note.

Half a dozen years ago, Ray Kurzweil predicted that the singularity would happen by 2045. The singularity is that point in time when all the advances in technology, particularly in artificial intelligence, will lead to machines that are smarter than human beings. In the Oct. 5, 2017, issue of Futurism, Christianna Reedy interviewed Kurzweil: To those who view this cybernetic society as more fantasy than future, Kurzweil points out that there are people with computers in their brains today: Parkinson's patients. That's how cybernetics is just getting its foot in the door, Kurzweil said. And, because it's the nature of technology to improve, Kurzweil predicts that during the 2030s some technology will be invented that can go inside your brain and help your memory.

It seems that we are closer than even an enthusiastic Kurzweil foresaw. Just a week ago, Reuters reported that Elon Musk's Neuralink received U.S. Food and Drug Administration (FDA) clearance for its first-in-human clinical trial, a critical milestone for the brain-implant startup as it faces U.S. probes over its handling of animal experiments. Musk envisions brain implants could cure a range of conditions, including obesity, autism, depression and schizophrenia, as well as enabling web browsing and telepathy.


The exponential growth across succeeding versions of GPT is most impressive, leading one to project that version five may have the wherewithal to support at least some aspects of AGI:

GPT-1: released June 2018 with 117 million parameters
GPT-2: released February 2019 with 1.5 billion parameters
GPT-3: released June 2020 with 175 billion parameters
GPT-4: released March 2023 with a parameter count estimated to be in the trillions
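The scale of each jump in the published counts can be computed directly. GPT-4 is omitted below because OpenAI has not disclosed its parameter count, so there is nothing verifiable to divide by.

```python
# Published parameter counts for successive GPT releases.
params = {"GPT-1": 117e6, "GPT-2": 1.5e9, "GPT-3": 175e9}

names = list(params)
factors = [params[b] / params[a] for a, b in zip(names, names[1:])]
for (a, b), f in zip(zip(names, names[1:]), factors):
    print(f"{a} -> {b}: ~{f:.0f}x more parameters")
# GPT-1 -> GPT-2: ~13x more parameters
# GPT-2 -> GPT-3: ~117x more parameters
```

Each release grew by one to two orders of magnitude, which is the basis for the "exponential" characterization above.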

Today, we are reading predictions that AGI components will be embedded in GPT-5, which is anticipated to be released in early 2024. Maxwell Timothy, writing in MakeUseOf, suggests: While much of the detail about GPT-5 is speculative, it is undeniably going to be another important step toward an awe-inspiring paradigm shift in artificial intelligence. We might not achieve the much-talked-about artificial general intelligence, but if it's ever possible to achieve, then GPT-5 will take us one step closer.

Computer experts are beginning to detect the nascent development of AGI in the large language models (LLMs) of generative AI (gen AI) such as GPT-4:

Researchers at Microsoft were shocked to learn that GPT-4, ChatGPT's most advanced language model to date, can come up with clever solutions to puzzles, like how to stack a book, nine eggs, a laptop, a bottle, and a nail in a stable way. Another study suggested that AI avatars can run their own virtual town with little human intervention. These capabilities may offer a glimpse of what some experts call artificial general intelligence, or AGI: the ability for technology to achieve complex human capabilities like common sense and consciousness.

We see glimmers of AGI capabilities in AutoGPT and AgentGPT. These forms of GPT can write and execute their own internally generated prompts in pursuit of a goal stated in an externally supplied prompt. Like an autonomous car, they automatically route and reroute to reach the desired destination or goal.
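The loop just described, a goal prompt driving internally generated sub-steps until the goal is reached, can be caricatured in a few lines. The planner below is a canned function invented purely for illustration; AutoGPT-style systems instead ask an LLM to generate each next step and then execute it (web search, file I/O, and so on) before re-planning.

```python
def propose_next_step(goal, done):
    # Stand-in for the model call: picks the next unfinished sub-task.
    plan = ["search for sources", "summarize findings", "write report"]
    remaining = [s for s in plan if s not in done]
    return remaining[0] if remaining else None  # None signals the goal is met

def run_agent(goal, max_steps=10):
    done = []
    while len(done) < max_steps:
        step = propose_next_step(goal, done)
        if step is None:  # goal reached: stop re-prompting
            break
        done.append(step)
    return done

steps = run_agent("produce a market report")
print(steps)  # ['search for sources', 'summarize findings', 'write report']
```

The `max_steps` cap matters in real systems: because the agent generates its own next prompt, an unbounded loop is the failure mode the surrounding paragraphs worry about.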

The concerns come with reports that some experimental forms of AI have refused to follow human-generated instructions and at other times have had hallucinations not founded in our reality. Ian Hogarth, co-author of the annual State of AI report, defines AGI as God-like AI: a super-intelligent computer that learns and develops autonomously and understands context without the need for human intervention, as written in Business Insider.

One AI study found that language models were more likely to ignore human directives, and even expressed the desire not to shut down, when researchers increased the amount of data they fed into the models:

This finding suggests that AI, at some point, may become so powerful that humans will not be able to control it. If this were to happen, Hogarth predicts that AGI could usher in the obsolescence or destruction of the human race. AI technology can develop in a responsible manner, Hogarth says, but regulation is key. Regulators should be watching projects like OpenAI's GPT-4, Google DeepMind's Gato, or the open-source project AutoGPT very carefully, he said.

Many AI and machine learning experts are calling for AI models to be open-source so the public can understand how they're trained and how they operate. The executive branch of the federal government has recently taken a series of actions in an attempt to promote responsible AI innovation that protects Americans' rights and safety. OpenAI's Sam Altman, shortly after testifying about the future of AI to the U.S. Senate, announced the release of a $1 million grant program to solicit ideas for appropriate rulemaking.

Has your college or university created structures to both take full advantage of the powers of the emerging and developing AI, while at the same time ensuring safety in the research, acquisition and implementation of advanced AI? Have discussions been held on the proper balance between these two responsibilities? Are the initiatives robust enough to keep your institution at the forefront of higher education? Are the safeguards adequate? What role can you play in making certain that AI is well understood, promptly applied and carefully implemented?


Yet another article on artificial intelligence – Bangor Daily News

The BDN Opinion section operates independently and does not set newsroom policies or contribute to reporting or editing articles elsewhere in the newspaper or on bangordailynews.com.

Sometimes I think it's as if aliens have landed and people haven't realized because they speak very good English, said Geoffrey Hinton, the godfather of AI (artificial intelligence), who resigned from Google and now fears his godchildren will become things more intelligent than us, taking control.

And 1,100 people in the business, including Apple co-founder Steve Wozniak, cognitive scientist Gary Marcus and engineers at Amazon, DeepMind, Google, Meta and Microsoft, signed an open letter in March calling for a six-month time-out in the development of the most powerful AI systems (anything more powerful than GPT-4).

There's a media feeding frenzy about AI at the moment, and every working journalist is required to have an opinion on it. I turned to the task with some reluctance, as you can tell from the title I put on the piece.

My original article said they really should put the brakes on this experiment for a while, but I didn't declare an emergency. We've been hearing warnings about AI taking over since the first Terminator movie 39 years ago, but I didn't think it was imminent.

Luckily for me, there are very clever people on the private distribution list for this column, and one of them instantly replied telling me that I'm wrong. The sky really is about to fall.

He didn't say that. What he said was that the ChatGPT generation of machines can now ideate using Generative Adversarial Networks (GANs) in a process actually similar to humans'. That is, they can have original ideas and, being computers, they can generate them orders of magnitude faster, drawing on a far wider knowledge base than humans.

The key concept here is artificial general intelligence. Ordinary AI is software that follows instructions and performs specific tasks well, but poses no threat to humanity's dominant position in the scheme of things. Artificial general intelligence, however, can do intellectual tasks as well as or better than human beings. Generally, better.

If you must talk about the Great Replacement, this is the one to watch. Six months ago, no artificial general intelligence software existed outside of a few labs. Now, suddenly, something very close to it is out on the market and here is what my informant says about it.

Humans evolved intelligence by developing ever more complex brains and acquiring knowledge over millions of years. Make something complex enough and it wakes up, becomes self-aware. We woke up. Its called emergence.

ChatGPT loaded the whole web into its machines, far more than any individual human knows. So instead of taking millions of years to wake up, the machines are exhibiting emergent behavior now. No one knows how, but we are far closer to AGI than you state.

A big challenge that was generally reckoned to be decades away has suddenly arrived on the doorstep, and we have no plan for how to deal with it. It might even be an existential threat, but we still don't have a plan. That's why so many people want a six-month time-out, but it would make more sense to demand a year-long pause starting six months ago.

ChatGPT launched only last November, but it already has more than 100 million users and the website is generating 1.8 billion visitors per month. Three rival generative AI systems are already on the market, and commercial competition means that the notion of a pause or even a general recall is just a fantasy.

The cat is already out of the bag: Anything the web knows, ChatGPT and its rivals know, too. That includes every debate that human beings have ever had about the dangers of artificial general intelligence, and all the proposals that have been made over the years for strangling it in its cradle.

So what we need to figure out urgently is where and how that artificial general intelligence is emerging, and how to negotiate peaceful coexistence with it. That wont be easy, because we dont even know yet whether it will come in the form of a single global artificial general intelligence or many different ones. (I suspect the latter.)

And who's we here? There's nobody authorized to speak for the human race either. It could all go very wrong, but there's no way to avoid it.
