Archive for the ‘Artificial General Intelligence’ Category

How to Win the AI War – Tablet Magazine

Virtually everything that everyone has been saying about AI has been misleading or wrong. This is not surprising. The processes of artificial intelligence and its digital workhorse, machine learning, can be mysteriously opaque even to their most experienced practitioners, let alone their most ignorant critics.

But when the public debate about any new technology starts to get out of control and move in dangerous directions, it's time to clue the public and politicians in on what's really happening and what's really at stake. In this case, it's essential to understand what a genuine national AI strategy should look like and why it's crucial for the U.S. to have one.

The current flawed paradigm reads like this: How can the government mitigate the risks and disruptive changes flowing from AI's commercial and private sector? The leading advocate for this position is Sam Altman, CEO of OpenAI, the company that set off the current furor with its ChatGPT application. When Altman appeared before the Senate on May 16, he warned: "I think if this technology goes wrong, it can go quite wrong." He also offered a solution: "We want to work with the government to prevent that from happening."

Just as Altman's volunteering for regulation allows him to use his influence over the process to set rules that he believes will favor his company, government is all too ready to cooperate. Government also sees an advantage in hyping the fear of AI and fitting it into the regulatory model as a way to maintain control over the industry. But given how few members of Congress understand the technology, their willingness to oversee a field that commercial companies founded and have led for more than two decades should be treated with caution.

Instead, we need a new paradigm for understanding and advancing AI, one that will enable us to channel the coming changes to national ends. In particular, our AI policy needs to restore American technological, economic, and global leadership, especially vis-a-vis China, before it's too late.

It's a paradigm that uses public power to unleash the private sector and transform the national landscape in order to win the AI future.

A reasonable discussion of AI has to start by disposing of two misconceptions.

First is the threat of artificial intelligence applications becoming so powerful and pervasive that, at a late stage of their development, they decide to replace humanity, a scenario known as Artificial General Intelligence (AGI). This is the Rise of the Machines fantasy left over from The Terminator movies of the 1980s, when artificial intelligence research was still in its infancy.

The other is that the advent of AI will mean a massive loss of jobs and the end of work itself, as human labor, and even human purpose, is replaced by an algorithm-driven workforce. Fearmongers like to point to the recent Goldman Sachs study that suggested AI could replace more than 300 million jobs in the United States and Europe, while also adding 7 percent to the total value of goods and services around the world.

Most of these concerns stem from the public's misunderstanding of what AI and its internal engine, Machine Learning (ML), can and cannot do.

ML describes a computer's ability to recognize patterns in large sets of data, whether those data are sounds, images, words, or financial transactions. Scientists call the mathematical representation of these data sets a tensor. As long as data can be converted into a tensor, it's ready for ML and its more sophisticated offspring, Deep Learning, which builds algorithms that mimic the brain's neural networks to create self-correcting predictive models through repeated testing against datasets to correct and validate the initial model.
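
To make the tensor idea concrete, here is a minimal sketch in Python: a toy sentence and a one-hot encoding, both chosen purely for illustration, showing how raw data becomes a numeric tensor ready for ML:

```python
# Toy example: convert a sentence into a 2-D tensor (matrix) of one-hot rows.
# The vocabulary and encoding are illustrative; real pipelines use learned
# embeddings, audio spectrograms, pixel arrays, and so on.
import numpy as np

sentence = "the cat sat on the mat"
tokens = sentence.split()
vocab = {word: i for i, word in enumerate(sorted(set(tokens)))}

tensor = np.zeros((len(tokens), len(vocab)))
for row, word in enumerate(tokens):
    tensor[row, vocab[word]] = 1.0  # one-hot: a 1 in the column for this word

print(tensor.shape)  # (6, 5): six words, five distinct vocabulary entries
```

Once sounds, images, or transactions are expressed this way, the same ML machinery applies to all of them.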

The result is a prediction curve based on past patterns (e.g., given the correlation between A and B in the past, we can expect the pattern A-B to appear again in the future). The more data, the more accurate the predictive model becomes. Patterns that were unrecognizable in tens of thousands of examples can suddenly be obvious in the millionth or ten millionth example. They then become the model for writing a ChatGPT essay that can imitate the distinct speech patterns of Winston Churchill, for predicting fluctuations in financial markets, or for defeating an adversary on the battlefield.

AI/ML is all about using pattern recognition to generate prediction models, which constantly sharpen their accuracy through the data feedback loop. It's a profoundly powerful technology, but it's still very far from thinking, or anything approaching human notions of consciousness.

As AI scientist Erik Larson explained in his 2021 book The Myth of Artificial Intelligence, machine learning can never supply real understanding because the analysis of data does not bridge to knowledge of the causal structure of the world, [which is] essential for intelligence. What machine learning does, associating data points with each other, doesn't scale to causal thinking or imagining. An AI program can mimic this kind of intelligence, perhaps enough to fool a human observer. But its inferiority to that observer in thinking, imagining, or creating remains permanent.

Inevitably, AI developments are going to be disruptive (they already are), but not in the way people think or the way the government wants you to think.

The first step is realizing that AI is a bottom-up, not top-down, revolution. It is driven by a wide range of individual entrepreneurs and small companies, as well as the usual mega players like Microsoft, Google, and Amazon. Done right, it's a revolution that means more freedom and autonomy for individual users, not less.

AI can perform many of the menial, repetitive tasks that most of us would associate with human intelligence. It can sort and categorize with speed and efficiency; it can recognize patterns in words and images most of us might miss, and put together known facts and relationships in ways that anticipate the development of similar patterns in the future. As we'll demonstrate, AI's unprecedented power to sharpen the process of predicting what might happen next, based on its insights into what's happened before, actually empowers people to do what they do best: decide for themselves what they want to do.

Any technological revolution so sweeping and disruptive is bound to generate risks, as did the Industrial Revolution in the late eighteenth century and the computer revolution in the late twentieth. But in the end the risks are far outweighed by the endless possibilities. That's why calls for a moratorium on large-scale AI research, or for creating government entities to regulate which AI applications are allowed or banned, not only fly in the face of empirical reality but play directly into the hands of those who want to use AI as a tool for furthering the power of the administrative, or even absolute, state. That kind of centralized, top-down regulatory control is precisely the path that AI development has taken in China. It is also the direction in which many of the leading voices calling for AI regulation in the U.S. would like our country to move.

Critics and AI fearmongers can't escape one ineluctable fact: there is no way to put the AI genie back in its bottle. According to Tracxn Technologies, a company that tracks startups, there were 13,398 AI startups in this country at the end of 2022. A recent Adobe study found that 77 percent of consumers now use some form of AI technology. A McKinsey survey on the state of AI in 2022 found that AI adoption had more than doubled since 2017 (from 20 percent to 50 percent), with 63 percent of businesses expecting investment in AI to increase over the next three years.

Once it's clear what AI can't do, what can it do? This is what Canadian AI experts Ajay Agrawal, Joshua Gans, and Avi Goldfarb explain in their 2022 book, Power and Prediction. What happens with AI prediction, they write, is that "prediction and judgment become decoupled." In other words, AI uses its predictive powers to lay out increasingly exact options for action, but the ultimate decision on which option to choose still belongs to the judgment of the program's user.

Here's where scary predictions that AI will put people out of work need to be put in proper perspective. The recent Goldman Sachs report predicted that the jobs lost or displaced could number as many as 300 million; the World Economic Forum put the number at 85 million by 2025. What these predictions don't take into account is how many jobs will be created thanks to AI, including jobs with increased autonomy and responsibility, since AI/ML will be doing the more tedious chores.

In fact, a January 2022 Forbes article summarized a study by the University of Warwick this way: "What appears clear from the research is that AI and associated technologies do indeed disrupt the labor market with some jobs going and others emerging, but across the board there are more jobs created than lost."

Wide use of AI has the potential to move decision-making down to those who are closest to the problem at hand by expanding their options. But if government is allowed to exercise strict regulatory control over AI, it is likely to both stifle that local innovation and abuse its oversight role to grant the government more power at the expense of individual citizens.

Fundamentally, instead of being distracted by worrying about the downsides of AI, we have to see this technology as every bit as essential to a future growth economy as steam was to the Industrial Revolution or electricity to the second industrial revolution.

The one country that understood early on that a deliberate national AI strategy can make all the difference between following or leading a technological revolution of this scale was China. In 2017, Chinese President Xi Jinping officially set aside $150 billion to make China the first AI-driven nation by 2030. The centerpiece of the plan is a massive police-surveillance apparatus that gathers data on citizens whenever and wherever it can. In a recent U.S. government ranking of companies producing the most accurate facial recognition technology, the top five were all Chinese. It's no wonder that half of all the surveillance cameras in the world today are in China, while companies like Huawei and TikTok are geared to provide the Chinese government with access to data outside China's borders.

By law, virtually all the work that Chinese companies do in AI research and development supports the Chinese military and intelligence services in sharpening their future force posture. Meanwhile, China enjoys a booming export business selling those same AI capabilities to autocratic regimes from Iran and North Korea to Russia and Syria.

Also in 2017, the same year that Xi announced his massive AI initiative, China's People's Liberation Army began using AI's predictive aptitude to give it a decisive edge on the battlefield. AI-powered military applications included enhanced command-and-control functions, swarm technology for hypersonic missiles and UAVs, object- and facial-recognition targeting software, and AI-enabled cyber deterrence.

No calls for an international moratorium will slow down Beijing's work on AI. They should not slow America's efforts, either. That's why former Google CEO Eric Schmidt, who co-authored a book with Henry Kissinger expressing great fears about the future of AI, has also warned that the six-month moratorium on AI research some critics recently proposed would only benefit Beijing. Back in October 2022, Schmidt told an audience that the U.S. is already steadily losing its AI arms race with China.

And yet the United States is where artificial intelligence first started, back in the 1950s. We've been the leaders in AI research and innovation ever since, even if China has made rapid gains: China now hosts more than one thousand major AI firms, all of which have direct ties with the Chinese government and military.

It would clearly be foolish to cede this decisive edge to China. But the key to maintaining our advantage lies in harnessing the technology already out there, rather than painstakingly building new AI models to specific government-dictated requirements, whether that means mandating anti-bias applications or limiting by law what kind of research AI companies are allowed to do.

What about the threat to privacy and civil liberties? Given the broad, ever-growing base of private AI innovation and research, the likelihood of government imposing a China-like monopoly over the technology is smaller than the likelihood that a bad actor, whether state or non-state, will use AI-generated deception and deepfake videos to disrupt and confuse the public during a presidential election or a national crisis.

The best response to the threat, however, is not to slow down but to speed up AI's most advanced developments, including those that will offer means to counter AI fakery. That means expanding the opportunities for the private sector to carry on by maintaining as broad a base for AI innovation as possible.

For example, traditional microprocessors and CPUs are not designed for ML. That's why, with the rise of AI, graphics processing units (GPUs) are in demand. What was once relegated to high-end gaming PCs and workstations is now the most sought-after processor in the public cloud. Unlike CPUs, GPUs come with thousands of cores that speed up the ML training process. Even for running a trained model for inference, more sophisticated GPUs will be key for AI.
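
A rough illustration of that CPU/GPU split, sketched in PyTorch (the layer and batch sizes are arbitrary assumptions): the same computation runs on a CPU, but is dispatched to a GPU's thousands of parallel cores when one is available.

```python
import torch

# Pick the GPU if one is present; otherwise fall back to the CPU.
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

model = torch.nn.Linear(1024, 1024).to(device)  # move the weights to the device
batch = torch.randn(256, 1024, device=device)   # allocate the data there too

output = model(batch)  # on a GPU, this matrix multiply runs across its cores
print(output.shape, "computed on", device)
```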

So will field-programmable gate array (FPGA) processors, which can be tailored for specific types of workloads. Traditional CPUs are designed for general-purpose computing, while FPGAs can be programmed in the field, after they are manufactured, for niche computing tasks such as training ML models.

Government halting or hobbling AI research in the name of a specious assessment of risks is likely to harm developments in both these areas. On the other hand, government spending can foster research and development and help increase the U.S. edge in next-generation AI/ML.

AI/ML is an arena where the United States enjoys a hefty scientific and technological edge, a government willing to spend plenty of money, and obvious strategic and economic advantages in expanding our AI reach. So what's really hampering serious thinking about a national AI strategy?

I fear what we are seeing is a failure of nerve in the face of a new technology, a failure that will cede its future to our competitors, China foremost among them. If we had done this with nuclear technology, the Cold War would have had a very different ending. We can't let that happen this time.

Of course, there are unknown risks with AI, as with any disruptive technology. One is the speed with which AI/ML, especially in its Deep Learning phase, can arrive at predictive results that startle its creators. Similarly, the threat of deepfake videos and other malicious uses of AI is a warning about what can happen when a new technology runs off the ethical rails.

At the same time, the U.S. government's efforts to censor misinformation on social media and the Biden White House's executive order requiring government-developed AI to reflect its DEI ideology fail to address the genuine risks of AI, while using concerns about the technology as a pretext to clamp down on free speech and ideological dissent.

This is as much a matter of confidence in ourselves as anything else. In a recent blog post on Marginal Revolution, George Mason University professor Tyler Cowen expressed the issue this way:

What kind of civilization is it that turns away from the challenge of dealing with more ... intelligence? That has not the self-confidence to confidently confront a big dose of more intelligence? Dare I wonder if such societies might not perish under their current watch, with or without AI?

China is confidently using AI to strengthen its one-party surveillance state. America must summon the confidence to harness the power of AI to our own vision of the future.

Read the rest here:

How to Win the AI War - Tablet Magazine

Fears of the Singularity – Byline Times

Six months ago, the general view was that AI was still some way away from taking over the world. The sudden irruption of ChatGPT onto the scene has changed that.

Advertisers and marketers are delighted to have a quick and effective source of copy, while in offices and business meetings around the world, GPT-written reports and pitches are already a boon. All that is required is the right prompts and a bit of human-effected tweaking of the resulting text, though even this latter step will soon be unnecessary.

In other spheres, matters are not so rosy.

Educators at secondary and tertiary levels are anxious that students' work will be produced by AI, depriving the students themselves of what educates them: the effort. Writers of romantic fiction and erotica are staring into the abyss as publishers recognise that GPT can churn out scores of novels a day, all very plausible and indeed good specimens of their genres, at a great saving: no advances and royalties to authors, scarcely any copy-editing, straight from desktop computer to e-book format in minutes. Or moments.

Do not for a moment imagine that this last point is fanciful. The plot of every Mills and Boon novella is practically identical; names and situations (doctor-nurse, boss-secretary, even with sexes reversed) may change, but the format does not. As it happens, the basic structure of a Mills and Boon novella is the same as that of a Jane Austen novel: boy meets girl, problems ensue, problems are overcome, happiness.

Quite where the changes, good and bad, wrought by ChatGPT will happen, and how far they will go, remains to be seen. These are early days in a dizzyingly rapid process. And ChatGPT is only one of many galloping advances in many areas of AI application, for scarcely any of which are we prepared: no frameworks of management, ethics, sensible regulation or anticipation are in place, and in many respects they cannot be in place, because we do not yet know what ramifying changes there will be.

The great fear that has prompted even some leaders in the AI field to utter cries of caution is artificial general intelligence, AGI. This is envisioned as human-like intelligence on steroids: vastly greater than human intelligence, and therefore capable of taking over the world. As has been well said, AGI might be the last thing humanity produces. Once it exists, it is game over: there will be no controlling it. The fears are apocalyptic enough to have been pooh-poohed not just by wishful thinkers but by others in the AI field itself. But all these voices have been oddly muted since ChatGPT arrived.

The real question, however, is not whether an AI system might be as intelligent as, or more intelligent than, a human being.

The intelligence of an AI system is, in some key respects, a different and more potent thing than human intelligence, a fact already obvious in many of the standard applications of AI, most notably in trawling patterns from vast stores of data: patterns unobservable even to the smartest human, because even they cannot hold all the data compresently in mind and recognise the myriad interconnections constituting the patterns within it.

And, more significantly still, no human mind has the 100% rationality, the remorseless logic, with which an AI system can draw inferences from the mass of data it surveys and the patterns it sees. With that level of data available to it, an AGI will be able to act on the conclusions of those inferences, given that not doing so would be irrational.

There are broadly two ways this could go, dramatically different in outcome for humanity.

One is that the AGI in effect asks itself: what is the most destructive and disruptive thing on the planet? The answer, of course, is human beings. With the level of knowledge it possesses, it will know, or be able to work out in fractions of a second, how to wipe humanity off the face of the Earth.

It will know how to access nuclear power stations and nuclear arsenals, how to override their controls and blow them all up simultaneously, how to release deadly viruses from medical research laboratories worldwide, how to over-activate drainage systems or water outflows from dams or electronically controlled locks on rivers and canals, not simply to prompt widespread floods but to increase pressure on geological fault lines to precipitate earthquakes, because it will have the data showing how mismanagement of water has caused devastating earthquakes in China and other places. But in any case interdiction of water supplies, and their pollution, will be an effective way of killing large numbers of people if they survive the nuclear holocaust already unleashed.

These are probably just a few of the things an AGI would initiate in the first fractions of a second of realising that it would be illogical to permit humanity's continuing existence, given the murderous pressure it exerts on the millions of other life forms on the planet. An arithmetical approach to policy, the utilitarian approach, makes killing off humanity in the interests of all other life forms a no-brainer.

But the other possible outcome is very different. The AGI might have picked up on considerations of ethical and aesthetic value. It might have a way of factoring in the good side of humanity's output over history, and the most treasured aspects of human subjectivity: love, pleasure, enjoyment, creativity, kindness, sympathy, tolerance, friendship. It might conclude that these are things worth preserving and fostering.

It might therefore ask itself what inhibits and corrupts these things and, instead of wiping out humanity, it might wipe out those things instead: the prompts and opportunities for greed, out-group hostility, aggression, selfishness, division, inequality, resentment, ignorance. Key to this is the profit motive, the money-power nexus, money and power being the reverse and obverse of the same (yes) coin.

It could take over the computer systems that run the world's banks and redistribute all the holdings equally among the world's people, and impose a limit on future deposits that take them above the average of everyone else's deposits. It could access lawyers' and accountants' computers and annul the titles to physical and other assets. And so on.

It probably would not do these things, however, because trade would collapse and much of the world would soon starve. So it would come up with an even smarter way to redistribute wealth and stop the relentless profit motive that drives big business and ultra-high-net-worth individuals to ruin the planet's environment, control its politics, increase inequality, foster divisions and animosities to keep people distracted, and even sponsor wars in order to sell arms and distract yet more.

How would it do that? I can think of a couple of ways, laborious and time-consuming (you know, stuff like true democracy, in which all voters are informed and sensible and have rational, effective constitutional arrangements), but because it is The AGI, the god which has created itself out of the seeds sown by Babbage and Turing, it will know far better than any of us how to do it.

I wonder which future the future will be. For AGI is coming.

Read more from the original source:

Fears of the Singularity - Byline Times

OpenAI CEO Says Worldcoin AI Concerns Will Soon Diminish – BeInCrypto

Worldcoin co-founder Sam Altman affirmed that privacy concerns will decrease after Worldcoin open-sources its code.

The OpenAI boss and Worldcoin co-founder suggested that blockchain and crypto could allay fears presented by the threat of artificial superintelligence.

Speaking at the Worldcoin Seoul Meetup last weekend, Worldcoin co-founder Sam Altman said no one has answers to some of artificial intelligence's toughest questions.

Meanwhile, CEO Alex Blania said the Worldcoin blockchain needs 100 million users to reach critical mass.

Worldcoin's South Korean event comes in the wake of the launch of its World App in May. The company wants to build a decentralized digital identity system, coupled with a new wealth distribution model, using Worldcoin.

Its orb technology photographs human irises and uses AI to learn what is truly human. Blania said the new technology's importance will grow as the importance of distinguishing between human and synthetic content grows.

"AGI [Artificial General Intelligence] and Worldcoin are individual ideas, but they are relevant at a time when AGI is coming," said Altman. "Worldcoin is also related to universal basic income, and there will be a lot of economic progress through it."

Altman has set his sights on Korea as the next virtual currency and artificial intelligence hub. He said he would expand investments in Korean AI firms.

Worldcoin has landed in hot water over gaps between its public messaging and user experiences.

An MIT Technology Review report revealed that Worldcoin contractors offered people in developing nations free money in exchange for biometric data. Representatives collected more data than they admitted and failed to acquire meaningful consent.

Respondents received Worldcoin and were told iris imaging was necessary to ensure equitable allocations. This is despite investors Andreessen Horowitz and Worldcoin employees receiving a 10% allocation apiece.

Worldcoin's orb stores iris data encoded as an IrisHash and reportedly uses zero-knowledge proofs to check whether the information already exists on the Worldcoin network.

While the company says data on the orb will eventually be deleted, details on interim handling of user data are scant, except that they will be fodder for AI algorithms to learn about humans.

European regulators impose fines of up to 4% of global revenues for inadequately protecting the data of European citizens.

View post:

OpenAI CEO Says Worldcoin AI Concerns Will Soon Diminish - BeInCrypto

Your Ultimate Guide to Chat GPT and Other Abbreviations – KDnuggets

ML (machine learning) is an approach to solving difficult computational problems: instead of coding a solution in a programming language, you build an algorithm that learns the solution from data samples.

AI (artificial intelligence) is a field of computer science dealing with problems (e.g., image classification, working with human language) that are difficult to solve using traditional programming. ML and AI go hand in hand, with ML being a tool to solve problems formulated in AI.
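
A minimal sketch of that distinction, using scikit-learn with toy data (the rule y = 2x + 1 is an assumption chosen for illustration): the rule is never written into the program; the model recovers it from samples alone.

```python
from sklearn.linear_model import LinearRegression

# The samples implicitly encode y = 2x + 1; we never code that formula.
X = [[1], [2], [3], [4]]
y = [3, 5, 7, 9]

model = LinearRegression().fit(X, y)  # "learning the solution from data samples"
print(model.predict([[10]]))          # ~[21.0], recovered from the examples alone
```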

AGI (artificial general intelligence) is the correct term for what popular culture usually implies by AI: the ability of computers to achieve human-like intellectual capabilities and broad reasoning. It is still the holy grail for researchers working in the AI field.

An artificial neural network (ANN) is a class of ML algorithms and data structures (or models, for short), so called because it was inspired by the structure of biological neural tissue. But an ANN doesn't completely mimic all the biological mechanisms behind it. Rather, ANNs are complicated mathematical functions based on ideas from the biology of living species.

Neural networks are layered structures consisting of uniform units interconnected with each other in a network. The way these units are interconnected is called the architecture. Each connection has an associated number called a weight, and the weights store the information the model learns from data. So when you read that a model has 2 billion parameters, it means there are 2 billion connections (and weights) in the model, which roughly designates the information capacity of the neural network.
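
A small sketch of what "parameters" means in practice (PyTorch; the layer sizes are arbitrary): each connection between units carries one learnable weight, and counting them gives the model's parameter count.

```python
import torch.nn as nn

model = nn.Sequential(
    nn.Linear(100, 50),  # 100*50 weights + 50 biases = 5,050 parameters
    nn.ReLU(),
    nn.Linear(50, 10),   # 50*10 weights + 10 biases = 510 parameters
)

total = sum(p.numel() for p in model.parameters())
print(total)  # 5560 -- the same bookkeeping behind "2 billion parameters"
```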

Neural networks have been studied since the 1980s but made a real impact when the computer games industry introduced cheap personal supercomputers known as graphics processing units (GPUs). Researchers adapted this hardware for the neural network training process and achieved impressive results. One of the first deep learning architectures, the convolutional neural network (CNN), was able to carry out sophisticated image recognition that was difficult with classical computer vision algorithms. Since then, ML with neural networks has been rebranded as deep learning, with "deep" referring to the complicated NN architectures the networks are able to explore.

I'd recommend videos by Grant Sanderson, available on his animated math channel.

To work with human language using computers, language must be defined mathematically. This approach should be sufficiently generic to include the distinctive features of every language. In 2003, researchers discovered how to represent language with neural networks and called the result the neural probabilistic language model, or LM for short. This works like the predictive text in a mobile phone: given some initial sequence of words (or tokens), the model predicts the next possible words with their respective probabilities. Continuing this process using previously generated words as input (this is autoregression), the model can generate text in the language on which it was trained.
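
Here is a hedged sketch of that autoregressive loop, using the small, publicly available GPT-2 model from the Hugging Face transformers library as a stand-in, with greedy next-token selection (the simplest possible decoding strategy) for clarity:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

tokens = tokenizer("The weather tomorrow will be", return_tensors="pt").input_ids
for _ in range(10):                      # generate ten tokens, one at a time
    logits = model(tokens).logits        # scores for every possible next token
    next_token = logits[0, -1].argmax()  # greedy: take the most probable one
    tokens = torch.cat([tokens, next_token.view(1, 1)], dim=1)  # autoregression

print(tokenizer.decode(tokens[0]))
```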

Representing sequences of items was a challenging problem for neural networks. There were several attempts to solve the problem (mostly with variations of recurrent neural networks), which yielded some important ideas (e.g., word embeddings, the encoder-decoder architecture, and the attention mechanism). In 2017, a group of Google researchers proposed a new NN architecture that they called a transformer. It combined all these ideas with an effective practical implementation. It was designed to solve the language translation problem (hence the name) but proved to be efficient at capturing the statistical properties of any sequence data.

OpenAI experimented with transformers to build a neural probabilistic language model. The results of their experiments are called GPT (generative pre-trained transformer) models. "Pre-trained" means the transformer NN was first trained on a large body of text mined from the internet, with its decoder part then used for language representation and text generation. There have been several generations of GPT models.

Given the enormous number of parameters GPT models have (in fact, you need a huge computational cluster with hundreds to thousands of GPUs to train and serve these models), they were called Large Language Models (LLMs).

The original GPT-3 is still a word prediction engine and thus is mostly of interest to AI researchers and computational linguists. Given some initial seed or prompt, it can generate text indefinitely, which makes little practical sense. The OpenAI team continued to experiment with the model, trying to fine-tune it to treat prompts as instructions to execute. They fed in a large dataset of human-curated dialogues and invented a new approach (RLHF, reinforcement learning from human feedback) to significantly speed up this process, with another neural network acting as a validator agent (typical in AI research). They released a model called InstructGPT as an MVP based on a smaller GPT-3 version, and in November 2022 released a full-featured version called ChatGPT. With its simple chatbot and web UI, it changed the IT world.

Given that LLMs are just sophisticated statistical machines, the generation process can go in an unexpected and unpleasant direction. This type of result is sometimes called an AI hallucination, but from the algorithmic perspective it is still valid, however unexpected it may be to human users.

Raw LLMs therefore require treatment: additional fine-tuning with human validators and RLHF, as previously mentioned. This is done to align LLMs with human expectations, and not surprisingly the process itself is called alignment. It is a long and tedious procedure with considerable human work involved; it could be considered LLM quality assurance. The alignment of the models is what distinguishes OpenAI/Microsoft ChatGPT and GPT-4 from their open-source counterparts.

Neural networks are black boxes (a huge array of numbers with some structure on top). There are methods to explore and debug their internals, but the exceptional generalization qualities of GPTs remain unexplained. This is the main reason behind the ban movement: some researchers think we are playing with fire (science fiction gives us fascinating scenarios of AGI birth and technological singularity) before we get a better understanding of the processes underlying LLMs.

The most popular include:

GPTs are the most mature models, with API access provided by OpenAI and Microsoft Azure OpenAI services (if you need a private subscription). But this is the frontier of AI, and many interesting things have happened since the release of ChatGPT. Google has built its PaLM-2 model; Meta open-sourced its LLaMA models for researchers, which spurred lots of tweaks and enhancements (e.g., Alpaca from Stanford) and optimizations (now you can run LLMs on your laptop and even your smartphone).

Hugging Face provides BLOOM, StarCoder, and HuggingChat, which are completely open source, without the LLaMA research-only limitation. Databricks trained its own completely open-source Dolly model. Lmsys.org offers its own Vicuna LLM. Nvidia's deep learning research team is developing its Megatron-LM model. The GPT4All initiative is also worth mentioning.

However, all these open-source alternatives are still behind OpenAI's major tech (especially from the alignment perspective), but the gap is rapidly closing.

The easiest way is to use OpenAI's public service or its platform API playground, which offers lower-level access to the models and more control over the network's inner workings (specifying the system context, tuning generation parameters, etc.). But you should carefully review their service agreements, since they use user interactions for additional model improvements and training. Alternatively, you can choose Microsoft Azure OpenAI services, which provide the same API and tools but with private model instances.
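
For instance, here is a minimal sketch of such an API call as the openai Python package worked around the time of writing (the key placeholder and prompts are assumptions), showing a system context and a tuned generation parameter:

```python
import openai

openai.api_key = "YOUR_API_KEY"  # placeholder; use your own credentials

response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[
        {"role": "system", "content": "You are a terse technical assistant."},
        {"role": "user", "content": "Explain a tensor in one sentence."},
    ],
    temperature=0.2,  # generation parameter: lower means more deterministic
)
print(response.choices[0].message.content)
```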

If you are more adventurous, you can try the LLM models hosted by Hugging Face, but you'll need to be more skilled with Python and data science tooling.

Denis Shipilov is an experienced Solutions Architect with a wide range of expertise, from distributed systems design to BigData and Data Science related projects.

Read more here:

Your Ultimate Guide to Chat GPT and Other Abbreviations - KDnuggets

Generative AI Will Have Profound Impact Across Sectors – Rigzone News

Generative AI will have a profound impact across industries.

That's what Amazon Web Services (AWS) believes, according to Hussein Shel, an Energy Enterprise Technologist for the company, who said Amazon has invested heavily in the development and deployment of artificial intelligence and machine learning for more than two decades, for both customer-facing services and internal operations.

"We are now going to see the next wave of widespread adoption of machine learning, with the opportunity for every customer experience and application to be reinvented with generative AI, including the energy industry," Shel told Rigzone.

"AWS will help drive this next wave by making it easy, practical, and cost-effective for customers to use generative AI in their business across all three layers of the technology stack, including infrastructure, machine learning tools, and purpose-built AI services," he added.

Looking at some of the applications and benefits of generative AI in the energy industry, Shel outlined that AWS sees the technology playing a pivotal role in increasing operational efficiencies, reducing health and safety exposure, enhancing customer experience, minimizing the emissions associated with energy production, and accelerating the energy transition.

"For example, generative AI could play a pivotal role in addressing operational site safety," Shel said.

"Energy operations often occur in remote, and sometimes hazardous and risky, environments. The industry has long sought solutions that help to reduce trips to the field, which directly correlates to reduced worker health and safety exposure," he added.

"Generative AI can help the industry make significant strides towards this goal. Images from cameras stationed at field locations can be sent to a generative AI application that could scan for potential safety risks, such as faulty valves resulting in gas leaks," he continued.

Shel said the application could generate recommendations for personal protective equipment and tools and equipment for remedial work, highlighting that this would help to eliminate an initial trip to the field to identify issues, minimize operational downtime, and also reduce health and safety exposure.

"Another example is reservoir modeling," Shel noted.

"Generative AI models can be used for reservoir modeling by generating synthetic reservoir models that can simulate reservoir behavior," he added.

"GANs [generative adversarial networks] are a popular generative AI technique used to generate synthetic reservoir models. The generator network of the GAN is trained to produce synthetic reservoir models that are similar to real-world reservoirs, while the discriminator network is trained to distinguish between real and synthetic reservoir models," he went on to state.

"Once the generative model is trained, it can be used to generate a large number of synthetic reservoir models that can be used for reservoir simulation and optimization, reducing uncertainty and improving hydrocarbon production forecasting," Shel stated.

"These reservoir models can also be used for other energy applications where subsurface understanding is critical, such as geothermal and carbon capture and storage," Shel said.
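
To make the generator/discriminator idea concrete, here is a heavily simplified GAN sketch in PyTorch. A flat vector stands in for a reservoir model, and all shapes, sizes, and training details are illustrative assumptions, not any AWS or industry implementation:

```python
import torch
import torch.nn as nn

G = nn.Sequential(nn.Linear(16, 64), nn.ReLU(), nn.Linear(64, 32))  # generator
D = nn.Sequential(nn.Linear(32, 64), nn.ReLU(), nn.Linear(64, 1))   # discriminator
opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCEWithLogitsLoss()

real = torch.randn(128, 32)  # stand-in for real reservoir property vectors

for step in range(200):
    # Train D to tell real samples from G's synthetic ones.
    fake = G(torch.randn(128, 16)).detach()
    d_loss = bce(D(real), torch.ones(128, 1)) + bce(D(fake), torch.zeros(128, 1))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # Train G to produce samples that D classifies as real.
    fake = G(torch.randn(128, 16))
    g_loss = bce(D(fake), torch.ones(128, 1))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()

# After training, G(torch.randn(n, 16)) yields n synthetic samples for simulation.
```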

Highlighting a third example, Shel pointed to a generative AI-based digital assistant.

"Data access is a continuous challenge the energy industry is looking to overcome, especially considering much of its data is decades old and sits in various systems and formats," he said.

"Oil and gas companies, for example, have decades of documents created throughout the subsurface workflow in different formats, i.e., PDFs, presentations, reports, memos, well logs, and Word documents, and finding useful information takes a considerable amount of time," he added.

"According to one of the top five operators, engineers spend 60 percent of their time searching for information. Ingesting all of those documents into a generative AI-based solution augmented by an index can dramatically improve data access, which can lead to making better decisions faster," Shel continued.
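
A hedged sketch of the index-plus-generative-AI pattern described here: rank documents by similarity to a query, then hand the best match to a generative model as context. TF-IDF stands in for a production embedding index, and the documents are invented for illustration:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

documents = [
    "Well log for field A, 1987: porosity measurements by depth.",
    "Memo: revised seismic interpretation of the B structure.",
    "Completion report for well A-12, including casing diagram.",
]

vectorizer = TfidfVectorizer()
index = vectorizer.fit_transform(documents)  # the "index" over the documents

query = vectorizer.transform(["porosity of field A"])
scores = cosine_similarity(query, index)[0]
print(documents[scores.argmax()])  # retrieved context to pass to an LLM prompt
```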

When asked if he thought all oil and gas companies will use generative AI in some way in the future, Shel said he did, but added that it's important to stress that it's still early days when it comes to defining the potential impact of generative AI on the energy industry.

"At AWS, our goal is to democratize the use of generative AI," Shel told Rigzone.

"To do this, we're providing our customers and partners with the flexibility to choose the way they want to build with generative AI, such as building their own foundation models with purpose-built machine learning infrastructure; leveraging pre-trained foundation models as base models to build their applications; or using services with built-in generative AI without requiring any specific expertise in foundation models," he added.

"We're also providing cost-efficient infrastructure and the correct security controls to help simplify deployment," he continued.

The AWS representative outlined that AI applied through machine learning will be one of the most transformational technologies of our generation, tackling some of humanity's most challenging problems, augmenting human performance, and maximizing productivity.

"As such, responsible use of these technologies is key to fostering continued innovation," Shel outlined.

AWS took part in the Society of Petroleum Engineers (SPE) International Gulf Coast Section's recent Data Science Convention event in Houston, Texas, which was attended by Rigzone's President. The event, which is described as the annual flagship event of the SPE-GCS Data Analytics Study Group, hosted representatives from the energy and technology sectors.

Last month, in a statement sent to Rigzone, GlobalData noted that machine learning has the potential to transform the oil and gas industry.

"Machine learning is a rapidly growing field in the oil and gas industry," GlobalData said in the statement.

"Overall, machine learning has the potential to improve efficiency, increase production, and reduce costs in the oil and gas industry," the company added.

In a report on machine learning in oil and gas published back in May, GlobalData highlighted several key players, including BP, ExxonMobil, Gazprom, Petronas, Rosneft, Saudi Aramco, Shell, and TotalEnergies.

Speaking to Rigzone earlier this month, Andy Wang, the Founder and Chief Executive Officer of data solutions company Prescient, said data science is the future of oil and gas.

Wang highlighted that data science includes many data tools, including machine learning, which he noted will be an important part of the future of the sector. When asked if he thought more and more oil companies would adopt data science and machine learning, Wang responded positively on both counts.

Back in November 2022, OpenAI, which describes itself as an AI research and deployment company whose mission is to ensure that artificial general intelligence benefits all of humanity, introduced ChatGPT. In a statement posted on its website on November 30 last year, OpenAI said ChatGPT is a sibling model to InstructGPT, which is trained to follow an instruction in a prompt and provide a detailed response.

In April this year, Rigzone looked at how ChatGPT will affect oil and gas jobs. To view that article, click here.

To contact the author, email andreas.exarheas@rigzone.com

Go here to read the rest:

Generative AI Will Have Profound Impact Across Sectors - Rigzone News