Archive for the ‘Artificial Intelligence’ Category

What if artificial intelligence isn't the apocalypse? – EL PAÍS USA

In just six months, Google searches for artificial intelligence have multiplied by five. ChatGPT, launched on November 30, 2022, already has tens of millions of users. And Sam Altman, the CEO of OpenAI, the company that created ChatGPT, has already appeared before the United States Congress to explain himself and answer questions about the impact of AI. By comparison, it took Mark Zuckerberg 14 years to go to Washington to talk about the role of Facebook in society.

Altman has been oddly blunt about the technology that his firm produces. "My worst fears are that we cause significant harm to the world... I think if this technology goes wrong, it can go quite wrong," he said while testifying. However, some analysts have noted that his words about his supposed fears may be carefully calculated, intended to encourage more stringent regulation that would hinder the rise of competitors to OpenAI, which already occupies the dominant position in the sector.

Heavy, bombastic phrases about the explosion of AI have already spawned their own memes. The term "criti-hype," coined in 2021 to describe criticism that itself feeds the hype around a new technology, has become popularized thanks to ChatGPT. A pioneering example of criti-hype was the case of Cambridge Analytica, when the company was accused of harvesting Facebook data to understand and influence the electorate during the 2016 presidential election.

The pinnacle of these statements was the departure of Geoffrey Hinton, known as "the godfather of AI," from Google. He left the company to be able to speak freely about the dangers of AI: "From what we know so far about the functioning of the human brain, our learning process is probably less efficient than that of computers," he told EL PAÍS in an interview after departing from Google.

Meanwhile, the U.K. government's outgoing chief scientific adviser has just said that AI "could be as big as the Industrial Revolution was." There are already groups trying to organize so that their trades are not swept away by this technology.

There are too many prophecies and fears about AI to list. But there's also the possibility that the impact of this technology will actually be bearable. What if everything ended up going slower than predicted, with fewer shake-ups in society and the economy? This opinion is valid, but it hasn't been deeply explored amidst all the hype. While it's hard to deny the impact of AI in many areas, changing the world isn't so simple. Previous revolutions have profoundly changed our way of life, but humans have managed to adapt without much turbulence. Could AI also end up being a subtle revolution?

"At the very least, [AI has caused] a big structural change in what software can do," says Benedict Evans, an independent analyst and former partner at Andreessen Horowitz, one of Silicon Valley's leading venture capital firms. "It will probably allow a lot of new things to be possible." This makes people compare it to the iPhone. It could also be more than that: it could be more comparable to the personal computer, or to the graphical user interface, which allows interaction with the computer through graphical elements on the screen.

These new AI and machine learning (ML) technologies obviously carry a lot of weight in the tech world. "My concern is not that AI will replace humans," says Meredith Whittaker, president of Signal, a popular messaging app, "but I'm deeply concerned that companies will use it to demean and diminish the position of their workers today. The danger is not that AI will do the job of workers: it's that the introduction of AI by employers will be used to make these jobs worse, further exacerbating inequality."

It must be noted that the new forms of AI still make a lot of mistakes. José Hernández-Orallo, a researcher at the Leverhulme Centre for the Future of Intelligence at Cambridge University, has been studying these so-called hallucinations for years. "At the moment, [AI is] at the level of a know-it-all brother-in-law. But in the future, [it may be] an expert, perhaps knowing more about some subjects than others. This is what causes us anxiety, because we don't yet know in which subjects [the AI] is most reliable," he explains.

"It's impossible to build a system that never fails, because we'll always be asking questions that are more and more complex. At the moment, the systems are capable of the best and the worst... They're very unpredictable," he adds.

But if this technology isn't so mature, why has it had such a sudden and broad impact in the past few months? There are at least two reasons, says Hernández-Orallo. The first is commercial pressure: "The biggest problem comes because there is commercial, media and social pressure for these systems to always respond to something, even when they don't know how. If higher thresholds were set, these systems would fail less, but they would almost always answer 'I don't know,' because there are thousands of ways to summarize a text."
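The thresholding trade-off Hernández-Orallo describes can be sketched in a few lines: a system only answers when its confidence in the best candidate clears a cutoff, and abstains otherwise. This is an illustrative toy, not how any real chatbot is implemented; the candidate answers and probabilities below are invented.

```python
# Toy illustration of answer thresholds: raising the bar yields fewer
# wrong answers, but many more "I don't know" replies.
def answer(candidates, threshold):
    """candidates: dict mapping a candidate answer to the model's
    confidence in it (a probability between 0 and 1)."""
    best, confidence = max(candidates.items(), key=lambda kv: kv[1])
    if confidence >= threshold:
        return best
    return "I don't know"

# Hypothetical confidences for the question "What is the capital of Spain?"
guesses = {"Madrid": 0.55, "Barcelona": 0.30, "Seville": 0.15}

print(answer(guesses, threshold=0.5))  # low bar: answers "Madrid"
print(answer(guesses, threshold=0.9))  # high bar: "I don't know"
```

The same mechanism explains the commercial pressure: a product that abstains most of the time feels broken to users, so deployed systems tend to run with a low threshold and answer even when unsure.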

The second reason, he notes, is human perception: "We have the impression that an AI system must be 100% correct, like a mixture of a calculator and an encyclopedia. But this isn't the case. For language models, generating a plausible but false text is easy. The same happens with audio, video and code. Humans do it all the time, too. It's especially evident in children, who respond with phrases that sound good but may not make sense. With kids, we just tell them 'that's funny,' but we don't go to the pediatrician and say that my son hallucinates a lot. In the case of both children and certain types of AI, [there is an ability] to imitate things as best as possible," he explains.

The large impact on the labor market will fade when it's clear that there are things that the AI doesn't properly complete. Similarly, when the AI is questioned and we are unsure of the answer it offers, disillusionment will set in. For instance, if a student asks a chatbot about a specific book that they haven't read, it may be difficult for them to determine whether the synopsis is completely reliable. In some cases, even a margin of doubt will be unacceptable. It's likely that, in the future, humans using AI will even assume (and accept) that the technology will make certain errors. But with all the hype, we haven't reached that stage yet.

The long-term realization of AI's limited impact still doesn't mean that the main fear, that AI will become more advanced than human intelligence, will go away. In the collective imagination, this fear becomes akin to the concept of a machine taking control of the world's software and destroying humans.

"People use this concept for everything," Hernández-Orallo shrugs. "The questions [that really need to be asked when thinking about] a general-purpose system like GPT-4 are: how much capacity does it have? Does it need to be more powerful than a human being? And what kind of human being: an average one, the smartest one? What tasks is it specifically going to be used for? All of [the answers to these questions] are very poorly defined at this point."

Matt Beane, a professor at UC Santa Barbara, opines that ever since we first imagined machines that could replace us, we have feared them. "We have strong evidence that shows how we rely on criticism and fear, as well as imagination and assertiveness, when it comes to thinking about new technologies."

Fear has been the most recurring emotion when it comes to this issue. "We seem to fall into a kind of trance around these [AI] systems, telling these machines about our experiences," says Whittaker. "Reflexively, we think that they're human... We begin to assume that they're listening to us." And if we look at the history of the systems that preceded ChatGPT, it's notable that, while those systems were much less sophisticated, the reaction was often the same. People locked themselves into surrogate intimate relationships with these systems when they used them. And back then, just like today, experts were predicting that these systems would soon (always soon, never now) be able to replace humans entirely.


Go here to read the rest:
What if artificial intelligence isn't the apocalypse? - EL PAÍS USA

Adopting Artificial Intelligence: Things Leaders Need to Know – InfoQ.com

Artificial intelligence (AI) can help companies identify new opportunities and products, and stay ahead of the competition. Senior software managers should understand the basics of how this new technology works, why agility is important in developing AI products, and how to hire or train people for new roles.

Zorina Alliata spoke about leading AI change at OOP 2023 Digital.

"In recent studies, 57% of companies said they will use AI and ML in the next three years," Alliata explained:

Chances are, your company already uses some form of AI or ML. If not, there is a high chance that they will do so in the very near future in order to stay competitive.

Alliata mentioned that AI and ML are increasingly being used in a variety of industries, from movie recommendations to self-driving cars, and are expected to have a major impact on businesses in the coming years.

Software leaders should understand how the delivery of ML models differs from regular software development. "To manage the ML development process correctly, it is important to have agility by using a methodology that allows for quick pivots, iterations, and continuous improvement," Alliata said.

According to Alliata, software leaders should be prepared to hire or train for new roles such as data scientist, data engineer, and ML engineer. She mentioned that such roles might not yet exist in current software engineering teams, and that they require very specific skills.

InfoQ interviewed Zorina Alliata about adopting AI and ML in companies.

InfoQ: Why should companies care about artificial intelligence and machine learning?

Zorina Alliata: AI and ML can help companies to make better decisions, increase efficiency, and reduce costs. With AI and ML they can automate repetitive processes and improve the customer experience significantly.

A few years ago when I had a fender bender with my car, I had to communicate with my insurance company through phone calls, and take time off work to take my car to specific repair shops. Just last year when my teenage son bumped his car in the parking lot, he used his mobile app to communicate with the insurance company right away, upload images of the car damage, get a rental car, and arrange for his car to be dropped off for repairs by a technician. He could see the status of the repairs online, he received automatic reports and his car was delivered at home when fixed. Behind his pleasant experience, there was a lot of AI and ML - image recognition, chatbots, sentiment analysis.

Another thing companies can benefit from is mining insights from data. For example, looking at all your sales data, the algorithms might find patterns that were not previously known. A common use for this is in segmenting and clustering populations in order to better define a focused message. If you can cluster all people with a high propensity to buy a certain type of insurance policy, then your marketing campaigns can be much more effective.
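The segmenting and clustering Alliata mentions is typically done with an off-the-shelf algorithm such as k-means. A minimal sketch using scikit-learn; the features and numbers below are invented for illustration, not real customer data:

```python
# Hypothetical example: cluster customers by age and past premium spend,
# so a marketing campaign can target one segment specifically.
import numpy as np
from sklearn.cluster import KMeans

# Each row: [age, annual premiums paid, in $1000s] -- made-up data.
customers = np.array([
    [25, 1.2], [27, 1.0], [30, 1.5],   # younger, low spend
    [52, 8.0], [55, 7.5], [60, 9.1],   # older, high spend
])

kmeans = KMeans(n_clusters=2, n_init=10, random_state=0).fit(customers)
print(kmeans.labels_)  # two clear segments, e.g. [0 0 0 1 1 1]
```

On real sales data the number of clusters is unknown in advance and features need scaling, but the workflow is the same: fit, inspect the segments, then tailor the message to the cluster with the highest propensity to buy.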

InfoQ: What should senior software managers know about artificial intelligence and machine learning?

Alliata: Let me give you an example. We sometimes do what we call unsupervised learning - that is, we analyse huge quantities of data just to see what patterns we can find. There is no clear variable to optimize, there is no defined end result.

Many years ago, I read about this airline that used unsupervised learning on their data and the machine came back with the following insight: it found that people who were born on a Tuesday were more likely to order vegetarian meals on a flight. This was not a question anyone had posed, or an insight anyone was ready for.

As a software development manager, how do you plan for whatever weird or amazing insight the algorithms will deliver? We just might not even know what we are looking for until later in the project. This is very different from regular software development where we have a very clear outcome stated from the beginning, for example: display all flyers and their meals on a webpage.
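The airline anecdote is an example of open-ended pattern discovery: scanning attributes for associations nobody thought to ask about. A toy sketch of the idea, with invented passenger records mirroring the story:

```python
# Toy pattern discovery: tally how often each attribute value co-occurs
# with ordering a vegetarian meal, without a question posed in advance.
from collections import Counter

# Invented records: (day of week born, meal ordered).
passengers = [
    ("Tuesday", "vegetarian"), ("Tuesday", "vegetarian"),
    ("Monday", "chicken"), ("Friday", "beef"),
    ("Tuesday", "vegetarian"), ("Sunday", "chicken"),
]

veg_by_day = Counter(day for day, meal in passengers if meal == "vegetarian")
total_by_day = Counter(day for day, _ in passengers)

# Vegetarian-order rate per birth day -- the unexpected "insight" pops out.
rates = {day: veg_by_day[day] / total_by_day[day] for day in total_by_day}
print(max(rates, key=rates.get))  # "Tuesday"
```

This is why planning such projects is hard: the interesting output is whichever association happens to be strongest in the data, which no requirements document can specify up front.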

InfoQ: What can companies do to prepare themselves for AI adoption?

Alliata: Education comes first. As a leader, you should understand what the benefits of using AI and ML are for your company, and understand a bit about how the technology works. Also, it is your task to communicate and openly discuss how AI will change the work and how it will affect the people in their current jobs.

Having a solid strategy and a solid set of business use cases that will provide real value is a great way to get started, and to use as your message and vision.

Promoting lean budgeting and agile teams will help quickly show value before large investments in AI resources and technology are made.

Establishing a culture of continuous improvement and continuous learning is also necessary. The technology is changing constantly and the development teams need time to keep up with the newest research and innovation.

See the original post here:
Adopting Artificial Intelligence: Things Leaders Need to Know - InfoQ.com

Daniel Schmachtenberger: Artificial Intelligence and The … – Resilience

(Conversation recorded on May 4th, 2023)

Show Summary

On this episode, Daniel Schmachtenberger returns to discuss a surprisingly overlooked risk to our global systems and planetary stability: artificial intelligence. Through a systems perspective, Daniel and Nate piece together the biophysical history that has led humans to this point, heading towards (and beyond) numerous planetary boundaries and facing geopolitical risks, all with existential consequences. How does artificial intelligence not only add to these risks, but also accelerate the entire dynamic of the metacrisis? What is the role of intelligence vs. wisdom on our current global pathway, and can we change course? Does artificial intelligence have a role to play in creating a more stable system, or will it be the tipping point that drives our current one out of control?

About Daniel Schmachtenberger

Daniel Schmachtenberger is a founding member of The Consilience Project, aimed at improving public sensemaking and dialogue.

The throughline of his interests has to do with ways of improving the health and development of individuals and society, with a virtuous relationship between the two as a goal.

Towards these ends, he's had a particular interest in the topics of catastrophic and existential risk; civilization and institutional decay and collapse, as well as progress; collective action problems; social organization theories; and the relevant domains in philosophy and science.

Watch on YouTube

Show Notes & Links to Learn More:

PDF Transcript

00:00 Daniel Schmachtenberger info + TGS episodes part 1 and part 2 and part 3 and part 4 + part 5

Overview of Nate's story: Animated videos, Economics for the Future: Beyond the Superorganism

Daniel's recommendations on further AI learning: Eliezer Yudkowsky on Bankless, David Bohm & Krishnamurti Conversations, Iain McGilchrist's The Master and His Emissary, Robert Miles Videos on AI

00:03 ChatGPT, AI art and programming, Deep Fakes

04:25 Humans are a social species

05:17 Money is a claim on energy

05:25 Fossil energy is incredibly powerful but finite

05:39 Other non-renewable inputs to the economy

06:44 Money is primarily created through commercial banks but increasingly through central banks

06:52 Interest is not created when money is created

07:50 How AI obscures the truth and hurts social discourse

08:54 How AI affects jobs

09:30 Humans' unique problem-solving intelligence

11:22 100 million users in 6 weeks for ChatGPT: faster adoption than any technology ever

12:31 Cognitive bias: Nate's work on cognitive bias

16:01 Indigenous genocide and culture destruction, extinct and endangered species

23:06 Unabomber critique of the advancement of technology

23:21 Indigenous perspectives that resist the adoption of certain tech

24:34 Genghis Khan, Alexander the Great

26:35 Adoption of the plow and loss of animism

28:56 Antonio Turiel + TGS Podcast

31:40 Humans' long history of environmental destruction

32:45 We are hitting planetary boundaries everywhere

34:20 Facebook's advertising algorithms' adverse societal effects

36:25 Golden Retrievers' co-evolution with humans

39:32 Jevons Paradox

40:05 Since the 1990s, we've increased efficiency by 36%, but energy use has increased by 63%

41:02 Orders of effects

45:50 Maximum Power Principle

47:32 There are lots of different types of intelligence

48:20 Other hominids

53:09 Human ability to have abstractions of time and space

54:38 Laozi's Tao Te Ching

57:14 Studies showing people dying of obesity are dying of nutrient deficiency

1:02:15 Co-selecting factors of evolution: homeodynamics

1:04:30 Tyson Yunkaporta

1:05:00 Samantha Sweetwater

1:07:23 The Sabbath

1:13:04 Chesterton's Fence

1:13:50 Dialectics

1:21:08 E.O. Wilson & David Sloan Wilson Multilevel Selection

1:24:25 Recursive Innovation

1:26:15 Dunbar's Number

1:30:09 Hobbesian State of Nature

1:32:10 Humans are not specifically adapted to any particular environment

1:32:25 Neoteny in humans.

1:39:37 Economic Comparative Advantage

1:40:30 Nates 2023 Earth Day Talk

1:43:25 Origins and types of capitalism

1:46:02 Ilya Prigogine

1:46:22 Moloch

1:46:55 Adam Smith Invisible Hand

1:50:32 Eliezer Yudkowsky

1:50:35 Nick Bostrom

1:51:05 AI systems' prowess at chess and other military strategy games

1:54:03 Swarming algorithms and AI regulation of flight patterns

1:56:40 Humans' lack of intuition for exponential curves

2:04:04 WarGames

2:04:06 Mutually Assured Destruction

2:05:40 OpenAI

2:06:12 Anthropic

2:06:52 Motivated Reasoning

2:08:38 Technology is Not Values Neutral paper

2:15:45 Unknown unknowns Donald Rumsfeld

2:19:50 Shareholder Value

1:33:25 Energy and resource needs of AI

2:39:15 Eliezer Yudkowsky on Bankless

2:39:40 Machine Intelligence Research Institute

2:49:07 Risk of Artificial General Intelligence

2:59:55 David Bohm & Krishnamurti Conversations

3:02:59 Iain McGilchrist's The Master and His Emissary

3:10:01 Robert Miles Videos on AI

Teaser photo credit: CC BY-SA 3.0, https://commons.wikimedia.org/w/index.php?curid=6533149

More:
Daniel Schmachtenberger: Artificial Intelligence and The ... - Resilience

Artificial intelligence catalyzes gene activation research and uncovers rare DNA sequences – Phys.org

This article has been reviewed according to Science X's editorial process and policies. Editors have highlighted the following attributes while ensuring the content's credibility: fact-checked, peer-reviewed publication, trusted source, proofread.

Artificial intelligence has exploded across our news feeds, with ChatGPT and related AI technologies becoming the focus of broad public scrutiny. Beyond popular chatbots, biologists are finding ways to leverage AI to probe the core functions of our genes.

Previously, University of California San Diego researchers who investigate the DNA sequences that switch genes on had used artificial intelligence to identify an enigmatic puzzle piece tied to gene activation, a fundamental process involved in growth, development and disease. Using machine learning, a type of artificial intelligence, School of Biological Sciences Professor James T. Kadonaga and his colleagues discovered the downstream core promoter region (DPR), a "gateway" DNA activation code that's involved in the operation of up to a third of our genes.

Building from this discovery, Kadonaga and researchers Long Vo ngoc and Torrey E. Rhyne have now used machine learning to identify "synthetic extreme" DNA sequences with specifically designed functions in gene activation.

Publishing in the journal Genes & Development, the researchers tested millions of different DNA sequences through machine learning (AI) by comparing the DPR gene activation element in humans versus fruit flies (Drosophila). By using AI, they were able to find rare, custom-tailored DPR sequences that are active in humans but not fruit flies and vice versa. More generally, this approach could now be used to identify synthetic DNA sequences with activities that could be useful in biotechnology and medicine.

"In the future, this strategy could be used to identify synthetic extreme DNA sequences with practical and useful applications. Instead of comparing humans (condition X) versus fruit flies (condition Y) we could test the ability of drug A (condition X) but not drug B (condition Y) to activate a gene," said Kadonaga, a distinguished professor in the Department of Molecular Biology.

"This method could also be used to find custom-tailored DNA sequences that activate a gene in tissue 1 (condition X) but not in tissue 2 (condition Y). There are countless practical applications of this AI-based approach. The synthetic extreme DNA sequences might be very rare, perhaps one in a million. If they exist, they could be found by using AI."

Machine learning is a branch of AI in which computer systems continually improve and learn based on data and experience. In the new research, Kadonaga, Vo ngoc (a former UC San Diego postdoctoral researcher now at Velia Therapeutics) and Rhyne (a staff research associate) used a method known as support vector regression to train machine learning models with 200,000 established DNA sequences based on data from real-world laboratory experiments. These were the targets presented as examples for the machine learning system. They then fed 50 million test DNA sequences into the machine learning systems for humans and fruit flies and asked them to compare the sequences and identify unique sequences within the two enormous data sets.
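The train-then-screen workflow described here, fitting a support vector regression model on sequences with experimentally measured activity and then scoring a huge pool of unseen sequences, can be sketched with scikit-learn. The encoding, data, and activity signal below are invented stand-ins, not the study's actual pipeline:

```python
# Sketch of train-then-screen: fit SVR on sequences with "measured"
# activity, then score a pool of candidates and keep the top one.
import numpy as np
from sklearn.svm import SVR

BASES = "ACGT"

def one_hot(seq):
    """Encode a DNA string as a flat one-hot vector (a common choice;
    the study's actual feature encoding may differ)."""
    vec = np.zeros(len(seq) * 4)
    for i, base in enumerate(seq):
        vec[i * 4 + BASES.index(base)] = 1.0
    return vec

rng = np.random.default_rng(0)

# Made-up training set: random 10-bp sequences whose "activity" is just
# the A-count plus noise, standing in for wet-lab measurements.
train_seqs = ["".join(rng.choice(list(BASES), 10)) for _ in range(200)]
train_activity = [s.count("A") + rng.normal(0, 0.1) for s in train_seqs]

model = SVR(kernel="rbf").fit([one_hot(s) for s in train_seqs], train_activity)

# Screen a pool of candidates for the highest-scoring "extreme" sequence.
pool = ["".join(rng.choice(list(BASES), 10)) for _ in range(1000)]
scores = model.predict([one_hot(s) for s in pool])
best = pool[int(np.argmax(scores))]
print(best)
```

The real study trained on 200,000 measured sequences and screened 50 million, but the shape of the computation is the same, which is why screening in silico is so much cheaper than running each candidate as a wet-lab experiment.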

While the machine learning systems showed that human and fruit fly sequences largely overlapped, the researchers focused on the core question of whether the AI models could identify rare instances where gene activation is highly active in humans but not in fruit flies. The answer was a resounding "yes." The machine learning models succeeded in identifying human-specific (and fruit fly-specific) DNA sequences. Importantly, the AI-predicted functions of the extreme sequences were verified in Kadonaga's laboratory by using conventional (wet lab) testing methods.

"Before embarking on this work, we didn't know if the AI models were 'intelligent' enough to predict the activities of 50 million sequences, particularly outlier 'extreme' sequences with unusual activities. So, it's very impressive and quite remarkable that the AI models could predict the activities of the rare one-in-a-million extreme sequences," said Kadonaga, who added that it would be essentially impossible to conduct the comparable 100 million wet lab experiments that the machine learning technology analyzed since each wet lab experiment would take nearly three weeks to complete.

The rare sequences identified by the machine learning system serve as a successful demonstration and set the stage for other uses of machine learning and other AI technologies in biology.

"In everyday life, people are finding new applications for AI tools such as ChatGPT. Here, we've demonstrated the use of AI for the design of customized DNA elements in gene activation. This method should have practical applications in biotechnology and biomedical research," said Kadonaga. "More broadly, biologists are probably at the very beginning of tapping into the power of AI technology."

More information: Long Vo ngoc et al, Analysis of the Drosophila and human DPR elements reveals a distinct human variant whose specificity can be enhanced by machine learning, Genes & Development (2023). DOI: 10.1101/gad.350572.123

Journal information: Genes & Development

Originally posted here:
Artificial intelligence catalyzes gene activation research and uncovers rare DNA sequences - Phys.org

Pittsburgh researchers using artificial intelligence to help cancer patients – WTAE Pittsburgh

A laboratory in Lawrenceville is harnessing the intellectual talent of Pittsburgh's research institutions to target cancer. We speak with a man on a mission to help cancer patients by using artificial intelligence.

It starts with the cryogenically frozen tumor. Predictive Oncology CEO Raymond Vennare doesn't like the term tumor to refer to the cancer they study.

"I refer to them as human beings. These human beings are repurposing their lives for us for a purpose, to be able to find cures to help their descendants; that's their legacy," Vennare said.

Vennare is not a scientist. He's a businessman who builds biotech companies. He's had a bullseye on cancer for 15 years.

What's different about this venture? The mission: to get cancer drugs that work to market, years faster.

"And what would have taken three to five years and millions of dollars, we were able to do in a couple of cycles in 11, 12, 13 weeks," Vennare said.

"In pre-trial drug development, tumor heterogeneity, patient heterogeneity isn't introduced early enough," said Amy Ewing, a senior scientist at Predictive Oncology.

Translation: Predictive Oncology's scientists are focusing on cell biology, molecular biology, computational biology and bioinformatics to determine how cancer drugs work on real human tumor tissue.

A bank of invaluable tumor samples allows them to crunch that data faster.

Remember, those samples are people.

"When I think about cancer, I see their faces," Vennare said. "I don't see cells on a computer screen."

Vennare sees his brother, Alfred.

"He was my first best friend. [When] I grew up, Al, Alfred was always there. And whenever I needed something, Alfred was always there."

He also thinks of his parents.

"In my case, my mother and my father and my brother sequentially died of cancer, which means I was the caregiver. My family was the caregiver, my siblings and my sister were caregivers for five consecutive years," he said.

Ewing thinks of her father.

"I lost my father to prostate cancer about a year ago," she said. "So to me, I have a deeper understanding now of what it means to have another day, or another month, or another year. I think that's really what gets me up in the morning now is to say that I want to carry on his legacy and help somebody else have more time with their family members."

With a board of scientific advisors that includes an astronaut and some of the top scientists in the country, Vennare says ethics is part of the ongoing artificial intelligence conversation.

"The purpose is to make the job of the scientist easier, so they can expedite the process of discovery," he said. "It's not AI that's going to do that, it's the scientists that are going to do that."

Vennare says Predictive Oncology is agnostic, meaning the company seeks to help drug companies quickly zero in on effective drugs for all kinds of cancer.

Read more:
Pittsburgh researchers using artificial intelligence to help cancer patients - WTAE Pittsburgh