Archive for the ‘Alphago’ Category

AI predicts the structure of all known proteins and opens a new universe for science – EL PAÍS USA

AlphaFold's prediction of the structure of vitellogenin, an essential protein for all animals that lay eggs. DeepMind

DeepMind's artificial intelligence (AI) software has predicted the structure of nearly every known protein, about 200 million molecules. Knowing the structure of these molecules will help scientists understand the biology of every living thing on the planet, as well as how devastating diseases like malaria, Alzheimer's and cancer develop.

"We're at the beginning of a new era of digital biology," said Demis Hassabis, the AI and neuroscience expert who is the principal developer of AlphaFold, the neural network system that has almost completely solved one of the biggest challenges in the field of biology.

A child chess prodigy and expert video gamer, Hassabis is a British citizen who in 2010 founded DeepMind, a company that creates artificial intelligence systems capable of learning like humans. In 2013, DeepMind developed a system that surpassed human-level performance on Atari video games. The following year, Google announced that it had bought the company for US$500 million. In 2017, DeepMind's AlphaGo system beat the world's top players of Go, the highly complex Asian board game often compared to chess. Hassabis then focused his company on a much bigger challenge: predicting the 3D shapes of proteins by reading their gene sequences, written in DNA letters.

Knowing the 3D structure of these molecules is essential for understanding how they function, but it is an immensely difficult problem to solve. Some have compared it with trying to put together a jigsaw puzzle with tens of thousands of blank pieces.

Without advanced technology, figuring out the structure or shape of a single protein composed of 100 basic units (amino acids) could take up to 13.7 billion years, the age of the universe. Some scientists, using electron microscopy or huge particle accelerators such as the one at the European Synchrotron Radiation Facility in Grenoble (France), reduced the problem-solving time to several years. But Google's AlphaFold system can determine the structure of a protein in just a few seconds.
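For a sense of why brute-force approaches fail, here is a back-of-the-envelope sketch of the combinatorial argument behind estimates like the one above. The three-conformations-per-amino-acid figure is an assumption borrowed from the classic Levinthal-style illustration, not a number from the article.

```python
# Back-of-the-envelope sketch of why brute-force protein folding is intractable.
# Assumption (not from the article): each amino acid can adopt ~3 backbone
# conformations, the figure commonly used in Levinthal-style estimates.

residues = 100                      # a small protein, as in the article
conformations = 3 ** residues       # ~5.2e47 possible shapes
rate = 1e12                         # assume we could test a trillion shapes per second
seconds_per_year = 3600 * 24 * 365

years = conformations / rate / seconds_per_year
print(f"{conformations:.2e} conformations -> ~{years:.2e} years of brute-force search")
# On the order of 1e28 years, vastly exceeding the ~1.4e10-year age of the
# universe, which is why exhaustive search is hopeless.
```

Whatever exact numbers one plugs in, the point stands: the search space explodes far faster than any conceivable increase in computing speed.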

"This protein universe is a gift to humanity," said Hassabis during a joint July 26 press briefing with the European Molecular Biology Laboratory (EMBL), an intergovernmental organization dedicated to molecular biology research that collaborated in AlphaFold's development.

Before AlphaFold, it took 60 years and thousands of scientists to determine the structures of about 200,000 proteins. This research was used as training material for AlphaFold, which mined it for patterns that predict the shape of proteins. By 2021, it had successfully predicted the structures of a million proteins, including all human proteins. The latest release of AlphaFold results extends that number to 200 million proteins, virtually every known protein of every living thing on the planet.

DeepMind is providing free and open access to the AlphaFold code and protein database, both of which can be downloaded. A search of this "Google of life" database will display the sequence of a protein and a 3D model with a corresponding reliability score, whose margin of error is comparable to or lower than that of conventional prediction methods.
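For readers who want to try the database themselves, the sketch below shows one way a predicted structure might be fetched programmatically. The URL pattern and the example UniProt accession are assumptions based on the database's public file naming, so check the AlphaFold database documentation before relying on them.

```python
# Minimal sketch: download one predicted structure from the public AlphaFold
# database hosted by EMBL-EBI. The URL pattern and the example UniProt
# accession (P69905, human hemoglobin alpha) are assumptions; consult the
# database documentation for the current file naming scheme.
import urllib.request

uniprot_id = "P69905"  # example accession, not taken from the article
url = f"https://alphafold.ebi.ac.uk/files/AF-{uniprot_id}-F1-model_v4.pdb"

with urllib.request.urlopen(url) as response:
    pdb_text = response.read().decode("utf-8")

# Each ATOM record carries a per-residue confidence score (pLDDT) in the
# B-factor column, which is how the database reports prediction reliability.
atom_lines = [line for line in pdb_text.splitlines() if line.startswith("ATOM")]
print(f"Downloaded {len(atom_lines)} atom records for {uniprot_id}")
```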

It is important to note that AlphaFold does not determine reality; it predicts it. AlphaFold reads the genetic sequence and estimates the most likely configuration of its amino acids. The predictions are reliable enough to save a lot of time and money for scientists doing theoretical work, as they don't need to use expensive equipment to determine the actual structure of a protein until absolutely necessary.

The applications of this new tool are virtually endless because microscopic proteins are involved in every conceivable biological process, such as bee colony collapse and crop heat resistance. A team led by Matt Higgins at the University of Oxford (UK) has used AlphaFold to help develop an antibody (a type of protein) that is capable of neutralizing one of the proteins that must be present for the malaria pathogen to reproduce. This could accelerate research to develop the first highly effective vaccine against the disease, thereby preventing mosquito transmission of the parasite.

Another AlphaFold-related success is the development of the most detailed nuclear pore structure available. Nuclear pores are doughnut-shaped protein complexes that form the gateway to the nucleus of human cells, and they have been linked to a host of diseases, including cancer and cardiovascular disease. Jan Kosinski, an EMBL researcher and co-leader of the nuclear pore modeling effort, told EL PAÍS that AlphaFold provides scientists with unprecedented access to understanding how the recipe of life, written in the genome, works when translated into proteins.

Hassabis and his colleagues at DeepMind and EMBL say that they have analyzed the risks involved in making the AlphaFold system and data openly accessible. "The benefits clearly outweigh the risks," said Hassabis, adding that it's up to the international community to decide whether to restrict use of the technology as it develops further.

"One of the most practical applications of AlphaFold is the design of tailor-made molecules that can block harmful proteins or, better yet, modulate their activity, a much more desirable effect when developing new drugs," said Carlos Fernández, a scientist with the Spanish National Research Council (CSIC) and leader of the structural biology group of the Spanish Society for Biochemistry and Molecular Biology (SEBBM). His team has used AlphaFold to predict part of the structure of a protein complex necessary for propagating the trypanosome found in sub-Saharan Africa that causes sleeping sickness.

Years of work now lie ahead to confirm the accuracy of AlphaFold's predictions, says biologist José Márquez, an expert in protein structure at the European Synchrotron Radiation Facility in Grenoble. The next frontier for AlphaFold will be its use in designing protein-blocking or protein-activating drugs, a problem researchers are already tackling, said Márquez. And there's another puzzle to solve: AlphaFold cannot say why a protein is shaped as it is, which could be an essential element of research on diseases like Alzheimer's or Parkinson's, both of which are related to misfolded proteins.

Alfonso Valencia, director of life sciences at the National Supercomputing Center in Barcelona (Spain), points out some of the system's shortcomings. "AlphaFold can't solve everything because it can only predict what is in the domain of known things. For example, it cannot accurately predict the structure of proteins that protect against freezing because they are rare, and the databases don't contain many samples. Nor can it predict the consequences of mutations, an issue of great concern to medicine," said Valencia.

Valencia acknowledges the advantages of providing free and open access to AlphaFold, which enables other scientists to improve or modify the system as needed. "It's clear that the DeepMind people are looking to win the Nobel Prize by acting transparently," said Valencia. "It's great for their image and gives them a competitive advantage over other companies like Facebook." On the other hand, he notes, they did hint that they might reserve specific health data for private use and drug development.


What is Ethereum Gray Glacier? Should you be worried? – Cryptopolitan

In the coming week, Ethereum developers will push another upgrade to the mainnet. Dubbed Gray Glacier, the upgrade is designed to further delay the Ice Age/Difficulty Bomb by several months ahead of the long-awaited Merge to the Beacon Chain, the proof-of-stake (PoS) system.

This article explains everything you need to know about the upcoming Gray Glacier upgrade and what an average user is expected to do.

The Ethereum Difficulty Bomb has long existed on the blockchain. It was originally introduced to automatically raise the difficulty of mining, that is, of solving proof-of-work (PoW) puzzles, starting at a predefined block number. The end result of the Difficulty Bomb is longer-than-normal block times (and thus fewer ETH rewards for miners), culminating in the Ice Age, a state in which the network freezes and stops producing blocks.

The Difficulty Bomb was built into the blockchain for a specific reason: it disincentivizes miners from continuing to mine the current network, Ethereum 1.0, after a successful transition to Ethereum 2.0. This means the bomb can only be allowed to detonate if and when the Merge is completed.

Tim Beiko, a core Ethereum developer, explained that the Difficulty Bomb also helps to curtail scam forks, or spin-offs of Ethereum, because removing the bomb rule from those forks requires decent technical knowledge; otherwise, the bomb will eventually detonate and freeze the fork.

"[...] this is one I think is probably way underrated: the idea that it makes it a bit harder to create a scam fork of Ethereum. Two years or three years ago, there was, like, Bitcoin Diamond, Bitcoin Unlimited, Bitcoin Gold, all these forks of forks of forks. The reason in large part you don't see those on Ethereum is because they require not only a one-line change, like a lot of these Bitcoin forks do, but they also require people to run the updated software," said Tim Beiko.

Most importantly, the Difficulty Bomb creates a sense of urgency for the core developers working on Ethereum 2.0. It acts as a forcing function that ensures the developers make decisions quickly, so that development doesn't stagnate or drag on.

The Difficulty Bomb was expected to go off this month. However, given that the Merge is yet to happen, the developers agreed to postpone the bomb with the upcoming Gray Glacier upgrade. The decision was prompted by signs that the network was already seeing a noticeable decline in the rate of block issuance under the previous June 2022 schedule.

The Gray Glacier upgrade will push the Difficulty Bomb back by 700,000 blocks, or roughly 100 days. It will be activated at block 15,050,000, which is expected on Wednesday, June 29, though the date may shift due to variations in block times and time zones. The update applies to the mainnet and not the testnets, since the bomb only affects the former.
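As a rough sanity check on the "roughly 100 days" figure, the delay can be estimated from the block count and an assumed average block time; the ~13-second figure below is an assumption, not a number from the article.

```python
# Rough sanity check of the "700,000 blocks = roughly 100 days" figure,
# assuming an average pre-Merge block time of about 13 seconds.
DELAY_BLOCKS = 700_000
AVG_BLOCK_TIME_S = 13.0   # assumption; actual block times vary

delay_days = DELAY_BLOCKS * AVG_BLOCK_TIME_S / 86_400
print(f"~{delay_days:.0f} days")   # about 105 days, consistent with the article
```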

Meanwhile, there is speculation that postponing the Difficulty Bomb means the developers are buying more time, and hence that the Merge could still be months away. Recently, Ethereum co-founder Vitalik Buterin said the transition could happen in August. However, a more plausible prediction is that Ethereum 2.0 could be finalized before the end of the year, since Gray Glacier could be the last postponement of the Difficulty Bomb.

The Gray Glacier upgrade isn't something for the average Ethereum holder or investor to worry about. Unless told otherwise, nothing is required of users, as crypto exchanges, wallet providers and the like will handle the technical requirements for the upcoming mainnet upgrade.

Earlier today, leading crypto exchange Binance announced it would support the Gray Glacier upgrade. ETH and ERC-20 token transactions will be suspended starting at 09:43 (UTC) on Wednesday. However, trading of those cryptocurrencies will not be interrupted.

Node operators and miners are required to download the latest version of their Ethereum client: Besu 22.4.3, Erigon 2022.06.03-alpha, go-ethereum (Geth) Camaron (v1.10.19) or Nethermind v1.13.3.
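For node operators who want to confirm which client version they are running after upgrading, one option is to query the node over the standard JSON-RPC interface. This is a hedged sketch that assumes the node exposes HTTP-RPC on the default local port; adjust it to your own setup.

```python
# Sketch for node operators: query a local node's client version over the
# standard JSON-RPC interface to confirm the upgrade took effect. Assumes the
# node exposes HTTP-RPC on the default port 8545.
import json
import urllib.request

payload = json.dumps({
    "jsonrpc": "2.0",
    "method": "web3_clientVersion",   # standard method supported by Geth, Nethermind, Besu and Erigon
    "params": [],
    "id": 1,
}).encode("utf-8")

request = urllib.request.Request(
    "http://localhost:8545",
    data=payload,
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(request) as response:
    print(json.loads(response.read())["result"])   # e.g. a Geth v1.10.19 version string
```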


How AI and human intelligence will beat cancer – VentureBeat


2016 saw the completion of a significant milestone for humanity: artificial intelligence (AI) beat the world champion at the game of Go. For context, Go is a board game previously thought to require too much human intuition for a computer to succeed at, and as a result, it was a North Star for AI.

For years, researchers tried and failed to create an AI system that could beat humans in the game. Until AlphaGo.

In 2016, AlphaGo, an AI system created by Google's DeepMind, not only beat its human champion counterpart, Lee Sedol; it demonstrated that machines could find playing strategies that no human would come up with. AlphaGo shocked the world when it played its unimaginable move #37, a move so counterintuitive and strange to human experts that it stunned and perplexed Lee and all the onlookers and world experts. It ultimately led to the technology's triumph in that game.

Beyond exemplifying AI's potential in this context, the Go match demonstrated that AI could, and should, help humanity come up with a Move 37 for significant, real-world problems. Among these is fighting cancer.

As in board games, there is an element of a game in the proverbial contest between the human immune system and cancer. If the immune system is the policeman guarding the health of the body, cancer is like a mobster that is trying to elude capture. While the immune system police search for harmful cancer cells, viruses, infections and any disorders, cancer is busy coming up with various tactics of subversion, deceit and destruction.

Centuries ago, scientists and doctors operated largely in the dark when attempting to cure diseases and had to rely solely on their intuition. Today, however, humanity is uniquely positioned to fully utilize available resources, thanks to advancements in the high-throughput generation and measurement of biological data. We can now create AI models and use every bit of available data to allow these AIs to augment our innate intuition.

To illustrate this concept more clearly, consider the case of CAR-T cells edited with CRISPR (a genetic editing technology) to create a promising therapeutic option for treating cancer. Many current and past approaches in the field relied on a single researcher's or academic group's intuition for prioritizing which genes to test and edit. For example, some of the world's experts in genetically engineered T cells came up with the idea of trying to knock out PD-1, which did not play out to improve patient outcomes. In this case, genes were not compared head-to-head, and a lot of human intuition was required to decide how best to proceed.

Recently, with advances in high-throughput single-cell CRISPR sequencing methods, we are nearing the possibility of simply testing all genes simultaneously, on equal footing and under various experimental scenarios. This makes the data a better fit for AI and, in this case, gives us the opportunity to let AI help us decide which genes look most promising to modify in patients to fight their cancer.

The ability to run extensive AI experiments and generate data for fighting cancer is a game-changer. Biology and disease are so complex that it is improbable that current and past strategies, driven largely by human intuition, are the best approaches. In fact, we predict that in the next 10 years, we will have an equivalent of a Move 37 against cancer: a therapy that at first may seem counterintuitive (and at which human intuition alone would not arrive) but that in the end, shocks us all and wins the game for patients.

Luis Voloch is CTO and cofounder of Immunai.


Race-by-race tips and preview for Newcastle on Monday – Sydney Morning Herald

Odds and Evens: Split.

Hard to go past progressive three-year-old 3. Mojo Classic, who roared home from the back to claim his maiden fourth-up with four weeks between runs. Races like he will eat up the extra trip, especially sticking to a big track, and he's bred to thrive on rain-affected ground.
Dangers: Stablemate filly 4. Stella Glow is also on the rise, having notched her maiden win in similar ground third-up as a well-backed favourite, and comes through some handy form lines. Keep safe 2. Duble Memory, who surged home from a wide draw to win a class 1 in heavy ground third-up, while 1. Leica Bita Fun fourth-up and honest 5. Thailand, who draws inside, are both capable of running into the minor end of the money.
How to play it: Mojo Classic win; quinella 3 and 4.
Odds and Evens: Split.

The girls lock horns in a tricky sprint, with several first-starters who are likely to have a big impact on the market. One of them, home-track Teofilo three-year-old 4. Mirrie Dancer, can make an instant statement. Liked the way she worked home strongly from a mile back in heavy ground in the latest of two trials, and she looks well prepared for this distance. Drawn wide, but that might be to her pattern advantage.
Dangers: Provincial three-year-old 5. Oakfield Redgum returns for only a second start behind a steady trial, and draws to get cover. Big watch on debutant Nicconi three-year-old 3. Golden Gate, who has been taken along slowly at the trials. Liked the way he's performed in two hit-outs, the latest slow to begin and not settling early before working home well in open company, and he's bred to handle the conditions.

How to play it: Mirrie Dancer each way.

Odds and Evens: Split.


Like provincial seven-year-old 4. Emperor Harada on suitable heavy ground.
Dangers: Metro six-year-old 1. Skyray, with multiple gear changes third-up, is the clear threat in what is now a very thin affair.
How to play it: Emperor Harada win; quinella 1 and 4.

Odds and Evens: Split.

Lonhro three-year-old 3. Hotstep debuts behind two forward trials on rain-affected ground. The trial jockey sticks for the real thing, and significantly he wears blinkers in a race that's down to four runners.
Dangers: Another first starter in blinkers, 2. Beer Palace, has had three recent trials and is the clear threat.

How to play it: Hotstep win; quinella 2 and 3.

Best Bets: Race 4 (3) Mojo Classic, Race 7 (3) Hotstep.
Best Value: Race 5 (4) Mirrie Dancer.

Tips supplied by Racing NSW. Full form and race replays available at racingnsw.com.au.


A gentle introduction to model-free and model-based reinforcement learning – TechTalks


Reinforcement learning is one of the exciting branches of artificial intelligence. It plays an important role in game-playing AI systems, modern robots, chip-design systems, and other applications.

There are many different types of reinforcement learning algorithms, but two main categories are model-based and model-free RL. They are both inspired by our understanding of learning in humans and animals.

Nearly every book on reinforcement learning contains a chapter that explains the differences between model-free and model-based reinforcement learning. But seldom are the biological and evolutionary precedents discussed in books about reinforcement learning algorithms for computers.

I found a very interesting explanation of model-free and model-based RL in The Birth of Intelligence, a book that explores the evolution of intelligence. In a conversation with TechTalks, Daeyeol Lee, neuroscientist and author of The Birth of Intelligence, discussed different modes of reinforcement learning in humans and animals, AI and natural intelligence, and future directions of research.

In the late nineteenth century, psychologist Edward Thorndike proposed the law of effect, which states that actions with positive effects in a particular situation become more likely to occur again in that situation, and responses that produce negative effects become less likely to occur in the future.

Thorndike explored the law of effect with an experiment in which he placed a cat inside a puzzle box and measured the time it took for the cat to escape it. To escape, the cat had to manipulate a series of gadgets such as strings and levers. Thorndike observed that as the cat interacted with the puzzle box, it learned the behavioral responses that could help it escape. Over time, the cat became faster and faster at escaping the box. Thorndike concluded that the cat learned from the reward and punishments that its actions provided.

The law of effect later paved the way for behaviorism, a branch of psychology that tries to explain human and animal behavior in terms of stimuli and responses.

The law of effect is also the basis for model-free reinforcement learning. In model-free reinforcement learning, an agent perceives the world, takes an action, and measures the reward. The agent usually starts by taking random actions and gradually repeats those that are associated with more rewards.

"You basically look at the state of the world, a snapshot of what the world looks like, and then you take an action. Afterward, you increase or decrease the probability of taking the same action in the given situation depending on its outcome," Lee said. "That's basically what model-free reinforcement learning is. The simplest thing you can imagine."

In model-free reinforcement learning, there's no direct knowledge or model of the world. The RL agent must directly experience every outcome of each action through trial and error.
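To make the idea concrete, here is a minimal tabular Q-learning sketch on an invented two-state environment. Every detail of the environment and the constants is an illustrative assumption; the point is the update rule, which nudges the value of the action actually taken toward the reward actually observed.

```python
# Minimal model-free RL sketch: tabular Q-learning on a tiny made-up
# environment (all details here are illustrative, not from the article).
# The agent never learns how the world works; it only learns which action
# tends to pay off in each state, purely from trial and error.
import random

n_states, n_actions = 2, 2
Q = [[0.0] * n_actions for _ in range(n_states)]
alpha, gamma, epsilon = 0.1, 0.9, 0.1

def step(state, action):
    """Hypothetical environment: action 1 in state 0 leads to the rewarding state."""
    if state == 0 and action == 1:
        return 1, 1.0          # next state, reward
    return 0, 0.0

state = 0
for _ in range(5000):
    # Explore occasionally, otherwise repeat the action with the best learned value.
    if random.random() < epsilon:
        action = random.randrange(n_actions)
    else:
        action = max(range(n_actions), key=lambda a: Q[state][a])
    next_state, reward = step(state, action)
    # Model-free update: move Q(s, a) toward reward + discounted best future value.
    Q[state][action] += alpha * (reward + gamma * max(Q[next_state]) - Q[state][action])
    state = next_state

print(Q)   # action 1 in state 0 ends up with the highest learned value
```

Nothing in the table encodes how the world changes; the agent only remembers which actions have paid off where.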

Thorndike's law of effect was prevalent until the 1930s, when Edward Tolman, another psychologist, discovered an important insight while exploring how fast rats could learn to navigate mazes. During his experiments, Tolman realized that animals could learn things about their environment without reinforcement.

For example, when a rat is let loose in a maze, it will freely explore the tunnels and gradually learn the structure of the environment. If the same rat is later reintroduced to the same environment and is provided with a reinforcement signal, such as finding food or searching for the exit, it can reach its goal much quicker than animals who did not have the opportunity to explore the maze. Tolman called this latent learning.

Latent learning enables animals and humans to develop a mental representation of their world and simulate hypothetical scenarios in their minds and predict the outcome. This is also the basis of model-based reinforcement learning.

"In model-based reinforcement learning, you develop a model of the world. In terms of computer science, it's a transition probability, how the world goes from one state to another state depending on what kind of action you produce in it," Lee said. "When you're in a given situation where you've already learned the model of the environment previously, you'll do a mental simulation. You'll basically search through the model you've acquired in your brain and try to see what kind of outcome would occur if you take a particular series of actions. And when you find the path of actions that will get you to the goal that you want, you'll start taking those actions physically."

The main benefit of model-based reinforcement learning is that it obviates the need for the agent to undergo trial-and-error in its environment. For example, if you hear about an accident that has blocked the road you usually take to work, model-based RL will allow you to do a mental simulation of alternative routes and change your path. With model-free reinforcement learning, the new information would not be of any use to you. You would proceed as usual until you reached the accident scene, and then you would start updating your value function and start exploring other actions.
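A compact way to see the "mental simulation" idea in code is value iteration over a small hand-written world model. The toy model below is an invented example, not anything from Lee's book; the agent plans entirely inside the model before acting.

```python
# Minimal model-based RL sketch (illustrative example, not from the article):
# the agent is handed an explicit model of how the world changes, and "plans"
# by simulating it with value iteration instead of acting by trial and error.

# model[state][action] = (next_state, reward): a tiny hand-written world model.
model = {
    0: {0: (0, 0.0), 1: (1, 0.0)},
    1: {0: (0, 0.0), 1: (2, 1.0)},
    2: {0: (2, 0.0), 1: (2, 0.0)},   # goal state, nothing more to collect
}
gamma = 0.9
V = {s: 0.0 for s in model}

# Value iteration: repeatedly back up the best simulated outcome of each action.
for _ in range(100):
    for s in model:
        V[s] = max(r + gamma * V[s2] for (s2, r) in model[s].values())

policy = {s: max(model[s], key=lambda a: model[s][a][1] + gamma * V[model[s][a][0]])
          for s in model}
print(V, policy)   # the plan routes state 0 -> 1 -> 2 to collect the reward
```

If the model changes (say, a transition is re-routed, like the blocked road above), the agent only needs to re-plan; it does not have to relearn every state-action value from scratch.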

Model-based reinforcement learning has especially been successful in developing AI systems that can master board games such as chess and Go, where the environment is deterministic.

In some cases, creating a decent model of the environment is either not possible or too difficult. And model-based reinforcement learning can potentially be very time-consuming, which can prove to be dangerous or even fatal in time-sensitive situations.

"Computationally, model-based reinforcement learning is a lot more elaborate. You have to acquire the model, do the mental simulation, and you have to find the trajectory in your neural processes and then take the action," Lee said.

Lee added, however, that model-based reinforcement learning does not necessarily have to be more complicated than model-free RL.

"What determines the complexity of model-free RL is all the possible combinations of stimulus set and action set," he said. "As you have more and more states of the world or sensor representation, the pairs that you're going to have to learn between states and actions are going to increase. Therefore, even though the idea is simple, if there are many states and those states are mapped to different actions, you'll need a lot of memory."

On the contrary, in model-based reinforcement learning, the complexity will depend on the model you build. If the environment is really complicated but can be modeled with a relatively simple model that can be acquired quickly, then the simulation would be much simpler and cost-efficient.

"And if the environment tends to change relatively frequently, then rather than trying to relearn the stimulus-action pair associations whenever the world changes, you can have a much more efficient outcome if you're using model-based reinforcement learning," Lee said.

Basically, neither model-based nor model-free reinforcement learning is a perfect solution. And wherever you see a reinforcement learning system tackling a complicated problem, there's a good chance that it is using both model-based and model-free RL, and possibly more forms of learning.

Research in neuroscience shows that humans and animals have multiple forms of learning, and the brain constantly switches between these modes depending on how confident it is in each of them at any given moment.

"If the model-free RL is working really well and it is accurately predicting the reward all the time, that means there's less uncertainty with model-free and you're going to use it more," Lee said. "And on the contrary, if you have a really accurate model of the world and you can do the mental simulations of what's going to happen every moment of time, then you're more likely to use model-based RL."
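One simple way to picture this arbitration is to track how accurate each system's recent reward predictions have been and lean on whichever is currently more reliable. The weighting scheme below is an illustrative assumption, not a claim about how the brain actually does it.

```python
# Toy sketch of arbitrating between two learning systems by tracking how
# reliably each one has been predicting reward lately. The specific weighting
# scheme is an illustrative assumption, not a model of the brain.

def update_reliability(reliability, prediction, outcome, decay=0.9):
    """Exponentially weighted accuracy: high when recent prediction errors are small."""
    error = abs(prediction - outcome)
    return decay * reliability + (1 - decay) * (1.0 - min(error, 1.0))

model_free_rel, model_based_rel = 0.5, 0.5
# Hypothetical stream of (model-free prediction, model-based prediction, actual reward).
for prediction_mf, prediction_mb, outcome in [(0.9, 0.2, 1.0), (0.8, 0.1, 1.0), (0.2, 0.9, 1.0)]:
    model_free_rel = update_reliability(model_free_rel, prediction_mf, outcome)
    model_based_rel = update_reliability(model_based_rel, prediction_mb, outcome)
    chosen = "model-free" if model_free_rel >= model_based_rel else "model-based"
    print(f"use {chosen} (reliabilities: {model_free_rel:.2f} vs {model_based_rel:.2f})")
```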

In recent years, there has been growing interest in creating AI systems that combine multiple modes of reinforcement learning. Recent research by scientists at UC San Diego shows that combining model-free and model-based reinforcement learning achieves superior performance in control tasks.

"If you look at a complicated algorithm like AlphaGo, it has elements of both model-free and model-based RL," Lee said. "It learns the state values based on board configurations, and that is basically model-free RL, because you're trying values depending on where all the stones are. But it also does forward search, which is model-based."
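A toy illustration of that combination is a short lookahead through a game model that falls back on a learned value estimate at the leaves. Everything below is a stand-in, not DeepMind's algorithm, but it shows how the two ingredients fit together.

```python
# Toy sketch of mixing the two ingredients Lee points to in AlphaGo: a learned
# value function evaluates positions (model-free flavor), while a short
# forward search through the game's rules picks the move (model-based flavor).
# The game, the value function and the depth are all illustrative stand-ins.

def legal_moves(position):          # stand-in game model
    return [position + 1, position + 2]

def learned_value(position):        # stand-in for a trained value network
    return -abs(position - 10)      # pretend positions near 10 are good

def search(position, depth):
    """Depth-limited lookahead that falls back on the learned value at the leaves."""
    if depth == 0:
        return learned_value(position)
    return max(search(next_pos, depth - 1) for next_pos in legal_moves(position))

best_move = max(legal_moves(0), key=lambda p: search(p, depth=3))
print(best_move)   # forward search, guided by the learned evaluation, prefers the +2 step
```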

But despite remarkable achievements, progress in reinforcement learning is still slow. As soon as RL models are faced with complex and unpredictable environments, their performance starts to degrade. For example, creating a reinforcement learning system that played Dota 2 at championship level required tens of thousands of hours of training, a feat that is physically impossible for humans. Other tasks such as robotic hand manipulation also require huge amounts of training and trial-and-error.

Part of the reason reinforcement learning still struggles with efficiency is the remaining gap in our knowledge of learning in humans and animals. And we have much more than just model-free and model-based reinforcement learning, Lee believes.

"I think our brain is a pandemonium of learning algorithms that have evolved to handle many different situations," he said.

In addition to constantly switching between these modes of learning, the brain manages to maintain and update them all the time, even when they are not actively involved in decision-making.

"When you have multiple learning algorithms, they become useless if you turn some of them off. Even if you're relying on one algorithm, say model-free RL, the other algorithms must continue to run. I still have to update my world model rather than keep it frozen, because if I don't, several hours later, when I realize that I need to switch to the model-based RL, it will be obsolete," Lee said.

Some interesting work in AI research shows how this might work. A recent technique inspired by psychologist Daniel Kahneman's System 1 and System 2 thinking shows that maintaining different learning modules and updating them in parallel helps improve the efficiency and accuracy of AI systems.

Another thing that we still have to figure out is how to apply the right inductive biases in our AI systems to make sure they learn the right things in a cost-efficient way. Billions of years of evolution have provided humans and animals with the inductive biases needed to learn efficiently and with as little data as possible.

"The information that we get from the environment is very sparse. And using that information, we have to generalize. The reason is that the brain has inductive biases and has biases that can generalize from a small set of examples. That is the product of evolution, and a lot of neuroscientists are getting more interested in this," Lee said.

However, while inductive biases might be easy to understand for an object recognition task, they become a lot more complicated for abstract problems such as building social relationships.

"The idea of inductive bias is quite universal and applies not just to perception and object recognition but to all kinds of problems that an intelligent being has to deal with," Lee said. "And I think that is in a way orthogonal to the model-based and model-free distinction, because it's about how to build an efficient model of the complex structure based on a few observations. There's a lot more that we need to understand."
