Archive for the ‘AlphaGo’ Category

AlphaZero beat humans at Chess and StarCraft, now it’s working with quantum computers – The Next Web

A team of researchers from Aarhus University in Denmark let DeepMind's AlphaZero algorithm loose on a few quantum computing optimization problems and, much to everyone's surprise, the AI was able to solve the problems without any outside expert knowledge. Not bad for a machine learning paradigm designed to win at games like Chess and StarCraft.

You've probably heard of DeepMind and its AI systems. The UK-based Google sister company is responsible for both AlphaZero and AlphaGo, the systems that beat the world's most skilled humans at the games of Chess and Go. In essence, what both systems do is try to figure out the optimal next set of moves. Where humans can only think so many moves ahead, the AI can look a bit further using optimized search and planning methods.

When the Aarhus team applied AlphaZero's optimization abilities to a trio of problems associated with optimizing quantum functions - an open challenge for the quantum computing world - they found that its ability to learn new parameters unsupervised transferred from games to real applications quite well.

Per the study:

AlphaZero employs a deep neural network in conjunction with deep lookahead in a guided tree search, which allows for predictive hidden-variable approximation of the quantum parameter landscape. To emphasize transferability, we apply and benchmark the algorithm on three classes of control problems using only a single common set of algorithmic hyperparameters.
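To make the quoted description a bit more concrete, here is a minimal Python sketch of the general idea - a lookahead search over discretized control pulses, guided by a value estimate that stands in for AlphaZero's neural network. Everything in it (the fidelity function, the pulse amplitudes, the rollout-based value estimate) is a hypothetical stand-in for illustration, not the researchers' actual code.

```python
# Minimal sketch: lookahead search over control pulses guided by a value
# estimate. All quantities here are invented stand-ins for illustration.
import math
import random

AMPLITUDES = [-1.0, -0.5, 0.0, 0.5, 1.0]   # discretized control pulses
HORIZON = 5                                 # pulses per control sequence

def fidelity(pulses):
    """Toy objective standing in for a true quantum-gate fidelity."""
    target = [0.5, -0.5, 1.0, 0.0, -1.0]
    return math.exp(-sum((p - t) ** 2 for p, t in zip(pulses, target)))

def value_estimate(prefix):
    """Stand-in for the neural network: average fidelity of random rollouts."""
    total = 0.0
    for _ in range(20):
        rest = [random.choice(AMPLITUDES) for _ in range(HORIZON - len(prefix))]
        total += fidelity(prefix + rest)
    return total / 20

def guided_search():
    """Greedy lookahead: extend the sequence with the most promising pulse."""
    prefix = []
    while len(prefix) < HORIZON:
        best = max(AMPLITUDES, key=lambda a: value_estimate(prefix + [a]))
        prefix.append(best)
    return prefix, fidelity(prefix)

pulses, f = guided_search()
print(f"best pulse sequence {pulses} with fidelity {f:.3f}")
```

The real system replaces the random rollouts with a trained network and the greedy expansion with a full guided tree search; the sketch only shows the shape of the loop.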

The implications of AlphaZero's mastery over the quantum universe could be huge. Controlling a quantum computer calls for an AI solution because operations at the quantum level quickly become incalculable by humans. The AI can find optimum paths between data clusters to surface better solutions in tandem with the computer's processors. It works a lot like human heuristics, just scaled to the nth degree.

An example of this would be an algorithm that helps a quantum computer sort through near-infinite combinations of molecules to come up with chemical compounds that would be useful in the treatment of certain illnesses. The current paradigm would involve developing an algorithm that relies on human expertise and databases with previous findings to point it in the right direction.

But the kinds of problems we're looking to quantum computers to solve don't always have a good starting point. Some of them - optimization problems like the Traveling Salesman Problem - need an algorithm that's capable of figuring things out without constant adjustment by developers.

DeepMind's algorithm and AI system may be the solution quantum computing's been waiting for. The researchers effectively employ AlphaZero as a tabula rasa for quantum optimization: it doesn't necessarily need human expertise to find the optimum solution to a problem at the quantum computing level.

Before we start getting too concerned about unsupervised AI accessing quantum computers, it's worth mentioning that so far AlphaZero has just solved a few problems to prove a concept. We know the algorithm can handle quantum optimization; now it's time to figure out what we can do with it.

The researchers have already received interest from big tech and other academic institutions with queries related to collaborating on future research. Not for nothing, but DeepMind's sister company Google has a little quantum computing program of its own. We're betting this isn't the last we've heard of AlphaZero's adventures in the quantum computing world.

Visit link:
AlphaZero beat humans at Chess and StarCraft, now it's working with quantum computers - The Next Web

Trust AI: it knows more than we do – ITS International

The Information Age has given rise to many society-shaping tools and technologies, nearly all of them revolving around the gathering, dissemination, and analysis of information. Data collection and data analysis technologies have held a careful balance, but as data collection hardware improves and passive data collection becomes more prevalent, transportation agencies increasingly have access to more information than humans can usefully analyse with regular spreadsheet programs. Enter artificial intelligence (AI): the nebulous superhero of data crunching, here to rescue a transportation network inundated with data and make some sense of it all.

With all this data lying around, it's important to make the most of it. Agencies have a responsibility to the public to make their networks safe and efficient. This is difficult to do with tight funding and ageing infrastructure, but data is cheap and plentiful - the problem is that the data is spread out across and within agencies, often held in silos, and few people are in a position to take a wide view of the network. The advent of sweeping AI capable of making long-range decisions is inevitable and necessary for the future of the world's infrastructure. Implementing AI into the planning process is the only way to create an optimal transportation network in the future, and public agencies should be preparing themselves for that eventuality.

Regional and national AIs are inevitable for several reasons. Thanks to fibre networks and 5G, the information of today's world moves at nearly the speed of light. It is unreasonable to expect a human to make the best possible decisions while overwhelmed with all that information. AI has the power to process the flood of data into human-sized chunks, or even make optimal decisions by itself, faster and more consistently than any person.

More important than speed, the true wonder of AI is that it can make better decisions and think on a higher plane than human brains. The DeepMind projects AlphaGo and AlphaGo Zero (see box) are clear examples that - when given the correct scope and enough data - machines can create strategies that even humans who are masters of their craft cannot understand without retroactive analysis. This same high-level perspective and creativity can be applied to transportation problems, finding solutions that engineers and planners wouldn't have considered.

The primary limitation of all AI is the availability of data, which is what makes transportation planning a perfect area for strategic AI implementation. The transportation industry is data-rich and largely controlled by government organisations everywhere in the world. This makes data collection easier to standardise and provides more rigid boundaries and defined scopes than many other use cases. A unified, holistic view allows planners to design for sustainability, address environmental and social problems, and maximise the available resources even in a flagging economy. As long as they are designed well and phased in properly, strategic regional AIs would give the world's transportation networks a huge edge when tackling equally huge problems like climate change and suburban sprawl.

There are a few steps agencies should start taking to prepare themselves for the eventuality of strategic AI. The first is the implementation of data-driven processes on fronts that often receive less attention than operations, from project prioritisation to construction and maintenance. America's public transportation sector has recently been experimenting with a push for data-driven funding and prioritisation decisions. The analysis required for funding applications has been increased, and projects compete through a scoring system based on a number of sustainability metrics which push the overall system further towards sustainable design. The construction side is making a similar shift, with tablet-based inspections, 3D/4D modelling and drones moving to the forefront. All of these serve the goal of creating a system built on data and keeping that data readily available for future use.
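As a rough illustration of what a metric-based scoring system can look like in code, here is a short Python sketch; the metrics, weights, and candidate projects are invented for the example, and real agencies define their own criteria and weightings.

```python
# Hypothetical weighted scoring of candidate projects on 0-10 sustainability
# metrics. Metrics, weights, and projects are invented for illustration.
WEIGHTS = {"safety": 0.35, "emissions_reduction": 0.25,
           "congestion_relief": 0.25, "cost_effectiveness": 0.15}

projects = {
    "Bridge rehabilitation":      {"safety": 9, "emissions_reduction": 2,
                                   "congestion_relief": 4, "cost_effectiveness": 6},
    "Bus rapid transit corridor": {"safety": 6, "emissions_reduction": 8,
                                   "congestion_relief": 7, "cost_effectiveness": 5},
    "Highway lane expansion":     {"safety": 5, "emissions_reduction": 1,
                                   "congestion_relief": 6, "cost_effectiveness": 4},
}

def score(metrics):
    """Weighted sum of the project's metric ratings."""
    return sum(WEIGHTS[name] * rating for name, rating in metrics.items())

# Rank projects by score, highest first
for name, metrics in sorted(projects.items(), key=lambda kv: -score(kv[1])):
    print(f"{name}: {score(metrics):.2f}")
```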

Agencies should also focus on connected infrastructure and real-time GIS modelling to expand their passive collection capabilities. Passively gathering data is a vital step in the creation of smart regions and has been the clear direction of transportation networks for some time. Smart city initiatives are common now, but only regional agencies like state departments of transportation have the resources and jurisdiction to start building smart regions. In addition to being part of a transition to AI, these smart regions bring added benefits for technologies like connected and autonomous vehicles (C/AVs) and initiatives like Mobility as a Service (MaaS). Taking a wider approach to planning is the best way to start a transition to AI-driven strategy, which means extrapolating ideas that were formerly confined to cities or even city blocks.

From an even broader perspective, public agencies around the world need to start taking a more holistic approach to mobility. The separation of road and bridge, transit, toll, and other transportation entities is no longer effective. These organisations frequently compete with each other, bickering over right of way and funding, standards and policy, and causing waste at every level of decision-making. Even if strategic AI were not inevitable, breaking down the walls between transportation agencies would still be a necessity. The public sector needs to share data, incorporate first/last mile and micromobility into the networks, and start truly looking at better mobility as a collective goal.

The end goal of all this transition and change is to implement regional strategic AIs into the transportation sector to augment the planning process. These could begin by simply making suggestions, providing data and analysis on demand, and processing a real-time model of the network using all the data the agency has available from both passive and active collection. And who knows what the future of these AIs will bring? Like the AlphaGo programs, they may come up with strategies that human designers would never have considered. They could operate on a higher level of understanding, like an adult playing chess with a child, and we may just have to trust in decisions we don't understand. But with enormous, global problems like climate change around the corner, strategic AI may be the only enormous, global solution we have.

AlphaGo & AlphaGo Zero

Ancient Chinese board game Go is known as the most challenging classical game for artificial intelligence because of its complexity. It involves multiple layers of strategic thinking as players place their own black or white stones on a board to surround and capture their opponent's stones. While this seems relatively simple, there are 10 to the power of 170 possible board configurations - more than the number of atoms in the known universe - which makes Go a googol times more complex than chess.

AlphaGo is the first computer program to defeat a professional human Go player, the first to defeat a Go world champion, and is arguably the strongest Go player in history. It combines an advanced tree search with deep neural networks, which take a description of the Go board as an input and process it through a number of different network layers containing millions of neuron-like connections.

AlphaGo learnt the game by playing thousands of matches with amateur and professional players. But a more advanced version, AlphaGo Zero, learnt by playing against itself, starting from completely random play and accumulating thousands of years of human knowledge in a few days. It replaced hand-crafted rules with a deep neural network and algorithms that knew nothing beyond the basic rules. Its creative response and ability to master complex games demonstrates that a single algorithm can learn how to discover new knowledge in a range of settings.

Source: DeepMind.com
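The self-play idea described in the box can be sketched in a few dozen lines. The Python below is only an illustration of the loop, not DeepMind's system: the game is tic-tac-toe rather than Go, and a simple tabular value estimate stands in for AlphaGo Zero's deep neural network and tree search.

```python
# Minimal self-play sketch: the program plays itself, and each game's outcome
# becomes the training signal for the moves that produced it.
import random
from collections import defaultdict

values = defaultdict(float)    # (player, resulting board) -> estimated value
LEARNING_RATE = 0.1

def legal_moves(board):
    return [i for i, c in enumerate(board) if c == "."]

def winner(board):
    lines = [(0,1,2),(3,4,5),(6,7,8),(0,3,6),(1,4,7),(2,5,8),(0,4,8),(2,4,6)]
    for a, b, c in lines:
        if board[a] != "." and board[a] == board[b] == board[c]:
            return board[a]
    return None

def key(board, move, player):
    after = board.copy()
    after[move] = player
    return player + "".join(after)

def self_play_game():
    """Play one game against itself, mostly greedy with respect to learned values."""
    board, player, history = ["."] * 9, "X", []
    while legal_moves(board) and not winner(board):
        moves = legal_moves(board)
        if random.random() < 0.2:                              # occasional exploration
            move = random.choice(moves)
        else:                                                  # exploit current estimates
            move = max(moves, key=lambda m: values[key(board, m, player)])
        history.append(key(board, move, player))
        board[move] = player
        player = "O" if player == "X" else "X"
    return history, winner(board)

def train(games=5000):
    """Each game's result becomes the target value for every move played in it."""
    for _ in range(games):
        history, win = self_play_game()
        for state in history:
            target = 0.0 if win is None else (1.0 if state[0] == win else -1.0)
            values[state] += LEARNING_RATE * (target - values[state])

train()
print(f"learned value estimates for {len(values)} positions")
```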

See more here:
Trust AI: it knows more than we do - ITS International

Seizing Artificial Intelligence’s Opportunities in the 2020s – AiThority

Artificial Intelligence (AI) has made major progress in recent years. But even milestones like AlphaGo or the narrow AI used by big tech only scratch the surface of the seismic changes yet to come.

Modern AI holds the potential to upend entire professions while unleashing brand new industries in the process. Old assumptions will no longer hold, and new realities will separate those who are swallowed by the tides of change from those able to anticipate and ride the AI wave headlong into a prosperous future.

Here's how businesses and employees can both leverage AI in the 2020s.

Like many emerging technologies, AI comes with a substantial learning curve. As a recent McKinsey report highlights, AI is a slow burn technology that requires a heavy upfront investment, with returns only ramping up well down the road.

Because of this slow burn, an AI front-runner and an AI laggard may initially appear to be on equal footing. The front-runner may even be a bit behind during early growing pains. But as the effects of AI adoption kick in, the gap between the two widens dramatically and exponentially. McKinsey's models estimate that within around 10 years, the difference in cumulative net change in cash flow between front-runners and laggards could be as high as 145 percent.

The first lesson for any business hoping to seize new AI opportunities is to start making moves to do so right now.

Contrary to popular opinion, the coming AI wave will be mostly a net positive for employees. The World Economic Forum found that by 2022, AI and Machine Learning will have created over 130 million new jobs. Though impressive, these gains will not be distributed evenly.

Jobs characterized by unskilled and repetitive tasks face an uncertain future, while jobs in need of greater social and creative problem-solving will spike. According to McKinsey, the coming decade could see a 10 percent fall in the share of low digital skill jobs, with a corresponding rise in the share of jobs requiring high digital skill.

So how can employees successfully navigate the coming future of work? One place to start is to look at the past. Over half a century ago, in 1967, the first ATM was installed outside a Barclays Bank branch in London. At the time, the thought of bank tellers surviving the introduction of automated teller machines seemed impossible. ATMs caught on like wildfire, cut into tellers' hours, offered unbeatable flexibility and convenience, and should have all but wiped tellers out.

But, in fact, exactly the opposite happened. No longer having to handle simple deposits freed tellers up to engage with more complex and social facets of the business. They started advising customers on mortgages and loans, forging relationships and winning loyalty. Most remarkable of all, in the years following the ATM's introduction, the total number of tellers employed worldwide didn't fall off a cliff. In fact, it rose higher than ever.

Though AI could potentially threaten some types of jobs, many jobs will see rising demand. Increased reliance on automated systems for core business functions frees up valuable employee time and lets workers focus on areas where they can add even more value to the company.

As employees grow increasingly aware of the changing nature of work, they are also clamoring for avenues for development, aware that they need to hold a variety of skills to remain relevant in a dynamic job market. Companies will, therefore, need to provide employees with a wide range of experiences and the opportunity to continuously enhance their skillsets, or suffer high turnover. This is already a vital issue for businesses, with the cost of losing an employee equating to 90-200% of their annual salary, costing each large enterprise an estimated $400 million a year. If employees feel their role is too restrictive or that their organization is lagging, their likelihood of leaving will climb.

The only way to capture the full value of AI for business is to retain the highly skilled employees capable of wielding it. Departmental silos and rigid job descriptions will have no place in the AI future.

For employees to maximize their chances of success in the face of rapid AI advancement, they must remain flexible and continuously acquire new skills. Both businesses and employees will need to realign their priorities in accordance with new realities. Workers will have to be open to novel ideas and perspectives, while employers will need to embrace the latest technological advancements.

Fortunately, the resources and avenues for ambitious employers to pursue continued growth for their employees are blossoming. Indeed, the very AI advancements prompting the need for accelerated career development paths are also powering technologies to maximize and optimize professional enrichment.

AI is truly unlocking an exciting new future of work. Smart algorithms now enable hyper-flexible workplaces to seamlessly shuffle and schedule employee travel, remote work, and mentorship opportunities. At the cutting edge, these technologies can even let employees divide their time between multiple departments across their organization. AI can also tailor training and reskilling programs to each employee's unique goals and pace.

The rise of AI holds the promise of great change, but if properly managed, it can be a change for the better.

See more here:
Seizing Artificial Intelligence's Opportunities in the 2020s - AiThority

Can synthetic biology help deliver an AI brain as smart as the real thing? – Genetic Literacy Project

In building the world's first airplane at the dawn of the 20th century, the Wright Brothers took inspiration from the flight of birds. They observed and reverse-engineered aspects of the wing in nature, which in turn helped them make important discoveries about aerodynamics and propulsion.

Similarly, to build machines that think, why not seek inspiration from the three pounds of matter that operates between our ears? Geoffrey Hinton, a pioneer of artificial intelligence and winner of the Turing Award, seemed to agree: "I have always been convinced that the only way to get artificial intelligence to work is to do the computation in a way similar to the human brain."

So what's next for artificial intelligence (AI)? Could the next wave of AI be inspired by rapid advances in biology? Can the tools for understanding brain circuits at the molecular level lead us to a higher, systems-level understanding of how the human mind works?

The answer is likely yes, and the flow of ideas between learning about biological systems and developing artificial ones has actually been going on for decades.

First of all, what does biology have to do with machine learning? It may surprise you to learn that much of the progress in machine learning stems from insights from psychology and neuroscience. Reinforcement learning (RL), one of the three paradigms of machine learning (the other two being supervised and unsupervised learning), originates from animal and cognitive neuroscience studies going all the way back to the 1940s. RL is central to some of today's most advanced AI systems, such as AlphaGo, the widely publicized AI agent developed by leading AI company Google DeepMind. AlphaGo defeated the world's top-ranked players at Go, a Chinese board game that comprises more board combinations than there are atoms in the universe.

Despite AlphaGo's superhuman performance in the game of Go, its human opponent still possesses far more general intelligence. He can drive a car, speak languages, play soccer, and perform a myriad of other tasks in any kind of environment. Current AI systems are largely incapable of taking the knowledge learned to play poker and transferring it to another task, like playing a game of Cluedo. These systems are focused on a single, narrow environment and require vast amounts of data and training time. And still, they make simple errors, like mistaking a chihuahua for a muffin!

Similar to a child's learning, reinforcement learning is based on the AI system's interaction with its environment. It performs actions that seek to maximize rewards and avoid punishments. Driven by curiosity, children are active learners who simultaneously explore their surrounding environment and predict the outcomes of their actions, allowing them to build mental models and to think causally. If, for example, they decide to push the red car, spill the flower vase, or crawl in the other direction, they will adjust their behavior based on the outcomes of their actions.

Children experience different environments in which they find themselves navigating and interacting with various contexts and arrangements of objects, often in unusual ways. Just as child brain development could inspire the development of AI systems, the RL agent's learning mechanisms parallel the brain's, which are driven by the release of dopamine - a neurotransmitter key to the central nervous system - that trains the prefrontal cortex in response to experiences and thus shapes stimulus-response associations as well as outcome predictions.
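A minimal Python sketch of that trial-and-error loop: an agent repeatedly picks an action, observes whether it was rewarded, and nudges its estimate of that action accordingly. The three actions and their hidden reward probabilities echo the toy examples above and are purely hypothetical.

```python
# Reward-driven learning in miniature: try actions, observe outcomes, and
# shift behaviour toward the actions that paid off. All numbers are invented.
import random

reward_probs = {"push_red_car": 0.2, "spill_vase": 0.1, "crawl_away": 0.7}  # hidden from the agent
estimates = {action: 0.0 for action in reward_probs}   # the agent's learned guesses
counts = {action: 0 for action in reward_probs}
EPSILON = 0.1   # fraction of the time the agent explores at random

for step in range(10_000):
    if random.random() < EPSILON:
        action = random.choice(list(estimates))         # explore
    else:
        action = max(estimates, key=estimates.get)      # exploit the best guess so far
    reward = 1.0 if random.random() < reward_probs[action] else 0.0
    counts[action] += 1
    # incremental average: nudge the estimate toward the observed outcome
    estimates[action] += (reward - estimates[action]) / counts[action]

print({action: round(value, 2) for action, value in estimates.items()})
```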

Biology is one of the most promising beneficiaries of artificial intelligence. From investigating mind-boggling combinations of genetic mutations that contribute to obesity to examining the byzantine pathways that lead some cells to go haywire and produce cancer, biology produces an inordinate amount of complex, convoluted data. But the information contained within these datasets often offers valuable insights that could be used to improve our health.

In the field of synthetic biology, where engineers seek to rewire living organisms and program them with new functions, many scientists are harnessing AI to design more effective experiments, analyze their data, and use it to create groundbreaking therapeutics. I recently highlighted five companies that are integrating machine learning with synthetic biology to pave the way for better science and better engineering.

Artificial general intelligence (AGI) describes a system that is capable of mimicking human-like abilities such as planning, reasoning, or emotions. Billions of dollars have been invested in this exciting and potentially lucrative area, leading some to make claims like "data is the new oil."

Among the many companies working on general artificial intelligence are Google's DeepMind, the Swiss AI lab IDSIA, Nnaisense, Vicarious, Maluuba, the OpenCog Foundation, Adaptive AI, LIDA, and Numenta. Organizations such as the Machine Intelligence Research Institute and OpenAI also state AGI as their main goal. One of the goals of the international Human Brain Project is to simulate the human brain.

Despite a growing body of talent, tools, and high-quality data, we still have a long way to go to achieve AGI.

Today, AI techniques such as Machine Learning (ML) are ubiquitous in our society, ranging from healthcare and manufacturing to transportation and warfare - but they qualify as narrow AI. They process and learn from enormous amounts of data to identify insightful and informative patterns for a single task, such as predicting airline ticket prices, distinguishing dogs from cats in images, or generating your movie recommendations on Netflix.

In biology, AI is also changing your health care. It is generating more and better drug candidates (Insitro), sequencing your genome (Veritas Genetics), and detecting your cancer earlier and earlier (Freenome).

As humans, we are able to quickly acquire knowledge in one context and generalize it to new environments, situations, and tasks - an ability that would, for example, let us develop more efficient self-driving systems, which need to perform many tasks on the road concurrently. In AI research, this concept is known as transfer learning. It helps an AI system learn from just a few examples instead of the millions that traditional computing systems usually need, building a system that learns from first principles, abstracts the acquired knowledge, and generalizes it to new tasks and contexts.
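Here is a minimal numpy sketch of that transfer-learning idea, under invented assumptions: a representation fitted on a data-rich "task A" is reused, frozen, on a related "task B" that offers only 20 labelled examples. The synthetic tasks and the least-squares "pretraining" are stand-ins for real networks and datasets.

```python
# Transfer-learning sketch: reuse a representation learned on a big task
# for a related small task. Tasks and models are synthetic stand-ins.
import numpy as np

rng = np.random.default_rng(0)

def make_task(n, weights):
    """Synthetic task: labels depend on a direction shared across tasks."""
    X = rng.normal(size=(n, 10))
    y = (X @ weights > 0).astype(float)
    return X, y

source_direction = rng.normal(size=10)
target_direction = source_direction + 0.3 * rng.normal(size=10)   # related, not identical
X_a, y_a = make_task(5000, source_direction)     # data-rich source task
X_b, y_b = make_task(20, target_direction)       # data-poor target task

# "Pretraining": fit a projection on the large source task (a least-squares
# fit standing in for a learned neural representation).
w_source, *_ = np.linalg.lstsq(X_a, y_a, rcond=None)

def features(X):
    return X @ w_source                          # frozen, reused representation

# "Fine-tuning": with only 20 target labels, fit nothing but a 1-D threshold.
f_b = features(X_b)
threshold = (f_b[y_b == 1].mean() + f_b[y_b == 0].mean()) / 2

X_test, y_test = make_task(1000, target_direction)
pred = (features(X_test) > threshold).astype(float)
print(f"target-task accuracy with 20 labels: {(pred == y_test).mean():.2f}")
```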

To produce more advanced AI, we need to better understand the inner workings of the brain that allow us to portray the world around us. There is a synergistic mission between understanding biological intelligence and creating an artificial one; seeking inspiration from our brain might help us bridge that gap.

John Cumbers is the founder and CEO of SynBioBeta, the leading community of innovators, investors, engineers, and thinkers who share a passion for using synthetic biology to build a better, more sustainable universe. He publishes the weekly SynBioBeta Digest, hosts the SynBioBeta Podcast, and wrote What's Your Biostrategy?, the first book to anticipate how synthetic biology is going to disrupt virtually every industry in the world. He earned his PhD in Molecular Biology, Cell Biology, and Biochemistry from Brown University. Follow him on Twitter @SynBioBeta or @johncumbers.

A version of this article was originally published on the Forbes website as "Can Synthetic Biology Inspire The Next Wave Of AI?" and has been republished here with permission.

See the original post here:
Can synthetic biology help deliver an AI brain as smart as the real thing? - Genetic Literacy Project

These 6 Incredible Discoveries From The Past Decade Have Changed Science Forever – ScienceAlert

From finding the building blocks for life on Mars to breakthroughs in gene editing and the rise of artificial intelligence, here are six major scientific discoveries that shaped the 2010s - and what leading experts say could come next.

We don't yet know whether there was ever life on Mars - but thanks to a small, six-wheeled robot, we do know the Red Planet was habitable.

Shortly after landing on 6 August 2012, NASA's Curiosity rover discovered rounded pebbles - new evidence that rivers flowed there billions of years ago.

The evidence has since multiplied, showing there was in fact a lot of water on Mars - the surface was covered in hot springs, lakes, and maybe even oceans.

A crater on the Red Planet filled with water ice. (ESA/DLR/FU Berlin, CC BY-SA 3.0 IGO)

Curiosity also discovered what NASA calls the building blocks of life, complex organic molecules, in 2014.

And so the hunt continues for signs that Earth-based life is not (or wasn't always) alone.

Two new rovers will be launched next year - America's Mars 2020 and Europe's Rosalind Franklin rovers, looking for ancient microbes.

"Going into the coming decade, Mars research will shift from the question 'Was Mars habitable?' to 'Did (or does) Mars support life?'" said Emily Lakdawalla, a geologist at The Planetary Society.

We had long thought of the little corner of the Universe that we call home as unique, but observations made thanks to the Kepler space telescope blew apart those pretensions.

Launched in 2009, the Kepler mission helped identify more than 2,600 planets outside of our Solar System, also known as exoplanets - and astronomers believe each star has a planet, meaning there are billions out there.

Kepler's successor TESS was launched by NASA in 2018, as we scope out the potential for extraterrestrial life.

Expect more detailed analysis of the chemical composition of these planets' atmospheres in the 2020s, said Tim Swindle, an astrophysicist at the University of Arizona.

We also got our first glimpse of a black hole this year thanks to the groundbreaking work of the Event Horizon Telescope collaboration.

(Event Horizon Telescope Collaboration)

"What I predict is that by the end of the next decade, we will be making high quality real-time movies of black holes that reveal not just how they look, but how they act on the cosmic stage," Shep Doeleman, the project's director, told AFP.

But one event from the decade undoubtedly stood above the rest: the first detection, on September 14, 2015, of gravitational waves - ripples in the fabric of the universe.

The collision of two black holes 1.3 billion years earlier was so powerful it spread waves throughout the cosmos that bend space and travel at the speed of light. That morning, they finally reached Earth.

The phenomenon had been predicted by Albert Einstein in his theory of relativity, and here was proof he was right all along.

Three Americans won the Nobel prize in physics in 2017 for their work on the project, and there have been many more gravitational waves detected since.

Cosmologists meanwhile continue to debate the origin and composition of the universe. The invisible dark matter that makes up its vast majority remains one of the greatest puzzles to solve.

"We're dying to know what it might be," said cosmologist James Peebles, who won this year's Nobel prize in physics.

Clustered Regularly Interspaced Short Palindromic Repeats (CRISPR) - a family of DNA sequences - is a phrase that doesn't exactly roll off the tongue.

(Meletios Verras/iStock)

But the field of biomedicine can now be divided into two eras, one defined during the past decade: before and after CRISPR-Cas9 (or CRISPR for short), the basis for a gene editing technology.

"CRISPR-based gene editing stands above all the others," William Kaelin, a 2019 Nobel prize winner for medicine, told AFP.

In 2012, Emmanuelle Charpentier and Jennifer Doudna reported that they had developed the new tool that exploits the immune defense system of bacteria to edit the genes of other organisms.

It is much simpler than preceding technologies, cheaper, and easy to use in small labs.

Charpentier and Doudna were showered in awards, but the technique is also far from perfect and can create unintended mutations.

Experts believe this may have happened to Chinese twins born in 2018 as a result of edits performed by a researcher who was widely criticized for ignoring scientific and ethical norms.

Still, CRISPR remains one of the biggest science stories of recent years, with Kaelin predicting an "explosion" in its use to combat human disease.

For decades, doctors had three main weapons to fight cancer: surgery, chemotherapy drugs, and radiation.

The 2010s saw the rise of a fourth, one that was long doubted: immunotherapy, or leveraging the body's own immune system to target tumor cells.

(Design Cells/iStock)

One of the most advanced techniques is known as CAR T-cell therapy, in which a patient's T-cells - part of their immune system - are collected from their blood, modified and reinfused into the body.

A wave of drugs has hit the market since the mid-2010s for more and more types of cancer, including melanomas, lymphomas, leukemias and lung cancers - heralding what some oncologists hope could be a golden era.

For William Cance, scientific director of the American Cancer Society, the next decade could bring new immunotherapies that are "better and cheaper" than what we have now.

The decade began with a major new addition to the human family tree: Denisovans, named after the Denisova Cave in the Altai Mountains of Siberia.

Scientists sequenced the DNA of a female juvenile's finger bone in 2010, finding it was distinct both from genetically modern humans and Neanderthals, our most famous ancient cousins who lived alongside us until around 40,000 years ago.

The mysterious hominin species is thought to have ranged from Siberia to Indonesia, but the only remains have been found in the Altai region and Tibet.

We also learned that, contrary to earlier assumptions, Homo sapiens bred extensively with Neanderthals - and that our relatives were not brutish simpletons but were responsible for artworks, such as the handprints in a Spanish cave credited to them in 2018.

They also wore jewelry, and buried their dead with flowers - just like we do.

Next came Homo naledi, remains of which were discovered in South Africa in 2015, while this year, paleontologists classified yet another species found in the Philippines: a small-sized hominin called Homo luzonensis.

Advances in DNA testing have led to a revolution in our ability to sequence genetic material tens of thousands of years old, helping unravel ancient migrations, like that of the Bronze Age herders who left the steppes 5,000 years ago, spreading Indo-European languages to Europe and Asia.

"This discovery has led to a revolution in our ability to study human evolution and how we came to be in a way never possible before," said Vagheesh Narasimhan, a geneticist at Harvard Medical School.

One exciting new avenue for the next decade is paleoproteomics, which allows scientists to analyze bones millions of years old.

"Using this technique, it will be possible to sort out many fossils whose evolutionary position is unclear," said Aida Gomez-Robles, an anthropologist at University College London.

"Neo" skull of Homo naledi from the Lesedi Chamber. (John Hawks/University of the Witwatersrand)

Machine learning - what we most commonly mean when talking about "artificial intelligence" - came into its own in the 2010s.

Using statistics to identify patterns in vast datasets, machine learning today powers everything from voice assistants to recommendations on Netflix and Facebook.

So-called "deep learning" takes this process even further and begins to mimic some of the complexity of a human brain.

It is the technology behind some of the most eye-catching breakthroughs of the decade: from Google's AlphaGo, which beat the world champion of the fiendishly difficult game Go in 2017, to the advent of real-time voice translations and advanced facial recognition on Facebook.

In 2016, for example, Google Translate - launched a decade earlier - transformed from a service that provided results that were stilted at best, nonsensical at worst, to one that offered translations that were far more natural and accurate.

At times, the results even seemed polished.

"Certainly the biggest breakthrough in the 2010s was deep learning - the discovery that artificial neural networks could be scaled up to many real-world tasks," said Henry Kautz, a computer science professor at the University of Rochester.

"In applied research, I think AI has the potential to power new methods for scientific discovery," from enhancing the strength of materials to discovering new drugs and even making breakthroughs in physics, Kautz said.

For Max Jaderberg, a research scientist at DeepMind, owned by Google's parent company Alphabet, the next big leap will come via "algorithms that can learn to discover information, and rapidly adapt and internalize and act on this new knowledge," as opposed to depending on humans to feed them the correct data.

That could eventually pave the way to "artificial general intelligence", or a machine capable of performing any tasks humans can, rather than excelling at a single function.

Agence France-Presse

Read more here:

These 6 Incredible Discoveries From The Past Decade Have Changed Science Forever - ScienceAlert