Archive for the ‘Ai’ Category

‘Set it and forget it’: automated lab uses AI and robotics to improve proteins – Nature.com

Proteins were made in a laboratory by a completely autonomous robot. Credit: Panther Media GmbH/Alamy

A self-driving laboratory comprising robotic equipment directed by a simple artificial intelligence (AI) model successfully reengineered enzymes without any input from humans save for the occasional hardware fix.

"It is cutting-edge work," says Héctor García Martín, a physicist and synthetic biologist at Lawrence Berkeley National Laboratory in Berkeley, California. "They are fully automating the whole process of protein engineering."

Self-driving labs meld robotic equipment with machine-learning models capable of directing experiments and interpreting results to design new procedures. The hope, say researchers, is that autonomous labs will turbo-charge the scientific process and come up with solutions that humans might not have thought of on their own.

Protein engineering is an ideal task for a self-driving lab, says Philip Romero, a protein engineer at the University of Wisconsin–Madison who led the study1, published on 11 January in Nature Chemical Engineering. Conventional approaches tend to rely on developing an assay for a particular property (say, enzyme activity) and then screening vast numbers of mutated versions of the protein. "So much of the field of protein engineering is monotonous," he says.

The system that Romero's team created is powered by a relatively simple machine-learning model that relates a protein's sequence to its function and proposes sequence changes to improve function. It delivers protein sequences for testing to lab equipment that makes the protein, measures its activity and then feeds the results back to the model to guide a new round of experiments. "We set and forget it," Romero says.
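To make the loop concrete, here is a minimal conceptual sketch of this kind of propose-test-learn cycle in Python. It is not the team's actual code: the names SurrogateModel, mutate and synthesize_and_assay are hypothetical placeholders, and the scoring logic is a toy stand-in for the real sequence-to-function model.

```python
import random

class SurrogateModel:
    """Toy stand-in for the sequence-to-function model (hypothetical, for illustration)."""

    def __init__(self):
        self.observations = []  # list of (sequence, measured_activity) pairs

    def update(self, sequence, activity):
        self.observations.append((sequence, activity))

    def predict(self, sequence):
        # Placeholder scoring: similarity to the best sequence measured so far,
        # weighted by that sequence's activity. A real model would be far richer.
        if not self.observations:
            return 0.0
        best_seq, best_act = max(self.observations, key=lambda obs: obs[1])
        matches = sum(a == b for a, b in zip(sequence, best_seq))
        return best_act * matches / len(best_seq)

def mutate(sequence, alphabet="ACDEFGHIKLMNPQRSTVWY"):
    """Propose a single-point mutant of a protein sequence."""
    pos = random.randrange(len(sequence))
    return sequence[:pos] + random.choice(alphabet) + sequence[pos + 1:]

def synthesize_and_assay(sequence):
    """Stand-in for the robotic step that makes the protein and measures its activity."""
    raise NotImplementedError("Replace with calls to a cloud-lab or local automation API.")

def run_campaign(start_sequence, rounds=20, proposals_per_round=8):
    """One closed-loop campaign: propose, test, feed the result back, repeat."""
    model = SurrogateModel()
    current = start_sequence
    for _ in range(rounds):
        # 1. The model proposes candidate sequences predicted to improve function.
        candidates = [mutate(current) for _ in range(proposals_per_round)]
        best_candidate = max(candidates, key=model.predict)
        # 2. The automated lab builds the protein and measures its activity.
        activity = synthesize_and_assay(best_candidate)
        # 3. The measurement is fed back to refine the model for the next round.
        model.update(best_candidate, activity)
        current = best_candidate
    return current
```

The key design point mirrors the description above: every measurement flows back into the model before the next proposal is made.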

In the study, the researchers tasked their self-driving lab with making metabolic enzymes called glycoside hydrolases more tolerant of high temperatures. After 20 experimental rounds, each of 4 campaigns produced new versions of the enzymes that could operate at temperatures at least 12 °C warmer than the proteins the autonomous lab began with.

The researchers first attempted to run their own robotic equipment, but the machines kept breaking. So they turned to a cloud-based lab in California, an existing facility containing robotic equipment that can be directed remotely with computer code, and set their AI model to send instructions there. The entire experiment took around 6 months, including a 2.5-month pause due to shipping delays, and each 20-round run cost around US$5,200, the researchers estimate. A human might spend up to a year doing the same work.

Increasing the sophistication of self-driving biology labs might require a new generation of hardware, because existing automated lab equipment tends to be made with a human overseer in mind, says García Martín. A more fundamental challenge is to create self-driving labs able to generate knowledge that can be interpreted by machines, as well as humans.

Making proteins more heat stable is relatively simple, says Huimin Zhao, a synthetic biologist at the University of Illinois Urbana–Champaign. It's not clear how easily the self-driving lab can be adapted to alter enzymes in other ways.

Romero says his team is working on applying its self-driving lab to other protein-engineering challenges. The group also wants to incorporate more-sophisticated deep-learning tools that have driven advances in protein design.

The researchers are not, however, trying to slim down the scientific workforce. "We're not making humans redundant," said study co-author Jacob Rapp, a University of Wisconsin–Madison protein engineer, at an online seminar presenting the work. "We're replacing the boring parts, so that you can focus on the interesting bits of doing your engineering work."


New study: Countless AI experts don't know what to think on AI risk – Vox.com

In 2016, researchers at AI Impacts, a project that aims to improve understanding of advanced AI development, released a survey of machine learning researchers. They were asked when they expected the development of AI systems that are comparable to humans along many dimensions, as well as whether to expect good or bad results from such an achievement.

The headline finding: The median respondent gave a 5 percent chance of human-level AI leading to outcomes that were "extremely bad, e.g. human extinction." That means half of researchers gave an estimate higher than 5 percent, and half gave a lower one.

If true, that would be unprecedented. In what other field do moderate, middle-of-the-road researchers claim that the development of a more powerful technology, one they are directly working on, has a 5 percent chance of ending human life on Earth forever?


In 2016, before ChatGPT and AlphaFold, the result seemed much likelier to be a fluke than anything else. But in the eight years since then, as AI systems have gone from nearly useless to inconveniently good at writing college-level essays, and as companies have poured billions of dollars into efforts to build a true superintelligent AI system, what once seemed like a far-fetched possibility now seems to be on the horizon.

So when AI Impacts released their follow-up survey this week, the headline result (that between 37.8% and 51.4% of respondents gave at least a 10% chance of advanced AI leading to outcomes as bad as human extinction) didn't strike me as a fluke or a surveying error. It's probably an accurate reflection of where the field is at.

Their results challenge many of the prevailing narratives about AI extinction risk. The researchers surveyed don't subdivide neatly into doomsaying pessimists and insistent optimists. Many people who assign high probabilities to bad outcomes, the survey found, also assign high probabilities to good outcomes. And human extinction does seem to be a possibility that the majority of researchers take seriously: 57.8 percent of respondents said they thought extremely bad outcomes such as human extinction were at least 5 percent likely.

This visually striking figure from the paper shows how respondents think about what to expect if high-level machine intelligence is developed: Most consider both extremely good outcomes and extremely bad outcomes probable.

As for what to do about it, experts seem to disagree even more than they do about whether there's a problem in the first place.

The 2016 AI Impacts survey was immediately controversial. In 2016, barely anyone was talking about the risk of catastrophe from powerful AI. Could it really be that mainstream researchers rated it plausible? Had the researchers conducting the survey, who were themselves concerned about human extinction resulting from artificial intelligence, biased their results somehow?

The survey authors had systematically reached out to all researchers who published at the 2015 NIPS and ICML conferences (two of the premier venues for peer-reviewed research in machine learning) and managed to get responses from roughly a fifth of them. They asked a wide range of questions about progress in machine learning and got a wide range of answers: Really, aside from the eye-popping human extinction answers, the most notable result was how much ML experts disagreed with one another. (Which is hardly unusual in the sciences.)

But one could reasonably be skeptical. Maybe there were experts who simply hadn't thought very hard about their human extinction answer. And maybe the people who were most optimistic about AI hadn't bothered to answer the survey.

When AI Impacts reran the survey in 2022, again contacting thousands of researchers who published at top machine learning conferences, their results were about the same. The median probability of an "extremely bad (e.g., human extinction)" outcome was 5 percent.

That median obscures some fierce disagreement. In fact, 48 percent of respondents gave at least a 10 percent chance of an extremely bad outcome, while 25 percent gave a 0 percent chance. Responding to criticism of the 2016 survey, the team asked for more detail: how likely did respondents think it was that AI would lead to human extinction or similarly permanent and severe disempowerment of the human species? Depending on how they asked the question, this got results between 5 percent and 10 percent.

In 2023, in order to reduce and measure the impact of framing effects (different answers based on how the question is phrased), many of the key questions on the survey were asked of different respondents with different framings. But again, the answers to the question about human extinction were broadly consistent in the 5-10 percent range no matter how the question was asked.

The fact that the 2022 and 2023 surveys found results so similar to the 2016 result makes it hard to believe that the 2016 result was a fluke. And while in 2016 critics could correctly complain that most ML researchers had not seriously considered the issue of existential risk, by 2023 the question of whether powerful AI systems will kill us all had gone mainstream. It's hard to imagine that many peer-reviewed machine learning researchers were answering a question they'd never considered before.

I think the most reasonable reading of this survey is that ML researchers, like the rest of us, are radically unsure about whether to expect the development of powerful AI systems to be an amazing thing for the world or a catastrophic one.

Nor do they agree on what to do about it. Responses varied enormously on questions about whether slowing down AI would make good outcomes for humanity more likely. While a large majority of respondents wanted more resources and attention to go into AI safety research, many of the same respondents didn't think that working on AI alignment was unusually valuable compared to working on other open problems in machine learning.

In a situation with lots of uncertainty (like about the consequences of a technology like superintelligent AI, which doesn't yet exist), there's a natural tendency to want to look to experts for answers. That's reasonable. But in a case like AI, it's important to keep in mind that even the most well-regarded machine learning researchers disagree with one another and are radically uncertain about where all of us are headed.

A version of this story originally appeared in the Future Perfect newsletter.



Unlocking a new era for scientific discovery with AI: How Microsoft’s AI screened over 32 million candidates to find a … – Microsoft

AI is transforming every cognitive task we perform, from writing an email to developing software. Since the dawn of civilization, scientific discovery has been the ultimate cognitive task that has made us thrive and prosper as a species. For this reason, scientific discovery is probably the highest-impact and most exciting use case for AI. We are announcing how the Microsoft Quantum team achieved a major milestone toward that vision, using advanced AI to screen over 32 million candidates to discover and synthesize a new material that holds the potential for better batteries: the first real-life example of many that will be achieved in a new era of scientific discovery driven by AI.

We believe that chemistry and materials science are the hero scenario for full-scale quantum computers. That led us to design and launch Azure Quantum Elements, a product built specifically to accelerate scientific discovery with the power of AI, cloud computing, and eventually, full-scale quantum computers. Our beliefs were confirmed by working with companies like Johnson Matthey, 1910 Genetics, AkzoNobel, and many others, which led to the launch of Azure Quantum Elements in June. Over the summer, we had already demonstrated a massive screening of materials candidates, but we knew that showing what might be possible is not the same thing as proving the technology could identify something new and novel that could be synthesized. We needed a real proof point and decided to start with something useful from everyday life to hyperscale data centers: battery technology.

As demonstrated in results published in August, we used novel AI models to digitally screen over 32 million potential materials and found over 500,000 stable candidates. However, identifying candidates is only the first step of scientific discovery. Finding a material among those candidates with the right properties for the task, in this case for a new solid-state battery electrolyte, is like finding a needle in a haystack. It would involve lengthy high-performance computing (HPC) calculations and costly lab experimentation that would take multiple lifespans to complete.

Today we are sharing how AI is radically transforming this process, accelerating it from years to weeks to just days. Joining forces with the Department of Energy's Pacific Northwest National Laboratory (PNNL), the Azure Quantum team applied advanced AI along with expertise from PNNL to identify a new material, unknown to us and not present in nature, with potential for resource-efficient batteries. Not only that, PNNL scientists synthesized and tested this material candidate from raw material to a working prototype, demonstrating its unique properties and its potential for a sustainable energy-storage solution, using significantly less lithium than other materials announced by industry.

This is important for many reasons. Solid-state batteries are assumed to be safer than traditional liquid or gel-like lithium batteries, and they provide more energy density. Lithium is already relatively scarce, and thus expensive. Mining it is environmentally and geopolitically problematic. Creating a battery that might reduce lithium requirements by approximately 70% could have tremendous environmental, safety, and economic benefits.

This collaboration is just the beginning of an exciting new journey bringing the power of AI to nearly every aspect of scientific research. More broadly, Microsoft is putting these breakthroughs into customers' hands through our Azure Quantum Elements platform. It is the combination of scientific expertise and AI that will compress the next 250 years of chemistry and materials science innovation into the next 25, transforming every industry and ultimately unlocking a new era for scientific discovery.

You can learn more about Microsoft's approach that enabled this rapid scientific discovery in the following paper.

Many of the hardest problems facing society, like reversing climate change, addressing food insecurity, or solving energy crises, are related to chemistry and materials science. We've long believed that materials discovery is a key scenario for tackling some of these issues, but time is our greatest challenge: the number of possible stable materials that must be explored to find solutions is believed to surpass the number of atoms in the known universe. That's why at Microsoft, we recently released Azure Quantum Elements. Our cloud platform brings together a new generation of AI, cloud-powered HPC, and eventually quantum computing breakthroughs to empower our partners with the right tools to drive innovation by accelerating their discovery pipeline and dramatically reducing the time to screen new candidates.

PNNL advances the frontiers of knowledge, taking on some of the world's greatest science and technology challenges. Distinctive strengths in chemistry, Earth sciences, biology, and data science are central to its scientific discovery mission. PNNL has established leadership in developing and validating next-generation energy storage technologies. Among the most recognizable forms of portable energy storage, lithium-ion batteries remain a cornerstone because of their high energy-storage capacity and long lifespan.

Lithium and other strategic elements used in these batteries are finite resources with limited and geographically concentrated supplies. One of the main thrusts of our work at PNNL has been identifying new materials for the increased energy storage needs of the future, ones made with sustainable materials that conserve and protect the Earth's limited resources.

Through this collaboration, Microsoft and PNNL harnessed AI and cloud-powered HPC to accelerate research aimed at creating new types of battery materials, such as those that use less lithium than traditional lithium-ion batteries while maintaining significant conductivity. These new types of batteries could benefit both the environment and consumers. Within nine months, PNNL validated this proof of concept, demonstrating the potential of new HPC and AI approaches to significantly accelerate the innovation cycle: it would be impossible for researchers to synthesize and test the millions of materials that advanced AI models evaluated in less than a week.

To achieve these results, our Azure Quantum team at Microsoft combined cloud-powered HPC calculations with new AI models that estimate characteristics of materials related to energy, force, stress, electronic band gap, and mechanical properties. These models have been trained on millions of data points from materials simulations and are thus able to minimize HPC calculations and predict materials properties 1,500 times faster than traditional density functional theory (DFT) calculations.

We began with 32.6 million candidate materials, created by substituting elements in known crystal structures with a sampling of elements across a subset of the periodic table. As a first application, we filtered this set of candidates using a workflow that combined our AI models of materials with conventional HPC-based simulations.

The first stage of screening, published in August, used AI models. From the initial pool of 32.6 million materials, we found 500,000 materials predicted to be stable. We used AI models to screen this pool of materials for functional properties like redox potential and band gap, further reducing the number of potential candidates to about 800. The second screening stage combined physics simulations with the AI models. Microsoft Azure HPC was used for DFT calculations to confirm the properties from AI screening. AI models have a non-zero prediction error, so the DFT validation step is used to re-compute the properties that the AI models predicted, as a higher-accuracy filter. This step was followed by molecular dynamics (MD) simulations to model structural changes.

Then, our Microsoft Quantum researchers used AI-accelerated MD simulations to investigate dynamic properties like ionic diffusivity. These simulations used AI models for forces at each MD step, rather than the slower DFT-based method. This stage reduced the number of candidates to 150. Then, practical features such as novelty, mechanics, and element availability were taken into consideration to create the set of 18 top candidates.
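As a rough illustration of how such a screening funnel can be organized in software, here is a short, hypothetical Python sketch. The stage names mirror the steps described above, but every predicate, field name and threshold is invented for illustration; this is not the actual Azure Quantum Elements workflow or API.

```python
from typing import Callable, Dict, Iterable, List, Tuple

def screening_funnel(candidates: Iterable[Dict],
                     stages: List[Tuple[str, Callable[[Dict], bool]]]) -> List[Dict]:
    """Apply filters from cheapest to most expensive, reporting how many candidates survive."""
    pool = list(candidates)
    for name, keep in stages:
        pool = [material for material in pool if keep(material)]
        print(f"{name}: {len(pool)} candidates remain")
    return pool

# Hypothetical wiring that mirrors the stages described in the text; the field names
# and thresholds are placeholders, not values from the actual study.
stages = [
    ("AI stability screen",      lambda m: m["predicted_stability"] > 0.9),
    ("AI property screen",       lambda m: m["band_gap_ok"] and m["redox_ok"]),
    ("DFT validation",           lambda m: m["dft_confirms_properties"]),
    ("AI-accelerated MD screen", lambda m: m["ionic_diffusivity"] > 1e-7),
    ("Practical considerations", lambda m: m["novel"] and m["elements_available"]),
]

# Usage (with hypothetical candidate records):
# shortlist = screening_funnel(candidate_materials, stages)
```

The ordering is the point of the design: the cheap AI filters run over tens of millions of structures, while the expensive DFT and MD stages only ever see the small pool that survives the earlier steps.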

From there, PNNL's expertise provided insights into additional screening parameters that further narrowed the final structural candidates. The researchers at PNNL then synthesized the top candidate, characterized its structure, and measured its conductivity. The new electrolyte candidate uses approximately 70% less lithium than existing lithium-ion batteries, by replacing some lithium with sodium, an abundant element.

In tests across a range of temperatures, the new compound displayed viable ionic conductivity, indicating its potential as a solid-state electrolyte material. After verifying the conductivity of the sodium-lithium chemical composition, the PNNL research team demonstrated the electrolyte's technical viability by building a working all-solid-state battery, which was tested at both room temperature and high temperature (~80 °C).

The discovery of this new type of electrolyte material is notable not only for its potential as a sustainable energy-storage solution, but also because it demonstrates that researchers can dramatically accelerate time to results with advanced AI models. While further validation and optimization of the material is ongoing, this initial end-to-end process took less than nine months and is the first step in a promising collaboration between Microsoft and PNNL. The discovery of other materials that could increase the sustainability of energy storage is likely on the horizon.

We bring our scientific expertise to bear on picking the most promising material candidates to move forward with. In this case, we had the AI insights that pointed us to potentially fruitful territory so much faster. After Microsoft's team discovered 500,000 stable materials with AI that could be used across a variety of transformative applications, we were able to modify, test, and tune the chemical composition of this new material and quickly evaluate its technical viability for a working battery, showing the promise of advanced AI to accelerate the innovation cycle.

This achievement is indicative of the coming paradigm shift in how organizations across a wide range of industries approach research and development: organizations can now use computational breakthroughs to accelerate scientific discovery due to the convergence of HPC and AI. While this combination will provide scale and speed for performing quantum chemistry calculations, classical computing cannot solve certain problems, such as those involving many highly correlated electrons, without sacrificing accuracy. Quantum supercomputing will help increase accuracy, and Azure Quantum Elements will integrate Microsoft's scaled quantum supercomputer when available.

Azure Quantum Elements includes quantum-ready tools to prepare for the fast-approaching quantum future. For example, scientists can use it to identify the active space of molecular systems and estimate the quantum computing resources needed for large active-space systems. These tools will enable the development and optimization of hybrid algorithms (those that combine classical and scaled quantum computing) so that researchers are prepared for a quantum future.

The discovery of 500,000 stable materials with AI, leading to the identification and synthesis of a new material, is just one of the many possibilities for how Azure Quantum Elements will create unprecedented opportunities. Almost all manufactured goods would benefit from innovations in the fields of chemistry and materials science, and our goal is to enable discoveries across all industries by empowering research and development (R&D) teams with a platform that every scientist can use.

Join us in exploring the potential of Azure Quantum Elements to revolutionize chemistry and materials development.


Survey Reveals Financial Industry's Top Trends for 2024 – Nvidia

The financial services industry is undergoing a significant transformation with the adoption of AI technologies. NVIDIA's fourth annual State of AI in Financial Services Report provides insights into the current landscape and emerging trends for 2024.

The report reveals that an overwhelming 91% of financial services companies are either assessing AI or already using it in production. These firms are using AI to drive innovation, improve operational efficiency and enhance customer experiences.

Portfolio optimization, fraud detection and risk management remain top AI use cases, while generative AI is quickly gaining popularity with organizations keen to uncover new efficiencies.

Below are the report's key findings, which show how the financial services industry is evolving as advanced AI becomes more accessible.

Reflecting a macro-trend seen across industries, large language models (LLMs) and generative AI have emerged as significant areas of interest for financial services companies. Fifty-five percent of survey respondents reported that they were actively seeking generative AI workflows for their companies.

Organizations are exploring generative AI and LLMs for an array of applications ranging from marketing and sales ad copy, email copy and content production to synthetic data generation. Of these use cases, 37% of respondents showed interest in report generation, synthesis and investment research to cut down on repetitive manual work.

Customer experience and engagement was another sought-after use case, with a 34% response rate. This suggests that financial services institutions are exploring chatbots, virtual assistants and recommendation systems to enhance the customer experience.

With 75% of survey respondents considering their organizations' AI capabilities to be industry leading or middle of the pack, financial services organizations are becoming more confident in their ability to build, deploy and extract value from AI implementations.

The most popular uses for AI were in operations, risk and compliance, and marketing. To improve operational efficiency, financial organizations are using AI to automate manual processes, enhance data analysis and inform investment decisions.

To enhance risk and compliance, they're deploying AI to analyze vast amounts of data to identify suspicious activities and anomalous transaction patterns. They're also using AI to analyze customer data to predict preferences and deliver personalized marketing campaigns, educational content and targeted promotions.

Companies are already seeing results. Forty-three percent of financial services professionals indicated that AI had improved their operational efficiency, while 42% felt it had helped their business build a competitive advantage.

In previous years, the number one challenge respondents reported was recruiting AI experts and data scientists. This year, with a 30% increase in survey participants, respondents resoundingly reported that data-related challenges were the primary concern. These include data privacy challenges, data sovereignty and data scattered around the globe governed by different oversight regulations.

The growing attention to these issues reflects the advancing power and complexity of AI models, which require huge, diverse datasets to train, as well as increasing regulatory scrutiny and emphasis on responsible AI.

Recruiting and retaining AI experts remains a challenge, as do budget concerns. But more than 60% of respondents are still planning to increase investment in computing infrastructure or optimizing AI workflows, underscoring the importance of these tools in quickly building and deploying trustworthy AI to overcome these barriers.

By and large, the survey results paint a positive picture of AI bringing greater efficiency to operations, personalization to customer engagements, and precision to investment decisions.

Finance professionals agree. Eighty-six percent of respondents reported a positive impact on revenue, while 82% noted a reduction in costs. Fifty-one percent strongly agreed that AI would be important to their company's future success, a 76% increase from last year.

With this positive outlook, 97% of companies plan to invest more in AI technologies in the near future. Focus areas for future investments include identifying additional AI use cases, optimizing AI workflows and increasing infrastructure spending.

To build and scale impactful AI across the enterprise, financial services organizations need a comprehensive AI platform that empowers data scientists, quants and developers to seamlessly collaborate while minimizing obstacles. To that end, executives are investing more in AI infrastructure and prioritizing high-yield AI use cases to improve employee productivity while delivering superior customer experiences and investment results.

Download the State of AI in Financial Services: 2024 Trends report for in-depth results and insights.

Explore NVIDIA's AI solutions and enterprise-level AI platforms for delivering smarter, more secure financial services and the AI-powered bank.


NSA is creating a hub for AI security, Nakasone says – The Record from Recorded Future News

The National Security Agency is consolidating its various artificial intelligence efforts into a new hub, its director announced Thursday.

The Artificial Intelligence Security Center will become the spy agency's focal point for AI activities such as leveraging foreign intelligence insights, helping to develop best practices guidelines for the fast-developing technology and creating risk frameworks for AI security, Army Gen. Paul Nakasone said during an event at the National Press Club in Washington.

The new entity will be housed within the agency's Cybersecurity Collaboration Center and help industry understand the threats against their intellectual property and collaborate to help prevent and eradicate threats, Nakasone told the audience, adding that it would team with organizations throughout the Defense Department, intelligence community, academia and foreign partners.

The announcement comes after the NSA and U.S. Cyber Command, which Nakasone also helms, recently finished separate reviews of how they would use artificial intelligence in the future. The Central Intelligence Agency also said it plans to launch its own artificial intelligence-based chatbot.

One of the findings of the study was a clear need to focus on AI security, according to Nakasone, who noted NSA has particular responsibilities for such work because the agency is the designated federal manager for national security systems and already has extensive ties to the sprawling defense industrial base.

While U.S. firms are increasingly acquiring and developing generative AI technology, foreign adversaries are also moving quickly to develop and apply their own AI, and we anticipate they will begin to explore and exploit vulnerabilities of U.S. and allied AI systems, the four-star warned.

He described AI security as protecting systems from learning, doing and revealing the wrong thing, as well as safeguarding them from digital attacks and ensuring malicious foreign actors can't steal America's innovative AI capabilities.

Nakasone did not specify who would lead the center or how large it might grow.

"Today, the U.S. leads in this critical area, but this lead should not be taken for granted," he said.


Martin Matishak is a senior cybersecurity reporter for The Record. He spent the last five years at Politico, where he covered Congress, the Pentagon and the U.S. intelligence community and was a driving force behind the publication's cybersecurity newsletter.
