Archive for the ‘Artificial Intelligence’ Category

Artificial Intelligence Will Change the World. Here Are the 3 Stocks … – The Motley Fool

Artificial intelligence (AI) is having a breakout moment. Thought leaders across multiple industries have noted that AI is currently at an inflection point and will change how people and organizations work from this point on.

Moreover, AI-related stocks have been some of the best performers in 2023. With that in mind, let's review three stocks these Fool.com contributors think have the most to gain from the AI revolution: Nvidia (NVDA 4.18%), Upstart (UPST -6.62%), and Microsoft (MSFT -0.55%).

Image source: Getty Images.

Jake Lerch (Nvidia): To me, this choice is simple: No company has more to gain from the unfolding emergence of artificial intelligence (AI) than Nvidia.

That's because the complex algorithms behind groundbreaking AI applications require vast amounts of computing power. Many are trained through deep learning, which requires AI neural networks to sift through unimaginably enormous amounts of data, finding patterns and drawing conclusions. To do this, the AI networks need powerful processors that can quickly perform multiple tasks simultaneously -- something at which Nvidia's products excel.

That's why Microsoft is powering its AI-powered virtual machines with Nvidia's GPUs, and Amazon's AWS has partnered with Nvidia for more than a decade for its machine learning and high-performance computing solutions.

What's more, as AI begins to move out from the cloud, Nvidia's GPUs will be called upon to facilitate this new stage. Autonomous driving will require immense processing power within vehicles to keep passengers safe. Meanwhile, the manufacturing, healthcare, and retail sectors will likely adopt non-cloud-based AI applications requiring Nvidia's products.

Accordingly, Wall Street is raising earnings estimates for Nvidia. The average estimate for current-year earnings has increased from $4.30/share in January to $4.53; next year's estimates have jumped from $5.63 to $6.05. Revenue is expected to increase by 11.5% this year and 24% next year.

In summary, the emergence of AI will be a boon to many different companies, but it's difficult to see how Nvidia won't be the biggest winner of them all.

Will Healy (Upstart): Admittedly, Upstart's ability to pay off handsomely for investors revolves around the fact that it has lost so much. The bear market and struggles with rising interest rates have led to a 96% drop in its value since October 2021.

But the other factor is the growth potential of this consumer finance stock. Not only could Upstart make a comeback, but it may also change an industry through its AI-driven credit scoring system. The current leader in credit scoring, Fair Isaac Corporation, dominates the industry, with approximately 90% of lenders using its FICO score to rate borrowers.

However, Fair Isaac has not significantly changed its scoring system since its introduction in 1989, leaving it vulnerable to disruption. According to an internal study, Upstart's system led to 53% fewer defaults at comparable approval rates.

In an environment of rising interest rates, reducing defaults becomes more of a priority for banks. Also, given the power of this technology, more lenders will probably demand an AI-based solution. Such factors play into the hands of Upstart.

Indeed, interest in the platform continues to rise. The number of banks and credit unions using it has risen from 42 to 92 over the last year. Also, 778 auto dealers use Upstart's tool to evaluate car buyers seeking loans, up from 410 over a 12-month time frame.

Nonetheless, the numbers also highlight that Upstart's potential for massive gains brings with it a considerable level of risk. Only two banks account for 87% of its business. Also, rising rates have reversed revenue growth that shot into the triple digits as recently as one year ago.

In 2022, revenue grew by 1% to $849 million. But that growth ranged from a 156% yearly gain in Q1 to a 52% annual decline by Q4. Additionally, those deteriorating financials led to a loss of $109 million, down from a profit of $135 million in 2021.

Still, the declines have taken its price-to-sales (P/S) ratio to 1.5, down from a peak of 48 in the fall of 2021. That valuation and the much lower stock price could persuade investors to take a chance in the hope that the green flags for Upstart's future outweigh Upstart's risks.

Justin Pope (Microsoft): Everyone knows that Microsoft is one of the world's largest companies -- a conglomerate of enterprise software, cloud technology, gaming, and more. But its blooming relationship with ChatGPT creator OpenAI is starting to reap some promising rewards for long-term investors.

Microsoft added AI capabilities to its Bing search engine, which could potentially change the entire picture for the company. Rival Alphabet has dominated search for years, holding more than 90% of the global search share. That translates to approximately $160 billion in annual revenue for the tech giant. Alphabet's dominance has virtually locked Microsoft out of this highly lucrative market (Alphabet converts about 20% of its revenue into free cash flow).

Bing's resurgence is already showing some signs pointing to future growth. Microsoft CEO Satya Nadella noted on the company's most recent earnings call that Bing now has 100 million daily active users, and daily mobile app downloads have quadrupled since its launch. Even more interesting was a recent report that smartphone manufacturer Samsung was considering dropping Google for Bing as its default search engine.

Investors shouldn't get ahead of themselves -- Bing has a long way to go to become a serious threat to Google's throne. Google is the world's most visited website and has become a verb synonymous with internet searches. That mojo doesn't die overnight.

However, given Microsoft's revenue was $205 billion over the past year, becoming a legitimate competitor in the $160 billion search engine space can make a positive difference in Microsoft's overall long-term growth.

Suzanne Frey, an executive at Alphabet, is a member of The Motley Fool's board of directors. John Mackey, former CEO of Whole Foods Market, an Amazon subsidiary, is a member of The Motley Fool's board of directors. Jake Lerch has positions in Alphabet, Amazon.com, and Nvidia. Justin Pope has positions in Upstart. Will Healy has positions in Upstart. The Motley Fool has positions in and recommends Alphabet, Amazon.com, Microsoft, Nvidia, and Upstart. The Motley Fool recommends Fair Isaac. The Motley Fool has a disclosure policy.


Using Artificial Intelligence to Speed up Discovery of New Drugs – Neuroscience News

Summary: Experts see a bright future in the complementary use of artificial intelligence (AI) and structure-based methods for drug discovery. Researchers explain how computational methods will streamline drug discovery by predicting which drug molecules are most likely to bind with the target receptor. The structure-based and AI-based approaches complement each other and can save time and money while yielding better results than traditional trial-and-error methods.

Source: USC

Artificial intelligence can generate poems and essays, create responsive game characters, analyze vast amounts of data and detect patterns that the human eye might miss. Imagine what AI could do for drug discovery, traditionally a time-consuming, expensive process from the bench to the bedside.

Experts see great promise in a complementary approach using AI and structure-based drug discovery, a computational method that relies on knowledge of 3D structures of biological targets.

We recently caught up with Vsevolod "Seva" Katritch, associate professor of quantitative and computational biology and chemistry at the USC Dornsife College of Letters, Arts and Sciences and the USC Michelson Center for Convergent Bioscience. Katritch is the co-director of the Center for New Technologies in Drug Discovery and Development (CNT3D) at the USC Michelson Center and the lead author of a new review paper published in Nature. The paper, co-authored by USC research scientist Anastasiia Sadybekov, describes how computational approaches will streamline drug discovery.

There has been a seismic shift in computational drug discovery in the last few years: an explosion of data availability on clinically relevant, human-protein structures and molecules that bind them, enormous chemical libraries of drug-like molecules, almost unlimited computing power and new, more efficient computational methods.

The newest excitement is about AI-based drug discovery, but what's even more powerful is a combination of AI and structure-based drug discovery, with both approaches synergistically complementing each other.

Traditional drug discovery is mostly a trial-and-error venture. It's slow and expensive, taking an average of 15 years and $2 billion. There's a high attrition rate at every step, from target selection to lead optimization. The greatest opportunities for time and cost savings reside in the earlier discovery and preclinical stages.

Let's use a lock-and-key analogy. The target receptor is the lock, and the drug that blocks or activates this receptor is a key for this lock. (Of course, the caveat is that in biology nothing is black or white, so some working keys turn the lock better than others, and the lock is a bit malleable, too.)

Here's an example. Lipitor, the bestselling drug of all time, targets an enzyme involved in the synthesis of cholesterol in the liver. A receptor on the enzyme is the lock. Lipitor is the key, fitting into the lock and blocking the activity of the enzyme, triggering a series of events that decrease blood levels of bad cholesterol.

Now, computational approaches allow us to digitally model many billions and even trillions of virtual keys and predict which ones are likely to be good keys. Only a few dozen of the best candidate keys are chemically synthesized and tested.

If the model is good, this process yields better results than traditional trial-and-error testing of millions of random keys. This reduces the physical requirements for synthesizing and testing compounds more than a thousandfold, while often arriving at better results, as demonstrated by our work and that of many other groups in this field.

Following the lock-and-key analogy, the structure-based approach takes advantage of our detailed understanding of the lock's structure. If the 3D, physical structure of the lock is known, we can use virtual methods to predict the structure of a key that matches the lock.

The machine learning, or AI-based, approach works best when many keys are already known for our target lock or for other, similar locks. AI can then analyze this mixture of similar locks and keys and predict the keys that are most likely to fit our target. It does not need exact knowledge of the lock structure, but it needs a large collection of relevant keys.

Thus, the structure-based and AI-based approaches are applicable in different cases and complement each other.

When testing billions or trillions of virtual compounds on cloud computers, computational costs themselves can become a bottleneck. A modular, giga-scale screening technology lets us speed up the process and reduce costs dramatically by virtually predicting good parts of the key and then combining them, in effect building the key from several parts. For a 10 billion-compound library, this drops the computational costs from millions of dollars to hundreds, and it allows further scale-ups to trillions of compounds.
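The arithmetic behind this modular approach can be sketched in a few lines. The sketch below is purely illustrative (fragment counts, the scoring function, and the cutoff are invented placeholders, not values from the paper): scoring each fragment once and enumerating only combinations of the top fragments requires far fewer scoring calls than enumerating the full combinatorial library.

```python
import itertools

# Hypothetical fragment libraries: each compound is assembled from three
# parts. Sizes and names are illustrative placeholders.
parts_a = [f"A{i}" for i in range(100)]
parts_b = [f"B{i}" for i in range(100)]
parts_c = [f"C{i}" for i in range(100)]

def dock_score(fragment: str) -> int:
    """Deterministic stand-in for a docking score (lower = better)."""
    return sum(ord(ch) for ch in fragment) % 997

# Brute force would score every full combination:
full_library_size = len(parts_a) * len(parts_b) * len(parts_c)  # 1,000,000

# Modular screening: score each fragment once, keep the best few per
# position, then enumerate only combinations of those top fragments.
TOP = 5
best_a = sorted(parts_a, key=dock_score)[:TOP]
best_b = sorted(parts_b, key=dock_score)[:TOP]
best_c = sorted(parts_c, key=dock_score)[:TOP]
candidates = list(itertools.product(best_a, best_b, best_c))  # 125 compounds

# Total scoring calls: one per fragment plus one per assembled candidate.
scored = len(parts_a) + len(parts_b) + len(parts_c) + len(candidates)
print(f"full enumeration: {full_library_size} scores")   # 1000000
print(f"modular screening: {scored} scores")             # 425
```

Even in this toy setting, the scoring workload drops by more than three orders of magnitude, which is the spirit of the "building the key from parts" strategy described above.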

Author: Leigh Hopper
Source: USC
Contact: Leigh Hopper, USC
Image: The image is credited to Neuroscience News

Original Research: Closed access. "Computational approaches streamlining drug discovery" by Anastasiia V. Sadybekov et al. in Nature.

Abstract

Computational approaches streamlining drug discovery

Computer-aided drug discovery has been around for decades, although the past few years have seen a tectonic shift towards embracing computational technologies in both academia and pharma.

This shift is largely defined by the flood of data on ligand properties and binding to therapeutic targets and their 3D structures, abundant computing capacities and the advent of on-demand virtual libraries of drug-like small molecules in their billions.

Taking full advantage of these resources requires fast computational methods for effective ligand screening. This includes structure-based virtual screening of gigascale chemical spaces, further facilitated by fast iterative screening approaches. Highly synergistic are developments in deep learning predictions of ligand properties and target activities in lieu of receptor structure.

Here we review recent advances in ligand discovery technologies, their potential for reshaping the whole process of drug discovery and development, as well as the challenges they encounter.

We also discuss how the rapid identification of highly diverse, potent, target-selective and drug-like ligands to protein targets can democratize the drug discovery process, presenting new opportunities for the cost-effective development of safer and more effective small-molecule treatments.


WEIRD AI: Understanding what nations include in their artificial intelligence plans – Brookings Institution

In 2021 and 2022, the authors published a series of articles on how different countries are implementing their national artificial intelligence (AI) strategies. In these articles, we examined how different countries view AI and looked at their plans for evidence to support their goals. In later articles, we examined who was winning and who was losing in the race to national AI governance, as well as the importance of people skills versus technology skills, and concluded with what the U.S. needs to do to become competitive in this domain.

Since these publications, several key developments have occurred in national AI governance and international collaborations. First, one of our key recommendations was that the U.S. and India create a partnership to work together on a joint national AI initiative. Our argument was as follows: India produces far more STEM graduates than the U.S., and the U.S. invests far more in technology infrastructure than India does. A U.S.-India partnership eclipses China in both dimensions, and a successful partnership could allow the U.S. to quickly leapfrog China in all meaningful aspects of AI. In early 2023, U.S. President Biden announced a formal partnership with India to do exactly what we recommended, countering the growing threat of China and its AI supremacy.

Second, as we observed in our prior paper, the U.S. federal government has invested in AI, but largely in a decentralized approach. We warned that this approach, while it may ultimately develop the best AI solution, requires a long ramp up and hence may not achieve all its priorities.

Finally, we warned that China is already in the lead on the achievement of its national AI goals and predicted that it would continue to surpass the U.S. and other countries. News has now come that China is planning on doubling its investment in AI by 2026, and that the majority of the investment will be in new hardware solutions. The U.S. State Department also is now reporting that China leads the U.S. in 37 out of 44 key areas of AI. In short, China has expanded its lead in most AI areas, while the U.S. is falling further and further behind.

Considering these developments, our current blog shifts focus away from national AI plan achievement to a more micro view: understanding the elements of the particular plans of the countries included in our research and what drove their strategies. At a macro level, we also seek to understand whether groups of like-minded countries, which we have grouped by cultural orientation, are taking the same or different approaches to AI policies. This builds upon our previous posts by seeking and identifying consistent themes across national AI plans from the perspective of underlying national characteristics.

In this blog, the countries that are part of our study include 34 nations that have produced public AI policies, as identified in our previous blog posts: Australia, Austria, Belgium, Canada, China, Czechia, Denmark, Estonia, Finland, France, Germany, India, Italy, Japan, South Korea, Lithuania, Luxembourg, Malta, Mexico, Netherlands, New Zealand, Norway, Poland, Portugal, Qatar, Russia, Serbia, Singapore, Spain, Sweden, UAE, UK, Uruguay, and USA.

For each, we examine six key elements in these national AI plans: data management, algorithmic management, AI governance, research and development (R&D) capacity development, education capacity development, and public service reform capacity development. These elements provide insight into how individual countries approach AI deployment. In doing so, we examine commonalities between culturally similar nations, which can lead to both higher and lower levels of investment in each area.

We do this by exploring similarities and differences through what is commonly referred to as the WEIRD framework, a typology of countries based on how Western, Educated, Industrialized, Rich, and Democratic they are. In 2010, the concept of WEIRD-ness originated with Joseph Henrich, a professor of human evolutionary biology at Harvard University. The framework describes a set of countries with a particular psychology, motivation, and behavior that can be differentiated from other countries. WEIRD is, therefore, one framework by which countries can be grouped and differentiated to determine if there are commonalities in their approaches to various issues based on similar decision-making processes developed through common national assumptions and biases.

Below are our definitions of each element of national AI plans, followed by where they fall along the WEIRD continuum.

Data management refers to how the country envisages capturing and using the data derived from AI. For example, the Singapore plan defines data management as follows: "[A]s the nation's custodian of personal and administrative data, the Government holds a data resource that many companies find valuable. The Government can help drive cross-sectoral data sharing and innovation by curating, cleaning, and providing the private sector with access to Government datasets."

Algorithmic management addresses the country's awareness of algorithmic issues. For example, the German plan states that "[t]he Federal Government will assess how AI systems can be made transparent, predictable and verifiable so as to effectively prevent distortion, discrimination, manipulation and other forms of improper use, particularly when it comes to using algorithm-based prognosis and decision-making applications."

AI governance refers to the inclusivity, transparency and public trust in AI and the need for appropriate oversight. The language in the French plan asserts: "[i]n a world marked by inequality, artificial intelligence should not end up reinforcing the problems of exclusion and the concentration of wealth and resources. With regards to AI, a policy of inclusion should thus fulfill a dual objective: ensuring that the development of this technology does not contribute to an increase in social and economic inequality; and using AI to help genuinely reduce these problems."

Overall, capacity development is the process of acquiring, updating and reskilling human, organizational and policy resources to adapt to technological innovation. We examine three types of capacity development: R&D, education, and public service reform.

R&D capacity development focuses on government incentive programs for encouraging private sector investment in AI. For example, the Luxembourg plan states: "[t]he Ministry of the Economy has allocated approximately 62M in 2018 for AI-related projects through R&D grants, while granting a total of approximately 27M in 2017 for projects based on this type of technology. The Luxembourg National Research Fund (FNR), for example, has increasingly invested in research projects that cover big data and AI-related topics in fields ranging from Parkinson's disease to autonomous and intelligent systems, approximately 200M over the past five years."

Education capacity development focuses on learning in AI at the post-secondary, vocational and secondary levels. For example, the Belgian plan states: "Overall, while growing, the AI offering in Belgium is limited and insufficiently visible. [W]hile university-college PXL is developing an AI bachelor programme, to date, no full AI Master or Bachelor programmes exist."

Public service reform capacity development focuses on applying AI to citizen-facing or supporting services. For example, the Finnish plan states: "Finland's strengths in piloting [AI projects] include a limited and harmonised market, neutrality, abundant technology resources and support for legislation. Promoting an experimentation culture in public administration has brought added agility to the sector's development activities."

In the next step of our analysis, we identify the level of each country on each element and then group countries by their WEIRD-ness. Western uses the World Population Review's definition of the "Latin West," a group of countries sharing a common linguistic and cultural background, centered on Western Europe and its post-colonial footprint; a country is either in or out of this group. Educated is based on the mean years of schooling in the UN Human Development Index, where 12 years (high school graduate) is considered the dividing point between high and low education. Industrialized adopts the World Bank's industry value added of GDP, where a median value of $3,500 USD per capita of value added separates high from low industrialization. Rich uses the Credit Suisse Global Wealth Databook's mean wealth per adult measure, where $125k USD wealth is the median among countries. Democratic applies the Democracy Index of the Economist Intelligence Unit, which differentiates between shades of democratic and authoritarian regimes and where the midpoint of hybrid regimes (5.0 out of 10) is the dividing point between democratic and non-democratic.

For example, Australia, Austria, and Canada are considered Western, while China, India, and Korea are not. Germany, the U.S., and Estonia are seen as Educated, while Mexico, Uruguay, and Spain are not. Canada, Denmark, and Luxembourg are considered Industrialized, while Uruguay, India, and Serbia are not. Australia, France, and Luxembourg are determined to be Rich, while China, Czechia, and India are not. Finally, Sweden, the UK, and Finland are found to be Democratic, while China, Qatar, and Russia are not.
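The classification rule above amounts to five binary cutoffs. As a rough sketch, it can be encoded as a small function; the thresholds follow the definitions in this section, but the country figures in the example are invented placeholders, not the study's underlying data.

```python
from dataclasses import dataclass

# Cutoffs taken from the definitions above.
SCHOOLING_CUTOFF = 12.0     # mean years of schooling (UN HDI)
INDUSTRY_CUTOFF = 3500.0    # industry value added per capita, USD
WEALTH_CUTOFF = 125_000.0   # mean wealth per adult, USD
DEMOCRACY_CUTOFF = 5.0      # EIU Democracy Index, 0-10 scale

@dataclass
class Country:
    name: str
    western: bool                # member of the "Latin West" group
    schooling_years: float
    industry_per_capita: float
    wealth_per_adult: float
    democracy_index: float

def weird_code(c: Country) -> str:
    """Encode a country's WEIRD string: capital = high, lowercase = low."""
    flags = [
        ("W", c.western),
        ("E", c.schooling_years >= SCHOOLING_CUTOFF),
        ("I", c.industry_per_capita >= INDUSTRY_CUTOFF),
        ("R", c.wealth_per_adult >= WEALTH_CUTOFF),
        ("D", c.democracy_index >= DEMOCRACY_CUTOFF),
    ]
    return "".join(letter if high else letter.lower() for letter, high in flags)

# Illustrative inputs only; all numbers are made up for the sketch.
print(weird_code(Country("Exampleland", True, 13.1, 9000, 180_000, 8.5)))  # WEIRD
print(weird_code(Country("Samplestan", False, 13.0, 8000, 90_000, 2.9)))   # wEIrd
```

This also makes the notation used throughout the rest of the analysis (and in footnote [1]) mechanical: each country's label is just its vector of threshold comparisons.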

Figure 1 maps the 34 countries in our sample. Results range from the purely WEIRD countries, including many Western European nations and some close trading partners and allies such as the United States, Canada, Australia, and New Zealand, down to countries that are low on most of the five dimensions.

Figure 1: Countries classified by WEIRD framework[1]

By comparing each grouping of countries with the presence or absence of our six plan elements (data management, algorithmic management, AI governance, and the three capacity development areas), we can understand how each country views AI, both alone and within its particular grouping. For example, wEIRD Japan and Korea are high in all areas except Western, and both invest heavily in R&D capacity development but not in education capacity development.

The methodology used for this blog was Qualitative Comparative Analysis (QCA), which seeks to identify causal "recipes" of conditions related to the occurrence of an outcome in a set of cases. In QCA, each case is viewed as a configuration of conditions (such as the five elements of WEIRD-ness) in which each condition does not have a unique impact on the outcome (an element of AI strategy) but rather acts in combination with all other conditions. Application of QCA can provide several configurations for each outcome, including identifying core conditions that are vital for the outcome and peripheral conditions that are less important. The analysis for each plan element is described below.
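The configurational logic can be illustrated with a toy sketch. Everything here is invented for illustration (the three "countries," their condition values, and the recipe are not taken from the study, and the core/peripheral distinction is omitted): a case satisfies a recipe only when every condition listed in the recipe matches.

```python
# Each case is a configuration of binary conditions (the five WEIRD elements);
# a recipe is a partial assignment of required condition values.
cases = {
    "Country A": {"W": True,  "E": True,  "I": True,  "R": True,  "D": True},
    "Country B": {"W": True,  "E": False, "I": False, "R": False, "D": True},
    "Country C": {"W": False, "E": True,  "I": True,  "R": False, "D": False},
}

# Hypothetical recipe: Western and Democratic but not Rich. Conditions not
# listed in the recipe are left free, as in QCA.
recipe = {"W": True, "D": True, "R": False}

def satisfies(case: dict, recipe: dict) -> bool:
    """True when every condition named in the recipe matches the case."""
    return all(case[cond] == value for cond, value in recipe.items())

matching = [name for name, conds in cases.items() if satisfies(conds, recipe)]
print(matching)  # ['Country B']
```

Real QCA software additionally minimizes the observed configurations into the smallest set of recipes consistent with the outcome; this sketch only shows the matching step that makes a "recipe" a combination of conditions rather than a list of independent predictors.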

Data management has three different configurations of countries that have highly developed plans. In the first configuration, covering WeIRD countries, those that are Western, Industrialized, Rich, and Democratic but not Educated (e.g., France, Italy, Portugal, and Spain), being Western was the best predictor of having data management as part of their AI plan, and the other components were of much less importance. Of interest, not being Educated was also core, making it more likely that these countries would have data management as part of their plan. This would suggest that these countries recognize that they need to catch up on data management and have put plans in place that exploit their western ties to do so.

In the second configuration, which features WEIrD Czechia, Estonia, Lithuania, and Poland, being Democratic was the core and hence most important predictor, while being Western, Educated, and Industrialized were peripheral and hence less important. Interestingly, not being Rich made it more likely that data management would be included. This would suggest that these countries have developed data management plans efficiently, again leveraging their democratic allies to do so.

In the third and final configuration, which includes the WeirD countries of Mexico, Serbia, and Uruguay, plus weirD India, the only element whose presence mattered was the level of Democracy. That these countries were able to develop data management plans in low-wealth, low-education, and low-industrialization contexts demonstrates the importance of investment in AI data management as a low-cost intervention in building AI policy.

Taken together, there are many commonalities, but being Western and/or Democratic was the best predictor of a country having a data governance strategy in its plan. In countries that are Western or Democratic, there is often a great deal of public pressure (and worry) about data governance, and we suspect these countries included data governance to satisfy the demands of their populace.

We also examined what conditions led to the absence of a highly developed data management plan. There were two configurations with consistently low development of data management. In the first configuration, which features wEIrd Russia and the UAE and weIrd China, being neither Rich nor Democratic were the core conditions. In the second configuration, which includes wEIRD Japan and Korea, the core conditions were being not Western but highly Educated. Common across both configurations was that all countries were Industrialized but not Western. This would suggest that data management is more a concern of western countries than non-western countries, whether they are democratic or not.

However, we also found that the largest grouping of countries, the 15 WEIRD countries in the sample, was not represented, falling in neither the high nor the low configurations. We believe that this is because there are multiple different paths for AI policy development, and hence these countries do not all stress data governance and management. For example, Australia, the UK, and the US have strong data governance, while Canada, Germany, and Sweden do not. Future investigation is needed to differentiate between the WEIRDest countries.

For algorithmic management, except for WeirD Mexico, Serbia, and Uruguay, there was no discernible pattern in terms of which countries included an acknowledgment of the need for and value of algorithmic management. We had suspected that more-WEIRD countries would be sensitive to this, but our data did not support this belief.

We examined the low outcomes for algorithmic management and found two configurations. The first was wEIRD Japan and Korea and weIRD Singapore, where the core conditions were being not Western but Rich and Democratic. The second was wEIrd Russia and the UAE and weIrd China, where the core elements were being not Rich and not Democratic. Common across the six countries in these two configurations was being not Western but Industrialized. Again, this suggests that algorithmic management is more a concern of western nations than non-western ones.

For AI governance, we again found that, except for WeirD Mexico, Serbia, and Uruguay, there was no discernible pattern for which countries included this in their plans and which did not. We had believed AI governance and algorithmic management to be more advanced in WEIRD nations, so this was an unexpected result.

We examined the low outcomes for AI governance and found three different configurations. The first was wEIRD Japan and Korea and weIRD Singapore, where the core conditions were being not Western but Rich and Democratic. The second was wEIrd Russia and the UAE, where the core elements were being not Western but Educated. The third was weirD India, where the core elements were being not Western but Democratic. Common across the six countries in these three configurations was not being Western. Again, this suggests that AI governance is more a concern of western nations than non-western ones.

There was a much clearer picture for high R&D development, where we found four configurations. The first configuration was the 15 WEIRD countries plus the WEIrD ones: Czechia, Estonia, Lithuania, and Poland. The latter, while not among the richer countries, still manage to invest heavily in developing their R&D.

The second configuration included WeirD Mexico, Serbia, Uruguay, and weirD India. Like data governance, these countries were joined by their generally democratic nature but lower levels of education, industrialization, and wealth.

Conversely, the third configuration included the non-western, non-democratic nations such as weIRd Qatar and weIrd China. This would indicate that capability development is of primary importance for such nations at the expense of other policy elements. The implication is that investment in application of AI is much more important to these nations than its governance.

Finally, the fourth configuration included the non-western but democratic nations: wEIRD Japan and Korea and weIRD Singapore. This would indicate that the East, whether democratic or not, is just as focused on capability development and R&D investment as the West.

We did not find any consistent configurations for low R&D development across the 34 nations.

For high education capacity development, we found two configurations, both with Western but not Rich as core conditions. The first includes WEIrD Czechia, Estonia, Lithuania, and Poland, while the second includes WeirD Mexico, Serbia, and Uruguay. Common conditions for these seven nations were being Western and Democratic but not Rich; the former group was also Educated and Industrialized, while the latter was not. These former eastern-bloc and post-colonial nations appear to be focusing on creating educational opportunities to catch up with other nations in the AI sphere.

Conversely, we found four configurations of low education capacity development. The first includes wEIRD Japan and Korea and weIRD Singapore, representing the non-Western but Industrialized, Rich, and Democratic nations. The second was weIRd Qatar, not Western or Democratic but Rich and Industrialized, while the third was wEIrd Russia and the UAE. The last was weirD India, being Democratic but low in all other areas. The common factor across these countries was being non-western, demonstrating that educational investment to improve AI outcomes is a primarily western phenomenon, irrespective of other plan elements.

We did not find any consistent configurations for high public service reform capacity development, but we did find three configurations for low investment in such plans. The first includes wEIRD Japan and Korea, the second was weIRd Qatar, and the last was weirD India. The common core factor across these three configurations was that these are not western countries, further highlighting the different approaches taken by western and non-western countries.

Overall, we expected more commonality in which countries included certain elements, and the fragmented nature of our results likely reflects a very early stage of AI adoption and countries simply trying to figure out what to do. We believe that, over time, WEIRD countries will start to converge on what is important and those insights will be reflected in their national plans.

There is one other message our results pointed out: the West and the East are taking very different approaches to AI development in their plans. The East is almost exclusively focused on building up its R&D capacity and is largely ignoring the traditional guardrails of technology management (e.g., data governance, data management, education, public service reform). By contrast, the West is almost exclusively focused on ensuring that these guardrails are in place and is spending relatively less effort on building the R&D capacity that is essential to AI development. This is perhaps why many Western technology leaders are calling for a six-month pause on AI development: the pause could allow suitable guardrails to be put in place. However, we are extremely doubtful that countries like China will see the wisdom in such a pause; they would more likely use it to create even more space between their R&D capacity and the rest of the world's. This "all gas, no brakes" Eastern philosophy has the potential to cause great global harm but will undeniably increase the East's dominance in this area. We have little doubt about the need for suitable guardrails in AI development, but we are equally convinced that a six-month pause is unlikely to be honored by China. Because of China's lead, the only prudent strategy is to build the guardrails while continuing to engage in AI development. Otherwise, the West will fall further behind, developing a great set of guardrails but with nothing of value to guard.

[1] A capital letter denotes being high in an element of WEIRD-ness, while a lowercase letter denotes being low in that element. For example, W means Western while w means not Western.
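The footnote's letter coding can be read mechanically. A minimal sketch, assuming the five dimensions are Western, Educated, Industrialized, Rich, and Democratic as used throughout the article (the function name is ours, not Brookings'):

```python
# Decode the article's WEIRD notation: an uppercase letter marks a nation
# as high in that dimension, a lowercase letter marks it as low.
DIMENSIONS = ["Western", "Educated", "Industrialized", "Rich", "Democratic"]

def decode_weird(label: str) -> dict:
    """Map a five-letter label such as 'wEIRD' to {dimension: bool}."""
    if len(label) != 5:
        raise ValueError("expected one letter per WEIRD dimension")
    return {dim: ch.isupper() for dim, ch in zip(DIMENSIONS, label)}

# Japan and Korea are coded wEIRD in the text: not Western, but
# Educated, Industrialized, Rich, and Democratic.
print(decode_weird("wEIRD"))
```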

More:
WEIRD AI: Understanding what nations include in their artificial intelligence plans - Brookings Institution

Artificial Intelligence Is Driving the Stock Market. It Has Some Advice for You. – Barron’s

"I'm the elephant in the room, and I'm not afraid to sparkle," says a digital replica of former Vice President Mike Pence, dressed in shiny pink with a boa. The video, widely viewed on Twitter this past week, shows prominent figures from the political right dressed in drag. It's based on an Instagram page called RuPublicans, a nod to reality-show celebrity RuPaul, featuring portraits made using the artificial intelligence tools Midjourney and ChatGPT-4.

I'm firmly against this on multiple counts. It's childish and divisive. Using AI to create so-called deep-fake likenesses of politicians sets a dangerous precedent. And Steve Bannon should avoid plunging necklines. But this is one example of AI's place, for now, in the public consciousness. There are distant applications of profound importance, like fully self-driving cars, and already-here but frivolous ones, like, well, Rudy Garland, a certain former New York City mayor in a cheetah-print coat.

Of course, there are already plenty of commercially significant examples, like search results, facial recognition, and credit card fraud detection. But suddenly, AI seems to be taking over the stock market, too.

Strategists at J.P. Morgan point out that the S&P 500's year-to-date gain, recently 8%, has been driven by the narrowest stock leadership since the 1990s. "Interest in generative AI and [the] Large Language Model theme appears to be stretched," they write.

Large language models drive conversational bots like ChatGPT, from Microsoft-backed OpenAI, which office workers have been tinkering with since it opened to the public in November. I asked this past week if I should sell in May and go away. "No" seems to have been the answer, only wordier and more meandering, with lots of noncommittal phrases like "may not" and "not necessarily" and "generally." In other words, robots have already cracked the financial advice business.

Chatbot-themed investments have added $1.4 trillion in stock market value this year. Just six companies were recently responsible for 53% of S&P 500 gains: Microsoft (ticker: MSFT), Alphabet (GOOGL), Amazon.com (AMZN), Meta Platforms (META), Nvidia (NVDA), and Salesforce (CRM). The 10 biggest S&P 500 members have close to their largest index weighting ever.


This would be easier to dismiss as froth if AI weren't taking center stage this earnings season. Microsoft, Alphabet, and Meta reported solid results. "Microsoft is leading the AI arms race," writes investment bank Wedbush. At Alphabet, search is becoming even more valuable as Google turns user behavior into language-model training for better results, says Morgan Stanley. At Meta, users who once relied on a limited set of friends and family for posts are increasingly drawn in by a never-ending supply of AI-recommended videos.

And then there are costs. At a recent conference, a top Microsoft executive pointed out that his developers increased productivity by 55% when they used an AI tool called GitHub Copilot, which turns natural language into coding suggestions. Higher productivity means that fewer programmers are needed, along with fewer support workers. That, as much as recent softness in advertising demand and concerns about the economy, is causing a rapid rethink of head count.

Meta is laying off nearly one-quarter of its workforce and hasn't ruled out deeper cuts. Morgan Stanley recently laid out what that means for its financial model of the company. It had previously assumed 10% head-count growth in 2024. If that figure is cut to 2%, the cost reduction would boost earnings by about $1.20 a share, or 8%. Recent layoffs at Meta, Alphabet, and Amazon are partly a reaction to prior overhiring, Morgan Stanley writes. But there are likely to be lasting changes, too: "Forward hiring levels should arguably be smaller and more targeted due to rapidly emerging AI productivity drivers."
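The per-share arithmetic above is easy to sanity-check. In this sketch, the $1.20 boost and the 8% figure come from the article; the implied baseline earnings per share is our own back-of-envelope derivation:

```python
# If slower hiring adds $1.20 a share and that equals an 8% boost,
# the implied pre-boost earnings base is $1.20 / 0.08 = $15 a share.
eps_boost = 1.20
boost_fraction = 0.08
implied_baseline_eps = eps_boost / boost_fraction
print(f"implied baseline EPS: ${implied_baseline_eps:.2f}")
```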


Meta stock peaked at over $380 in September 2021, then plunged to under $90 in November amid rising interest rates and concerns that the company was blowing too much cash on vague metaverse ambitions. Now that Meta is viewed as a cost-conscious AI play, the stock recently fetched $238. That's around 28 times this year's projected free cash flow, or 20 times the free cash Wall Street sees the company unlocking two years from now. The S&P 500, for comparison, trades at 22 times this year's estimated free cash flow.
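The multiples quoted above imply a free-cash-flow path. As a rough sketch, the $238 price and the 28x and 20x multiples are from the article; the per-share figures are simply backed out from them:

```python
# Back out implied free cash flow per share from the quoted multiples:
# price / multiple gives the per-share FCF each multiple assumes.
price = 238.0
fcf_this_year = price / 28      # implied FCF per share this year
fcf_two_years_out = price / 20  # implied FCF per share two years out
print(f"${fcf_this_year:.2f} now vs ${fcf_two_years_out:.2f} in two years")
```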

AI winners are making the index look expensive. But I'm not selling; the dividends will come in handy when the chatbot columnists take over.

For a second opinion on financial markets, I called on an advisor with a pulse: David Kelly, chief global strategist for the asset management side of JPMorgan Chase. "Don't sell," he says. Yes, we might get a recession. But inflation is falling, and the Federal Reserve is likely to cut interest rates by next year and into 2025. Rates won't go back to zero; levels that low don't help the economy and lead to financial instability and bubbles, says Kelly. But rates will go low enough to make today's stock prices look reasonable.


The best stock deals are overseas, including in Europe and Japan, which are 30% cheaper than the U.S. relative to earnings, especially now that a 15-year uptrend in the value of the dollar appears to have reversed, says Kelly. Also, buy bonds while you can. In America, they're becoming a little bit like the cicada bug: attractive yields show up briefly and then disappear for many years. And crypto, despite recent gains, is still nonsense, a vehicle for speculation, and very, very vulnerable to some future market downturn.

ChatGPT, if you're wondering, took three paragraphs to explain that Bitcoin could potentially go up, but also that there's the possibility of it going down, and that I should do my own research and make informed decisions. I'll stick with Kelly, unless the robots are reading, in which case I could maybe but not necessarily go either way.

Write to Jack Hough at jack.hough@barrons.com. Follow him on Twitter and subscribe to his Barron's Streetwise podcast.

Read the rest here:
Artificial Intelligence Is Driving the Stock Market. It Has Some Advice for You. - Barron's

Mind the Gap in Standardisation of Cybersecurity for Artificial … – ENISA

This report provides an overview of standards (published, under development, and planned) and assesses their coverage in order to identify potential gaps.

EU Agency for Cybersecurity Executive Director Juhan Lepassaar declared: "Advanced chatbot platforms powered by AI systems are currently used by consumers and businesses alike. The questions raised by AI come down to our capacity to assess its impact, to monitor and control it, with a view to making AI cyber secure and robust for its full potential to unfold. Using adequate standards will help ensure the protection of AI systems and of the data those systems need to process in order to operate. I trust this is the approach we need to take if we want to maximise the benefits for all of us to securely enjoy the services of AI systems to the full."

This report focuses on the cybersecurity aspects of AI, which are integral to the European legal framework regulating AI proposed by the European Commission last year, dubbed the AI Act.

What is Artificial Intelligence?

The draft AI Act defines an AI system as "software developed with one or more (...) techniques (...) for a given set of human-defined objectives, that generates outputs such as content, predictions, recommendations, or decisions influencing the environments they interact with." In a nutshell, these techniques mainly include machine learning (resorting to methods such as deep learning) as well as logic-based, knowledge-based, and statistical approaches.

Agreeing on what falls within the definition of an 'AI system' is indeed essential for the allocation of legal responsibilities under a future AI framework.

However, the exact scope of an AI system is constantly evolving, both in the legislative debate on the draft AI Act and in the scientific and standardisation communities.

Although broad in scope, this report focuses on machine learning (ML) due to its extensive use across AI deployments. ML has come under scrutiny for vulnerabilities that particularly impact the cybersecurity of an AI implementation.

AI cybersecurity standards: what's the state of play?

As standards help mitigate risks, this study identifies existing general-purpose standards that are readily available for information security and quality management in the context of AI. To mitigate some of the cybersecurity risks affecting AI systems, further guidance could be developed to help the user community benefit from the existing standards on AI.

This suggestion is based on an observation about the software layer of AI: what is applicable to software could be applicable to AI. However, the work does not end there. Other aspects still need to be considered, such as:

Further observations concern the extent to which the assessment of compliance with security requirements can be based on AI-specific horizontal standards, and the extent to which such assessment can rely on vertical, sector-specific standards.

Key recommendations include:

Regulating AI: what is needed?

As with many other pieces of EU legislation, compliance with the draft AI Act will be supported by standards. When it comes to compliance with the cybersecurity requirements set by the draft AI Act, additional aspects have been identified. For example, standards for conformity assessment, particularly those related to tools and competences, may need to be further developed. Also, the interplay across different legislative initiatives needs to be further reflected in standardisation activities; an example is the proposal for a regulation on horizontal cybersecurity requirements for products with digital elements, referred to as the Cyber Resilience Act.

Building on the report and other desk research as well as input received from experts, ENISA is currently examining the need for and the feasibility of an EU cybersecurity certification scheme on AI. ENISA is therefore engaging with a broad range of stakeholders including industry, ESOs and Member States, for the purpose of collecting data on AI cybersecurity requirements, data security in relation to AI, AI risk management and conformity assessment.

AI and cybersecurity will be discussed in two dedicated panels:

ENISA advocated the importance of standardisation in cybersecurity today at the RSA Conference in San Francisco, in the "Standards on the Horizon: What Matters Most?" panel alongside the National Institute of Standards and Technology (NIST).

Further information

Cybersecurity of AI and standardisation (2023 ENISA report)

Securing Machine Learning Algorithms (2021 ENISA report)

The proposed AI Act

The proposed Cyber Resilience Act

Contact

For press questions and interviews, please contact press (at) enisa.europa.eu

Follow this link:
Mind the Gap in Standardisation of Cybersecurity for Artificial ... - ENISA