Archive for the ‘Machine Learning’ Category

Build a self-service digital assistant using Amazon Lex and Knowledge Bases for Amazon Bedrock | Amazon Web … – AWS Blog

Organizations strive to implement efficient, scalable, cost-effective, and automated customer support solutions without compromising the customer experience. Generative artificial intelligence (AI)-powered chatbots play a crucial role in delivering human-like interactions by providing responses from a knowledge base without the involvement of live agents. These chatbots can be efficiently utilized for handling generic inquiries, freeing up live agents to focus on more complex tasks.

Amazon Lex provides advanced conversational interfaces using voice and text channels. Its natural language understanding capabilities enable more accurate identification of user intent, so the bot can fulfill user requests faster.

Amazon Bedrock simplifies the process of developing and scaling generative AI applications powered by large language models (LLMs) and other foundation models (FMs). It offers access to a diverse range of FMs from leading providers such as Anthropic, AI21 Labs, Cohere, and Stability AI, as well as Amazon's proprietary Amazon Titan models. Additionally, Knowledge Bases for Amazon Bedrock empowers you to develop applications that harness the power of Retrieval Augmented Generation (RAG), an approach where retrieving relevant information from data sources enhances the model's ability to generate contextually appropriate and informed responses.

The generative AI capability of QnAIntent in Amazon Lex lets you securely connect FMs to company data for RAG. QnAIntent provides an interface to use enterprise data and FMs on Amazon Bedrock to generate relevant, accurate, and contextual responses. You can use QnAIntent with new or existing Amazon Lex bots to automate FAQs through text and voice channels, such as Amazon Connect.

With this capability, you no longer need to create variations of intents, sample utterances, slots, and prompts to predict and handle a wide range of FAQs. You can simply connect QnAIntent to company knowledge sources and the bot can immediately handle questions using the allowed content.

In this post, we demonstrate how you can build a chatbot with QnAIntent that connects to a knowledge base in Amazon Bedrock (powered by Amazon OpenSearch Serverless as a vector database) and deliver rich, self-service, conversational experiences to your customers.
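To make the RAG flow concrete, here is a minimal sketch of querying a knowledge base in Amazon Bedrock directly through the RetrieveAndGenerate API. The knowledge base ID and model ARN below are placeholders, not values from this post; substitute your own before running.

```python
# Hedged sketch: query a Knowledge Base for Amazon Bedrock with the
# RetrieveAndGenerate API. KB_ID and MODEL_ARN are placeholders.
KB_ID = "XXXXXXXXXX"
MODEL_ARN = "arn:aws:bedrock:us-east-1::foundation-model/anthropic.claude-v2"

def build_rag_request(question: str, kb_id: str = KB_ID,
                      model_arn: str = MODEL_ARN) -> dict:
    """Build the request payload for bedrock-agent-runtime RetrieveAndGenerate."""
    return {
        "input": {"text": question},
        "retrieveAndGenerateConfiguration": {
            "type": "KNOWLEDGE_BASE",
            "knowledgeBaseConfiguration": {
                "knowledgeBaseId": kb_id,
                "modelArn": model_arn,
            },
        },
    }

def ask(question: str) -> str:
    """Send the question to the knowledge base and return the generated answer."""
    import boto3  # imported lazily so the payload builder works without the AWS SDK
    client = boto3.client("bedrock-agent-runtime")
    response = client.retrieve_and_generate(**build_rag_request(question))
    return response["output"]["text"]
```

QnAIntent wires this same retrieval-and-generation step into the Lex conversation flow for you, so in the bot itself no such client code is needed.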

The solution uses Amazon Lex, Amazon Simple Storage Service (Amazon S3), and Amazon Bedrock in the following steps:

The following diagram illustrates the solution architecture and workflow.

In the following sections, we look at the key components of the solution in more detail and the high-level steps to implement the solution:

To implement this solution, you need the following:

To create a new knowledge base in Amazon Bedrock, complete the following steps. For more information, refer to Create a knowledge base.

Complete the following steps to create your bot:

Complete the following steps to add QnAIntent:

The Amazon Lex web UI is a prebuilt fully featured web client for Amazon Lex chatbots. It eliminates the heavy lifting of recreating a chat UI from scratch. You can quickly deploy its features and minimize time to value for your chatbot-powered applications. Complete the following steps to deploy the UI:

To avoid incurring unnecessary future charges, clean up the resources you created as part of this solution:

In this post, we discussed the significance of generative AI-powered chatbots in customer support systems. We then provided an overview of the new Amazon Lex feature, QnAIntent, designed to connect FMs to your company data. Finally, we demonstrated a practical use case of setting up a Q&A chatbot to analyze Amazon shareholder documents. This implementation not only provides prompt and consistent customer service, but also empowers live agents to dedicate their expertise to resolving more complex issues.

Stay up to date with the latest advancements in generative AI and start building on AWS. If you're seeking assistance on how to begin, check out the Generative AI Innovation Center.

Supriya Puragundla is a Senior Solutions Architect at AWS. She has over 15 years of IT experience in software development, design, and architecture. She helps key customer accounts on their data, generative AI, and AI/ML journeys. She is passionate about data-driven AI and has deep expertise in ML and generative AI.

Manjula Nagineni is a Senior Solutions Architect with AWS based in New York. She works with major financial service institutions, architecting and modernizing their large-scale applications while adopting AWS Cloud services. She is passionate about designing cloud-centered big data workloads. She has over 20 years of IT experience in software development, analytics, and architecture across multiple domains such as finance, retail, and telecom.

Mani Khanuja is a Tech Lead for Generative AI Specialists, author of the book Applied Machine Learning and High Performance Computing on AWS, and a member of the Board of Directors for the Women in Manufacturing Education Foundation. She leads machine learning projects in various domains such as computer vision, natural language processing, and generative AI. She speaks at internal and external conferences such as AWS re:Invent, Women in Manufacturing West, YouTube webinars, and GHC 23. In her free time, she likes to go for long runs along the beach.

Read this article:
Build a self-service digital assistant using Amazon Lex and Knowledge Bases for Amazon Bedrock | Amazon Web ... - AWS Blog

Machine learning was used to sync subtitles in Marvel’s Spider-Man 2 – Game Developer

Sony is convinced machine learning and AI can be used to streamline development, and revealed it already leveraged the tech when developing Marvel's Spider-Man 2.

The PlayStation maker shared the tidbit during a recent corporate strategy meeting where it outlined its long-term "Creative Entertainment Vision."

The Japanese company said its 10-year plan will revolve around harnessing technology to "unleash the creativity of creators," connecting diverse people and values to "foster vibrant communities," and creating new experiences that "go beyond imagination."

The company didn't specify which new technologies it's hoping to deploy, but noted it's already using AI tech and machine learning to "support IP value maximization." What does that mean in practice? Sony claims it's about finding new solutions to existing problems so franchises can be "delivered rapidly and at a low cost."

Throwing out an example of that philosophy in action, Sony explained Marvel's Spider-Man 2 developer Insomniac Games recently "utilized machine learning and applied original voice recognition software specialized for gaming" to enable the automatic synchronization of subtitles in certain languages. It's claimed the technique "significantly" shortened the subtitling process by making it easier to sync subs with character dialogue.

There's been plenty of AI chatter at Sony this week. Naughty Dog studio head Neil Druckmann advocated for the tech in an interview published on the company website and claimed it could "revolutionize" development and enable studios to "take on more adventurous projects and push the boundaries of storytelling in games."

He said AI tools could reduce costs and clear technical hurdles for developers, unlocking their creativity in the process. "With AI, your creativity sets the limits. Understanding art history, composition, and storytelling is essential for effective direction. Tools evolve quickly. Some tools, once essential, are now obsolete," he continued.

"At Naughty Dog, we transitioned from hand-animating Jak and Daxter to using motion capture in Uncharted, significantly enhancing our storytelling."

Sony isn't the first video game company to hype AI tech. Other major players like EA and Microsoft are pushing the technology, claiming it'll be a tool that empowers creatives across the industry while lowering costs. Some developers, however, are concerned that wielding AI (specifically the generative variety) as a cost-cutting device will invariably mean layoffs and downsizing.

Continue reading here:
Machine learning was used to sync subtitles in Marvel's Spider-Man 2 - Game Developer

Reinforcement learning AI might bring humanoid robots to the real world – Science News Magazine

ChatGPT and other AI tools are upending our digital lives, but our AI interactions are about to get physical. Humanoid robots trained with a particular type of AI to sense and react to their world could lend a hand in factories, space stations, nursing homes and beyond. Two recent papers in Science Robotics highlight how that type of AI, called reinforcement learning, could make such robots a reality.

"We've seen really wonderful progress in AI in the digital world with tools like GPT," says Ilija Radosavovic, a computer scientist at the University of California, Berkeley. "But I think that AI in the physical world has the potential to be even more transformational."

The state-of-the-art software that controls the movements of bipedal bots often uses what's called model-based predictive control. It's led to very sophisticated systems, such as the parkour-performing Atlas robot from Boston Dynamics. But these robot brains require a fair amount of human expertise to program, and they don't adapt well to unfamiliar situations. Reinforcement learning, or RL, in which AI learns through trial and error to perform sequences of actions, may prove a better approach.

"We wanted to see how far we can push reinforcement learning in real robots," says Tuomas Haarnoja, a computer scientist at Google DeepMind and coauthor of one of the Science Robotics papers. Haarnoja and colleagues chose to develop software for a 20-inch-tall toy robot called OP3, made by the company Robotis. The team wanted to teach OP3 not only to walk but also to play one-on-one soccer.

"Soccer is a nice environment to study general reinforcement learning," says Guy Lever of Google DeepMind, a coauthor of the paper. "It requires planning, agility, exploration, cooperation and competition."

"The toy size of the robots allowed us to iterate fast," Haarnoja says, because larger robots are harder to operate and repair. And before deploying the machine learning software in the real robots (which can break when they fall over), the researchers trained it on virtual robots, a technique known as sim-to-real transfer.

Training of the virtual bots came in two stages. In the first stage, the team trained one AI using RL merely to get the virtual robot up from the ground, and another to score goals without falling over. As input, the AIs received data including the positions and movements of the robots joints and, from external cameras, the positions of everything else in the game. (In a recently posted preprint, the team created a version of the system that relies on the robots own vision.) The AIs had to output new joint positions. If they performed well, their internal parameters were updated to encourage more of the same behavior. In the second stage, the researchers trained an AI to imitate each of the first two AIs and to score against closely matched opponents (versions of itself).
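The reward-driven parameter update described above, in which behavior that scores well is encouraged, can be sketched in miniature. This is an illustrative toy, not the papers' training code: a one-parameter REINFORCE-style policy on a two-action bandit where only action 1 is rewarded.

```python
# Toy sketch of the RL idea described above: act, observe reward, and nudge
# internal parameters to make rewarded behavior more likely. Not the actual
# robot-training code, which uses full neural network policies.
import math
import random

def sigmoid(x: float) -> float:
    return 1.0 / (1.0 + math.exp(-x))

def train_bandit(steps: int = 2000, lr: float = 0.1, seed: int = 0) -> float:
    """REINFORCE on a two-action bandit: action 1 pays reward 1, action 0 pays 0.

    Returns the final probability of choosing the rewarded action."""
    rng = random.Random(seed)
    theta = 0.0  # single policy parameter: P(action = 1) = sigmoid(theta)
    for _ in range(steps):
        p = sigmoid(theta)
        action = 1 if rng.random() < p else 0
        reward = 1.0 if action == 1 else 0.0
        # Policy-gradient update: grad log pi(a) = a - p for a Bernoulli policy
        theta += lr * reward * (action - p)
    return sigmoid(theta)
```

After training, the policy picks the rewarded action almost every time; the robot controllers apply the same principle with far richer observations, actions, and reward terms.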

To prepare the control software, called a controller, for the real-world robots, the researchers varied aspects of the simulation, including friction, sensor delays and body-mass distribution. They also rewarded the AI not just for scoring goals but also for other things, like minimizing knee torque to avoid injury.

Real robots tested with the RL control software walked nearly twice as fast, turned three times as quickly and took less than half the time to get up compared with robots using the scripted controller made by the manufacturer. But more advanced skills also emerged, like fluidly stringing together actions. "It was really nice to see more complex motor skills being learned by robots," says Radosavovic, who was not a part of the research. And the controller learned not just single moves, but also the planning required to play the game, like knowing to stand in the way of an opponent's shot.

"In my eyes, the soccer paper is amazing," says Joonho Lee, a roboticist at ETH Zurich. "We've never seen such resilience from humanoids."

But what about human-sized humanoids? In the other recent paper, Radosavovic worked with colleagues to train a controller for a larger humanoid robot. This one, Digit from Agility Robotics, stands about five feet tall and has knees that bend backward like an ostrich. The team's approach was similar to Google DeepMind's. Both teams used computer brains known as neural networks, but Radosavovic used a specialized type called a transformer, the kind common in large language models like those powering ChatGPT.

Instead of taking in words and outputting more words, the model took in 16 observation-action pairs (what the robot had sensed and done over the previous 16 snapshots of time, covering roughly a third of a second) and output its next action. To make learning easier, the model first learned from observations of its actual joint positions and velocities before switching to observations with added noise, a more realistic task. To further enable sim-to-real transfer, the researchers slightly randomized aspects of the virtual robot's body and created a variety of virtual terrain, including slopes, trip-inducing cables and bubble wrap.
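The input windowing described above can be sketched simply: the controller conditions on a rolling buffer of the last 16 observation-action pairs rather than on words. This is a minimal illustration under assumed dimensions, not the paper's implementation.

```python
# Minimal sketch of the 16-step history window described above. The obs/act
# dimensions are arbitrary placeholders, not the robot's real state size.
from collections import deque

CONTEXT = 16  # number of past observation-action pairs the model sees

class HistoryBuffer:
    """Keep a rolling window of (observation, action) pairs for the policy."""

    def __init__(self, obs_dim: int, act_dim: int, context: int = CONTEXT):
        pad = (tuple([0.0] * obs_dim), tuple([0.0] * act_dim))
        # Zero-pad so the model always sees a full window, even at startup
        self.window = deque([pad] * context, maxlen=context)

    def push(self, observation, action):
        """Record the latest sensed observation and the action taken."""
        self.window.append((tuple(observation), tuple(action)))

    def model_input(self):
        """Flatten the window into one sequence, oldest pair first."""
        flat = []
        for obs, act in self.window:
            flat.extend(obs)
            flat.extend(act)
        return flat
```

Each control step, the robot pushes its newest observation-action pair and feeds the flattened window to the transformer, whose output becomes the next action.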

After training in the digital world, the controller operated a real robot through a full week of tests outside without the robot falling over even a single time. And in the lab, the robot resisted external forces like having an inflatable exercise ball thrown at it. The controller also outperformed the non-machine-learning controller from the manufacturer, easily traversing an array of planks on the ground. And whereas the default controller got stuck attempting to climb a step, the RL one managed to figure it out, even though it hadn't seen steps during training.

Reinforcement learning for four-legged locomotion has become popular in the last few years, and these studies show the same techniques now working for two-legged robots. "These papers are either at par with or have pushed beyond manually defined controllers, a tipping point," says Pulkit Agrawal, a computer scientist at MIT. "With the power of data, it will be possible to unlock many more capabilities in a relatively short period of time."

And the papers' approaches are likely complementary. Future AI robots may need the robustness of Berkeley's system and the dexterity of Google DeepMind's. Real-world soccer incorporates both. According to Lever, soccer "has been a grand challenge for robotics and AI for quite some time."

Read the original post:
Reinforcement learning AI might bring humanoid robots to the real world - Science News Magazine

Study uses AI and machine learning to accelerate protein engineering process – Dailyuw

In recent months, the process of protein design at UW has been revolutionized by the implementation of a machine learning computational approach. In a new paper published in the journal Nature Computational Science, the Berndt Lab, a molecular design lab at UW, reports its findings.

Machine learning, recently applied to the realm of protein engineering, has been effective in reducing the amount of time needed to design proteins that can efficiently perform a biochemical task. The current trial-and-error method of mutating an amino acid sequence can take anywhere from several months to several years of tedious analysis. However, with the recent use of machine learning at the Berndt Lab, the future of protein engineering appears promising.

The application of machine learning was used to analyze how mutations to GCaMP, a biosensor that tracks calcium in cells, would affect its behavior. Collaborators provided empirical knowledge of GCaMP, which was then combined with an AI algorithm that could predict the effects of the protein mutations. Well-developed proteins can provide valuable insight into disease and a patient's response to treatment.

The machine learning model achieved the equivalent of several years' worth of lab mutations in a single night, with a very high rate of success. Of the 17 mutations implemented in real biological cells, five or six were absolute successes. According to Andre Berndt, assistant professor in the department of bioengineering and senior author on the paper, out of 10 mutations you are typically lucky if just one provides a gain of function.

"A lot of the mutations that were predicted to be better were indeed better at a much, much faster pace from a much larger pool of virtually tested mutations," Berndt said. "So this was a very efficient process just based on the trained model."
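The screening loop described above, in which a trained model scores a large pool of candidate mutations in silico so that only a top-ranked handful reaches the wet lab, can be sketched as follows. The `predict_gain` function stands in for the lab's trained model and is a hypothetical placeholder, not their code.

```python
# Hypothetical sketch of in-silico mutation screening: score many candidates
# with a trained predictor, then shortlist the best for lab testing.
# `predict_gain` below is a stand-in, not the Berndt Lab's actual model.
def screen_mutations(candidates, predict_gain, top_k=17):
    """Rank candidate mutations by predicted gain of function, best first."""
    scored = [(predict_gain(m), m) for m in candidates]
    scored.sort(reverse=True)
    return [m for _, m in scored[:top_k]]

# Toy stand-in predictor: arbitrarily favors substitutions at lower positions.
def toy_predictor(mutation):
    position, _amino_acid = mutation
    return 1.0 / (1 + position)

# A pool of 2,000 virtual candidates: (sequence position, substituted residue)
pool = [(p, aa) for p in range(1000) for aa in "AG"]
shortlist = screen_mutations(pool, toy_predictor, top_k=17)
```

The point of the pattern is the funnel: thousands of virtual candidates are scored overnight, and only the shortlist (17 mutations in the study) is synthesized and tested in real cells.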

Berndt's team comprised graduate and undergraduate students who collaborated on the study. Lead author Sarah Wait, a Ph.D. candidate in molecular engineering, spearheaded the research by undertaking various roles such as testing mutation variants, engineering data, establishing the machine learning framework, and analyzing the results.

"Computational programs can discover all of the really hard-to-observe patterns that, maybe, we wouldn't be able to observe ourselves," Wait said. "It's just a really great tool to help us as the researcher[s] discover these really small patterns that may be hidden to us, given the amount of data we have to look at in order to actually see them."

Reach contributing writer Ashley Ingalsbe at news@dailyuw.com X: @ashleyiing

Like what you're reading? Support high-quality student journalism by donating here.

See original here:
Study uses AI and machine learning to accelerate protein engineering process - Dailyuw

The 2034 Millionaire’s Club: 3 Machine Learning Stocks to Buy Now – InvestorPlace

Machine learning stocks are gaining traction as the interest in artificial intelligence (AI) and machine learning soars, especially after the launch of ChatGPT by OpenAI. This technology has given us a glimpse of its potential, sparking curiosity about its future applications, and has led to my list of machine learning stocks to buy.

Machine learning stocks present a promising opportunity for growth, with the potential to create significant wealth. Based on analyst forecasts, I think these companies will go parabolic and reach their full growth potential around a decade from now.

These companies leverage machine learning for various applications, including diagnosing life-threatening diseases, preventing credit card fraud, developing chatbots and exploring advanced tech like artificial general intelligence. The future will only get better from here.

So if youre looking for machine learning stocks to buy with substantial upside potential, keep reading to discover three top picks.

Source: Lori Butcher / Shutterstock.com

DraftKings (NASDAQ:DKNG) leverages machine learning to enhance its online sports betting and gambling platform. The company has shown significant growth, with recent revenue increases and expansion in legalized betting markets.

DKNG has significantly revised its revenue outlook for 2024 upwards, expecting it to be between $4.65 billion and $4.9 billion, marking an anticipated year-over-year growth of 27% to 34%. This adjustment reflects higher projections compared to their earlier forecast ranging from $4.50 billion to $4.80 billion. Additionally, the company has increased its adjusted EBITDA forecast for 2024, now ranging from $410 million to $510 million, up from the previous estimate of $350 million to $450 million.
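The guidance math above is internally consistent: both ends of the $4.65 billion to $4.9 billion range, divided by their respective growth rates, imply the same prior-year revenue base of roughly $3.66 billion. A quick check:

```python
# Sanity-check the growth range in DraftKings' 2024 guidance: both ends of
# the revenue range should imply the same prior-year base (figures in $B).
low, high = 4.65, 4.90
growth_low, growth_high = 0.27, 0.34

base_from_low = low / (1 + growth_low)     # prior-year base implied by low end
base_from_high = high / (1 + growth_high)  # prior-year base implied by high end
```

Both come out to about $3.66 billion, so the 27% and 34% figures line up with the stated dollar range.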

DraftKings has also announced plans to acquire the gambling company Jackpocket for $750 million in a cash-and-stock deal. This acquisition is expected to further enhance DraftKings' market presence and capabilities in online betting.

I covered DKNG before, and I still think it's one of the best meme stocks that investors can get behind. The company's stock price has risen 72.64% over the past year, and it seems there's still plenty of fuel left in the tank to surge higher.

Source: Sundry Photography / Shutterstock

Cloudflare (NYSE:NET) provides a cloud platform that offers a range of network services to businesses worldwide. The company uses machine learning to enhance its cybersecurity solutions.

Cloudflare has outlined a robust strategy for 2024, focusing on advancing its cybersecurity solutions and expanding its network services. The company expects to generate total revenue between $1.648 billion and $1.652 billion for the year. This revenue forecast reflects a significant increase in their operational scale.

NET is another stock that is leveraging machine learning to its full advantage. I've been bullish on this company for some time and continue to be so. Notably, Cloudflare is expanding its deployment of inference-tuned graphics processing units (GPUs) across its global network. By the end of 2024, these GPUs will be deployed in nearly every city within Cloudflare's network.

NET has been quietly integrating many parts of its network into the internet's fabric for millions of users, such as through its DNS service; Cloudflare WARP; reverse proxy for website owners; and much more. Around 30% of the 10,000 most popular websites globally use Cloudflare. Many of NET's services can be accessed free of charge.

It is following a classic tech stock strategy of prioritizing growth in users, influence, and reach over immediate profits, and its financials have scaled steadily with this approach.

Source: VDB Photos / Shutterstock.com

CrowdStrike (NASDAQ:CRWD) is a leading cybersecurity company that uses machine learning to detect and prevent cyber threats.

In its latest quarterly report on Mar. 5, CRWD reported a 102% earnings growth to 95 cents per share and a 33% revenue increase to $845.3 million. Analysts expect a 57% earnings growth to 89 cents per share in the next report and a 27% EPS increase for the full fiscal year ending in January.

Adding to the bull case for CRWD is that it has partnered with Google Cloud by Alphabet (NASDAQ:GOOG, GOOGL) to enhance AI-native cybersecurity solutions, positioning itself strongly against competitors like Palo Alto Networks (NASDAQ:PANW).

Many contributors here at InvestorPlace have identified CRWD as one of the best cybersecurity stocks for investors to buy, and I agree. Its aggressive EPS growth and stock price appreciation (140.04% over the past year) make it a very attractive pick for long-term investors.

On the date of publication, Matthew Farley did not have (either directly or indirectly) any positions in the securities mentioned in this article. The opinions expressed are those of the writer, subject to the InvestorPlace.com Publishing Guidelines.

Matthew started writing coverage of the financial markets during the crypto boom of 2017 and was also a team member of several fintech startups. He then started writing about Australian and U.S. equities for various publications. His work has appeared in MarketBeat, FXStreet, Cryptoslate, Seeking Alpha, and the New Scientist magazine, among others.

Read more:
The 2034 Millionaire's Club: 3 Machine Learning Stocks to Buy Now - InvestorPlace