Archive for the ‘Machine Learning’ Category

Improve productivity when processing scanned PDFs using Amazon Q Business | Amazon Web Services – AWS Blog

Amazon Q Business is a generative AI-powered assistant that can answer questions, provide summaries, generate content, and extract insights directly from the content in digital as well as scanned PDF documents in your enterprise data sources without needing to extract the text first.

Customers across industries such as finance, insurance, healthcare, life sciences, and more need to derive insights from various document types, such as receipts, healthcare plans, or tax statements, which are frequently in scanned PDF format. These document types often have a semi-structured or unstructured format, which requires processing to extract text before indexing with Amazon Q Business.

The launch of scanned PDF document support with Amazon Q Business can help you seamlessly process a variety of multi-modal document types through the AWS Management Console and APIs, across all supported Amazon Q Business AWS Regions. You can ingest documents, including scanned PDFs, from your data sources using supported connectors, index them, and then use the documents to answer questions, provide summaries, and generate content securely and accurately from your enterprise systems. This feature eliminates the development effort required to extract text from scanned PDF documents outside of Amazon Q Business, and improves the document processing pipeline for building your generative artificial intelligence (AI) assistant with Amazon Q Business.

In this post, we show how to asynchronously index and run real-time queries with scanned PDF documents using Amazon Q Business.

You can use Amazon Q Business for scanned PDF documents from the console, AWS SDKs, or AWS Command Line Interface (AWS CLI).

Amazon Q Business provides a versatile suite of data connectors that can integrate with a wide range of enterprise data sources, empowering you to develop generative AI solutions with minimal setup and configuration. To learn more, visit Amazon Q Business, now generally available, helps boost workforce productivity with generative AI.

After your Amazon Q Business application is ready to use, you can directly upload the scanned PDFs into an Amazon Q Business index using either the console or the APIs. Amazon Q Business offers multiple data source connectors that can integrate and synchronize data from multiple data repositories into a single index. For this post, we demonstrate two scenarios to use documents: one with the direct document upload option, and another using the Amazon Simple Storage Service (Amazon S3) connector. If you need to ingest documents from other data sources, refer to Supported connectors for details on connecting additional data sources.

In this post, we use three scanned PDF documents as examples: an invoice, a health plan summary, and an employment verification form, along with some text documents.

The first step is to index these documents. Complete the following steps to index documents using the direct upload feature of Amazon Q Business. For this example, we upload the scanned PDFs.
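If you prefer to script the upload rather than use the console, the same direct upload can be done with the BatchPutDocument API. The following boto3 sketch is a minimal illustration; the application ID, index ID, document ID, and file name are placeholders.

```python
import boto3

# Placeholders: replace with your own application, index, and file names.
APPLICATION_ID = "your-application-id"
INDEX_ID = "your-index-id"

qbusiness = boto3.client("qbusiness")

# Upload a scanned PDF directly as a blob; Amazon Q Business extracts the text at indexing time.
with open("invoice.pdf", "rb") as f:
    response = qbusiness.batch_put_document(
        applicationId=APPLICATION_ID,
        indexId=INDEX_ID,
        documents=[
            {
                "id": "invoice-001",
                "title": "Invoice",
                "contentType": "PDF",
                "content": {"blob": f.read()},
            }
        ],
    )

# Any documents rejected up front are reported here.
print(response.get("failedDocuments", []))
```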

You can monitor the uploaded files on the Data sources tab. The Upload status changes from Received to Processing to Indexed or Updated, at which point the file has been successfully indexed into the Amazon Q Business data store. The following screenshot shows the successfully indexed PDFs.

The following steps demonstrate how to integrate and synchronize documents using an Amazon S3 connector with Amazon Q Business. For this example, we index the text documents.

When the sync job is complete, your data source is ready to use. The following screenshot shows all five documents (scanned and digital PDFs, and text files) are successfully indexed.

The following screenshot shows a comprehensive view of the two data sources: the directly uploaded documents and the documents ingested through the Amazon S3 connector.

Now let's run some queries with Amazon Q Business on our data sources.

Your documents might be dense, unstructured, scanned PDF documents. Amazon Q Business can identify and extract the most salient, information-dense text from them. In this example, we use the multi-page health plan summary PDF we indexed earlier. The following screenshot shows an example page.

This is an example of a health plan summary document.

In the Amazon Q Business web UI, we ask "What is the annual total out-of-pocket maximum mentioned in the health plan summary?"

Amazon Q Business searches the indexed document, retrieves the relevant information, and generates an answer while citing the source for its information. The following screenshot shows the sample output.
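The same question can also be asked programmatically through the ChatSync API. The following boto3 sketch is a minimal illustration; the application ID is a placeholder, and how user identity is supplied depends on your application's access configuration.

```python
import boto3

APPLICATION_ID = "your-application-id"  # placeholder

qbusiness = boto3.client("qbusiness")

# Ask the same question we used in the web UI and print the answer with its sources.
response = qbusiness.chat_sync(
    applicationId=APPLICATION_ID,
    userMessage="What is the annual total out-of-pocket maximum mentioned in the health plan summary?",
)

print(response["systemMessage"])
for source in response.get("sourceAttributions", []):
    print("Source:", source.get("title"))
```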

Documents might also contain structured data elements in tabular format. Amazon Q Business can automatically identify, extract, and linearize structured data from scanned PDFs to accurately resolve any user queries. In the following example, we use the invoice PDF we indexed earlier. The following screenshot shows an example.

This is an example of an invoice.

In the Amazon Q Business web UI, we ask "How much were the headphones charged in the invoice?"

Amazon Q Business searches the indexed document and retrieves the answer with reference to the source document. The following screenshot shows that Amazon Q Business is able to extract bill information from the invoice.

Your documents might also contain semi-structured data elements in a form, such as key-value pairs. Amazon Q Business can accurately satisfy queries related to these data elements by extracting specific fields or attributes that are meaningful for the queries. In this example, we use the employment verification PDF. The following screenshot shows an example.

This is an example of an employment verification form.

In the Amazon Q Business web UI, we ask "What is the applicant's date of employment in the employment verification form?" Amazon Q Business searches the indexed employment verification document and retrieves the answer with reference to the source document.

In this section, we show you how to use the AWS CLI to ingest structured and unstructured documents stored in an S3 bucket into an Amazon Q Business index. You can quickly retrieve detailed information about your documents, including their statuses and any errors that occurred during indexing. If you're an existing Amazon Q Business user and have indexed documents in various formats, such as scanned PDFs and other supported types, and you now want to reindex the scanned documents, complete the following steps:

"errorMessage": "Document cannot be indexed since it contains no text to index and search on. Document must contain some text."

If you're a new user and haven't indexed any documents, you can skip this step.

The following is an example of using the ListDocuments API to filter documents with a specific status and their error messages:

The following screenshot shows the AWS CLI output with a list of failed documents with error messages.
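If you work with the AWS SDK rather than the CLI, a roughly equivalent boto3 sketch is shown below; the field names follow our reading of the ListDocuments API and should be checked against the current documentation. Rather than matching an exact status string, the filter simply keeps any document that reports an error.

```python
import boto3

APPLICATION_ID = "your-application-id"  # placeholder
INDEX_ID = "your-index-id"              # placeholder

qbusiness = boto3.client("qbusiness")

# Page through all documents in the index and keep those that report an indexing error.
failed = []
kwargs = {"applicationId": APPLICATION_ID, "indexId": INDEX_ID}
while True:
    page = qbusiness.list_documents(**kwargs)
    for doc in page.get("documentDetailList", []):
        if doc.get("error"):
            failed.append(
                {
                    "documentId": doc.get("documentId"),
                    "status": doc.get("status"),
                    "errorMessage": doc["error"].get("errorMessage"),
                }
            )
    token = page.get("nextToken")
    if not token:
        break
    kwargs["nextToken"] = token

print(failed)
```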

Now you batch-process the documents. Amazon Q Business supports adding one or more documents to an Amazon Q Business index.
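Re-submitting documents already stored in Amazon S3 can be done with the same BatchPutDocument API by pointing at the S3 objects instead of passing inline blobs. The following is a minimal sketch, assuming a hypothetical bucket, object keys, and an IAM role that Amazon Q Business can assume to read the bucket; all names are placeholders.

```python
import boto3

APPLICATION_ID = "your-application-id"                                  # placeholder
INDEX_ID = "your-index-id"                                              # placeholder
ROLE_ARN = "arn:aws:iam::123456789012:role/qbusiness-s3-access"         # placeholder
BUCKET = "your-bucket"                                                  # placeholder

qbusiness = boto3.client("qbusiness")

# Re-submit previously failed scanned PDFs by pointing at their S3 objects.
documents = [
    {
        "id": key,
        "contentType": "PDF",
        "content": {"s3": {"bucket": BUCKET, "key": key}},
    }
    for key in ["scanned/invoice.pdf", "scanned/health-plan-summary.pdf"]
]

response = qbusiness.batch_put_document(
    applicationId=APPLICATION_ID,
    indexId=INDEX_ID,
    roleArn=ROLE_ARN,
    documents=documents,
)
print(response.get("failedDocuments", []))
```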

The following screenshot shows the AWS CLI output. You should see failed documents as an empty list.

The following screenshot shows that the documents are indexed in the data source.

If you created a new Amazon Q Business application and don't plan to use it further, unsubscribe and remove assigned users from the application and delete it so that your AWS account doesn't accumulate costs. Moreover, if you don't need to use the indexed data sources further, refer to Managing Amazon Q Business data sources for instructions to delete your indexed data sources.

This post demonstrated the support for scanned PDF document types with Amazon Q Business. We highlighted the steps to sync, index, and query supported document types, now including scanned PDF documents, using generative AI with Amazon Q Business. We also showed examples of queries on structured, unstructured, or semi-structured multi-modal scanned documents using the Amazon Q Business web UI and AWS CLI.

To learn more about this feature, refer to Supported document formats in Amazon Q Business. Give it a try on the Amazon Q Business console today! For more information, visit Amazon Q Business and the Amazon Q Business User Guide. You can send feedback to AWS re:Post for Amazon Q or through your usual AWS support contacts.

Sonali Sahu is leading the Generative AI Specialist Solutions Architecture team in AWS. She is an author, thought leader, and passionate technologist. Her core area of focus is AI and ML, and she frequently speaks at AI and ML conferences and meetups around the world. She has both breadth and depth of experience in technology and the technology industry, with industry expertise in healthcare, the financial sector, and insurance.

Chinmayee Rane is a Generative AI Specialist Solutions Architect at AWS. She is passionate about applied mathematics and machine learning. She focuses on designing intelligent document processing and generative AI solutions for AWS customers. Outside of work, she enjoys salsa and bachata dancing.

Himesh Kumar is a seasoned Senior Software Engineer, currently working on Amazon Q Business at AWS. He is passionate about building distributed systems in the generative AI/ML space. His expertise extends to developing scalable and efficient systems, ensuring high availability, performance, and reliability. Beyond his technical skills, he is dedicated to continuous learning and staying at the forefront of technological advancements in AI and machine learning.

Qing Wei is a Senior Software Developer on the Amazon Q Business team in AWS, and is passionate about building modern applications using AWS technologies. He loves community-driven learning and sharing of technology, especially for machine learning hosting and inference-related topics. His main focus right now is on building serverless and event-driven architectures for RAG data ingestion.


Learning from virtual experiments to assist users of Small Angle Neutron Scattering in model selection | Scientific Reports – Nature.com

Generation of a dataset of SANS virtual experiments at KWS-1

A code template of the KWS-1 SANS instrument at FRM-II, Garching, was written in McStas (see Supplementary Information for the example code). The instrument description consisted of the following components, set consecutively: a neutron source describing the FRM-II spectrum, a velocity selector, guides that propagate the neutrons to minimize losses, a set of slits to define the divergence of the beam, a sample (one of the recently developed sasmodels components described in the McStas 3.4 documentation), a beamstop, and finally a Position Sensitive Detector (PSD) of size 144 × 256 pixels. The sample was changed systematically between 46 SAS models (see Supplementary Information for a complete list of the models considered and their documentation), and for each model, different samples were produced by varying the parameters of the model. The set of 46 SAS models considered presented both isotropic and anisotropic scattering amplitudes. In the anisotropic models, the scattering amplitude is defined to have a dependency on the angle between the incident beam and the orientation of the scattering objects (or structures), which is determined by the model parameters. Consequently, for non-oriented particles with analytical anisotropic models, the resulting scattering pattern can turn out to be isotropic. Whenever possible, samples were considered in the dilute regime to avoid structure factor contributions and only observe those arising from the form factor. In models with crystalline structure or with correlations between scatterers where an analytical expression for the scattering amplitude was found, the complete scattering amplitude was considered. In all cases, the analytical expressions were obtained from the small angle scattering models documentation of SasView20 (see Supplementary Information). The instrument template in the Supplementary Information shows how it was also possible to change the instrument configuration when a sample was fixed. The set of parameters that describes the instrument configuration in a given simulation is referred to as the instrument parameters, and the set that defines the sample description as the sample parameters.

In the case of instrument parameters, a discrete set of 36 instrument configurations was allowed to be selected. This set was chosen by the instrument scientist, taking into account the most frequent instrument configurations: two possible values of wavelength (4.5 or 6 Å), three possibilities for the distance settings, paired as collimation length and sample-to-detector distance (8 m-1 m, 8 m-8 m, and 20 m-20 m), three options for the slit configuration (1 cm slit aperture in both directions and a 2 cm wide Hellma cell; 1.2 cm slit aperture in both directions and a 2 cm wide Hellma cell; and 7 mm on the horizontal aperture and 1 cm on the vertical aperture with a 1 cm wide Hellma cell), and finally two possible sample holders of different thickness (1 mm and 2 mm). One of the advantages of MC simulations over analytical approaches to obtain the 2D scattering pattern is that by defining the instrument parameters in the simulation, such as the size of the collimation apertures, the sample-to-detector distance, the size of the detector, the dimensions of the pixels, and so on, the smearing of the data due to instrumental resolution is automatically taken into account. Therefore, no extra convolution needs to be performed once the data are collected.
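As a quick sanity check, enumerating the options above (2 wavelengths × 3 distance settings × 3 slit configurations × 2 sample holders) indeed yields 36 configurations. A small Python sketch, with purely illustrative labels:

```python
from itertools import product

# Option labels are illustrative shorthands for the settings described in the text.
wavelengths = ["4.5 A", "6 A"]
distances = ["8m-1m", "8m-8m", "20m-20m"]
slits = ["slit config 1", "slit config 2", "slit config 3"]
holders = ["1 mm", "2 mm"]

configurations = list(product(wavelengths, distances, slits, holders))
print(len(configurations))  # 2 * 3 * 3 * 2 = 36
```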

In the case of sample parameters, most parameters describing samples were continuous, and an added difficulty was that the number of parameters per model was neither the same nor even similar across models (see Fig. 5).

Distribution of models as a function of the number of parameters, showing the wide range of complexities contemplated in the model set used in this work. There are a few models that have more than 15 parameters to set.

There were some models with only two parameters (easy to sample) and several models with more than 15 parameters (hard to sample). Most of the models had around 12 parameters. For \(p\) parameters with \(n_i\) possible choices for parameter \(i\), the number of possible combinations \(N\) can be calculated as

$$N = \prod_{i=1}^{p} n_i, \qquad (1)$$

which turns out to be \(N = n^p\) if \(n_i = n\) for all \(i = 1, \dots, p\). With only \(n = 2\) possibilities per parameter and \(p = 15\), we rapidly get \(N = 32768\) possible combinations for a complex model, whereas the very simple models have only \(N = 4\) possible combinations. The large complexity of some model descriptions did not allow simulating all possible scenarios without generating a dataset with a large imbalance between classes. Therefore, we opted to sample the defined hyper-parameter space strategically by using Latin hypercube sampling21. Briefly explained, this sampling method generates hypercubes in a given high-dimensional hyper-parameter space. It then randomly selects one of these hypercubes and randomly samples the variables only inside the chosen hypercube. On a later iteration, it selects a new hypercube and repeats the sampling procedure.

Another advantage of MC simulations is that one can perform Monte Carlo integration estimates, which allow one to include polydispersity and orientational distributions of scattering objects in a simple and direct manner. On each neutron interaction, the orientation and the polydisperse parameters of the scattering object are randomly chosen from defined probability distributions. For simplicity, distance and dimension parameters \(r_i\) of the models were allowed to be polydisperse by sampling them from Gaussian distributions (taking care to select only positive values). The value \(r_i\) selected in each MC simulation defined the mean value of the Gaussian distribution, and an extra parameter \(\Delta r_i\) for each \(r_i\) was included in the MC simulation to define the corresponding variance. The standard deviation of the Gaussian distribution in different simulations was allowed to vary between 0 (monodisperse) and \(r_i/2\) (very polydisperse). In the case of angle parameters that determine the orientation of the scattering object, these were defined by sampling uniformly inside an interval centered at the parameter value \(\theta_i\) and with limits defined by another extra parameter \(\Delta\theta_i\). For example, in a cylinder form factor model for the scattering object, both the radius and the length of the cylinders can be polydisperse, and the two angles defining the orientation of the principal axis with respect to the incident beam are allowed to vary uniformly within the simulation-defined range. This gives a total of 8 parameters to include polydispersity and orientational distributions in a single simulation. For more information on how this was implemented in the MC simulation, we refer the reader to the documentation of each model that is provided in the Supplementary Information.
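As an illustration of the sampling just described (not the McStas implementation itself), the following Python sketch draws a polydisperse dimension from a Gaussian of mean \(r_i\) and width \(\Delta r_i\), keeping only positive values, and an orientation angle uniformly within \(\theta_i \pm \Delta\theta_i\); all numerical values are invented for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_dimension(r_mean, r_sigma):
    """Draw a polydisperse size from a Gaussian, keeping only positive values."""
    while True:
        r = rng.normal(r_mean, r_sigma)
        if r > 0:
            return r

def sample_orientation(theta_mean, delta_theta):
    """Draw an orientation angle uniformly inside the allowed interval."""
    return rng.uniform(theta_mean - delta_theta, theta_mean + delta_theta)

# Example: a cylinder with polydisperse radius and length and two orientation angles.
radius = sample_dimension(r_mean=20.0, r_sigma=20.0 / 2)   # sigma between 0 and r/2
length = sample_dimension(r_mean=100.0, r_sigma=10.0)
theta = sample_orientation(theta_mean=0.0, delta_theta=np.pi / 6)
phi = sample_orientation(theta_mean=0.0, delta_theta=np.pi)
print(radius, length, theta, phi)
```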

We opted to sample 100 points for each sample model in the model's hyper-parameter space, due to time constraints on the simulation side and to constraints on the database size on the machine learning side. To define the sampling space, we defined upper (\(u_b\)) and lower (\(l_b\)) bounds for each sample parameter in each SasView model description. We then took the default value of the parameter (\(p_0\)) given in the SasView documentation as the center point of the sampling region, allowing for sampling in the interval \([\max(-3p_0, l_b), \min(3p_0, u_b)]\). All sampled parameters were continuous, except the absorption coefficient, which was restricted to have only two possible values (0% or 10%).
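A minimal sketch of this sampling step using SciPy's quasi-Monte Carlo module; the three parameters, their default values, and their bounds are hypothetical, but the interval construction and the 100-point Latin hypercube draw follow the description above.

```python
import numpy as np
from scipy.stats import qmc

# Hypothetical parameters: (default value p0, lower bound l_b, upper bound u_b).
params = {
    "radius": (20.0, 1.0, 1000.0),
    "length": (400.0, 1.0, 5000.0),
    "sld":    (1.0, -10.0, 15.0),
}

lower = np.array([max(-3 * p0, lb) for p0, lb, ub in params.values()])
upper = np.array([min(3 * p0, ub) for p0, lb, ub in params.values()])

# Draw 100 Latin hypercube points in the unit cube and scale them to the bounds.
sampler = qmc.LatinHypercube(d=len(params), seed=0)
unit_points = sampler.random(n=100)
samples = qmc.scale(unit_points, lower, upper)
print(samples.shape)  # (100, 3)
```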

The expected dataset size was 331,200, obtained from the 46 sample models, 2 absorption coefficients, 100 sampled parameter sets per model, and 36 possible instrument settings. The 46 sample models were chosen so as to be representative, and also to avoid sample models of high computational cost. Given that some configurations were non-optimal, the total dataset was cleaned of zero images (no neutrons arrived in the given virtual experiment) and low-statistics images. This was done by calculating the 0.02 quantile of the image standard deviations and removing the images below it from the database. Also, the 0.99 quantile of the maximum pixel values of the images was calculated, and all images with higher maximum values were removed (for example, images in which simulations failed with saturating pixels). A remaining total of 259,328 virtual experiments defined the final dataset for machine learning purposes, and this is the dataset published open access14. For an insight into what the database looks like, we show a random selection of one image per model in the dataset in Fig. 6. It is possible to see that there is some variance between models, but also some unfavorable configurations (inadequate instrument parameters for a given sample), which add noise and difficulties for the classification task. This figure also illustrates that certain anisotropic SAS models can result in isotropic scattering patterns when the scattering objects are completely unoriented (i.e., exhibiting a broad orientational distribution) or oriented in a particular direction with respect to the beam. In such cases, the anisotropy of the scattering pattern due to the form factor cannot be observed. Consequently, from the perspective of machine learning, the observation of an anisotropic scattering pattern directly excludes all isotropic models, whereas the observation of an isotropic scattering pattern does not allow for the direct inference that the model was isotropic.
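The cleaning step can be sketched as follows, assuming the detector images are stacked in a NumPy array; the quantile thresholds are the ones quoted above, and the random data only stands in for the real images.

```python
import numpy as np

def clean_dataset(images):
    """Drop near-empty and saturated images, following the quantile thresholds in the text.

    images: array of shape (n_images, height, width).
    """
    stds = images.std(axis=(1, 2))
    maxima = images.max(axis=(1, 2))

    low_stat_threshold = np.quantile(stds, 0.02)       # bottom 2% of image standard deviations
    saturation_threshold = np.quantile(maxima, 0.99)   # top 1% of per-image maxima

    keep = (stds > low_stat_threshold) & (maxima <= saturation_threshold)
    return images[keep]

# Example with random data standing in for the detector images.
fake_images = np.random.poisson(5.0, size=(1000, 144, 256)).astype(float)
print(clean_dataset(fake_images).shape)
```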

An insight into the variability present among models, through random images selected from the dataset. Isotropic (red title) and anisotropic (blue title) images can be found, as well as images with high and poor counting statistics.

Given that we have a dataset of roughly 260,000 virtual experiments, comprising a set of 46 SANS models measured under different experimental conditions, we can attempt to train supervised machine learning algorithms to predict the SAS model of a sample given the SANS scattering pattern data measured by the PSD at KWS-1. We are taking advantage here of the fact that we know the ground truth of the SAS model used to generate the data by Monte Carlo simulation. The data from a PSD can be seen as an image with one channel; therefore, we can use all recent developments in methods for image classification.

It is well known in the SANS community that the intensity profile as a function of the scattering vector \(q\) is normally plotted on a logarithmic scale, to be able to see the small features at increasing values of \(q\). In this sense, it is useful for the classification task to perform a logarithmic transformation on the measured data to increase the contribution of the large-\(q\) features to the images' variance. Since the logarithm is defined only for values larger than 0, and is positive only for values larger than 1, we first add a constant offset of +1 to all pixels and check that there are no negative values in the image. Then we apply the logarithm function to the intensity count in all pixels, emphasizing large-\(q\) features, as can be seen in Fig. 6. Finally, we normalize all the images in the dataset to their maximum value in order to bring them to values between 0 and 1, so as to be independent of the counting statistics of the measurement. The transformed data are then fed to the neural network. Mathematically speaking, the transformation reads

$$x_{i,j} = \frac{\log(x_{i,j} + 1.0)}{\mathrm{MaxLog}}, \qquad (2)$$

for the intensity of pixel \(x_{i,j}\) in row \(i\) and column \(j\), where MaxLog is the maximum of the image after applying the logarithmic transformation. All images were resized to 180 × 180 pixels, since the networks used in this work are designed for square input images. The value 180 is a compromise between 144 and 256, for which we believe the loss of information from interpolation and downsampling, respectively, is minimal. We decided to train Convolutional Neural Networks (CNNs) for the classification task using PyTorch22, by transferring the learning to three architectures (ResNet-5023, DenseNet24, and Inception V325). In all cases, the corresponding PyTorch default weights were used as the starting point, and all weights were allowed to be modified. We then generated an ensemble method that averaged the last-layer weights of all three CNNs and predicted based on the averaged weights. In all cases, we modified the first layer to accept the generated one-channel images of our SANS database in HDF format. We preferred the HDF format to keep floating-point precision in each pixel's intensity count. Also, the final fully connected layer was modified to match the 46 classes, and a softmax layer was used to obtain values between 0 and 1, giving some notion of classification probability.
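A condensed PyTorch sketch of the preprocessing in Eq. (2) and of the architecture changes described above (one-channel input, 46-class head), using ResNet-50 as the example backbone; this is our reconstruction of the described setup, not the authors' code.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F
from torchvision import models

NUM_CLASSES = 46

def preprocess(image: torch.Tensor) -> torch.Tensor:
    """Apply Eq. (2): log(x + 1) scaled by its maximum, then resize to 180 x 180."""
    x = torch.log(image + 1.0)
    x = x / x.max()
    x = F.interpolate(x[None, None], size=(180, 180), mode="bilinear", align_corners=False)
    return x[0]  # shape (1, 180, 180): one channel

def build_model() -> nn.Module:
    """ResNet-50 with a one-channel first layer and a 46-class head."""
    model = models.resnet50(weights=models.ResNet50_Weights.DEFAULT)
    model.conv1 = nn.Conv2d(1, 64, kernel_size=7, stride=2, padding=3, bias=False)
    model.fc = nn.Linear(model.fc.in_features, NUM_CLASSES)
    return model

# Quick shape check on a fake detector image.
img = torch.rand(144, 256) * 500.0
logits = build_model()(preprocess(img).unsqueeze(0))
print(logits.shape)  # torch.Size([1, 46])
```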

The dataset was split into training, testing, and validation sets in proportions 0.70, 0.20, and 0.10, respectively. For the minimization problem in multilabel classification, the Cross Entropy loss is a natural choice of loss function. This function coincides with the multinomial logistic loss and belongs to a set of loss functions that are called comp-sum losses (loss functions obtained by composing a concave function, such as the logarithm in the case of the logistic loss, with a sum of functions of differences of scores, such as the negative exponential)15. In our case, we can write the Cross Entropy loss function as

$$l(x_n, y_n) = -\log\left(\frac{\exp(\alpha_{y_n}(x_n))}{\sum_{c=1}^{C}\exp(\alpha_c(x_n))}\right), \qquad (3)$$

where \(x_n\) is the input, \(y_n\) is the target label, \(\alpha_i(x)\) is the \(i\)-th output value of the last layer when \(x\) is the input, and \(C\) is the number of classes. In the extreme case where only the correct output \(\alpha_{y_n}(x_n)\) is equal to 1 and the rest are equal to 0, the quotient is equal to 1, and the logarithm makes the loss function equal to 0. If \(\alpha_{y_n}(x_n) < 1\), then the quotient will be between 0 and 1, the logarithm will make it negative, and the \(-1\) pre-factor will transform it into a positive value. Any accepted minimization step of this function forces the weight of the correct label to increase in absolute value.
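As a small numerical check of Eq. (3), take three classes with last-layer outputs (2.0, 0.5, -1.0) and correct label 0: the softmax probability of the correct class is about 0.79, so the loss is about 0.24, and it shrinks toward 0 as that output grows relative to the others.

```python
import torch
import torch.nn as nn

# Last-layer outputs for one sample over three classes; the correct class index is 0.
logits = torch.tensor([[2.0, 0.5, -1.0]])
target = torch.tensor([0])

loss = nn.CrossEntropyLoss()(logits, target)
prob = torch.softmax(logits, dim=1)[0, 0]
print(float(prob))  # ~0.79
print(float(loss))  # ~0.24, equal to -log(0.79...)
```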

Finally, for the training phase, mini-batches with a batch size of 64 images were used, and all CNNs were trained for 30 epochs. The Adaptive Moment Estimation (Adam)26 algorithm was used for the minimization of the loss function, with a learning rate of \(\eta = 1\times 10^{-5}\). For the testing phase, a batch size of 500 images was used, and for the validation phase, batches of 1000 images were used to increase the support of the estimated final quantities.
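Putting the pieces together, the following is a condensed training-loop sketch with the hyperparameters quoted above (mini-batches of 64, 30 epochs, Adam with learning rate 1e-5); the random tensors stand in for the real dataset, and this is not the authors' training code.

```python
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset
from torchvision import models

# Placeholder dataset: random one-channel 180x180 images with labels in [0, 46).
images = torch.rand(512, 1, 180, 180)
labels = torch.randint(0, 46, (512,))
train_loader = DataLoader(TensorDataset(images, labels), batch_size=64, shuffle=True)

# Same one-channel ResNet-50 head modification as in the previous sketch.
model = models.resnet50(weights=models.ResNet50_Weights.DEFAULT)
model.conv1 = nn.Conv2d(1, 64, kernel_size=7, stride=2, padding=3, bias=False)
model.fc = nn.Linear(model.fc.in_features, 46)

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-5)

for epoch in range(30):
    for x, y in train_loader:
        optimizer.zero_grad()
        loss = criterion(model(x), y)
        loss.backward()
        optimizer.step()
    print(f"epoch {epoch}: loss {loss.item():.3f}")
```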

The data was obtained from an already completed study that has been published separately19. It was collected from a sample consisting of a 60 μm thick brain slice from a reeler mouse after death. In the cited paper19, the authors declare that the animal procedures were approved by the institutional animal welfare committee at the Research Centre Jülich GmbH, Germany, and were in accordance with European Union guidelines for the use and care of laboratory animals. For the purposes of this work, we only refer to the data for validation of the presented algorithm; we did not sacrifice or handle any animals. The contrast was obtained with deuterated formalin. The irradiated area was 1 mm × 1 mm. The authors observed anisotropic Porod scattering (q < 0.04 Å⁻¹) that is connected to the preferred orientation of whole nerve fibres, also called axons. They also report a correlation ring (q = 0.083 Å⁻¹) that arises from the myelin sheaths, a multilayer of lipid bilayers with the myelin basic protein as a spacer.


Build a self-service digital assistant using Amazon Lex and Knowledge Bases for Amazon Bedrock | Amazon Web … – AWS Blog

Organizations strive to implement efficient, scalable, cost-effective, and automated customer support solutions without compromising the customer experience. Generative artificial intelligence (AI)-powered chatbots play a crucial role in delivering human-like interactions by providing responses from a knowledge base without the involvement of live agents. These chatbots can be efficiently utilized for handling generic inquiries, freeing up live agents to focus on more complex tasks.

Amazon Lex provides advanced conversational interfaces using voice and text channels. It features natural language understanding capabilities that enable more accurate identification of user intent and faster fulfillment of that intent.

Amazon Bedrock simplifies the process of developing and scaling generative AI applications powered by large language models (LLMs) and other foundation models (FMs). It offers access to a diverse range of FMs from leading providers such as Anthropic Claude, AI21 Labs, Cohere, and Stability AI, as well as Amazon's proprietary Amazon Titan models. Additionally, Knowledge Bases for Amazon Bedrock empowers you to develop applications that harness the power of Retrieval Augmented Generation (RAG), an approach where retrieving relevant information from data sources enhances the model's ability to generate contextually appropriate and informed responses.

The generative AI capability of QnAIntent in Amazon Lex lets you securely connect FMs to company data for RAG. QnAIntent provides an interface to use enterprise data and FMs on Amazon Bedrock to generate relevant, accurate, and contextual responses. You can use QnAIntent with new or existing Amazon Lex bots to automate FAQs through text and voice channels, such as Amazon Connect.

With this capability, you no longer need to create variations of intents, sample utterances, slots, and prompts to predict and handle a wide range of FAQs. You can simply connect QnAIntent to company knowledge sources and the bot can immediately handle questions using the allowed content.

In this post, we demonstrate how you can build chatbots with QnAIntent that connect to a knowledge base in Amazon Bedrock (powered by Amazon OpenSearch Serverless as a vector database) and build rich, self-service, conversational experiences for your customers.

The solution uses Amazon Lex, Amazon Simple Storage Service (Amazon S3), and Amazon Bedrock in the following steps:

The following diagram illustrates the solution architecture and workflow.

In the following sections, we look at the key components of the solution in more detail and the high-level steps to implement the solution:

To implement this solution, you need the following:

To create a new knowledge base in Amazon Bedrock, complete the following steps. For more information, refer to Create a knowledge base.
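If you prefer to script this step, the Knowledge Bases control plane (the bedrock-agent client in boto3) exposes a CreateKnowledgeBase call. The following sketch reflects our understanding of its shape; the role ARN, embedding model, OpenSearch Serverless collection, and field mapping are placeholders, and the parameter names should be checked against the current API reference.

```python
import boto3

bedrock_agent = boto3.client("bedrock-agent")

# Placeholders: service role, embedding model, and OpenSearch Serverless collection.
response = bedrock_agent.create_knowledge_base(
    name="shareholder-docs-kb",
    roleArn="arn:aws:iam::123456789012:role/bedrock-kb-role",
    knowledgeBaseConfiguration={
        "type": "VECTOR",
        "vectorKnowledgeBaseConfiguration": {
            "embeddingModelArn": "arn:aws:bedrock:us-east-1::foundation-model/amazon.titan-embed-text-v1",
        },
    },
    storageConfiguration={
        "type": "OPENSEARCH_SERVERLESS",
        "opensearchServerlessConfiguration": {
            "collectionArn": "arn:aws:aoss:us-east-1:123456789012:collection/your-collection-id",
            "vectorIndexName": "bedrock-kb-index",
            "fieldMapping": {
                "vectorField": "embedding",
                "textField": "text",
                "metadataField": "metadata",
            },
        },
    },
)
print(response["knowledgeBase"]["knowledgeBaseId"])
```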

Complete the following steps to create your bot:

Complete the following steps to add QnAIntent:

The Amazon Lex web UI is a prebuilt fully featured web client for Amazon Lex chatbots. It eliminates the heavy lifting of recreating a chat UI from scratch. You can quickly deploy its features and minimize time to value for your chatbot-powered applications. Complete the following steps to deploy the UI:

To avoid incurring unnecessary future charges, clean up the resources you created as part of this solution:

In this post, we discussed the significance of generative AI-powered chatbots in customer support systems. We then provided an overview of the new Amazon Lex feature, QnAIntent, designed to connect FMs to your company data. Finally, we demonstrated a practical use case of setting up a Q&A chatbot to analyze Amazon shareholder documents. This implementation not only provides prompt and consistent customer service, but also empowers live agents to dedicate their expertise to resolving more complex issues.

Stay up to date with the latest advancements in generative AI and start building on AWS. If you're seeking assistance on how to begin, check out the Generative AI Innovation Center.

Supriya Puragundla is a Senior Solutions Architect at AWS. She has over 15 years of IT experience in software development, design, and architecture. She helps key customer accounts on their data, generative AI, and AI/ML journeys. She is passionate about data-driven AI and has deep expertise in ML and generative AI.

Manjula Nagineni is a Senior Solutions Architect with AWS based in New York. She works with major financial service institutions, architecting and modernizing their large-scale applications while adopting AWS Cloud services. She is passionate about designing cloud-centered big data workloads. She has over 20 years of IT experience in software development, analytics, and architecture across multiple domains such as finance, retail, and telecom.

Mani Khanuja is a Tech Lead for Generative AI Specialists, author of the book Applied Machine Learning and High Performance Computing on AWS, and a member of the Board of Directors of the Women in Manufacturing Education Foundation. She leads machine learning projects in various domains such as computer vision, natural language processing, and generative AI. She speaks at internal and external conferences such as AWS re:Invent, Women in Manufacturing West, YouTube webinars, and GHC 23. In her free time, she likes to go for long runs along the beach.


Machine learning was used to sync subtitles in Marvel’s Spider-Man 2 – Game Developer

Sony is convinced machine learning and AI can be used to streamline development, and revealed it already leveraged the tech when developing Marvel's Spider-Man 2.

The PlayStation maker shared the tidbit during a recent corporate strategy meeting where it outlined its long-term "Creative Entertainment Vision."

The Japanese company said its 10-year plan will revolve around harnessing technology to "unleash the creativity of creators," connecting diverse people and values to "foster vibrant communities," and creating new experiences that "go beyond imagination."

The company didn't specify which new technologies it's hoping to deploy, but noted it's already using AI tech and machine learning to "support IP value maximization." What does that mean in practice? Sony claims it's about finding new solutions to existing problems so franchises can be "delivered rapidly and at a low cost."

Throwing out an example of that philosophy in action, Sony explained Marvel's Spider-Man 2 developer Insomniac Games recently "utilized machine learning and applied original voice recognition software specialized for gaming" to enable the automatic synchronization of subtitles in certain languages. It's claimed the technique "significantly" shortened the subtitling process by making it easier to sync subs with character dialogue.

There's been plenty of AI chatter at Sony this week. Naughty Dog studio head Neil Druckmann advocated for the tech in an interview published on the company website and claimed it could "revolutionize" development and enable studios to "take on more adventurous projects and push the boundaries of storytelling in games."

He said AI tools could reduce costs and clear technical hurdles for developers, unlocking their creativity in the process. "With AI, your creativity sets the limits. Understanding art history, composition, and storytelling is essential for effective direction. Tools evolve quickly. Some tools once essential are now obsolete," he continued.

"At Naughty Dog, we transitioned from hand-animating Jak and Daxter to using motion capture in Uncharted, significantly enhancing our storytelling."

Sony isn't the first video game company to hype AI tech. Other major players like EA and Microsoft are pushing the technology, claiming it'll be a tool that empowers creatives across the industry while lowering costs. Some developers, however, are concerned that wielding AI (specifically the generative variety) as a cost-cutting device will invariably mean layoffs and downsizing.


Reinforcement learning AI might bring humanoid robots to the real world – Science News Magazine

ChatGPT and other AI tools are upending our digital lives, but our AI interactions are about to get physical. Humanoid robots trained with a particular type of AI to sense and react to their world could lend a hand in factories, space stations, nursing homes and beyond. Two recent papers in Science Robotics highlight how that type of AI called reinforcement learning could make such robots a reality.

"We've seen really wonderful progress in AI in the digital world with tools like GPT," says Ilija Radosavovic, a computer scientist at the University of California, Berkeley. "But I think that AI in the physical world has the potential to be even more transformational."

The state-of-the-art software that controls the movements of bipedal bots often uses what's called model-based predictive control. It's led to very sophisticated systems, such as the parkour-performing Atlas robot from Boston Dynamics. But these robot brains require a fair amount of human expertise to program, and they don't adapt well to unfamiliar situations. Reinforcement learning, or RL, in which AI learns through trial and error to perform sequences of actions, may prove a better approach.

"We wanted to see how far we can push reinforcement learning in real robots," says Tuomas Haarnoja, a computer scientist at Google DeepMind and coauthor of one of the Science Robotics papers. Haarnoja and colleagues chose to develop software for a 20-inch-tall toy robot called OP3, made by the company Robotis. The team not only wanted to teach OP3 to walk but also to play one-on-one soccer.

"Soccer is a nice environment to study general reinforcement learning," says Guy Lever of Google DeepMind, a coauthor of the paper. "It requires planning, agility, exploration, cooperation and competition."

The toy size of the robots "allowed us to iterate fast," Haarnoja says, because larger robots are harder to operate and repair. And before deploying the machine learning software in the real robots, which can break when they fall over, the researchers trained it on virtual robots, a technique known as sim-to-real transfer.

Training of the virtual bots came in two stages. In the first stage, the team trained one AI using RL merely to get the virtual robot up from the ground, and another to score goals without falling over. As input, the AIs received data including the positions and movements of the robots' joints and, from external cameras, the positions of everything else in the game. (In a recently posted preprint, the team created a version of the system that relies on the robots' own vision.) The AIs had to output new joint positions. If they performed well, their internal parameters were updated to encourage more of the same behavior. In the second stage, the researchers trained an AI to imitate each of the first two AIs and to score against closely matched opponents (versions of itself).

To prepare the control software, called a controller, for the real-world robots, the researchers varied aspects of the simulation, including friction, sensor delays and body-mass distribution. They also rewarded the AI not just for scoring goals but also for other things, like minimizing knee torque to avoid injury.

Real robots tested with the RL control software walked nearly twice as fast, turned three times as quickly and took less than half the time to get up compared with robots using the scripted controller made by the manufacturer. But more advanced skills also emerged, like fluidly stringing together actions. "It was really nice to see more complex motor skills being learned by robots," says Radosavovic, who was not a part of the research. And the controller learned not just single moves, but also the planning required to play the game, like knowing to stand in the way of an opponent's shot.

"In my eyes, the soccer paper is amazing," says Joonho Lee, a roboticist at ETH Zurich. "We've never seen such resilience from humanoids."

But what about human-sized humanoids? In the other recent paper, Radosavovic worked with colleagues to train a controller for a larger humanoid robot. This one, Digit from Agility Robotics, stands about five feet tall and has knees that bend backward like an ostrich. The team's approach was similar to Google DeepMind's. Both teams used computer brains known as neural networks, but Radosavovic used a specialized type called a transformer, the kind common in large language models like those powering ChatGPT.

Instead of taking in words and outputting more words, the model took in 16 observation-action pairs (what the robot had sensed and done for the previous 16 snapshots of time, covering roughly a third of a second) and output its next action. To make learning easier, it first learned based on observations of its actual joint positions and velocity, before using observations with added noise, a more realistic task. To further enable sim-to-real transfer, the researchers slightly randomized aspects of the virtual robot's body and created a variety of virtual terrain, including slopes, trip-inducing cables and bubble wrap.
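The description maps onto a fairly standard sequence model. The following toy PyTorch sketch illustrates the input/output structure described (a transformer encoder over the last 16 observation-action pairs that outputs the next action); it is only an illustration, not the Berkeley controller, and all dimensions are invented.

```python
import torch
import torch.nn as nn

class HistoryPolicy(nn.Module):
    """Toy transformer policy: 16 past observation-action pairs in, next action out."""

    def __init__(self, obs_dim=40, act_dim=12, d_model=128, history=16):
        super().__init__()
        self.embed = nn.Linear(obs_dim + act_dim, d_model)
        self.pos = nn.Parameter(torch.zeros(history, d_model))  # learned positional embedding
        layer = nn.TransformerEncoderLayer(d_model=d_model, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)
        self.head = nn.Linear(d_model, act_dim)

    def forward(self, obs_act_history):
        # obs_act_history: (batch, 16, obs_dim + act_dim)
        x = self.embed(obs_act_history) + self.pos
        x = self.encoder(x)
        return self.head(x[:, -1])  # predict the next action from the most recent timestep

policy = HistoryPolicy()
history = torch.randn(1, 16, 40 + 12)
print(policy(history).shape)  # torch.Size([1, 12])
```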

After training in the digital world, the controller operated a real robot for a full week of tests outside, preventing the robot from falling over even a single time. And in the lab, the robot resisted external forces like having an inflatable exercise ball thrown at it. The controller also outperformed the non-machine-learning controller from the manufacturer, easily traversing an array of planks on the ground. And whereas the default controller got stuck attempting to climb a step, the RL one managed to figure it out, even though it hadn't seen steps during training.

Reinforcement learning for four-legged locomotion has become popular in the last few years, and these studies show the same techniques now working for two-legged robots. These papers are either at par with or have pushed beyond manually defined controllers, a tipping point, says Pulkit Agrawal, a computer scientist at MIT. "With the power of data, it will be possible to unlock many more capabilities in a relatively short period of time."

And the papers' approaches are likely complementary. Future AI robots may need the robustness of Berkeley's system and the dexterity of Google DeepMind's. Real-world soccer incorporates both. According to Lever, soccer has been a grand challenge for robotics and AI for quite some time.
