Archive for the ‘Artificial Intelligence’ Category

Artificial Intelligence in Operation Monitoring Discovers Patterns Within Drilling Reports – Journal of Petroleum Technology



In well-drilling activities, successful execution of a sequence of operations defined in a well project is critical. To provide proper monitoring, operations executed during drilling procedures are reported in daily drilling reports (DDRs). The complete paper provides an approach using machine-learning and sequence-mining algorithms for predicting and classifying the next operation based on textual descriptions. The general goal is to exploit the rich source of information represented by the DDRs to derive methodologies and tools capable of performing automatic data-analysis procedures and assisting human operators in time-consuming tasks.

Classification Tasks. fastText. fastText is a library, discussed in the literature, designed to learn word embeddings and perform text classification. The technique implements a simple linear model with a rank constraint; the text representation is a hidden state that is used to feed classifiers, and a softmax function computes the probability distribution over predefined classes.
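As a rough illustration (a sketch of the idea, not the authors' implementation), the fastText approach of averaging word embeddings into a hidden state and feeding it to a softmax classifier can be written in a few lines of Python; the vocabulary, dimensions, and class count here are invented for the example:

```python
import numpy as np

class FastTextSketch:
    """Toy fastText-style classifier: average word embeddings, then softmax."""

    def __init__(self, vocab, n_classes, dim=8, seed=0):
        rng = np.random.default_rng(seed)
        self.word_to_idx = {w: i for i, w in enumerate(vocab)}
        self.E = rng.normal(0.0, 0.1, (len(vocab), dim))  # word embeddings
        self.W = rng.normal(0.0, 0.1, (dim, n_classes))   # linear classifier

    def embed(self, text):
        """Hidden state: the mean of the embeddings of the known words."""
        idxs = [self.word_to_idx[w] for w in text.split() if w in self.word_to_idx]
        return self.E[idxs].mean(axis=0)

    def predict_proba(self, text):
        """Softmax probability distribution over the predefined classes."""
        logits = self.embed(text) @ self.W
        exp = np.exp(logits - logits.max())  # subtract max for numerical stability
        return exp / exp.sum()
```

In the real library the embeddings and classifier weights are trained jointly; the sketch above shows only the forward pass that produces the class distribution.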

Conditional Random Fields (CRFs). CRFs are a category of undirected graphical models that combine features from each timestep of the sequence and model transitions between labels across the input sequence. They were proposed to overcome the label-bias problem associated with earlier techniques such as hidden Markov models and maximum-entropy Markov models.
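For intuition, once a linear-chain model of this kind has per-step label scores and label-to-label transition scores, the best label sequence is recovered with Viterbi decoding. The following is a minimal sketch with made-up score matrices, not the paper's model:

```python
import numpy as np

def viterbi(emissions, transitions):
    """Decode the best label sequence from per-step label scores (emissions,
    shape [n_steps, n_labels]) and label-to-label transition scores
    (shape [n_labels, n_labels]), as in a linear-chain CRF."""
    n_steps, n_labels = emissions.shape
    score = emissions[0].copy()                  # best score ending in each label
    back = np.zeros((n_steps, n_labels), dtype=int)
    for t in range(1, n_steps):
        # total[i, j]: best score ending in label i at t-1, then moving to j
        total = score[:, None] + transitions + emissions[t][None, :]
        back[t] = total.argmax(axis=0)           # remember the best predecessor
        score = total.max(axis=0)
    path = [int(score.argmax())]                 # backtrack from the best end label
    for t in range(n_steps - 1, 0, -1):
        path.append(int(back[t][path[-1]]))
    return path[::-1]
```

With transition scores that reward staying in the same label, the decoder prefers runs of one label even when a single step's emission score mildly disagrees, which is exactly the sequence-level behavior a softmax layer alone cannot express.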

Recurrent Models. Despite achieving good results in several scenarios and learning word embeddings as a byproduct of its training, the fastText classifier does not properly consider word-ordering information that can be useful for many classification tasks. Such a shortcoming can be addressed by a recurrent neural network (RNN), which considers the fact that a fragment of text is formed by an ordered sequence of words. The authors consider the gated recurrent unit variant, which is easier to train than traditional RNNs and achieves results comparable with those of the long short-term memory unit while requiring fewer parameters to learn. The methodology of these classifiers is detailed mathematically in the complete paper.
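A single GRU step can be sketched to show where the parameter savings come from: three input/recurrent weight pairs (update gate, reset gate, candidate state) instead of the four an LSTM needs. This is an illustrative sketch, not the authors' implementation:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gru_cell(x, h, params):
    """One GRU step. params holds three (input, recurrent) weight pairs:
    (Wz, Uz) for the update gate, (Wr, Ur) for the reset gate,
    and (Wh, Uh) for the candidate state."""
    Wz, Uz, Wr, Ur, Wh, Uh = params
    z = sigmoid(Wz @ x + Uz @ h)                 # update gate: how much to renew
    r = sigmoid(Wr @ x + Ur @ h)                 # reset gate: how much past to use
    h_tilde = np.tanh(Wh @ x + Uh @ (r * h))     # candidate hidden state
    return (1 - z) * h + z * h_tilde             # interpolate old and new state
```

Because the cell keeps a single hidden state (no separate memory cell) and three weight pairs, it learns fewer parameters than an LSTM of the same hidden size, which is the trade-off the summary refers to.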

Sequence Prediction. Sequential pattern mining can be defined broadly as the task of discovering interesting subsequences in a set of sequences, where the level of interest can be measured by various criteria, such as occurrence frequency, length, or profit, according to the application. In this paper, the authors focus on the specific task of sequence prediction.

In the scenario considered, the alphabet is given by an ontology of operations of drilling activities. The sequence is defined according to data stored in DDRs. The proposed methodology considers various sequence prediction algorithms, specifically the following:

These algorithms are detailed in the complete paper. The Sequential Pattern Mining Framework (SPMF), an open-source data-mining library specialized in frequent pattern mining, was used to implement the algorithms.
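The prediction task itself can be illustrated with a deliberately simple first-order model that counts, for every operation, which operation most often follows it. The algorithms implemented in SPMF (e.g., CPT+) are considerably more sophisticated, and the operation names below are invented for the example:

```python
from collections import Counter, defaultdict

class NextOpPredictor:
    """First-order sketch of sequence prediction: for each operation,
    count which operation most often follows it in the training sequences."""

    def __init__(self):
        self.following = defaultdict(Counter)

    def fit(self, sequences):
        for seq in sequences:
            for prev, nxt in zip(seq, seq[1:]):  # every consecutive pair
                self.following[prev][nxt] += 1
        return self

    def predict(self, seq):
        """Predict the next operation from the last one seen; None if unseen."""
        counts = self.following.get(seq[-1])
        if not counts:
            return None
        return counts.most_common(1)[0][0]
```

A model like this only looks at the last operation; the point of the richer sequence-prediction algorithms evaluated in the paper is to exploit longer contexts in the wellbore sequences.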

Data Sets. The data sets used for the experiments reported in this paper were extracted from different collections of DDRs. Each DDR entry is a record containing a rich set of information about the event being reported, which could be an operation or an occurrence. Two types of data sets were generated: the operations data sets and the costs data set. The former is used for both the classification and sequence-prediction tasks, whereas the latter is used only for classification.

Operations Data Sets. The operations data sets were extracted from DDRs of 119 different wellbores, which comprise more than 90,000 entries. The DDR fields of most interest for the experiments on this collection are the description and the operation name. The former is a special field used by the reporter to fill in important details about the event in a free-text format. The latter is selected by the reporter from a predefined list of operation names.

For the sequence-mining tasks, only the operation name is used. The data set is viewed as a set of sequences of operations, one for each wellbore. For the classification tasks, both fields are used for supervised learning, with the description as input object and the operation name as label.

The DDRs were preprocessed by an industry specialist with the objective of, first, removing the inconsistencies and, second, normalizing operation names to unify operations that shared semantics. Given the large number of documents, the strategy used for the former objective was to remove entries with the wrong operation name (instead of fixing each one, which would be a much harder task). As for the second objective, after an analysis of the list of operation names and samples of descriptions, each group of overlapping operations was transformed into a single operation.

This process yielded a resulting data set containing more than 38,000 samples and 39 operation types for the classification task and another containing more than 51,000 samples and 41 operation types for the sequence-prediction task.

Costs Data Set. The costs data set is a collection of DDRs with an extra field (the target field) meant to be used for calculating the cost of each operation performed in a wellbore project. That field usually is multivalued because more than one activity of interest might be described in the free-text field of a DDR entry. Each value in that list is a pair containing two types of information: a label for the activity described in the entry and a number indicating a diameter value.

As opposed to the operations data set, the target field was filled on land by a small group of employees trained specifically for this task. Nevertheless, the costs data set still had to be preprocessed before use in the experiments.

Classification Results. Before evaluating the models, the best values for each hyperparameter are determined using the validation set through a grid search. The proposed models are trained for 30 epochs.
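A grid search of this kind simply tries every combination of candidate hyperparameter values and keeps the combination that scores best on the validation set. The following is a generic sketch; the parameter names and the scoring function are hypothetical, not the paper's actual search space:

```python
from itertools import product

def grid_search(param_grid, evaluate):
    """Try every combination of hyperparameter values; return the combination
    with the best validation score, along with that score."""
    names = list(param_grid)
    best_params, best_score = None, float("-inf")
    for values in product(*(param_grid[n] for n in names)):
        params = dict(zip(names, values))
        score = evaluate(params)          # e.g., validation accuracy
        if score > best_score:
            best_params, best_score = params, score
    return best_params, best_score
```

For example, with a grid of two learning rates and two embedding dimensions, all four combinations are evaluated and the best-scoring one is returned.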

The experimental results for accuracy and macro-F1 on the costs and operations data sets are presented in the complete paper. In both cases, the fastText classifier, despite being quite simple, yields strong results and provides a strong baseline for the proposed models. One should recall, however, that the word vectors learned by this first classifier are also used as the embeddings for the proposed models.

The other neural networks also consider the complete word ordering in the samples, allowing them to outperform the baseline. These metrics are improved further by replacing the traditional softmax output layer with a CRF, which allows the model to label each entry in the segment based not only on its extracted characteristics but also on the ordering of operations. This improves the baseline accuracy by 10.94% and 3.85% on the costs and operations data sets, respectively. The proposed model thus learns not only the most relevant characteristics of each sample but also the patterns in the sequence of operations performed in a well-drilling project.

Sequence-Mining Results. The data set was divided into 10 segments, and the methods were evaluated according to a cross-validation protocol, which varies the training and testing data across multiple runs in order to reduce any bias in the results. (For the classification tasks, approaches based on word embeddings and CRFs are exploited.) Evaluations considered sequences of size 5 to 10 in the data set, using the sequence-prediction methods to predict the next drilling operation.
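The 10-fold protocol can be sketched as follows: the item indices are split into 10 folds, and each fold serves once as the test set while the remaining folds form the training set. This is a generic illustration, not the authors' exact split:

```python
def kfold_indices(n, k=10):
    """Split n item indices into k folds; yield (train, test) index pairs,
    one per fold, so each item is tested exactly once."""
    fold_sizes = [n // k + (1 if i < n % k else 0) for i in range(k)]
    start = 0
    for size in fold_sizes:
        test = list(range(start, start + size))
        train = list(range(0, start)) + list(range(start + size, n))
        yield train, test
        start += size
```

Averaging a metric over the k train/test runs gives an estimate that depends less on any single arbitrary split of the data, which is the bias reduction the protocol is meant to provide.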

Table 1 presents the accuracy obtained when considering the sequences of operations as presented in the data set. Table 2 shows the accuracy obtained when consecutive repetitions of drilling operations are removed from the data. The data set contains multiple repetitions of operations contiguous to one another; this makes the data more predictable to the sequence-prediction model and explains the higher accuracy obtained in the experiments shown in Table 1.

DDR Processing Framework. To make the models discussed available for use in a real-world scenario, a framework is proposed that allows the end user to upload DDRs and analyze them with different applications, one for each specific purpose. One great advantage of this framework is that the user feeds in the data once and then has access to several tools for analyzing it.

Currently, a working version of an application for performing the classification tasks already has been implemented. It encapsulates the classification models generated with the experiments and allows the processing of a large number of DDRs, either for operation or cost classification.

Read more:
Artificial Intelligence in Operation Monitoring Discovers Patterns Within Drilling Reports - Journal of Petroleum Technology

Admiral Seguros Is The First Spanish Insurer To Use Artificial Intelligence To Assess Vehicle Damage – PRNewswire

To do this, Admiral Seguros is using an AI solution, developed by the technology company Tractable, which accurately evaluates vehicle damage with photos sent through a web application. The app, via the AI, completes the complex manual tasks that an advisor would normally perform and produces a damage assessment in seconds, often without the need for further review.

Upon receiving the assessment, Admiral Seguros will use it to make immediate payment offers to policyholders when appropriate, allowing them to resolve claims in minutes, even on the first call.

Jose Maria Perez de Vargas, Head of Customer Management at Admiral Seguros, said: "Admiral Seguros continues to advance in digitalisation as a means to provide a better service to our policyholders, providing them with an easy, secure and transparent means of evaluating damages without the need for travel, achieving compensation in a few hours. It's a simple, innovative and efficient claims management process that our clients will surely appreciate."

Adrien Cohen, co-founder and president of Tractable, said: "By using our AI to offer immediate payments, Admiral Seguros will resolve many claims almost instantly, to the delight of its customers. This is central to our mission of using Artificial Intelligence to accelerate recovery, converting the process from weeks to minutes."

Tractable's AI uses deep learning for computer vision, in addition to machine learning techniques. The AI is trained with many millions of photographs of vehicle damage, and the algorithms learn from experience by analyzing a wide variety of different examples. Tractable's technology can be applied globally to any vehicle.

The AI enables insurers to assess car damage, shares recommended repair operations, and guides the claims management process to ensure these are processed and settled as quickly as possible.

According to Admiral Seguros, the application of this technology in the insurance sector will be a great step in digitization and will offer a great improvement in the customer experience of Admiral's insurance brands in Spain, Qualitas Auto and Balumba.

About Tractable:

Tractable develops artificial intelligence for accident and disaster recovery. Its AI solutions have been deployed by leading insurers across Europe, North America and Asia to accelerate accident recovery for hundreds of thousands of households. Tractable is backed by $55m in venture capital and has offices in London, New York City and Tokyo.

About Admiral Seguros

In Spain, Admiral Group plc has been based in Seville since 2006 thanks to the creation of Admiral Seguros. More than 700 people work there, serving the entire national territory and consolidating and marketing its two commercial brands: Qualitas Auto and Balumba.

Recognized as the third best company to work for in Spain, the sixth in Europe and the eighteenth in the world by the consultancy Great Place to Work, Admiral Seguros is committed to a corporate culture focused on people.

SOURCE Tractable

https://tractable.ai

Visit link:
Admiral Seguros Is The First Spanish Insurer To Use Artificial Intelligence To Assess Vehicle Damage - PRNewswire

Expanding Access to Mental Healthcare with Artificial Intelligence – HealthITAnalytics.com

September 29, 2020 - Across the country today, it is widely acknowledged that access to mental healthcare is just as important as clinical care when it comes to overall wellness.

Mental health conditions are incredibly common in the US, impacting tens of millions of people each year, according to the National Institute of Mental Health (NIMH). However, estimates suggest that only half of people with these conditions receive treatment, mainly due to barriers like clinician shortages, fragmented care, and societal stigma.

For the many individuals suffering from anxiety and depression, these existing barriers coupled with the current healthcare crisis can significantly interfere with the ability to carry out life activities.

"The prevalence of mental health disorders, particularly depression and anxiety, is high. If anything, the prevalence of these conditions has only increased as a result of COVID-19. The need is greater than ever now," Jun Ma, PhD, Beth and George Vitoux Professor of Medicine in the University of Illinois Chicago (UIC) department of medicine, told HealthITAnalytics.

To broaden mental healthcare access for people with moderate depression or anxiety, UIC researchers are testing an artificial intelligence-powered virtual agent called Lumen. The team will train the tool to provide patients with problem-solving therapy, a structured approach designed to help people focus on learning cognitive and behavioral skills.

READ MORE: Machine Learning May Support Personalized Mental Health Therapies

The two-phase, five-year project is funded by a $2 million grant from NIMH.

"The goal is to meet the many challenges of people who don't have ready access to proven psychotherapy, which has been a longstanding issue," said Ma.

"Over the years, my research team has done clinical trials testing the effectiveness and dissemination of different behavioral and psychosocial interventions. The results of that work, combined with the gaps that exist in practice and patient access, have really catalyzed the idea for this project."

Using the same technology as Amazon's Alexa, researchers will develop an app that will act as a virtual mental health agent, talking through steps and strategies with patients following a validated treatment protocol.

"If we prove this way of delivering problem-solving treatment is safe and effective, once we put it into production anyone with access to Alexa would be able to access the program. We're very early in the development phase, so it will probably be another few years before it's widely available," said Ma.

READ MORE: US Patients See Rising Burdens of Mental Health, Chronic Disease

"We're making good strides. We're starting to conduct a user study on a small scale. And the immediate next step after this initial user development and the user testing phase will be a small-scale randomized controlled trial (RCT), in which we'll enroll patients with depressive symptoms and/or anxiety."

Individuals will complete eight one-on-one counseling sessions over 12 weeks. In each session, participants will identify a problem they view as affecting their life and as a source of emotional distress, and the counselor will help them define goals and possible solutions. Solutions are then compared, and counselors and patients work to make an action plan to implement the chosen solution.

Researchers will program Lumen using the Alexa Skills Kit to act as the virtual counselor working with participants, taking them through problem-solving steps and encouraging them to engage in meaningful and enjoyable activities to improve their emotional well-being.

During the first phase, 80 study participants who report elevated depressive and anxiety symptoms will test the Lumen tool, with the potential for wider use going forward.

The researchers hope that the project will increase access to mental healthcare for those who need it most.

READ MORE: Applying Artificial Intelligence to Chronic Disease Management

"One of the main advantages of using AI as a platform to provide therapy is the ability to scale and reduce significant barriers to access, as well as the sustainability of proven psychotherapy such as problem-solving treatment," said Ma.

"The technology can also be quite adaptable to individuals depending on when they need it and how they want to access it, and can potentially reduce barriers due to stigma."

Despite the serious potential for these tools to broaden the availability of mental healthcare, Ma also noted that the use of AI in this area comes with several concerns just as the technology does in any part of healthcare delivery.

"Like any novel treatment in early development, it's unknown at this point what the effectiveness and the sustained impact of AI in psychotherapy will be. It's certainly very worth exploring, as we are doing now," she said.

"Patient privacy is a very important area that warrants not only additional research, but also additional legislation and regulation. Additionally, AI and the underlying algorithms are trained using existing data and information, and there could be unintended consequences due to implicit or explicit bias. It's very important to have transparency in how the models are trained, as well as to ensure the data used to train such models is representative of the population."

Ma's statements align with those of other industry experts, who consistently highlight the necessity of safety, data privacy, and health equity when building and using these tools.

In a recent viewpoint published in JAMA, authors noted that chatbots and other AI-powered virtual agents are still relatively new, and much of the data available comes from research rather than widespread clinical implementation. For these reasons, healthcare leaders must continually evaluate the capacity of these tools to improve care delivery, the authors stated.

In the development stage of the Lumen tool, Ma's team at UIC plans to do just that.

"If the small-scale RCT proves promising, then we'll go on to a larger-scale RCT in which we'll recruit 200 patients, again with depressive symptoms and/or anxiety, to further test the potential impact and effectiveness of Lumen," Ma said.

Ultimately, the success of these tools in healthcare will depend on the industry's ability to weigh possible risks and rewards.

"Given the potential concerns, it's worth emphasizing the importance of balancing excitement for such novel treatments with caution. It's a fine line between ensuring protection of patient privacy and confidentiality and not restricting the innovation in this area," Ma concluded.

Continue reading here:
Expanding Access to Mental Healthcare with Artificial Intelligence - HealthITAnalytics.com

Cleaner air on motorways thanks to matrix signs with artificial intelligence – Innovation Origins

When matrix signs start blinking above a motorway, the average commuter already knows what time it is: traffic jams. The adjusted speed limit does help to improve traffic flow, but this dynamic traffic management system has one more, indirect effect: fewer emissions thanks to the improved flow. That is why a trial is starting in Germany in which environmental data will be incorporated into traffic flow management.

The air quality in the vicinity of motorways could also be considerably improved in the Netherlands. This is why the government wants to tackle this problem through the National Air Quality Cooperation Program (NSL), for example by promoting electric vehicles or offering alternative means of transport.

According to German scientists, the incorporation of environmental data into traffic flow management can reduce noise and pollution. They are now going to research this in the U-SARAH live project, coordinated by the Karlsruhe Institute of Technology (KIT). The Ministry of Infrastructure and Water Management of the Netherlands is funding the project to the tune of almost €1.1 million.

"The aim of this study is to optimize and implement an environmental control system in an existing traffic route control system so as to reduce the environmental impact on the sections in question," explains Professor Peter Vortisch, head of the KIT Institute for Transport.

"A microscopic traffic flow model developed over the course of a preliminary study with our partner Hessen Mobil enables the effects of the newly developed environmental control system to be simulated. This makes it possible to optimize the control system in such a way that both traffic flow and environmental effects are taken into account."

"We want to test and evaluate the new control system under real conditions in a practical test," says Matthias Glatz, a project manager at Hessen Mobil. EDI GmbH, a spin-off of KIT, uses the extensive traffic data to model road users' reactions to dynamic speed limits using artificial intelligence (AI). "On the basis of this data, we plan to develop an AI-based acceptance model and a prediction model as modules for guiding the SBA," says Dr. Thomas Freudenmann, one of the founders and managing director of EDI GmbH. The existing control system will be expanded with these modules.

The simulation model developed in U-SARAH live can be used in the future both for quality management and for the optimization of route control systems. The results of the project will benefit not only the population, public authorities, and scientific institutes but also all manufacturers of traffic control systems. "Thanks to the AI-based approach, the traffic situation can be estimated a few minutes in advance so that traffic can be controlled even better. The simulation-based development facilitates the easy integration of emissions data into traffic control systems, without incurring high acquisition costs for measuring technology," explains Sebastian Buck of the KIT Institute for Transport.

By reducing emissions and optimizing the flow of traffic, the economic damage caused by traffic congestion and excess emissions can be reduced. An analysis platform developed within the project will help to examine the large data files from different angles across all steps. The platform will be made available to the public via the Ministry of Transport's data cloud.

Go here to read the rest:
Cleaner air on motorways thanks to matrix signs with artificial intelligence - Innovation Origins

The North America artificial intelligence in healthcare diagnosis market is projected to reach from US$ 1,716.42 million in 2019 to US$ 32,009.61…

New York, Sept. 30, 2020 (GLOBE NEWSWIRE) -- Reportlinker.com announces the release of the report "North America Artificial Intelligence in Healthcare Diagnosis Market Forecast to 2027 - COVID-19 Impact and Regional Analysis by Diagnostic Tool ; Application ; End User ; Service ; and Country" - https://www.reportlinker.com/p05974389/?utm_source=GNW

The healthcare industry has always been a leader in innovation. The constant mutation of diseases and viruses makes it difficult to stay ahead of the curve.

However, with the help of artificial intelligence and machine learning algorithms, it continues to advance, creating new treatments and helping people live longer and healthier lives. A study published by The Lancet Digital Health compared the performance of deep learning, a form of artificial intelligence (AI), in detecting diseases from medical imaging versus that of healthcare professionals, using a sample of studies carried out between 2012 and 2019.

The study found that, in the past few years, AI has become more precise in identifying diagnoses in these images and has become a more feasible source of diagnostic information. With advancements in AI, deep learning may become even more efficient in identifying diagnoses in the coming years.

Moreover, it can help doctors with diagnoses and notify them when patients are weakening, so that medical intervention can occur sooner, before the patient needs hospitalization. It can save costs for both hospitals and patients. Additionally, the precision of machine learning can detect diseases such as cancer quickly, thus saving lives.

In 2019, the medical imaging tool segment accounted for a larger share of the North America artificial intelligence in healthcare diagnosis market; its growth is attributed to the increasing adoption of AI technology for the diagnosis of chronic conditions. Also in 2019, the radiology segment held a considerable share of the market by application, and it is predicted to dominate the market by 2027 owing to rising demand for AI-based applications in radiology.

A few major primary and secondary sources for the artificial intelligence in healthcare diagnosis market included the US Food and Drug Administration and the World Health Organization.

Read the full report: https://www.reportlinker.com/p05974389/?utm_source=GNW

About Reportlinker

ReportLinker is an award-winning market research solution. Reportlinker finds and organizes the latest industry data so you get all the market research you need - instantly, in one place.


Go here to read the rest:
The North America artificial intelligence in healthcare diagnosis market is projected to reach from US$ 1,716.42 million in 2019 to US$ 32,009.61...