Archive for the ‘Machine Learning’ Category

An introduction to generative AI with Swami Sivasubramanian – All Things Distributed

In the last few months, we've seen an explosion of interest in generative AI and the underlying technologies that make it possible. It has pervaded the collective consciousness for many, spurring discussions from board rooms to parent-teacher meetings. Consumers are using it, and businesses are trying to figure out how to harness its potential. But it didn't come out of nowhere: machine learning research goes back decades. In fact, machine learning is something that we've done well at Amazon for a very long time. It's used for personalization on the Amazon retail site, it's used to control robotics in our fulfillment centers, it's used by Alexa to improve intent recognition and speech synthesis. Machine learning is in Amazon's DNA.

To get to where we are, it's taken a few key advances. First was the cloud. This is the keystone that provided the massive amounts of compute and data that are necessary for deep learning. Next were neural nets that could understand and learn from patterns. This unlocked complex algorithms, like the ones used for image recognition. Finally, the introduction of transformers. Unlike RNNs, which process inputs sequentially, transformers can process multiple sequences in parallel, which drastically speeds up training times and allows for the creation of larger, more accurate models that can understand human knowledge and do things like write poems, even debug code.
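
To make the parallelism point concrete, here is a minimal NumPy sketch of scaled dot-product self-attention, the core operation inside a transformer. It is an illustrative toy (the weight matrices and sizes are invented for the example), but it shows how every token in a sequence is processed in one matrix operation rather than step by step as in an RNN:

```python
import numpy as np

def self_attention(X, Wq, Wk, Wv):
    # X has shape (seq_len, d_model): all tokens enter at once,
    # unlike an RNN, which must consume them one step at a time.
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    scores = Q @ K.T / np.sqrt(K.shape[-1])          # token-to-token affinities
    scores -= scores.max(axis=-1, keepdims=True)     # numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)   # softmax over each row
    return weights @ V                               # weighted mix of values

# Toy usage: a 4-token sequence with model width 8 (sizes are arbitrary).
rng = np.random.default_rng(0)
X = rng.normal(size=(4, 8))
Wq, Wk, Wv = (rng.normal(size=(8, 8)) for _ in range(3))
out = self_attention(X, Wq, Wk, Wv)   # shape (4, 8)
```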

I recently sat down with an old friend of mine, Swami Sivasubramanian, who leads database, analytics and machine learning services at AWS. He played a major role in building the original Dynamo and later bringing that NoSQL technology to the world through Amazon DynamoDB. During our conversation I learned a lot about the broad landscape of generative AI, what we're doing at Amazon to make large language and foundation models more accessible, and last, but not least, how custom silicon can help to bring down costs, speed up training, and increase energy efficiency.

We are still in the early days, but as Swami says, large language and foundation models are going to become a core part of every application in the coming years. I'm excited to see how builders use this technology to innovate and solve hard problems.

To think, it was more than 17 years ago, on his first day, that I gave Swami two simple tasks: 1/ help build a database that meets the scale and needs of Amazon; 2/ re-examine the data strategy for the company. He says it was an ambitious first meeting. But I think he's done a wonderful job.

If you'd like to read more about what Swami's teams have built, you can read more here. The entire transcript of our conversation is available below. Now, as always, go build!

This transcript has been lightly edited for flow and readability.

***

Werner Vogels: Swami, we go back a long time. Do you remember your first day at Amazon?

Swami Sivasubramanian: I still remember… it wasn't very common for PhD students to join Amazon at that time, because we were known as a retailer or an ecommerce site.

WV: We were building things and that's quite a departure for an academic. Definitely for a PhD student. To go from thinking, to actually, how do I build?

So you brought DynamoDB to the world, and quite a few other databases since then. But now, under your purview there's also AI and machine learning. So tell me, what does your world of AI look like?

SS: After building a bunch of these databases and analytic services, I got fascinated by AI because literally, AI and machine learning puts data to work.

If you look at machine learning technology itself, broadly, it's not necessarily new. In fact, some of the first papers on deep learning were written like 30 years ago. But even in those papers, they explicitly called out that for it to get large-scale adoption, it required a massive amount of compute and a massive amount of data to actually succeed. And that's what the cloud got us to: it actually unlocked the power of deep learning technologies. Which led me, like 6 or 7 years ago, to start the machine learning organization, because we wanted to take machine learning, especially deep learning style technologies, from the hands of scientists to everyday developers.

WV: If you think about the early days of Amazon (the retailer), with similarities and recommendations and things like that, were they the same algorithms that we're seeing used today? That's a long time ago, almost 20 years.

SS: Machine learning has really gone through huge growth in the complexity of the algorithms and the applicability of use cases. Early on the algorithms were a lot simpler, like linear algorithms or gradient boosting.

The last decade, it was all around deep learning, which was essentially a step up in the ability for neural nets to actually understand and learn from patterns, which is effectively where all the image-based or image processing algorithms come from. And then also personalization with different kinds of neural nets and so forth. And that's what led to the invention of Alexa, which has a remarkable accuracy compared to others. The neural nets and deep learning have really been a step up. And the next big step up is what is happening today in machine learning.

WV: So a lot of the talk these days is around generative AI, large language models, foundation models. Tell me, why is that different from, let's say, the more task-based, like vision algorithms and things like that?

SS: If you take a step back and look at all these foundation models, large language models… these are big models, which are trained with hundreds of millions of parameters, if not billions. A parameter, just to give context, is like an internal variable that the ML algorithm learns from its data set. Now to give a sense… what is this big thing that has suddenly happened?
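
To ground the notion of a parameter, here is a deliberately tiny sketch (purely illustrative, not anything specific to foundation models): even a one-weight model has a parameter, an internal variable the training loop adjusts to fit the data. Foundation models simply have billions of such variables.

```python
import numpy as np

# A one-parameter "model": y = w * x. Training nudges the internal
# variable w to fit the data; large language models do the same thing
# with billions of such variables at once.
rng = np.random.default_rng(1)
x = rng.normal(size=100)
y = 3.0 * x + rng.normal(scale=0.1, size=100)  # ground truth: w = 3

w = 0.0                      # the parameter, before learning
for _ in range(200):         # plain gradient descent on squared error
    grad = np.mean(2 * (w * x - y) * x)
    w -= 0.1 * grad
print(w)                     # converges to roughly 3.0
```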

A few things. One, transformers have been a big change. A transformer is a kind of neural net technology that is remarkably more scalable than previous versions like RNNs or various others. So what does this mean? Why did this suddenly lead to all this transformation? Because it is actually scalable and you can train them a lot faster, and now you can throw a lot of hardware and a lot of data [at them]. Now that means, I can actually crawl the entire world wide web and actually feed it into these kinds of algorithms and start building models that can actually understand human knowledge.

WV: So the task-based models that we had before, and that we were already really good at: could you build them based on these foundation models? Task-specific models, do we still need them?

SS: The way to think about it is that the need for task-specific models is not going away. But what's essentially changing is how we go about building them. You still need a model to translate from one language to another or to generate code and so forth. But how easily you can now build them is essentially a big change, because with foundation models, which are trained on the entire corpus of knowledge… that's a huge amount of data. Now, it is simply a matter of actually building on top of this and fine-tuning with specific examples.

Think about it: if you're running a recruiting firm, as an example, and you want to ingest all your resumes and store them in a format that is standard for you to search and index on, instead of building a custom NLP model to do all that, you can now use foundation models with a few examples of "here is an input resume in this format, and here is the output resume." You can even fine-tune these models by just giving a few specific examples, and then you are essentially good to go.
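
As a concrete illustration of that workflow, here is a minimal sketch of how the handful of fine-tuning examples Swami describes might be prepared. The file name, field names, and prompt/completion layout are assumptions for the example (different fine-tuning services expect different formats), but the idea of a few input/output resume pairs is exactly what he describes:

```python
import json

# Hypothetical example pairs: raw resume text in, standardized fields out.
examples = [
    {
        "input": "Jane Doe, 5 yrs Python, AWS; MSc CS, 2016.",
        "output": {"name": "Jane Doe",
                   "skills": ["Python", "AWS"],
                   "education": "MSc Computer Science (2016)"},
    },
    # ...a handful more pairs; a few examples are often enough to fine-tune
]

# Serialize into a prompt/completion file, a format many fine-tuning
# services accept (exact field names vary by provider).
with open("resumes.jsonl", "w") as f:
    for ex in examples:
        f.write(json.dumps({
            "prompt": "Standardize this resume:\n" + ex["input"],
            "completion": json.dumps(ex["output"]),
        }) + "\n")
```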

WV: So in the past, most of the work probably went into labeling the data. And that was also the hardest part, because that drives the accuracy.

SS: Exactly.

WV: So in this particular case, with these foundation models, labeling is no longer needed?

SS: Essentially. I mean, yes and no. As always with these things there is a nuance. But a majority of what makes these large-scale models remarkable is that they actually can be trained on a lot of unlabeled data. You actually go through what I call a pre-training phase, which is essentially you collect data sets from, let's say, the world wide web, like common crawl data or code data and various other data sets, Wikipedia, whatnot. And then you don't even label them, you kind of feed them as they are. But you have to, of course, go through a sanitization step in terms of making sure you cleanse data of PII, and of other stuff like negative things or hate speech and whatnot. Then you actually start training on a large number of hardware clusters. Because these models, to train them, can take tens of millions of dollars to actually go through that training. Finally, you get a notion of a model, and then you go through the next step of what is called inference.
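
To make that sanitization step concrete, here is a minimal, assumption-laden sketch of the kind of filtering a pre-training pipeline might apply. The regexes, blocklist, and document format are all invented for the example; production pipelines use far more sophisticated PII detection and toxicity classifiers:

```python
import re

# Invented patterns and blocklist, purely for illustration.
PII_PATTERNS = [
    re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),        # email addresses
    re.compile(r"\b\d{3}[-.\s]?\d{3}[-.\s]?\d{4}\b"),  # US-style phone numbers
]
BLOCKLIST = {"<blocked-term-1>", "<blocked-term-2>"}   # stand-in hate-speech list

def sanitize(doc):
    """Drop documents containing blocked terms; mask PII in the rest."""
    if any(term in doc.lower() for term in BLOCKLIST):
        return None
    for pattern in PII_PATTERNS:
        doc = pattern.sub("[REDACTED]", doc)
    return doc

corpus = ["Contact me at jane@example.com about the crawl.", "A clean document."]
cleaned = [d for d in (sanitize(doc) for doc in corpus) if d is not None]
```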

WV: Let's take object detection in video. That would be a smaller model than what we see now with the foundation models. What's the cost of running a model like that? Because now, these models with hundreds of billions of parameters are very large.

SS: Yeah, that's a great question, because there is so much talk already happening around training these models, but very little talk on the cost of running these models to make predictions, which is inference. It's a signal that very few people are actually deploying them at runtime for actual production. But once they actually deploy in production, they will realize, oh no, these models are very, very expensive to run. And that is where a few important techniques actually really come into play. So, once you build these large models, to run them in production you need to do a few things to make them affordable to run at scale, and run in an economical fashion. I'll hit some of them. One is what we call quantization. The other one is what I call distillation, which is that you have these large teacher models, and even though they are trained on hundreds of billions of parameters, they are distilled to a smaller fine-grained model. I'm speaking in super abstract terms, but that is the essence of these models.
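
For a flavor of what these two techniques look like in practice, here is a minimal PyTorch sketch (the temperature, loss weighting, layer sizes, and dtype are illustrative choices, not anything specific to AWS's models). The distillation loss trains a small student to match a large teacher's softened output distribution in addition to the true labels; dynamic quantization then shrinks a trained model's weights to int8:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Distillation: train a small student to mimic a large teacher.
def distillation_loss(student_logits, teacher_logits, labels, T=2.0, alpha=0.5):
    soft = F.kl_div(
        F.log_softmax(student_logits / T, dim=-1),   # student's softened outputs
        F.softmax(teacher_logits / T, dim=-1),       # teacher's softened targets
        reduction="batchmean",
    ) * (T * T)
    hard = F.cross_entropy(student_logits, labels)   # ordinary supervised loss
    return alpha * soft + (1 - alpha) * hard

# Quantization: shrink an already-trained model's linear weights to int8.
model = nn.Sequential(nn.Linear(128, 64), nn.ReLU(), nn.Linear(64, 10))
quantized = torch.quantization.quantize_dynamic(model, {nn.Linear}, dtype=torch.qint8)
```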

WV: So we do build… we do have custom hardware to help out with this. Normally this is all GPU-based, and GPUs are expensive, energy-hungry beasts. Tell us what we can do with custom silicon that sort of makes it so much cheaper, both in terms of cost as well as, let's say, your carbon footprint.

SS: When it comes to custom silicon, as mentioned, the cost is becoming a big issue in these foundation models, because they are very, very expensive to train and very expensive, also, to run at scale. You can actually build a playground and test your chatbot at low scale and it may not be that big a deal. But once you start deploying at scale as part of your core business operation, these things add up.

At AWS, we did invest in our custom silicon: in Trainium for training and in Inferentia for inference. And all these things are ways for us to actually understand the essence of which operators are involved in making these prediction decisions, and to optimize them at the core silicon level and software stack level.

WV: If cost is also a reflection of energy used, because in essence that's what you're paying for, you can also see that they are, from a sustainability point of view, much better than running on general-purpose GPUs.

WV: So there's a lot of public interest in this recently. And it feels like hype. Is this something where we can see that this is a real foundation for future application development?

SS: First of all, we are living in very exciting times with machine learning. I have probably said this now every year, but this year it is even more special, because these large language models and foundation models truly can enable so many use cases where people don't have to staff separate teams to go build task-specific models. The speed of ML model development will really actually increase. But you won't get to that end state that you want in the next coming years unless we actually make these models more accessible to everybody. This is what we did with SageMaker early on with machine learning, and that's what we need to do with Bedrock and all its applications as well.

But we do think that, while the hype cycle will subside like with any technology, these are going to become a core part of every application in the coming years. And they will be done in a grounded way, but in a responsible fashion too, because there is a lot more stuff that people need to think through in a generative AI context. What kind of data did it learn from? What response does it generate? How truthful is it as well? This is the stuff we are excited to actually help our customers [with].

WV: So when you say that this is the most exciting time in machine learning, what are you going to say next year?

More:
An introduction to generative AI with Swami Sivasubramanian - All Things Distributed

Having one of these in-demand tech skills can help boost your pay by nearly $40,000: here's how – CNBC

The only thing standing between you and a pay bump of almost $40,000 could be a certificate in machine learning.

U.S. workers with advanced tech skills earn about 49% more than workers who don't use tech skills in their jobs, according to newly released research from Gallup and Amazon Web Services (AWS), which surveyed more than 3,000 U.S. workers and 1,170 U.S. employers in August 2022. This translates into average individual gains of $36,552 per year.

As the development and adoption of new technologies continue at a breakneck pace, the need for digitally savvy workers is "greater than ever," the report notes.

Newer technologies including cryptocurrency, the metaverse and artificial intelligence are becoming skills requirements for jobs in several industries, including finance, manufacturing and health care, with nearly two-thirds of employers saying it's "highly likely" these inventions will become a core part of their business in the near future.

Those who consider digital upskilling stand to reap major benefits from this trend: At least four in 10 U.S. workers say learning new digital skills helped them boost their pay (43%), work more efficiently (42%), or get promoted (40%).

Here are the 10 tech skills employers say are "extremely likely" to become standard parts of doing business, and the most in-demand skills they are hiring for, according to AWS and Gallup:

At the top of the list is 5G, or the fifth generation of wireless technology, which cellphone companies began using in 2019. 5G technology can be used to make data transmission more efficient across industries: In health care, for example, large files can be transmitted more quickly between doctors and hospitals.

Generative AI tools, in particular, have become more popular in the workplace since the launch of ChatGPT in late 2022, says Jay Shankar, vice president of global talent acquisition at Amazon Web Services.

"It's a super important skillset employers are looking for, across all industries," she adds. "AI is practically everywhere now and to me, if there's one technical skill you want to learn, that's the area to focus on."

Many of the jobs hiring for these technical skills, such as machine learning engineer and full stack developer, offer competitive salaries of $100,000 per year or higher.

The rise of generative AI tools has elicited increased demand for prompt engineers, who test prompts and build user guides to improve chatbots' responses, Business Insider reports. Some of these jobs, which don't require an engineering or coding background, can pay as much as $335,000.

If you're looking to enhance your generative AI skills, there are several certification and training courses online, from the University of Michigan, Coursera and other e-learning platforms. For other technical skills, including machine learning and data analytics, AWS offers free online courses.

While some experts have warned that certain technologies, like AI and robotics, could replace millions of jobs in the next 10 years, Shankar says such innovations should be used to help workers be better at their jobs, not take them over completely. "It's enabling us to accomplish things faster, and evolve many roles," she adds. "But I don't think AI, for example, will ever fully replace humans."


See the original post here:
Having one of these in-demand tech skills can help boost your pay by nearly $40,000: here's how - CNBC

MVTec further expands HALCON functionality with new deep … – Robotics Tomorrow

New version 23.05 extends HALCON's comprehensive software library
New Deep Counting feature for counting large quantities
Release on May 23, 2023

Munich, April 13, 2023 - MVTec Software GmbH (www.mvtec.com), a leading international manufacturer of software for machine vision, will launch version 23.05 of the standard machine vision software HALCON on May 23, 2023. The focus of the new release is deep learning methods. The main feature is Deep Counting, a deep-learning-based method that can robustly count large quantities of objects. In addition, improvements for the training of the deep learning technologies 3D Gripping Point Detection and Deep OCR have been integrated into the new HALCON version. With HALCON 23.05, it is now possible to further optimize the underlying deep learning networks, which are already pre-trained on industry-related images, for the user's own application. This allows even more robust recognition rates for Deep OCR applications, as well as even more reliable detection of suitable gripping surfaces for applications using the 3D Gripping Point Detection technology. In addition, there are many other helpful improvements, such as the fact that external code can now be integrated into HALCON more easily.

Training for Deep OCR
Deep OCR reads texts in a very robust way, regardless of their orientation and font. For this purpose, the technology first detects the relevant text within the image and then reads it. With HALCON 23.05, it's now also possible to fine-tune the text detection by retraining the pretrained network with application-specific images. This provides even more robust results and opens new application possibilities, for example: the detection of text with arbitrary printing types or previously unseen character types, as well as improved readability in noisy, low-contrast environments.

Training for 3D Gripping Point Detection
3D Gripping Point Detection can be used to robustly detect surfaces on any object that are suitable for gripping with suction. In HALCON 23.05 there is now the possibility to retrain the pretrained model with the user's own application-specific image data. The grippable surfaces are thus recognized even more robustly. The necessary labeling is done easily and efficiently via the MVTec Deep Learning Tool.

Easy Extensions Interface
With the help of HALCON extension packages, the integration of external programming languages is possible. The advantage for customers: functionalities that go beyond pure image processing can thus be covered by HALCON. In HALCON 23.05, the integration of external code has become much easier with the Easy Extensions Interface. This allows users to make their own functions written in .NET code usable in HDevelop and HDevEngine in just a few steps, while benefiting from the wide range of functionalities offered by the .NET framework. Even the data types and HALCON operators known from the HALCON/.NET language interface can be used. This increases both the flexibility and the application possibilities of HALCON.

About MVTec Software GmbH
MVTec is a leading manufacturer of standard software for machine vision. MVTec products are used in all demanding areas of imaging: semiconductor industry, surface inspection, automatic optical inspection systems, quality control, metrology, as well as medicine and surveillance. By providing modern technologies such as 3D vision, deep learning, and embedded vision, software by MVTec also enables new automation solutions for the Industrial Internet of Things aka Industry 4.0. With locations in Germany, the USA, and China, as well as an established network of international distributors, MVTec is represented in more than 35 countries worldwide. http://www.mvtec.com

About MVTec HALCON
MVTec HALCON is the comprehensive standard software for machine vision with an integrated development environment (HDevelop) that is used worldwide. It enables cost savings and improved time to market. HALCON's flexible architecture facilitates rapid development of any kind of machine vision application. MVTec HALCON provides outstanding performance and comprehensive support of multi-core platforms, special instruction sets like AVX2 and NEON, as well as GPU acceleration. It serves all industries, with a library used in hundreds of thousands of installations in all areas of imaging like blob analysis, morphology, matching, measuring, and identification. The software provides the latest state-of-the-art machine vision technologies, such as comprehensive 3D vision and deep learning algorithms. The software secures your investment by supporting a wide range of operating systems and providing interfaces to hundreds of industrial cameras and frame grabbers, in particular by supporting standards like GenICam, GigE Vision, and USB3 Vision. By default, MVTec HALCON runs on Arm-based embedded vision platforms. It can also be ported to various target platforms. Thus, the software is ideally suited for use within embedded and customized systems. http://www.halcon.com, http://www.embedded-vision-software.com

Originally posted here:
MVTec further expands HALCON functionality with new deep ... - Robotics Tomorrow

Multimodal Deep Learning – A Fusion of Multiple Modalities – NASSCOM Community

Multimodal Deep Learning and its Applications

As humans, we perceive the world through our senses. We identify objects through vision, sound, touch, and smell. Our way of processing this sensory information is multimodal. Modality refers to the way something is recognized, experienced, and recorded. Multimodal deep learning is an extensive research branch of deep learning that works on the fusion of multimodal data.

The human brain contains billions of neurons forming networks that process multiple modalities from the external world, whether recognizing a person's body movements, their tone of voice, or even mimicking sounds. For AI to approximate human intelligence, we need a reasonable fusion of multimodal data, and this is done through multimodal deep learning.

Multimodal machine learning is the development of computer algorithms that learn and make predictions using multimodal datasets.

Multimodal deep learning is a subset of machine learning. With this technology, AI models are trained to identify relationships between multiple modalities, such as images, videos, and text, and to provide accurate predictions. By identifying the relevant links between datasets, deep learning models can capture things like a place's environment or a person's emotional state.

Unimodal models, which interpret only a single type of data, have proven effective in computer vision and natural language processing. But unimodal models have limited capabilities; in certain tasks, these models fail to recognize humor, sarcasm, and hate speech. Multimodal learning models, by contrast, can be thought of as a combination of unimodal models.

Multimodal deep learning typically includes modalities like visual, audio, and textual datasets; 3D visual and LiDAR data are less commonly used modalities.

Multimodal Learning models work on the fusion of multiple unimodal neural networks.

First, unimodal neural networks process each modality separately and encode it; later, the encoded representations are extracted and fused. Multimodal data fusion is an important step and is carried out using a variety of fusion techniques. Finally, working on the fused representation, the network recognizes and predicts the outcome for a given input.

For example, in any video there might be two unimodal models: one for the visual data and one for the audio data. Keeping the two datasets synchronized lets both models work simultaneously.

Fusing multimodal datasets improves the accuracy and robustness of Deep learning models, enhancing their performance in real-time scenarios.
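
As a minimal sketch of the pipeline described above, here is an illustrative PyTorch example (the layer sizes, feature dimensions, and class count are invented for the demo): two unimodal encoders produce embeddings, which are concatenated and passed through a fusion head for prediction. This is the simplest "late fusion by concatenation" scheme; other techniques fuse earlier or attend across modalities.

```python
import torch
import torch.nn as nn

class LateFusionClassifier(nn.Module):
    # Two unimodal encoders; their embeddings are concatenated and fused.
    def __init__(self, img_dim=512, txt_dim=300, hidden=256, n_classes=2):
        super().__init__()
        self.img_encoder = nn.Sequential(nn.Linear(img_dim, hidden), nn.ReLU())
        self.txt_encoder = nn.Sequential(nn.Linear(txt_dim, hidden), nn.ReLU())
        self.fusion_head = nn.Linear(2 * hidden, n_classes)

    def forward(self, img_feats, txt_feats):
        # Encode each modality separately, then fuse by concatenation.
        z = torch.cat([self.img_encoder(img_feats),
                       self.txt_encoder(txt_feats)], dim=-1)
        return self.fusion_head(z)

model = LateFusionClassifier()
logits = model(torch.randn(4, 512), torch.randn(4, 300))  # a batch of 4 items
```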

Multimodal deep learning has potential applications in computer vision. Here are some of its applications:

There is enormous research into reducing human effort and developing machines that match human intelligence. This requires multimodal datasets that can be combined using machine learning and deep learning models, paving the way for more advanced AI tools.

The recent surge in the popularity of AI tools has brought more additional investments in Artificial Intelligence and Machine Learning technology. This is a great time to grab job opportunities by learning and upskilling yourself in Artificial Intelligence and Machine Learning.

Continue reading here:
Multimodal Deep Learning - A Fusion of Multiple Modalities - NASSCOM Community

Prediction of prolonged mechanical ventilation in trauma patients of … – Nature.com


Go here to read the rest:
Prediction of prolonged mechanical ventilation in trauma patients of ... - Nature.com