Archive for the ‘Machine Learning’ Category

Deep learning pioneer Geoffrey Hinton receives prestigious Royal Medal from the Royal Society – University of Toronto

The University of Toronto's Geoffrey Hinton has been honoured with the Royal Society's prestigious Royal Medal for his pioneering work in deep learning, a field of artificial intelligence that mimics the way humans acquire certain types of knowledge.

The U.K.'s national academy of sciences said it is recognizing Hinton, a University Professor Emeritus in the department of computer science in the Faculty of Arts & Science, for "pioneering work on algorithms that learn distributed representations in artificial neural networks and their application to speech and vision, leading to a transformation of the international information technology industry."

It's the latest in a long list of accolades for Hinton, who is also chief scientific adviser at the Vector Institute for Artificial Intelligence and a vice-president and engineering fellow at Google. Others include the Association for Computing Machinery's A. M. Turing Award, widely considered the Nobel Prize of computing.

"It is a great honour to receive the Royal Medal, a medal previously awarded to intellectual giants like Darwin, Faraday, Boole and G.I. Taylor," Hinton says.

"But unlike them, my success was the result of recruiting and nurturing an extraordinarily talented set of graduate students and post-docs who were responsible for many of the breakthroughs in deep learning that revolutionized artificial intelligence over the last 15 years."

Royal Medals have been awarded annually since 1826 for advancements in the physical and biological sciences. A third medal, for applied sciences, has been awarded since 1965.

Previous U of T winners of the Royal Medal include Anthony Pawson and Nobel Prize-winner John Polanyi.

Hinton, meanwhile, has been a Fellow of the Royal Society since 1998 and a Fellow of the Royal Society of Canada since 1996.

"The Royal Medal is one of the most significant acknowledgements of an individual's research and career," says Melanie Woodin, dean of the Faculty of Arts & Science. "And Professor Hinton is truly deserving of the distinction for his foundational research and for the exceptional contribution he's made toward shaping the modern world and the future. I am thrilled to congratulate him on this award."

"I want to congratulate Geoff on this spectacular achievement," adds Eyal de Lara, chair of the department of computer science. "We are very proud of the seminal contributions he has made to the field of computer science, which are fundamentally reshaping our discipline and impacting society at large."

Deep learning is a type of machine learning that relies on a neural network modelled on the network of neurons in the human brain. In 1986, Hinton and his collaborators developed a breakthrough approach based on the backpropagation algorithm, a central mechanism by which artificial neural networks learn. It realized the promise of neural networks and forms the current foundation of the technology.

Hinton and his colleagues in Toronto built on that initial work with a number of critical developments that enhanced the potential of AI and helped usher in today's revolution in deep learning, with applications in speech and image recognition, self-driving vehicles, automated diagnosis of images and language, and more.

"I believe that the spectacular recent progress in large language models, image generation and protein structure prediction is evidence that the deep learning revolution has only just started," Hinton says.

See the original post here:
Deep learning pioneer Geoffrey Hinton receives prestigious Royal Medal from the Royal Society - University of Toronto

PhD Position – Machine learning to increase geothermal energy efficiency, Karlsruhe Institute – ThinkGeoEnergy

The Karlsruhe Institute of Technology in Germany has an open PhD position for a project that will use machine learning to model scaling formation in cascade geothermal operations.

The Karlsruhe Institute of Technology (KIT) in Germany currently has an open PhD position in the upcoming Machine Learning for Enhancing Geothermal energy production (MALEG) project. Interested applicants may visit the official KIT page for more details on the application. Submissions will be accepted only until September 30, 2022.

The target of the MALEG project is the design and optimization of cascade production schemes aiming for the highest possible energy output in geothermal energy facilities by preventing scaling. The enhanced scaling potential of lower return temperatures is one key challenge as geothermal cascade use becomes a more common strategy to increase efficiency.

The research will focus on developing a machine learning tool to quantify the impact of the enhanced cooling on the fluid-mineral equilibrium and to optimize operations economically. The tool will be based on results from widely applied deterministic models and on experimental data collected at geothermal plants in Germany, Austria and Turkey by the project's international partners. Once fully implemented, the MALEG tool will work as a digital twin of the power plant, ready to assess and predict scaling formation processes for geothermal production in different geological settings.

The ideal candidate should hold a master's degree in geosciences or geophysics, with a sound interest in aqueous geochemistry and experience in numerical modeling.

Source: Karlsruhe Institute of Technology

Read more:
PhD Position - Machine learning to increase geothermal energy efficiency, Karlsruhe Institute - ThinkGeoEnergy

Prediction of mortality risk of health checkup participants using machine learning-based models: the J-SHC study | Scientific Reports – Nature.com

Participants

This study was conducted as part of the ongoing Study on the Design of a Comprehensive Medical System for Chronic Kidney Disease (CKD) Based on Individual Risk Assessment by Specific Health Examination (J-SHC Study). A specific health checkup is conducted annually for all residents aged 40–74 years covered by the National Health Insurance in Japan. In this study, a baseline survey was conducted in 685,889 people (42.7% male, aged 40–74 years) who participated in specific health checkups from 2008 to 2014 in eight regions (Yamagata, Fukushima, Niigata, Ibaraki, Toyonaka, Fukuoka, Miyazaki, and Okinawa prefectures). The details of this study have been described elsewhere [11]. Of the 685,889 baseline participants, 169,910 were excluded because baseline data on lifestyle information or blood tests were not available. In addition, 399,230 participants with a survival follow-up of fewer than 5 years from the baseline survey were excluded. Therefore, 116,749 participants (42.4% men) with a known 5-year survival or mortality status were included in this study.

This study was conducted in accordance with the Declaration of Helsinki guidelines. This study was approved by the Ethics Committee of Yamagata University (Approval No. 2008103). All data were anonymized before analysis; therefore, the ethics committee of Yamagata University waived the need for informed consent from study participants.

For the validation of a predictive model, the most desirable approach is a prospective study on unknown data. In this study, data on health checkup dates were available, so we divided the data into training and test datasets based on checkup dates to build and test the predictive models. The training dataset consisted of 85,361 participants who took part in the study in 2008; the test dataset consisted of 31,388 participants who took part from 2009 to 2014. The datasets were temporally separated, with no overlapping participants. This evaluates the model in a manner similar to a prospective study and has the advantage of demonstrating temporal generalizability. For preprocessing, the 0.01% most extreme values were clipped, and the data were normalized.
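The temporal-split preprocessing described above can be sketched as follows. This is a minimal illustration with toy numbers, not the study's code; "0.01% outliers" is interpreted here as clipping at the 0.01th and 99.99th percentiles, with clipping bounds and normalization statistics fitted on the training data only so no information leaks from the test set:

```python
import numpy as np

def preprocess(train_values, test_values, clip_pct=0.01):
    """Clip extreme outliers and z-normalize, fitting only on training data."""
    # Clipping bounds come from the training distribution only.
    lo = np.percentile(train_values, clip_pct)
    hi = np.percentile(train_values, 100 - clip_pct)
    train_clipped = np.clip(train_values, lo, hi)
    test_clipped = np.clip(test_values, lo, hi)
    # Normalization statistics also come from the training data only.
    mean, std = train_clipped.mean(), train_clipped.std()
    return (train_clipped - mean) / std, (test_clipped - mean) / std

train = np.array([1.0, 2.0, 3.0, 4.0, 100.0])  # 100.0 is an extreme value
test = np.array([2.5, 50.0])
train_n, test_n = preprocess(train, test)
```

The key design point mirrored here is that the test data are transformed with parameters learned from the training data, matching the prospective-study framing of the evaluation.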

Information on 38 variables was obtained during the baseline survey of the health checkups. When there were highly correlated variables (correlation coefficient greater than 0.75), only one of these variables was included in the analysis. High correlations were found between body weight, abdominal circumference, and body mass index; between hemoglobin A1c (HbA1c) and fasting blood sugar; and between AST and alanine aminotransferase (ALT) levels. We then used body weight, HbA1c level, and AST level as explanatory variables. Finally, we used the following 34 variables to build the prediction models: age, sex, height, weight, systolic blood pressure, diastolic blood pressure, urine glucose, urine protein, urine occult blood, uric acid, triglycerides, high-density lipoprotein cholesterol (HDL-C), LDL-C, AST, γ-glutamyl transpeptidase (γ-GTP), estimated glomerular filtration rate (eGFR), HbA1c, smoking, alcohol consumption, medication (for hypertension, diabetes, and dyslipidemia), history of stroke, heart disease, and renal failure, weight gain (more than 10 kg since age 20), exercise (more than 30 min per session, more than 2 days per week), walking (more than 1 h per day), walking speed, eating speed, supper within 2 h of bedtime, skipping breakfast, late-night snacks, and sleep status.
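The correlation-based variable screening described above can be sketched with pandas. The data here are synthetic, and the greedy "drop the later column of each correlated pair" rule is an assumption for illustration; the paper does not state how it chose which variable of a pair to keep:

```python
import numpy as np
import pandas as pd

def drop_correlated(df, threshold=0.75):
    """Drop one variable from each pair whose |r| exceeds the threshold."""
    corr = df.corr().abs()
    # Keep only the upper triangle so each pair is considered once.
    upper = corr.where(np.triu(np.ones(corr.shape, dtype=bool), k=1))
    to_drop = [col for col in upper.columns if (upper[col] > threshold).any()]
    return df.drop(columns=to_drop), to_drop

rng = np.random.default_rng(0)
weight = rng.normal(70, 10, 200)
df = pd.DataFrame({
    "weight": weight,
    "bmi": weight / 2.9 + rng.normal(0, 0.5, 200),  # strongly tied to weight
    "age": rng.uniform(40, 74, 200),                # independent variable
})
reduced, dropped = drop_correlated(df)
```

Here `bmi` is removed because its correlation with `weight` exceeds 0.75, while the independent `age` column survives.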

The values of each item in the training dataset were compared between the alive and dead groups using the chi-square test, Student's t-test, and Mann–Whitney U test; significant differences (P < 0.05) were marked with an asterisk (*) (Supplementary Tables S1 and S2).

We used two machine learning-based methods (a gradient-boosted decision tree [XGBoost] and a neural network) and one conventional method (logistic regression) to build the prediction models. All models were built using Python 3.7. We used the XGBoost library for the GBDT, TensorFlow for the neural network, and Scikit-learn for logistic regression.
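A minimal sketch of the three model families on synthetic data follows. Note that the study used the XGBoost library and TensorFlow; this illustration substitutes Scikit-learn's `GradientBoostingClassifier` and `MLPClassifier` as stand-ins so the example is self-contained, and none of the hyperparameters reflect the study's tuned settings:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.neural_network import MLPClassifier

# Synthetic stand-in for the checkup data: rows are participants,
# columns are explanatory variables, y is 5-year mortality status.
X, y = make_classification(n_samples=300, n_features=10, random_state=0)

models = {
    "gbdt": GradientBoostingClassifier(random_state=0),
    "neural_net": MLPClassifier(hidden_layer_sizes=(32,), max_iter=500,
                                random_state=0),
    "logistic": LogisticRegression(max_iter=1000),
}
for name, model in models.items():
    model.fit(X, y)  # each model learns the same binary outcome
```

Each fitted model exposes `predict_proba`, which is what the ROC/AUC evaluation described later operates on.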

The data obtained in this study contained missing values. By its nature, XGBoost can be trained and make predictions in the presence of missing values; neural networks and logistic regression cannot. We therefore imputed missing values using the k-nearest neighbour method (k = 5), and the test data were imputed using an imputer fitted only on the training data.

The parameters required for each model were tuned on the training data using the RandomizedSearchCV class of the Scikit-learn library, repeating fivefold cross-validation 5,000 times.
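A sketch of that tuning step with `RandomizedSearchCV` follows. The model, the parameter distribution, and the much smaller `n_iter` are illustrative placeholders, not the study's actual search space or budget:

```python
from scipy.stats import loguniform
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import RandomizedSearchCV

X, y = make_classification(n_samples=200, n_features=10, random_state=0)

search = RandomizedSearchCV(
    LogisticRegression(max_iter=1000),
    param_distributions={"C": loguniform(1e-3, 1e2)},  # placeholder space
    n_iter=20,           # the study sampled 5,000 settings
    cv=5,                # fivefold cross-validation, as in the study
    scoring="roc_auc",
    random_state=0,
)
search.fit(X, y)         # search runs only on the training data
best = search.best_params_
```

Running the search on the training data alone keeps the temporally held-out test set untouched until final evaluation.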

The performance of each prediction model was evaluated by predicting on the test dataset, drawing a receiver operating characteristic (ROC) curve, and computing the area under the curve (AUC). In addition, the accuracy, precision, recall, F1 score (the harmonic mean of precision and recall), and confusion matrix were calculated for each model. To assess the importance of the explanatory variables, we used SHAP to obtain values that express the influence of each explanatory variable on the output of the model [4,12]. The workflow diagram of this study is shown in Fig. 5.
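The evaluation step can be sketched on synthetic data as follows. SHAP values, which the study computed with the SHAP framework, are omitted here to keep the sketch dependency-free; the split sizes and model are placeholders:

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import (accuracy_score, confusion_matrix, f1_score,
                             precision_score, recall_score, roc_auc_score)

X, y = make_classification(n_samples=400, n_features=10, random_state=0)
X_train, y_train = X[:300], y[:300]   # stands in for the 2008 cohort
X_test, y_test = X[300:], y[300:]     # stands in for the 2009-2014 cohort

model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
proba = model.predict_proba(X_test)[:, 1]   # predicted risk of death
pred = (proba >= 0.5).astype(int)

auc = roc_auc_score(y_test, proba)          # area under the ROC curve
metrics = {
    "accuracy": accuracy_score(y_test, pred),
    "precision": precision_score(y_test, pred),
    "recall": recall_score(y_test, pred),
    "f1": f1_score(y_test, pred),           # harmonic mean of prec/recall
}
cm = confusion_matrix(y_test, pred)         # 2x2 confusion matrix
```

AUC is computed from the predicted probabilities, while the thresholded predictions feed the accuracy, precision, recall, F1, and confusion-matrix calculations.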

Workflow diagram of development and performance evaluation of predictive models.

Read more here:
Prediction of mortality risk of health checkup participants using machine learning-based models: the J-SHC study | Scientific Reports - Nature.com

Industrial Automation Market to Generate Revenue of $289 Billion by 2028 | Growing Adoption of AI and Machine Learning to Play Key Role -…

Westford, USA, Aug. 24, 2022 (GLOBE NEWSWIRE) -- As the world becomes increasingly automated, businesses are turning to industrial automation solutions to help increase efficiency and productivity. By automating routine tasks and processes, businesses can free up workforce time to focus on more important duties. This growth of the industrial automation market is mainly due to the increasing demand for safe, reliable, and efficient manufacturing systems. These systems help manufacturers achieve increased output and reduce costs. As per SkyQuest's findings, businesses save from 15 to 60% on worker costs, making it one of the most cost-effective investments a business can make.

In addition to reducing labor costs, industrial automation can also reduce environmental impact. For example, if a factory is using manual tasks to produce products, the production process often involves a lot of waste created from the work processes. With industrial automation, these tasks can be automated, leading to a decrease in waste and a reduction in environmental impact.

Get sample copy of this report:

https://skyquestt.com/sample-request/industrial-automation-market

There are a number of different industrial automation technologies available, so businesses can find the right solution for their specific needs. Some of the most common types of industrial automation include: robots, machine learning algorithms, computer-aided manufacturing (CAM), and wireless technology.

As per SkyQuest analysis, some of the leading companies in the industrial automation market are ABB Ltd., Siemens AG, Fanuc Corporation, Mitsubishi Electric Corporation, Kawasaki Heavy Industries Ltd., and Rexnord Corp. These companies offer a range of products and services that include controllers, drives, processing units, sensors, and software. They prefer to partner with larger manufacturers who can leverage their resources to develop and deploy advanced technology solutions across their entire manufacturing operations.

Increasing adoption of advanced manufacturing technologies, such as 3D printing, and recent shift in production to Asia are driving the growth of this industry. Additionally, surging demand for smart machines that can automatically optimize processes and reduce variability is boosting the growth of industrial automation market.

SkyQuest has published a report on the global industrial automation market. The report provides a detailed understanding of market trends, consumer analysis, the demand-supply gap, pricing, the top players and their market share, the competitive landscape, value chain analysis, and market dynamics. It will help market participants identify lucrative growth opportunities, target potential consumers, devise growth strategies, and understand what competitors are doing and where opportunities lie to capitalize on the weaknesses of others.

Browse summary of the report and Complete Table of Contents (ToC):

https://skyquestt.com/report/industrial-automation-market

Industrial Automation to Account for a Whopping 37% of the Global Workforce, Says Analyst at SkyQuest

As industrial automation technologies continue to develop, so too does the industry's adoption of them. A recent study found that automation is growing even more rapidly than anticipated; by 2025, industrial automation is predicted to account for a whopping 37% of the workforce.

According to a recent survey conducted by SkyQuest on the industrial automation market, chief executives of companies that have invested in industrial automation say the technology has been key to their success. In fact, almost two-thirds of respondents reported that industrial automation has helped them boost production and improve efficiency, and nearly 50% say the technology has increased their competitiveness and allowed them to attract new customers.

AI and Blockchain Technology are Trending in Industrial Automation Market

One of the most pressing issues facing industrial automation today is reliability. With a growing number of devices and systems interacting with one another, it's critical that these systems work as intended and without issue. Here are some of the top trends happening in industrial automation today:

Smart Manufacturing is Gaining Ground in the Industrial Automation Market

Manufacturing is not just a physical process; it's also a digital process. The rise of smart manufacturing technologies means that factories can now control and monitor their processes in real time, which enables more efficient production and improved safety. Today, the automotive, electronics and FMCG sectors contribute around 65% of the revenue of the global industrial automation market. The automotive sector is witnessing steady growth owing to soaring demand for safety features, enhanced functionality, efficient fuel economy and rising adoption of intelligent mobility solutions. In addition, the automotive industry is seeing growing popularity of hybrid and electric vehicles, which is adding to the sector's growth momentum.

The most common types of smart manufacturing technologies are robotics, sensors, and machine learning algorithms. Robotics help factories automate tasks and functions so that they can be performed faster and with greater accuracy. Sensors enable factories to monitor conditions inside and outside the factory, and they can transmit this information to processors for analysis.

As per SkyQuest analysis, machine learning is being used to improve a variety of processes and operations. For example, it can be used to optimize production lines, predict the needs of customers and even determine when products need to be replaced. Additionally, it can be used to improve predictive maintenance and forecasting. Furthermore, it can also be used to develop autonomous systems.

Today, manufacturers across the global industrial automation market are opting for smart manufacturing due to the following key factors:

Speak to Analyst for your custom requirements:

https://skyquestt.com/speak-with-analyst/industrial-automation-market

Top Players in Global Industrial Automation Market

Related Reports in SkyQuest's Library:

Global Metaverse Infrastructure Market

Global Micro Mobile Data Center Market

Global Machine Learning Market

Global Location Based Services Market

Global Virtual Events Market

About Us:

SkyQuest Technology is a leading growth consulting firm providing market intelligence, commercialization and technology services. It has 450+ happy clients globally.

Address:

1 Apache Way, Westford, Massachusetts 01886

Phone:

USA (+1) 617-230-0741

Email: sales@skyquestt.com


Go here to see the original:
Industrial Automation Market to Generate Revenue of $289 Billion by 2028 | Growing Adoption of AI and Machine Learning to Play Key Role -...

Tackling the reproducibility and driving machine learning with digitisation – Scientific Computing World

Dr Birthe Nielsen discusses the role of the Methods Database in supporting life sciences research by digitising methods data across different life science functions.

Reproducibility of experiment findings and data interoperability are two of the major barriers facing life sciences R&D today. Independently verifying findings by re-creating experiments and generating the same results is fundamental to progressing research to the next stage in its lifecycle - be it advancing a drug to clinical development or a product to market. Yet, in the field of biology alone, one study found that 70 per cent of researchers are unable to reproduce the findings of other scientists and 60 per cent are unable to reproduce their own findings.

This causes delays to the R&D process throughout the life sciences ecosystem. For example, biopharmaceutical companies often use external Contract Research Organisations (CROs) to conduct clinical studies. Without a centralised repository to provide consistent access, analytical methods are often shared with CROs via email or even as physical documents, not in a standard format but with inconsistent terminology. This leads to unnecessary variability and several versions of the same analytical protocol, making it very challenging for a CRO to re-establish and revalidate methods without a labour-intensive process that is open to human interpretation, and thus error.

To tackle issues like this, the Pistoia Alliance launched the Methods Hub project. The project aims to overcome the issue of reproducibility by digitising methods data across different life science functions, and ensuring data is FAIR (Findable, Accessible, Interoperable, Reusable) from the point of creation. This will enable seamless and secure sharing within the R&D ecosystem, reduce experiment duplication, standardise formatting to make data machine-readable and increase reproducibility and efficiency. Robust data management is also the building block for machine learning and is the stepping-stone to realising the benefits of AI.

Digitisation of paper-based processes increases the efficiency and quality of methods data management. But it goes beyond manually keying method parameters into a computer or using an Electronic Lab Notebook: a digital and automated workflow increases efficiency, instrument usage and productivity. Applying shared data standards ensures consistency and interoperability, in addition to fast and secure transfer of information between stakeholders.

One area that organisations need to address to comply with FAIR principles, and a key area in which the Methods Hub project helps, is how analytical methods are shared. This includes replacing free-text data capture with a common data model and standardised ontologies. For example, in a High-Performance Liquid Chromatography (HPLC) experiment, rather than manually typing out the analytical parameters (pump flow, injection volume, column temperature, etc.), the scientist will simply download a method that will automatically populate the execution parameters in any given Chromatography Data System (CDS). This not only saves time during data entry, but the common format eliminates room for human interpretation or error.
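As an illustration of the underlying idea — a method captured as structured, machine-readable data rather than free text — consider this toy record. The field names are hypothetical and do not reflect the actual Methods Hub data model or its ontologies; the point is only that a structured record can be serialized, transferred, and read back without loss or manual re-keying:

```python
import json

# Hypothetical field names for illustration only; the real Methods Hub
# uses a community-agreed data model and standardised ontologies.
hplc_method = {
    "technique": "HPLC-UV",
    "pump_flow_ml_per_min": 1.0,
    "injection_volume_ul": 10.0,
    "column_temperature_c": 30.0,
    "detection_wavelength_nm": 254,
}

serialized = json.dumps(hplc_method)   # transfer-ready representation
restored = json.loads(serialized)      # round-trips without re-keying
```

A free-text protocol would need a human to re-type each parameter into the receiving CDS; a structured record like this can populate it directly.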

Additionally, creating a centralised repository like the Methods Hub in a vendor-neutral format is a step towards greater cyber-resiliency in the industry. When information is stored locally on a PC or an ELN and is not backed up, a single cyberattack can wipe it out instantly. Creating shared spaces for these notes via the cloud protects data and ensures it can be easily restored.

A proof of concept (PoC) via the Methods Hub project was recently completed to demonstrate the value of methods digitisation. The PoC involved the digital transfer, via the cloud, of analytical HPLC methods, proving it is possible to move analytical methods securely between two different companies and CDS vendors with ease. It has been successfully tested in labs at Merck and GSK, where HPLC-UV information was effectively transferred between different systems. The PoC delivered a series of critical improvements to methods transfer, eliminating manual keying of data and reducing risk, steps, and errors, while increasing overall flexibility and interoperability.

The Alliance project team is now working to extend the platform's functionality to connect analytical methods with results data, which would be an industry first. The team will also add support for columns and additional hardware, and for other analytical techniques such as mass spectrometry and nuclear magnetic resonance (NMR) spectroscopy. It also plans to identify new use cases and further develop the cloud platform that enables secure methods transfer.

If industry-wide data standards and approaches to data management are to be agreed on and implemented successfully, organisations must collaborate. The Alliance recognises methods data management is a big challenge for the industry, and the aim is to make Methods Hub an integral part of the system infrastructure in every analytical lab.

Tackling issues such as the digitisation of methods data doesn't just benefit individual companies; it will have a knock-on effect for the whole life sciences industry. Introducing shared standards accelerates R&D, improves quality, and reduces the cost and time burden on scientists and organisations. Ultimately this ensures that new therapies and breakthroughs reach patients sooner. We are keen to welcome new contributors to the project, so we can continue discussing common barriers to successful data management and work together to develop new solutions.

Dr Birthe Nielsen is the Pistoia Alliance Methods Database project manager

Read more here:
Tackling the reproducibility and driving machine learning with digitisation - Scientific Computing World