Archive for the ‘Machine Learning’ Category

Stablecoins and Machine Learning – the Future of Investment Trading? – JD Supra

For decades, firms engaged in what is known as high-frequency trading and algorithmic trading have cornered the market on transactions that use a combination of advanced computer algorithms, bespoke hardware and special access to opportunities, generating returns that are often more than 30% above the expected market return, year after year. These tools have historically been locked inside firms that allow access only to investors with a large enough net worth to fund a significant up-front investment. The advent of stablecoins and machine learning (capable of generating custom, AI-driven investment plans), along with the development of crypto derivative trading, is offering the opportunity to open these types of investment classes to a broader market.

The Reed Smith On-Chain team has enjoyed its time interacting with the industry experts at Consensus 2023 in Austin, Texas, and is looking forward to the continued discussions and panels involving industry leaders and innovators.

Huge leaps in artificial intelligence, virtual/augmented reality, quantum computing and other fields of computer science are poised to dwarf all the digital disruption that has preceded this moment.

Read the rest here:
Stablecoins and Machine Learning - the Future of Investment Trading? - JD Supra

Indian job market to see 22% churn in 5 yrs; AI, machine learning among top roles: WEF – The Hindu

The Indian job market is estimated to witness 22% churn over the next five years, with top emerging roles coming from AI, machine learning and data segments, a new study showed on May 1.

Globally, the job market churn is estimated at 23%, with 69 million new jobs expected to be created and 83 million eliminated by 2027, the World Economic Forum said in its latest Future of Jobs report.

"Almost a quarter of jobs (23%) are expected to change in the next five years through growth of 10.2% and decline of 12.3% (globally)," the WEF said.

According to the estimates of the 803 companies surveyed for the report, employers anticipate 69 million new jobs to be created and 83 million eliminated among the 673 million jobs corresponding to the dataset, a net decrease of 14 million jobs, or 2% of current employment.

Regarding India, it said 61% of companies think broader applications of ESG (environment, social and governance) standards will drive job growth, followed by increased adoption of new technologies (59%) and broadening digital access (55%).

Top roles for industry transformation in India would be AI (artificial intelligence) and machine learning specialists, and data analysts and scientists, it added.

The report also found that manufacturing and oil and gas sectors have the highest level of green skill intensity globally, with India, the U.S. and Finland featuring at the top of the list for the oil and gas sector.

Also, more populous economies such as India and China were more positive than the global average about the availability of talent when hiring.

On the other hand, India figured among the seven countries where job growth was slower for social jobs than non-social jobs.

In India, 97% of respondents said that the preferred source of funding for training was 'funded by organisation' as against the global average of 87%.

The WEF said that macro trends, including the green transition, ESG standards and localisation of supply chains, are the leading drivers of job growth globally, with economic challenges, including high inflation, slower economic growth and supply shortages, posing the greatest threat.

Advancing technology adoption and increasing digitisation will cause significant labour market churn, with an overall net positive in job creation, it added.

"For people around the world, the past three years have been filled with upheaval and uncertainty for their lives and livelihoods, with COVID-19, geopolitical and economic shifts, and the rapid advancement of AI and other technologies now risks adding more uncertainty, said Saadia Zahidi, Managing Director, World Economic Forum.

"The good news is that there is a clear way forward to ensure resilience. Governments and businesses must invest in supporting the shift to the jobs of the future through the education, reskilling and social support structures that can ensure individuals are at the heart of the future of work," she added.

The survey covered 803 companies collectively employing more than 11.3 million workers in 27 industry clusters and 45 economies from all world regions.

The WEF said technology continues to pose both challenges and opportunities to labour markets, but employers expect most technologies to contribute positively to job creation.

The fastest-growing roles are being driven by technology and digitalisation. Big data ranks at the top among technologies seen as creating jobs. Employment of data analysts and scientists, big data specialists, AI and machine learning specialists, and cybersecurity professionals is expected to grow on average by 30% by 2027.

At the same time, the fastest-declining roles are also being driven by technology and digitalisation, with clerical and secretarial roles, including bank tellers, cashiers and data entry clerks, expected to decline fastest.

Also, while expectations of the displacement of physical and manual work by machines have decreased, reasoning, communicating and coordinating, all traits with a comparative advantage for humans, are expected to become more automatable in the future.

Artificial intelligence, a key driver of potential algorithmic displacement, is expected to be adopted by nearly 75% of surveyed companies and to lead to high churn, with 50% of organisations expecting it to create job growth and 25% anticipating it to result in job losses.

However, the largest absolute gains in jobs will come from education and agriculture. The report found that jobs in the education industry are expected to grow by about 10%, leading to 3 million additional jobs for vocational education teachers and university and higher education teachers.

Jobs for agricultural professionals, especially agricultural equipment operators, graders and sorters, are expected to see a 15-30% increase, leading to an additional 4 million jobs.

Globally, six in 10 workers will require training before 2027, but only half of the employees are seen to have access to adequate training opportunities today.

At the same time, the report estimates that, on average, 44% of an individual worker's skills will need to be updated.

In response to the cost-of-living crisis, 36% of companies recognise that offering higher wages could help them attract talent. Yet, companies are planning to mix both investment and displacement to make their workforce more productive and cost-effective.

Four in five surveyed companies plan to invest in learning and training on the job as well as automating processes in the next five years.

Read the original:
Indian job market to see 22% churn in 5 yrs; AI, machine learning among top roles: WEF - The Hindu

Very Slow Movie Player Avoids E-Ink Ghosting With Machine Learning – Hackaday

[mat kelcey] was so impressed and inspired by the concept of a very slow movie player (a kind of DIY photo frame that plays a movie at an extremely slow rate) that he created his own with a high-resolution e-ink display. It shows high-definition frames from Alien (1979) at a rate of about one frame every 200 seconds, but a surprising amount of work went into making a color film intended to look good on a movie screen also look good on black & white e-ink.

The usual way to display images on a screen limited to black or white pixels is dithering: manipulating the relative densities of white and black to give the impression of a much richer image than one might otherwise expect. By itself, a dithering algorithm isn't a cure-all, and [mat] does an excellent job of explaining why, complete with loads of visual examples.

One consideration is the e-ink display itself. With these displays, changing the screen contents is where all the work happens, and it can be a visually imperfect process. A very slow movie player aims to present each frame as cleanly as possible in an artful and stylish way, so rewriting the entire screen for every frame would mean uglier transitions, and that just wouldn't do.

So the overall challenge [mat] faced was twofold: how to dither a frame in a way that looked great, but also tried to minimize the number of pixels changed from the previous frame? All of a sudden, he had an interesting problem to solve and chose to solve it in an interesting way: training a GAN to generate the dithers, aiming to balance best image quality with minimal pixel change from the previous frame. The results do a great job of delivering quality visuals even when there are sharp changes in scene contrast to deal with. Curious about the code? Heres the GitHub repository.

Here's the original Very Slow Movie Player that so inspired [mat], and here's a color version that helps make every frame a work of art. And as for dithering? It's been around for ages, but that doesn't mean there aren't new problems to solve in that space. For example, making dithering look good in the game Return of the Obra Dinn required a custom algorithm.

Original post:
Very Slow Movie Player Avoids E-Ink Ghosting With Machine Learning - Hackaday

Early antidepressant treatment response prediction in major … – BMC Psychiatry

Standards and guidelines of machine learning in psychiatry were followed when this study was conducted and reported [20].

This study included 291 inpatients at a tertiary hospital who were diagnosed with major depressive disorder (MDD). Patient eligibility was determined based on the criteria of the Diagnostic and Statistical Manual of Mental Disorders of the American Psychiatric Association, Fourth Edition (DSM-IV). Blood samples were collected before antidepressant treatment.

All patients met the following criteria: Han Chinese, 18 to 65 years old, baseline 17-item Hamilton Depression Rating Scale (HAMD-17) [21] scores > 17 points, and depressive symptoms lasting at least 2 weeks. All patients had just been diagnosed or had recently relapsed, and had not been on medication for at least two weeks prior to enrollment. All diagnoses were made independently by two psychiatrists with professional tenure or higher, and confirmed by a third psychiatrist. Participants had never been diagnosed with any other DSM-IV Axis I disorder (including substance use disorder, schizophrenia, affective disorder, bipolar disorder, generalized anxiety disorder, panic disorder or obsessive-compulsive disorder), nor with a personality disorder or mental retardation. Patients with a history of organic brain syndrome, endocrine or primary organic diseases, or other medical conditions that would hinder psychiatric evaluation were excluded from the study. Other exclusion criteria included blood, heart, liver or kidney disorders; electroconvulsive therapy in the past 6 months; or an episode of mania in the previous 12 months. Pregnant and nursing women were also excluded from participation.

All subjects provided written informed consent. The study was approved by the Zhongda Hospital Ethics Committee (2016ZDSYLL100-P01) and conducted in accordance with the Declaration of Helsinki.

Response was defined as a reduction of at least 50% in HAMD-17 scores from baseline to two weeks [22]. Accordingly, after two weeks of treatment, participants were divided into two groups: responders and non-responders.

Two retrospective self-report questionnaires, the Childhood Trauma Questionnaire (28-item short form, CTQ-SF) and the Life Events Scale (LES), were used to evaluate childhood adversities and recent stress exposures, respectively. Both scales were administered by the same nurse using consistent, scripted language. The LES is a self-assessed questionnaire composed of 48 items reflecting both positive and negative life events experienced within the past year, and is divided into positive life events and negative life events (NLES). The CTQ-SF was dichotomized for use in the gene-environment interaction analyses.

The twelve demographic and clinical features considered were age, gender, years of education, marital status, family history, first occurrence or not, age of onset, number of occurrences, illness duration, and baseline HAMD-17, NLES and CTQ-SF scores (Supplemental Material Table 1).

We previously designed primers encompassing 100 bp upstream and 100 bp downstream of the TPH2 SNPs that showed a significant association with the antidepressant response, as well as a GC sequence content of CpGs > 20% after methylation [11, 12]. Of the 24 TPH2 SNPs in total, only 11 (rs7305115, rs2129575, rs11179002, rs11178998, rs7954758, rs1386494, rs1487278, rs17110563, rs34115267, rs10784941, rs17110489) met the DNA methylation status criteria for the sequences to be detected (Supplemental Material Table 2). Methylation levels of 38 TPH2 CpGs were calculated as the ratio of the number of methylated cytosines to the total number of cytosines.

In the data set comprising 291 observations of 51 variables (12 demographic and clinical features, 38 CpG methylation levels and 1 response variable), 6% of entries were missing (see Fig. 1). Of the CpG methylation levels, 3 CpGs (TPH2-7-99, TPH2-7-142, TPH2-7-170) were excluded because they had more than 45% missing values. Given the randomness of experimental/technological errors and the interrelatedness of the variables, the DNA methylation data were assumed to be missing completely at random (MCAR) or missing at random (MAR), so mean imputation was used to handle the missing values [23, 24]. Missing values in the other features were imputed with the mode for categorical features and the mean for numerical features.

Fig. 1: Missingness pattern in the DNA methylation data set

Normalization (linear transformation) was used to improve the numerical stability of the model and reduce training time [25]. To avoid overfitting while harnessing the maximum amount of data, cross-validation (CV) over the entire sample was used to report prediction performance. The CV was 5-fold, and the averaged prediction metrics, including the area under the receiver operating characteristic curve (AUC), F-measure, G-mean, accuracy, sensitivity, specificity, positive predictive value (PPV) and negative predictive value (NPV), were reported. Hyperparameter tuning was based on AUC, with random search using caret's default tuning settings. A wrapper method (recursive feature elimination with random forest, RFE-RF) [26] with 5-fold CV was employed to select the features that contributed most to the prediction of early antidepressant response in MDD patients. Variable importance was also estimated using random forest. For better replicability, the 5-fold CV procedure was repeated 10 times.

ML methods were implemented via their interfaces in the open-source R package caret in a standardized and reproducible way. Five supervised ML algorithms were used to develop predictive models: logistic regression, classification and regression trees (CART), support vector machine with a radial basis function kernel (SVM-RBF), a boosting method (LogitBoost) and random forest (RF). All analyses were implemented in R statistical software (version 4.0.4). We used the caret package, which wraps the rpart, caTools, e1071 and randomForest packages for CART, LogitBoost, SVM-RBF and RF, respectively.

Read this article:
Early antidepressant treatment response prediction in major ... - BMC Psychiatry

The Wonders of Machine Learning: Tackling Lint Debug Quickly with … – Design and Reuse

Achieving predictable chip design closure has become progressively more challenging for teams worldwide. While linting tools have existed for decades, traditional tools require significant effort to filter out the noise and eventually zero in on the real design issues. With increasing application-specific integrated circuit (ASIC) size and complexity, chip designers require higher debug efficiency when managing a large number of violations in order to ultimately achieve a shorter turnaround time (TAT).

In the first two parts of this linting series, we established how linting offers a comprehensive mechanism to check for fundamental chip design faults and touched on the many benefits of having a guiding methodology for deeper functional lint analysis.

Recognizing the disparities between in-house coding styles, our extensive experience working with industry leaders has given us an edge in accelerating RTL and system-on-chip (SoC) design flows for customers to a degree previously unseen. Solutions such as Synopsys VC SpyGlass CDC have already proven how valuable advanced machine learning (ML) algorithms are for achieving SoC design signoff with scalable performance and high debug productivity. Leveraging industry-standard practices and our decades of expertise, the latest release of Synopsys VC SpyGlass Lint includes powerful ML capabilities that significantly improve debug efficiency for designers.

In the finale of this blog series, we'll cover the downsides of traditional linting tools, how ML-driven root-cause analysis (ML-RCA) accelerates design signoff, the key benefits of Synopsys VC SpyGlass Lint, and where we see the future of smart linting headed.

Read more:
The Wonders of Machine Learning: Tackling Lint Debug Quickly with ... - Design and Reuse