Archive for the ‘Machine Learning’ Category

Software Development Future: AI and Machine Learning – Robotics and Automation News

Discover how AI and ML could change the software development industry, and how AI affects software development and reduces developers' workload.

Software development is a long, complex, and expensive process, and business owners and developers alike constantly seek ways to optimize it. The good news: using artificial intelligence (AI) and machine learning (ML) for this purpose is becoming increasingly popular.

According to a recent survey by Gartner, AI and ML are among the trends that will shape the future of software development. For instance, nearly 73 percent of adopters of GitHub Copilot, an AI-driven assistant for engineers, reported that it helped them stay in the flow.

The same tool also helped 87 percent of developers conserve mental energy while performing repetitive tasks, which increased their productivity and performance.

Twinslash and other software vendors and developers, on the other hand, build AI-driven tools to help engineers with testing, debugging, code maintenance, and so on.

So let's learn more about AI and ML and their impact on software development.

The ability to automate monotonous manual tasks is one of AI's most significant benefits. AI can be implemented in the development process in ways that replace human intervention entirely, or at least reduce the tedium of repetitive tasks enough to let your engineers focus on more critical issues.

One of the common applications of AI in development is utilizing it to reduce the number of errors in the code.

AI-powered tools can analyze historical data to identify recurring errors or faults, spot them, and either highlight them for developers to fix or fix them independently in the background. The latter option will reduce the need to roll back for fixes when something goes wrong during your software development process.
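As a toy illustration of mining historical defect data (not any specific vendor tool), the sketch below flags files that repeatedly show up in bug-fix commits; the file names and threshold are hypothetical.

```python
from collections import Counter

def flag_error_prone_files(bugfix_commits, threshold=3):
    """Flag files appearing in bug-fix commits at least `threshold` times.

    bugfix_commits: list of lists of file paths touched by bug-fix commits.
    Returns the set of files whose historical defect count meets the threshold.
    """
    counts = Counter(path for commit in bugfix_commits for path in commit)
    return {path for path, n in counts.items() if n >= threshold}

# Hypothetical history: parser.py keeps showing up in fixes.
history = [["parser.py"], ["parser.py", "utils.py"], ["parser.py"], ["ui.py"]]
print(flag_error_prone_files(history))  # {'parser.py'}
```

Real tools combine such frequency signals with code metrics and learned models, but the core idea of learning from historical faults is the same.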

AI improves the quality, coverage, and efficiency of software testing, because it can analyze large amounts of data quickly and consistently. Eggplant and Test Sigma are two well-known AI-assisted software testing tools.

They aid software testers in writing, conducting, and maintaining automated tests to reduce the number of errors and boost the quality of software code. AI in testing is especially useful on large-scale projects; combined with automated testing tools, it helps check multi-leveled, modular software faster.

ML software can track how a user interacts with a particular platform and process this data to pinpoint patterns that can be used by developers and UX/UI designers to generate a more dynamic, slick software experience.

AI can also help discover UI blocks or elements of UX people are struggling with, so designers and developers can reconfigure and fix them.
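A minimal sketch of how such struggle-point detection might work, assuming hypothetical per-element abandonment rates collected from analytics; real tools use far richer interaction signals.

```python
import statistics

def find_struggle_points(abandon_rates, z_cutoff=1.5):
    """Flag UI elements whose abandonment rate is unusually high.

    abandon_rates: dict mapping element id -> fraction of users who
    abandoned the flow at that element. Elements more than `z_cutoff`
    standard deviations above the mean are flagged for redesign.
    """
    mean = statistics.mean(abandon_rates.values())
    stdev = statistics.pstdev(abandon_rates.values())
    return [el for el, r in abandon_rates.items()
            if stdev > 0 and (r - mean) / stdev > z_cutoff]

# Hypothetical analytics data for four UI elements.
rates = {"signup_form": 0.62, "search_bar": 0.08,
         "checkout_btn": 0.07, "nav_menu": 0.05}
print(find_struggle_points(rates))  # ['signup_form']
```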

Code security is of utmost importance in software development. You can use AI to analyze data and create models to distinguish abnormal activity from ordinary behavior. This will help software development companies catch issues and threats before they can cause any problems.
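As an illustrative sketch (assuming scikit-learn and synthetic traffic features, not any particular product), an isolation forest can be trained on ordinary request patterns so that outliers are flagged as suspicious:

```python
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
# Ordinary traffic: modest request rates (req/min) and payload sizes (MB).
normal = rng.normal(loc=[50, 2.0], scale=[10, 0.5], size=(200, 2))
model = IsolationForest(contamination=0.01, random_state=0).fit(normal)

# A burst of 5000 req/min with huge payloads should score as anomalous.
suspicious = np.array([[5000, 40.0]])
print(model.predict(suspicious))  # [-1] -> flagged as abnormal activity
```

`predict` returns -1 for anomalies and 1 for inliers; in practice the model would be trained on real telemetry and tuned against known-good baselines.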

Apart from that, tools like Snyk, integrated into an engineer's integrated development environment (IDE), can help pinpoint security vulnerabilities in apps before they are released to production.

Let's talk about the main overall trends that are changing the field of software engineering and product development.

Generative AI is a powerful technology that uses AI algorithms to create any kind of data: code, design layouts, images, audio or video files, text, and even entire applications. It studies datasets independently and can help produce a wide range of content.

One of the most significant benefits of generative AI is that it can help developers create software quickly and efficiently. For instance, it assists with:

Code completion. AI-enabled code completion tools in IDEs, such as Microsoft's Visual Studio Code, can help developers write code faster. In VS Code, such a tool is called IntelliCode; it analyzes thousands of GitHub repos, searches for code snippets that might be relevant to the developer's next step, and completes the lines for them.

Layout design. AI-powered design tools can analyze user behavior and preferences to generate optimized layouts for websites and mobile applications. For example, the design platform Canva uses machine learning algorithms in some of its AI-powered features to suggest layouts, fonts, and colors for marketing materials.

(Entire) app development. With generative AI, developers can automate the creation of software, or pieces of software, by prompting the AI with a description of the app they want to build. OpenAI's Codex can do this, using natural language processing models to parse both conversational language and the syntax of a programming language.

Continuous delivery is a software development practice where code updates are automatically built, tested, and deployed to production environments. AI-powered continuous delivery can optimize this process by using machine learning algorithms to identify and address issues before they become critical.

Machine learning algorithms can analyze the performance of production environments and predict potential issues before they occur, reducing downtime and improving software reliability.

Apart from that, ML can evaluate different deployment strategies and recommend the best approach based on past performance and the current conditions of the system.
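A hedged sketch of the idea, assuming scikit-learn and a toy labelled history of past deployments (the feature names and data are hypothetical):

```python
from sklearn.linear_model import LogisticRegression

# [lines_changed, failing_tests] for past deploys; 1 = caused an incident.
X = [[20, 0], [35, 0], [50, 1], [900, 4], [1200, 6], [700, 5]]
y = [0, 0, 0, 1, 1, 1]
model = LogisticRegression(max_iter=1000).fit(X, y)

# Score a small, clean change vs. a large change with failing tests.
print(model.predict([[30, 0], [1000, 5]]))  # [0 1]
```

A real pipeline would feed in far richer signals (test flakiness, service metrics, past rollback history) and gate the deploy, rather than just print a label.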

Now, that trend isn't directly tied to software development, but it impacts it quite significantly. Product and project managers can use AI tools to plan projects faster.

Of course, tools like ChatGPT won't replace the experience of talking to actual potential users, but they can still help managers quickly get a grasp of the market situation, trends, or common concerns users have with competitors' products.

Tools like that one can also be used to draft SWOT analyses, which are vital for planning out the software's value proposition and prioritizing features for a roadmap. ChatGPT is also a generative AI, but we thought its application deserved a separate section.

As Eric Schmidt, former CEO of Google, once said: "I think there's going to be a huge revolution in software development with AI." That revolution is happening now. It is safe to say that the future of software development lies in AI and ML.

With the rise of AI-powered programming assistants, AI-enabled design work, and security assessments, software development will become more cost-effective. Utilizing AI and ML in software development will also increase productivity, shorten time-to-market, and improve software quality.


Read more:
Software Development Future: AI and Machine Learning - Robotics and Automation News

Stablecoins and Machine Learning – the Future of Investment Trading? – JD Supra

For decades, firms engaged in what is known as high-frequency trading and algorithmic trading have cornered the market on transactions that use a combination of advanced computer algorithms, bespoke hardware, and special access to opportunities, generating returns that are often more than 30% above the expected market return, year after year. These tools have historically been locked inside firms that grant access only to investors with a large enough net worth to fund a significant up-front investment. The advent of stablecoins and machine learning (capable of generating custom, AI-driven investment plans), along with the development of crypto derivative trading, is offering the opportunity to open the market to these types of investment classes.

The Reed Smith On-Chain team has enjoyed its time interacting with the industry experts at Consensus 2023 in Austin, Texas, and is looking forward to the continued discussions and panels involving industry leaders and innovators.

Huge leaps in artificial intelligence, virtual/augmented reality, quantum computing, and other fields of computer science are poised to dwarf all the digital disruption that has preceded this moment.

Read the rest here:
Stablecoins and Machine Learning - the Future of Investment Trading? - JD Supra

Indian job market to see 22% churn in 5 yrs; AI, machine learning among top roles: WEF – The Hindu

The Indian job market is estimated to witness 22% churn over the next five years, with top emerging roles coming from AI, machine learning and data segments, a new study showed on May 1.

Globally, the job market churn is estimated at 23%, with 69 million new jobs expected to be created and 83 million eliminated by 2027, the World Economic Forum said in its latest Future of Jobs report.


"Almost a quarter of jobs (23%) are expected to change in the next five years through growth of 10.2% and decline of 12.3% (globally)," the WEF said.

According to the estimates of the 803 companies surveyed for the report, employers anticipate 69 million new jobs to be created and 83 million eliminated among the 673 million jobs corresponding to the dataset, a net decrease of 14 million jobs, or 2% of current employment.

Regarding India, it said 61% of companies think broader applications of ESG (environment, social and governance) standards will drive job growth, followed by increased adoption of new technologies (59%) and broadening digital access (55%).


Top roles for industry transformation in India would be AI (artificial intelligence) and machine learning specialists, and data analysts and scientists, it added.

The report also found that manufacturing and oil and gas sectors have the highest level of green skill intensity globally, with India, the U.S. and Finland featuring at the top of the list for the oil and gas sector.

Also, more populous economies such as India and China were more positive than the global average about talent availability when hiring.

On the other hand, India figured among the seven countries where job growth was slower for social jobs than non-social jobs.

In India, 97% of respondents said that the preferred source of funding for training was 'funded by organisation' as against the global average of 87%.

The WEF said that macro trends, including the green transition, ESG standards and localisation of supply chains are the leading drivers of job growth globally, with economic challenges, including high inflation, slower economic growth and supply shortages, posing the greatest threat.

Advancing technology adoption and increasing digitisation will cause significant labour market churn, with an overall net positive in job creation, it added.


"For people around the world, the past three years have been filled with upheaval and uncertainty for their lives and livelihoods, with COVID-19, geopolitical and economic shifts, and the rapid advancement of AI and other technologies now risks adding more uncertainty," said Saadia Zahidi, Managing Director, World Economic Forum.

"The good news is that there is a clear way forward to ensure resilience. Governments and businesses must invest in supporting the shift to the jobs of the future through the education, reskilling and social support structures that can ensure individuals are at the heart of the future of work," she added.

The survey covered 803 companies collectively employing more than 11.3 million workers in 27 industry clusters and 45 economies from all world regions.

The WEF said technology continues to pose both challenges and opportunities to labour markets, but employers expect most technologies to contribute positively to job creation.

The fastest-growing roles are being driven by technology and digitalisation. Big data ranks at the top among technologies seen to create jobs. Employment of data analysts and scientists, big data specialists, AI and machine learning specialists, and cybersecurity professionals is expected to grow on average by 30 per cent by 2027.

At the same time, the fastest declining roles are also being driven by technology and digitalisation, with clerical or secretarial roles, including bank tellers, cashiers and data entry clerks expected to decline fastest.

Also, while expectations of the displacement of physical and manual work by machines have decreased, reasoning, communicating and coordinating (all traits with a comparative advantage for humans) are expected to become more automatable in future.

Artificial intelligence, a key driver of potential algorithmic displacement, is expected to be adopted by nearly 75% of surveyed companies and to lead to high churn, with 50% of organisations expecting it to create job growth and 25% anticipating it to result in job losses.

However, the largest absolute gains in jobs will come from education and agriculture. The report found that jobs in the education industry are expected to grow by about 10%, leading to 3 million additional jobs for vocational education teachers and university and higher education teachers.

Jobs for agricultural professionals, especially agricultural equipment operators, graders and sorters, are expected to see a 15-30% increase, leading to an additional 4 million jobs.

Globally, six in 10 workers will require training before 2027, but only half of the employees are seen to have access to adequate training opportunities today.

At the same time, the report estimates that, on average, 44% of an individual worker's skills will need to be updated.

In response to the cost-of-living crisis, 36% of companies recognise that offering higher wages could help them attract talent. Yet, companies are planning to mix both investment and displacement to make their workforce more productive and cost-effective.

Four in five surveyed companies plan to invest in learning and training on the job as well as automating processes in the next five years.

Read the original:
Indian job market to see 22% churn in 5 yrs; AI, machine learning among top roles: WEF - The Hindu

Very Slow Movie Player Avoids E-Ink Ghosting With Machine Learning – Hackaday

[mat kelcey] was so impressed and inspired by the concept of a very slow movie player (playing a movie at a very slow rate on a kind of DIY photo frame) that he created his own with a high-resolution e-ink display. It shows high-definition frames from Alien (1979) at a rate of about one frame every 200 seconds, but a surprising amount of work went into getting a color film, intended to look good on a movie screen, to also look good when displayed on black & white e-ink.

The usual way to display images on a screen that is limited to black or white pixels is dithering, or manipulating relative densities of white and black to give the impression of a much richer image than one might otherwise expect. By itself, a dithering algorithm isn't a cure-all, and [mat] does an excellent job of explaining why, complete with loads of visual examples.
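For readers unfamiliar with dithering, here is a plain-Python sketch of Floyd-Steinberg error diffusion, one classic algorithm of the kind [mat] starts from (his actual pipeline is more involved):

```python
def floyd_steinberg(img):
    """Dither a grayscale image (list of rows, values 0-255) to 0/255."""
    h, w = len(img), len(img[0])
    px = [row[:] for row in img]  # work on a mutable copy
    for y in range(h):
        for x in range(w):
            old = px[y][x]
            new = 255 if old >= 128 else 0
            px[y][x] = new
            err = old - new
            # Diffuse the quantisation error to unprocessed neighbours.
            if x + 1 < w:
                px[y][x + 1] += err * 7 / 16
            if y + 1 < h:
                if x > 0:
                    px[y + 1][x - 1] += err * 3 / 16
                px[y + 1][x] += err * 5 / 16
                if x + 1 < w:
                    px[y + 1][x + 1] += err * 1 / 16
    return px

# A flat mid-gray patch dithers to a roughly alternating 0/255 pattern.
flat_gray = [[128] * 4 for _ in range(4)]
out = floyd_steinberg(flat_gray)
print(out[0])
```

Note that nothing here considers the *previous* frame, which is exactly the gap [mat]'s approach addresses.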

One consideration is the e-ink display itself. With these displays, changing the screen contents is where all the work happens, and it can be a visually imperfect process when it does. A very slow movie player aims to present each frame as cleanly as possible in an artful and stylish way, so rewriting the entire screen for every frame would mean uglier transitions, and that just wouldnt do.

So the overall challenge [mat] faced was twofold: how to dither a frame so that it looked great, while also minimizing the number of pixels changed from the previous frame? All of a sudden, he had an interesting problem and chose to solve it in an interesting way: training a GAN to generate the dithers, aiming to balance best image quality with minimal pixel change from the previous frame. The results do a great job of delivering quality visuals even when there are sharp changes in scene contrast to deal with. Curious about the code? Here's the GitHub repository.

Here's the original Very Slow Movie Player that so inspired [mat], and here's a color version that helps make every frame a work of art. As for dithering? It's been around for ages, but that doesn't mean there aren't new problems to solve in that space. For example, making dithering look good in the game Return of the Obra Dinn required a custom algorithm.

Original post:
Very Slow Movie Player Avoids E-Ink Ghosting With Machine Learning - Hackaday

Early antidepressant treatment response prediction in major … – BMC Psychiatry

Standards and guidelines of machine learning in psychiatry were followed when this study was conducted and reported [20].

This study included 291 inpatients at a tertiary hospital who were diagnosed with major depressive disorder. Patient eligibility was determined based on the criteria of the Diagnostic and Statistical Manual of the American Psychiatric Association, Fourth Edition (DSM-IV). Blood samples were collected before antidepressant treatment.

All patients met the following criteria: Han Chinese, 18-65 years old, baseline 17-item Hamilton Depression Rating Scale (HAMD-17) [21] scores > 17 points, and their depressive symptoms lasted at least 2 weeks. All patients had just been diagnosed or had recently relapsed and had not been on medication for at least two weeks prior to enrollment. All diagnoses were made independently by two psychiatrists with professional tenure or higher, and confirmed by a third psychiatrist. Participants had never been diagnosed with another DSM-IV Axis I diagnosis (including substance use disorder, schizophrenia, affective disorder, bipolar disorder, generalized anxiety disorder, panic disorder, obsessive-compulsive disorder). They had never been diagnosed with personality disorder or mental retardation. Patients with a history of organic brain syndrome, endocrine or primary organic diseases, or other medical conditions that would hinder psychiatric evaluation were excluded from the study. Other exclusion criteria included blood, heart, liver, and kidney disorders; electroconvulsive therapy in the past 6 months; or an episode of mania in the previous 12 months. Pregnant and nursing females were also excluded from participation.

All study subjects provided written informed consent, approved by the Zhongda Hospital Ethics Committee (2016ZDSYLL100-P01), in accordance with the Declaration of Helsinki.

Response was defined as a reduction of at least 50% in HAMD-17 scores from baseline to two weeks [22]. Accordingly, the two-week treatment participants were divided into two groups: responders and non-responders.
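That definition can be written out as a simple labelling function (a sketch with illustrative scores, not code from the study):

```python
def is_responder(baseline, week2):
    """True if the HAMD-17 score fell by at least 50% from baseline to week 2."""
    return (baseline - week2) / baseline >= 0.5

# Illustrative scores: a drop from 24 to 11 is a ~54% reduction.
print(is_responder(24, 11))  # True
print(is_responder(24, 14))  # False (~42% reduction)
```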

Two retrospective self-report questionnaires, the Childhood Trauma Questionnaire (28-item short form, CTQ-SF) and the Life Events Scale (LES), were used to evaluate childhood adversities and recent stress exposures, respectively. The evaluation of the LES and CTQ scales was completed by the same nurse using consistent, scripted language. The LES is a self-assessed questionnaire composed of 48 items, reflecting both positive and negative life events experienced within the past year. The LES is divided into positive and negative life events (NLES). The CTQ-SF was dichotomized for use in the gene-environment interaction analyses.

The twelve demographic and clinical features considered were age, gender, years of education, marital status, family history, first occurrence or not, age of onset, number of occurrences, illness duration, and baseline HAMD-17, NLES, and CTQ-SF scores (Supplemental Material Table 1).

Primers were previously designed by us to encompass 100 bp upstream and 100 bp downstream of TPH2 SNPs that showed a significant association with the antidepressant response, as well as a GC sequence content of CpGs > 20% after methylation [11, 12]. Of the 24 TPH2 SNPs, only 11 (rs7305115, rs2129575, rs11179002, rs11178998, rs7954758, rs1386494, rs1487278, rs17110563, rs34115267, rs10784941, rs17110489) met the DNA methylation status criteria of the sequences to be detected (Supplemental Material Table 2). Methylation levels of 38 TPH2 CpGs were calculated and presented as the ratio of the number of methylated cytosines to the total number of cytosines.

In the data set comprising 291 observations of 51 variables (12 demographic and clinical features, 38 CpG methylation levels, and 1 response variable), 6% of entries were missing (see Fig. 1). Of the CpG methylation levels, 3 CpGs (TPH2-7-99, TPH2-7-142, TPH2-7-170) were excluded because they had more than 45% missing values. Given the randomness of experimental/technological errors and the interrelatedness of the variables, the DNA methylation data were assumed to be missing completely at random (MCAR)/missing at random (MAR), so mean imputation could be applied [23, 24]. Missing values in the remaining features were imputed with the mode for categorical features and the mean for numerical features.

Fig. 1: Missingness pattern in the DNA methylation data set
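A sketch of the preprocessing just described, assuming pandas and a toy data frame in place of the real methylation data (the column names are hypothetical):

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({
    "cpg_a":  [0.2, np.nan, 0.4, 0.6],        # 25% missing -> keep
    "cpg_b":  [np.nan, np.nan, np.nan, 0.1],  # 75% missing -> drop
    "gender": ["F", "M", None, "F"],          # categorical -> mode-impute
})

# Drop columns with more than 45% missing values.
df = df.loc[:, df.isna().mean() <= 0.45].copy()

# Mean-impute numeric columns, mode-impute categorical ones.
for col in df.columns:
    if pd.api.types.is_numeric_dtype(df[col]):
        df[col] = df[col].fillna(df[col].mean())
    else:
        df[col] = df[col].fillna(df[col].mode()[0])

print(df)
```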

Normalization (linear transformation) was used to improve the numerical stability of the model and reduce training time [25]. To avoid overfitting while harnessing the maximum amount of data, cross-validation (CV) over the entire sample was used to report prediction performance. The CV was 5-fold, and the averaged prediction metrics, including the area under the receiver operating curve (AUC), F-measure, G-mean, accuracy, sensitivity, specificity, positive predictive value (PPV), and negative predictive value (NPV), were reported. Hyperparameter tuning was based on AUC with random search using the caret default tuning settings. A wrapper method (Recursive Feature Elimination with random forest, RFE-RF) [26] with 5-fold CV was employed to select the features that contributed most to the prediction of early antidepressant response in MDD patients. Variable importance was also estimated using random forest. For better replicability, the 5-fold CV procedure was repeated 10 times.
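The evaluation scheme described above can be sketched as follows, assuming scikit-learn and synthetic data in place of the clinical and methylation features (the study itself used R/caret):

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import RepeatedStratifiedKFold, cross_val_score

# Synthetic stand-in: 291 samples, 47 predictors, binary response.
X, y = make_classification(n_samples=291, n_features=47, random_state=0)

# 5-fold CV repeated 10 times, scored by AUC, as in the study design.
cv = RepeatedStratifiedKFold(n_splits=5, n_repeats=10, random_state=0)
aucs = cross_val_score(RandomForestClassifier(random_state=0), X, y,
                       cv=cv, scoring="roc_auc")
print(round(aucs.mean(), 3))  # AUC averaged over 10 x 5 = 50 folds
```

Stratified folds preserve the responder/non-responder ratio in each split, which matters when the two classes are unbalanced.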

ML methods were implemented via their interface with the open-source R package caret in a standardized and reproducible way. Five supervised ML algorithms were used in this study to develop predictive models: logistic regression, classification and regression trees (CART), support vector machine with a radial basis function kernel (SVM-RBF), a boosting method (LogitBoost), and random forests (RF). All analyses were implemented in R statistical software (version 4.0.4). We utilized the caret package, which wraps the rpart, caTools, e1071, and randomForest packages for CART, LogitBoost, SVM-RBF, and RF, respectively.

Read this article:
Early antidepressant treatment response prediction in major ... - BMC Psychiatry