Archive for the ‘Machine Learning’ Category

A path to resourceful autonomous agents | Berkeley News – UC Berkeley

In this talk, Sergey Levine discusses how advances in offline reinforcement learning can enable machine learning systems to learn to make more optimal decisions from data. (Video by: CITRIS and the Banatao Institute)

On Wednesday, April 12, Sergey Levine, associate professor of electrical engineering and computer sciences and the leader of the Robotic AI & Learning (RAIL) Lab at UC Berkeley, delivered the second of four Distinguished Lectures on the Status and Future of AI, co-hosted by CITRIS Research Exchange and the Berkeley Artificial Intelligence Research Group (BAIR).

Levine's lecture examined algorithmic advances that can help machine learning systems retain both discernment and flexibility. When trained with offline reinforcement learning (RL) methods, machines can solve problems in new environments by drawing on large sets of data and previously learned lessons, while still maintaining the adaptability to introduce new behaviors, and thus new solutions.
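To make the offline RL idea concrete, here is a minimal, hedged sketch of batch Q-learning: a tabular agent trained entirely on a fixed set of logged transitions, with no new environment interaction. The dataset, dimensions, and hyperparameters are invented for illustration; this is not Levine's actual method.

```python
import numpy as np

# Toy offline (batch) RL: tabular Q-learning over a fixed dataset of logged
# (state, action, reward, next_state) transitions. No environment is queried.
rng = np.random.default_rng(0)
n_states, n_actions, gamma, lr = 5, 2, 0.9, 0.05

# Hypothetical logged dataset, e.g. collected earlier by some behavior policy.
dataset = [(int(rng.integers(n_states)), int(rng.integers(n_actions)),
            float(rng.normal()), int(rng.integers(n_states)))
           for _ in range(1000)]

Q = np.zeros((n_states, n_actions))
for _ in range(50):                          # repeated sweeps over the batch
    for s, a, r, s_next in dataset:
        target = r + gamma * Q[s_next].max()  # bootstrapped TD target
        Q[s, a] += lr * (target - Q[s, a])    # update from logged data only

print("greedy policy derived offline:", Q.argmax(axis=1))
```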

As Levine explained, data-driven, or generative, AI techniques, such as the image generator DALL-E 2, are capable of producing seemingly human-made creations, while RL methods, such as the algorithms that control robots and beat humans at board games, can develop solutions that solve problems in unexpected ways. His research aims to discover how machine learning systems can adapt to unknown situations and make ideal decisions when faced with the full complexity of the real world.

Sergey Levine speaks about using large datasets for reinforcement learning at the Center for Information Technology Research in the Interest of Society and the Banatao Institute (CITRIS) at UC Berkeley.

"If we really want agents that are goal-directed, that have purpose, that can come up with inventive solutions, it'll take more than just learning," said Levine. "Learning is important, and the data is important, but the combination of learning and search is a really powerful recipe."

"Data without optimization doesn't allow us to solve new problems in new ways. Optimization without data is hard to apply to the real world outside of simulators," he said. "If we can get both of those things, maybe we can get closer to this space explorer robot and actually have it come up with novel solutions to new and unexpected problems."

See the original post:
A path to resourceful autonomous agents | Berkeley News - UC Berkeley

What Are Adversarial Attacks in Machine Learning and How Can We … – MUO – MakeUseOf

Technology often means our lives are more convenient and secure. At the same time, however, such advances have unlocked more sophisticated ways for cybercriminals to attack us and corrupt our security systems, making them powerless.

Artificial intelligence (AI) can be utilized by cybersecurity professionals and cybercriminals alike; similarly, machine learning (ML) systems can be used for both good and evil. This lack of moral compass has made adversarial attacks in ML a growing challenge. So what actually are adversarial attacks? What are their purpose? And how can you protect against them?

Adversarial ML or adversarial attacks are cyberattacks that aim to trick an ML model with malicious input and thus lead to lower accuracy and poor performance. So, despite its name, adversarial ML is not a type of machine learning but a variety of techniques that cybercriminals (aka adversaries) use to target ML systems.

The main objective of such attacks is usually to trick the model into handing out sensitive information, failing to detect fraudulent activities, producing incorrect predictions, or corrupting analysis-based reports. While there are several types of adversarial attacks, they frequently target deep learning-based spam detection.
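To make this concrete, here is a hedged, toy sketch of one classic recipe for crafting such malicious input: a fast-gradient-sign-style perturbation against a small linear classifier. The weights, input, and perturbation budget are all invented, and the budget is exaggerated so the effect is visible in only three dimensions.

```python
import numpy as np

def sigmoid(z):
    return 1 / (1 + np.exp(-z))

# Toy linear classifier (invented weights) and a "clean" input of class 1.
w, b = np.array([2.0, -3.0, 1.5]), 0.5
x, y = np.array([0.5, -0.5, 0.25]), 1.0

# Gradient of the cross-entropy loss w.r.t. the input: (p - y) * w.
p = sigmoid(x @ w + b)
grad_x = (p - y) * w

# FGSM-style step: move the input in the direction that increases the loss.
epsilon = 0.6
x_adv = x + epsilon * np.sign(grad_x)

print("clean prediction:      ", sigmoid(x @ w + b))      # ~0.97 (class 1)
print("adversarial prediction:", sigmoid(x_adv @ w + b))  # ~0.37 (flips to class 0)
```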

You've probably heard about an adversary-in-the-middle attack, a newer and more sophisticated phishing technique that involves stealing private information and session cookies and even bypassing multi-factor authentication (MFA) methods. Fortunately, you can combat these with phishing-resistant MFA technology.

The simplest way to classify types of adversarial attacks is to separate them into two main categories: targeted attacks and untargeted attacks. As the names suggest, targeted attacks have a specific target (like a particular person), while untargeted ones don't have anyone specific in mind: they can target almost anybody. Not surprisingly, untargeted attacks are less time-consuming but also less successful than their targeted counterparts.

These two types can be further subdivided into white-box and black-box adversarial attacks, where the color suggests the knowledge or the lack of knowledge of the targeted ML model. Before we dive deeper into white-box and black-box attacks, let's take a quick look at the most common types of adversarial attacks.

What sets these three types of adversarial attacks apart is the amount of knowledge adversaries have about the inner workings of the ML systems they're planning to attack. While the white-box method requires exhaustive information about the targeted ML model (including its architecture and parameters), the black-box method requires no information and can only observe the model's outputs.

The grey-box model, meanwhile, stands between these two extremes. In a grey-box attack, adversaries have some information about the dataset or other details of the ML model, but not all of it.

While humans are still the critical component in strengthening cybersecurity, AI and ML have learned how to detect and prevent malicious attacks: they can increase the accuracy of detecting malicious threats, monitoring user activity, identifying suspicious content, and much more. But can they push back adversarial attacks and protect ML models?

One way we can combat cyberattacks is to train ML systems to recognize adversarial attacks ahead of time by adding adversarial examples to their training procedure.
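A hedged sketch of what that adversarial training loop looks like, reusing the toy linear-model idea from above: each training step augments the clean batch with gradient-sign perturbed copies, so the classifier also sees adversarial examples during training. The data and hyperparameters are synthetic placeholders, not a production recipe.

```python
import numpy as np

def sigmoid(z):
    return 1 / (1 + np.exp(-z))

rng = np.random.default_rng(3)
X = rng.normal(size=(200, 3))
y = (X @ np.array([1.0, -2.0, 0.5]) > 0).astype(float)   # synthetic labels

w, b, lr, eps = np.zeros(3), 0.0, 0.1, 0.1

for _ in range(200):
    p = sigmoid(X @ w + b)
    grad_x = (p - y)[:, None] * w            # per-example input gradient
    X_adv = X + eps * np.sign(grad_x)        # adversarial copies of the batch
    X_all = np.vstack([X, X_adv])            # train on clean + adversarial data
    y_all = np.concatenate([y, y])
    p_all = sigmoid(X_all @ w + b)
    w -= lr * X_all.T @ (p_all - y_all) / len(y_all)
    b -= lr * (p_all - y_all).mean()

acc = ((sigmoid(X @ w + b) > 0.5) == y.astype(bool)).mean()
print("clean accuracy after adversarial training:", acc)
```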

Unlike this brute-force approach, the defensive distillation method trains a primary model and then uses its softened output probabilities to train a secondary, distilled model. ML models trained with defensive distillation are less sensitive to adversarial samples, which makes them less susceptible to exploitation.

We could also constantly modify the algorithms the ML models use for data classification, which could make adversarial attacks less successful.

Another notable technique is feature squeezing, which cuts back the search space available to adversaries by squeezing out unnecessary input features. Here, the aim is to minimize false positives and make the detection of adversarial examples more effective.
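A hedged illustration of the feature squeezing idea: reduce the bit depth of an input image and flag inputs whose prediction changes sharply after squeezing. The model_predict function below is a made-up placeholder for whatever classifier is being defended.

```python
import numpy as np

def squeeze_bit_depth(x, bits=3):
    """Quantize pixel values in [0, 1] down to 2**bits levels."""
    levels = 2 ** bits - 1
    return np.round(x * levels) / levels

def model_predict(x):
    # Placeholder "model": any function returning class probabilities works here.
    logits = np.array([x.mean(), 1 - x.mean()])
    return np.exp(logits) / np.exp(logits).sum()

def looks_adversarial(x, threshold=0.2):
    # Large disagreement between raw and squeezed predictions is suspicious.
    p_raw = model_predict(x)
    p_squeezed = model_predict(squeeze_bit_depth(x))
    return float(np.abs(p_raw - p_squeezed).sum()) > threshold

image = np.random.default_rng(2).random((28, 28))   # toy "image" in [0, 1]
print("flagged as adversarial:", looks_adversarial(image))
```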

Adversarial attacks have shown us that many ML models can be shattered in surprising ways. After all, adversarial machine learning is still a new research field within the realm of cybersecurity, and it comes with many complex problems for AI and ML.

While there isn't a magical solution for protecting these models against all adversarial attacks, the future will likely bring more advanced techniques and smarter strategies for tackling this terrible adversary.

View post:
What Are Adversarial Attacks in Machine Learning and How Can We ... - MUO - MakeUseOf

IIT Madras researchers develop machine learning tool to detect tumours in brain, spinal cord – Deccan Herald

Researchers with the prestigious Indian Institute of Technology-Madras (IIT-M) have developed a machine learning-based computational tool for better detection of cancer-causing tumours in the brain and spinal cord. The web server, known as GBMDriver (GlioBlastoma Multiforme Drivers), is now publicly available online.

The GBMDriver was developed specifically to identify driver mutations and passenger mutations (passenger mutations are neutral mutations) in Glioblastoma. In order to develop this web server, a variety of factors such as amino acid properties, di- and tri-peptide motifs, conservation scores, and Position Specific Scoring Matrices (PSSM) were taken into account.

In this study, 9,386 driver mutations and 8,728 passenger mutations in glioblastoma were analysed. Driver mutations in glioblastoma were identified with an accuracy of 81.99 per cent on a blind set of 1,809 mutants, which is better than existing computational methods. The method depends entirely on the protein sequence.


"Glioblastoma is a fast and aggressively growing tumour in the brain and spinal cord. Although research has been undertaken to understand this tumour, therapeutic options remain limited, with an expected survival rate of less than two years from the initial diagnosis," the IIT-M said.

The research was led by Prof. M. Michael Gromiha, Department of Biotechnology, IIT-M and the findings have been published in the reputed peer-reviewed journal Briefings in Bioinformatics.

"We have identified the important amino acid features for identifying cancer-causing mutations and achieved the highest accuracy for distinguishing between driver and neutral mutations. We hope that this tool (GBMDriver) could help to prioritize driver mutations in glioblastoma and assist in identifying potential therapeutic targets, thus helping to develop drug design strategies," Prof Gromiha said.

The key applications of this research include a methodology and features that can be ported to other diseases, which could serve as one of the important criteria for disease prognosis.

"Our method showed an accuracy and AUC of 73.59% and 0.82, respectively, on 10-fold cross-validation, and 81.99% and 0.87 on a blind set of 1,809 mutants. We envisage that the present method is helpful to prioritize driver mutations in glioblastoma and assist in identifying therapeutic targets," Ms Medha Pandey, a PhD student at IIT-M, said.
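The evaluation protocol described in the quote (10-fold cross-validated accuracy and AUC) can be sketched as follows; the features, labels, and classifier below are synthetic stand-ins, not the actual GBMDriver pipeline.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_validate

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 20))        # e.g. amino acid / PSSM-derived features
y = rng.integers(0, 2, size=500)      # 1 = driver, 0 = passenger (toy labels)

clf = RandomForestClassifier(n_estimators=200, random_state=0)
scores = cross_validate(clf, X, y, cv=10, scoring=["accuracy", "roc_auc"])

print("10-fold accuracy:", scores["test_accuracy"].mean().round(3))
print("10-fold AUC:     ", scores["test_roc_auc"].mean().round(3))
```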

The rest is here:
IIT Madras researchers develop machine learning tool to detect tumours in brain, spinal cord - Deccan Herald

Software Development Future: AI and Machine Learning – Robotics and Automation News

Discover how AI and ML can potentially change the software development industry, and how AI affects software development and minimizes developers' workload

Software development is a long, complex, and expensive process. Business owners and developers themselves constantly seek ways to optimize it. Good news for you: using artificial intelligence (AI) and machine learning (ML) is becoming increasingly popular in that regard.

According to a recent survey by Gartner, AI and ML are some of the trends that will shape the future of software development. For instance, 73 percent of early adopters of GitHub Copilot, an AI-driven assistant for engineers, reported that it helped them stay in the flow.

The use of this tool resulted in 87 percent of developers conserving mental energy while performing repetitive tasks. That increased their productivity and performance.

Twinslash and other software vendors and developers, on the other hand, build AI-driven tools to help engineers with testing, debugging, code maintenance, and so on.

So, let's learn more about AI and ML and their impact on software development.

The ability to automate monotonous manual tasks is one of the significant benefits of AI. There are several ways to implement AI effectively in the development process, either replacing human intervention completely or at least reducing it enough to remove the tedium of repetitive tasks and let your engineers focus on more critical issues.

One of the common applications of AI in development is utilizing it to reduce the number of errors in the code.

AI-powered tools can analyze historical data to identify recurring errors or faults, spot them, and either highlight them for developers to fix or fix them independently in the background. The latter option will reduce the need to roll back for fixes when something goes wrong during your software development process.

AI improves the quality, coverage, and efficiency of software testing. This is because it can analyze large amounts of data without making mistakes. Eggplant and Test Sigma are two well-known AI-assisted software testing tools.

They aid software testers in writing, conducting, and maintaining automated tests to reduce the number of errors and boost the quality of software code. AI in testing is extremely useful in large-scale projects; usually combined with automated testing tools, it helps check through multi-leveled, modular software faster.

ML software can track how a user interacts with a particular platform and process this data to pinpoint patterns that can be used by developers and UX/UI designers to generate a more dynamic, slick software experience.

AI can also help discover UI blocks or elements of UX people are struggling with, so designers and developers can reconfigure and fix them.
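One simple, hedged way to mine such interaction patterns is to cluster per-session metrics and inspect each group; the session features below (clicks, time on page, scroll depth) are invented for illustration.

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(4)
sessions = np.column_stack([
    rng.poisson(8, 500),           # clicks per session
    rng.exponential(60, 500),      # seconds on page
    rng.random(500),               # scroll depth (0-1)
])

# Group sessions by similar behavior; a cluster that leaves a screen quickly
# may point to a confusing UI element worth redesigning.
clusters = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(sessions)
for c in range(3):
    print(f"cluster {c} mean behavior:", sessions[clusters == c].mean(axis=0).round(1))
```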

Code security is of utmost importance in software development. You can use AI to analyze data and create models to distinguish abnormal activity from ordinary behavior. This will help software development companies catch issues and threats before they can cause any problems.
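A minimal sketch of that anomaly-detection idea, assuming activity can be summarized by a few numeric features (requests per minute, payload size, error rate); the data, feature choices, and thresholds are invented.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
# Fit only on examples of ordinary behavior.
normal_activity = rng.normal(loc=[100, 2.0, 0.01],
                             scale=[10, 0.5, 0.005],
                             size=(1000, 3))        # req/min, payload KB, error rate

detector = IsolationForest(contamination=0.01, random_state=0)
detector.fit(normal_activity)

suspicious = np.array([[450.0, 12.0, 0.2]])         # unusually heavy and error-prone
print("label (-1 = abnormal, 1 = normal):", detector.predict(suspicious))
```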

Apart from that, tools like Snyk, integrated into engineers' Integrated Development Environments (IDEs), can help pinpoint security vulnerabilities in apps before they are released to production.

Let's talk about the main overall trends that are changing the field of software engineering and product development.

Generative AI is a powerful technology that uses AI algorithms to create any kind of data: code, design layouts, images, audio or video files, text, and even entire applications. It studies datasets independently and can help produce a wide range of content.

One of the most significant benefits of generative AI is that it can help developers create software quickly and efficiently. For instance, it assists with:

Code completion. AI-enabled code completion tools in IDEs, such as Microsoft's Visual Studio Code, can help developers write code faster. For VS Code, such a tool is called IntelliCode: it analyzes a ton of GitHub repos, searches for code snippets that might be relevant for the developer's next step, and completes the lines for them.

Layout design. AI-powered design tools can analyze user behavior and preferences to generate optimized layouts for websites and mobile applications. For example, AI-powered plugins on the design platform Canva use machine learning algorithms to suggest layouts, fonts, and colors for marketing materials.

(Entire) app development. With generative AI, developers can automate the process of creating software, or pieces of software, by giving the AI prompts describing the app they want to build. OpenAI's Codex can do that, using natural language processing models to parse both conversational language and the syntax of a programming language.

Continuous delivery is a software development practice where code updates are automatically built, tested, and deployed to production environments. AI-powered continuous delivery can optimize this process by using machine learning algorithms to identify and address issues before they become critical.

Machine learning algorithms can analyze the performance of production environments and predict potential issues before they occur, reducing downtime and improving software reliability.

Apart from that, ML can parse through different deployment strategies and recommend the best approach based on past performance and current conditions of the system.
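A hedged sketch of that kind of predictive check: train a classifier on historical deployment metrics labelled with whether an incident followed, then score a candidate release before promoting it. All data, features, and thresholds below are invented for the example.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
metrics = rng.random((300, 3))                 # cpu, memory, error rate (scaled 0-1)
incident = (metrics[:, 2] + 0.3 * metrics[:, 0]
            + rng.normal(0, 0.1, 300) > 0.8)   # synthetic "incident followed" labels

model = LogisticRegression().fit(metrics, incident)

candidate = np.array([[0.7, 0.6, 0.4]])        # metrics from a canary deployment
risk = model.predict_proba(candidate)[0, 1]
print(f"predicted incident risk: {risk:.2f}")
```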

Now, that trend isn't directly tied to software development, but it impacts it quite significantly. Product and project managers can use AI tools to plan projects faster.

Of course, tools like ChatGPT won't replace the experience of talking to actual potential users, but they can still help managers quickly get a grasp of the market situation, trends, or common concerns users have with competitors' products.

Such tools can also be used to draft SWOT analyses, which are vital for planning out the software's value proposition and prioritizing the features to be built for a roadmap. ChatGPT is also a generative AI, but we thought its application here deserves a separate section.

As Eric Schmidt, former CEO of Google, once said, "I think there's going to be a huge revolution in software development with AI." That revolution is now. It is safe to say that the future of software development lies in AI and ML.

With the rise of AI-powered programming assistants and AI-enabled design work and security assessments, software development will become more cost-effective. Utilizing AI and ML in software development will also increase productivity, shorten time-to-market, and improve software quality.


Read more:
Software Development Future: AI and Machine Learning - Robotics and Automation News

Stablecoins and Machine Learning – the Future of Investment Trading? – JD Supra

For decades, firms engaging in what is known as high-frequency trading and algorithmic trading have cornered the market on transactions that utilize a combination of advanced computer algorithms, bespoke hardware, and special access to opportunities to generate returns that are often more than 30% above the expected market return, year after year. These tools have historically been locked inside firms that allow access only to investors with a large enough net worth to fund a significant up-front investment. The advent of stablecoins and machine learning (capable of generating custom, AI-driven investment plans), along with the development of crypto derivative trading, is offering the opportunity to open the market to these types of investment classes.

The Reed Smith On-Chain team has enjoyed its time interacting with the industry experts at Consensus 2023 in Austin, Texas, and is looking forward to the continued discussions and panels involving industry leaders and innovators.

Huge leaps in artificial intelligence, virtual/augmented reality, quantum computing and other fields of computer science are poised to dwarf all the digital disruption that has preceded this moment.

Read the rest here:
Stablecoins and Machine Learning - the Future of Investment Trading? - JD Supra