Archive for the ‘Machine Learning’ Category

Research Analyst / Associate / Fellow in Machine Learning and Artificial Intelligence job with NATIONAL UNIVERSITY OF SINGAPORE | 289568 – Times…

The Role

The Sustainable and Green Finance Institute (SGFIN) is a new university-level research institute in the National University of Singapore (NUS), jointly supported by the Monetary Authority of Singapore (MAS) and NUS. SGFIN aspires to develop deep research capabilities in sustainable and green finance, provide thought leadership in the sustainability space, and shape sustainability outcomes across the financial sector and the economy at large.

This role is ideally suited for those wishing to work in academic or industry research in quantitative analysis, particularly in the area of machine learning and artificial intelligence. The responsibilities of the role will include designing and developing various analytical frameworks to analyze structured, unstructured, and non-traditional data related to corporate financial, environmental, and social indicators.

There are no teaching obligations for this position, and the candidate will have the opportunity to develop their research portfolio.

Duties and Responsibilities

The successful candidate will be expected to assume the following responsibilities:

Qualifications

Covid-19 Message

At NUS, the health and safety of our staff and students are among our utmost priorities, and COVID-vaccination supports our commitment to ensure the safety of our community and to make NUS as safe and welcoming as possible. Many of our roles require a significant amount of physical interaction with students, staff, and members of the public. Even for job roles that may be performed remotely, there will be instances where an on-campus presence is required.

In accordance with Singapore's legal requirements, unvaccinated workers will not be able to work on the NUS premises with effect from 15 January 2022. As such, job applicants will need to be fully COVID-19 vaccinated to secure successful employment with NUS.

Read the original here:
Research Analyst / Associate / Fellow in Machine Learning and Artificial Intelligence job with NATIONAL UNIVERSITY OF SINGAPORE | 289568 - Times...

12 examples of artificial intelligence in everyday life – ITProPortal

In the article below, you can check out twelve examples of AI being present in our everyday lives.

Artificial intelligence (AI) is growing in popularity, and it's not hard to see why. AI has the potential to be applied in many different ways, from cooking to healthcare.

Though artificial intelligence may be a buzzword today, tomorrow, it might just become a standard part of our everyday lives. In fact - it's already here.

Also known as autonomous vehicles, self-driving cars use AI tech and machine learning to move around without the passenger having to take control at any time.

They work and continue to advance by using lots of sensor data, learning how to handle traffic and making real-time decisions.

Let's begin with something really ubiquitous - smart digital assistants. Here we are talking about Siri, Google Assistant, Alexa and Cortana.

We included them in our list because they can essentially listen and then respond to your commands, turning them into actions.

So, you hit up Siri, you give her a command, like "call a friend," she analyzes what you said, sifts through all the background noise surrounding your speech, interprets your command, and actually does it, all in a couple of seconds.

The best part here is that these assistants are getting smarter and smarter, improving every stage of the command process we mentioned above. You don't have to be as specific with your commands as you were just a couple of years ago.

Furthermore, virtual assistants have become better and better at filtering out useless background noise from your actual commands.

One of the most well-known AI initiatives is a project run by Microsoft. It comes as no surprise that Microsoft is one of the top AI companies around (though it's definitely not the only one).

The Microsoft Project InnerEye is state-of-the-art research that can potentially change the world.

This project aims to study the brain, specifically the brain's neurological system, to better understand how it functions. The aim of this project is to eventually be able to use artificial intelligence to diagnose and treat various neurological diseases.

The college students' (or is it professor's?) nightmare. Whether you are a content manager or a teacher grading essays, you have the same problem - the internet makes plagiarism easier.

There is a nigh unlimited amount of information and data out there, and less-than-scrupulous students and employees will readily take advantage of that.

Indeed, no human could compare and contrast somebody's essay with all the data out there. AIs are a whole different beast.

They can sift through an insane amount of information, compare it with the relevant text, and see if there is a match or not.

Furthermore, thanks to advancement and growth in this area, some tools can actually check sources in foreign languages, as well as images and audio.
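The corpus-scale comparison described above can be sketched with word-level "shingles" (overlapping word sequences) and a Jaccard overlap score. This is a minimal illustration, not any particular product's algorithm; the toy corpus, document names, and threshold are made up for the example.

```python
# Minimal sketch of text matching, the core idea behind an AI plagiarism
# checker. Toy corpus, names, and threshold are hypothetical.
def shingles(text, k=3):
    """Break text into overlapping k-word sequences ("shingles")."""
    words = text.lower().split()
    return {tuple(words[i:i + k]) for i in range(len(words) - k + 1)}

def similarity(a, b, k=3):
    """Jaccard similarity of two texts' shingle sets (0.0 to 1.0)."""
    sa, sb = shingles(a, k), shingles(b, k)
    if not sa or not sb:
        return 0.0
    return len(sa & sb) / len(sa | sb)

corpus = {
    "source_1": "machine learning lets computers learn patterns from data",
    "source_2": "the quick brown fox jumps over the lazy dog",
}
essay = "computers learn patterns from data using machine learning"

# Flag any corpus document whose overlap exceeds a threshold.
matches = {doc: similarity(essay, text)
           for doc, text in corpus.items()
           if similarity(essay, text) > 0.1}
```

Real tools scale the same idea with inverted indexes and fingerprinting so one essay can be checked against billions of documents rather than two.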

You might have noticed that media recommendations on certain platforms are getting better and better, Netflix, YouTube, and Spotify being just three examples. You can thank AIs and machine learning for that.

The three platforms we mentioned take into account what you have already seen and liked. That's the easy part. Then, they compare and contrast it with thousands, if not tens of thousands, of pieces of media. They essentially learn from the data you provide, and then use their own database to provide you with content that best suits your needs.

Let's simplify this process for YouTube, just as an example.

The platform uses data such as tags, demographic data like your age or gender, as well as the same data of people consuming other pieces of media. Then, it mixes and matches, giving you your suggestions.
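The mix-and-match step can be illustrated with user-based collaborative filtering: score how similar other users' tastes are to yours, then suggest what the closest match liked that you haven't seen. This is a hedged sketch of the general technique, not YouTube's actual system; the ratings and user names are invented.

```python
import math

# Hypothetical sketch of user-based collaborative filtering, one common
# way a platform can match your history against other viewers'.
ratings = {  # user -> {item: rating}; toy data
    "you":   {"cats": 5, "tech": 4},
    "user2": {"cats": 5, "tech": 5, "cooking": 4},
    "user3": {"sports": 5, "news": 4},
}

def cosine(u, v):
    """Cosine similarity between two sparse rating dicts."""
    shared = set(u) & set(v)
    dot = sum(u[i] * v[i] for i in shared)
    if dot == 0:
        return 0.0
    norm_u = math.sqrt(sum(x * x for x in u.values()))
    norm_v = math.sqrt(sum(x * x for x in v.values()))
    return dot / (norm_u * norm_v)

def recommend(target, ratings):
    """Suggest items the most similar user rated that the target hasn't seen."""
    me = ratings[target]
    others = [(cosine(me, r), user) for user, r in ratings.items() if user != target]
    _, best_user = max(others)  # pick the highest-similarity user
    return [item for item in ratings[best_user] if item not in me]
```

Here `recommend("you", ratings)` surfaces "cooking", because user2's tastes overlap most with yours and that is the one item they rated that you haven't.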

Today, many larger banks give you the option of depositing checks through your smartphone. Instead of actually walking to a bank, you can do it with just a couple of taps.

Besides the obvious safeguards when it comes to accessing your bank account through your phone, a check also requires your signature.

Now banks use AIs and machine learning software to read your handwriting, compare it with the signature you gave to the bank before, and safely use it to approve a check.

In general, machine learning and AI tech speeds up most operations done by software in a bank. This all leads to the more efficient execution of tasks, decreasing wait times and cost.

And while we are on the subject of banking, let's talk about fraud for a little bit. A bank processes a huge number of transactions every day. Tracking and analyzing all of that is impossible for a regular human being.

Furthermore, how fraudulent transactions look changes from day to day. With AI and machine learning algorithms, you can have thousands of transactions analyzed in a second. You can also have them learn, figure out what problematic transactions can look like, and prepare for future issues.

Next, whenever you apply for a loan or maybe get a credit card, a bank needs to check your application.

Weighing multiple factors, like your credit score and financial history, can now be handled by software. This leads to shorter approval wait times and a lower margin for error.

Many businesses are using AI, specifically chatbots, as a way for their customers to interact with them.

Chatbots are often used as a customer service option for companies that do not have enough staff available at any given time to answer questions or respond to inquiries.

By using chatbots, these companies can free up staff time for other tasks while still getting important information from their customers.

These are a godsend during heavy traffic times, like Black Friday or Cyber Monday. They can save your company from getting overwhelmed with questions, allowing you to serve your customers much better.

Now, this is something we can all be thankful for - spam filters.

A typical spam filter has a number of rules and algorithms that minimize the amount of spam that can reach you. This not only saves you from annoying ads and Nigerian princes, but it also helps against credit card fraud, identity theft, and malware.

Now, what makes a good spam filter effective is the AI running it. The AI behind the filter uses email metadata, watches for specific words or phrases, and weighs other signals, all for the purpose of filtering out spam.
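A stripped-down version of such a filter can be sketched as a weighted keyword score combined with a metadata signal. The phrase list, weights, and threshold here are invented for illustration; real filters learn these signals from millions of labeled messages rather than hard-coding them.

```python
# Illustrative rule-plus-score spam filter. Keyword weights and the
# threshold are hypothetical; real filters learn them from labeled mail.
SPAM_SIGNALS = {
    "winner": 3, "free money": 4, "click here": 2,
    "urgent": 2, "prince": 3, "wire transfer": 3,
}

def spam_score(subject, body, sender_in_contacts=False):
    """Sum keyword weights found in the message; trust known senders."""
    text = f"{subject} {body}".lower()
    score = sum(w for phrase, w in SPAM_SIGNALS.items() if phrase in text)
    if sender_in_contacts:  # metadata signal: known sender lowers the score
        score -= 5
    return score

def is_spam(subject, body, sender_in_contacts=False, threshold=4):
    return spam_score(subject, body, sender_in_contacts) >= threshold

is_spam("URGENT: you are a winner", "Click here for free money")          # spam
is_spam("Lunch tomorrow?", "See you at noon", sender_in_contacts=True)    # not spam
```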

This everyday AI aspect got really popular through Netflix.

Namely - you might have noticed that a lot of thumbnails on websites and certain streaming apps have been replaced by short videos. One of the main reasons this got so popular is AI and machine learning.

Instead of having editors spend hundreds of hours shortening, filtering, and cutting longer videos into three-second clips, the AI does it for you. It analyzes hundreds of hours of content and then successfully summarizes it into a short bit of media.

AI also has potential in more unexpected areas, such as cooking.

A company called Rasa has developed an AI system that analyzes food and then recommends recipes based on what you have in your fridge and pantry. This type of AI is a great way for people who enjoy cooking but don't want to spend too much time planning out meals ahead of time.

If there is one thing we can say about AI and machine learning, it is that they make every tech they come in contact with more effective and powerful. Facial recognition is no different.

There are now many apps that use AI for their facial recognition needs. For example, Snapchat uses AI tech to apply face filters by actually recognizing the visual information presented as a human face.

Facebook can now identify faces in specific photos and invite people to tag themselves or their friends.

And, of course, think about unlocking your phone with your face. Well, it needs AI and machine learning to function.

Let's take Apple Face ID as an example. When you are setting it up, it scans your face and projects roughly thirty thousand dots onto it. It uses these dots as markers to help it recognize your face from many different angles.

This allows you to unlock your phone with your face in many different situations and lighting environments while at the same time preventing somebody else from doing the same.

The future is now. AI technology will only continue to develop, to grow and to become more and more vital for every industry and almost every aspect of our everyday lives. If the above examples are to be believed, it's only a matter of time.

Artificial intelligence will continue developing and being present in new areas of our lives in the future. As more innovative applications come out, we'll see more ways that AI can make our lives easier and more productive!

Read this article:
12 examples of artificial intelligence in everyday life - ITProPortal

Learning grammars of molecules to build them in the lab – The Hindu

Researchers generate molecular structures using machine learning algorithms, trained on smaller datasets

We think of molecules as occurring in nature. Large macromolecules lead us to the basis of life. The twentieth century gave us new materials synthesised in the lab. We can now have designer molecules, where we formulate a wish list of properties for a material (say, desired tensile strength as well as flexibility) and seek to not merely discover, but also construct, molecules that exhibit such properties. Generating molecules computationally involves the use of Artificial Intelligence (AI) and machine learning algorithms that require large datasets to train on. Moreover, the molecules thus designed may be hard to synthesise. So, the challenge is to circumvent these shortfalls.

Now, researchers from Massachusetts Institute of Technology (MIT) and International Business Machines (IBM) have together devised a method to generate molecules computationally which combines the power of machine learning with what are called graph grammars. This approach requires much smaller datasets (for example, about 100 samples in place of 81,000, as the researchers mention) and builds up the molecules in a bottom-up approach. The group has demonstrated this method on the naphthalene diisocyanate molecule in a paper that has been reviewed and accepted for presentation at the International Conference on Learning Representations (ICLR 2022).

Artificial intelligence (AI) techniques, especially the use of machine learning algorithms, are in vogue today to find new molecular structures. These methods require tens of thousands of samples to train the neural networks. Also, the designed molecules may not be physically synthesisable. Ensuring synthesisability in these methods may need the incorporation of chemical knowledge, and extracting such knowledge from datasets is a significant challenge.

Chemical datasets with required properties may be very small in number. For instance, some researchers reported in 2019 that datasets on polyurethane property prediction have as few as 20 samples.

If we surmount all these challenges, there is a further problem with typical machine learning algorithms, which is that we cannot explain their results. That is, after discovering a molecule, we cannot figure out how we came up with it. The implication is that if we slightly change the desired properties, we may need to search all over again. Explainable AI is considered one of the grand challenges of contemporary AI research.

One alternative to such deep learning methods is the use of formal grammars. Grammar, in the context of languages, provides rules for how sentences can be constructed from words. We can design chemical grammars that specify rules for constructing molecules from atoms. In the last few years, several research teams have built such grammars. While this approach is promising, it calls for extensive expertise in chemistry, and after the grammar is built, incorporating properties from datasets, or optimisation, is hard.

Here, the researchers use mathematical objects called graph grammars for this purpose.

What mathematicians call graphs are networks or webs with nodes and edges between them. In this approach, a molecule is represented as a graph where the nodes are strings of atoms and edges are chemical bonds. A grammar for such structures tells us how to replace a string in a node with a whole molecular structure. Thus, parsing a structure means contracting some substructure; we keep doing this repeatedly until we get a single node.
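The repeated contraction the passage describes can be illustrated with a toy grammar over a linear chain of symbols (a real molecule is a general graph, and this is not the MIT-IBM algorithm itself; the rules and symbols below are invented for illustration).

```python
# Toy illustration of parsing by contraction: repeatedly replace a known
# substructure with a single node until one symbol remains. The grammar
# rules and the linear-chain representation are hypothetical.
RULES = {  # substructure (adjacent pair) -> contracted nonterminal
    ("C", "H"): "CH",
    ("CH", "CH"): "ring_half",
    ("ring_half", "ring_half"): "ring",
}

def contract_once(nodes, rules):
    """Find the first adjacent pair matching a rule and merge it."""
    for i in range(len(nodes) - 1):
        pair = (nodes[i], nodes[i + 1])
        if pair in rules:
            return nodes[:i] + [rules[pair]] + nodes[i + 2:]
    return nodes  # no rule applies

def parse(nodes, rules):
    """Contract repeatedly until a single node remains or no rule applies."""
    while len(nodes) > 1:
        merged = contract_once(nodes, rules)
        if merged == nodes:
            break
        nodes = merged
    return nodes

parse(["C", "H", "C", "H", "C", "H", "C", "H"], RULES)  # collapses to one node
```

In the actual method, the learning component chooses which substructures to contract and thereby induces the rules themselves from example molecules, rather than starting from a hand-written rule table as here.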

The model uses machine learning techniques to learn graph grammars from datasets. The algorithm takes as input a set of molecular structures and a set of evaluation metrics (for example, synthesisability).

The grammar is constructed bottom-up, creating rules by contractions; choosing which structures to contract is based on the learning component, a neural network which builds on the chemical information. The algorithm simultaneously performs multiple, randomised searches to obtain multiple grammars as candidates. It still needs to evaluate them, and this is done using the input metrics.

While the method has been demonstrated for use in building molecules, the applications could be far reaching, beyond chemistry.

(The writer is a computer scientist, formerly with The Institute of Mathematical Sciences, Chennai, and currently visiting professor at Azim Premji University, Bengaluru.)


Read the original:
Learning grammars of molecules to build them in the lab - The Hindu

Does this artificial intelligence think like a human? – MIT News

In machine learning, understanding why a model makes certain decisions is often just as important as whether those decisions are correct. For instance, a machine-learning model might correctly predict that a skin lesion is cancerous, but it could have done so using an unrelated blip on a clinical photo.

While tools exist to help experts make sense of a model's reasoning, often these methods only provide insights on one decision at a time, and each must be manually evaluated. Models are commonly trained using millions of data inputs, making it almost impossible for a human to evaluate enough decisions to identify patterns.

Now, researchers at MIT and IBM Research have created a method that enables a user to aggregate, sort, and rank these individual explanations to rapidly analyze a machine-learning model's behavior. Their technique, called Shared Interest, incorporates quantifiable metrics that compare how well a model's reasoning matches that of a human.

Shared Interest could help a user easily uncover concerning trends in a model's decision-making. For example, perhaps the model often becomes confused by distracting, irrelevant features, like background objects in photos. Aggregating these insights could help the user quickly and quantitatively determine whether a model is trustworthy and ready to be deployed in a real-world situation.

"In developing Shared Interest, our goal is to be able to scale up this analysis process so that you could understand on a more global level what your model's behavior is," says lead author Angie Boggust, a graduate student in the Visualization Group of the Computer Science and Artificial Intelligence Laboratory (CSAIL).

Boggust wrote the paper with her advisor, Arvind Satyanarayan, an assistant professor of computer science who leads the Visualization Group, as well as Benjamin Hoover and senior author Hendrik Strobelt, both of IBM Research. The paper will be presented at the Conference on Human Factors in Computing Systems.

Boggust began working on this project during a summer internship at IBM, under the mentorship of Strobelt. After returning to MIT, Boggust and Satyanarayan expanded on the project and continued the collaboration with Strobelt and Hoover, who helped deploy the case studies that show how the technique could be used in practice.

Human-AI alignment

Shared Interest leverages popular techniques that show how a machine-learning model made a specific decision, known as saliency methods. If the model is classifying images, saliency methods highlight areas of an image that are important to the model when it made its decision. These areas are visualized as a type of heatmap, called a saliency map, that is often overlaid on the original image. If the model classified the image as a dog, and the dog's head is highlighted, that means those pixels were important to the model when it decided the image contains a dog.

Shared Interest works by comparing saliency methods to ground-truth data. In an image dataset, ground-truth data are typically human-generated annotations that surround the relevant parts of each image. In the previous example, the box would surround the entire dog in the photo. When evaluating an image classification model, Shared Interest compares the model-generated saliency data and the human-generated ground-truth data for the same image to see how well they align.

The technique uses several metrics to quantify that alignment (or misalignment) and then sorts a particular decision into one of eight categories. The categories run the gamut from perfectly human-aligned (the model makes a correct prediction and the highlighted area in the saliency map is identical to the human-generated box) to completely distracted (the model makes an incorrect prediction and does not use any image features found in the human-generated box).
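A simplified version of such an alignment metric is intersection-over-union between the model's salient pixels and the human-drawn box. The three category names below are illustrative stand-ins, not the paper's full eight-category taxonomy, and the pixel-set representation is a simplification.

```python
# Hedged sketch of a saliency-vs-ground-truth alignment metric in the
# spirit of Shared Interest. Category names here are illustrative only.
def iou(saliency, ground_truth):
    """Intersection-over-union of two pixel-coordinate sets (0.0 to 1.0)."""
    s, g = set(saliency), set(ground_truth)
    return len(s & g) / len(s | g) if s | g else 0.0

def categorize(saliency, ground_truth, prediction_correct):
    """Sort one decision into a coarse alignment category."""
    overlap = iou(saliency, ground_truth)
    if prediction_correct and overlap == 1.0:
        return "human-aligned"      # right answer, same evidence
    if not prediction_correct and overlap == 0.0:
        return "distracted"         # wrong answer, unrelated evidence
    return "partially aligned"

box = {(r, c) for r in range(2) for c in range(2)}  # human-drawn box
categorize(box, box, prediction_correct=True)        # fully aligned
categorize({(9, 9)}, box, prediction_correct=False)  # distracted
```

Computing such a score for every image in a dataset is what lets the decisions be sorted and aggregated, as the quote below describes.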

"On one end of the spectrum, your model made the decision for the exact same reason a human did, and on the other end of the spectrum, your model and the human are making this decision for totally different reasons. By quantifying that for all the images in your dataset, you can use that quantification to sort through them," Boggust explains.

The technique works similarly with text-based data, where key words are highlighted instead of image regions.

Rapid analysis

The researchers used three case studies to show how Shared Interest could be useful to both nonexperts and machine-learning researchers.

In the first case study, they used Shared Interest to help a dermatologist determine if he should trust a machine-learning model designed to help diagnose cancer from photos of skin lesions. Shared Interest enabled the dermatologist to quickly see examples of the model's correct and incorrect predictions. Ultimately, the dermatologist decided he could not trust the model because it made too many predictions based on image artifacts, rather than actual lesions.

"The value here is that using Shared Interest, we are able to see these patterns emerge in our model's behavior. In about half an hour, the dermatologist was able to make a confident decision of whether or not to trust the model and whether or not to deploy it," Boggust says.

In the second case study, they worked with a machine-learning researcher to show how Shared Interest can evaluate a particular saliency method by revealing previously unknown pitfalls in the model. Their technique enabled the researcher to analyze thousands of correct and incorrect decisions in a fraction of the time required by typical manual methods.

In the third case study, they used Shared Interest to dive deeper into a specific image classification example. By manipulating the ground-truth area of the image, they were able to conduct a what-if analysis to see which image features were most important for particular predictions.

The researchers were impressed by how well Shared Interest performed in these case studies, but Boggust cautions that the technique is only as good as the saliency methods it is based upon. If those techniques contain bias or are inaccurate, then Shared Interest will inherit those limitations.

In the future, the researchers want to apply Shared Interest to different types of data, particularly tabular data which is used in medical records. They also want to use Shared Interest to help improve current saliency techniques. Boggust hopes this research inspires more work that seeks to quantify machine-learning model behavior in ways that make sense to humans.

This work is funded, in part, by the MIT-IBM Watson AI Lab, the United States Air Force Research Laboratory, and the United States Air Force Artificial Intelligence Accelerator.

See more here:
Does this artificial intelligence think like a human? - MIT News

Stanford center uses AI and machine learning to expand data on women’s and children’s health, director says – The Stanford Daily

Stanford's Center for Artificial Intelligence in Medicine and Imaging (AIMI) is increasing engagement around the use of artificial intelligence (AI) and machine learning to build a better understanding of data on women's and children's health, according to AIMI Director and radiology professor Curt Langlotz.

Langlotz explained that, while AIMI initially focused on applying AI to medical imaging, it has since expanded its focus to applications of AI for other types of data, such as electronic health records.

"Specifically, the center conducts interdisciplinary machine learning research that optimizes how data of all forms are used to promote health," Langlotz said during a Monday event hosted by the Maternal and Child Health Research Institute (MCHRI). "And that interdisciplinary flavor is in our DNA."

The center now has over 140 affiliated faculty across 20 departments, primarily housed in the engineering department and the school of medicine at Stanford, according to Langlotz.

AIMI has four main pillars: building an infrastructure for data science research, facilitating interdisciplinary collaborations, engaging the community and providing funding.

The center provides funding predominantly through a series of grant programs. Langlotz noted that the center awarded seven $75,000 grants in 2019 to fund mostly imaging projects, but it has since diversified funding to go toward projects investigating other forms of data, such as electronic health records. AIMI also collaborated with the Human-Centered Institute for Artificial Intelligence (HAI) in 2021 to give out six $200,000 grants, he added.

Outside of funding, AIMI hosts a virtual symposium on technology and health annually and has a health-policy committee that informs policymakers on the intersection between AI and healthcare. Furthermore, the center pairs industry partners with laboratories to work on larger research projects of mutual interest as part of the only industry affiliate program for the school of medicine, Langlotz added.

"Industry often has expertise that we don't, so they may have expertise on bringing products to markets, as they may know what customers are looking for," Langlotz said. "And if we're building these kinds of algorithms, we really would like them to ultimately reach patients."

Heike Daldrup-Link, a professor of radiology and pediatrics, and Alison Callahan, a research scientist at the Center for Biomedical Informatics, shared their research funded by the AIMI Center that rests at the intersection of computer science and medicine.

Daldrup-Link's research involves analyzing children's responses to lymphoma cancer therapy with a model that examines tumor sites using positron emission tomography (PET) scans. These scans reveal the metabolic processes occurring within tissues and organs, according to Daldrup-Link. The scans also serve as a good source for building algorithms because there are at least 270,000 scans per year from lymphoma patients, resulting in a large amount of available data.

Callahan is building AI models to extract information from electronic health records to learn more about pregnancy and postnatal health outcomes. She explained that much of the health data available from records is currently unstructured, meaning it does not conform to a database or simple model. Still, "AI methods can really shine in extracting valuable information from unstructured content like clinical texts or notes," she said.

Callahan and Daldrup-Link are just two examples of researchers who use AI and machine learning methods to produce novel research on women's and children's health. Developing new methods such as these is important in solving complex problems in healthcare, according to Langlotz.

"If you're working on difficult and interesting applied problems that are clinically important, you're likely to encounter the need to develop new and interesting methods," Langlotz said. "And that's proven true for us."

Original post:
Stanford center uses AI and machine learning to expand data on women's and children's health, director says - The Stanford Daily