Archive for the ‘Machine Learning’ Category

NORCAT partners with Vector Institute on AI training program – MINING.COM – MINING.com

"Vector's mission to develop and sustain responsible AI-based innovation to help foster economic growth and improve the lives of Canadians is aligned with NORCAT's goal as a regional innovation centre to accelerate the growth of innovative companies that will drive future economic and social prosperity for Canada," said NORCAT CEO Don Duval in a press release.

"We are proud to collaborate with the Vector Institute to create AI-based innovation, growth and productivity in Canada by focusing on the transformative potential of machine and deep learning," he said. "Together, we will work to advance AI research and drive its application, adoption and commercialization in the global mining industry."

This partnership will allow NORCAT to offer its portfolio of mining technology clients access to Vector's FastLane program. Launched in 2021, the program is tailored to the needs of Canada's growth-oriented small and medium-sized enterprises (SMEs), delivering leading-edge AI knowledge transfer that allows this unique community to capitalize on the transformative power of artificial intelligence.

In addition to its talent recruitment and workforce development initiatives, Vector works with its industry community through the FastLane program to deliver training and knowledge transfer that improves products and processes. This includes an expanded suite of programs, training courses and collaborative projects that will enable participants to raise their AI fluency, develop a deeper understanding of AI's business value, experiment with applying AI models to their real-world challenges, and acquire the skills to compete and innovate using AI.

"AI applies to every sector of our economy and represents a once-in-a-generation opportunity to improve the lives of Canadians," said Garth Gibson, president and CEO of the Vector Institute. "Through the FastLane program, Vector's partnership with NORCAT will help the Canadian mining industry do just that by driving innovation, upskilling workers and recruiting world-class talent."

More information is available here.

Read the rest here:
NORCAT partners with Vector Institute on AI training program - MINING.COM - MINING.com

Johns Hopkins and Amazon collaborate to explore transformative power of AI – The Hub at Johns Hopkins

By Lisa Ercolano

Johns Hopkins University and Amazon are teaming up to harness the power of artificial intelligence to transform the way humans interact online and with the world. The new JHU + Amazon Initiative for Interactive AI, housed in the Johns Hopkins Whiting School of Engineering, will leverage the university's world-class expertise in interactive AI to advance groundbreaking technologies in machine learning, computer vision, natural language understanding, and speech processing; democratize access to the benefits of AI innovations; and broaden participation in research from diverse, interdisciplinary scholars and other innovators.

Amazon's investment will span five years, comprising doctoral fellowships, sponsored research funding, gift funding, and community projects. Sanjeev Khudanpur, an associate professor of electrical and computer engineering at the Whiting School, will serve as the initiative's founding director. Khudanpur is an expert in the application of information-theoretic methods to human language technologies such as automatic speech recognition, machine translation, and natural language processing.

"Hopkins is already renowned for its pioneering work in these areas of AI, and working with Amazon researchers will accelerate the timetable for the next big strides," Khudanpur said. "I often compare humans and AI to Luke Skywalker and R2D2 in Star Wars: They're able to accomplish amazing feats in a tiny X-wing fighter because they interact effectively to align their complementary strengths. I am very excited at the prospect of the Hopkins AI community coming together under the auspices of this initiative, and charting the future of transformational, interactive AI together with Amazon researchers,"

Ed Schlesinger, dean of the Whiting School, said, "We are very excited to work with Amazon in this new initiative. We value the challenges that they bring us and the life-changing potential of the solutions we will create together, and look forward to strengthening our work together over the coming years."

Amazon's funding will support a broad range of activities, including annual fellowships for doctoral students; research projects led by Hopkins Engineering faculty in collaboration with postdoctoral researchers, undergraduate and graduate students, and research staff; and events and activities, such as lectures, workshops, and competitions aimed at making AI activities more accessible to the general public in the Baltimore-Washington region.

Prem Natarajan, Alexa AI vice president of natural understanding, says the partnership underscores Amazon's commitment to addressing the greatest challenges in AI, democratizing access to the benefits of AI innovations, and broadening participation in research from diverse, interdisciplinary scholars and other innovators.

"This initiative brings together the top talent at Amazon and Johns Hopkins in a joint mission to drive groundbreaking advances in interactive and multimodal AI," Natarajan said. "These advances will power the next generation of interactive AI experiences across a wide variety of domainsfrom home productivity to entertainment to health."

The two organizations have teamed up in the past, with four Johns Hopkins faculty members joining Amazon as part of its Scholars program: Ozge Sahin, a professor of operations management and business analytics at the Johns Hopkins Carey Business School, in 2019, and in 2020, Gregory Hager, Mandell Bellmore Professor of Computer Science; René Vidal, Herschel Seder Professor of Biomedical Engineering and director of the Mathematical Institute for Data Science; and Marin Kobilarov, associate professor of mechanical engineering.

The new initiative will build on Hopkins Engineering's existing strengths in the areas of machine learning, computer vision, natural language understanding, and speech processing. Its Mathematical Institute for Data Science conducts cutting-edge research on the mathematical, statistical, and computational foundations of machine learning and computer vision. The Center for Imaging Science and the Laboratory for Computational Sensing and Robotics conduct fundamental and applied research in nearly every area of basic and applied computer vision. The university's Center for Language and Speech Processing, one of the largest and most influential academic research centers of its kind in the world, conducts research in acoustic processing, automatic speech recognition, cognitive modeling, computational linguistics, information extraction, machine translation, and text analysis. CLSP researchers conducted some of the foundational research that led to the development of digital voice assistants.

"AI has tremendous potential to enhance human abilities, and to reach it, AI of the future will interact with humans the same way we naturally interact with each other. What endeared Amazon Alexa to users was the effortlessness of the interaction. I envision that the research done under this initiative will make it possible for us to use much more powerful AI in equally effortless ways, regardless of our own physical limitations," Khudanpur said.

Hager, a director for Amazon Physical Retail, and Vidal, currently an Amazon Scholar in visual search and AR, were instrumental in helping Amazon and JHU establish the collaboration.

"Computer vision and machine learning are transforming the way in which humans shop, share content, and interact with each other," Vidal said. "This partnership will lead to new collaborations between JHU and Amazon scientists that will help translate cutting-edge advances in deep learning and visual recognition into algorithms that help humans interact with the world."

Seth Zonies, a director of business development for Johns Hopkins Technology Ventures, the university's commercialization and industry collaboration arm, said, "This collaboration represents the opportunity to harness academic ingenuity to address needs in society through industry collaboration. The engineering faculty at Johns Hopkins are committed to applied research, and Amazon is at the forefront of product development in this field. We expect this collaboration to result in deployable, high-impact innovation."

Read more:
Johns Hopkins and Amazon collaborate to explore transformative power of AI - The Hub at Johns Hopkins

Research Analyst / Associate / Fellow in Machine Learning and Artificial Intelligence job with NATIONAL UNIVERSITY OF SINGAPORE | 289568 – Times…

The Role

The Sustainable and Green Finance Institute (SGFIN) is a new university-level research institute in the National University of Singapore (NUS), jointly supported by the Monetary Authority of Singapore (MAS) and NUS. SGFIN aspires to develop deep research capabilities in sustainable and green finance, provide thought leadership in the sustainability space, and shape sustainability outcomes across the financial sector and the economy at large.

This role is ideally suited for those wishing to work in academic or industry research in quantitative analysis, particularly in the area of machine learning and artificial intelligence. The responsibilities of the role will include designing and developing various analytical frameworks to analyze structured, unstructured, and non-traditional data related to corporate financial, environmental, and social indicators.

There are no teaching obligations for this position, and the candidate will have the opportunity to develop their research portfolio.

Duties and Responsibilities

The successful candidate will be expected to assume the following responsibilities:

Qualifications

Covid-19 Message

At NUS, the health and safety of our staff and students is one of our utmost priorities, and COVID-19 vaccination supports our commitment to ensure the safety of our community and to make NUS as safe and welcoming as possible. Many of our roles require a significant amount of physical interaction with students, staff, and members of the public. Even for job roles that may be performed remotely, there will be instances where an on-campus presence is required.

In accordance with Singapore's legal requirements, unvaccinated workers will not be able to work on the NUS premises with effect from 15 January 2022. As such, job applicants will need to be fully COVID-19 vaccinated to secure successful employment with NUS.

Read the original here:
Research Analyst / Associate / Fellow in Machine Learning and Artificial Intelligence job with NATIONAL UNIVERSITY OF SINGAPORE | 289568 - Times...

12 examples of artificial intelligence in everyday life – ITProPortal

In the article below, you can check out twelve examples of AI being present in our everyday lives.

Artificial intelligence (AI) is growing in popularity, and it's not hard to see why. AI has the potential to be applied in many different ways, from cooking to healthcare.

Though artificial intelligence may be a buzzword today, tomorrow, it might just become a standard part of our everyday lives. In fact - it's already here.

Take self-driving cars, also known as autonomous vehicles. These cars use AI tech and machine learning to move around without the passenger having to take control at any time.

They work and continue to advance by using lots of sensor data, learning how to handle traffic and making real-time decisions.

Next, let's move on to something really ubiquitous - smart digital assistants. Here we are talking about Siri, Google Assistant, Alexa and Cortana.

We included them in our list because they can essentially listen and then respond to your commands, turning them into actions.

So, you hit up Siri and give her a command, like "call a friend." She analyzes what you said, sifts through all the background noise surrounding your speech, interprets your command, and actually does it, all in a couple of seconds.

The best part here is that these assistants are getting smarter and smarter, improving every stage of the command process we mentioned above. You don't have to be as specific with your commands as you were just a couple of years ago.

Furthermore, virtual assistants have become better and better at filtering useless background noise out of your actual commands.
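
To make those stages a little more concrete, here is a rough, hypothetical sketch in Python of a command pipeline: drop filler words, match the remaining words to an intent, and dispatch an action. The intents and actions here are invented for illustration and bear no relation to how Siri or any real assistant is actually built.

```python
# A toy command pipeline: normalize the transcript, drop filler words
# (the "background noise" of the text), match an intent by keyword,
# then dispatch an action. All intents and actions are hypothetical.
FILLER = {"hey", "um", "uh", "please", "a"}

INTENTS = {
    "call": {"call", "phone", "dial"},
    "weather": {"weather", "forecast"},
}

def interpret(transcript: str):
    words = [w for w in transcript.lower().split() if w not in FILLER]
    for intent, keywords in INTENTS.items():
        if any(w in keywords for w in words):
            args = [w for w in words if w not in keywords]
            return intent, args
    return "unknown", words

def dispatch(intent: str, args) -> str:
    if intent == "call":
        return f"Calling {' '.join(args)}..."
    if intent == "weather":
        return "Fetching today's forecast..."
    return "Sorry, I didn't catch that."

intent, args = interpret("Please call a friend")
print(dispatch(intent, args))   # -> "Calling friend..."
```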

One of the most well-known AI initiatives is a project run by Microsoft. It comes as no surprise that Microsoft is one of the top AI companies around (though it's definitely not the only one).

The Microsoft Project InnerEye is state-of-the-art research that can potentially change the world.

This project studies the brain, specifically its neurological system, to better understand how it functions, with the eventual aim of using artificial intelligence to diagnose and treat various neurological diseases.

The college student's (or is it the professor's?) nightmare. Whether you are a content manager or a teacher grading essays, you have the same problem - the internet makes plagiarism easier.

There is a nigh unlimited amount of information and data out there, and less-than-scrupulous students and employees will readily take advantage of that.

Indeed, no human could compare and contrast somebody's essay with all the data out there. AIs are a whole different beast.

They can sift through an insane amount of information, compare it with the relevant text, and see if there is a match or not.

Furthermore, thanks to advancement and growth in this area, some tools can actually check sources in foreign languages, as well as images and audio.
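
As a rough illustration of the matching step, here is a minimal Python sketch that compares two texts by the overlap of their word 3-grams (Jaccard similarity). Real plagiarism checkers use far larger indexes and more sophisticated matching, and the example texts below are made up.

```python
# Toy plagiarism check: compare documents by the overlap of their
# word 3-grams ("shingles") using Jaccard similarity.

def shingles(text: str, n: int = 3) -> set:
    words = text.lower().split()
    return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

def jaccard(a: set, b: set) -> float:
    return len(a & b) / len(a | b) if a | b else 0.0

essay = "machine learning lets computers learn patterns from data"
source = "computers learn patterns from data using statistics"

score = jaccard(shingles(essay), shingles(source))
print(f"similarity: {score:.2f}")  # a high score flags a likely match
```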

You might have noticed that media recommendations on certain platforms are getting better and better, Netflix, YouTube, and Spotify being just three examples. You can thank AIs and machine learning for that.

The three platforms we mentioned take into account what you have already seen and liked. That's the easy part. Then, they compare and contrast it with thousands, if not tens of thousands, of pieces of media. They essentially learn from the data you provide, and then use their own database to provide you with content that best suits your needs.

Let's simplify this process for YouTube, just as an example.

The platform uses data such as tags and demographic data like your age or gender, as well as the same data from people consuming other pieces of media. Then, it mixes and matches, giving you your suggestions.
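
For a sense of how this mixing and matching of viewers' histories can work, here is a toy user-based collaborative-filtering sketch in Python. The video names and watch histories are invented, and this is only an illustration of the idea, not how YouTube's actual recommender is built.

```python
# Minimal user-based collaborative filtering: score unseen videos by
# how similar other viewers are to you (cosine similarity over watch
# histories), then recommend the highest-scoring one.
import numpy as np

videos = ["cat_clip", "ml_lecture", "cooking_show", "game_review"]
# rows = users, columns = videos; 1 = watched and liked
history = np.array([
    [1, 1, 0, 0],   # you
    [1, 1, 1, 0],   # viewer A
    [0, 1, 1, 0],   # viewer B
    [1, 0, 0, 1],   # viewer C
])

you, others = history[0], history[1:]
sims = others @ you / (np.linalg.norm(others, axis=1) * np.linalg.norm(you))
scores = sims @ others                      # weight others' histories by similarity
scores[you == 1] = -np.inf                  # don't re-recommend what you've seen
print("recommended:", videos[int(np.argmax(scores))])
```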

Today, many larger banks give you the option of depositing checks through your smartphone. Instead of actually walking to a bank, you can do it with just a couple of taps.

Besides the obvious safeguards when it comes to accessing your bank account through your phone, a check also requires your signature.

Now, banks use AI and machine learning software to read your handwriting, compare it with the signature you gave the bank before, and safely use it to approve a check.

In general, machine learning and AI tech speed up most operations done by software in a bank. This all leads to more efficient execution of tasks, decreasing wait times and costs.

And while we are on the subject of banking, let's talk about fraud for a little bit. A bank processes a huge number of transactions every day. Tracking and analyzing all of that is impossible for a regular human being.

Furthermore, how fraudulent transactions look changes from day to day. With AI and machine learning algorithms, you can have thousands of transactions analyzed in a second. You can also have them learn, figure out what problematic transactions can look like, and prepare for future issues.
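
One common way to automate that kind of screening is anomaly detection. The sketch below uses scikit-learn's IsolationForest on fabricated transaction features (amount and hour of day); it is only an illustration of the idea, not any bank's system.

```python
# Flag unusual transactions with an IsolationForest anomaly detector.
# Features (amount, hour of day) and all data here are fabricated.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
# "normal" transactions: modest amounts, daytime hours
normal = np.column_stack([rng.normal(50, 15, 500), rng.normal(14, 3, 500)])
model = IsolationForest(contamination=0.01, random_state=0).fit(normal)

new_transactions = np.array([[45.0, 13.0],    # looks ordinary
                             [9500.0, 3.0]])  # huge amount at 3 a.m.
print(model.predict(new_transactions))        # 1 = normal, -1 = flagged
```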

Next, whenever you apply for a loan or maybe get a credit card, a bank needs to check your application.

Checking multiple factors, like your credit score and your financial history, can now be handled by software. This leads to shorter approval wait times and a lower margin for error.
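
Credit decisions like this are often framed as a classification problem. Here is a minimal, hypothetical sketch using logistic regression on synthetic applications (credit score and debt-to-income ratio); the approval rule used to generate the training labels is invented, and no real bank scores applicants this simply.

```python
# Toy credit-approval model: logistic regression over two made-up
# features (credit score, debt-to-income ratio) and synthetic labels.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
scores = rng.uniform(300, 850, 300)
dti = rng.uniform(0.0, 0.6, 300)
# invented approval rule, used only to generate training labels
approved = ((scores > 650) & (dti < 0.35)).astype(int)

X = np.column_stack([scores, dti])
model = LogisticRegression(max_iter=1000).fit(X, approved)

applicant = [[720, 0.25]]   # credit score 720, 25% debt-to-income
print("approval probability:", round(model.predict_proba(applicant)[0, 1], 2))
```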

Many businesses are using AI, specifically chatbots, as a way for their customers to interact with them.

Chatbots are often used as a customer service option for companies that do not have enough staff available at any given time to answer questions or respond to inquiries.

By using chatbots, these companies can free up staff time for other tasks while still getting important information from their customers.

These are a godsend during heavy traffic times, like Black Friday or Cyber Monday. They can save your company from getting overwhelmed with questions, allowing you to serve your customers much better.
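
At its simplest, such a bot can be little more than keyword matching against a small FAQ, as in the hypothetical sketch below. Production chatbots typically rely on trained intent classifiers instead, and the questions and answers here are made up.

```python
# A tiny rule-based support chatbot: match a question against keyword
# lists and return a canned answer, falling back to a human handoff.
# All questions and answers are invented for illustration.
import re

FAQ = {
    ("shipping", "delivery", "arrive"): "Orders usually arrive in 3-5 business days.",
    ("return", "refund"): "You can request a refund within 30 days of purchase.",
    ("hours", "open"): "Support is available 9am-6pm, Monday to Friday.",
}

def reply(message: str) -> str:
    words = set(re.findall(r"[a-z]+", message.lower()))
    for keywords, answer in FAQ.items():
        if words & set(keywords):
            return answer
    return "Let me connect you with a human agent."

print(reply("When will my order arrive?"))   # matches the shipping entry
print(reply("Do you price match?"))          # no match -> human handoff
```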

Now, this is something we can all be thankful for - spam filters.

A typical spam filter has a number of rules and algorithms that minimize the amount of spam that can reach you. This not only saves you from annoying ads and Nigerian princes, but it also helps against credit card fraud, identity theft, and malware.

Now, what makes a good spam filter effective is the AI running it. The AI behind the filter uses email metadata, keeps an eye on specific words or phrases, and focuses on certain signals, all for the purpose of filtering out spam.
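
A classic baseline for this kind of filtering is a bag-of-words Naive Bayes classifier. The sketch below trains one on a handful of invented messages with scikit-learn; real filters combine many more signals, including the metadata mentioned above.

```python
# A classic spam-filter baseline: bag-of-words features + Naive Bayes.
# The training messages and labels below are invented for illustration.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

messages = [
    "win a free prize now, click here",
    "limited offer, claim your reward today",
    "meeting moved to 3pm, see agenda attached",
    "can you review the draft report before friday",
]
labels = ["spam", "spam", "ham", "ham"]

spam_filter = make_pipeline(CountVectorizer(), MultinomialNB())
spam_filter.fit(messages, labels)

print(spam_filter.predict(["claim your free reward now",
                           "agenda for friday's meeting"]))
```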

This everyday AI aspect got really popular through Netflix.

Namely - you might have noticed that a lot of thumbnails on websites and certain streaming apps have been replaced by short videos. One of the main reasons this got so popular is AI and machine learning.

Instead of having editors spend hundreds of hours on shortening, filtering, and cutting up longer videos into three-second videos, the AI does it for you. It analyzes hundreds of hours of content and then successfully summarizes it into a short bit of media.

AI also has potential in more unexpected areas, such as cooking.

A company called Rasa has developed an AI system that analyzes food and then recommends recipes based on what you have in your fridge and pantry. This type of AI is a great help for people who enjoy cooking but don't want to spend too much time planning meals ahead of time.

If there is one thing we can say about AI and machine learning, it is that they make every tech they come in contact with more effective and powerful. Facial recognition is no different.

There are now many apps that use AI for their facial recognition needs. For example, Snapchat uses AI tech to apply face filters by actually recognizing the visual information presented as a human face.

Facebook can now identify faces in specific photos and invite people to tag themselves or their friends.

And, of course, think about unlocking your phone with your face. Well, it needs AI and machine learning to function.

Let's take Apple Face ID as an example. When you are setting it up, it scans your face and puts roughly thirty thousand dots on it. It uses these dots as markers to help it recognize your face from many different angles.

This allows you to unlock your phone with your face in many different situations and lighting environments while at the same time preventing somebody else from doing the same.
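
Apple has not published Face ID's internals, but a generic way to think about the matching step is comparing a freshly captured face embedding with the one stored at enrolment. In the sketch below, random vectors stand in for the embeddings a real face-recognition network would produce; the threshold is arbitrary.

```python
# Sketch of face matching by embedding distance: keep a reference
# vector from enrolment, then accept a new scan only if its embedding
# is close enough. Random vectors stand in for real face embeddings.
import numpy as np

rng = np.random.default_rng(42)
enrolled = rng.normal(size=128)                            # stored at setup
same_person = enrolled + rng.normal(scale=0.05, size=128)  # slight variation
stranger = rng.normal(size=128)

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

THRESHOLD = 0.8
for name, scan in [("same person", same_person), ("stranger", stranger)]:
    verdict = "unlocks" if cosine(enrolled, scan) > THRESHOLD else "rejected"
    print(name, verdict)
```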

The future is now. AI technology will only continue to develop, to grow and to become more and more vital for every industry and almost every aspect of our everyday lives. If the above examples are to be believed, it's only a matter of time.

Artificial intelligence will continue developing and being present in new areas of our lives in the future. As more innovative applications come out, we'll see more ways that AI can make our lives easier and more productive!

Read this article:
12 examples of artificial intelligence in everyday life - ITProPortal

Learning grammars of molecules to build them in the lab – The Hindu

Researchers generate molecular structures using machine learning algorithms, trained on smaller datasets

We think of molecules as occurring in nature. Large macromolecules lead us to the basis of life. The twentieth century gave us new materials synthesised in the lab. We can now have designer molecules, where we formulate a wish list of properties for a material (say, desired tensile strength as well as flexibility) and seek not merely to discover, but also to construct, molecules that exhibit such properties. Generating molecules computationally involves the use of Artificial Intelligence (AI) and machine learning algorithms that require large datasets to train on. Moreover, the molecules thus designed may be hard to synthesise. So, the challenge is to circumvent these shortfalls.

Now, researchers from the Massachusetts Institute of Technology (MIT) and International Business Machines (IBM) have together devised a method to generate molecules computationally which combines the power of machine learning with what are called graph grammars. This approach requires much smaller datasets (for example, about 100 samples in place of 81,000, as the researchers mention) and builds up the molecules in a bottom-up approach. The group has demonstrated this method on the naphthalene diisocyanate molecule in a paper that has been reviewed and accepted for presentation at the International Conference on Learning Representations (ICLR 2022).

Artificial intelligence (AI) techniques, especially the use of machine learning algorithms, are in vogue today to find new molecular structures. These methods require tens of thousands of samples to train the neural networks. Also, the designed molecules may not be physically synthesisable. Ensuring synthesisability in these methods may need the incorporation of chemical knowledge, and extracting such knowledge from datasets is a significant challenge.

Chemical datasets with required properties may be very small in number. For instance, some researchers reported in 2019 that datasets on polyurethane property prediction have as few as 20 samples.

Even if we surmount all these challenges, there is a further problem with typical machine learning algorithms: we cannot explain their results. That is, after discovering a molecule, we cannot figure out how we came up with it. The implication is that if we slightly change the desired properties, we may need to search all over again. Explainable AI is considered one of the grand challenges of contemporary AI research.

One alternative to such deep learning methods is the use of formal grammars. Grammar, in the context of languages, provides rules for how sentences can be constructed from words. We can design chemical grammars that specify rules for constructing molecules from atoms. In the last few years, several research teams have built such grammars. While this approach is promising, it calls for extensive expertise in chemistry, and after the grammar is built, incorporating properties from datasets, or optimisation, is hard.

Here, the researchers use mathematical objects called graph grammars for this purpose.

What mathematicians call graphs are networks or webs with nodes and edges between them. In this approach, a molecule is represented as a graph where the nodes are strings of atoms and edges are chemical bonds. A grammar for such structures tells us how to replace a string in a node with a whole molecular structure. Thus, parsing a structure means contracting some substructure; we keep doing this repeatedly until we get a single node.
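
As a toy illustration of parsing by contraction (not the authors' actual algorithm), the Python sketch below repeatedly merges the endpoints of an edge in a small, invented "molecular" graph until a single node remains; each merge corresponds to applying one grammar rule in reverse.

```python
# Toy illustration of parsing a molecular graph by repeated contraction:
# merge the endpoints of one edge at a time until one node remains.
# The fragment labels and bonds are invented; this is not the paper's code.

def contract(nodes, edges, edge):
    a, b = edge
    merged = a + "-" + b                    # new node label
    nodes = (nodes - {a, b}) | {merged}
    new_edges = set()
    for u, v in edges:
        if {u, v} == {a, b}:
            continue                        # drop the contracted edge
        u = merged if u in (a, b) else u
        v = merged if v in (a, b) else v
        if u != v:
            new_edges.add(tuple(sorted((u, v))))
    return nodes, new_edges

# A tiny "molecule": four fragments joined by three bonds
nodes = {"C6H4", "NCO", "OCN", "CH2"}
edges = {tuple(sorted(e)) for e in [("C6H4", "NCO"), ("C6H4", "OCN"), ("C6H4", "CH2")]}

while edges:                                # each contraction mirrors one grammar rule
    edge = sorted(edges)[0]
    print("contract:", edge)
    nodes, edges = contract(nodes, edges, edge)
print("parsed down to:", nodes)
```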

The model uses machine learning techniques to learn graph grammars from datasets. The algorithm takes as input a set of molecular structures and a set of evaluation metrics (for example, synthesisability).

The grammar is constructed bottom-up, creating rules by contractions; choosing which structures to contract is based on the learning component, a neural network which builds on the chemical information. The algorithm simultaneously performs multiple, randomised searches to obtain multiple grammars as candidates. It still needs to evaluate them, and this is done using the input metrics.
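
The outer loop can be pictured as several randomized passes, each proposing a candidate grammar, with the candidates then ranked by the supplied metrics. The sketch below is a heavily simplified stand-in: candidates are just random subsets of named rules, and the evaluation function is invented rather than the paper's synthesisability or property metrics.

```python
# Sketch of the outer search loop: run several randomized passes, each
# proposing a candidate "grammar" (an ordered subset of named rules),
# then keep the candidate that scores best under an evaluation metric.
# The rules and the metric are invented stand-ins.
import random

RULES = ["ring+isocyanate", "ring+methylene", "chain+chain", "ring+ring"]

def random_candidate(rng):
    rules = RULES[:]
    rng.shuffle(rules)
    return rules[: rng.randint(2, len(rules))]

def evaluate(candidate):
    # stand-in metric: favour compact rule sets that keep ring+isocyanate
    return ("ring+isocyanate" in candidate) + 1.0 / len(candidate)

rng = random.Random(0)
candidates = [random_candidate(rng) for _ in range(10)]   # parallel random searches
best = max(candidates, key=evaluate)
print("best candidate grammar:", best, "score:", round(evaluate(best), 2))
```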

While the method has been demonstrated for use in building molecules, the applications could be far reaching, beyond chemistry.

(The writer is a computer scientist, formerly with The Institute of Mathematical Sciences, Chennai, and currently visiting professor at Azim Premji University, Bengaluru.)

AI techniques used earlier required tens of thousands of samples to train the neural networks. Also, the designed molecules were not always physically synthesisable.

Read the original:
Learning grammars of molecules to build them in the lab - The Hindu