Archive for the ‘Machine Learning’ Category

When It Comes to AI, Can We Ditch the Datasets? Using Synthetic Data for Training Machine-Learning Models – SciTechDaily

A machine-learning model for image classification that's trained using synthetic data can rival one trained on the real thing, a study shows.

Huge amounts of data are needed to train machine-learning models to perform image classification tasks, such as identifying damage in satellite photos following a natural disaster. However, these data are not always easy to come by. Datasets may cost millions of dollars to generate, if usable data exist in the first place, and even the best datasets often contain biases that negatively impact a model's performance.

To circumvent some of the problems presented by datasets, MIT researchers developed a method for training a machine learning model that, rather than using a dataset, uses a special type of machine-learning model to generate extremely realistic synthetic data that can train another model for downstream vision tasks.

Their results show that a contrastive representation learning model trained using only these synthetic data is able to learn visual representations that rival or even outperform those learned from real data.

MIT researchers have demonstrated the use of a generative machine-learning model to create synthetic data, based on real data, that can be used to train another model for image classification. This image shows examples of the generative model's transformation methods. Credit: Courtesy of the researchers

This special machine-learning model, known as a generative model, requires far less memory to store or share than a dataset. Using synthetic data also has the potential to sidestep some concerns around privacy and usage rights that limit how some real data can be distributed. A generative model could also be edited to remove certain attributes, like race or gender, which could address some biases that exist in traditional datasets.

"We knew that this method should eventually work; we just needed to wait for these generative models to get better and better. But we were especially pleased when we showed that this method sometimes does even better than the real thing," says Ali Jahanian, a research scientist in the Computer Science and Artificial Intelligence Laboratory (CSAIL) and lead author of the paper.

Jahanian wrote the paper with CSAIL grad students Xavier Puig and Yonglong Tian, and senior author Phillip Isola, an assistant professor in the Department of Electrical Engineering and Computer Science. The research will be presented at the International Conference on Learning Representations.

Once a generative model has been trained on real data, it can generate synthetic data that are so realistic they are nearly indistinguishable from the real thing. The training process involves showing the generative model millions of images that contain objects in a particular class (like cars or cats), and then it learns what a car or cat looks like so it can generate similar objects.

Essentially by flipping a switch, researchers can use a pretrained generative model to output a steady stream of unique, realistic images that are based on those in the model's training dataset, Jahanian says.

But generative models are even more useful because they learn how to transform the underlying data on which they are trained, he says. If the model is trained on images of cars, it can imagine how a car would look in different situations, including situations it did not see during training, and then output images that show the car in unique poses, colors, or sizes.
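
To make the idea concrete, here is a minimal sketch of how such sampling and latent-space transformation can look in code. The tiny generator below is a stand-in for a real pretrained model, and the perturbation scheme is illustrative rather than the paper's exact method:

```python
# Minimal sketch: sampling "views" from a generative model's latent space.
# TinyGenerator is a stand-in; in practice G would be a pretrained GAN
# downloaded from a model repository.
import torch
import torch.nn as nn

LATENT_DIM = 128

class TinyGenerator(nn.Module):
    """Stand-in for a pretrained generator G: latent vector z -> image."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(LATENT_DIM, 256), nn.ReLU(),
            nn.Linear(256, 3 * 32 * 32), nn.Tanh(),
        )

    def forward(self, z):
        return self.net(z).view(-1, 3, 32, 32)

G = TinyGenerator().eval()

# "Flipping the switch": every latent sample yields a new synthetic image.
z = torch.randn(16, LATENT_DIM)
with torch.no_grad():
    images = G(z)                       # a fresh batch of synthetic images

# Latent transformation: small steps around z act like changes in pose,
# color, or size, producing different "views" of the same underlying object.
sigma = 0.3                             # strength of the latent perturbation
z_view = z + sigma * torch.randn_like(z)
with torch.no_grad():
    image_views = G(z_view)             # transformed views of the same samples
print(images.shape, image_views.shape)
```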

Having multiple views of the same image is important for a technique called contrastive learning, where a machine-learning model is shown many unlabeled images to learn which pairs are similar or different.

The researchers connected a pretrained generative model to a contrastive learning model in a way that allowed the two models to work together automatically. The contrastive learner could tell the generative model to produce different views of an object, and then learn to identify that object from multiple angles, Jahanian explains.

"This was like connecting two building blocks. Because the generative model can give us different views of the same thing, it can help the contrastive method to learn better representations," he says.
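
A hedged sketch of the contrastive step follows. The InfoNCE-style loss shown is the standard objective from methods such as SimCLR; the paper's exact setup may differ in detail:

```python
# Two latent-space "views" of the same sample are pulled together in
# representation space, while views of different samples are pushed apart.
import torch
import torch.nn.functional as F

def info_nce(h1, h2, temperature=0.2):
    """h1[i] and h2[i] are representations of two views of sample i."""
    h1, h2 = F.normalize(h1, dim=1), F.normalize(h2, dim=1)
    logits = h1 @ h2.t() / temperature   # pairwise similarities
    labels = torch.arange(h1.size(0))    # positives sit on the diagonal
    return F.cross_entropy(logits, labels)

# Toy usage with random "representations"; in the full pipeline h1 and h2
# would come from an encoder applied to G(z) and G(z + noise).
h1, h2 = torch.randn(16, 64), torch.randn(16, 64)
print(float(info_nce(h1, h2)))
```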

The researchers compared their method to several other image classification models that were trained using real data and found that their method performed as well as, and sometimes better than, the other models.

One advantage of using a generative model is that it can, in theory, create an infinite number of samples. So, the researchers also studied how the number of samples influenced the model's performance. They found that, in some instances, generating larger numbers of unique samples led to additional improvements.
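
In code, this "infinite dataset" property is just a loop that keeps sampling. A minimal sketch, reusing the stand-in generator G from the earlier snippet:

```python
import torch

def synthetic_batches(G, batch_size=64, latent_dim=128):
    """Yield an endless stream of synthetic image batches from generator G."""
    while True:                               # no fixed dataset size
        z = torch.randn(batch_size, latent_dim)
        with torch.no_grad():
            yield G(z)

# To study how sample count affects downstream accuracy, simply draw more
# batches (train_step is hypothetical):
# for step, batch in zip(range(10_000), synthetic_batches(G)):
#     train_step(batch)
```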

"The cool thing about these generative models is that someone else trained them for you. You can find them in online repositories, so everyone can use them. And you don't need to intervene in the model to get good representations," Jahanian says.
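
As one concrete illustration of such a repository: at the time of writing, Facebook Research's pytorch_GAN_zoo publishes pretrained GANs through PyTorch Hub. A hedged sketch, with repository and model names as documented by that project (availability may change):

```python
# Load a pretrained progressive GAN (PGAN) from PyTorch Hub and sample from it.
import torch

model = torch.hub.load('facebookresearch/pytorch_GAN_zoo:hub', 'PGAN',
                       model_name='celebAHQ-512', pretrained=True, useGPU=False)

noise, _ = model.buildNoiseData(4)   # four random latent vectors
with torch.no_grad():
    images = model.test(noise)       # four synthetic face images
print(images.shape)
```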

But he cautions that there are some limitations to using generative models. In some cases, these models can reveal source data, which can pose privacy risks, and they could amplify biases in the datasets they are trained on if they aren't properly audited.

He and his collaborators plan to address those limitations in future work. Another area they want to explore is using this technique to generate corner cases that could improve machine-learning models. Corner cases often can't be learned from real data. For instance, if researchers are training a computer vision model for a self-driving car, real data wouldn't contain examples of a dog and its owner running down a highway, so the model would never learn what to do in this situation. Generating that corner-case data synthetically could improve the performance of machine-learning models in some high-stakes situations.

The researchers also want to continue improving generative models so they can compose images that are even more sophisticated, he says.

Reference: "Generative Models as a Data Source for Multiview Representation Learning" by Ali Jahanian, Xavier Puig, Yonglong Tian and Phillip Isola (PDF).

This research was supported, in part, by the MIT-IBM Watson AI Lab, the United States Air Force Research Laboratory, and the United States Air Force Artificial Intelligence Accelerator.

Original post:
When It Comes to AI, Can We Ditch the Datasets? Using Synthetic Data for Training Machine-Learning Models - SciTechDaily

NORCAT partners with Vector Institute on AI training program – MINING.com

"Vector's mission to develop and sustain responsible AI-based innovation to help foster economic growth and improve the lives of Canadians is aligned with NORCAT's goal as a regional innovation centre to accelerate the growth of innovative companies that will drive future economic and social prosperity for Canada," said NORCAT CEO Don Duval in a press release.

"We are proud to collaborate with the Vector Institute to create AI-based innovation, growth and productivity in Canada by focusing on the transformative potential of machine and deep learning," he said. "Together, we will work to advance AI research and drive its application, adoption and commercialization in the global mining industry."

This partnership will allow NORCAT to offer its portfolio of mining technology clients access to Vector's FastLane program. Launched in 2021, the program is tailored to the needs of Canada's growth-oriented small- and medium-sized enterprises (SMEs), delivering leading-edge AI knowledge transfer that allows this unique community to capitalize on the transformative power of artificial intelligence.

In addition to its talent recruitment and workforce development initiatives, Vector works with its industry community through the FastLane program to deliver training and knowledge transfer that improve products and processes. This includes an expanded suite of programs, training courses and collaborative projects that enable participants to raise their AI fluency, develop a deeper understanding of AI's business value, experiment with applying AI models to their real-world challenges and acquire the skills to compete and innovate using AI.

"AI applies to every sector of our economy and represents a once-in-a-generation opportunity to improve the lives of Canadians," said Garth Gibson, president and CEO of the Vector Institute. "Through the FastLane program, Vector's partnership with NORCAT will help the Canadian mining industry do just that by driving innovation, upskilling workers and recruiting world-class talent."

More information is available here.

Read the rest here:
NORCAT partners with Vector Institute on AI training program - MINING.com

Johns Hopkins and Amazon collaborate to explore transformative power of AI – The Hub at Johns Hopkins

By Lisa Ercolano

Johns Hopkins University and Amazon are teaming up to harness the power of artificial intelligence to transform the way humans interact online and with the world. The new JHU + Amazon Initiative for Interactive AI, housed in the Johns Hopkins Whiting School of Engineering, will leverage the university's world-class expertise in interactive AI to advance groundbreaking technologies in machine learning, computer vision, natural language understanding, and speech processing; democratize access to the benefits of AI innovations; and broaden participation in research from diverse, interdisciplinary scholars and other innovators.

Amazon's investment will span five years, comprising doctoral fellowships, sponsored research funding, gift funding, and community projects. Sanjeev Khudanpur, an associate professor of electrical and computer engineering at the Whiting School, will serve as the initiative's founding director. Khudanpur is an expert in the application of information-theoretic methods to human language technologies such as automatic speech recognition, machine translation, and natural language processing.

"Hopkins is already renowned for its pioneering work in these areas of AI, and working with Amazon researchers will accelerate the timetable for the next big strides," Khudanpur said. "I often compare humans and AI to Luke Skywalker and R2D2 in Star Wars: They're able to accomplish amazing feats in a tiny X-wing fighter because they interact effectively to align their complementary strengths. I am very excited at the prospect of the Hopkins AI community coming together under the auspices of this initiative, and charting the future of transformational, interactive AI together with Amazon researchers,"

Ed Schlesinger, dean of the Whiting School, said, "We are very excited to work with Amazon in this new initiative. We value the challenges that they bring us and the life-changing potential of the solutions we will create together, and look forward to strengthening our work together over the coming years."

Amazon's funding will support a broad range of activities, including annual fellowships for doctoral students; research projects led by Hopkins Engineering faculty in collaboration with postdoctoral researchers, undergraduate and graduate students, and research staff; and events and activities, such as lectures, workshops, and competitions aimed at making AI activities more accessible to the general public in the Baltimore-Washington region.

Prem Natarajan, Alexa AI vice president of natural understanding, says the partnership underscores Amazon's commitment to addressing the greatest challenges in AI, democratizing access to the benefits of AI innovations, and broadening participation in research from diverse, interdisciplinary scholars and other innovators.

"This initiative brings together the top talent at Amazon and Johns Hopkins in a joint mission to drive groundbreaking advances in interactive and multimodal AI," Natarajan said. "These advances will power the next generation of interactive AI experiences across a wide variety of domainsfrom home productivity to entertainment to health."

The two organizations have teamed up in the past, with four Johns Hopkins faculty members joining Amazon as part of its Scholars program: Ozge Sahin, a professor of operations management and business analytics at the Johns Hopkins Carey Business School, in 2019, and in 2020, Gregory Hager, Mandell Bellmore Professor of Computer Science; René Vidal, Herschel Seder Professor of Biomedical Engineering and director of the Mathematical Institute for Data Science; and Marin Kobilarov, associate professor of mechanical engineering.

The new initiative will build on Hopkins Engineering's existing strengths in the areas of machine learning, computer vision, natural language understanding, and speech processing. Its Mathematical Institute for Data Science conducts cutting-edge research on the mathematical, statistical, and computational foundations of machine learning and computer vision. The Center for Imaging Science and the Laboratory for Computational Sensing and Robotics conduct fundamental and applied research in nearly every area of basic and applied computer vision. The university's Center for Language and Speech Processing, one of the largest and most influential academic research centers of its kind in the world, conducts research in acoustic processing, automatic speech recognition, cognitive modeling, computational linguistics, information extraction, machine translation, and text analysis. CLSP researchers conducted some of the foundational research that led to the development of digital voice assistants.

"AI has tremendous potential to enhance human abilities, and to reach it, AI of the future will interact with humans the same way we naturally interact with each other. What endeared Amazon Alexa to users was the effortlessness of the interaction. I envision that the research done under this initiative will make it possible for us to use much more powerful AI in equally effortless ways, regardless of our own physical limitations," Khudanpur said.

Hager, a director for Amazon Physical Retail, and Vidal, currently an Amazon Scholar in visual search and AR, were instrumental in helping Amazon and JHU establish the collaboration.

"Computer vision and machine learning are transforming the way in which humans shop, share content, and interact with each other," Vidal said. "This partnership will lead to new collaborations between JHU and Amazon scientists that will help translate cutting-edge advances in deep learning and visual recognition into algorithms that help humans interact with the world."

Seth Zonies, a director of business development for Johns Hopkins Technology Ventures, the university's commercialization and industry collaboration arm, said, "This collaboration represents the opportunity to harness academic ingenuity to address needs in society through industry collaboration. The engineering faculty at Johns Hopkins are committed to applied research, and Amazon is at the forefront of product development in this field. We expect this collaboration to result in deployable, high-impact innovation."

Read more:
Johns Hopkins and Amazon collaborate to explore transformative power of AI - The Hub at Johns Hopkins

Research Analyst / Associate / Fellow in Machine Learning and Artificial Intelligence job with NATIONAL UNIVERSITY OF SINGAPORE | 289568 – Times…

The Role

The Sustainable and Green Finance Institute (SGFIN) is a new university-level research institute in the National University of Singapore (NUS), jointly supported by the Monetary Authority of Singapore (MAS) and NUS. SGFIN aspires to develop deep research capabilities in sustainable and green finance, provide thought leadership in the sustainability space, and shape sustainability outcomes across the financial sector and the economy at large.

This role is ideally suited for those wishing to work in academic or industry research in quantitative analysis, particularly in the area of machine learning and artificial intelligence. The responsibilities of the role will include designing and developing various analytical frameworks to analyze structured, unstructured, and non-traditional data related to corporate financial, environmental, and social indicators.

There are no teaching obligations for this position, and the candidate will have the opportunity to develop their research portfolio.

Duties and Responsibilities

The successful candidate will be expected to assume the following responsibilities:

Qualifications

Covid-19 Message

At NUS, the health and safety of our staff and students are among our utmost priorities, and COVID vaccination supports our commitment to ensuring the safety of our community and to making NUS as safe and welcoming as possible. Many of our roles require a significant amount of physical interaction with students, staff, and members of the public. Even for job roles that may be performed remotely, there will be instances where an on-campus presence is required.

In accordance with Singapore's legal requirements, unvaccinated workers will not be able to work on NUS premises with effect from 15 January 2022. As such, job applicants will need to be fully vaccinated against COVID-19 to secure employment with NUS.

Read the original here:
Research Analyst / Associate / Fellow in Machine Learning and Artificial Intelligence job with NATIONAL UNIVERSITY OF SINGAPORE | 289568 - Times...

12 examples of artificial intelligence in everyday life – ITProPortal

In the article below, you can check out twelve examples of AI being present in our everyday lives.

Artificial intelligence (AI) is growing in popularity, and it's not hard to see why. AI has the potential to be applied in many different ways, from cooking to healthcare.

Though artificial intelligence may be a buzzword today, tomorrow it might just be a standard part of our everyday lives. In fact - it's already here.

Take self-driving cars. Also known as autonomous vehicles, these cars use AI tech and machine learning to move around without the passenger having to take control at any time.

They work and continue to advance by using lots of sensor data, learning how to handle traffic and making real-time decisions.

Then there are the truly ubiquitous smart digital assistants. Here we are talking about Siri, Google Assistant, Alexa and Cortana.

We included them in our list because they can essentially listen and then respond to your commands, turning them into actions.

So, you hit up Siri, you give her a command, like "call a friend," she analyzes what you said, sifts through all the background noise surrounding your speech, interprets your command, and actually does it, all in a couple of seconds.

The best part here is that these assistants are getting smarter and smarter, improving every stage of the command process we mentioned above. You don't have to be as specific with your commands as you were just a couple of years ago.

Furthermore, virtual assistants have become better and better at filtering useless background noise from your actual commands.

One of the most well-known AI initiatives is a project run by Microsoft. It comes as no surprise that Microsoft is one of the top AI companies around (though it's definitely not the only one).

Microsoft's Project InnerEye is state-of-the-art research that can potentially change the world.

The project develops machine-learning tools for analyzing 3D medical images, with the goal of helping clinicians diagnose disease and plan treatments, such as radiotherapy, more quickly and precisely.

Next comes the college student's (or is it the professor's?) nightmare: plagiarism checkers. Whether you are a content manager or a teacher grading essays, you have the same problem - the internet makes plagiarism easier.

There is a nigh unlimited amount of information and data out there, and less-than-scrupulous students and employees will readily take advantage of that.

Indeed, no human could compare and contrast somebody's essay with all the data out there. AIs are a whole different beast.

They can sift through an insane amount of information, compare it with the relevant text, and see if there is a match or not.

Furthermore, thanks to advancement and growth in this area, some tools can actually check sources in foreign languages, as well as images and audio.
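
A toy illustration of the core comparison step (not any particular product's pipeline): represent documents as TF-IDF vectors and flag pairs with high cosine similarity. The 0.5 threshold is an arbitrary choice for the example:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

corpus = [
    "Machine learning models need large datasets to perform well.",
    "Photosynthesis converts sunlight into chemical energy in plants.",
]
submission = "Large datasets are needed for machine learning models to perform well."

# Fit the vocabulary on everything, then compare the submission to each source.
vectorizer = TfidfVectorizer().fit(corpus + [submission])
scores = cosine_similarity(vectorizer.transform([submission]),
                           vectorizer.transform(corpus))[0]

for doc, score in zip(corpus, scores):
    flag = "POSSIBLE MATCH" if score > 0.5 else "ok"   # threshold is illustrative
    print(f"{score:.2f} {flag}: {doc[:50]}")
```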

You might have noticed that media recommendations on certain platforms are getting better and better; Netflix, YouTube, and Spotify are just three examples. You can thank AI and machine learning for that.

The three platforms we mentioned take into account what you have already seen and liked. That's the easy part. Then, they compare and contrast it with thousands, if not tens of thousands, of pieces of media. They essentially learn from the data you provide, and then use their own database to provide you with content that best suits your needs.

Let's simplify this process for YouTube, just as an example.

The platform uses data such as tags, demographic data like your age or gender, as well as the same data of people consuming other pieces of media. Then, it mixes and matches, giving you your suggestions.
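
As a toy illustration of that mixing and matching, the sketch below does the simplest form of collaborative filtering: it finds users with similar viewing vectors and scores unseen items by what those neighbors liked. Real recommender systems are far more elaborate:

```python
import numpy as np

# rows = users, columns = items; 1 means "watched and liked"
ratings = np.array([
    [1, 1, 0, 0, 1],
    [1, 1, 1, 0, 0],
    [0, 0, 1, 1, 0],
])

def recommend(user, ratings, k=2):
    norms = np.linalg.norm(ratings, axis=1)
    sims = ratings @ ratings[user] / (norms * norms[user] + 1e-9)
    sims[user] = 0                           # ignore self-similarity
    scores = sims @ ratings                  # weight items by neighbor taste
    scores[ratings[user] > 0] = -np.inf      # don't re-recommend seen items
    return np.argsort(scores)[::-1][:k]

print(recommend(0, ratings))   # item indices suggested for user 0
```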

Today, many larger banks give you the option of depositing checks through your smartphone. Instead of actually walking to a bank, you can do it with just a couple of taps.

Besides the obvious safeguards when it comes to accessing your bank account through your phone, a check also requires your signature.

Now banks use AIs and machine learning software to read your handwriting, compare it with the signature you gave to the bank before, and safely use it to approve a check.

In general, machine learning and AI tech speed up most operations done by software in a bank. This all leads to more efficient execution of tasks, decreasing wait times and costs.

And while we are on the subject of banking, let's talk about fraud for a little bit. A bank processes a huge amount of transactions every day. Tracking and analyzing all of that is impossible for a regular human being.

Furthermore, how fraudulent transactions look changes from day to day. With AI and machine learning algorithms, you can have thousands of transactions analyzed in a second. Furthermore, you can also have them learn, figure out what problematic transactions can look like, and prepare themselves for future issues.
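
One common technique for this kind of screening is unsupervised anomaly detection. The sketch below is illustrative only, not any bank's actual system; the features and model choice are assumptions:

```python
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
# columns: amount, hour of day, merchant-category code (synthetic data)
normal = np.column_stack([rng.normal(60, 20, 500),
                          rng.normal(14, 3, 500),
                          rng.integers(1, 20, 500)])
suspicious = np.array([[9500.0, 3.0, 42.0]])   # huge amount, 3 a.m., rare merchant

model = IsolationForest(contamination=0.01, random_state=0).fit(normal)
print(model.predict(suspicious))   # -1 flags the transaction as anomalous
```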

Next, whenever you apply for a loan or maybe get a credit card, a bank needs to check your application.

Multiple factors, like your credit score and your financial history, can now be weighed by software. This leads to shorter approval wait times and a lower margin for error.
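
In its simplest form, that scoring can be a model fit to past approval decisions. A deliberately tiny, illustrative sketch with made-up data:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# columns: credit score, years of credit history, debt-to-income ratio
X = np.array([[720, 10, 0.2], [580, 2, 0.6], [690, 6, 0.3], [510, 1, 0.8]])
y = np.array([1, 0, 1, 0])     # 1 = approved in past decisions

clf = LogisticRegression().fit(X, y)
# Estimated approval probability for a new applicant:
print(clf.predict_proba([[650, 4, 0.4]])[0, 1])
```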

Many businesses are using AI, specifically chatbots, as a way for their customers to interact with them.

Chatbots are often used as a customer service option for companies that do not have enough staff available at any given time to answer questions or respond to inquiries.

By using chatbots, these companies can free up staff time for other tasks while still getting important information from their customers.

These are a godsend during heavy traffic times, like Black Friday or Cyber Monday. They can save your company from getting overwhelmed with questions, allowing you to serve your customers much better.

Now, this is something we can all be thankful for - spam filters.

A typical spam filter has a number of rules and algorithms that minimize the amount of spam that can reach you. This not only saves you from annoying ads and Nigerian princes, but it also helps against credit card fraud, identity theft, and malware.

Now, what makes a good spam filter effective is the AI running it. The AI behind the filter uses email metadata, keeps an eye on specific words or phrases, and focuses on other signals, all for the purpose of filtering out spam.
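
The learning component can be as simple as a Naive Bayes classifier over word counts, as in the sketch below; production filters layer metadata and reputation signals on top:

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB

emails = ["win a free prize now", "claim your free money",
          "meeting notes attached", "lunch tomorrow?"]
labels = [1, 1, 0, 0]          # 1 = spam, 0 = legitimate

vec = CountVectorizer()
clf = MultinomialNB().fit(vec.fit_transform(emails), labels)
print(clf.predict(vec.transform(["free prize waiting"])))  # [1] -> spam
```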

This everyday AI aspect got really popular through Netflix.

Namely - you might have noticed that a lot of thumbnails on websites and certain streaming apps have been replaced by short videos. One of the main reasons this got so popular is AI and machine learning.

Instead of having editors spend hundreds of hours on shortening, filtering, and cutting up longer videos into three-second videos, the AI does it for you. It analyzes hundreds of hours of content and then successfully summarizes it into a short bit of media.
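
One plausible way to pick those few seconds (the details of any given platform's system are not public): embed each frame, cluster the embeddings, and keep the frame nearest each cluster center as a representative moment. The random vectors below stand in for learned per-frame features:

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(1)
frame_features = rng.normal(size=(3600, 64))   # e.g. one embedding per frame

km = KMeans(n_clusters=3, n_init=10, random_state=0).fit(frame_features)
representatives = []
for c in range(3):
    members = np.where(km.labels_ == c)[0]
    dists = np.linalg.norm(frame_features[members] - km.cluster_centers_[c],
                           axis=1)
    representatives.append(members[np.argmin(dists)])
print(sorted(representatives))   # frame indices to stitch into the preview clip
```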

AI also has potential in more unexpected areas, such as cooking.

A company called Rasa has developed an AI system that analyzes food and then recommends recipes based on what you have in your fridge and pantry. This type of AI is a great way for people who enjoy cooking but don't want to spend too much time planning out meals ahead of time.

If there is one thing we can say about AI and machine learning, it is that they make every tech they come in contact with more effective and powerful. Facial recognition is no different.

There are now many apps that use AI for their facial recognition needs. For example, Snapchat uses AI tech to apply face filters by actually recognizing the visual information presented as a human face.

Facebook can now identify faces in specific photos and invite people to tag themselves or their friends.

And, of course, think about unlocking your phone with your face. Well, it needs AI and machine learning to function.

Let's take Apple Face ID as an example. When you are setting it up, it scans your face and puts roughly thirty thousand dots on it. It uses these dots as markers to help it recognize your face from many different angles.

This allows you to unlock your phone with your face in many different situations and lighting environments while at the same time preventing somebody else from doing the same.
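
Conceptually, the unlock decision reduces to comparing face embeddings. The sketch below illustrates that idea only; Apple's actual features and thresholds are not public, and the vectors here are random stand-ins:

```python
import numpy as np

def verify(enrolled, probe, threshold=0.8):
    """Unlock only if the new scan's embedding is close to the enrolled one."""
    enrolled = enrolled / np.linalg.norm(enrolled)
    probe = probe / np.linalg.norm(probe)
    return np.linalg.norm(enrolled - probe) < threshold

rng = np.random.default_rng(2)
enrolled = rng.normal(size=128)                      # stored at setup time
same_person = enrolled + 0.1 * rng.normal(size=128)  # new scan, small variation
stranger = rng.normal(size=128)                      # someone else's face

print(verify(enrolled, same_person))  # True: unlock
print(verify(enrolled, stranger))     # False: stay locked
```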

The future is now. AI technology will only continue to develop, to grow and to become more and more vital for every industry and almost every aspect of our everyday lives. If the above examples are to be believed, it's only a matter of time.

Artificial intelligence will continue developing and being present in new areas of our lives in the future. As more innovative applications come out, we'll see more ways that AI can make our lives easier and more productive!

Read this article:
12 examples of artificial intelligence in everyday life - ITProPortal