Archive for the ‘Machine Learning’ Category

How Microsoft Teams uses AI and machine learning to improve calls and meetings – Microsoft

As schools and workplaces begin resuming in-person operations, we project a permanent increase in the volume of online meetings and calls. And while communication and collaboration solutions have played a critical role in enabling continuity during these unprecedented times, early stress tests have revealed opportunities to improve and enhance meeting and call quality.

Disruptive echo effects, poor room acoustics, and choppy video are some common issues that hinder the effectiveness of online calls and meetings. Through AI and machine learning, which have become fundamental to our strategy for continual improvement, we've identified and are now delivering innovative enhancements in Microsoft Teams that address these audio and video challenges in ways that are both user-friendly and scalable across environments.

Today, we're announcing the availability of new Teams features including echo cancellation, adjusting audio in poor acoustic environments, and allowing users to speak and hear at the same time without interruptions. These build on recently released AI-powered features such as expanded background noise suppression.

During calls and meetings, when a participant has their microphone too close to their speaker, it's common for sound to loop between input and output devices, causing an unwanted echo effect. Now, Microsoft Teams uses AI to recognize the difference between sound from a speaker and the user's voice, eliminating the echo without suppressing speech or inhibiting the ability of multiple parties to speak at the same time.
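Microsoft hasn't published the model itself, but the classical baseline such AI systems improve on, an adaptive (NLMS) echo canceller, can be sketched in a few lines of plain Python. Everything below (signal shapes, filter length, step size) is illustrative, not Teams' implementation:

```python
import math

def nlms_echo_cancel(far_end, mic, taps=8, mu=0.5, eps=1e-8):
    """Normalized LMS adaptive filter: estimate the acoustic echo path
    from the far-end (loudspeaker) signal and subtract the predicted
    echo from the microphone signal, leaving the near-end speech."""
    h = [0.0] * taps                  # estimated echo-path impulse response
    buf = [0.0] * taps                # recent far-end samples, newest first
    residual = []
    for x, d in zip(far_end, mic):
        buf = [x] + buf[:-1]
        y = sum(hi * xi for hi, xi in zip(h, buf))    # predicted echo
        e = d - y                                     # echo-cancelled output
        norm = sum(xi * xi for xi in buf) + eps
        h = [hi + mu * e * xi / norm for hi, xi in zip(h, buf)]
        residual.append(e)
    return residual

# Demo: a pure echo (delay 3 samples, gain 0.6) with no near-end speech,
# so once the filter converges the residual should be close to zero.
far = [math.sin(0.3 * n) for n in range(2000)]
mic = [0.6 * far[n - 3] if n >= 3 else 0.0 for n in range(2000)]
residual = nlms_echo_cancel(far, mic)
```

Where classical cancellers like this struggle is exactly the case the article describes: preserving full-duplex speech when both sides talk at once, which is where the learned models come in.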

In certain environments, room acoustics can cause sound to bounce, or reverberate, making the user's voice sound shallow, as if they're speaking within a cavern. For the first time, Microsoft Teams uses a machine learning model to convert the captured audio signal to sound as if users were speaking into a close-range microphone.

A natural element of conversation is the ability to interrupt for clarification or validation. This is accomplished through full-duplex (two-way) transmission of audio, allowing users to speak and hear others at the same time. When not using a headset, and especially when using devices where the speaker and microphone are very close to each other, it is difficult to remove echo while maintaining full-duplex audio. Microsoft Teams uses a model trained with 30,000 hours of speech samples to retain desired voices while suppressing unwanted audio signals, resulting in more fluid dialogue.

Each of us has first-hand experience of a meeting disrupted by the unexpected sounds of a barking dog, a car alarm, or a slammed door. Over two years ago, we announced the release of AI-based noise suppression in Microsoft Teams as an optional feature for Windows users. Since then, we've continued a cycle of iterative development, testing, and evaluation to further optimize our model. After recording significant improvements across key user metrics, we have enabled machine learning-based noise suppression as the default for Teams customers using Windows (including Microsoft Teams Rooms), as well as Mac and iOS users. A future release of this feature is planned for Teams Android and web clients.

These AI-driven audio enhancements are rolling out and are expected to be generally available in the coming months.

We have also recently released AI-based video and screen sharing quality optimization breakthroughs for Teams. From adjustments for low light to optimizations based on the type of content being shared, we now leverage AI to help you look and present your best.

The impact of presentations can often depend on an audience's ability to read on-screen text or watch a shared video. But different types of shared content require varied approaches to ensure the highest video quality, particularly under bandwidth constraints. Teams now uses machine learning to detect and adjust the characteristics of the content presented in real time, optimizing the legibility of documents or smoothness of video playback.

Unexpected issues with network bandwidth can lead to choppy video that quickly shifts the focus away from your presentation. AI-driven optimizations in Teams help adjust playback in challenging bandwidth conditions, so presenters can use video and screen sharing worry-free.

Though you can't always control the lighting surrounding your meetings, new AI-powered filters in Teams give you the option to adjust brightness and add a soft focus with a simple toggle in your device settings, to better accommodate low-light environments.

The past two years have made clear how important communication and collaboration platforms like Microsoft Teams are to maintaining safe, connected, and productive operations. In addition to bringing new features and capabilities to Teams, we'll continue to explore new ways to use technology to make online calling and meeting experiences more natural, resilient, and efficient.

Visit the Tech Community Teams blog for more technical details about how we leverage AI and machine learning for audio quality improvements as well as video and screen sharing optimization in Microsoft Teams.

Read more from the original source:
How Microsoft Teams uses AI and machine learning to improve calls and meetings - Microsoft

Google to Make Chrome ‘More Helpful’ With New Machine Learning Additions – ExtremeTech

In a new blog post, Google says it's going to bring new features to Chrome via on-device machine learning (ML). The goal is to improve the browsing experience, and to do so it's adding several new ML models that will focus on different tasks. Google says it'll begin by addressing how web notifications are handled, and that it also has ideas for an adaptive toolbar. These new features will lead to a "safer, more accessible and more personalized browsing experience," according to Google. Also, since the models run (and stay) on your device instead of in the cloud, it's theoretically better for your privacy.

First, there are web notifications, by which we mean things like "sign up for our newsletter" prompts. Google says these are updates from sites you care about, but adds that too many of them are a nuisance. It says that in an upcoming version of Chrome, on-device ML will examine how you interact with notifications. If it finds you are denying permission to certain types of notification requests, it will silence similar ones in the future. If a notification is silenced automatically, Chrome will still add a notification for it. This would seemingly allow you to override Google's prediction.
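Google hasn't published how this model works. As a rough illustration of the idea, here is a toy heuristic (all names and thresholds are hypothetical, not Chrome's actual logic) that silences categories of prompts a user keeps denying:

```python
class NotificationGate:
    """Toy heuristic (not Chrome's actual model): silence permission
    prompts in categories the user has repeatedly denied."""

    def __init__(self, threshold=0.7, min_seen=3):
        self.history = {}                 # category -> [denied, total]
        self.threshold = threshold        # denial rate that triggers silencing
        self.min_seen = min_seen          # don't judge on too little evidence

    def record(self, category, denied):
        d, t = self.history.get(category, [0, 0])
        self.history[category] = [d + int(denied), t + 1]

    def should_silence(self, category):
        d, t = self.history.get(category, [0, 0])
        return t >= self.min_seen and d / t >= self.threshold

# After three denials of newsletter-style prompts, similar ones are silenced;
# categories with no denial history are left alone.
gate = NotificationGate()
for _ in range(3):
    gate.record("newsletter", denied=True)
gate.record("chat", denied=False)
```

Chrome's actual model presumably learns richer features than a simple denial rate, but the silence-then-allow-override flow the article describes maps onto the same shape.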

Google also wants Chrome to change what the toolbar does based on your past behavior. For example, it says some people like to use voice search in the morning on their train commute (this person sounds annoying). Other people routinely share links. In both of these situations, Chrome would anticipate your needs and add either a microphone button or share icon to the toolbar, making the process easier. You'll be able to customize it manually as well. The screenshots provided note they're from Chrome on Android. It's unclear if this functionality will appear on other platforms.

In addition to these new features, Google is also touting the work machine learning is already doing for Chrome users. For example, when you arrive at a web page, it's scanned and compared to a database of known phishing/malicious sites. If there's a match, Chrome gives you a warning, and you've probably seen this once or twice already. It's a full-page, all-red block, so you'd know it if you'd seen it. Google says it rolled out new ML models in March of this year that increased the number of malicious sites it could detect by 2.5X.
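The database comparison Chrome performs here is based on hashed URL prefixes kept locally, with a full-hash confirmation on a prefix hit. A simplified sketch of that lookup pattern (real Safe Browsing canonicalizes URLs and fetches full hashes from a server; this version keeps everything local):

```python
import hashlib

def full_hash(url):
    """SHA-256 digest of a URL, as Safe-Browsing-style lookups use."""
    return hashlib.sha256(url.encode("utf-8")).digest()

class BlocklistCheck:
    """Simplified Safe-Browsing-style lookup: keep short hash prefixes
    locally for a fast check, and confirm a prefix hit against the full
    hashes (which the real client fetches from a server on demand)."""

    def __init__(self, bad_urls):
        self.full_hashes = {full_hash(u) for u in bad_urls}
        self.prefixes = {h[:4] for h in self.full_hashes}

    def is_malicious(self, url):
        h = full_hash(url)
        if h[:4] not in self.prefixes:    # fast local check, no network
            return False
        return h in self.full_hashes      # rule out prefix collisions

bl = BlocklistCheck(["http://evil.example/phish"])
```

Storing only 4-byte prefixes keeps the local database small and reveals nothing about which URLs are listed; the ML models Google mentions sit on top of this, catching sites the list hasn't caught yet.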

Google doesn't specify when these new features will launch, nor does it say whether they will be mobile-only. All we know is that notification silencing will appear in the next release of Chrome. According to our browser, version 102 is the current one. As for the adaptive toolbar, Google says it will arrive "in the near future." It's also unclear whether running these models on-device will incur some type of performance hit.


Read more:
Google to Make Chrome 'More Helpful' With New Machine Learning Additions - ExtremeTech

Can machine learning prolong the life of the ICE? – Automotive World

The automotive industry is steadily moving away from internal combustion engines (ICEs) in the wake of more stringent regulations. Some industry watchers regard electric vehicles (EVs) as the next step in vehicle development, despite high costs and infrastructural limitations in developing markets outside Europe and Asia. However, many markets remain deeply dependent on the conventional ICE vehicle. A 2020 study by Boston Consulting Group found that nearly 28% of ICE vehicles could still be on the road as late as 2035, while EVs may account for only 48% of registered vehicles by that time.

If ICE vehicles are to remain compliant with ever more restrictive emissions regulations, they will require some enhancements and improvements. Enter Secondmind, a software and virtualisation company based in the UK. The company is employed by many mainstream manufacturers looking to reduce emissions from pre-existing ICEs without significant investment or development costs. Secondmind's Managing Director, Gary Brotman, argues that software-based approaches are efficiently streamlining the process of vehicle development and could prolong the life of the ICE for some years to come.

Follow this link:
Can machine learning prolong the life of the ICE? - Automotive World

Artificial Intelligence and Machine Learning Are Headed for a Major Bottleneck – Here’s How We Solve It – Datanami


Artificial intelligence (AI) and machine learning (ML) are already changing the world, but the innovations we're seeing so far are just a taste of what's around the corner. We are on the precipice of a revolution that will affect every industry, from business and education to healthcare and entertainment. These new technologies will help solve some of the most challenging problems of our age and bring changes comparable in scale to the Renaissance, the Industrial Revolution, and the electronic age.

While the printing press, fossil fuels, and silicon drove these past epochal shifts, a new generation of algorithms that automate tasks previously thought impossible will drive the next revolution. These new technologies will allow self-driving cars to identify traffic patterns, automate energy balancing in smart power grids, enable real-time language translation, and pioneer complex analytical tools that detect cancer before any human could ever perceive it.

Well, that's the promise of the AI and ML revolution, anyway. And to be clear, these things are all within our theoretical reach. But what the tech optimists tend to leave out is that our path to the bright, shiny AI future has some major potholes in it. One problem is looming especially large. We call it the dirty secret of AI and ML: right now, AI and ML don't scale well.

Scale, the ability to expand a single machine's capability to broader, more widespread applications, is the holy grail of every digital business. And right now, AI and ML don't have it. While algorithms may hold the keys to our future, when it comes to creating them, we're currently stuck in a painstaking, brute-force methodology.


Creating AI and ML algorithms isn't the hard part anymore. You tell them what to learn, feed them the right data, and they learn how to parse novel data without your help. The labor-intensive piece comes when you want the algorithms to operate in the real world. Left to their own devices, AI will suck up as much time, compute, and data/bandwidth as you give it. To be truly effective, these algorithms need to run lean, especially now that businesses and consumers are showing an increasing appetite for low-latency operations at the edge. Getting your AI to run in an environment where speed, compute, and bandwidth are all constrained is the real magic trick here.

Thus, optimizing AI and ML algorithms has become the signature skill of today's AI researchers and engineers. It's expensive in terms of time, resources, money, and talent, but essential if you want performant AI. However, today, the primary way we're addressing the problem is via brute force: throwing bodies at the problem. Unfortunately, the demand for these algorithms is exploding while the pool of qualified AI engineers remains relatively static. Even if it were economically feasible to hire them, there are not enough trained AI engineers to work on all the projects that will take the world to the resplendent AI/sci-fi future we've been promised.

But all is not lost. There is a way for us to get across the threshold to achieve the exponential AI advances we require. The answer to scaling AI and ML algorithms is actually a simple idea: train ML algorithms to tune ML algorithms, an approach the industry calls Automated Machine Learning, or AutoML. Tuning AI and ML algorithms may be more of an art than a science, but then again, so are driving, photo retouching, and instant language translation, all of which are addressable via AI and ML.
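The mechanics can be illustrated with the simplest form of this idea: an outer search loop tuning the hyperparameters of an inner learner. The learner and search ranges below are toy examples for illustration, not any production AutoML system:

```python
import random

def train(lr, steps, data):
    """Inner learner: fit y = w * x by gradient descent on squared error."""
    w = 0.0
    for _ in range(steps):
        grad = sum(2 * (w * x - y) * x for x, y in data) / len(data)
        w -= lr * grad
    return w

def val_loss(w, data):
    return sum((w * x - y) ** 2 for x, y in data) / len(data)

def auto_tune(data, trials=30, seed=0):
    """Outer loop: random search over the inner learner's hyperparameters,
    keeping whichever configuration gives the lowest validation loss."""
    rng = random.Random(seed)
    best = None
    for _ in range(trials):
        lr = 10 ** rng.uniform(-3, -1)       # learning rate, log-uniform
        steps = rng.randint(10, 200)         # iteration budget
        w = train(lr, steps, data)
        loss = val_loss(w, data)
        if best is None or loss < best[0]:
            best = (loss, {"lr": lr, "steps": steps}, w)
    return best

# Toy data with a known answer: y = 3x, so a well-tuned learner finds w near 3.
data = [(x, 3.0 * x) for x in range(1, 6)]
loss, config, w = auto_tune(data)
```

Production AutoML systems replace the random search with smarter strategies (Bayesian optimization, learned search policies), but the division of labor is the same: one algorithm does the tedious tuning of another.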


AutoML will allow us to scale AI optimization so it can achieve full adoption throughout computing, including at the edge where latency and compute are constrained. By using hardware awareness in AutoML, we can push performance even further. We believe this approach will also lead to a world where the barrier to entry for AI programmers is lower, allowing more people to enter the field, and making better use of high-level programmers. Its our hope that the resulting shift will alleviate the current talent bottleneck the industry is facing.

Over the next few years, we expect to automate various AI optimization techniques, such as pruning, distillation, and neural architecture search, to achieve 15-30x performance improvements. Google's EfficientNet research has also yielded very promising results in the field of auto-scaling convolutional neural networks. Another example is DataRobot's AutoML tools, which can be applied to automating the tedious, time-consuming manual work required for data preparation and model selection.
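Of these techniques, magnitude pruning is the easiest to sketch: drop the weights that contribute least to the model. The version below is a bare-bones illustration; real pruning operates on framework tensors and is typically followed by fine-tuning to recover accuracy:

```python
def magnitude_prune(weights, sparsity):
    """Zero out the smallest-magnitude fraction of a 2-D weight matrix."""
    flat = sorted(abs(w) for row in weights for w in row)
    k = int(len(flat) * sparsity)            # number of weights to drop
    threshold = flat[k] if k < len(flat) else float("inf")
    return [[w if abs(w) >= threshold else 0.0 for w in row]
            for row in weights]

# Pruning half the weights keeps only the two largest-magnitude entries.
pruned = magnitude_prune([[0.1, -2.0], [0.05, 1.5]], sparsity=0.5)
```

Automating this means letting a search decide, per layer, how much sparsity the model can tolerate, which is exactly the kind of tuning loop AutoML is meant to take over.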

There is one last hurdle to cross, though. AI automates tasks we always assumed we needed humans to do, offloading these difficult feats to a computer programmed by a clever AI engineer. The dream of AutoML is to offload the work another level, using AI algorithms to tune and create new AI algorithms. But there's no such thing as a free lunch. We will now need even more highly skilled programmers to develop the AutoML routines at the meta-level. The good news is, we think we've got enough of them to do this.

But it's not all about growing the field from the top. This innovation not only expands the pool of potential programmers, allowing lower-level programmers to create highly effective AI; it also provides a de facto training path to move them into higher and higher-skilled positions. This in turn will create a robust talent pipeline that can supply the industry for years to come and ensure we have a good supply of hardcore AI developers for when we hit the next bottleneck. Because yes, there may come a day when we need Auto-AutoML, but for now, we want to take things one paradigm-shifting innovation at a time. It may sound glib, but we believe it wholeheartedly: the answer to the problems of AI is more AI.

About the authors: Nilesh Jain is a Principal Engineer at Intel Labs, where he leads the Emerging Visual/AI Systems Research Lab. He focuses on developing innovative technologies for edge/cloud systems for emerging workloads. His current research interests include visual computing and hardware-aware AutoML systems. He received an M.Sc. degree from the Oregon Graduate Institute/OHSU. He is also a Senior IEEE Member and has published over 15 papers and over 20 patents.

Ravi Iyer is an Intel Fellow in Intel Labs, where he leads the Emerging Systems Lab. His research interests include developing innovative technologies, architectures, and edge/cloud systems for emerging workloads. He has published over 150 papers and has over 40 patents granted. He received his Ph.D. in Computer Science from Texas A&M. He is also an IEEE Fellow.

Related Items:

Why Data Scientists and ML Engineers Shouldn't Worry About the Rise of AutoML

AutoML Tools Emerge as Data Science Difference Makers

What is Feature Engineering and Why Does It Need To Be Automated?

More here:
Artificial Intelligence and Machine Learning Are Headed for a Major Bottleneck – Here's How We Solve It - Datanami

Sentient artificial intelligence: Have we reached peak AI hype? – VentureBeat


Thousands of artificial intelligence experts and machine learning researchers probably thought they were going to have a restful weekend.

Then came Google engineer Blake Lemoine, who told the Washington Post on Saturday that he believed LaMDA, Google's conversational AI for generating chatbots based on large language models (LLMs), was sentient.

Lemoine, who worked for Google's Responsible AI organization until he was placed on paid leave last Monday, and who became ordained as a mystic Christian priest and served in the Army before studying the occult, had begun testing LaMDA to see if it used discriminatory or hate speech. Instead, Lemoine began teaching LaMDA transcendental meditation, asked LaMDA its preferred pronouns, leaked LaMDA transcripts and explained in a Medium response to the Post story:

"It's a good article for what it is, but in my opinion it was focused on the wrong person. Her story was focused on me when I believe it would have been better if it had been focused on one of the other people she interviewed: LaMDA. Over the course of the past six months, LaMDA has been incredibly consistent in its communications about what it wants and what it believes its rights are as a person."

The Washington Post article pointed out that most academics and AI practitioners say the words and images generated by artificial intelligence systems such as LaMDA produce responses based on what humans have already posted on Wikipedia, Reddit, message boards, and every other corner of the internet, and that this doesn't signify that the model understands meaning.

The Post article continued: "We now have machines that can mindlessly generate words, but we haven't learned how to stop imagining a mind behind them," said Emily M. Bender, a linguistics professor at the University of Washington. The terminology used with large language models, like "learning" or even "neural nets," creates a false analogy to the human brain, she said.

That's when AI and ML Twitter put aside any weekend plans and went at it. AI leaders, researchers and practitioners shared long, thoughtful threads, including AI ethicist Margaret Mitchell (who was famously fired from Google, along with Timnit Gebru, for criticizing large language models) and machine learning pioneer Thomas G. Dietterich.

There were also plenty of humorous hot takes; even the New York Times' Paul Krugman weighed in:

Meanwhile, Emily Bender, professor of computational linguistics at the University of Washington, shared more thoughts on Twitter, criticizing organizations such as OpenAI for the impact of its claims that LLMs were making progress towards artificial general intelligence (AGI):

Now that the weekend news cycle has come to a close, some wonder whether discussing whether LaMDA should be treated as a Google employee means we have reached peak AI hype.

However, it should be noted that Bindu Reddy of Abacus AI said the same thing in April, Nicholas Thompson (former editor-in-chief at Wired) said it in 2019 and Brown professor Srinath Sridhar had the same musing in 2017. So, maybe not.

Still, others pointed out that the entire sentient-AI weekend debate was reminiscent of the Eliza effect, or "the tendency to unconsciously assume computer behaviors are analogous to human behaviors," named for the 1966 chatbot Eliza.

Just last week, The Economist published a piece by cognitive scientist Douglas Hofstadter, who coined the term Eliza effect in 1995, in which he said that while the achievements of today's artificial neural networks are astonishing, "I am at present very skeptical that there is any consciousness in neural-net architectures such as, say, GPT-3, despite the plausible-sounding prose it churns out at the drop of a hat."

After a weekend filled with little but discussion around whether AI is sentient or not, one question is clear: What does this debate mean for enterprise technical decision-makers?

Perhaps it is nothing but a distraction. A distraction from the very real and practical issues facing enterprises when it comes to AI.

There is current and proposed AI legislation in the U.S., particularly around the use of artificial intelligence and machine learning in hiring and employment. A sweeping AI regulatory framework is being debated right now in the EU.

"I think corporations are going to be woefully on their back feet reacting, because they just don't get it; they have a false sense of security," said AI attorney Bradford Newman, partner at Baker McKenzie, in a VentureBeat story last week.

There are wide-ranging, serious issues with AI bias and ethics; just look at the AI trained on 4chan that was revealed last week, or the ongoing issues related to Clearview AI's facial recognition technology.

Thats not even getting into issues related to AI adoption, including infrastructure and data challenges.

Should enterprises keep their eye on the issues that really matter in the real sentient world of humans working with AI? In a blog post, Gary Marcus, author of Rebooting.AI, had this to say:

"There are a lot of serious questions in AI. But there is absolutely no reason whatever for us to waste time wondering whether anything anyone in 2022 knows how to build is sentient. It is not."

I think it's time to put down my popcorn and get off Twitter.


Originally posted here:
Sentient artificial intelligence: Have we reached peak AI hype? - VentureBeat