Archive for the ‘Machine Learning’ Category

Google to Make Chrome ‘More Helpful’ With New Machine Learning Additions – ExtremeTech

In a new blog post, Google says it's going to bring new features to Chrome via on-device machine learning (ML). The goal is to improve the browsing experience, and to do so it's adding several new ML models that will each focus on a different task. Google says it'll begin by addressing how web notifications are handled, and that it also has ideas for an adaptive toolbar. These new features will lead to a safer, more accessible and more personalized browsing experience, according to Google. Also, since the models run (and stay) on your device instead of in the cloud, it's theoretically better for your privacy.

First there's web notifications, which we take to mean things like "sign up for our newsletter" prompts. Google says these are updates from sites you care about, but adds that too many of them are a nuisance. It says in an upcoming version of Chrome, the on-device ML will examine how you interact with notifications. If it finds you are denying permission to certain types of notification requests, it will silence similar ones in the future. If a notification is silenced automatically, Chrome will still add a notification for it. This would seemingly allow you to override Google's prediction.
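
Google hasn't published details of the model, but the underlying idea of learning from your past permission decisions is easy to sketch. Below is a minimal, purely illustrative Python sketch of such a predictor; the notion of a notification "category," the deny-rate threshold, and the frequency heuristic are all our assumptions, not Chrome's actual implementation.

```python
# Hypothetical sketch: silence notification prompts in categories the user
# has historically denied. Chrome's real on-device model is not public;
# this only illustrates learning from past permission decisions.
from collections import defaultdict

class PermissionPredictor:
    def __init__(self, threshold=0.8, min_samples=5):
        self.threshold = threshold      # deny rate above which we silence
        self.min_samples = min_samples  # require some evidence first
        self.history = defaultdict(lambda: {"deny": 0, "total": 0})

    def record(self, category, denied):
        """Log the user's response to a permission prompt."""
        stats = self.history[category]
        stats["total"] += 1
        stats["deny"] += int(denied)

    def should_silence(self, category):
        """Silence the prompt if similar requests are usually denied."""
        stats = self.history[category]
        if stats["total"] < self.min_samples:
            return False
        return stats["deny"] / stats["total"] >= self.threshold

predictor = PermissionPredictor()
for _ in range(6):
    predictor.record("newsletter-signup", denied=True)
print(predictor.should_silence("newsletter-signup"))  # True
```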

Google also wants Chrome to change what the toolbar does based on your past behavior. For example, it says some people like to use voice search in the morning on their train commute (this person sounds annoying). Other people routinely share links. In both of these situations, Chrome would anticipate your needs and add either a microphone button or a share icon to the toolbar, making the process easier. You'll be able to customize it manually as well. The screenshots provided note they're from Chrome on Android. It's unclear if this functionality will appear on other platforms.
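
Again, Google hasn't said how the toolbar prediction works. As a rough illustration, a contextual recommender could simply track which action a user takes most often in a given context; in the sketch below, the single hour-of-day signal and the two context buckets are our assumptions for illustration.

```python
# Hypothetical sketch of an adaptive toolbar: surface the action the user
# most often takes in the current context. The hour-of-day feature and the
# two context buckets are illustrative assumptions, not Chrome's design.
from collections import Counter, defaultdict

class AdaptiveToolbar:
    def __init__(self):
        self.usage = defaultdict(Counter)  # context bucket -> action counts

    @staticmethod
    def _bucket(hour):
        return "morning" if 5 <= hour < 12 else "rest-of-day"

    def record(self, hour, action):
        self.usage[self._bucket(hour)][action] += 1

    def suggest(self, hour, default="share"):
        counts = self.usage[self._bucket(hour)]
        return counts.most_common(1)[0][0] if counts else default

toolbar = AdaptiveToolbar()
for _ in range(10):
    toolbar.record(hour=8, action="voice_search")  # morning commute
print(toolbar.suggest(hour=8))   # voice_search
print(toolbar.suggest(hour=20))  # share (no data, falls back to default)
```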

In addition to these new features, Google is also touting the work machine learning is already doing for Chrome users. For example, when you arrive at a web page, it's scanned and compared to a database of known phishing/malicious sites. If there's a match, Chrome gives you a warning, and you've probably seen this once or twice already. It's a full-page, all-red block, so you'd know it if you'd seen it. Google says it rolled out new ML models in March of this year that increased the number of malicious sites it could detect by 2.5x.
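
That warning page comes from Google's Safe Browsing service, which is documented to match hashed URL prefixes against a local blocklist rather than comparing raw URLs. Here is a heavily simplified sketch of that style of lookup; the 4-byte prefix length and the omitted URL canonicalization step are simplifications on our part.

```python
# Simplified blocklist check in the spirit of Safe Browsing's hash-prefix
# matching. A real client canonicalizes URLs, checks several expressions
# per URL, and confirms any prefix hit with a full-hash server lookup.
import hashlib

BLOCKLIST_PREFIXES = set()

def url_prefix(url, length=4):
    """First `length` bytes of the SHA-256 of a URL (canonicalization omitted)."""
    return hashlib.sha256(url.encode("utf-8")).digest()[:length]

def add_to_blocklist(url):
    BLOCKLIST_PREFIXES.add(url_prefix(url))

def looks_malicious(url):
    # A local prefix hit would normally trigger a full-hash check;
    # here we just report the local match.
    return url_prefix(url) in BLOCKLIST_PREFIXES

add_to_blocklist("http://phishy.example/login")
print(looks_malicious("http://phishy.example/login"))  # True
print(looks_malicious("https://example.com/"))         # False
```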

Google doesn't specify when these new features will launch, nor does it say whether they will be mobile-only. All we know is that notification silencing will appear in the next release of Chrome. According to our browser, version 102 is the current one. For the adaptive toolbar, Google says that will arrive in the near future. It's also unclear if running these models on-device will incur some type of performance hit.

Read more:
Google to Make Chrome 'More Helpful' With New Machine Learning Additions - ExtremeTech

Can machine learning prolong the life of the ICE? – Automotive World

The automotive industry is steadily moving away from internal combustion engines (ICEs) in the wake of more stringent regulations. Some industry watchers regard electric vehicles (EVs) as the next step in vehicle development, despite high costs and infrastructural limitations in developing markets outside Europe and Asia. However, many markets remain deeply dependent on the conventional ICE vehicle. A 2020 study by Boston Consulting Group found that nearly 28% of ICE vehicles could still be on the road as late as 2035, while EVs may account for only 48% of registered vehicles by then.

If ICE vehicles are to remain compliant with ever more restrictive emissions regulations, they will require some enhancements and improvements. Enter Secondmind, a software and virtualisation company based in the UK. The company is employed by many mainstream manufacturers looking to reduce emissions from pre-existing ICEs without significant investment or development costs. Secondmind's Managing Director, Gary Brotman, argues that software-based approaches are efficiently streamlining the process of vehicle development and could prolong the life of the ICE for some years to come.

Follow this link:
Can machine learning prolong the life of the ICE? - Automotive World

Artificial Intelligence and Machine Learning Are Headed for A Major Bottleneck – Here’s How We Solve It – Datanami


Artificial intelligence (AI) and machine learning (ML) are already changing the world, but the innovations we're seeing so far are just a taste of what's around the corner. We are on the precipice of a revolution that will affect every industry, from business and education to healthcare and entertainment. These new technologies will help solve some of the most challenging problems of our age and bring changes comparable in scale to the Renaissance, the Industrial Revolution, and the electronic age.

While the printing press, fossil fuels, and silicon drove these past epochal shifts, a new generation of algorithms that automate tasks previously thought impossible will drive the next revolution. These new technologies will allow self-driving cars to identify traffic patterns, automate energy balancing in smart power grids, enable real-time language translation, and pioneer complex analytical tools that detect cancer before any human could ever perceive it.

Well, that's the promise of the AI and ML revolution, anyway. And to be clear, these things are all within our theoretical reach. But what the tech optimists tend to leave out is that our path to the bright, shiny AI future has some major potholes in it. One problem is looming especially large. We call it the dirty secret of AI and ML: right now, AI and ML don't scale well.

Scale, the ability to expand a single machine's capability to broader, more widespread applications, is the holy grail of every digital business. And right now, AI and ML don't have it. While algorithms may hold the keys to our future, when it comes to creating them, we're currently stuck in a painstaking, brute-force methodology.


Creating AI and ML algorithms isn't the hard part anymore. You tell them what to learn, feed them the right data, and they learn how to parse novel data without your help. The labor-intensive piece comes when you want the algorithms to operate in the real world. Left to its own devices, AI will suck up as much time, compute, and data/bandwidth as you give it. To be truly effective, these algorithms need to run lean, especially now that businesses and consumers are showing an increasing appetite for low-latency operations at the edge. Getting your AI to run in an environment where speed, compute, and bandwidth are all constrained is the real magic trick here.

Thus, optimizing AI and ML algorithms has become the signature skill of today's AI researchers and engineers. It's expensive in terms of time, resources, money, and talent, but essential if you want performant AI. However, today, the primary way we're addressing the problem is via brute force: throwing bodies at the problem. Unfortunately, the demand for these algorithms is exploding while the pool of qualified AI engineers remains relatively static. Even if it were economically feasible to hire them, there are not enough trained AI engineers to work on all the projects that will take the world to the resplendent AI/sci-fi future we've been promised.

But all is not lost. There is a way for us to get across the threshold and achieve the exponential AI advances we require. The answer to scaling AI and ML algorithms is actually a simple idea: train ML algorithms to tune ML algorithms, an approach the industry calls Automated Machine Learning, or AutoML. Tuning AI and ML algorithms may be more of an art than a science, but then again, so are driving, photo retouching, and instant language translation, all of which are addressable via AI and ML.
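
To make the idea concrete: even the most basic form of AutoML, automated hyperparameter search, replaces hand-tuning with a search loop. Here's a minimal sketch using scikit-learn's off-the-shelf random search; the synthetic dataset, model choice, and 20-trial budget are arbitrary picks for illustration, not anything the authors prescribe.

```python
# Minimal AutoML flavor: a search procedure, not a human, picks the
# hyperparameters. Real AutoML systems layer meta-learning, early stopping,
# and architecture search on top of loops like this one.
from scipy.stats import randint
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import RandomizedSearchCV

X, y = make_classification(n_samples=500, n_features=20, random_state=0)

search = RandomizedSearchCV(
    RandomForestClassifier(random_state=0),
    param_distributions={
        "n_estimators": randint(50, 300),  # search space is an assumption
        "max_depth": randint(2, 12),
    },
    n_iter=20,  # tuning budget: 20 sampled configurations
    cv=3,
    random_state=0,
)
search.fit(X, y)
print(search.best_params_, round(search.best_score_, 3))
```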


AutoML will allow us to scale AI optimization so it can achieve full adoption throughout computing, including at the edge, where latency and compute are constrained. By using hardware awareness in AutoML, we can push performance even further. We believe this approach will also lead to a world where the barrier to entry for AI programmers is lower, allowing more people to enter the field and making better use of high-level programmers. It's our hope that the resulting shift will alleviate the current talent bottleneck the industry is facing.

Over the next few years, we expect to automate AI optimization techniques such as pruning, distillation, and neural architecture search to achieve 15-30x performance improvements. Google's EfficientNet research has also yielded very promising results in the field of auto-scaling convolutional neural networks. Another example is DataRobot's AutoML tools, which can be applied to automating the tedious, time-consuming manual work required for data preparation and model selection.
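
As a taste of one technique on that list, the sketch below does naive magnitude pruning: it zeroes the smallest-magnitude weights so the resulting sparse model can be stored and served more cheaply. The 90% sparsity target is an arbitrary number for illustration, not a figure from the article, and production pruning is usually gradual and followed by fine-tuning.

```python
# Naive one-shot magnitude pruning: zero the fraction of weights with the
# smallest absolute values. Production pipelines prune gradually and
# fine-tune afterward to recover accuracy.
import numpy as np

def magnitude_prune(weights, sparsity):
    """Zero out the `sparsity` fraction of weights smallest in magnitude."""
    flat = np.abs(weights).ravel()
    k = int(sparsity * flat.size)
    if k == 0:
        return weights.copy()
    threshold = np.partition(flat, k - 1)[k - 1]  # k-th smallest magnitude
    pruned = weights.copy()
    pruned[np.abs(pruned) <= threshold] = 0.0
    return pruned

rng = np.random.default_rng(0)
w = rng.normal(size=(256, 256))
w_pruned = magnitude_prune(w, sparsity=0.9)
print(f"sparsity achieved: {np.mean(w_pruned == 0):.2%}")  # ~90%
```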

There is one last hurdle to cross, though. AI automates tasks we always assumed we needed humans to do, offloading these difficult feats to a computer programmed by a clever AI engineer. The dream of AutoML is to offload the work another level, using AI algorithms to tune and create new AI algorithms. But there's no such thing as a free lunch. We will now need even more highly skilled programmers to develop the AutoML routines at the meta-level. The good news is, we think we've got enough of them to do this.

But it's not all about growing the field from the top. This innovation not only expands the pool of potential programmers, allowing lower-level programmers to create highly effective AI; it provides a de facto training path to move them into higher and higher-skilled positions. This in turn will create a robust talent pipeline that can supply the industry for years to come and ensure we have a good supply of hardcore AI developers for when we hit the next bottleneck. Because yes, there may come a day when we need Auto-AutoML, but for now, we want to take things one paradigm-shifting innovation at a time. It may sound glib, but we believe it wholeheartedly: the answer to the problems of AI is more AI.

About the authors: Nilesh Jain is a Principal Engineer at Intel Labs, where he leads the Emerging Visual/AI Systems Research Lab. He focuses on developing innovative technologies for edge/cloud systems and emerging workloads. His current research interests include visual computing and hardware-aware AutoML systems. He received an M.Sc. from the Oregon Graduate Institute/OHSU. He is a Senior Member of the IEEE, has published over 15 papers, and holds over 20 patents.

Ravi Iyer is an Intel Fellow at Intel Labs, where he leads the Emerging Systems Lab. His research interests include developing innovative technologies, architectures, and edge/cloud systems for emerging workloads. He has published over 150 papers and has over 40 patents granted. He received his Ph.D. in Computer Science from Texas A&M. He is also an IEEE Fellow.

Related Items:

Why Data Scientists and ML Engineers Shouldn't Worry About the Rise of AutoML

AutoML Tools Emerge as Data Science Difference Makers

What is Feature Engineering and Why Does It Need To Be Automated?

More here:
Artificial Intelligence and Machine Learning Are Headed for A Major Bottleneck - Here's How We Solve It - Datanami

Sentient artificial intelligence: Have we reached peak AI hype? – VentureBeat


Thousands of artificial intelligence experts and machine learning researchers probably thought they were going to have a restful weekend.

Then came Google engineer Blake Lemoine, who told the Washington Post on Saturday that he believed LaMDA, Google's conversational AI for generating chatbots based on large language models (LLMs), was sentient.

Lemoine, who worked for Google's Responsible AI organization until he was placed on paid leave last Monday, and who became ordained as a mystic Christian priest and served in the Army before studying the occult, had begun testing LaMDA to see if it used discriminatory or hate speech. Instead, Lemoine began teaching LaMDA transcendental meditation, asked LaMDA its preferred pronouns, leaked LaMDA transcripts, and explained in a Medium response to the Post story:

It's a good article for what it is but in my opinion it was focused on the wrong person. Her story was focused on me when I believe it would have been better if it had been focused on one of the other people she interviewed. LaMDA. Over the course of the past six months LaMDA has been incredibly consistent in its communications about what it wants and what it believes its rights are as a person.

The Washington Post article pointed out that "most academics and AI practitioners say the words and images generated by artificial intelligence systems such as LaMDA produce responses based on what humans have already posted on Wikipedia, Reddit, message boards, and every other corner of the internet. And that doesn't signify that the model understands meaning."

The Post article continued: "We now have machines that can mindlessly generate words, but we haven't learned how to stop imagining a mind behind them," said Emily M. Bender, a linguistics professor at the University of Washington. The terminology used with large language models, like "learning" or even "neural nets," creates a false analogy to the human brain, she said.

That's when AI and ML Twitter put aside any weekend plans and went at it. AI leaders, researchers and practitioners shared long, thoughtful threads, including AI ethicist Margaret Mitchell (who was famously fired from Google, along with Timnit Gebru, for criticizing large language models) and machine learning pioneer Thomas G. Dietterich.

There were also plenty of humorous hot takes; even the New York Times' Paul Krugman weighed in.

Meanwhile, Emily Bender, professor of computational linguistics at the University of Washington, shared more thoughts on Twitter, criticizing organizations such as OpenAI for the impact of their claims that LLMs were making progress toward artificial general intelligence (AGI).

Now that the weekend news cycle has come to a close, some wonder whether debating if LaMDA should be treated as a Google employee means we have reached peak AI hype.

However, it should be noted that Bindu Reddy of Abacus AI said the same thing in April, Nicholas Thompson (former editor-in-chief at Wired) said it in 2019 and Brown professor Srinath Sridhar had the same musing in 2017. So, maybe not.

Still, others pointed out that the entire sentient AI weekend debate was reminiscent of the Eliza Effect, or the tendency to unconsciously assume computer behaviors are analogous to human behaviors, named for the 1966 chatbot Eliza.

Just last week, The Economist published a piece by cognitive scientist Douglas Hofstadter, who coined the term "Eliza Effect" in 1995, in which he said that while the achievements of today's artificial neural networks are astonishing, "I am at present very skeptical that there is any consciousness in neural-net architectures such as, say, GPT-3, despite the plausible-sounding prose it churns out at the drop of a hat."

After a weekend filled with little but discussion around whether AI is sentient or not, one question is clear: What does this debate mean for enterprise technical decision-makers?

Perhaps it is nothing but a distraction. A distraction from the very real and practical issues facing enterprises when it comes to AI.

There is current and proposed AI legislation in the U.S., particularly around the use of artificial intelligence and machine learning in hiring and employment. A sweeping AI regulatory framework is being debated right now in the EU.

"I think corporations are going to be woefully on their back feet reacting, because they just don't get it; they have a false sense of security," said AI attorney Bradford Newman, partner at Baker McKenzie, in a VentureBeat story last week.

There are wide-ranging, serious issues with AI bias and ethics. Just look at the AI trained on 4chan that was revealed last week, or the ongoing issues related to Clearview AI's facial recognition technology.

That's not even getting into issues related to AI adoption, including infrastructure and data challenges.

Should enterprises keep their eye on the issues that really matter in the real sentient world of humans working with AI? In a blog post, Gary Marcus, author of Rebooting.AI, had this to say:

"There are a lot of serious questions in AI. But there is absolutely no reason whatever for us to waste time wondering whether anything anyone in 2022 knows how to build is sentient. It is not."

I think it's time to put down my popcorn and get off Twitter.


Originally posted here:
Sentient artificial intelligence: Have we reached peak AI hype? - VentureBeat

The data science and AI market may be out for a recalibration – ZDNet


Being a data scientist was supposed to be "the sexiest job of the 21st century". Whether the famous Harvard Business Review aphorism of 2012 holds water is somewhat subjective, depending on how you interpret "sexy". However, the data around data scientists, as well as related data engineering and data analyst roles, are starting to set off alarm bells.

The subjective part about HBR's aphorism is whether you actually enjoy finding and cleaning up data, building and debugging data pipelines and integration code, as well as building and improving machine learning models. That list of tasks, in that order, is what data scientists spend most of their time on.

Some people are genuinely attracted to data-centered careers by the job description; others are attracted more by the growth in demand and salaries. While the dark sides of the job description itself are not unknown, the growth and salaries part was not much disputed. That, however, may be changing: data scientist roles are still in demand but are not immune to market turmoil.

At the beginning of 2022, the first sign that something may be changing became apparent. As an IEEE Spectrum analysis of data released by online recruitment firm Dice showed, in 2021, AI and machine learning salaries dropped, even though, on average, U.S. tech salaries climbed nearly 7%.

Overall, 2021 was a good year for tech professionals in the United States, with the average salary up 6.9% to $104,566. However, as the IEEE Spectrum notes, competition for machine learning, natural language processing, and AI experts softened, with average salaries dropping 2.1%, 7.8%, and 8.9%, respectively.

It's the first time this has occurred in recent years, as average U.S. salaries for software engineers with expertise in machine learning, for example, jumped 22% in 2019 over 2018, then went up another 3.1% in 2020. At the same time, demand for data scientist roles does not show any signs of subsiding -- on the contrary.

Developer recruitment platforms report seeing a sharp rise in the demand for data science-related IT skills. The latest IT Skills Report by developer screening and interview platform DevSkiller recorded a 295% increase in the number of data science-related tasks recruiters were setting for candidates in the interview process during 2021.

CodinGame and CoderPad's 2022 Tech Hiring Survey also identified data science as a profession for which demand greatly outstrips supply, along with DevOps and machine-learning specialists. As a result, ZDNet's Owen Hughes notes, employers will have to reassess both the salaries and benefits packages they offer employees if they hope to remain competitive.

The data science and AI market is sending mixed signals

Plus, 2021 saw what came to be known as the "Great Resignation" or "Great Reshuffle" -- a time when everyone is rethinking everything, including their careers. In theory, having a part of the workforce redefine their trajectory and goals and/or resign should increase demand and salaries -- analyses on why data scientists quit and what employers can do to retain them started making the rounds.

Then along came the layoffs, including layoffs of data scientist, data engineer and data analyst roles. As LinkedIn's analysis of the latest round of layoffs notes, the tech sector's tumultuous year has been marked by daily announcements of layoffs, hiring freezes and rescinded job offers.

About 17,000 workers from more than 70 tech startups globally were laid off in May, a 350% jump from April. This is the most significant number of lost jobs in the sector since May 2020, at the height of the pandemic. In addition, tech giants such as Netflix and PayPal are also shedding jobs, while Uber, Lyft, Snap and Meta have slowed hiring.

According to data shared by the tech layoff tracking site Layoffs.fyi, layoffs range from 7% to 33% of the workforce in the companies tracked. Drilling down into company-specific data shows that those include data-oriented roles, too.

Looking at layoff data from fintech Klarna and insurance startup PolicyGenius, for example, shows that data scientist, data engineer and data analyst roles are affected at both junior and senior levels. In both companies, those roles amount to about 4% of the layoffs.

What are we to make of those mixed signals, then? Demand for data science-related tasks seems to be going strong, but salaries are dropping, and those roles are not immune to layoffs either. Each of those signals comes with its own background and implications. Let's try to unpack them and see what their confluence means for job seekers and employers.

As Dice chief marketing officer Michelle Marian told IEEE Spectrum, there are a variety of factors likely contributing to the decreases in machine learning and AI salaries, with one important consideration being that more technologists are learning and mastering these skill sets:

"The increases in the talent pool over time can result in employers needing to pay at least slightly less, given that the skill sets are easier to find. We have seen this occur with a range of certifications and other highly specialized technology skills", said Marian.

That seems like a reasonable conclusion. However, for data science and machine learning, there may be something else at play, too. Data scientists and machine learning experts are not only competing against each other but also increasingly against automation. As Hong Kong-based quantitative portfolio manager Peter Yuen notes, quants have seen this all before.

Prompted by news of top AI researchers landing salaries in the $1 million range, Yuen writes that this "should be more accurately interpreted as a continuation of a long trend of high-tech coolies coding themselves out of their jobs upon a backdrop of global oversupply of skilled labour".

If three generations of quants' experience in automating financial markets is anything to go by, Yuen writes, the automation of rank-and-file AI practitioners across many industries is perhaps only a decade or so away. After that, he adds, a small group of elite AI practitioners will have made it to managerial or ownership status while the rest are stuck in average-paid jobs tasked with monitoring and maintaining their creations.

We may already be at the initial stages in this cycle, as evidenced by developments such as AutoML and libraries of off-the-shelf machine learning models. If history is anything to go by, then what Yuen describes will probably come to pass, too, inevitably leading to questions about how displaced workers can "move up the stack".

However, it's probably safe to assume that data science roles won't have to worry about that too much in the immediate future. After all, another oft-cited fact about data science projects is that ~80% of them still failfor a number of reasons. One of the most public cases of data science failure was Zillow.

Zillow's business came to rely heavily on the data science team to build accurate predictive models for its home buying service. As it turned out, the models were not so accurate. As a result, the company's stock went down over 30% in 5 days, the CEO put a lot of blame on the data science team, and 25% of the staff got laid off.

Whether or not the data science team was at fault at Zillow is up for debate. As for recent layoffs, they should probably be seen as part of a greater turn in the economy rather than a failure of data science teams per se. As Data Science Central Community Editor Kurt Cagle writes, there is talk of a looming AI winter, harkening back to the period in the 1970s when funding for AI ventures dried up altogether.

Cagle believes that while an AI Winter is unlikely, an AI Autumn, with a cooling-off of an over-the-top venture capital field in the space, can be expected. The AI Winter of the 1970s was largely due to the fact that the technology was not up to the task, and there was not enough digitized data to go around.

The dot-com bubble era may have some lessons in store for today's data science roles

Today much greater compute power is available, and the amount of data is skyrocketing too. Cagle argues that the problem could be that we are approaching the limits of the currently employed neural network architectures. Cagle adds that a period in which brilliant minds can actually rest and innovate rather than simply apply established thinking would likely do the industry some good.

Like many others, Cagle is pointing out deficiencies in the "deep learning will be able to do everything" school of thought. This critique seems valid, and incorporating approaches that are overlooked today could drive progress in the field. However, let's not forget that the technology side of things is not all that matters here.

Perhaps recent history can offer some insights: what can the history of software development and the internet teach us? In some ways, the point where we are now is reminiscent of the dot-com bubble era: increased availability of capital, excessive speculation, unrealistic expectations, and through-the-ceiling valuations. Today, we may be headed towards the bursting of the AI bubble.

That does not mean that data science roles will lose their appeal overnight or that what they do is without value. After all, software engineers are still in demand for all the progress and automation that software engineering has seen in the last few decades. But it probably means that a recalibration is due, and expectations should be managed accordingly.

See the rest here:
The data science and AI market may be out for a recalibration - ZDNet